Commit Graph

75 Commits

Author SHA1 Message Date
Simon Pilgrim e425d0b92a [InstCombine][X86] Add basic addsub intrinsic SimplifyDemandedVectorElts support (PR46277)
Pass through the demanded elts mask to the source operands.

The next step will be to add support for folding to add/sub if we only demand odd/even elements.
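
A sketch of the idea (illustrative only): if just element 0 of the result is used, the demanded-elements mask {1,0,0,0} now flows into both source operands.

%r = call <4 x float> @llvm.x86.sse3.addsub.ps(<4 x float> %a, <4 x float> %b)
%e = extractelement <4 x float> %r, i32 0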
2020-11-30 18:40:16 +00:00
Simon Pilgrim 8ca484b946 [InstCombine][X86] Add addsub PR46277 test case
Also fix a copy+paste typo in the elts_addsub_v4f32 demanded elts test from the godbolt reference
2020-11-30 18:11:05 +00:00
Simon Pilgrim 9c2b2952e4 [InstCombine][X86] Add addsub tests showing failure to simplify demandedelts (PR46277) 2020-11-30 13:40:51 +00:00
Philip Reames 06f136f61e [instcombine][x86] Converted pdep/pext with shifted mask to simple arithmetic
If the mask of a pdep or pext instruction is a shifted mask (i.e. one contiguous block of ones), we need at most one 'and' and one shift to represent the operation without the intrinsic. On all platforms I know of, this is faster than the pdep/pext.

The cost modelling for multiple contiguous blocks might be worth exploring in a follow up, but it's not relevant for my current use case. It would almost certainly be a win on AMD, where these instructions are really slow, though.
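
A minimal sketch of the fold, using the BMI2 intrinsic name (the mask 0xFF0 is one contiguous block of ones):

%r = call i32 @llvm.x86.bmi.pext.32(i32 %x, i32 4080) ; mask = 0xFF0
; --> one shift and one 'and':
%s = lshr i32 %x, 4
%r = and i32 %s, 255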

Differential Revision: https://reviews.llvm.org/D87861
2020-09-18 14:54:24 -07:00
Simon Pilgrim 3170d54842 [InstCombine][X86] Convert masked load/stores with (sign extended) bool vector masks to generic intrinsics.
As detailed on PR11210, if the mask is known to come from a (sign extended) bool vector (e.g. comparisons) then we can represent it with a generic masked load/store without losing anything.

We already do something similar for BLENDV -> SELECT conversion.
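
For example (a sketch; pointer types as in the typed-pointer IR of the era):

%cmp = icmp slt <4 x i32> %x, zeroinitializer
%mask = sext <4 x i1> %cmp to <4 x i32>
call void @llvm.x86.avx.maskstore.ps(i8* %p, <4 x i32> %mask, <4 x float> %v)
; --> the mask is a sign-extended bool vector, so use the generic form:
%q = bitcast i8* %p to <4 x float>*
call void @llvm.masked.store.v4f32.p0v4f32(<4 x float> %v, <4 x float>* %q, i32 1, <4 x i1> %cmp)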
2020-09-12 15:09:28 +01:00
Simon Pilgrim d030aad789 [InstCombine][X86] Add tests for masked load/stores with comparisons.
As detailed on PR11210, if the mask is known to come from a (sign extended) bool vector (e.g. comparisons) then we can represent it with a generic masked load/store without losing anything.
2020-09-12 14:32:27 +01:00
Wang, Pengfei 9512525947 [X86][FPEnv] Teach X86 mask compare intrinsics to respect strict FP semantics.
When we use mask compare intrinsics under the strict FP option, the masked
elements shouldn't raise any exception. So, we can't replace the
intrinsic with a full compare + "and" operation.
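
A sketch of the rewrite that must be avoided under strict FP: the full compare touches all lanes, so a masked-off SNaN lane could spuriously raise an exception.

%cmp = fcmp olt <16 x float> %a, %b           ; compares ALL lanes
%bits = bitcast <16 x i1> %cmp to i16
%res = and i16 %bits, %mask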

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D85385
2020-08-11 10:28:41 +08:00
Sebastian Neubauer 2a6c871596 [InstCombine] Move target-specific inst combining
For a long time, the InstCombine pass handled target specific
intrinsics. Having target specific code in general passes had long been
noted as an area for improvement.

D81728 moves most target specific code out of the InstCombine pass.
Applying the target specific combinations in an extra pass would
probably result in inferior optimizations compared to the current
fixed-point iteration, so the InstCombine pass resorts to newly
introduced functions in the TargetTransformInfo when it encounters
unknown intrinsics.
The patch should not have any effect on generated code (under the
assumption that code never uses intrinsics from a foreign target).

This introduces three new functions:
TargetTransformInfo::instCombineIntrinsic
TargetTransformInfo::simplifyDemandedUseBitsIntrinsic
TargetTransformInfo::simplifyDemandedVectorEltsIntrinsic

A few target specific parts are left in the InstCombine folder, where
it makes sense to share code. The largest left-over part in
InstCombineCalls.cpp is the code shared between arm and aarch64.

This allows moving about 3000 lines out of InstCombine and into the targets.

Differential Revision: https://reviews.llvm.org/D81728
2020-07-22 15:59:49 +02:00
Simon Pilgrim f4f4a8bfef [InstCombine][X86] Add repeated ops demanded elts tests for SSE intrinsics (PR24523) 2020-03-27 14:51:09 +00:00
Simon Pilgrim ec3bb6c3e7 [InstCombine][X86] Regenerate SSE2 tests 2020-03-27 14:51:09 +00:00
Simon Pilgrim 34659de5fd [InstCombine][X86] simplifyX86immShift - convert variable in-range vector shift by scalar amounts to generic shifts (PR40391)
The sll/srl/sra vector-by-scalar shifts can be replaced with generic shifts if the shift amount is known to be in range.

This also required public DemandedElts variants of llvm::computeKnownBits to be exposed (PR36319).
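
Illustrative sketch: the shift amount lives in the low 64 bits of the second operand, so once value tracking proves it is less than 32 the call becomes a generic lshr with a splatted amount.

%r = call <4 x i32> @llvm.x86.sse2.psrl.d(<4 x i32> %v, <4 x i32> %amt)
; --> when the low 64 bits of %amt are known to be < 32:
%splat = shufflevector <4 x i32> %amt, <4 x i32> undef, <4 x i32> zeroinitializer
%r = lshr <4 x i32> %v, %splat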
2020-03-20 15:48:06 +00:00
Simon Pilgrim c2586cab89 [InstCombine][X86] Tests for variable but in-range vector-by-scalar shift amounts (PR40391)
These shifts are masked to be in range, so we should be able to replace them with generic shifts.
2020-03-19 19:24:55 +00:00
Simon Pilgrim 433897da4a [InstCombine][X86] simplifyX86immShift - convert variable in-range vector shift by immediate amounts to generic shifts (PR40391)
The slli/srli/srai 'immediate' vector shifts (although the amount is no longer an immediate, to match gcc) can be replaced with generic shifts if the shift amount is known to be in range.
2020-03-19 15:44:24 +00:00
Simon Pilgrim fb11455038 [InstCombine][X86] Tests for variable but in-range vector-by-scalar shift amounts (PR40391)
These shifts are masked to be in range, so we should be able to replace them with generic shifts.
2020-03-19 13:11:06 +00:00
Simon Pilgrim 06150e8356 [ValueTracking] Add computeKnownBits DemandedElts support to AND instructions (PR36319) 2020-03-18 15:38:15 +00:00
Simon Pilgrim 24c2e61362 [InstCombine][X86] Add additional demandedelts style test for in-range variable per-element shift amounts (PR40391)
If we've shuffled the shift amount some of the (undemanded) elements may have become undef - this should be handled by the missing support in PR36319.
2020-03-18 14:36:34 +00:00
Simon Pilgrim f4e495a18e [InstCombine][X86] simplifyX86varShift - convert variable in-range per-element shift amounts to generic shifts (PR40391)
AVX2/AVX512 per-element shifts can be replaced with generic shifts if the shift amounts are guaranteed to be in-range (upper bits are known zero).
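
For example (a sketch with the AVX2 per-element shift): masking the amounts so the upper bits are known zero makes the intrinsic a plain shl.

%amt = and <4 x i32> %a, <i32 31, i32 31, i32 31, i32 31>
%r = call <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32> %v, <4 x i32> %amt)
; --> %r = shl <4 x i32> %v, %amt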
2020-03-18 11:26:54 +00:00
Simon Pilgrim cda2b0769f [InstCombine][X86] Tests for variable but in-range per-element shift amounts (PR40391)
These shifts are masked to be in range, so we should be able to replace them with generic shifts.
2020-03-18 10:29:47 +00:00
Simon Moll d871ef4e6a [instcombine] remove fsub to fneg hacks; only emit fneg
Summary: Rewrite the fsub-0.0 idiom to fneg and always emit fneg for fp
negation. This also extends the scalarization cost in instcombine for unary
operators to result in the same IR rewrites for fneg as for the idiom.
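
The idiom and its replacement, sketched:

%neg = fsub float -0.000000e+00, %x
; --> always emit the dedicated instruction instead:
%neg = fneg float %x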

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D75467
2020-03-10 16:57:02 +01:00
Simon Pilgrim 7e9747b50b [X86][F16C] Remove cvtph2ps intrinsics and use generic half2float conversion (PR37554)
This removes everything but int_x86_avx512_mask_vcvtph2ps_512, which provides the SAE variant, but even this can use the generic fpext if the rounding control is the default.
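
A sketch of the generic replacement for the 128-bit variant, which converts the low four half elements:

%h = bitcast <8 x i16> %a to <8 x half>
%lo = shufflevector <8 x half> %h, <8 x half> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
%f = fpext <4 x half> %lo to <4 x float>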

Differential Revision: https://reviews.llvm.org/D75162
2020-02-29 18:57:35 +00:00
Craig Topper 374e0299cf [X86][InstCombine] Add constant folding and simplification support for pdep and pext
The instructions use a mask to either pack disjoint bits together (pext) or spread bits to disjoint locations (pdep). If the mask is all 0s then no bits are extracted or deposited. If the mask is all ones, then the source value is written to the result since no compression or expansion happens. Otherwise, if both the source and mask are constant, we can walk the bits in the source/mask and calculate the result.
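
The degenerate folds, sketched with the BMI2 intrinsic names:

%r0 = call i32 @llvm.x86.bmi.pext.32(i32 %x, i32 0)   ; --> 0
%r1 = call i32 @llvm.x86.bmi.pdep.32(i32 %x, i32 -1)  ; --> %x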

There are other, crazier things we could do, like computeKnownBits or turning pext into shift/and if only a single contiguous range of bits is extracted.

Fixes PR44389

Differential Revision: https://reviews.llvm.org/D71952
2019-12-31 15:06:47 -08:00
Craig Topper 3dd93dc2a1 [X86][InstCombine] Move instcombine test from test/CodeGen/X86 to test/Transforms/InstCombine/ and replace grep with FileCheck 2019-12-01 10:31:04 -08:00
Sanjay Patel f575f12c64 [InstCombine] remove identity shuffle simplification for mask with undefs
And simultaneously enhance SimplifyDemandedVectorElts() to recognize that
pattern. That preserves some of the old optimizations in IR.

Given a shuffle that includes undef elements in an otherwise identity mask like:

define <4 x float> @shuffle(<4 x float> %arg) {
  %shuf = shufflevector <4 x float> %arg, <4 x float> undef, <4 x i32> <i32 undef, i32 1, i32 2, i32 3>
  ret <4 x float> %shuf
}

We were simplifying that to the input operand.

But as discussed in PR43958:
https://bugs.llvm.org/show_bug.cgi?id=43958
...that means that per-vector-element poison that would be stopped by the shuffle can now
leak to the result.

Also note that we still have (and there are tests for) the same transform with no undef
elements in the mask (a fully-defined identity mask). I don't think there's any
controversy about that case - it's a valid transform under any interpretation of
shufflevector/undef/poison.

Looking at a few of the diffs into codegen, I don't see any difference in final asm. So
depending on your perspective, that's good (no real loss of optimization power) or bad
(poison exists in the DAG, so we only partially fixed the bug).

Differential Revision: https://reviews.llvm.org/D70246
2019-11-24 10:06:26 -05:00
Sanjay Patel ce371ec6d7 [InstCombine] regenerate test CHECKs; NFC
There's a discussion about changing a shufflevector
transform in:
https://bugs.llvm.org/show_bug.cgi?id=43958

It would protect against our current undef/poison
behavior, and these are all tests that could be affected.
2019-11-14 10:23:16 -05:00
Craig Topper 9816d55776 [X86][InstCombine] Remove InstCombine code that turns X86 round intrinsics into llvm.ceil/floor. Remove some isel patterns that existed because that was happening.
We were turning roundss/sd/ps/pd intrinsics with immediates of 1 or 2 into
llvm.floor/ceil.  The llvm.ceil/floor intrinsics are supposed to correspond
to the libm functions.  For the libm functions we need to disable the
precision exception so the llvm.floor/ceil functions should always map to
encodings 0x9 and 0xA.

We had a mix of isel patterns where some used 0x9 and 0xA and others used
0x1 and 0x2. We need to be consistent and always use 0x9 and 0xA.

Since we have no way in isel of knowing where the llvm.ceil/floor came
from, we can't map X86 specific intrinsics with encodings 1 or 2 to it.
We could map 0x9 and 0xA to llvm.ceil/floor instead, but I'd really like
to see a use case and optimization advantage first.

I've left the backend test cases to show the blend we now emit without
the extra isel patterns. But I've removed the InstCombine tests completely.

llvm-svn: 361425
2019-05-22 20:04:55 +00:00
Cameron McInally 872dc79f20 [NFC][InstCombine] Add unary FNeg tests to X86/x86-avx512.ll
llvm-svn: 361308
2019-05-21 20:31:09 +00:00
Simon Pilgrim 46128cdf08 [InstCombine][X86] Add PACKSS tests for truncation of sign-extended comparisons
llvm-svn: 359435
2019-04-29 10:36:20 +00:00
Simon Pilgrim 86ff9d313a [InstCombine][X86] Add PACKSS/PACKUS tests for truncation where saturation won't occur
llvm-svn: 359185
2019-04-25 12:45:11 +00:00
Eric Christopher cee313d288 Revert "Temporarily Revert "Add basic loop fusion pass.""
The reversion apparently deleted the test/Transforms directory.

Will be re-reverting again.

llvm-svn: 358552
2019-04-17 04:52:47 +00:00
Eric Christopher a863435128 Temporarily Revert "Add basic loop fusion pass."
As it's causing some bot failures (and per request from kbarton).

This reverts commit r358543/ab70da07286e618016e78247e4a24fcb84077fda.

llvm-svn: 358546
2019-04-17 02:12:23 +00:00
Simon Pilgrim b4f1bfa659 [InstCombine][X86] Expand MOVMSK to generic IR (PR39927)
First step towards removing the MOVMSK intrinsics completely - this patch expands MOVMSK to the pattern:

e.g. PMOVMSKB(v16i8 x):
%cmp = icmp slt <16 x i8> %x, zeroinitializer
%int = bitcast <16 x i8> %cmp to i16
%res = zext i16 %int to i32

This is correctly handled by ISel and FastISel (give or take an annoying movzx move...): https://godbolt.org/z/rkrSFW

Differential Revision: https://reviews.llvm.org/D60256

llvm-svn: 357909
2019-04-08 13:17:51 +00:00
Sanjay Patel b276dd195a [InstCombine] canonicalize select shuffles by commuting
In PR41304:
https://bugs.llvm.org/show_bug.cgi?id=41304
...we have a case where we want to fold a binop of select-shuffle (blended) values.

Rather than try to match commuted variants of the pattern, we can canonicalize the
shuffles and check for mask equality with commuted operands.

We don't produce arbitrary shuffle masks in instcombine, but select-shuffles are a
special case that the backend is required to handle because we already canonicalize
vector select to this shuffle form.

So there should be no codegen difference from this change. It's possible that this
improves CSE in IR though.
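
Illustrative sketch: these two select-shuffles blend exactly the same lanes, so after canonicalization they can be recognized as the same blend with commuted operands.

%s1 = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>
%s2 = shufflevector <4 x i32> %b, <4 x i32> %a, <4 x i32> <i32 4, i32 1, i32 2, i32 7>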

Differential Revision: https://reviews.llvm.org/D60016

llvm-svn: 357366
2019-03-31 15:01:30 +00:00
Sanjay Patel 2bff8b4272 [InstCombine] regenerate test checks; NFC
llvm-svn: 357288
2019-03-29 17:47:51 +00:00
Philip Reames 70537abe52 Demanded elements support for masked.load and masked.gather
Teach instcombine to propagate demanded elements through a masked load or masked gather instruction. This is in the broader context of improving vector pointer instcombine under https://reviews.llvm.org/D57140.
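
For example (a sketch; pointer types as in the typed-pointer IR of the era): if only lane 0 is demanded, lanes 1-3 of the mask and passthrough are irrelevant.

%v = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %p, i32 4, <4 x i1> %m, <4 x i32> %pt)
%e = extractelement <4 x i32> %v, i32 0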

Differential Revision: https://reviews.llvm.org/D57372

llvm-svn: 356510
2019-03-19 20:10:00 +00:00
Craig Topper 9bae5ba076 [X86] Add ImmArg markings to intrinsics.
Remove test cases that checked we didn't crash when an immediate operand was passed a non-immediate value. These are now considered ill-formed in IR.

This was done by manually scanning the intrinsic file for llvm_i32_ty and llvm_i8_ty, which are the predominant types we use for immediates. Most of them are on vector intrinsics. I might have missed some other intrinsics.

Differential Revision: https://reviews.llvm.org/D58302

llvm-svn: 355993
2019-03-12 23:48:07 +00:00
Philip Reames 05fc7edf62 [Test] Update file w/update_test_checks.py to make a follow on change obvious
llvm-svn: 352932
2019-02-01 22:30:51 +00:00
Sanjay Patel be23a91fcd [InstCombine] try to reduce x86 addcarry to generic uaddo intrinsic
If we can reduce the x86-specific intrinsic to the generic op, it allows existing 
simplifications and value tracking folds. AFAICT, this always results in identical 
x86 codegen in the non-reduced case...which should be true because we semi-generically 
(too aggressively IMO) convert to llvm.uadd.with.overflow in CGP, so the DAG/isel must 
already combine/lower this intrinsic as expected.

This isn't quite what was requested in:
https://bugs.llvm.org/show_bug.cgi?id=40486
...but we want to have these kinds of folds early for efficiency and to enable greater 
simplifications. For the case in the bug report where we have:
_addcarry_u64(0, ahi, 0, &ahi)
...this gets completely simplified away in IR.
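
A sketch of the reduction (intrinsic signature approximated from the IR of the era):

%s = call { i8, i64 } @llvm.x86.addcarry.64(i8 0, i64 %a, i64 %b)
; --> with a zero carry-in this is just the generic overflow op
; (plus extractvalue/zext glue to match the result types):
%u = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 %a, i64 %b)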

Differential Revision: https://reviews.llvm.org/D57453

llvm-svn: 352870
2019-02-01 14:14:47 +00:00
Philip Reames c71e996aed SimplifyDemandedVectorElts for all intrinsics
The point is that this simplifies integration of new intrinsics into SimplifyDemandedVectorElts, and ensures we don't miss any existing ones.

This is intended to be NFC-ish, but as seen from the diffs, it can produce slightly different output. This is due to the order of transforms within instcombine resulting in two slightly different fixed points. That's something we should fix, but it isn't a problem with this patch per se.

Differential Revision: https://reviews.llvm.org/D57398

llvm-svn: 352653
2019-01-30 19:21:11 +00:00
Sanjay Patel ff83dc57f4 [InstCombine][x86] add tests for addcarry intrinsic; NFC
llvm-svn: 352627
2019-01-30 15:07:49 +00:00
Philip Reames 3cfd351efc Correct contents for r352453
I had a local change I hadn't noticed when submitting that auto-update. As such, the auto-update was wrong. This should fix it, and with that, it's clearly time to stop submitting changes and go to bed.

llvm-svn: 352454
2019-01-29 06:40:02 +00:00
Philip Reames 2ddf96db50 [Tests] Regen to remove future test diffs
This file appears to have been manually edited at some point after being auto-updated. A future change adjusts this file slightly, and all of the updates make the diff super confusing.

llvm-svn: 352453
2019-01-29 06:34:46 +00:00
Simon Pilgrim 8c4abd20eb [InstCombine] Make x86 PADDS/PSUBS constant folding tests generic
As discussed on D55894, this replaces the existing PADDS/PSUBS intrinsics with the sadd/ssub.sat generic intrinsics and moves the tests out of the x86 subfolder.

PR40110 has been raised to fix the regression with constant folding vectors containing undef elements.
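
The mapping, sketched for one case:

%r = call <8 x i16> @llvm.x86.sse2.padds.w(<8 x i16> %a, <8 x i16> %b)
; --> generic saturating form:
%r = call <8 x i16> @llvm.sadd.sat.v8i16(<8 x i16> %a, <8 x i16> %b)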

llvm-svn: 349759
2018-12-20 13:50:12 +00:00
Simon Pilgrim 420d1e12b6 Regenerate test
llvm-svn: 349646
2018-12-19 17:24:34 +00:00
Sanjay Patel 2aa2dc76c2 [InstCombine] try to convert x86 movmsk intrinsic to generic IR (PR39927)
call iM movmsk(sext <N x i1> X) --> zext (bitcast <N x i1> X to iN) to iM

This has the potential to create less-than-8-bit scalar types as shown in 
some of the test diffs, but it looks like the backend knows how to deal 
with that in these patterns. This is the simple part of the fix suggested in:
https://bugs.llvm.org/show_bug.cgi?id=39927

Differential Revision: https://reviews.llvm.org/D55529

llvm-svn: 348862
2018-12-11 16:38:03 +00:00
Sanjay Patel 5c00519c9a [InstCombine] add tests for movmsk (PR39927) NFC
llvm-svn: 348800
2018-12-10 21:44:20 +00:00
Sanjay Patel 3436dc2923 [InstCombine] drop poison flags in SimplifyVectorDemandedElts
We established the (unfortunately complicated) rules for UB/poison
propagation with vector ops in:
D48893
D48987
D49047

It's clear from the affected tests that we are potentially creating 
poison where none existed before the transforms. For add/sub/mul,
the answer is simple: just drop the flags because the extra undef
vector lanes are generally more valuable for analysis and codegen.
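
A sketch: once an undemanded constant lane becomes undef, keeping nsw could introduce poison, so the flag is dropped.

%r = add nsw <2 x i32> %x, <i32 1, i32 7>   ; lane 1 not demanded
; -->
%r = add <2 x i32> %x, <i32 1, i32 undef>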

llvm-svn: 343819
2018-10-04 21:36:50 +00:00
Sanjay Patel cafdeb1aa6 [InstCombine] allow SimplifyDemandedVectorElts to work with FP binops
We're a long way from D50992 and D51553, but this is where we have to start.
We weren't back-propagating undefs into binop constant values for anything but
add/sub/mul/and/or/xor. 

This is likely because we have to be careful about not introducing UB/poison 
with div/rem/shift. But I suspect we already are getting the poison part wrong 
for add/sub/mul (although it may not be possible to expose the bug currently
because we use SimplifyDemandedVectorElts from a limited set of opcodes).
See the discussion/implementation from D48987 and D49047.

This patch just enables functionality for FP ops because those do not have 
UB/poison potential.
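
For example (sketch): an undemanded FP constant lane can be relaxed to undef with no poison concerns.

%r = fadd <2 x float> %x, <float 1.0, float 42.0>   ; lane 1 not demanded
; -->
%r = fadd <2 x float> %x, <float 1.0, float undef>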

llvm-svn: 343727
2018-10-03 21:44:59 +00:00
Sanjay Patel 09e02fbf51 [InstCombine][x86] try even harder to convert blendv intrinsic to generic IR (PR38814)
Follow-up to rL342324 (D52059):

Missing optimizations with blendv are shown in:
https://bugs.llvm.org/show_bug.cgi?id=38814

This is an easier and more powerful solution than adding pattern matching for a few 
special cases in the backend. The potential danger with this transform in IR is that 
the condition value can get separated from the select, and the backend might not be 
able to make a blendv out of it again.

llvm-svn: 342806
2018-09-22 14:43:55 +00:00
Sanjay Patel 296d35a5e9 [InstCombine][x86] try harder to convert blendv intrinsic to generic IR (PR38814)
Missing optimizations with blendv are shown in:
https://bugs.llvm.org/show_bug.cgi?id=38814

If this works, it's an easier and more powerful solution than adding pattern matching 
for a few special cases in the backend. The potential danger with this transform in IR
is that the condition value can get separated from the select, and the backend might 
not be able to make a blendv out of it again. I don't think that's too likely, but 
I've kept this patch minimal with a 'TODO', so we can test that theory in the wild 
before expanding the transform.
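
A sketch of the transform (BLENDV selects from the second source where the mask's sign bit is set):

%r = call <16 x i8> @llvm.x86.sse41.pblendvb(<16 x i8> %a, <16 x i8> %b, <16 x i8> %mask)
; --> when %mask is known to be sext(%cond) for a bool vector %cond:
%r = select <16 x i1> %cond, <16 x i8> %b, <16 x i8> %a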

Differential Revision: https://reviews.llvm.org/D52059

llvm-svn: 342324
2018-09-15 14:25:44 +00:00
Sanjay Patel b437238e95 [InstCombine] add more tests for x86 blendv (PR38814); NFC
llvm-svn: 342237
2018-09-14 13:47:33 +00:00