Commit Graph

Simon Pilgrim 53fb9d062b [InstCombine] Add partial bswap vector test from D88578 2020-10-02 13:19:19 +01:00
Simon Pilgrim ec07ae2a83 [InstCombine] Add some basic vector bswap tests
We already get the vNi16 cases via matching them as a rotate followed by the fshl -> bswap combines
2020-10-02 11:08:12 +01:00
Simon Pilgrim 670e60c023 [InstCombine] Add partial bswap test from D88578 2020-10-02 10:34:30 +01:00
Nikita Popov 9d1c8c0ba9 [InstCombine] Fix select operand simplification with undef (PR47696)
When replacing X == Y ? f(X) : Z with X == Y ? f(Y) : Z, make sure
that Y cannot be undef. If it may be undef, we might end up picking
a different value for undef in the comparison and the select
operand.
2020-10-01 21:15:48 +02:00
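To make the hazard concrete, a minimal IR sketch (hypothetical function, not from the patch):
```
define i32 @sketch(i32 %x, i32 %y) {
  ; if %y may be undef, each of its uses may resolve to a different value,
  ; so rewriting the true arm from f(%x) to f(%y) could pick a value that
  ; disagrees with the one the icmp saw
  %cmp = icmp eq i32 %x, %y
  %inc = add i32 %x, 1
  %sel = select i1 %cmp, i32 %inc, i32 0
  ret i32 %sel
}
```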
Sanjay Patel 114e964dce [InstCombine] auto-generate complete test checks; NFC 2020-10-01 13:44:31 -04:00
Simon Pilgrim f425418fc4 [InstCombine] Add tests for 'partial' bswap patterns
As mentioned on PR47191, if we're bswap'ing some bytes and zeroing the remainder, we can perform this as a bswap+mask, which helps us match 'partial' bswaps as a first step towards folding into a more complex bswap pattern.
2020-09-30 16:09:09 +01:00
Simon Pilgrim 323d08e50a [InstCombine] Fix bswap(trunc(bswap(x))) -> trunc(lshr(x, c)) vector support
Use getScalarSizeInBits not getPrimitiveSizeInBits to determine the shift value at the element level.
2020-09-30 16:01:08 +01:00
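A sketch of the vector pattern (hypothetical test; <2 x i32> -> <2 x i16>, so the per-element shift is 32 - 16 = 16, the scalar size, not the vector's primitive size):
```
declare <2 x i32> @llvm.bswap.v2i32(<2 x i32>)
declare <2 x i16> @llvm.bswap.v2i16(<2 x i16>)

define <2 x i16> @bswap_trunc_bswap(<2 x i32> %x) {
  %b1 = call <2 x i32> @llvm.bswap.v2i32(<2 x i32> %x)
  %tr = trunc <2 x i32> %b1 to <2 x i16>
  %b2 = call <2 x i16> @llvm.bswap.v2i16(<2 x i16> %tr)
  ; expected fold: lshr <2 x i32> %x, <i32 16, i32 16>, then trunc
  ret <2 x i16> %b2
}
```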
Simon Pilgrim b85de2c69c [InstCombine] Add bswap(trunc(bswap(x))) -> trunc(lshr(x, c)) vector tests
Add tests showing failure to correctly fold vector bswap(trunc(bswap(x))) intrinsic patterns
2020-09-30 16:01:08 +01:00
Simon Pilgrim 2ef73025af [InstCombine] Remove %tmp variable names from bswap-fold tests
Appease the update_test_checks script, which was complaining about potential %TMP clashes
2020-09-30 15:38:04 +01:00
Simon Pilgrim 7fcad5583a [InstCombine] Remove %tmp variable names from bswap tests
Appease the update_test_checks script, which was complaining about potential %TMP clashes
2020-09-30 14:44:15 +01:00
Simon Pilgrim 08c5720405 [InstCombine] Add PR47191 bswap tests 2020-09-30 14:19:18 +01:00
Simon Pilgrim af47d40b9c [InstCombine] recognizeBSwapOrBitReverseIdiom - recognise zext(bswap(trunc(x))) patterns (PR39793)
PR39793 demonstrated an issue where we fail to recognize 'partial' bswap patterns of the lower bytes of an integer source.

In fact, most of this is already in place: collectBitParts suitably tags zero bits, so we just need to handle this case correctly by finding the zero'd upper bits and reducing the bswap pattern to just the active demanded bits.

Differential Revision: https://reviews.llvm.org/D88316
2020-09-30 12:07:19 +01:00
Sanjay Patel 0527c8749b [InstCombine] ease alignment restriction for converting masked load to normal load
I think we initially made this fold conservative to be safer, but we do not
need the alignment attribute/metadata limitation because the masked load
intrinsic itself specifies the alignment. A normal vector load is better for
IR transforms and should be no worse in codegen than the masked alternative.
If it is worse for some target, the backend can reverse this transform.

Differential Revision: https://reviews.llvm.org/D88505
2020-09-29 15:26:22 -04:00
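A sketch of the relaxed fold, assuming an all-ones mask (hypothetical test, using the 2020-era typed-pointer intrinsic signature):
```
declare <4 x float> @llvm.masked.load.v4f32.p0v4f32(<4 x float>*, i32, <4 x i1>, <4 x float>)

define <4 x float> @all_ones_mask(<4 x float>* %p) {
  ; the i32 operand already encodes alignment 4, so no attribute or
  ; metadata is needed to turn this into `load <4 x float>, ..., align 4`
  %v = call <4 x float> @llvm.masked.load.v4f32.p0v4f32(<4 x float>* %p, i32 4, <4 x i1> <i1 true, i1 true, i1 true, i1 true>, <4 x float> undef)
  ret <4 x float> %v
}
```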
Sanjay Patel 5409e4831f [InstCombine] adjust duplicate test for masked load; NFC
The test after the changed test was checking exactly the same dereferenceable bytes.
2020-09-29 13:31:10 -04:00
Simon Pilgrim 0cf48a7065 [InstCombine] visitTrunc - trunc (*shr (trunc A), C) --> trunc(*shr A, C)
Attempt to fold trunc (*shr (trunc A), C) --> trunc(*shr A, C) iff the shift amount is small enough that all zero/sign bits created by the shift are removed by the last trunc.

Helps fix the regressions encountered in D88316.

I've tweaked a couple of shift values as suggested by @lebedev.ri to ensure we have coverage of shift values close (above/below) to the max limit.

Differential Revision: https://reviews.llvm.org/D88429
2020-09-29 18:27:42 +01:00
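A sketch of the legality condition (hypothetical test): the shift amount must not exceed the intermediate width minus the destination width.
```
define i8 @trunc_lshr_trunc(i32 %a) {
  %t = trunc i32 %a to i16
  %s = lshr i16 %t, 4       ; 4 <= 16 - 8, so every zero bit the shift
  %r = trunc i16 %s to i8   ; creates is dropped by the final trunc
  ; expected: %s2 = lshr i32 %a, 4 ; %r = trunc i32 %s2 to i8
  ret i8 %r
}
```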
Sanjay Patel 388b068956 [InstCombine] fix weird formatting in test file; NFC
It apparently didn't cause trouble for the parser or FileCheck,
but it was confusing to see a function def split by asserts.
2020-09-29 13:22:25 -04:00
Simon Pilgrim e5f047f27e [InstCombine] Fix the outofrange tests and add exact shift tests for D88429 2020-09-29 17:15:16 +01:00
Sanjay Patel ee34d9b210 [InstCombine] use redirect of input file in regression tests; NFC
This is a repeat of 1880092722 from 2009. We should have less risk
of hitting bugs at this point because we auto-generate positive CHECK
lines only, but this makes things consistent.

Copying the original commit msg:
"Change tests from "opt %s" to "opt < %s" so that opt doesn't see the
input filename so that opt doesn't print the input filename in the
output so that grep lines in the tests don't unintentionally match
strings in the input filename."
2020-09-29 11:06:25 -04:00
Simon Pilgrim 7a55989dc4 [InstCombine] Add some basic trunc(lshr(zext(x),c)) tests
Copied from the sext equivalents
2020-09-29 15:49:57 +01:00
Simon Pilgrim 89a8a0c910 [InstCombine] Inherit exact flags on extended shifts in trunc (lshr (sext A), C) --> (ashr A, C)
This was missed in D88475
2020-09-29 15:32:09 +01:00
Simon Pilgrim 042f22bda5 [InstCombine] Add exact shift tests missed in D88475
I missed the post-LGTM comment from @lebedev.ri
2020-09-29 15:24:59 +01:00
Simon Pilgrim 14ff38e235 [InstCombine] visitTrunc - trunc (lshr (sext A), C) --> (ashr A, C) non-uniform support
This came from @lebedev.ri's suggestion to use m_SpecificInt_ICMP for D88429 - since I was going to change the m_APInt to m_Constant for that patch, I thought I would do it for the only other user of the APInt first.

I've added a ConstantExpr::getUMin helper - it's trivial to add UMAX/SMIN/SMAX, but I thought I'd wait until we have use cases.

Differential Revision: https://reviews.llvm.org/D88475
2020-09-29 15:01:16 +01:00
Simon Pilgrim 324df2661b [InstCombine] Add trunc(lshr(sext(x),c)) non-uniform vector tests 2020-09-29 10:56:15 +01:00
Serguei Katkov 297ec61130 [IsKnownNonZero] Handle the case with non-constant phi nodes
Handle the case when all inputs of a phi are proven to be non-zero.

Constants are checked at the beginning of this method, before the
recursion-depth check, so a constant phi is just a special case of a non-constant one.

Recursion depth is already handled by the function.

Reviewers: aqjune, nikic, efriedma
Reviewed By: nikic
Subscribers: dantrushin, hiraditya, jdoerfert, llvm-commits
Differential Revision: https://reviews.llvm.org/D88276
2020-09-29 15:22:10 +07:00
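A sketch of the newly handled shape (hypothetical test): every phi input is provably non-zero, but not every input is a constant.
```
define i1 @phi_known_non_zero(i1 %c, i32 %x) {
entry:
  %nz = or i32 %x, 1              ; non-constant, but the low bit is set
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  %p = phi i32 [ %nz, %a ], [ 42, %b ]
  %r = icmp ne i32 %p, 0          ; now foldable to true
  ret i1 %r
}
```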
Simon Pilgrim 2f768a68a1 [InstCombine] Regenerate cast tests. NFC. 2020-09-28 21:32:12 +01:00
Simon Pilgrim d047bb1cf6 [InstCombine] Add trunc(shr(trunc(x),c)) non-uniform vector tests 2020-09-28 18:53:38 +01:00
Simon Pilgrim ad4f11a9d3 [InstCombine] Add basic trunc(shr(trunc(x),c)) tests
Helps improve the minor regressions noticed on D88316
2020-09-28 18:00:28 +01:00
Simon Pilgrim 63ee42a06b [InstCombine] matchRotate - force splat of uniform constant rotation amounts (PR46895)
Fixes a minor bug in D88402 where we were using the original shift constant (with undefs) instead of one with the splat values (re)splatted to all elements.
2020-09-28 15:12:41 +01:00
Simon Pilgrim dabb14cadd [InstCombine] matchRotate - allow undef in uniform constant rotation amounts (PR46895)
An extension to D87452: we can safely permit undefs in the uniform/splat detection

https://alive2.llvm.org/ce/z/nT-ptN

Differential Revision: https://reviews.llvm.org/D88402
2020-09-28 13:36:13 +01:00
Simon Pilgrim 0c671bfe00 [InstCombine] Add tests for vector rotate by constants with undefs. 2020-09-28 09:55:43 +01:00
Simon Pilgrim 98c5eebcf7 [InstCombine] Add basic vector test coverage for icmp_eq/ne zero combines 2020-09-26 20:09:11 +01:00
Simon Pilgrim 9ff9c1d8ee [InstCombine] matchRotate - support (uniform) constant rotation amounts (PR46895)
This patch adds handling of rotation patterns with constant shift amounts - the next step will be deciding how we want to support non-uniform constant vectors.

Differential Revision: https://reviews.llvm.org/D87452
2020-09-25 22:03:10 +01:00
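A sketch of the uniform-constant rotation pattern now matched (hypothetical test):
```
define i32 @rotl_by_5(i32 %x) {
  %hi = shl i32 %x, 5
  %lo = lshr i32 %x, 27     ; 27 == 32 - 5, a rotate left by 5
  %r = or i32 %hi, %lo
  ; expected: %r = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 5)
  ret i32 %r
}
```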
Simon Pilgrim 994ef4e7bb [InstCombine] Fix test name to match type. NFCI.
We're testing a <2 x i36>, not a <2 x i16>
2020-09-25 22:00:36 +01:00
Simon Pilgrim 2a0ca17f66 [InstCombine] collectBitParts - add fshl/fshr handling
Pulled from D87452, this is a fixed version of the collectBitParts fshl/fshr handling which, as @nikic noticed, wasn't checking for different providers and didn't have correct bit ordering (which was hidden by only testing shift amounts of bitwidth/2).

Differential Revision: https://reviews.llvm.org/D88292
2020-09-25 20:34:59 +01:00
Simon Pilgrim 132f29ce06 [InstCombine] Add some extra bswap tests from PR39793
Also test for cases where recognizeBSwapOrBitReverseIdiom checks for a truncated bswap pattern.
2020-09-25 15:46:19 +01:00
Simon Pilgrim 8d90d92f0d [InstCombine] Add 'partial' bswap tests from PR39793
Tests for basic zext(bswap(trunc(x))) patterns shown on PR39793
2020-09-25 15:28:21 +01:00
Simon Pilgrim 852447650c [InstCombine] Add bswap tests from funnel shift intrinsics
Based on the (WIP) patch in D87452 - I'm intending to add the intrinsics handling to collectBitParts as a separate patch to make the changes clearer.
2020-09-25 12:40:23 +01:00
Matt Arsenault dc08185ca7 IR: Have byref imply dereferenceable
The LangRef already states it does, but this wasn't implemented. Also
covers inalloca and preallocated. Also helps fix a dependence on
pointer element types.
2020-09-24 09:57:28 -04:00
Sanjay Patel 8e712807e4 [InstCombine] regenerate test checks; NFC 2020-09-24 09:34:17 -04:00
David Sherwood 59c4d5aad0 [SVE] Fix InstCombinerImpl::PromoteCastOfAllocation for scalable vectors
In this patch I've fixed some warnings that arose from the implicit
cast of TypeSize -> uint64_t. I tried writing a variety of different
cases to show how this optimisation might work for scalable vectors
and found:

1. The optimisation does not work for cases where the cast type
is scalable and the allocated type is not. This is because we need to
know how many times the cast type fits into the allocated type.
2. If we pass all the various checks for the case when the allocated
type is scalable and the cast type is not, then when creating the
new alloca we have to take vscale into account. This leads to
sub-optimal IR that is worse than the original IR.
3. For the remaining case when both the alloca and cast types are
scalable it is hard to find examples where the optimisation would
kick in, except for simple bitcasts, because we typically fail the
ABI alignment checks.

For now I've changed the code to bail out if only one of the alloca
and cast types is scalable. This means we continue to support the
existing cases where both types are fixed, and also the specific case
when both types are scalable with the same size and alignment, for
example a simple bitcast of an alloca to another type.

I've added tests that show we don't attempt to promote the alloca,
except for simple bitcasts:

  Transforms/InstCombine/AArch64/sve-cast-of-alloc.ll

Differential revision: https://reviews.llvm.org/D87378
2020-09-23 08:43:05 +01:00
Arthur Eubanks 61ac58e10a [NewPM] Pin tests with -debug-pass to legacy PM
-debug-pass is a legacy PM only option.

Some tests checks that the pass returned that it made a change,
which is not relevant to the NPM, since passes return PreservedAnalyses.

Some tests check that passes are freed at the proper time, which is also
not relevant to the NPM.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D87945
2020-09-22 17:54:25 -07:00
Hubert Tong a60852e9d6 [InstCombine][NFC][tests] Add ninf base value case to pow-sqrt.ll 2020-09-22 18:58:05 -04:00
Hubert Tong 32c9991dab [InstCombine] Fix errno bug in pow expansion to sqrt
A conversion from `pow` to `sqrt` shall not call an `errno`-setting
`sqrt` with -infinity: the `sqrt` will set `EDOM` where the `pow`
call need not.

This patch avoids the erroneous (pun not intended) transformation by
applying the restrictions discussed in the thread for
https://lists.llvm.org/pipermail/llvm-dev/2020-September/145051.html.

The existing tests are updated (depending on emphasis in the checks for
library calls, avoidance of overlap, and overall coverage):
  - to add `ninf`, retaining the intended library call,
  - to use the intrinsic, retaining the use of `select`, or
  - to expect the replacement to not occur.

The following is tested:
  - The pow intrinsic folds to a `select` instruction to
    handle -infinity.
  - The pow library call folds, with `ninf`, to `sqrt` without the
    `select` instruction associated with handling -infinity.
  - The pow library call does not fold to `sqrt` without `ninf`.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D87877
2020-09-22 18:58:05 -04:00
Hubert Tong 6801950192 [InstCombine] For pow(x, +/-0.5), stop falling into pow(x, 1.5), etc. case
The current code for handling pow(x, y) where y is an integer plus 0.5
is not explicitly guarded against attempting to transform the case where
abs(y) is exactly 0.5.

The latter case is meant to be handled by `replacePowWithSqrt`. Indeed,
if the pow(x, integer+0.5) case proceeds past a certain point, it will
hit an assertion by attempting to form pow(x, 0) using `getPow`.

This patch adds an explicit check to prevent attempting the
pow(x, integer+0.5) transformation on pow(x, +/-0.5) as suggested during
the review of D87877. This has the effect of retaining the shrinking of
`pow` to `powf` when the `sqrt` libcall cannot be formed.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D88066
2020-09-22 14:23:32 -04:00
Arthur Eubanks 44b1643d17 [NewPM] Support -disable-simplify-libcall/-disable-builtin in NPM opt
Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D87932
2020-09-21 16:38:37 -07:00
Arthur Eubanks f4f7df037e [DIE] Remove DeadInstEliminationPass
This pass is like DeadCodeEliminationPass, but only does one pass
through a function instead of iterating on users of eliminated
instructions.

DeadCodeEliminationPass should be used in all cases.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D87933
2020-09-21 12:12:25 -07:00
Sanjay Patel 7903ae4720 [InstCombine] factorize left shifts of add/sub
We do similar factorization folds in SimplifyUsingDistributiveLaws,
but that drops no-wrap properties. Propagating those optimally may
help solve:
https://llvm.org/PR47430

The propagation is all-or-nothing for these patterns: when all
3 incoming ops have nsw or nuw, the 2 new ops should have the
same no-wrap property:
https://alive2.llvm.org/ce/z/Dv8wsU

This also solves:
https://llvm.org/PR47584
2020-09-20 12:55:24 -04:00
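A sketch of the all-or-nothing flag propagation (hypothetical test):
```
define i8 @factorize_shl_add(i8 %x, i8 %y) {
  %xs = shl nsw i8 %x, 3
  %ys = shl nsw i8 %y, 3
  %r = add nsw i8 %xs, %ys
  ; all three ops carry nsw, so the factored form keeps it:
  ; %a = add nsw i8 %x, %y ; %r = shl nsw i8 %a, 3
  ret i8 %r
}
```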
Sanjay Patel cf75e83275 [InstCombine] replace zombie unreachable values with 'undef' before erasing
The test (currently crashing) is reduced from the example provided
in the post-commit discussion in D87149.

Differential Revision: https://reviews.llvm.org/D87965
2020-09-20 12:25:08 -04:00
Nikita Popov a2f9098f7a [InstCombine] Regenerate test checks (NFC) 2020-09-19 21:07:54 +02:00
Sanjay Patel 534e9132af [InstCombine] auto-generate test checks; NFC 2020-09-19 11:06:47 -04:00
Sanjay Patel 2c3d199fbf [InstCombine] regenerate test checks; NFC 2020-09-19 10:43:18 -04:00
Philip Reames 06f136f61e [instcombine][x86] Converted pdep/pext with shifted mask to simple arithmetic
If the mask of a pdep or pext instruction is a shifted mask (i.e. one contiguous block of ones), we need at most one 'and' and one shift to represent the operation without the intrinsic. On all platforms I know of, this is faster than the pdep/pext.

The cost modelling for multiple contiguous blocks might be worth exploring in a follow up, but it's not relevant for my current use case. It would almost certainly be a win on AMD parts, where these are really slow, though.

Differential Revision: https://reviews.llvm.org/D87861
2020-09-18 14:54:24 -07:00
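A sketch of the pext case (hypothetical test; pdep is analogous, with the and applied before the shift):
```
declare i32 @llvm.x86.bmi.pext.32(i32, i32)

define i32 @pext_contiguous_mask(i32 %x) {
  ; mask 0x00FF0000 is one contiguous block of ones
  %r = call i32 @llvm.x86.bmi.pext.32(i32 %x, i32 16711680)
  ; expected: %s = lshr i32 %x, 16 ; %r = and i32 %s, 255
  ret i32 %r
}
```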
Huihui Zhang 9ad6049736 [InstCombine][SVE] Skip scalable type for InstCombiner::getFlippedStrictnessPredicateAndConstant.
We cannot iterate over a scalable vector; the number of elements is unknown at compile time.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87918
2020-09-18 11:26:36 -07:00
Nikita Popov 13e19d2e7c Revert "[InstCombine] Canonicalize SPF_ABS to abs intrinsic"
This reverts commit 05d4c4ebc2.

mstorsjo reports a miscompile after this change in
https://reviews.llvm.org/D87188#2281093. Reverting until I can
investigate this.
2020-09-18 09:38:26 +02:00
Nikita Popov 05d4c4ebc2 [InstCombine] Canonicalize SPF_ABS to abs intrinsic
Enable canonicalization of SPF_ABS and SPF_NABS to the abs intrinsic.

To be conservative, the one-use check on the comparison is retained,
this may be relaxed if all goes well.

It's pretty likely that this will uncover places that are missing
handling for the abs() intrinsic. Please report any performance
regressions you see.

Differential Revision: https://reviews.llvm.org/D87188
2020-09-17 22:28:34 +02:00
Nikita Popov 222bf3ffbc Reapply [InstCombine] Simplify select operand based on equality condition
Reapply after fixing SimplifyWithOpReplaced() to never return
the original value, which would lead to an infinite loop in this
transform.

-----

For selects of the type X == Y ? A : B, check if we can simplify A
by using the X == Y equality and replace the operand if that's
possible. We already try to do this in InstSimplify, but will only
fold if the result of the simplification is the same as B, in which
case the select can be dropped entirely. Here the select will be
retained, just one operand simplified.

As we are performing an actual replacement here, we don't have
problems with refinement / poison values.

Differential Revision: https://reviews.llvm.org/D87480
2020-09-16 20:53:58 +02:00
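A sketch of the transform (hypothetical test): only the true arm is simplified; the select itself is retained.
```
define i32 @select_op_replaced(i32 %x, i32 %y) {
  %cmp = icmp eq i32 %x, %y
  %sub = sub i32 %x, %y              ; under %cmp, %x == %y, so this is 0
  %sel = select i1 %cmp, i32 %sub, i32 7
  ; expected: %sel = select i1 %cmp, i32 0, i32 7
  ret i32 %sel
}
```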
Nikita Popov 94d912021f [InstCombine] Add test for infinite combine loop (NFC)
Test courtesy of bkramer for the infinite combine loop introduced
by D87480.
2020-09-16 20:53:25 +02:00
Sebastian Neubauer 833b3b0d3a [AMDGPU] Add v3f16/v3i16 support to SDag
Fix lowering and instruction selection for v3x16 types
and enable InstCombine to emit them.

This patch only implements it for the selection dag.
GlobalISel tests in GlobalISel/llvm.amdgcn.image.load.1d.d16.ll and
GlobalISel/llvm.amdgcn.image.store.2d.d16.ll still don't work.

Differential Revision: https://reviews.llvm.org/D84420
2020-09-16 17:20:27 +02:00
Serguei Katkov 00f09dd4c1 [InstCombine] Add tests for statepoint simplification
These tests increase coverage for the change introduced in D85959

Reviewers: reames
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D87224
2020-09-16 10:56:38 +07:00
Sanjay Patel aa57c1c967 [InstCombine] fix bug in pow expansion
There is at least one other bug related to pow -> sqrt transforms:
http://lists.llvm.org/pipermail/llvm-dev/2020-September/145051.html
...but we probably can't solve that without fixing this first.
2020-09-15 09:29:48 -04:00
Sanjay Patel 7ffc9aa538 [InstCombine] add RUN to show miscompile of pow expansion; NFC
The code drops the sqrt op instead of bailing out,
so this is very wrong.
2020-09-15 09:29:48 -04:00
Sanjay Patel eb66b04cbe [InstCombine] improve test names; NFC
This is not a valid transform unless we can prove
that the program does not read errno after the pow
call and before some other function changes it.
2020-09-15 09:29:48 -04:00
Benjamin Kramer b768546fe0 Revert "[InstCombine] Simplify select operand based on equality condition"
This reverts commit cfff88c03c. Sends
instcombine into an infinite loop.

```
define i1 @foo(i32 %arg, i32 %arg1) {
bb:
  %tmp = udiv i32 %arg, %arg1
  %tmp2 = mul nsw i32 %tmp, %arg1
  %tmp3 = icmp eq i32 %tmp2, %arg
  %tmp4 = select i1 %tmp3, i32 %tmp, i32 undef
  %tmp5 = icmp sgt i32 %tmp4, 255
  ret i1 %tmp5
}
```
2020-09-15 12:22:47 +02:00
Nikita Popov cfff88c03c [InstCombine] Simplify select operand based on equality condition
For selects of the type X == Y ? A : B, check if we can simplify A
by using the X == Y equality and replace the operand if that's
possible. We already try to do this in InstSimplify, but will only
fold if the result of the simplification is the same as B, in which
case the select can be dropped entirely. Here the select will be
retained, just one operand simplified.

As we are performing an actual replacement here, we don't have
problems with refinement / poison values.

Differential Revision: https://reviews.llvm.org/D87480
2020-09-14 20:07:06 +02:00
Nikita Popov c34a99fe58 [InstCombine] Add extra use tests for abs canonicalization (NFC) 2020-09-12 21:13:46 +02:00
Simon Pilgrim 3170d54842 [InstCombine][X86] Convert masked load/stores with (sign extended) bool vector masks to generic intrinsics.
As detailed on PR11210, if the mask is known to come from a (sign extended) bool vector (e.g. comparisons) then we can represent it with a generic masked load/store without losing anything.

We already do something similar for BLENDV -> SELECT conversion.
2020-09-12 15:09:28 +01:00
Tyker 78de7297ab Reland [AssumeBundles] Use operand bundles to encode alignment assumptions
NOTE: There is a mailing list discussion on this: http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html

Complementary to the assumption outliner prototype in D71692, this patch
shows how we could simplify the code emitted for an alignment
assumption. The generated code is smaller, less fragile, and it makes it
easier to recognize the additional use as an "assumption use".

As mentioned in D71692 and on the mailing list, we could adopt this
scheme, and similar schemes for other patterns, without adopting the
assumption outlining.
2020-09-12 15:36:06 +02:00
Simon Pilgrim d030aad789 [InstCombine][X86] Add tests for masked load/stores with comparisons.
As detailed on PR11210, if the mask is known to come from a (sign extended) bool vector (e.g. comparisons) then we can represent it with a generic masked load/store without losing anything.
2020-09-12 14:32:27 +01:00
Nikita Popov 36e2e2e12e [InstCombine] Fix incorrect SimplifyWithOpReplaced transform (PR47322)
This is a followup to D86834, which partially fixed this issue in
InstSimplify. However, InstCombine repeats the same transform while
dropping poison flags -- which does not cover cases where poison is
introduced in some other way.

The fix here is a bit more comprehensive, because things are quite
entangled, and it's hard to only partially address it without
regressing optimization. There are really two changes here:

 * Export the SimplifyWithOpReplaced API from InstSimplify, with an
   added AllowRefinement flag. For replacements inside the TrueVal
   we don't actually care whether refinement occurs or not, the
   replacement is always legal. This part of the transform is now
   done in InstSimplify only. (It should be noted that the current
   AllowRefinement check is not sufficient -- that's an issue we
   need to address separately.)
 * Change the InstCombine fold to work by temporarily dropping
   poison generating flags, running the fold and then restoring the
   flags if it didn't work out. This will ensure that the InstCombine
   fold is correct as long as the InstSimplify fold is correct.

Differential Revision: https://reviews.llvm.org/D87445
2020-09-12 14:45:06 +02:00
Eli Friedman d751f86189 [ConstantFold] Make areGlobalsPotentiallyEqual less aggressive.
In particular, we shouldn't make assumptions about globals which are
unnamed_addr: we can fold them together with other globals.

Also while I'm here, use isInterposable() instead of trying to
explicitly name all the different kinds of weak linkage.

Fixes https://bugs.llvm.org/show_bug.cgi?id=47090

Differential Revision: https://reviews.llvm.org/D87123
2020-09-11 17:23:08 -07:00
Sanjay Patel 6aa3fc4a5b Revert "[InstCombine] propagate 'nsw' on pointer difference of 'inbounds' geps (PR47430)"
This reverts commit 324a53205a.

On closer examination of at least one of the test diffs,
this does not appear to be correct in all cases. Even the
existing 'nsw' creation may be wrong based on this example:
https://alive2.llvm.org/ce/z/uL4Hw9
https://alive2.llvm.org/ce/z/fJMKQS
2020-09-11 10:54:48 -04:00
Sanjay Patel 324a53205a [InstCombine] propagate 'nsw' on pointer difference of 'inbounds' geps (PR47430)
There's no signed wrap if both geps have 'inbounds':
https://alive2.llvm.org/ce/z/nZkQTg
https://alive2.llvm.org/ce/z/7qFauh
2020-09-11 10:39:09 -04:00
Sanjay Patel 6b5b6511a5 [InstCombine] add/move tests for ptr diff; NFC 2020-09-11 10:39:09 -04:00
Nikita Popov 476836331f [InstCombine] Add more tests for select op replacement (NFC) 2020-09-10 22:02:07 +02:00
Nikita Popov adb738899e [InstCombine] Regenerate test checks (NFC) 2020-09-10 22:02:07 +02:00
Christopher Tetreault 7ddfd9b3eb [SVE] Bail from VectorUtils heuristics for scalable vectors
Bail from maskIsAllZeroOrUndef and maskIsAllOneOrUndef prior to iterating over the number of
elements for scalable vectors.

Assert that the mask type is not scalable in possiblyDemandedEltsInMask.

Assert that the types are correct in all three functions.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87424
2020-09-10 12:29:37 -07:00
Nikita Popov 4e413e1621 [InstCombine] Temporarily do not drop volatile stores before unreachable
See discussion in D87149. Dropping volatile stores here is legal
per LLVM semantics, but causes issues for real code and may result
in a change to LLVM volatile semantics. Temporarily treat volatile
stores as "not guaranteed to transfer execution" in just this place,
until this issue has been resolved.
2020-09-10 16:16:44 +02:00
Sanjay Patel 1dd4c4e0a8 [InstCombine] add tests for add/sub-of-shl; NFC 2020-09-09 15:29:08 -04:00
Nikita Popov 8453fbf088 [ValueTracking] Compute known bits of min/max intrinsics
Implement known bits for the min/max intrinsics based on the
recently added KnownBits primitives.
2020-09-08 21:08:17 +02:00
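A sketch of the kind of fact this enables (hypothetical test): a bit known set in both operands is set in whichever of them is the max.
```
declare i32 @llvm.umax.i32(i32, i32)

define i32 @umax_known_bit(i32 %x, i32 %y) {
  %a = or i32 %x, 16
  %b = or i32 %y, 16
  %m = call i32 @llvm.umax.i32(i32 %a, i32 %b)
  %r = and i32 %m, 16
  ; expected: ret i32 16
  ret i32 %r
}
```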
Nikita Popov 8927c90069 [InstCombine] Add tests for known bits for min/max intrinsics (NFC)
We already have test coverage for the underlying calculation;
this just checks that the folding is wired up.
2020-09-08 21:08:17 +02:00
Nikita Popov f6b87da0c7 [InstCombine] Fold comparison of abs with int min
If the abs is poisoning, this is already folded to true/false.
For non-poisoning abs, we can convert this to a comparison with
the operand.
2020-09-08 20:23:03 +02:00
Nikita Popov 6eef387ddd [InstCombine] Test comparison of abs with int min (NFC) 2020-09-08 20:20:32 +02:00
Nikita Popov e97f3b1b43 [InstCombine] Fold abs of known negative operand
If the abs operand is known negative, we can replace
the abs with a neg.

To avoid computing known bits twice, I've removed the fold for the
non-negative case from InstSimplify. Both the non-negative and the
negative case are handled by InstCombine now, with one known bits call.

Differential Revision: https://reviews.llvm.org/D87196
2020-09-08 20:14:35 +02:00
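A sketch (hypothetical test):
```
declare i32 @llvm.abs.i32(i32, i1)

define i32 @abs_of_known_negative(i32 %x) {
  %n = or i32 %x, -2147483648   ; sign bit set, so known negative
  %r = call i32 @llvm.abs.i32(i32 %n, i1 false)
  ; expected: %r = sub i32 0, %n
  ret i32 %r
}
```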
Sanjay Patel 4964d75d70 [InstCombine] add bitwise logic fold tests for D86395; NFC 2020-09-08 09:17:42 -04:00
Sanjay Patel 8b30067919 [InstCombine] improve fold of pointer differences
This was supposed to be an NFC cleanup, but there's
a real logic difference (did not drop 'nsw') visible
in some tests in addition to an efficiency improvement.

This is because in the case where we have 2 GEPs,
the code was *always* swapping the operands and
negating the result. But if we have 2 GEPs, we
should *never* need swapping/negation AFAICT.

This is part of improving flags propagation noticed
with PR47430.
2020-09-07 15:54:32 -04:00
Sanjay Patel 70207816e3 [InstCombine] add ptr difference tests; NFC 2020-09-07 15:54:32 -04:00
Sanjay Patel 7a6d6f0f70 [InstCombine] improve folds for icmp with multiply operands (PR47432)
Check for no overflow along with an odd constant before
we lose information by converting to bitwise logic.

https://rise4fun.com/Alive/2Xl

  Pre: C1 != 0
  %mx = mul nsw i8 %x, C1
  %my = mul nsw i8 %y, C1
  %r = icmp eq i8 %mx, %my
  =>
  %r = icmp eq i8 %x, %y

  Name: nuw ne
  Pre: C1 != 0
  %mx = mul nuw i8 %x, C1
  %my = mul nuw i8 %y, C1
  %r = icmp ne i8 %mx, %my
  =>
  %r = icmp ne i8 %x, %y

  Name: odd ne
  Pre: C1 % 2 != 0
  %mx = mul i8 %x, C1
  %my = mul i8 %y, C1
  %r = icmp ne i8 %mx, %my
  =>
  %r = icmp ne i8 %x, %y
2020-09-07 12:40:37 -04:00
Sanjay Patel 11d8eedfa5 [InstCombine] move/add tests for icmp with mul operands; NFC 2020-09-07 12:40:37 -04:00
Sanjay Patel b22910daab [InstCombine] erase instructions leading up to unreachable
Normal dead code elimination ignores assume intrinsics, so we fail to
delete assumes that are not meaningful (and potentially worse if they
cause conflicts with other assumptions).

The motivating example in https://llvm.org/PR47416 suggests that we
might have problems upstream from here (difference between C and C++),
but this should be a cheap way to make sure we remove more dead code.

Differential Revision: https://reviews.llvm.org/D87149
2020-09-07 10:44:08 -04:00
Sanjay Patel 28aa60aae2 [InstCombine] add test with more unreachable insts; NFC
Goes with D87149
2020-09-07 08:19:43 -04:00
Sanjay Patel 3ca8b9a560 [InstCombine] give a name to an intermediate value for easier tracking; NFC
As noted in PR47430, we probably want to conditionally include 'nsw'
here anyway, so we are going to need to fill out the optional args.
2020-09-07 08:19:42 -04:00
Nikita Popov 5ad6552a83 [InstCombine] Add tests for known negative abs intrinsic (NFC)
And duplicate tests for known non-negative from InstSimplify.
2020-09-05 17:31:04 +02:00
Nikita Popov 4892d3a198 [InstCombine] Fold abs with dominating condition
Similar to D87168, but for abs. If we have a dominating x >= 0
condition, then we know that abs(x) is x. This fold is in
InstCombine, because we need to create a sub instruction for
the x < 0 case.

Differential Revision: https://reviews.llvm.org/D87184
2020-09-05 16:18:35 +02:00
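A sketch of the dominating-condition case (hypothetical test):
```
declare i32 @llvm.abs.i32(i32, i1)

define i32 @abs_dominated(i32 %x) {
entry:
  %nonneg = icmp sge i32 %x, 0
  br i1 %nonneg, label %yes, label %no
yes:                               ; %x >= 0 dominates here, so abs(%x) == %x
  %a = call i32 @llvm.abs.i32(i32 %x, i1 false)
  ret i32 %a
no:
  ret i32 0
}
```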
Nikita Popov ada8a17d94 [InstCombine] Fold abs intrinsic eq zero
Following the same transform for the select version of abs.
2020-09-05 15:11:38 +02:00
Nikita Popov 94c71d6aa1 [InstCombine] Add tests for abs intrinsic eq zero (NFC) 2020-09-05 15:11:38 +02:00
Nikita Popov 58b28fa7a2 [InstCombine] Fold mul of abs intrinsic
Same as the existing SPF_ABS fold. We don't need to explicitly
handle NABS, as the negs will get folded away first.
2020-09-05 12:37:45 +02:00
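A sketch (hypothetical test):
```
declare i32 @llvm.abs.i32(i32, i1)

define i32 @mul_of_abs(i32 %x) {
  %a = call i32 @llvm.abs.i32(i32 %x, i1 false)
  %r = mul i32 %a, %a     ; abs(x) * abs(x) == x * x
  ; expected: %r = mul i32 %x, %x
  ret i32 %r
}
```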
Nikita Popov 3ab13348ba [InstCombine] Add tests for mul of abs intrinsic (NFC) 2020-09-05 12:36:27 +02:00
Nikita Popov 10cb23c6ca [InstCombine] Fold cttz of abs intrinsic
Same as the existing fold for SPF_ABS. We don't need to explicitly
handle the NABS variant, as we'll first fold away the neg in that
case.
2020-09-05 12:25:41 +02:00
Nikita Popov 1903a1afd9 [InstCombine] Add tests for cttz of abs intrinsic (NFC) 2020-09-05 12:22:42 +02:00
Nikita Popov d401e376e4 [InstCombine] Test abs with dominating condition (NFC) 2020-09-05 11:10:01 +02:00
Sanjay Patel 35c6d56c04 [InstCombine] rename tmp values to avoid scripted FileCheck conflicts; NFC 2020-09-04 17:02:18 -04:00
Sanjay Patel c5d6b2b7e5 [InstCombine] add test for assume in block with unreachable (PR47416); NFC 2020-09-04 16:57:35 -04:00
Bryan Chan 3404add468 [EarlyCSE] Verify hash code in regression tests
As discussed in D86843, -earlycse-debug-hash should be used in more regression
tests to catch inconsistency between the hashing and the equivalence check.

Differential Revision: https://reviews.llvm.org/D86863
2020-09-04 10:40:35 -04:00
Sanjay Patel 2391a34f9f [InstCombine] canonicalize all commutative intrinsics with constant arg 2020-09-03 12:42:04 -04:00
Sanjay Patel 9bb3a9eebb [InstCombine] add tests for commutative intrinsics; NFC 2020-09-03 12:42:04 -04:00
Eli Friedman 96ef6998df [InstCombine] Fix a couple crashes with extractelement on a scalable vector.
Differential Revision: https://reviews.llvm.org/D86989
2020-09-02 18:02:07 -07:00
Venkataramanan Kumar 626c3738cd [InstCombine] Transform 1.0/sqrt(X) * X to X/sqrt(X)
These transforms will now be performed irrespective of the number of uses for the expression "1.0/sqrt(X)":
1.0/sqrt(X) * X => X/sqrt(X)
X * 1.0/sqrt(X) => X/sqrt(X)

We already handle more general cases, and we are intentionally not creating extra (and likely expensive)
fdiv ops in IR. This pattern is the exception to the rule because we always expect the Backend to reduce
X/sqrt(X) to sqrt(X), if it has the necessary (reassoc) fast-math-flags.

Ref: DAGCombiner optimizes X/sqrt(X) to sqrt(X).

Differential Revision: https://reviews.llvm.org/D86726
2020-09-02 08:23:48 -04:00
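A sketch of the pattern (hypothetical test; `fast` stands in for whatever flags the fold actually requires):
```
declare double @llvm.sqrt.f64(double)

define double @recip_sqrt_times_x(double %x) {
  %s = call fast double @llvm.sqrt.f64(double %x)
  %recip = fdiv fast double 1.0, %s
  %r = fmul fast double %recip, %x
  ; expected: %r = fdiv fast double %x, %s
  ; (the backend then reduces X/sqrt(X) to sqrt(X) given reassoc)
  ret double %r
}
```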
Sanjay Patel d48699e3e8 [InstCombine] adjust recip sqrt tests for better coverage; NFC
Goes with D86726
2020-09-01 09:43:42 -04:00
Roman Lebedev c23aefd7c3
[NFC][InstCombine] visitPHINode(): cleanup PHI CSE instruction replacement
As @nikic is pointing out in https://reviews.llvm.org/rGbf21ce7b908e#inline-4647,
this must be sufficient; otherwise `EliminateDuplicatePHINodes()`
would have hit issues with it already.
2020-08-31 22:29:39 +03:00
Venkataramanan Kumar 86163f885b [InstCombine] add extra-use tests for fmul+sqrt; NFC 2020-08-30 18:41:01 -04:00
Roman Lebedev bf21ce7b90
[InstCombine] Take 3: Perform trivial PHI CSE
The original take 1 was 6102310d81,
which taught InstSimplify to do that, which seemed better at the time,
since we got EarlyCSE support for free.

However, it was proven that we cannot do that there:
the simplified-to PHI would not be reachable from the original PHI,
and that is not something InstSimplify is allowed to do,
as noted in the commit ed90f15efb
that reverted it:
> It appears to cause compilation non-determinism and caused stage3 mismatches.

Then there was take 2, 3e69871ab5,
which was InstCombine-specific, but it again showed stage2-stage3 differences
and was reverted in bdaa3f86a0.
This is quite alarming.

Here, let's try to change how we find an existing PHI candidate:
due to the worklist order, and the way PHI nodes are inserted
(a new one may be inserted as the first one, or maybe not),
let's look at *all* PHI nodes in the block.

Effects on vanilla llvm test-suite + RawSpeed:
```
| statistic name                                     | baseline  | proposed  |      Δ |        % |    \|%\| |
|----------------------------------------------------|-----------|-----------|-------:|---------:|---------:|
| asm-printer.EmittedInsts                           | 7942329   | 7942457   |    128 |    0.00% |    0.00% |
| assembler.ObjectBytes                              | 254295632 | 254312480 |  16848 |    0.01% |    0.01% |
| correlated-value-propagation.NumPhis               | 18412     | 18347     |    -65 |   -0.35% |    0.35% |
| early-cse.NumCSE                                   | 2183283   | 2183267   |    -16 |    0.00% |    0.00% |
| early-cse.NumSimplify                              | 550105    | 541842    |  -8263 |   -1.50% |    1.50% |
| instcombine.NumAggregateReconstructionsSimplified  | 73        | 4506      |   4433 | 6072.60% | 6072.60% |
| instcombine.NumCombined                            | 3640311   | 3644419   |   4108 |    0.11% |    0.11% |
| instcombine.NumDeadInst                            | 1778204   | 1783205   |   5001 |    0.28% |    0.28% |
| instcombine.NumPHICSEs                             | 0         | 22490     |  22490 |    0.00% |    0.00% |
| instcombine.NumWorklistIterations                  | 2023272   | 2024400   |   1128 |    0.06% |    0.06% |
| instcount.NumCallInst                              | 1758395   | 1758802   |    407 |    0.02% |    0.02% |
| instcount.NumInvokeInst                            | 59478     | 59502     |     24 |    0.04% |    0.04% |
| instcount.NumPHIInst                               | 330557    | 330545    |    -12 |    0.00% |    0.00% |
| instcount.TotalBlocks                              | 1077138   | 1077220   |     82 |    0.01% |    0.01% |
| instcount.TotalFuncs                               | 101442    | 101441    |     -1 |    0.00% |    0.00% |
| instcount.TotalInsts                               | 8831946   | 8832606   |    660 |    0.01% |    0.01% |
| simplifycfg.NumHoistCommonCode                     | 24186     | 24187     |      1 |    0.00% |    0.00% |
| simplifycfg.NumInvokes                             | 4300      | 4410      |    110 |    2.56% |    2.56% |
| simplifycfg.NumSimpl                               | 1019813   | 999767    | -20046 |   -1.97% |    1.97% |
```
So it fires 22490 times, which is less than the ~24k that take 1 did,
but more than what take 2 did (22228 times).
It allows foldAggregateConstructionIntoAggregateReuse() to actually work
after PHI-of-extractvalue folds did their thing. Previously SimplifyCFG
would have done this PHI CSE, of all places. Additionally, allows some
more `invoke`->`call` folds to happen (+110, +2.56%).

All in all, expectedly, this catches less things overall,
but all the motivational cases are still caught, so all good.
2020-08-29 18:21:24 +03:00
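A sketch of the trivially redundant PHIs now CSE'd (hypothetical test):
```
define i32 @redundant_phis(i1 %c, i32 %a, i32 %b) {
entry:
  br i1 %c, label %t, label %f
t:
  br label %m
f:
  br label %m
m:
  %p1 = phi i32 [ %a, %t ], [ %b, %f ]
  %p2 = phi i32 [ %a, %t ], [ %b, %f ]   ; identical to %p1, folded into it
  %r = add i32 %p1, %p2
  ret i32 %r
}
```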
Roman Lebedev bdaa3f86a0
Revert "[InstCombine] Take 2: Perform trivial PHI CSE"
While the original variant, which did this in InstSimplify, (rightfully)
caused questions and was ultimately identified as the culprit
of a stage2-stage3 mismatch, it was expected that the
InstCombine-based implementation would be fine.

But apparently it's not, as
http://lab.llvm.org:8011/builders/clang-with-thin-lto-ubuntu/builds/24095/steps/compare-compilers/logs/stdio
suggests.

That points at a loop somewhere in InstCombine
over a nondeterministically sorted container, which causes
different worklist orderings.

This reverts commit 3e69871ab5.
2020-08-29 16:05:02 +03:00
Roman Lebedev 9c1ca8a3f7
[NFC][InstCombine] select.ll: remove outdated TODO comment
Fixed by 3e69871ab5
2020-08-29 15:10:13 +03:00
Roman Lebedev 3e69871ab5
[InstCombine] Take 2: Perform trivial PHI CSE
The original take was 6102310d81,
which taught InstSimplify to do that, which seemed better at the time,
since we got EarlyCSE support for free.

However, it was proven that we cannot do that there:
the simplified-to PHI would not be reachable from the original PHI,
and that is not something InstSimplify is allowed to do,
as noted in the commit ed90f15efb
that reverted it:
> It appears to cause compilation non-determinism and caused stage3 mismatches.

However, InstCombine already does many different optimizations,
so it should be a safe place to do it.

Note that we still can't just compare incoming value ranges,
because there is no guarantee that the PHIs we'd simplify to
were already re-visited and sorted.
However, coming up with a test for that is problematic.

Effects on vanilla llvm test-suite + RawSpeed:
```
| statistic name                                     | baseline  | proposed  |      Δ |        % |      |%| |
|----------------------------------------------------|-----------|-----------|-------:|---------:|---------:|
| instcombine.NumPHICSEs                             | 0         | 22228     |  22228 |    0.00% |    0.00% |
| asm-printer.EmittedInsts                           | 7942329   | 7942456   |    127 |    0.00% |    0.00% |
| assembler.ObjectBytes                              | 254295632 | 254313792 |  18160 |    0.01% |    0.01% |
| early-cse.NumCSE                                   | 2183283   | 2183272   |    -11 |    0.00% |    0.00% |
| early-cse.NumSimplify                              | 550105    | 541842    |  -8263 |   -1.50% |    1.50% |
| instcombine.NumAggregateReconstructionsSimplified  | 73        | 4506      |   4433 | 6072.60% | 6072.60% |
| instcombine.NumCombined                            | 3640311   | 3666911   |  26600 |    0.73% |    0.73% |
| instcombine.NumDeadInst                            | 1778204   | 1783318   |   5114 |    0.29% |    0.29% |
| instcount.NumCallInst                              | 1758395   | 1758804   |    409 |    0.02% |    0.02% |
| instcount.NumInvokeInst                            | 59478     | 59502     |     24 |    0.04% |    0.04% |
| instcount.NumPHIInst                               | 330557    | 330549    |     -8 |    0.00% |    0.00% |
| instcount.TotalBlocks                              | 1077138   | 1077221   |     83 |    0.01% |    0.01% |
| instcount.TotalFuncs                               | 101442    | 101441    |     -1 |    0.00% |    0.00% |
| instcount.TotalInsts                               | 8831946   | 8832611   |    665 |    0.01% |    0.01% |
| simplifycfg.NumInvokes                             | 4300      | 4410      |    110 |    2.56% |    2.56% |
| simplifycfg.NumSimpl                               | 1019813   | 999740    | -20073 |   -1.97% |    1.97% |
```
So it fires ~22k times, which is less than the ~24k that take 1 did.
It allows foldAggregateConstructionIntoAggregateReuse() to actually work
after PHI-of-extractvalue folds did their thing. Previously SimplifyCFG
would have done this PHI CSE, of all places. Additionally, allows some
more `invoke`->`call` folds to happen (+110, +2.56%).

All in all, expectedly, this catches less things overall,
but all the motivational cases are still caught, so all good.
2020-08-29 13:13:06 +03:00
Roman Lebedev 3ba83f2d29
[NFC][InstCombine] Add tests for PHI CSE
As discussed in https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20200824/824235.html
even though it seems worthwhile doing so in InstSimplify,
we really can't do that there, because the other PHI wouldn't be
def-reachable from the original PHI.

So ignoring whether or not EarlyCSE should also know to do this,
InstCombine is the place.
2020-08-29 13:13:06 +03:00
Owen Anderson ed90f15efb Revert "[InstSimplify][EarlyCSE] Try to CSE PHI nodes in the same basic block"
This reverts commit 6102310d81.  It
appears to cause compilation non-determinism and caused stage3
mismatches.
2020-08-28 23:43:42 +00:00
Nikita Popov ffe05dd125 [InstCombine] usub.sat(a, b) + b => umax(a, b) (PR42178)
Fixes https://bugs.llvm.org/show_bug.cgi?id=42178 by folding
usub.sat(a, b) + b to umax(a, b). The backend will expand umax
back to usubsat if that is profitable.

We may also want to handle uadd.sat(a, b) - b in the future.

Differential Revision: https://reviews.llvm.org/D63060
2020-08-28 21:52:29 +02:00
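A sketch of the fold (hypothetical test):
```
declare i32 @llvm.usub.sat.i32(i32, i32)

define i32 @usub_sat_plus_b(i32 %a, i32 %b) {
  ; usub.sat(a, b) is a > b ? a - b : 0, so adding b back gives umax(a, b)
  %s = call i32 @llvm.usub.sat.i32(i32 %a, i32 %b)
  %r = add i32 %s, %b
  ; expected: %r = call i32 @llvm.umax.i32(i32 %a, i32 %b)
  ret i32 %r
}
```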
Roman Lebedev 6102310d81
[InstSimplify][EarlyCSE] Try to CSE PHI nodes in the same basic block
Apparently we don't do this in EarlyCSE, InstSimplify,
or (old) GVN, but we do in NewGVN and, of all places, SimplifyCFG.

While I could teach EarlyCSE how to hash PHI nodes,
we can't really do much (anything?) even if we find two identical
PHI nodes in different basic blocks; the same-BB case is the interesting one,
and if we teach InstSimplify about it (which is what I wanted originally,
https://reviews.llvm.org/D86530), we get EarlyCSE support for free.

So I would think this is pretty uncontroversial.

On vanilla llvm test-suite + RawSpeed, this has the following effects:
```
| statistic name                                     | baseline  | proposed  |      Δ |        % |    \|%\| |
|----------------------------------------------------|-----------|-----------|-------:|---------:|---------:|
| instsimplify.NumPHICSE                             | 0         | 23779     |  23779 |    0.00% |    0.00% |
| asm-printer.EmittedInsts                           | 7942328   | 7942392   |     64 |    0.00% |    0.00% |
| assembler.ObjectBytes                              | 273069192 | 273084704 |  15512 |    0.01% |    0.01% |
| correlated-value-propagation.NumPhis               | 18412     | 18539     |    127 |    0.69% |    0.69% |
| early-cse.NumCSE                                   | 2183283   | 2183227   |    -56 |    0.00% |    0.00% |
| early-cse.NumSimplify                              | 550105    | 542090    |  -8015 |   -1.46% |    1.46% |
| instcombine.NumAggregateReconstructionsSimplified  | 73        | 4506      |   4433 | 6072.60% | 6072.60% |
| instcombine.NumCombined                            | 3640264   | 3664769   |  24505 |    0.67% |    0.67% |
| instcombine.NumDeadInst                            | 1778193   | 1783183   |   4990 |    0.28% |    0.28% |
| instcount.NumCallInst                              | 1758401   | 1758799   |    398 |    0.02% |    0.02% |
| instcount.NumInvokeInst                            | 59478     | 59502     |     24 |    0.04% |    0.04% |
| instcount.NumPHIInst                               | 330557    | 330533    |    -24 |   -0.01% |    0.01% |
| instcount.TotalInsts                               | 8831952   | 8832286   |    334 |    0.00% |    0.00% |
| simplifycfg.NumInvokes                             | 4300      | 4410      |    110 |    2.56% |    2.56% |
| simplifycfg.NumSimpl                               | 1019808   | 999607    | -20201 |   -1.98% |    1.98% |
```
I.e. it fires ~24k times, causes +110 (+2.56%) more `invoke` -> `call`
transforms, and counter-intuitively results in *more* instructions total.

That being said, the PHI count doesn't decrease that much,
and looking at some examples, it seems at least some of them
were previously getting PHI-CSE'd in SimplifyCFG, of all places.

I'm adjusting `Instruction::isIdenticalToWhenDefined()` at the same time.
As a comment in `InstCombinerImpl::visitPHINode()` already stated,
there are no guarantees on the ordering of the operands of a PHI node,
so if we just naively compare them, we may false-negatively say that
the nodes are not equal when the only difference is operand order,
which is especially important since the fold is in InstSimplify,
so we can't rely on InstCombine sorting them beforehand.

Fixing this for the general case is costly (geomean +0.02%),
and does not appear to catch anything in test-suite, but for
the same-BB case, it's trivial, so let's fix at least that.

As per http://llvm-compile-time-tracker.com/compare.php?from=04879086b44348cad600a0a1ccbe1f7776cc3cf9&to=82bdedb888b945df1e9f130dd3ac4dd3c96e2925&stat=instructions
this appears to cause geomean +0.03% compile time increase (regression),
but geomean -0.01%..-0.04% code size decrease (improvement).
2020-08-27 18:47:04 +03:00
Florian Hahn 419c6948df [SimplifyLibCalls] Remove over-eager early return in strlen optzns.
Currently we bail out early for strlen calls with a GEP operand if none
of the GEP-specific optimizations fire. But there could be later
optimizations that still apply, which we currently miss out on.

An example is that we do not apply the following optimization:
   strlen(x) == 0 --> *x == 0

Unless I am missing something, there seems to be no reason for bailing
out early there.

Fixes PR47149.

Reviewed By: lebedev.ri, xbolva00

Differential Revision: https://reviews.llvm.org/D85886
2020-08-27 15:19:45 +01:00
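A sketch of an optimization that previously got skipped (hypothetical test):
```
declare i64 @strlen(i8*)

define i1 @strlen_of_gep_eq_zero(i8* %s) {
  %g = getelementptr inbounds i8, i8* %s, i64 2
  %len = call i64 @strlen(i8* %g)
  %r = icmp eq i64 %len, 0
  ; expected: load the first byte of %g and compare it with 0, even
  ; though no GEP-specific strlen optimization applied
  ret i1 %r
}
```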
Roman Lebedev 95848ea101
[Value][InstCombine] Fix one-use checks in PHI-of-op -> Op-of-PHI[s] transforms to be one-user checks
As the FIXME said, they really should be checking for a single user,
not a single use, so let's do that. It is not *that* unusual to have
the same value as an incoming value in a PHI node, not unlike
how a PHI may have the same incoming basic block more than once.

There isn't a nice way to do that: Value::users() isn't uniquified,
and Value only tracks its uses, not its users, so the check is
potentially costly, since it may indeed involve
traversing the entire use list of a value.
2020-08-26 20:20:41 +03:00
Roman Lebedev 8bfe46dce2
[NFC][InstCombine] Add tests with PHI-of-{insert,extract}value with multiple uses
It is fine if the operation has multiple uses, as long as they are all
in this very PHI node.
2020-08-26 20:20:41 +03:00
Roman Lebedev 1f90d45b9e
[InstCombine] PHI-of-extractvalues -> extractvalue-of-PHI, aka invokes are bad
While since D86306 we do its sibling fold for `insertvalue`,
we should also do this for `extractvalue`s.

And unlike that one, the results here are, quite honestly, shocking,
as can be observed here on vanilla llvm test-suite + RawSpeed results:

```
| statistic name                                     | baseline  | proposed  |       Δ |       % |    |%| |
|----------------------------------------------------|-----------|-----------|--------:|--------:|-------:|
| asm-printer.EmittedInsts                           | 7945095   | 7942507   |   -2588 |  -0.03% |  0.03% |
| assembler.ObjectBytes                              | 273209920 | 273069800 | -140120 |  -0.05% |  0.05% |
| early-cse.NumCSE                                   | 2183363   | 2183398   |      35 |   0.00% |  0.00% |
| early-cse.NumSimplify                              | 541847    | 550017    |    8170 |   1.51% |  1.51% |
| instcombine.NumAggregateReconstructionsSimplified  | 2139      | 108       |   -2031 | -94.95% | 94.95% |
| instcombine.NumCombined                            | 3601364   | 3635448   |   34084 |   0.95% |  0.95% |
| instcombine.NumConstProp                           | 27153     | 27157     |       4 |   0.01% |  0.01% |
| instcombine.NumDeadInst                            | 1694521   | 1765022   |   70501 |   4.16% |  4.16% |
| instcombine.NumPHIsOfExtractValues                 | 0         | 37546     |   37546 |   0.00% |  0.00% |
| instcombine.NumSunkInst                            | 63158     | 63686     |     528 |   0.84% |  0.84% |
| instcount.NumBrInst                                | 874304    | 871857    |   -2447 |  -0.28% |  0.28% |
| instcount.NumCallInst                              | 1757657   | 1758402   |     745 |   0.04% |  0.04% |
| instcount.NumExtractValueInst                      | 45623     | 11483     |  -34140 | -74.83% | 74.83% |
| instcount.NumInsertValueInst                       | 4983      | 580       |   -4403 | -88.36% | 88.36% |
| instcount.NumInvokeInst                            | 61018     | 59478     |   -1540 |  -2.52% |  2.52% |
| instcount.NumLandingPadInst                        | 35334     | 34215     |   -1119 |  -3.17% |  3.17% |
| instcount.NumPHIInst                               | 344428    | 331116    |  -13312 |  -3.86% |  3.86% |
| instcount.NumRetInst                               | 100773    | 100772    |      -1 |   0.00% |  0.00% |
| instcount.TotalBlocks                              | 1081154   | 1077166   |   -3988 |  -0.37% |  0.37% |
| instcount.TotalFuncs                               | 101443    | 101442    |      -1 |   0.00% |  0.00% |
| instcount.TotalInsts                               | 8890201   | 8833747   |  -56454 |  -0.64% |  0.64% |
| instsimplify.NumSimplified                         | 75822     | 75707     |    -115 |  -0.15% |  0.15% |
| simplifycfg.NumHoistCommonCode                     | 24203     | 24197     |      -6 |  -0.02% |  0.02% |
| simplifycfg.NumHoistCommonInstrs                   | 48201     | 48195     |      -6 |  -0.01% |  0.01% |
| simplifycfg.NumInvokes                             | 2785      | 4298      |    1513 |  54.33% | 54.33% |
| simplifycfg.NumSimpl                               | 997332    | 1018189   |   20857 |   2.09% |  2.09% |
| simplifycfg.NumSinkCommonCode                      | 7088      | 6464      |    -624 |  -8.80% |  8.80% |
| simplifycfg.NumSinkCommonInstrs                    | 15117     | 14021     |   -1096 |  -7.25% |  7.25% |
```
... which tells us that this new fold fires a whopping 38k times,
increasing the amount of SimplifyCFG's `invoke`->`call` transforms by +54% (+1513) (again, D85787 did that last time),
decreasing total instruction count by -0.64% (-56454),
and sharply decreasing the count of `insertvalue`s (-88.36%, i.e. 9 times fewer)
and `extractvalue`s (-74.83%, i.e. four times fewer).

This causes geomean -0.01% binary size decrease
http://llvm-compile-time-tracker.com/compare.php?from=4d5ca22b8adfb6643466e4e9f48ba14bb48938bc&to=97dacca0111cb2ae678204e52a3cee00e3a69208&stat=size-text
and, ignoring `O0-g`, is a geomean -0.01%..-0.05% compile-time improvement
http://llvm-compile-time-tracker.com/compare.php?from=4d5ca22b8adfb6643466e4e9f48ba14bb48938bc&to=97dacca0111cb2ae678204e52a3cee00e3a69208&stat=instructions

The other thing this tells us is that, while this is a massive win for the `invoke`->`call` transform,
the `InstCombinerImpl::foldAggregateConstructionIntoAggregateReuse()` fold,
which is supposed to deal with such aggregate reconstructions,
now fires a lot less. There are two reasons why:
1. After this fold, as can be seen in the tests, we may (will) end up with trivially redundant PHI nodes.
   We don't CSE them in InstCombine presently, which means that EarlyCSE needs to run and then InstCombine rerun.
2. But then, EarlyCSE not only manages to fold such redundant PHIs,
   it also sees that the extract-insert chain recreates the original aggregate,
   and replaces it with the original aggregate.

The take-aways are:
1. We maybe should do the most trivial, same-BB PHI CSE in InstCombine.
2. I need to check what other patterns remain, and how they can be resolved.
   (i.e. I wonder if `foldAggregateConstructionIntoAggregateReuse()` might go away)

This is a reland of the original commit fcb51d8c24,
because originally i forgot to ensure that the base aggregate types match.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86530
2020-08-26 09:57:50 +03:00
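A sketch of the fold (hypothetical test): the extractvalues sink below a phi of the matching base aggregates.
```
define i32 @phi_of_extractvalues(i1 %c, { i32, i32 } %a, { i32, i32 } %b) {
entry:
  br i1 %c, label %t, label %f
t:
  %ea = extractvalue { i32, i32 } %a, 0
  br label %m
f:
  %eb = extractvalue { i32, i32 } %b, 0
  br label %m
m:
  ; expected: phi of %a and %b, then a single extractvalue here
  %r = phi i32 [ %ea, %t ], [ %eb, %f ]
  ret i32 %r
}
```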
Roman Lebedev 451b1bd894
[NFC][InstCombine] Add a PHI-of-insertvalues test with different base aggregate types 2020-08-26 09:57:50 +03:00
Roman Lebedev c295c6f2c0
Revert "[InstCombine] PHI-of-extractvalues -> extractvalue-of-PHI, aka invokes are bad"
This reverts commit fcb51d8c24.

As buildbots report, there's apparently a missing check to ensure
that the types of incoming values match the type of the PHI.
Let's revert for a moment.
2020-08-26 09:23:22 +03:00
Roman Lebedev fcb51d8c24
[InstCombine] PHI-of-extractvalues -> extractvalue-of-PHI, aka invokes are bad
While since D86306 we do its sibling fold for `insertvalue`,
we should also do this for `extractvalue`s.

And unlike that one, the results here are, quite honestly, shocking,
as can be observed here on vanilla llvm test-suite + RawSpeed results:

```
| statistic name                                     | baseline  | proposed  |       Δ |       % |    |%| |
|----------------------------------------------------|-----------|-----------|--------:|--------:|-------:|
| asm-printer.EmittedInsts                           | 7945095   | 7942507   |   -2588 |  -0.03% |  0.03% |
| assembler.ObjectBytes                              | 273209920 | 273069800 | -140120 |  -0.05% |  0.05% |
| early-cse.NumCSE                                   | 2183363   | 2183398   |      35 |   0.00% |  0.00% |
| early-cse.NumSimplify                              | 541847    | 550017    |    8170 |   1.51% |  1.51% |
| instcombine.NumAggregateReconstructionsSimplified  | 2139      | 108       |   -2031 | -94.95% | 94.95% |
| instcombine.NumCombined                            | 3601364   | 3635448   |   34084 |   0.95% |  0.95% |
| instcombine.NumConstProp                           | 27153     | 27157     |       4 |   0.01% |  0.01% |
| instcombine.NumDeadInst                            | 1694521   | 1765022   |   70501 |   4.16% |  4.16% |
| instcombine.NumPHIsOfExtractValues                 | 0         | 37546     |   37546 |   0.00% |  0.00% |
| instcombine.NumSunkInst                            | 63158     | 63686     |     528 |   0.84% |  0.84% |
| instcount.NumBrInst                                | 874304    | 871857    |   -2447 |  -0.28% |  0.28% |
| instcount.NumCallInst                              | 1757657   | 1758402   |     745 |   0.04% |  0.04% |
| instcount.NumExtractValueInst                      | 45623     | 11483     |  -34140 | -74.83% | 74.83% |
| instcount.NumInsertValueInst                       | 4983      | 580       |   -4403 | -88.36% | 88.36% |
| instcount.NumInvokeInst                            | 61018     | 59478     |   -1540 |  -2.52% |  2.52% |
| instcount.NumLandingPadInst                        | 35334     | 34215     |   -1119 |  -3.17% |  3.17% |
| instcount.NumPHIInst                               | 344428    | 331116    |  -13312 |  -3.86% |  3.86% |
| instcount.NumRetInst                               | 100773    | 100772    |      -1 |   0.00% |  0.00% |
| instcount.TotalBlocks                              | 1081154   | 1077166   |   -3988 |  -0.37% |  0.37% |
| instcount.TotalFuncs                               | 101443    | 101442    |      -1 |   0.00% |  0.00% |
| instcount.TotalInsts                               | 8890201   | 8833747   |  -56454 |  -0.64% |  0.64% |
| instsimplify.NumSimplified                         | 75822     | 75707     |    -115 |  -0.15% |  0.15% |
| simplifycfg.NumHoistCommonCode                     | 24203     | 24197     |      -6 |  -0.02% |  0.02% |
| simplifycfg.NumHoistCommonInstrs                   | 48201     | 48195     |      -6 |  -0.01% |  0.01% |
| simplifycfg.NumInvokes                             | 2785      | 4298      |    1513 |  54.33% | 54.33% |
| simplifycfg.NumSimpl                               | 997332    | 1018189   |   20857 |   2.09% |  2.09% |
| simplifycfg.NumSinkCommonCode                      | 7088      | 6464      |    -624 |  -8.80% |  8.80% |
| simplifycfg.NumSinkCommonInstrs                    | 15117     | 14021     |   -1096 |  -7.25% |  7.25% |
```
... which tells us that this new fold fires a whopping 38k times,
increasing the number of SimplifyCFG's `invoke`->`call` transforms by +54% (+1513) (again, D85787 did that last time),
decreasing the total instruction count by -0.64% (-56454),
and sharply decreasing the counts of `insertvalue`s (-88.36%, i.e. roughly 9x fewer)
and `extractvalue`s (-74.83%, i.e. roughly 4x fewer).

This causes geomean -0.01% binary size decrease
http://llvm-compile-time-tracker.com/compare.php?from=4d5ca22b8adfb6643466e4e9f48ba14bb48938bc&to=97dacca0111cb2ae678204e52a3cee00e3a69208&stat=size-text
and, ignoring `O0-g`, is a geomean -0.01%..-0.05% compile-time improvement
http://llvm-compile-time-tracker.com/compare.php?from=4d5ca22b8adfb6643466e4e9f48ba14bb48938bc&to=97dacca0111cb2ae678204e52a3cee00e3a69208&stat=instructions

The other thing this tells us is that, while this is a massive win for the `invoke`->`call` transform,
the `InstCombinerImpl::foldAggregateConstructionIntoAggregateReuse()` fold,
which is supposed to deal with such aggregate reconstructions,
now fires a lot less. There are two reasons why:
1. After this fold, as can be seen in the tests, we may (will) end up with trivially redundant PHI nodes.
   We don't CSE them in InstCombine presently, which means that EarlyCSE needs to run and InstCombine then rerun.
2. But then, EarlyCSE not only manages to fold such redundant PHIs,
   it also sees that the extract-insert chain recreates the original aggregate,
   and replaces the chain with that original aggregate.

The take-aways are:
1. We should maybe do the most trivial, same-BB PHI CSE in InstCombine
   (see the sketch below).
2. I need to check what other patterns remain, and how they can be resolved
   (i.e. I wonder if `foldAggregateConstructionIntoAggregateReuse()` might go away).
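
A minimal illustration of take-away 1 (names invented): two PHIs with
identical incoming values that InstCombine currently leaves for EarlyCSE:

```
end:
  %p1 = phi i32 [ %a, %left ], [ %b, %right ]
  %p2 = phi i32 [ %a, %left ], [ %b, %right ]
  ; %p2 is trivially the same as %p1, but InstCombine
  ; does not CSE same-BB PHIs today
```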

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86530
2020-08-26 09:08:24 +03:00
Sanjay Patel c4f0a0896f [InstCombine] improve demanded element analysis for vector insert-of-extract (2nd try)
The 1st attempt (rG557b890) was reverted because it caused miscompiles.
That bug is avoided here by changing the order of folds, as verified
in the new tests.

Original commit message:
InstCombine currently has odd rules for folding insert-extract chains to shuffles,
so we miss collapsing seemingly simple cases as shown in the tests here.

But poison makes this not quite as easy as we might have guessed. Alive2 tests to
show the subtle difference (similar to the regression tests):
https://alive2.llvm.org/ce/z/hp4hv3 (this is ok)
https://alive2.llvm.org/ce/z/ehEWaN (poison leakage)

SLP tends to create these patterns (as shown in the SLP tests), and this could
help with solving PR16739.
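
A hand-wavy sketch of the kind of collapse this enables (types and indexes
invented for illustration; the poison caveats above are what make the real
fold trickier):

```
; insert an element extracted from another vector...
%e = extractelement <4 x i32> %x, i32 2
%r = insertelement <4 x i32> %y, i32 %e, i32 0

; ...can become a single shuffle (%x is indexes 0-3, %y is indexes 4-7)
%r = shufflevector <4 x i32> %x, <4 x i32> %y, <4 x i32> <i32 2, i32 5, i32 6, i32 7>
```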

Differential Revision: https://reviews.llvm.org/D86460
2020-08-25 11:19:36 -04:00
Sanjay Patel 11f8d4aa10 [InstCombine] add vector demanded elements tests with shuffles; NFC
The 1st draft of D86460 (reverted) would show miscompiles with these tests
because the undef element tracking went wrong and became visible in the
shuffle masks.
2020-08-25 11:19:35 -04:00
Roman Lebedev 4d5ca22b8a
[NFC][InstCombine] Tests for PHI-of-extractvalues
Much like its sibling fold PHI-of-insertvalues,
this appears to be far more worthwhile than one would expect.
2020-08-25 13:01:07 +03:00
Benjamin Kramer c6fb72de4f Revert "[InstCombine] improve demanded element analysis for vector insert-of-extract"
This reverts commit 557b890ff4. It was causing
miscompiles; a test case is on llvm-commits.
2020-08-25 11:31:31 +02:00
David Sherwood 7b64765cd1 [SVE] Fix TypeSize related warnings with IR truncates of scalable vectors
In getCastInstrCost, when the instruction is a truncate, we were relying
upon the implicit TypeSize -> uint64_t cast when asking whether a given type
has the same size as a legal integer. I've changed the code to only
ask the question if the type is fixed-length.

I have also changed InstCombinerImpl::SimplifyDemandedUseBits to bail
out for now if the type is a scalable vector.

I've added the following new tests:

  Analysis/CostModel/AArch64/sve-trunc.ll
  Transforms/InstCombine/AArch64/sve-trunc.ll

for both of these fixes.
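
For reference, a sketch of the kind of scalable-vector truncate involved
(function and type choices are illustrative, not copied from the new tests):

```
define <vscale x 4 x i16> @trunc_nxv4i32(<vscale x 4 x i32> %v) {
  ; the element count is a multiple of vscale, so the total bit width
  ; is not a plain compile-time uint64_t
  %t = trunc <vscale x 4 x i32> %v to <vscale x 4 x i16>
  ret <vscale x 4 x i16> %t
}
```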

Differential revision: https://reviews.llvm.org/D86432
2020-08-25 09:17:56 +01:00
Roman Lebedev cdd339c568
[InstCombine] PHI-of-insertvalues -> insertvalue-of-PHI's
As per the statistics, this happens exceedingly rarely,
but I have seen it in exactly the situations that the
PHI-aware aggregate reconstruction would eventually
have handled, allowing the invoke -> call fold later on.

So while this might be something that the other fold
will have to learn about, I believe we should be
doing this transform in general.

Here, we are okay with adding two PHIs to get both the base aggregate
and the inserted value, as sketched below. I'm not sure it makes much
sense to restrict it to a single PHI (to just the inserted value?),
because originally we'd be receiving the final aggregate already anyway.
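
A rough sketch of the transform (block and value names invented):

```
; before: a PHI over insertvalue's with the same index
left:
  %i0 = insertvalue { i32, i32 } %agg.l, i32 %v.l, 0
  br label %end
right:
  %i1 = insertvalue { i32, i32 } %agg.r, i32 %v.r, 0
  br label %end
end:
  %r = phi { i32, i32 } [ %i0, %left ], [ %i1, %right ]

; after: two PHIs (base aggregate and inserted value) and one insertvalue
end:
  %agg = phi { i32, i32 } [ %agg.l, %left ], [ %agg.r, %right ]
  %val = phi i32 [ %v.l, %left ], [ %v.r, %right ]
  %r = insertvalue { i32, i32 } %agg, i32 %val, 0
```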

llvm test-suite + RawSpeed:
```
| statistic name                             | baseline  | proposed  |    Δ |      % | \|%\| |
|--------------------------------------------|-----------|-----------|-----:|-------:|------:|
| instcombine.NumPHIsOfInsertValues          | 0         | 12        |  12  |  0.00% | 0.00% |
| asm-printer.EmittedInsts                   | 8926643   | 8926595   | -48  |  0.00% | 0.00% |
| instcombine.NumCombined                    | 3846614   | 3846640   |  26  |  0.00% | 0.00% |
| instcombine.NumConstProp                   | 24302     | 24293     |  -9  | -0.04% | 0.04% |
| instcombine.NumDeadInst                    | 1620140   | 1620112   | -28  |  0.00% | 0.00% |
| instcount.NumBrInst                        | 898466    | 898464    |  -2  |  0.00% | 0.00% |
| instcount.NumCallInst                      | 1760819   | 1760875   |  56  |  0.00% | 0.00% |
| instcount.NumExtractValueInst              | 45659     | 45649     | -10  | -0.02% | 0.02% |
| instcount.NumInsertValueInst               | 4991      | 4981      | -10  | -0.20% | 0.20% |
| instcount.NumIntToPtrInst                  | 27084     | 27087     |   3  |  0.01% | 0.01% |
| instcount.NumPHIInst                       | 371435    | 371429    |  -6  |  0.00% | 0.00% |
| instcount.NumStoreInst                     | 906011    | 906019    |   8  |  0.00% | 0.00% |
| instcount.TotalBlocks                      | 1105520   | 1105518   |  -2  |  0.00% | 0.00% |
| instcount.TotalInsts                       | 9795737   | 9795776   |  39  |  0.00% | 0.00% |
| simplifycfg.NumInvokes                     | 2784      | 2786      |   2  |  0.07% | 0.07% |
| simplifycfg.NumSimpl                       | 1001840   | 1001850   |  10  |  0.00% | 0.00% |
| simplifycfg.NumSinkCommonInstrs            | 15174     | 15170     |  -4  | -0.03% | 0.03% |
```

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86306
2020-08-25 10:38:11 +03:00
Sanjay Patel 557b890ff4 [InstCombine] improve demanded element analysis for vector insert-of-extract
InstCombine currently has odd rules for folding insert-extract chains to shuffles,
so we miss collapsing seemingly simple cases as shown in the tests here.

But poison makes this not quite as easy as we might have guessed. Alive2 tests to
show the subtle difference (similar to the regression tests):
https://alive2.llvm.org/ce/z/hp4hv3 (this is ok)
https://alive2.llvm.org/ce/z/ehEWaN (poison leakage)

SLP tends to create these patterns (as shown in the SLP tests), and this could
help with solving PR16739.

Differential Revision: https://reviews.llvm.org/D86460
2020-08-24 17:00:16 -04:00
Sanjay Patel 49dd1ea727 [InstCombine] add tests for insert+extract demanded elements; NFC 2020-08-24 17:00:16 -04:00
Roman Lebedev 4dec8ec33d
[NFC][InstCombine] Multi-level aggregate test for phi-of-insertvalue pattern
See https://reviews.llvm.org/D86306
2020-08-24 22:39:34 +03:00
Sanjay Patel 6a44edb8da [InstCombine] fold abs of select with negated op (PR39474)
Similar to the existing transform - peek through a select
to match a value and its negation.

https://alive2.llvm.org/ce/z/MXi5KG

  define i8 @src(i1 %b, i8 %x) {
  %0:
    %neg = sub i8 0, %x
    %sel = select i1 %b, i8 %x, i8 %neg
    %abs = abs i8 %sel, 1
    ret i8 %abs
  }
  =>
  define i8 @tgt(i1 %b, i8 %x) {
  %0:
    %abs = abs i8 %x, 1
    ret i8 %abs
  }
  Transformation seems to be correct!
2020-08-24 07:37:55 -04:00
Sanjay Patel 52df6cac20 [InstCombine] add tests for abs of select with negated op; NFC (PR39474) 2020-08-24 07:37:54 -04:00
Roman Lebedev f6decfa36d
[InstCombine] Negator: freeze is freely negatable if its operand is negatable
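A hedged sketch of what this allows (values invented; negating a `sub` is
the classic Negator case):

```
; negating a freeze of a negatable value...
%s = sub i8 %x, %y
%f = freeze i8 %s
%n = sub i8 0, %f

; ...can instead negate the operand and freeze the result
%s.neg = sub i8 %y, %x
%n = freeze i8 %s.neg
```
2020-08-23 23:28:19 +03:00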
Roman Lebedev b899d13a0f
[NFC][InstCombine] Add tests for negation of freeze 2020-08-23 23:28:19 +03:00
Sanjay Patel ec06b38130 [InstCombine] canonicalize 'not' ops before logical shifts
This reverses the existing transform that would uniformly canonicalize any 'xor' after any shift. In the case of logical shifts, that turns a 'not' into an arbitrary 'xor' with a constant, and that's probably not as good for analysis, SCEV, or codegen.

The SCEV motivating case is discussed in:
http://bugs.llvm.org/PR47136

There's an analysis motivating case at:
http://bugs.llvm.org/PR38781

I did draft a patch that would do the same for 'ashr', but that's questionable because it just swaps the position of a 'not' and uncovers at least 2 missing folds that we would probably need to address as preliminary steps.

Alive proofs:
https://rise4fun.com/Alive/BBV

  Name: shift right of 'not'
  Pre: C2 == (-1 u>> C1)
  %a = lshr i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = lshr i8 %n, C1

  Name: shift left of 'not'
  Pre: C2 == (-1 << C1)
  %a = shl i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = shl i8 %n, C1

  Name: ashr of 'not'
  %a = ashr i8 %x, C1
  %r = xor i8 %a, -1
  =>
  %n = xor i8 %x, -1
  %r = ashr i8 %n, C1

Differential Revision: https://reviews.llvm.org/D86243
2020-08-22 09:38:13 -04:00
Serguei Katkov 9e362bb0eb [InstCombine] Remove unused entries in gc-live bundle of statepoint
If some of the gc live values are not used by any gc.relocate, we can remove them
from the gc-live bundle of the statepoint instruction.

The CL also removes duplicated values in the gc-live bundle.

Reviewers: reames, dantrushin
Reviewed By: dantrushin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D85959
2020-08-22 01:36:22 +07:00
Roman Lebedev ea2e682cc2
[NFC][InstCombine] Tests for PHI-of-insertvalues
Currently we don't do anything about these,
either in InstCombine or in SimplifyCFG's sinking.
They happen exceedingly rarely, but I've seen them in the cases where
the PHI-aware aggregate reconstruction would have fired if not for them.
2020-08-20 20:16:31 +03:00
Sebastian Neubauer b8d1994778 [AMDGPU] Add A16/G16 to InstCombine
When sampling from images with coordinates that only have 16-bit
accuracy, convert the image intrinsic call to use a16 or g16.
This only happens if the target hardware supports it; a sketch follows below.

An alternative would be to always apply this combine, independent of
the target hardware, and extend 16-bit arguments to 32-bit arguments
during legalization. To me, this sounds like an unnecessary roundtrip
that could prevent some further InstCombine optimizations.
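
A heavily hedged sketch of the a16 case (the exact intrinsic mangling and
operand list are an assumption from memory, not copied from the patch):

```
; coordinates that are merely extended from half...
%s32 = fpext half %s to float
%t32 = fpext half %t to float
%v = call <4 x float> @llvm.amdgcn.image.sample.2d.v4f32.f32(
        i32 15, float %s32, float %t32,
        <8 x i32> %rsrc, <4 x i32> %samp, i1 false, i32 0, i32 0)

; ...can be sampled with 16-bit coordinates when the hardware has a16
%v = call <4 x float> @llvm.amdgcn.image.sample.2d.v4f32.f16(
        i32 15, half %s, half %t,
        <8 x i32> %rsrc, <4 x i32> %samp, i1 false, i32 0, i32 0)
```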

Differential Revision: https://reviews.llvm.org/D85887
2020-08-20 10:51:49 +02:00
Sanjay Patel 455688e293 [InstCombine] add tests for shifted xor; NFC 2020-08-19 13:48:35 -04:00
Sanjay Patel 5e31dd2650 [InstCombine] avoid 'tmp' names in tests; NFC
They may conflict with update_test_checks.py regexes.
2020-08-19 12:08:31 -04:00
Sanjay Patel 92bcd240f2 [InstCombine] update stale comments in test files; NFC
I missed updating these with:
rG23bd33c6acc4
2020-08-19 07:42:06 -04:00
Meera Nakrani 545de56f87 [ARM] Enabled VMLAV and Add instructions to use VMLAVA
Used InstCombine to enable a VMLAV followed by an Add to generate a single VMLAVA instruction instead, with tests. 2020-08-19 08:36:49 +00:00
2020-08-19 08:36:49 +00:00
Roman Lebedev 78bd4231bf
[InstCombine] PHI-aware aggregate reconstruction: properly handle duplicate predecessors
While it may seem like we can just "deduplicate" the case where
some basic block happens to be a predecessor more than once,
which happens e.g. for switches, that is not the correct thing to do.
We must actually add a PHI operand for each predecessor.
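
A minimal illustration (names invented): a switch listing the same successor
for two case values makes that block a predecessor twice, so a reconstructed
PHI there needs one incoming entry per edge:

```
entry:
  switch i32 %v, label %other [
    i32 0, label %merge
    i32 1, label %merge
  ]
other:
  br label %merge
merge:
  ; two entries for %entry, one per switch edge
  %p = phi i32 [ %a, %entry ], [ %a, %entry ], [ %b, %other ]
```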

This was initially reported to me by David Major
as a clang crash during gecko build for android.
2020-08-19 01:00:42 +03:00
Sanjay Patel 139da9c4d7 [InstCombine] fold fabs of select with negated operand
This is the FP example shown in:
https://bugs.llvm.org/PR39474
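
A quick sketch of the fold (value names invented):

```
; fabs of a select between a value and its negation...
%neg = fneg float %x
%sel = select i1 %b, float %x, float %neg
%fab = call float @llvm.fabs.f32(float %sel)

; ...is just fabs of the value
%fab = call float @llvm.fabs.f32(float %x)
```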
2020-08-18 09:23:07 -04:00
Sanjay Patel e0aa335334 [InstCombine] add tests for fneg+fabs; NFC 2020-08-18 09:23:07 -04:00
Roman Lebedev 03127f795b
[InstCombine] PHI-aware aggregate reconstruction: correctly detect "use" basic block
While the original implementation added in D85787 / ae7f08812e
is not incorrect, it is known to be suboptimal.

In particular, while it is not incorrect to use the basic block
in which the original `insertvalue` instruction is located
as the merge point, that is not necessarily optimal,
as `@test6` shows.

We should look at all the AggElts, and, if they are all defined
in the same basic block, then that is the basic block we should use.

On RawSpeed library, this catches +4% (+50) more cases.
On vanilla LLVM test-suits, this catches +12% (+92) more cases.
2020-08-18 00:45:18 +03:00