Commit Graph

251 Commits

Author SHA1 Message Date
Sanjay Patel 0fcff69bcb [InstCombine] try to narrow shifted bswap-of-zext (2nd try)
The first attempt at this missed a validity check.
This version includes a test of the narrow source
type for modulo-16-bits.

Original commit message:

This is the IR counterpart to 370ebc9d9a
which provided a bswap narrowing fix for issue #53867.

Here we can be more general (although I'm not sure yet
what would happen for illegal types in codegen - too
rare to worry about?):
https://alive2.llvm.org/ce/z/3-CPfo

This will be more effective if we have moved the shift
after the bswap as proposed in D122010, but it is
independent of that patch.

Differential Revision: https://reviews.llvm.org/D122166
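
For illustration, a minimal src/tgt sketch of the narrowing (hand-written
here, not taken from the patch; this shows only the simple i16-in-i32
instance of the general modulo-16-bit pattern):

  declare i32 @llvm.bswap.i32(i32)
  declare i16 @llvm.bswap.i16(i16)

  define i32 @src(i16 %x) {
    ; bswap the zero-extended value, then shift the swapped
    ; bytes back down from the top half
    %z = zext i16 %x to i32
    %b = call i32 @llvm.bswap.i32(i32 %z)
    %s = lshr i32 %b, 16
    ret i32 %s
  }

  define i32 @tgt(i16 %x) {
    ; bswap in the narrow type and extend afterwards
    %n = call i16 @llvm.bswap.i16(i16 %x)
    %e = zext i16 %n to i32
    ret i32 %e
  }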
2022-03-23 11:28:37 -04:00
Nathan Chancellor 4e0008dcbe
Revert "[InstCombine] try to narrow shifted bswap-of-zext"
This reverts commit 9e9bda2e8f.

This causes a backend error when building the Linux kernel for arm64.
See https://reviews.llvm.org/D122166 for a simplified reproducer.
2022-03-22 17:32:33 -07:00
Sanjay Patel 9e9bda2e8f [InstCombine] try to narrow shifted bswap-of-zext
This is the IR counterpart to 370ebc9d9a
which provided a bswap narrowing fix for issue #53867.

Here we can be more general (although I'm not sure yet
what would happen for illegal types in codegen - too
rare to worry about?):
https://alive2.llvm.org/ce/z/3-CPfo

This will be more effective if we have moved the shift
after the bswap as proposed in D122010, but it is
independent of that patch.

Differential Revision: https://reviews.llvm.org/D122166
2022-03-22 08:22:30 -04:00
serge-sans-paille 59630917d6 Cleanup includes: Transform/Scalar
Estimated impact on preprocessor output lines:
before: 1062981579
after:  1062494547

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D120817
2022-03-03 07:56:34 +01:00
Sanjay Patel 39e602b6c4 [InstCombine] try to fold binop with phi operands
This is an alternate version of D115914 that handles/tests all binary opcodes.

I suspect that we don't see these patterns too often because -simplifycfg
would convert the minimal cases into selects rather than leave them in phi form
(note: instcombine has logic holes for combining the select patterns too though,
so that's another potential patch).

We only create a new binop in a predecessor that unconditionally branches to
the final block.
https://alive2.llvm.org/ce/z/C57M2F
https://alive2.llvm.org/ce/z/WHwAoU (not safe to speculate an sdiv for example)
https://alive2.llvm.org/ce/z/rdVUvW (but it is ok on this path)

Differential Revision: https://reviews.llvm.org/D117110
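
A minimal sketch of the shape this handles (hypothetical values; the
constant-only incoming edge folds away, and the new binop is created only
in the predecessor that unconditionally branches to the final block):

  define i8 @src(i1 %b, i8 %x, i8 %y) {
  entry:
    br i1 %b, label %if, label %end
  if:
    br label %end
  end:
    %p0 = phi i8 [ 42, %entry ], [ %x, %if ]
    %p1 = phi i8 [ 13, %entry ], [ %y, %if ]
    %r = add i8 %p0, %p1
    ret i8 %r
  }

  define i8 @tgt(i1 %b, i8 %x, i8 %y) {
  entry:
    br i1 %b, label %if, label %end
  if:
    ; safe to create the add here: %if unconditionally branches to %end
    %xy = add i8 %x, %y
    br label %end
  end:
    %r = phi i8 [ 55, %entry ], [ %xy, %if ]  ; 42 + 13 = 55
    ret i8 %r
  }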
2022-01-22 15:00:06 -05:00
Sanjay Patel 2d031ec5e5 [InstCombine] add one-use check to opposite shift folds
Test comments say this might be intentional, but I don't
see any hard evidence to support it. The extra instruction
shows up as a potential regression in D117680.

One test does show a missed fold that might be recovered
with better demanded bits analysis.
2022-01-20 13:49:23 -05:00
Sanjay Patel 0c6979b2d6 [InstCombine] fold opposite shifts around an add
((X << C) + Y) >>u C --> (X + (Y >>u C)) & (-1 >>u C)

https://alive2.llvm.org/ce/z/DY9DPg

This replaces a shift with an 'and', and in the case
where the add has a constant operand, it eliminates
both shifts.

As noted in the TODO comment, we already have this fold when
the shifts are in the opposite order (and that code handles
bitwise logic ops too).

Fixes #52851
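
A concrete instance for C == 3 on i8 (hand-checked against the statement
above, not taken from the patch):

  define i8 @src(i8 %x, i8 %y) {
    %shl = shl i8 %x, 3
    %add = add i8 %shl, %y
    %r = lshr i8 %add, 3
    ret i8 %r
  }

  define i8 @tgt(i8 %x, i8 %y) {
    ; -1 >>u 3 == 31 on i8
    %yshr = lshr i8 %y, 3
    %add = add i8 %x, %yshr
    %r = and i8 %add, 31
    ret i8 %r
  }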
2021-12-30 12:01:06 -05:00
Sanjay Patel fd9cd3408b Revert "[InstCombine] fold opposite shifts around an add"
This reverts commit 2e3e0a5c28.
Some unintended diffs snuck into this patch.
2021-12-30 11:54:55 -05:00
Sanjay Patel 2e3e0a5c28 [InstCombine] fold opposite shifts around an add
((X << C) + Y) >>u C --> (X + (Y >>u C)) & (-1 >>u C)

https://alive2.llvm.org/ce/z/DY9DPg

This replaces a shift with an 'and', and in the case
where the add has a constant operand, it eliminates
both shifts.

As noted in the TODO comment, we already have this fold when
the shifts are in the opposite order (and that code handles
bitwise logic ops too).

Fixes #52851
2021-12-30 11:52:29 -05:00
Stanislav Mekhanoshin c74f2e5b27 [InstCombine] Use SpecificBinaryOp_match in two more places
Differential Revision: https://reviews.llvm.org/D114038
2021-11-17 01:16:06 -08:00
Anton Afanasyev 7b07c01351 [InstCombine] Support arbitrary const shift amount for `lshr (sext i1 ...)`
Add the lshr (sext i1 X to iN), C --> select (X, -1 >> C, 0) case. This expands
the existing C == N-1 case to an arbitrary C.

Fixes PR52078.

Reviewed By: spatel, RKSimon, lebedev.ri

Differential Revision: https://reviews.llvm.org/D111330
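
For example, with N == 8 and C == 3 (an illustrative instance, not from
the patch; -1 >> 3 is 31 on i8):

  define i8 @src(i1 %x) {
    %s = sext i1 %x to i8
    %r = lshr i8 %s, 3
    ret i8 %r
  }

  define i8 @tgt(i1 %x) {
    %r = select i1 %x, i8 31, i8 0
    ret i8 %r
  }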
2021-10-15 13:39:13 +03:00
Sanjay Patel 3fabd98e5b [InstCombine] fold (trunc (X>>C1)) << C to shift+mask directly
This is no-externally-visible-functional-difference-intended.
That is, the test diffs show identical instructions other than
name changes (those are included specifically to verify the logic).

The existing transforms created extra instructions and relied
on subsequent folds to get to the final result, but that could
conflict with other transforms like the proposed D110170 (and
caused that patch to be reverted twice so far because of infinite
combine loops).
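
One hand-checked instance of the direct shift+mask form for lshr with
C1 == 4, C == 2 (an illustration; the patch handles the general case):

  define i8 @src(i32 %x) {
    %s = lshr i32 %x, 4
    %t = trunc i32 %s to i8
    %r = shl i8 %t, 2
    ret i8 %r
  }

  define i8 @tgt(i32 %x) {
    ; shift by C1 - C and mask with -1 << C (0xFC on i8)
    %s = lshr i32 %x, 2
    %t = trunc i32 %s to i8
    %r = and i8 %t, -4
    ret i8 %r
  }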
2021-10-01 14:22:44 -04:00
Sanjay Patel 3fcb00df5d [InstCombine] restrict shift-trunc-shift fold to opposite direction shifts
This is NFCI because the pattern with 2 left-shifts should get
folded independently by smaller folds.

The motivation is to refine this block to avoid infinite loops
seen with D110170.
2021-09-30 15:06:13 -04:00
Sanjay Patel ea56dcb730 [InstCombine] fix miscompile from dropRedundantMaskingOfLeftShiftInput()
The test is from https://llvm.org/PR51351.

There are 2 related logic bugs from over-generalizing "lshr" to "any shr",
but I'm not sure how to expose the difference for "MaskC" because instsimplify
already folds ashr of -1.

I'll extend instsimplify to catch the MaskD pattern as a follow-up, but this
patch should be enough to avoid the miscompile.
2021-09-29 11:43:18 -04:00
Sanjay Patel 98fde3489a [InstCombine] reduce redundant code for shl-binop folds
This is NFCI (no-functional-change-intended), but there
are benign diffs possible with commutable ops as seen in
the test diffs.

The transforms were repeated for the commutative opcodes,
but that should not be necessary if we canonicalize the
patterns that we're matching. If both operands of the
binop match, that should get folded eventually.

The transform that starts with a mask op seems to
over-constrain the use checks, so that could be a
potential enhancement.
2021-09-28 17:06:45 -04:00
Sanjay Patel fdba1dccbe [InstCombine] reduce code for shl-of-sub transform; NFC 2021-09-27 14:56:01 -04:00
Sanjay Patel 623f93ed1c [InstCombine] add use check to shl transform
This bug was introduced with the refactoring in:
9075edc89b
...but there were no tests to detect it.
2021-09-27 14:10:26 -04:00
Kazu Hirata 59540b29f8 [InstCombine] Fix an "unused variable" warning 2021-09-27 09:49:32 -07:00
Sanjay Patel 9075edc89b [InstCombine] move shl-only folds out from under commonShiftTransforms(); NFCI
This is no-functional-change-intended, but it hopefully makes things
slightly clearer and more efficient to have transforms that require
'shl' be called only from visitShl(). Further cleanup is possible.
2021-09-27 12:09:47 -04:00
Sanjay Patel 21429cf43a [InstCombine] generalize fold for (trunc (X u>> C1)) u>> C
This is another step towards trying to re-apply D110170
by eliminating conflicting transforms that cause infinite loops.
a47c8e40c7 was a previous patch in this direction.

The diffs here are mostly cosmetic, but intentional:
1. The existing code that would handle this pattern in FoldShiftByConstant()
   is limited to 'shl' only now. The formatting change to IsLeftShift shows
   that we could move several transforms into visitShl() directly for
   efficiency because they are not common shift transforms.

2. The tests are regenerated to show new instruction names to prove that
   we are getting (almost) identical logic results.

3. The one case where we differ ("trunc_sandwich_small_shift1") shows that
   we now use a narrow 'and' instruction. Previously, we relied on another
   transform to do that, but it is limited to legal types. That seems to
   be a legacy constraint from when IR analysis and codegen were less robust.

https://alive2.llvm.org/ce/z/JxyGA4

  declare void @llvm.assume(i1)

  define i8 @src(i32 %x, i32 %c0, i8 %c1) {
    ; The sum of the shifts must not overflow the source width.
    %z1 = zext i8 %c1 to i32
    %sum = add i32 %c0, %z1
    %ov = icmp ult i32 %sum, 32
    call void @llvm.assume(i1 %ov)

    %sh1 = lshr i32 %x, %c0
    %tr = trunc i32 %sh1 to i8
    %sh2 = lshr i8 %tr, %c1
    ret i8 %sh2
  }

  define i8 @tgt(i32 %x, i32 %c0, i8 %c1) {
    %z1 = zext i8 %c1 to i32
    %sum = add i32 %c0, %z1
    %maskc = lshr i8 -1, %c1

    %s = lshr i32 %x, %sum
    %t = trunc i32 %s to i8
    %a = and i8 %t, %maskc
    ret i8 %a
  }
2021-09-27 10:57:31 -04:00
Sanjay Patel 025a805d7c [InstCombine] match variable names and code comments; NFC
Similar to:
29c09c7

Planned follow-up is to add a transform here to allow removing
a common shift fold that is conflicting with D110170.
2021-09-27 10:57:31 -04:00
Sanjay Patel a47c8e40c7 [InstCombine] fold lshr(trunc(lshr X, C1)) C2
Only the multi-use cases are changing here because there's
another fold that catches the simpler patterns.

But that other fold is the source of infinite loops when we
try to add D110170, so removing that is planned as a follow-up.

Attempt to show the general proof in Alive2:
https://alive2.llvm.org/ce/z/Ns1uS2

Note that the overshift fold-to-zero tests are not
currently handled by instsimplify. If they were, we
could assert that the shift amount sum is less than
the source bitwidth.
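
A concrete instance with C1 == 20, C2 == 4 (a hand-checked illustration:
the shift amounts combine and the result is masked):

  define i8 @src(i32 %x) {
    %sh1 = lshr i32 %x, 20
    %tr = trunc i32 %sh1 to i8
    %sh2 = lshr i8 %tr, 4
    ret i8 %sh2
  }

  define i8 @tgt(i32 %x) {
    ; shift by C1 + C2 and mask with -1 u>> C2 (15 on i8)
    %s = lshr i32 %x, 24
    %t = trunc i32 %s to i8
    %a = and i8 %t, 15
    ret i8 %a
  }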
2021-09-24 15:44:07 -04:00
Sanjay Patel 29c09c7653 [InstCombine] match variable names and code comments; NFC 2021-09-24 15:44:07 -04:00
Sanjay Patel 1cd6b44f26 [InstCombine] add one-use check to shift-shift transform
We don't want to create extra instructions, and this
could infinite loop with the proposed transform in D110170.
2021-09-22 16:31:12 -04:00
Chris Lattner 735f46715d [APInt] Normalize naming on keep constructors / predicate methods.
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`.  This achieves two things:

1) This starts standardizing predicates across the LLVM codebase,
   following (in this case) ConstantInt.  The word "Value" doesn't
   convey anything of merit, and is missing from some of the other methods.

2) Calling an integer "null" doesn't make any sense.  The original sin
   here is mine and I've regretted it for years.  This moves us to calling
   it "zero" instead, which is correct!

APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go.  As such, this
doesn't actually delete any entrypoints, it "soft deprecates" them with a
comment.

Included in this patch are changes to a bunch of the codebase, but there are
more.  We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.

Differential Revision: https://reviews.llvm.org/D109483
2021-09-09 09:50:24 -07:00
Sanjay Patel 0d83e72034 [InstCombine] fix infinite loop from shift transform
I'm not sure if there is a better way or another bug
still here, but this is enough to avoid the loop from:
https://llvm.org/PR51657

The test requires multiple blocks and datalayout to
trigger the problem path.
2021-09-06 11:13:39 -04:00
Sanjay Patel c85f450619 [InstCombine] refactor to reduce indent; NFC
This transform should be updated to use better
variable names and code comments. It could
also create the shift-of-shift directly instead
of relying on another combine for that.
2021-09-06 11:13:39 -04:00
Sanjay Patel fbb78668f2 [InstCombine] fix one-use condition for shift transform
This transform is written in a confusing style,
and I suspect it is at fault for a more serious
bug noted in PR51567.

But it's been around forever, so I'm making the
minimal change to fix another bug - it could
increase instructions because it was not checking
uses.
2021-09-06 11:13:39 -04:00
Sanjay Patel 982a15cb3f [InstCombine] early exit to reduce indentation; NFC 2021-09-06 11:13:38 -04:00
Sanjay Patel 6c0181c00f [InstCombine] fix typos in comments; NFC 2021-08-31 12:08:36 -04:00
Sanjay Patel be0698559b [InstCombine] remove shl(neg x), y transform
This diff was accidentally committed with:
1b5a195845
2021-08-12 11:27:22 -04:00
Sanjay Patel 1b5a195845 [InstCombine] add tests for factorization of min/max intrinsics; NFC 2021-08-12 11:19:09 -04:00
Roman Lebedev e71870512f
[InstCombine] Prefer `-(x & 1)` as the low bit splatting pattern (PR51305)
Both patterns are equivalent (https://alive2.llvm.org/ce/z/jfCViF),
so we should have a preference. It seems like mask+negation is better
than two shifts.
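
The two equivalent spellings on i8, as a sketch (bit 0 is splatted to
every bit):

  define i8 @src(i8 %x) {
    %l = shl i8 %x, 7
    %r = ashr i8 %l, 7
    ret i8 %r
  }

  define i8 @tgt(i8 %x) {
    ; the preferred mask+negation form: -(x & 1)
    %a = and i8 %x, 1
    %r = sub i8 0, %a
    ret i8 %r
  }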
2021-08-07 17:25:28 +03:00
Simon Pilgrim b2f6cf1479 [InstCombine] Fold lshr/ashr(or(neg(x),x),bw-1) --> zext/sext(icmp_ne(x,0)) (PR50816)
Handle the missing fold reported in PR50816, which is a variant of the existing ashr(sub_nsw(X,Y),bw-1) --> sext(icmp_slt(X,Y)) fold.

We also handle the lshr(or(neg(x),x),bw-1) --> zext(icmp_ne(x,0)) equivalent - https://alive2.llvm.org/ce/z/SnZmSj

We still allow multiple uses of the neg(x), as this is likely to let us further simplify other uses of the neg, but not multiple uses of the or(), which would increase the instruction count.

Differential Revision: https://reviews.llvm.org/D105764
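
A sketch of the lshr variant on i8 (or(neg(x), x) has its sign bit set
iff x is nonzero):

  define i8 @src(i8 %x) {
    %n = sub i8 0, %x
    %o = or i8 %n, %x
    %r = lshr i8 %o, 7
    ret i8 %r
  }

  define i8 @tgt(i8 %x) {
    %c = icmp ne i8 %x, 0
    %r = zext i1 %c to i8
    ret i8 %r
  }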
2021-07-13 14:44:54 +01:00
Sanjay Patel 1e202e8f39 [InstCombine] fold shift-of-srem-by-2 to mask+shift
There are several potential srem-by-2 folds
because the result is known {-1,0,1}.

https://alive2.llvm.org/ce/z/LuVyeK
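
One hand-checked instance on i8 (a sketch; the in-tree fold may be spelled
differently): the sign bit of srem(x, 2) is set iff x is both negative and
odd, so extracting it needs only a shift and a mask.

  define i8 @src(i8 %x) {
    %rem = srem i8 %x, 2       ; result is -1, 0, or 1
    %r = lshr i8 %rem, 7       ; extract its sign bit
    ret i8 %r
  }

  define i8 @tgt(i8 %x) {
    %sign = lshr i8 %x, 7      ; 1 iff x is negative
    %r = and i8 %sign, %x      ; bit 0 of x is its parity
    ret i8 %r
  }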
2021-04-20 17:10:16 -04:00
Roman Lebedev 2760a808b9
[InstCombine] dropRedundantMaskingOfLeftShiftInput(): check that adding shift amounts doesn't overflow (PR49778)
This is identical to 781d077afb,
but for the other function.

For certain shift amount bit widths, we must first ensure that adding
shift amounts is safe, that the sum won't have an unsigned overflow.

Fixes https://bugs.llvm.org/show_bug.cgi?id=49778
2021-04-04 23:26:41 +03:00
Roman Lebedev dceb3e5996
[NFC][InstCombine] Extract canTryToConstantAddTwoShiftAmounts() as helper 2021-04-04 23:26:41 +03:00
Sanjay Patel 6e2053983e [InstCombine] fold lshr(mul X, SplatC), C2
This is a special-case multiply that replicates bits of
the source operand. We need this fold to avoid a regression
if we make canonicalization to `mul` more aggressive for
shl+or patterns.

I did not see a way to make Alive generalize the bit width
condition for even-number-of-bits only, but an example of
the proof is:
  Name: i32
  Pre: isPowerOf2(C1 - 1) && log2(C1) == C2 && (C2 * 2 == width(C2))
  %m = mul nuw i32 %x, C1
  %t = lshr i32 %m, C2
  =>
  %t = and i32 %x, C1 - 2

  Name: i14
  %m = mul nuw i14 %x, 129
  %t = lshr i14 %m, 7
  =>
  %t = and i14 %x, 127

https://rise4fun.com/Alive/e52
2021-02-10 15:02:31 -05:00
Roman Lebedev 485c4b552b
[InstCombine] Hoist inversion out of ashr's value operand (PR48995)
This is yet another hint that we will eventually need InstCombineInverter,
which would consistently sink inversions, but for that we'll need
to consistently hoist inversions where possible, so let's do that here.

Example of a proof: https://alive2.llvm.org/ce/z/78SbDq

See https://bugs.llvm.org/show_bug.cgi?id=48995
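
A sketch of the hoist on i8 (ashr commutes with bitwise-not, since the
sign-fill bits are complemented along with the shifted bits):

  define i8 @src(i8 %x) {
    %n = xor i8 %x, -1
    %r = ashr i8 %n, 3
    ret i8 %r
  }

  define i8 @tgt(i8 %x) {
    %s = ashr i8 %x, 3
    %r = xor i8 %s, -1
    ret i8 %r
  }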
2021-02-02 17:56:43 +03:00
Sanjay Patel 9f60b8b3d2 [InstCombine] canonicalize sign-bit-shift of difference to ext(icmp)
icmp is the preferred spelling in IR because icmp analysis is
expected to be better than any other analysis. This should
lead to more follow-on folding potential.

It's difficult to say exactly what we should do in codegen to
compensate. For example on AArch64, which of these is preferred:
	sub	w8, w0, w1
	lsr	w0, w8, #31

vs:
	cmp	w0, w1
	cset	w0, lt

If there are perf regressions, then we should deal with those in
codegen on a case-by-case basis.

A possible motivating example for better optimization is shown in:
https://llvm.org/PR43198 but that will require other transforms
before anything changes there.

Alive proof:
https://rise4fun.com/Alive/o4E

  Name: sign-bit splat
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = ashr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = sext %c

  Name: sign-bit LSB
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = lshr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = zext %c
2020-12-01 09:58:11 -05:00
Simon Pilgrim dcb3dc101d [InstCombine] visitShl - ensure inner shifts have inrange amounts
Noticed when fixing OSS Fuzz #26716
2020-10-29 15:28:15 +00:00
Roman Lebedev 0ac56e8eaa
[InstCombine] Fold `(X >>? C1) << C2` patterns to shift+bitmask (PR37872)
This essentially finalizes a revert of rL155136,
because nowadays the situation has improved: SCEV can model
all these patterns well, and we canonicalize rotate-like patterns
into funnel shift intrinsics in InstCombine.
So this should not cause any pessimization.

I've verified the canonicalize-{a,l}shr-shl-to-masking.ll transforms
with alive, which confirms that we can freely preserve exact-ness,
and no-wrap flags.

Proofs:
* base: https://rise4fun.com/Alive/gPQ
* exact-ness preservation: https://rise4fun.com/Alive/izi
* nuw preservation: https://rise4fun.com/Alive/DmD
* nsw preservation: https://rise4fun.com/Alive/SLN6N
* nuw nsw preservation: https://rise4fun.com/Alive/Qp7

Refs. https://reviews.llvm.org/D46760
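
The simplest instance, with equal shift amounts on i8 (a hand-written
illustration of the shift+bitmask form; unequal amounts keep one shift):

  define i8 @src(i8 %x) {
    %s = lshr i8 %x, 3
    %r = shl i8 %s, 3
    ret i8 %r
  }

  define i8 @tgt(i8 %x) {
    ; -1 << 3 == -8 (0xF8) on i8
    %r = and i8 %x, -8
    ret i8 %r
  }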
2020-10-27 14:42:53 +03:00
Simon Pilgrim 5df61724a1 [InstCombine] Support uniform vector splats in ((((X >> C) & CC) + Y) << C) folds.
Add support for uniform vector splats (no undefs).
2020-10-13 09:28:39 +01:00
Simon Pilgrim 4ff7136268 [InstCombine] FoldShiftByConstant - create Scalar/Vector constant with ConstantInt::get(). NFCI.
There's no need to create constant vector splats manually - this one was missed in rG24dd0cd1edd5
2020-10-12 18:39:45 +01:00
Simon Pilgrim 24dd0cd1ed [InstCombine] FoldShiftByConstant - create Scalar/Vector constant with ConstantInt::get(). NFCI.
There's no need to create constant vector splats manually.
2020-10-12 18:17:20 +01:00
Simon Pilgrim 2de368f6a7 [InstCombine] FoldShiftByConstant - merge equivalent types. NFCI.
Consistently use the original shift instruction's Type/BitWidth instead of the operands, casted values etc.
2020-10-12 18:17:19 +01:00
Simon Pilgrim 8a836daaa9 [InstCombine] Support lshr(trunc(lshr(x,c1)), c2) -> trunc(lshr(lshr(x,c1),c2)) uniform vector tests
FoldShiftByConstant is currently hardcoded for scalar/uniform outer shift amounts, so that needs to be fixed first to support non-uniform cases
2020-10-09 16:54:46 +01:00
Simon Pilgrim 1c040a3e56 [InstCombine] commonShiftTransforms - add support for pow2 nonuniform constant vectors in srem fold
Note: we already fold srem to undef if any denominator vector element is undef.
2020-10-09 15:59:33 +01:00
Simon Pilgrim 9e796d5e71 [InstCombine] foldShiftOfShiftedLogic - add support for nonuniform constant vectors 2020-10-09 14:25:12 +01:00
Simon Pilgrim 556316cf72 [InstCombine] foldShiftOfShiftedLogic - replace cast<BinaryOperator> with m_BinOp matcher. NFCI.
Allows us to drop the !isa<ConstantExpr> check.
2020-10-09 14:10:12 +01:00