Commit Graph

4494 Commits

Author SHA1 Message Date
Philip Reames b92c971099 [InstCombine] icmp eq/ne (gep inbounds P, Idx..), null -> icmp eq/ne P, null for vectors
Extend the transform introduced in https://reviews.llvm.org/D66608 to work for vector geps as well.
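
A minimal sketch of the vector form (invented operand names, not taken from the patch's tests):
```
%gep = getelementptr inbounds i8, <2 x i8*> %base, <2 x i64> %idx
%cmp = icmp eq <2 x i8*> %gep, zeroinitializer
  =>
%cmp = icmp eq <2 x i8*> %base, zeroinitializer
```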

Differential Revision: https://reviews.llvm.org/D66671

llvm-svn: 369949
2019-08-26 19:11:49 +00:00
Roman Lebedev de19f749e0 [InstCombine] matchThreeWayIntCompare(): commutativity awareness
Summary:
`matchThreeWayIntCompare()` looks for
```
   select i1 (a == b),
          i32 Equal,
          i32 (select i1 (a < b), i32 Less, i32 Greater)
```
but both of these selects/compares can be in their commuted forms,
so out of the 8 variants, only the two most basic ones are handled.
This fixes a regression introduced in D66232.

Reviewers: spatel, nikic, efriedma, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66607

llvm-svn: 369841
2019-08-24 06:49:36 +00:00
Roman Lebedev 2c75fe7f2a [InstCombine] Try to reuse constant from select in leading comparison
Summary:
If we have e.g.:
```
  %t = icmp ult i32 %x, 65536
  %r = select i1 %t, i32 %y, i32 65535
```
the constants `65535` and `65536` are suspiciously close.
We could perform a transformation to deduplicate them:
```
Name: ult
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
  =>
%t.inv = icmp ugt i32 %x, 65535
%r = select i1 %t.inv, i32 65535, i32 %y
```
https://rise4fun.com/Alive/avb

While this may seem esoteric, this should certainly be good for vectors
(less constant-pool usage) and for opt-for-size, since only one constant is needed.

But the real fun part here is that it allows further transformations;
in particular, it finishes cleaning up the `clamp` folding,
see e.g. `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`.
We start with e.g.
```
  %dont_need_to_clamp_positive = icmp sle i32 %X, 32767
  %dont_need_to_clamp_negative = icmp sge i32 %X, -32768
  %clamp_limit = select i1 %dont_need_to_clamp_positive, i32 -32768, i32 32767
  %dont_need_to_clamp = and i1 %dont_need_to_clamp_positive, %dont_need_to_clamp_negative
  %R = select i1 %dont_need_to_clamp, i32 %X, i32 %clamp_limit
```
without this patch we currently produce
```
  %1 = icmp slt i32 %X, 32768
  %2 = icmp sgt i32 %X, -32768
  %3 = select i1 %2, i32 %X, i32 -32768
  %R = select i1 %1, i32 %3, i32 32767
```
which isn't really a `clamp` - both comparisons are performed on the original value.
This patch changes it into:
```
  %1.inv = icmp sgt i32 %X, 32767
  %2 = icmp sgt i32 %X, -32768
  %3 = select i1 %2, i32 %X, i32 -32768
  %R = select i1 %1.inv, i32 32767, i32 %3
```
and then the magic happens! Some further transform finishes polishing it and we finally get:
```
  %t1 = icmp sgt i32 %X, -32768
  %t2 = select i1 %t1, i32 %X, i32 -32768
  %t3 = icmp slt i32 %t2, 32767
  %R = select i1 %t3, i32 %t2, i32 32767
```
which is beautiful and just what we want.

Proofs for `getFlippedStrictnessPredicateAndConstant()` for de-canonicalization:
https://rise4fun.com/Alive/THl
Proofs for the fold itself: https://rise4fun.com/Alive/THl

Reviewers: spatel, dmgreen, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66232

llvm-svn: 369840
2019-08-24 06:49:25 +00:00
Roman Lebedev b3eccc7f0b [InstCombine][NFC] reuse-constant-from-select-in-icmp.ll - revisit tests
llvm-svn: 369839
2019-08-24 06:49:11 +00:00
Vitaly Buka d60271a1ad NFC: Rename lifetime-asan.ll -> lifetime-sanitizer.ll
llvm-svn: 369831
2019-08-24 01:44:39 +00:00
Philip Reames 9cb059fdcc Fix a bug in just submitted rL369789
Started implementing the vector case and realized the scalar case hadn't correctly handled the GEP producing a different type than the base.  It's entertaining seeing what slips through review when we're focused on the 'hard' parts.  :(

Also adding an extra vector test, as it happened to be in my workspace and wasn't worth separating.

llvm-svn: 369795
2019-08-23 18:27:57 +00:00
Philip Reames 5b02cfa0b3 [InstCombine] icmp eq/ne (gep inbounds P, Idx..), null -> icmp eq/ne P, null
This generalizes the isGEPKnownNonNull rule from ValueTracking to apply when we do not know if the base is non-null, and thus need to replace one condition with another.

The core notion is that an inbounds GEP can only produce null if the base pointer is null and the offset is zero. If the offset is non-zero, the "inbounds" marker makes the result poison, so we're free to ignore that case. Similarly, there's no case in which a non-null base can produce a null result without generating poison.
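
As a rough sketch (names invented, not from the patch's tests), the scalar fold reads:
```
%gep = getelementptr inbounds i8, i8* %base, i64 %idx
%cmp = icmp eq i8* %gep, null
  =>
%cmp = icmp eq i8* %base, null
```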

Differential Revision: https://reviews.llvm.org/D66608

llvm-svn: 369789
2019-08-23 17:58:58 +00:00
Roman Lebedev dddc0fd9cb [NFC][InstCombine] Fixup few new tests in unrecognized_three-way-comparison.ll
llvm-svn: 369701
2019-08-22 20:34:56 +00:00
Peter Collingbourne 2452d7030b IR. Change strip* family of functions to not look through aliases.
I noticed another instance of the issue where references to aliases were
being replaced with aliasees, this time in InstCombine. In the instance that
I saw it turned out to be only a QoI issue (a symbol ended up being missing
from the symbol table due to the last reference to the alias being removed,
preventing HWASAN from symbolizing a global reference), but it could easily
have manifested as incorrect behaviour.

Since this is the third such issue encountered (previously: D65118, D65314)
it seems to be time to address this common error/QoI issue once and for all
and make the strip* family of functions not look through aliases.

Includes a test for the specific issue that I saw, but no doubt there are
other similar bugs fixed here.

As with D65118 this has been tested to make sure that the optimization isn't
load bearing. I built Clang, Chromium for Linux, Android and Windows as well
as the test-suite and there were no size regressions.

Differential Revision: https://reviews.llvm.org/D66606

llvm-svn: 369697
2019-08-22 19:56:14 +00:00
Roman Lebedev 1aeb27af22 [NFC][InstCombine] New tests: unrecognized_three-way-comparison.ll is ignorant about commutative variants part 2
llvm-svn: 369696
2019-08-22 19:53:23 +00:00
Roman Lebedev 41f89c3484 [NFC][InstCombine] New tests: unrecognized_three-way-comparison.ll is ignorant about commutative variants
D66232 "exposes" the problem.

llvm-svn: 369667
2019-08-22 16:46:16 +00:00
Philip Reames 3c4614ff10 Add a couple of extra test noticed in post-commit discussion of rL369541
llvm-svn: 369546
2019-08-21 16:57:53 +00:00
Philip Reames 764b0fd5a3 [instcombine] icmp eq/ne (sub C, Y), C -> icmp eq/ne Y, 0
Noticed while looking at PR43028.
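
A minimal sketch with a concrete constant (hypothetical example, not from the patch's tests):
```
%sub = sub i32 42, %y
%cmp = icmp eq i32 %sub, 42
  =>
%cmp = icmp eq i32 %y, 0
```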

llvm-svn: 369541
2019-08-21 15:51:57 +00:00
Sanjay Patel e728259278 [InstCombine] narrow icmp with extended operands of different widths
An intermediate extend is used to widen the narrow operand to the width of
the other (wider) operand. At that point, we have the same logic as the
existing transform that was restricted to folds of equal width zext/sext.

This mostly solves PR42700:
https://bugs.llvm.org/show_bug.cgi?id=42700
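
A rough illustration with invented widths (not taken from the patch's tests):
```
%a = zext i8 %x to i32
%b = zext i16 %y to i32
%cmp = icmp eq i32 %a, %b
  =>
%x.wide = zext i8 %x to i16
%cmp = icmp eq i16 %x.wide, %y
```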

llvm-svn: 369519
2019-08-21 11:56:08 +00:00
Sanjay Patel d5035727ad [InstCombine] add more extra use tests for icmp with extends; NFC
llvm-svn: 369447
2019-08-20 21:23:28 +00:00
Sanjay Patel 48e81e8e10 [InstCombine] add tests for mismatched cast ops for icmp; NFC
Motivating case is shown in PR42700:
https://bugs.llvm.org/show_bug.cgi?id=42700

llvm-svn: 369439
2019-08-20 20:51:50 +00:00
Sanjay Patel f99d254aae [InstCombine] simplify min/max of min/max with same operands (PR35607)
This is the original integer variant requested in:
https://bugs.llvm.org/show_bug.cgi?id=35607

As noted in the TODO and several similar TODOs around this block,
we could do this in instsimplify, but then it would cost more
because we would be trying to match min/max via ValueTracking
in 2 different places.

There are 4 commuted variants for each of smin/smax/umin/umax
that are not matched here. There are also icmp predicate variants
that are not included in the affected test file because they are
already handled by instsimplify by folding the final icmp to
true/false.

https://rise4fun.com/Alive/3KVc

  Name: smax(smax, smin)
  %c1 = icmp slt i32 %x, %y
  %c2 = icmp slt i32 %y, %x
  %min = select i1 %c1, i32 %x, i32 %y
  %max = select i1 %c2, i32 %x, i32 %y
  %c3 = icmp sgt i32 %max, %min
  %r = select i1 %c3, i32 %max, i32 %min
  =>
  %r = %max

  Name: smin(smax, smin)
  %c1 = icmp slt i32 %x, %y
  %c2 = icmp slt i32 %y, %x
  %min = select i1 %c1, i32 %x, i32 %y
  %max = select i1 %c2, i32 %x, i32 %y
  %c3 = icmp sgt i32 %max, %min
  %r = select i1 %c3, i32 %min, i32 %max
  =>
  %r = %min

  Name: umax(umax, umin)
  %c1 = icmp ult i32 %x, %y
  %c2 = icmp ult i32 %y, %x
  %min = select i1 %c1, i32 %x, i32 %y
  %max = select i1 %c2, i32 %x, i32 %y
  %c3 = icmp ult i32 %min, %max
  %r = select i1 %c3, i32 %max, i32 %min
  =>
  %r = %max

  Name: umin(umax, umin)
  %c1 = icmp ult i32 %x, %y
  %c2 = icmp ult i32 %y, %x
  %min = select i1 %c1, i32 %x, i32 %y
  %max = select i1 %c2, i32 %x, i32 %y
  %c3 = icmp ult i32 %min, %max
  %r = select i1 %c3, i32 %min, i32 %max
  =>
  %r = %min

llvm-svn: 369386
2019-08-20 13:39:17 +00:00
Sanjay Patel eb2211b352 [InstCombine] add tests for min/max with min/max of same operands; NFC
llvm-svn: 369376
2019-08-20 12:49:03 +00:00
Roman Lebedev e8f666f48d [NFC][InstCombine] Some tests for 'shift amount reassoc in bit test - trunc-of-lshr' (PR42399)
Finally, the fold i was looking forward to :)

The legality check is muddy; i doubt i've grokked the full generalization,
but it handles all the cases i care about, and can come up with:
https://rise4fun.com/Alive/26j

https://bugs.llvm.org/show_bug.cgi?id=42399

llvm-svn: 369197
2019-08-17 21:35:33 +00:00
Sanjay Patel a53ad0e157 Revert r367891 - "[InstCombine] combine mul+shl separated by zext"
This reverts commit 5dbb90bfe1.

As noted in the post-commit thread for r367891, this can create
a multiply that is lowered to a libcall that may not exist.

We need to improve the backend decomposition for integer multiply
before trying to re-land this (if it's still worthwhile after
doing the backend work).

llvm-svn: 369174
2019-08-16 23:36:28 +00:00
Roman Lebedev 515ad8fe4a [InstCombine][NFC] reuse-constant-from-select-in-icmp.ll - check branch_weights too
llvm-svn: 369166
2019-08-16 23:06:37 +00:00
Roman Lebedev 97176bd2bc [InstCombine][NFC] Revisit tests in reuse-constant-from-select-in-icmp.ll
llvm-svn: 369163
2019-08-16 22:40:06 +00:00
Sanjay Patel 39eb2324f7 [InstCombine] canonicalize a scalar-select-of-vectors to vector select
This pattern may arise more frequently with an enhancement to SLP vectorization suggested in PR42755:
https://bugs.llvm.org/show_bug.cgi?id=42755
...but we should handle this pattern to make things easier for the backend either way.

For all in-tree targets that I looked at, codegen for typical vector sizes looks better when we change
to a vector select, so this is safe to do without a cost model (in other words, as a target-independent
canonicalization).

For example, if the condition of the select is a scalar, we end up with something like this on x86:

	vpcmpgtd	%xmm0, %xmm1, %xmm0
	vpextrb	$12, %xmm0, %eax
	testb	$1, %al
	jne	LBB0_2
  ## %bb.1:
	vmovaps	%xmm3, %xmm2
  LBB0_2:
	vmovaps	%xmm2, %xmm0

Rather than the splat-condition variant:

	vpcmpgtd	%xmm0, %xmm1, %xmm0
	vpshufd	$255, %xmm0, %xmm0      ## xmm0 = xmm0[3,3,3,3]
	vblendvps	%xmm0, %xmm2, %xmm3, %xmm0

Differential Revision: https://reviews.llvm.org/D66095

llvm-svn: 369140
2019-08-16 18:51:30 +00:00
Evandro Menezes 05e9c2ac2e [InstCombine] Simplify pow(2.0, itofp(y)) to ldexp(1.0, y)
Simplify `pow(2.0, itofp(y))` to `ldexp(1.0, y)`.
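
A sketch of the rewrite, assuming the usual libm `ldexp` declaration (names invented, not from the patch's tests):
```
%e = sitofp i32 %y to double
%p = call double @llvm.pow.f64(double 2.0, double %e)
  =>
%p = call double @ldexp(double 1.0, i32 %y)
```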

Differential revision: https://reviews.llvm.org/D65979

llvm-svn: 369120
2019-08-16 15:33:41 +00:00
Roman Lebedev 16244fccfe [InstCombine] Shift amount reassociation in bittest: trunc-of-shl (PR42399)
Summary:
This is continuation of D63829 / https://bugs.llvm.org/show_bug.cgi?id=42399

I thought the naive pattern would solve my issue, but nope, it involved truncation,
thus more folds were needed. This isn't really the fold i'm interested in;
i need trunc-of-lshr, but i've decided to start with `shl` because it's simpler.

In this case, no extra legality checks are needed:
https://rise4fun.com/Alive/CAb

We should be careful about not increasing instruction count,
since we need to produce a `zext` because the `and` is done in a wider type.

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66057

llvm-svn: 369117
2019-08-16 15:10:41 +00:00
Florian Hahn 75be1a9e58 [ValueTracking] Fix recurrence detection to check both PHI operands.
Summary:
Currently we fail to compute known bits for recurrences where the
first incoming value is the start value of the recurrence.

Instead of exiting the loop when the first incoming value is not
the step of the recurrence, continue to check the second incoming
value.

The original code uses a loop to handle both cases, but incorrectly
exits instead of continuing.

Reviewers: lebedev.ri, spatel, nikic

Reviewed By: lebedev.ri

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66216

llvm-svn: 369088
2019-08-16 09:15:02 +00:00
David Bolvansky 00782a4b68 [NFC] Added tests for 'select with ctlz to cttz' fold
llvm-svn: 369032
2019-08-15 18:23:37 +00:00
Florian Hahn 1bd898989c [InstCombine] Precommit test case for D66216
llvm-svn: 368978
2019-08-15 08:42:12 +00:00
Roman Lebedev 04ddff4cbc [InstCombine][NFC] Tests for 'try to reuse constant from select in comparison'
https://rise4fun.com/Alive/THl

llvm-svn: 368886
2019-08-14 17:27:50 +00:00
David Bolvansky f94460d4b6 [SLC] Dereferenceable annonation - handle valid null pointers
Reviewers: jdoerfert, reames

Reviewed By: jdoerfert

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66161

llvm-svn: 368884
2019-08-14 17:15:20 +00:00
David Bolvansky 0e0fbae1a4 [BuildLibCalls] Noalias annotation
Summary: I think this is a better solution than annotating callsites in IC/SLC.

Reviewers: jdoerfert

Reviewed By: jdoerfert

Subscribers: MaskRay, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66217

llvm-svn: 368875
2019-08-14 16:50:06 +00:00
Roman Lebedev 2faafc6e4f [InstCombine][NFC] Autogenerate checks in adjust-for-minmax.ll
These tests are affected by a WIP patch.

llvm-svn: 368807
2019-08-14 08:12:20 +00:00
David Bolvansky 038d604f4f [SimplifyLibCalls] Add noalias from known callsites
Summary:
Should be fine for memcpy, strcpy, strncpy.


Reviewers: jdoerfert, efriedma

Reviewed By: jdoerfert

Subscribers: uenoku, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66135

llvm-svn: 368724
2019-08-13 17:18:46 +00:00
Nikita Popov 2a4f26b4c2 [ValueTracking] Improve reverse assumption inference
Use isGuaranteedToTransferExecutionToSuccessor() instead of
isSafeToSpeculativelyExecute() when seeing whether we can propagate
the information in an assume backwards in isValidAssumeForContext().
The former is more general - it also allows arbitrary loads/stores -
and is also the condition we want: if our assume is guaranteed to
execute, its condition not holding would be UB.

Original patch by arielb1.

Differential Revision: https://reviews.llvm.org/D37215

llvm-svn: 368723
2019-08-13 17:15:42 +00:00
David Bolvansky dde10cd7a9 [NFC] Revisited/updated tests
llvm-svn: 368722
2019-08-13 17:07:02 +00:00
David Bolvansky 90a30fdcc3 [SLC] Improve dereferenceable bytes annotation
llvm-svn: 368715
2019-08-13 16:44:16 +00:00
Roman Lebedev 73f702ff19 [InstCombine] Non-canonical clamp-like pattern handling
Summary:
Given a pattern like:
```
%old_cmp1 = icmp slt i32 %x, C2
%old_replacement = select i1 %old_cmp1, i32 %target_low, i32 %target_high
%old_x_offseted = add i32 %x, C1
%old_cmp0 = icmp ult i32 %old_x_offseted, C0
%r = select i1 %old_cmp0, i32 %x, i32 %old_replacement
```
it can be rewritten as more canonical pattern:
```
%new_cmp1 = icmp slt i32 %x, -C1
%new_cmp2 = icmp sge i32 %x, C0-C1
%new_clamped_low = select i1 %new_cmp1, i32 %target_low, i32 %x
%r = select i1 %new_cmp2, i32 %target_high, i32 %new_clamped_low
```
Iff `-C1 s<= C2 s<= C0-C1`
Also, the `ULT` predicate can be `UGE` instead; or `UGT` iff `C0 != -1` (+invert result).
Likewise, the `SLT` predicate can be `SGE` instead; or `SGT` iff `C2 != INT_MAX` (+invert result).

If `C1 == 0`, then all 3 instructions must be one-use; else at most either `%old_cmp1` or `%old_x_offseted` can have extra uses.
NOTE: if we could reuse `%old_cmp1` as one of the comparisons we'll have to build, this could be less limiting.

There are two icmp's, each with 3 predicate variants, giving 9 fold variants:

|     | ULT                            | UGE                             | UGT                             |
| SLT | https://rise4fun.com/Alive/yIJ | https://rise4fun.com/Alive/5BfN | https://rise4fun.com/Alive/INH  |
| SGE | https://rise4fun.com/Alive/hd8 | https://rise4fun.com/Alive/Abk  | https://rise4fun.com/Alive/PlzS |
| SGT | https://rise4fun.com/Alive/VYG | https://rise4fun.com/Alive/oMY  | https://rise4fun.com/Alive/KrzC |
{F9730206}

This fold was brought up in https://reviews.llvm.org/D65148#1603922 by @dmgreen, and is needed to unblock that patch.
This patch requires D65530.

Reviewers: spatel, nikic, xbolva00, dmgreen

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits, dmgreen

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65765

llvm-svn: 368687
2019-08-13 12:49:28 +00:00
Roman Lebedev 2635c324da [InstCombine] foldXorOfICmps(): don't give up on non-single-use ICmp's if all users are freely invertible
Summary:
This is rather unconventional...

As the comment there says, we don't have much folds for xor-of-icmps,
we try to turn them into an and-of-icmps, for which we have plenty of folds.
But if the ICmp we need to invert is not single-use - we give up.

As discussed in https://reviews.llvm.org/D65148#1603922,
we may have a non-canonical CLAMP pattern, with bit math and
select-of-threshold that we'll potentially clamp.
As can be seen in `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`,
out of all 8 variations of the pattern, only two are **not** canonicalized into
the variant with and+icmp instead of bit math.
The reason is that the ICmp we need to invert is not single-use, so we give up.

We indeed can't perform this fold at will; the general rule is that
we should not increase instruction count in InstCombine.

But we wouldn't end up increasing instruction count if we can adapt every other
user to the inverted value. This way the `not` we create **will** get folded,
and in the end the instruction count does not increase.

For that, of course, we need to look at the users of a Value,
which is again rather unconventional for InstCombine :S

Thus i'm proposing to be a little bit more insistent in `foldXorOfICmps()`.
The alternatives would be to not create that `not`, but add duplicate code to
manually invert all users; or to add some even less general combine to handle
some more specific pattern[s].

Reviewers: spatel, nikic, RKSimon, craig.topper

Reviewed By: spatel

Subscribers: hiraditya, jdoerfert, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65530

llvm-svn: 368685
2019-08-13 12:49:06 +00:00
David Bolvansky 39130314fe [SimplifyLibCalls] Add dereferenceable bytes from known callsites
Summary:
int mm(char *a, char *b) {
    return memcmp(a,b,16);
}

Currently:
define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
entry:
  %call = tail call i32 @memcmp(i8* %a, i8* %b, i64 16)
  ret i32 %call
}

After patch:
define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
entry:
  %call = tail call i32 @memcmp(i8* dereferenceable(16) %a, i8* dereferenceable(16) %b, i64 16)
  ret i32 %call
}
Reviewers: jdoerfert, efriedma

Reviewed By: jdoerfert

Subscribers: javed.absar, spatel, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66079

llvm-svn: 368657
2019-08-13 09:11:49 +00:00
Roman Lebedev 09eb71ced3 [NFC][InstCombine] Non-canonical clamp pattern: non-canonical predicate tests
We can't handle the 'uge' case because we can never get it:
there needs to be an extra use on that compare or else it will be
canonicalized, but because of that extra use we can't handle it.

The 'sge' case we can have.

llvm-svn: 368656
2019-08-13 08:14:13 +00:00
Sanjay Patel 24a9e86849 [InstCombine] add tests for scalar-select-of-vectors; NFC
llvm-svn: 368583
2019-08-12 15:21:11 +00:00
David Bolvansky 20d37fab82 [InstCombine] x / fabs(x) -> copysign(1.0, x)
Summary:
x / fabs(x) -> copysign(1.0, x)
fabs(x) / x -> copysign(1.0, x)
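
A minimal sketch, assuming fast-math flags are present (the exact FMF requirements are in the patch):
```
%f = call fast double @llvm.fabs.f64(double %x)
%r = fdiv fast double %x, %f
  =>
%r = call fast double @llvm.copysign.f64(double 1.0, double %x)
```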

Reviewers: spatel, foad, RKSimon, efriedma

Reviewed By: spatel

Subscribers: lebedev.ri, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65898

llvm-svn: 368570
2019-08-12 13:43:35 +00:00
Roman Lebedev ccdad6ef48 [InstCombine] foldShiftIntoShiftInAnotherHandOfAndInICmp(): avoid constantexpr pitfall (PR42962)
Instead of matching a value and then blindly casting it to BinaryOperator
just to get the opcode, match an instruction directly and avoid the cast.

Fixes https://bugs.llvm.org/show_bug.cgi?id=42962

llvm-svn: 368554
2019-08-12 11:28:02 +00:00
Roman Lebedev 404e978f27 [NFC][InstCombine] Tests for shift amount reassociation in bittest with truncated shl (PR42399)
trunc-of-shl:
  https://rise4fun.com/Alive/zGx
  https://rise4fun.com/Alive/sl0L
I.e. no extra legality check needed.

https://bugs.llvm.org/show_bug.cgi?id=42399

llvm-svn: 368520
2019-08-10 19:29:03 +00:00
Roman Lebedev a8d20b4467 [InstCombine] Shift amount reassociation in bittest: relax one-use check when shifting constant
If one of the values being shifted is a constant, then since the new shift
amount is known-constant, the new shift will end up being constant-folded,
so we don't need the one-use restriction.

llvm-svn: 368519
2019-08-10 19:28:54 +00:00
Roman Lebedev 64fe806c4e [InstCombine] Shift amount reassociation in bittest: drop pointless one-use restriction
That one-use restriction is not needed for correctness: we have already
ensured that one of the shifts will go away, so we know we won't increase
the instruction count.

llvm-svn: 368518
2019-08-10 19:28:44 +00:00
Roman Lebedev 45e9990c02 [NFC][InstCombine] Tests for shift amount reassociation in bittest with shift of const
llvm-svn: 368517
2019-08-10 19:28:12 +00:00
David Bolvansky f6a5699392 [NFC] Added tests for D65898
llvm-svn: 368447
2019-08-09 15:52:26 +00:00
David Bolvansky 2689ed0f9d [InstCombine][NFC] Added comments about constants in tests for pow->exp2 fold
llvm-svn: 368360
2019-08-08 22:37:51 +00:00
David Bolvansky ae154d00b4 [NFC] Fixed newly added tests
llvm-svn: 368201
2019-08-07 19:36:46 +00:00
David Bolvansky f8183d64de [NFC] Added tests for x/fabs(X) fold
llvm-svn: 368200
2019-08-07 19:35:25 +00:00
Jay Foad 7d4ab7751d [InstCombine] Add a TODO comment
llvm-svn: 368176
2019-08-07 15:18:34 +00:00
Jay Foad 8e8b295835 [InstCombine] Propagate fast math flags through selects
Summary:
In SimplifySelectsFeedingBinaryOp, propagate fast math flags from the
outer op into both arms of the new select, to take advantage of
simplifications that require fast math flags.
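
A sketch of the idea (invented example; the fold requires that the select arms simplify):
```
%s = select i1 %c, float 1.0, float 0.0
%r = fmul fast float %s, %y
  =>
; with 'fast' propagated into both arms, 1.0*%y -> %y and 0.0*%y -> 0.0
%r = select i1 %c, float %y, float 0.0
```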

Reviewers: mcberg2017, majnemer, spatel, arsenm, xbolva00

Subscribers: wdng, javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65658

llvm-svn: 368175
2019-08-07 15:16:28 +00:00
Roman Lebedev 9bece444dd [InstCombine] Recommit: Shift amount reassociation: shl-trunc-shl pattern
This was initially committed in r368059 but got reverted in r368084
because of faulty logic in how the shift-amount type mismatch
was being handled (it simply wasn't).

I've added an explicit bailout before we call SimplifyAddInst() - i don't think
it's designed in general to handle differently-typed values, even though
the actual problem only comes from ConstantExpr's.

I have also changed the common type deduction, to not just blindly
look past zext, but try to do that so that in the end types match.

Differential Revision: https://reviews.llvm.org/D65380

llvm-svn: 368141
2019-08-07 09:41:50 +00:00
Reid Kleckner e4bd38478b Revert [InstCombine] Shift amount reassociation: shl-trunc-shl pattern
This reverts r368059 (git commit 0f95710976)

This caused Clang to assert while self-hosting and compiling
SystemZInstrInfo.cpp. Reduction is running.

llvm-svn: 368084
2019-08-06 20:32:07 +00:00
Roman Lebedev 0f95710976 [InstCombine] Shift amount reassociation: shl-trunc-shl pattern
Summary:
Currently `reassociateShiftAmtsOfTwoSameDirectionShifts()` only handles
two shifts one after another. If the shifts are `shl`, we still can
easily perform the fold, with no extra legality checks:
https://rise4fun.com/Alive/OQbM

If we have right-shift however, we won't be able to make it
any simpler than it already is.

After this, the only thing missing here is constant-folding (`NewShAmt >= bitwidth(X)`):
* If it's a logical shift, then constant-fold to `0` (not `undef`)
* If it's an `ashr`, then a splat of the original sign bit
https://rise4fun.com/Alive/E1K
https://rise4fun.com/Alive/i0V

Reviewers: spatel, nikic, xbolva00

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65380

llvm-svn: 368059
2019-08-06 17:03:40 +00:00
Sanjay Patel efc24d9d6f [InstCombine] add tests for binop with FMF with select operands; NFC
Baseline coverage for D65658.

llvm-svn: 368028
2019-08-06 13:19:13 +00:00
Roman Lebedev 76b772f9ce [InstCombine][NFC] Tests for non-canonical clamp-like pattern
As discussed in https://reviews.llvm.org/D65148#1607019

The canonical fold is: https://rise4fun.com/Alive/FKe

llvm-svn: 367897
2019-08-05 18:01:22 +00:00
Sanjay Patel 5dbb90bfe1 [InstCombine] combine mul+shl separated by zext
This appears to slightly help patterns similar to what's
shown in PR42874:
https://bugs.llvm.org/show_bug.cgi?id=42874
...but not in the way requested.

That fix will require some later IR and/or backend pass to
decompose multiply/shifts into something more optimal per
target. Those transforms already exist in some basic forms,
but probably need enhancing to catch more cases.

https://rise4fun.com/Alive/Qzv2

llvm-svn: 367891
2019-08-05 16:59:58 +00:00
Sanjay Patel 4b9d66cf41 [InstCombine] add tests for shl+mul; NFC
llvm-svn: 367883
2019-08-05 16:17:07 +00:00
Sanjay Patel 1a29823b9c [InstCombine] add extra use constraint for shl-zext fold
As the test shows, we can end up with more instructions than
we started with if we don't include the extra-use check.

llvm-svn: 367880
2019-08-05 16:04:07 +00:00
Sanjay Patel d1c5d13470 [InstCombine] add test for shl-zext with extra use; NFC
llvm-svn: 367876
2019-08-05 15:25:07 +00:00
David Bolvansky e834e306cb [InstCombine] Added mempcpy tests [NFC]
llvm-svn: 367825
2019-08-05 09:58:32 +00:00
Sanjay Patel 9ce5f41851 [InstCombine] fold cmp+select using select operand equivalence
As discussed in PR42696:
https://bugs.llvm.org/show_bug.cgi?id=42696
...but won't help that case yet.

We have an odd situation where a select operand equivalence fold was
implemented in InstSimplify when it could have been done more generally
in InstCombine if we allow dropping of {nsw,nuw,exact} from a binop operand.

Here's an example:
https://rise4fun.com/Alive/Xplr

  %cmp = icmp eq i32 %x, 2147483647
  %add = add nsw i32 %x, 1
  %sel = select i1 %cmp, i32 -2147483648, i32 %add
  =>
  %sel = add i32 %x, 1

I've left the InstSimplify code in place for now, but my guess is that we'd
prefer to remove that as a follow-up to save on code duplication and
compile-time.

Differential Revision: https://reviews.llvm.org/D65576

llvm-svn: 367695
2019-08-02 17:39:32 +00:00
Sanjay Patel 66ce04f261 [InstCombine] add tests with 'ne' predicates; NFC
More coverage for the proposal in D65576.

llvm-svn: 367579
2019-08-01 16:04:12 +00:00
Sanjay Patel 350b389c90 [InstCombine] add test with swapped select operands; NFC
More coverage for the proposal in D65576.

llvm-svn: 367577
2019-08-01 15:32:10 +00:00
Sanjay Patel 435cdecdf7 [InstCombine] canonicalize fneg before fmul/fdiv
Reverse the canonicalization of fneg relative to fmul/fdiv. That makes it
easier to implement the transforms (and possibly other fneg transforms) in
1 place because we can always start the pattern match from fneg (either the
legacy binop or the new unop).

There's a secondary practical benefit seen in PR21914 and PR42681:
https://bugs.llvm.org/show_bug.cgi?id=21914
https://bugs.llvm.org/show_bug.cgi?id=42681
...hoisting fneg rather than sinking seems to play nicer with LICM in IR
(although this change may expose analysis holes in the other direction).

1. The instcombine test changes show the expected neutral IR diffs from
   reversing the order.

2. The reassociation tests show that we were missing an optimization
   opportunity to fold away fneg-of-fneg. My reading of IEEE-754 says
   that all of these transforms are allowed (regardless of binop/unop
   fneg version) because:

   "For all other operations [besides copy/abs/negate/copysign], this
   standard does not specify the sign bit of a NaN result."
   In all of these transforms, we always have some other binop
   (fadd/fsub/fmul/fdiv), so we are free to flip the sign bit of a
   potential intermediate NaN operand.
   (If that interpretation is wrong, then we must already have a bug in
   the existing transforms?)

3. The clang tests shouldn't exist as-is, but that's effectively a
   revert of rL367149 (the test broke with an extension of the
   pre-existing fneg canonicalization in rL367146).

Differential Revision: https://reviews.llvm.org/D65399

llvm-svn: 367447
2019-07-31 16:53:22 +00:00
Roman Lebedev 8d76284599 [NFC][InstCombine] Add xor-or-icmp tests with icmp having extra uses
Currently InstCombiner::foldXorOfICmps() bails out if the
ICMP it wants to invert has extra uses. As can be seen
in the tests in the previous commit, this is super unfortunate:
this is the single pattern that is left non-canonicalized.

We could analyze whether we can also invert all the uses of said ICMP
at the same time, thus not bailing out there.
I'm not seeing any nicer alternative.

llvm-svn: 367439
2019-07-31 15:20:33 +00:00
Roman Lebedev 67688af5f0 [NFC][InstCombine] Add baseline tests with non-canonical CLAMP pattern
As discussed in https://reviews.llvm.org/D65148#1603922,
these would all need to be canonicalized to the traditional clamp pattern.

llvm-svn: 367438
2019-07-31 15:20:21 +00:00
Roman Lebedev be612ea471 [InstCombine] Fold "x ?% y ==/!= 0" to "x & (y-1) ==/!= 0" iff y is power-of-two
Summary:
I have stumbled into this by accident while preparing to extend backend `x s% C ==/!= 0` handling.

While we did happen to handle this fold in most of the cases,
the folding is indirect - we fold `x u% y` to `x & (y-1)` (iff `y` is power-of-two),
or first turn `x s% -y` to `x u% y`; that does handle most of the cases.
But we can't turn `x s% INT_MIN` to `x u% -INT_MIN`,
and thus we end up being stuck with `(x s% INT_MIN) == 0`.

There is no such restriction for the more general fold:
https://rise4fun.com/Alive/IIeS

To be noted, the fold does not enforce that `y` is a constant,
so it may indeed increase instruction count.
This is consistent with what `x u% y`->`x & (y-1)` already does.
I think it makes sense: it's at most one (simple) extra instruction,
while `rem`ainder is really much more un-simple (and likely **very** costly).
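
For instance, the `INT_MIN` case that previously got stuck (sketch, names invented):
```
%r = srem i32 %x, -2147483648    ; x s% INT_MIN
%cmp = icmp eq i32 %r, 0
  =>
%t = and i32 %x, 2147483647      ; y-1 == INT_MAX
%cmp = icmp eq i32 %t, 0
```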

Reviewers: spatel, RKSimon, nikic, xbolva00, craig.topper

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65046

llvm-svn: 367322
2019-07-30 15:28:22 +00:00
Cameron McInally b32a6592eb [NFC][FPEnv] Pre-commit tests for canonicalize negated operand of fdiv.
llvm-svn: 367233
2019-07-29 16:09:56 +00:00
Sanjay Patel e9ee7b47d4 [InstCombine] fold fadd+fneg with fdiv/fmul between
The backend already does this via isNegatibleForFree(),
but we may want to alter the fneg IR canonicalizations
that currently exist, so we need to try harder to fold
fneg in IR to avoid regressions.

llvm-svn: 367227
2019-07-29 13:50:25 +00:00
Sanjay Patel 74c35bd6b0 [InstCombine] add tests for fadd with negated operand; NFC
llvm-svn: 367222
2019-07-29 12:49:36 +00:00
Roman Lebedev 6ff633ddc4 [NFC][InstCombine] Revisit tests in shift-amount-reassociation-with-truncation-shl.ll
llvm-svn: 367196
2019-07-28 21:31:58 +00:00
Sanjay Patel 99c57c6daf [InstCombine] fold fsub+fneg with fdiv/fmul between
The backend already does this via isNegatibleForFree(),
but we may want to alter the fneg IR canonicalizations
that currently exist, so we need to try harder to fold
fneg in IR to avoid regressions.

llvm-svn: 367194
2019-07-28 17:10:06 +00:00
Roman Lebedev d5bc4b09f1 [NFC][InstCombine] Shift amount reassociation: can have trunc between shl's
https://rise4fun.com/Alive/OQbM
Not so simple for lshr/ashr, so those maybe later.

https://bugs.llvm.org/show_bug.cgi?id=42391

llvm-svn: 367189
2019-07-28 13:13:46 +00:00
Sanjay Patel d20a0fe203 [InstCombine] add tests for fsub with negated operand; NFC
llvm-svn: 367156
2019-07-26 21:12:22 +00:00
Sanjay Patel a9ab31558c [InstCombine] canonicalize negated operand of fdiv
This is a transform that we use with fmul, so use
it for fdiv too for consistency.

llvm-svn: 367146
2019-07-26 19:56:59 +00:00
Sanjay Patel 487e957775 [InstCombine] add tests for fdiv with negated operand; NFC
llvm-svn: 367145
2019-07-26 19:44:53 +00:00
Sanjay Patel c229cfeb7a [InstCombine] remove flop from lerp patterns
(Y * (1.0 - Z)) + (X * Z) -->
Y - (Y * Z) + (X * Z) -->
Y + Z * (X - Y)

This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=42716

Factoring eliminates an instruction, so that should be a good canonicalization.
The potential conversion to FMA would be handled by the backend based on target
capabilities.

Differential Revision: https://reviews.llvm.org/D65305

llvm-svn: 367101
2019-07-26 11:19:18 +00:00
Sanjay Patel 8f15d40555 [InstCombine] add tests for lerp patterns (PR42716); NFC
llvm-svn: 367069
2019-07-25 22:25:21 +00:00
Vlad Tsyrklevich 5d5a58317c Revert "[InstCombine] try to narrow a truncated load"
This reverts commit bc4a63fd3c, this is a
speculative revert to fix a number of sanitizer bots (like
sanitizer-x86_64-linux-bootstrap-ubsan) that have started to see stage2
compiler crashes, presumably due to a miscompile.

llvm-svn: 367029
2019-07-25 15:37:57 +00:00
Sanjay Patel bc4a63fd3c [InstCombine] try to narrow a truncated load
trunc (load X) --> load (bitcast X to narrow type)

We have this transform in DAGCombiner::ReduceLoadWidth(), but the truncated
load pattern can interfere with other instcombine transforms, so I'd like to
allow the fold sooner.

Example:
https://bugs.llvm.org/show_bug.cgi?id=16739
...in that report, we have bitcasts bracketing these ops, so those could get
eliminated too.

We've generally ruled out widening of loads early in IR ( LoadCombine -
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105291.html ), but
that reasoning may not apply to narrowing if we can preserve information
such as the dereferenceable range.
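
A little-endian sketch of the intended rewrite (hypothetical; note the revert above):
```
%wide = load i32, i32* %p
%narrow = trunc i32 %wide to i8
  =>
%p8 = bitcast i32* %p to i8*
%narrow = load i8, i8* %p8
```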

Differential Revision: https://reviews.llvm.org/D64432

llvm-svn: 367011
2019-07-25 12:14:27 +00:00
Craig Topper e9abc8177a [InstCombine] Teach foldOrOfICmps to allow icmp eq MIN_INT/MAX to be part of a range comparison. Similar for foldAndOfICmps
We can treat icmp eq X, MIN_UINT as icmp ule X, MIN_UINT and allow
it to merge with icmp ugt X, C. Similarly for the other constants.

We can do the same for icmp ne X, (U)INT_MIN/MAX in foldAndOfICmps. And we already handled UINT_MIN there.
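
One invented example of the kind of merge this enables (not from the patch's tests):
```
%a = icmp eq i32 %x, 0          ; i.e. icmp ule %x, 0
%b = icmp ugt i32 %x, 10
%r = or i1 %a, %b
  =>
%t = add i32 %x, -1
%r = icmp ugt i32 %t, 9
```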

Fixes PR42691.

Differential Revision: https://reviews.llvm.org/D65017

llvm-svn: 366945
2019-07-24 20:57:29 +00:00
David Bolvansky db913d9618 [InstCombine] Adjusted pow-exp tests for Windows [NFC]
Summary: https://bugs.llvm.org/show_bug.cgi?id=42740

Reviewers: efriedma, hans

Reviewed By: hans

Subscribers: spatel, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65220

llvm-svn: 366925
2019-07-24 17:01:20 +00:00
Matt Arsenault 0b7f226311 AMDGPU: Fix test after r366913
llvm-svn: 366916
2019-07-24 16:05:55 +00:00
Sanjay Patel 3624074426 [InstCombine] add tests for load narrowing; NFC
Baseline results for D64432.

llvm-svn: 366901
2019-07-24 12:44:21 +00:00
Roman Lebedev 402bf28ecc [NFC][InstCombine] Fixup commutative/negative tests with icmp preds in @llvm.umul.with.overflow tests
llvm-svn: 366802
2019-07-23 12:42:57 +00:00
Hideto Ueno 2d654df763 [AMDGPU][NFC] Simplify test file for amdgcn intrinsics
Summary: Remove the unchecked attribute at the call site and use FileCheck string substitution for the `convergent` check.

Reviewers: nhaehnle

Reviewed By: nhaehnle

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64901

llvm-svn: 366781
2019-07-23 06:48:47 +00:00
Roman Lebedev 77d37037f0 [InstCombine][NFC] Tests for canonicalization of unsigned multiply overflow check
llvm-svn: 366748
2019-07-22 22:08:45 +00:00
Craig Topper ee5dc7e7ad [InstCombine] Add foldAndOfICmps test cases inspired by PR42691.
icmp ne %x, INT_MIN can be treated similarly to icmp sgt %x, INT_MIN.
icmp ne %x, INT_MAX can be treated similarly to icmp slt %x, INT_MAX.
icmp ne %x, UINT_MAX can be treated similarly to icmp ult %x, UINT_MAX.

We already treat icmp ne %x, 0 similarly to icmp ugt %x, 0

llvm-svn: 366662
2019-07-22 02:43:43 +00:00
Roman Lebedev 8a431874e9 [NFC][InstCombine] Add a few extra srem-by-power-of-two tests - extra uses
llvm-svn: 366652
2019-07-21 09:05:49 +00:00
Roman Lebedev a2dd672c5f [NFC][InstCombine] Autogenerate a few tests
llvm-svn: 366643
2019-07-20 21:34:00 +00:00
Roman Lebedev 056640f8b3 [NFC][InstCombine] Add srem-by-signbit tests - still can fold to bittest
https://rise4fun.com/Alive/IIeS

llvm-svn: 366642
2019-07-20 21:33:50 +00:00
Craig Topper 3a3c58f045 [InstCombine] Fix copy/paste mistake in the test cases I added for PR42691. NFC
llvm-svn: 366614
2019-07-19 21:09:21 +00:00
Craig Topper 18230ecf7e [InstCombine] Add test cases for PR42691. NFC
llvm-svn: 366611
2019-07-19 20:48:52 +00:00
Roman Lebedev 9998585c47 [NFC][InstCombine] Tests for 'rem' formation from sub-of-mul-by-'div' (PR42673)
https://rise4fun.com/Alive/8Rp
https://bugs.llvm.org/show_bug.cgi?id=42673

llvm-svn: 366565
2019-07-19 11:29:18 +00:00
Roman Lebedev 882bf2a844 [NFC][InstCombine] Redundant masking before left-shift: tests with assume
If the legality check is `(shiftNbits-maskNbits) s>= 0`,
then we can simplify it to `shiftNbits u>= maskNbits`,
which is easier to check for.

However, currently switching the `dropRedundantMaskingOfLeftShiftInput()`
to `SimplifyICmpInst()` does not catch these cases and regresses
currently-handled cases, so i'll leave it as is for now.

https://rise4fun.com/Alive/25P

llvm-svn: 366564
2019-07-19 11:29:04 +00:00
Roman Lebedev f2eb403144 [InstCombine] Dropping redundant masking before left-shift [5/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
f. `((x << MaskShAmt) a>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
f. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

Normally, the inner pattern is sign-extend,
but for our purposes it's no different from the other patterns:

alive proofs:
f: https://rise4fun.com/Alive/7U3

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64524

llvm-svn: 366540
2019-07-19 08:26:58 +00:00
Roman Lebedev 441c9d6ca8 [InstCombine] Dropping redundant masking before left-shift [4/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
e. `((x << MaskShAmt) l>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
e. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
e: https://rise4fun.com/Alive/0FT

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64521

llvm-svn: 366539
2019-07-19 08:26:47 +00:00
Roman Lebedev 3c212ce305 [InstCombine] Dropping redundant masking before left-shift [3/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
d. `(x & ((-1 << MaskShAmt) >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
d. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
d: https://rise4fun.com/Alive/I5Y

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64519

llvm-svn: 366538
2019-07-19 08:26:37 +00:00
Roman Lebedev 2ebe57386d [InstCombine] Dropping redundant masking before left-shift [2/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
c. `(x & (-1 >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
c. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
c: https://rise4fun.com/Alive/RgJh

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64517

llvm-svn: 366537
2019-07-19 08:26:25 +00:00
Roman Lebedev 4422a1657c [InstCombine] Dropping redundant masking before left-shift [1/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
b. `(x & (~(-1 << maskNbits))) << shiftNbits`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
b. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

alive proof:
b: https://rise4fun.com/Alive/y8M

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64514

llvm-svn: 366536
2019-07-19 08:26:13 +00:00
Roman Lebedev a5f0824eb5 [InstCombine] Dropping redundant masking before left-shift [0/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
a. `(x & ((1 << MaskShAmt) - 1)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
a. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

alive proof:
a: https://rise4fun.com/Alive/wi9

Indeed, not all of these patterns are canonical.
But since this fold will only produce a single instruction
i'm really interested in handling even uncanonical patterns,
since i have this general kind of pattern in hotpaths,
and it is not totally outlandish for bit-twiddling code.

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Reviewers: spatel, nikic, huihuiz, xbolva00

Reviewed By: xbolva00

Subscribers: efriedma, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64512

llvm-svn: 366535
2019-07-19 08:25:43 +00:00
Nikita Popov 57190b3974 [InstCombine] Add assume context test; NFC
Baseline test for D37215.

llvm-svn: 366021
2019-07-14 15:55:32 +00:00
Sanjay Patel 22cc1030f6 Revert "[InstCombine] add tests for umin/umax via usub.sat; NFC"
This reverts commit rL365999 / 0f6148df23.
The tests already exist in this file, and the hoped-for transform
(mentioned in D62871) is invalid because of undef as discussed in
D63060.

llvm-svn: 366000
2019-07-13 13:16:46 +00:00
Sanjay Patel 0f6148df23 [InstCombine] add tests for umin/umax via usub.sat; NFC
llvm-svn: 365999
2019-07-13 12:54:48 +00:00
David Bolvansky af1b3185f5 [InstCombine] Fold select (icmp sgt x, -1), lshr (X, Y), ashr (X, Y) to ashr (X, Y)
Summary:
(select (icmp sgt x, -1), lshr (X, Y), ashr (X, Y)) -> ashr (X, Y)
(select (icmp slt x, 1), ashr (X, Y), lshr (X, Y)) -> ashr (X, Y)

Fixes PR41173

Alive proof by @lebedev.ri (thanks)
Name: PR41173
  %cmp = icmp slt i32 %x, 1
  %shr = lshr i32 %x, %y
  %shr1 = ashr i32 %x, %y
  %retval.0 = select i1 %cmp, i32 %shr1, i32 %shr
  =>
  %retval.0 = ashr i32 %x, %y

Optimization: PR41173
Done: 1
Optimization is correct!

Reviewers: lebedev.ri, spatel

Reviewed By: lebedev.ri

Subscribers: nikic, craig.topper, llvm-commits, lebedev.ri

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64285

llvm-svn: 365893
2019-07-12 11:31:16 +00:00
Huihui Zhang 7b4a59db1e [InstCombine][NFCI] Add more test coverage to onehot_merge.ll
Prep work for upcoming patch D64275.

llvm-svn: 365828
2019-07-11 21:28:25 +00:00
David Bolvansky 5dca95bc4e [NFC] Revisited tests for D64285
llvm-svn: 365815
2019-07-11 19:39:20 +00:00
Sanjay Patel 3487791fea [InstCombine] don't move FP negation out of a constant expression
-(X * ConstExpr) becomes X * (-ConstExpr), so don't reverse that
and loop infinitely.

llvm-svn: 365774
2019-07-11 13:44:29 +00:00
David Bolvansky e195a91d2d [NFC] Updated tests for D64285
llvm-svn: 365765
2019-07-11 12:51:33 +00:00
David Bolvansky e23be09e66 [InstCombine] Reorder recently added/improved pow transformations
Changed cases are now faster with exp2.

llvm-svn: 365758
2019-07-11 10:55:04 +00:00
Huihui Zhang 51f5079191 [InstCombine][NFCI] Add test coverage to onehot_merge.ll
Prep work for upcoming patch D64275.

llvm-svn: 365729
2019-07-11 04:56:37 +00:00
Johannes Doerfert 3ed286a388 Replace three "strip & accumulate" implementations with a single one
This patch replaces the three almost identical "strip & accumulate"
implementations for constant pointer offsets with a single one,
combining the respective functionalities. The old interfaces are kept
for now.

Differential Revision: https://reviews.llvm.org/D64468

llvm-svn: 365723
2019-07-11 01:14:48 +00:00
Roman Lebedev 61cc6df5dc [NFC][InstCombine] Comb through just-added "omit mask before left-shift" tests once more
llvm-svn: 365694
2019-07-10 19:58:13 +00:00
Roman Lebedev 20b45a6115 [NFC][InstCombine] Fixup some tests in just-added "omit mask before left-shift" tests
llvm-svn: 365663
2019-07-10 16:54:13 +00:00
Roman Lebedev 1c51073a3a [NFC][InstCombine] Redundant masking before left-shift (PR42563)
alive proofs:
a,b:     https://rise4fun.com/Alive/4zsf
c,d,e,f: https://rise4fun.com/Alive/RC49

Indeed, not all of these patterns are canonical.
But since this fold will only produce a single instruction
i'm really interested in handling even uncanonical patterns.

Other than these 6 patterns, i can't think of any other
reasonable variants right now, although i'm sure they exist.

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since i believe this is
both simplest to handle and i think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

llvm-svn: 365641
2019-07-10 15:08:06 +00:00
David Bolvansky 0735cc1954 [InstCombine] pow(C,x) -> exp2(log2(C)*x)
Summary:
Transform `pow(C,x)` to `exp2(log2(C)*x)`
if C > 0, C != inf, C != NaN (and C is not a power of 2, since we already have a fold for that case).

log2(C) is folded by the compiler and exp2 is much faster to compute than pow.

Reviewers: spatel, efriedma, evandro

Reviewed By: evandro

Subscribers: lebedev.ri, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64099

llvm-svn: 365637
2019-07-10 14:43:27 +00:00
Sanjay Patel 5f4d7c9d4f [InstCombine] add tests for trunc(load); NFC
I'm not sure if transforming any of these is valid as
a target-independent fold, but we might as well have
a few tests here to confirm or deny our position.

llvm-svn: 365523
2019-07-09 18:06:16 +00:00
Sanjay Patel 3dee113ebc [InstCombine] fold insertelement into splat of same scalar
Forming the canonical splat shuffle improves analysis and
may allow follow-on transforms (although some possibilities
are missing as shown in the test diffs).

The backend generically turns these patterns into build_vector,
so there should be no codegen regressions. All targets are
expected to be able to lower splats efficiently.

llvm-svn: 365379
2019-07-08 19:48:52 +00:00
Sanjay Patel 77ccc04700 [InstCombine] add tests for insert of same splatted scalar; NFC
llvm-svn: 365362
2019-07-08 18:03:22 +00:00
Sanjay Patel 0b59103a73 [InstCombine] canonicalize insert+splat to/from element 0 of vector
We recognize a splat from element 0 in (VectorUtils) llvm::getSplatValue()
and also in ShuffleVectorInst::isZeroEltSplatMask(), so this converts
to that form for better matching.

The backend generically turns these patterns into build_vector,
so there should be no codegen difference.

llvm-svn: 365342
2019-07-08 16:26:48 +00:00
Sanjay Patel 320a28200f [InstCombine] fix typo in test; NFC
I added this test in rL365325, but didn't mean to create an undef insert.

llvm-svn: 365333
2019-07-08 15:38:03 +00:00
Sanjay Patel 74cbaa37b6 [InstCombine] add tests for splat shuffles; NFC
llvm-svn: 365325
2019-07-08 14:49:21 +00:00
Sanjay Patel 75b5edf6a1 [InstCombine] allow undef elements when forming splat from chain of insertelements
We allow forming a splat (broadcast) shuffle, but we were conservatively limiting
that to cases where all elements of the vector are specified. It should be safe
from a codegen perspective to allow undefined lanes of the vector because the
expansion of a splat shuffle would become the chain of inserts again.

Forming splat shuffles can reduce IR and help enable further IR transforms.
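
A rough sketch of the new behavior with a missing lane (invented example, not from the patch's tests):
```
%i0 = insertelement <4 x float> undef, float %x, i32 0
%i1 = insertelement <4 x float> %i0, float %x, i32 1
%i2 = insertelement <4 x float> %i1, float %x, i32 2
  =>
%i0 = insertelement <4 x float> undef, float %x, i32 0
%i2 = shufflevector <4 x float> %i0, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 0, i32 undef>
```
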
Motivating bugs:
https://bugs.llvm.org/show_bug.cgi?id=42174
https://bugs.llvm.org/show_bug.cgi?id=16739

Differential Revision: https://reviews.llvm.org/D63848

llvm-svn: 365147
2019-07-04 16:45:34 +00:00
David Bolvansky 5f73e37af8 [NFC] Added tests for D64099
llvm-svn: 365141
2019-07-04 13:48:32 +00:00
Roman Lebedev 826db453d1 [NFC][InstCombine] onehot_merge.ll: add last few tests in the state they regress to in D62818
llvm-svn: 365056
2019-07-03 16:48:53 +00:00
Roman Lebedev 9f0c83902d [InstCombine] Y - ~X --> X + Y + 1 fold (PR42457)
Summary:
I *think* we'd want this new variant, because we obviously
have better handling for `add` as compared to `sub`/`not`.

https://rise4fun.com/Alive/WMn

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=42457 | PR42457 ]]
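
A minimal sketch (names invented):
```
%n = xor i32 %x, -1          ; ~x
%r = sub i32 %y, %n
  =>
%a = add i32 %x, %y
%r = add i32 %a, 1
```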

Reviewers: spatel, nikic, huihuiz, efriedma

Reviewed By: spatel

Subscribers: RKSimon, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63992

llvm-svn: 365011
2019-07-03 09:41:50 +00:00
David Bolvansky cb1a5a705c [SimplifyLibCalls] powf(x, sitofp(n)) -> powi(x, n)
Summary:
Partially solves https://bugs.llvm.org/show_bug.cgi?id=42190
Reviewers: spatel, nikic, efriedma

Reviewed By: efriedma

Subscribers: efriedma, nikic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63038

llvm-svn: 364940
2019-07-02 15:58:45 +00:00
Roman Lebedev 0bde7c6527 [InstCombine] Shift amount reassociation: fixup constantexpr handling (PR42484)
I was actually wondering if there was some nicer way than m_Value()+cast,
but apparently what i was really "subconsciously" thinking about
was correctness issue.

hasNoUnsignedWrap()/hasNoUnsignedWrap() exist for Instruction,
not for BinaryOperator, so let's just use m_Instruction(),
thus both avoiding a cast, and a crash.

Fixes https://bugs.llvm.org/show_bug.cgi?id=42484,
      https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=15587

llvm-svn: 364915
2019-07-02 12:54:48 +00:00
Roman Lebedev 7928fea4a7 [NFC][InstCombine] Revisit tests for "redundant shift input masking" (PR42456)
llvm-svn: 364897
2019-07-02 10:02:25 +00:00
Roman Lebedev 377dfb0226 [NFC][InstCombine] Add tests for "redundant shift input masking" (PR42456)
https://bugs.llvm.org/show_bug.cgi?id=42456
https://rise4fun.com/Alive/Vf1p

llvm-svn: 364894
2019-07-02 09:27:34 +00:00
Huihui Zhang 8e1051b3a0 [InstCombine][NFCI] Update test cases in onehot_merge.ll
Use both one bit and signbit shifting to check for one bit merge.

Reviewers: lebedev.ri, spatel, efriedma, craig.topper

Reviewed By: lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63903

llvm-svn: 364857
2019-07-01 22:00:32 +00:00
Sanjay Patel ddc1b40f26 [InstCombine] reduce more checks for power-of-2-or-zero using ctpop
Extends the transform from:
rL364341
...to include another (more common?) pattern that tests whether a
value is a power-of-2 (including or excluding zero).
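
A sketch of the added pattern (invented example): `(x & (x-1)) == 0` tests power-of-2-or-zero, and becomes:
```
%d = add i32 %x, -1
%a = and i32 %d, %x
%cmp = icmp eq i32 %a, 0
  =>
%t = call i32 @llvm.ctpop.i32(i32 %x)
%cmp = icmp ult i32 %t, 2
```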

llvm-svn: 364856
2019-07-01 22:00:00 +00:00
Roman Lebedev 975120a21b [NFC][InstCombine] More commutative tests for "shift direction in bittest" (PR42466)
'and' is commutative: even if we don't want to touch shift-of-const,
we still need to check the other hand of the 'and'.

llvm-svn: 364844
2019-07-01 20:33:56 +00:00
Roman Lebedev e62857786f [NFC][InstCombine] Add tests for "shift direction in bittest" (PR42466)
https://rise4fun.com/Alive/8O1
https://bugs.llvm.org/show_bug.cgi?id=42466

llvm-svn: 364824
2019-07-01 18:11:32 +00:00
Roman Lebedev 04d3d3bbff [InstCombine] (Y + ~X) + 1 --> Y - X fold (PR42459)
Summary:
To be noted, this pattern is not unhandled by instcombine per se;
it somehow does end up being folded when one runs opt -O3,
but not if it's just -instcombine. Regardless, that fold is
indirect, depends on some other folds, and is thus blind
when there are extra uses.

This does address the regression being exposed in D63992.

https://godbolt.org/z/7DGltU
https://rise4fun.com/Alive/EPO0

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=42459 | PR42459 ]]

Reviewers: spatel, nikic, huihuiz

Reviewed By: spatel

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63993

llvm-svn: 364792
2019-07-01 15:55:24 +00:00
Roman Lebedev 72b8d41ce8 [InstCombine] Shift amount reassociation in bittest (PR42399)
Summary:
Given pattern:
`icmp eq/ne (and ((x shift Q), (y oppositeshift K))), 0`
we should move shifts to the same hand of 'and', i.e. rewrite as
`icmp eq/ne (and (x shift (Q+K)), y), 0`  iff `(Q+K) u< bitwidth(x)`

It might be tempting to not restrict this to situations where we know
we'd fold two shifts together, but i'm not sure what rules there should be
to avoid endless combine loops.

We pick the same shift that was originally used to shift the variable we picked to shift:
https://rise4fun.com/Alive/6x1v

Should fix [[ https://bugs.llvm.org/show_bug.cgi?id=42399 | PR42399]].
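
A concrete sketch with lshr/shl (invented names; legal iff `%q + %k u< 32`):
```
%xs = lshr i32 %x, %q
%ys = shl i32 %y, %k
%and = and i32 %xs, %ys
%cmp = icmp eq i32 %and, 0
  =>
%amt = add i32 %q, %k
%xs2 = lshr i32 %x, %amt
%and2 = and i32 %xs2, %y
%cmp = icmp eq i32 %and2, 0
```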

Reviewers: spatel, nikic, RKSimon

Reviewed By: spatel

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63829

llvm-svn: 364791
2019-07-01 15:55:15 +00:00
Roman Lebedev 34a0b16e29 [NFC][InstCombine] Better commutative tests for "shift amount reassociation in bittest" pattern.
As discussed in https://reviews.llvm.org/D63829
*if* *both* shifts are one-use, we'd most likely want to produce `lshr`,
and not rely on ordering.

Also, there should likely be a *separate* fold to do this reordering.

llvm-svn: 364772
2019-07-01 14:28:24 +00:00
Roman Lebedev 9f3645869c [NFC][InstCombine] Improve test coverage for ((~x) + y) + 1 -> y - x fold (PR42459)
So we indeed do have this fold, but only if +1 is not the last operation.

llvm-svn: 364764
2019-07-01 13:31:06 +00:00
Roman Lebedev d5c3e34cb7 [NFC][InstCombine] Tests for ((~x) + y) + 1 -> y - x fold (PR42459)
To be noted, this pattern is not unhandled by instcombine per se;
it somehow does end up being folded when one runs opt -O3,
but not if it's just -instcombine. Regardless, that fold is
indirect, depends on some other folds, and is thus blind
when there are extra uses.

https://bugs.llvm.org/show_bug.cgi?id=42459
https://rise4fun.com/Alive/EPO0

llvm-svn: 364749
2019-07-01 12:22:06 +00:00
Roman Lebedev 4f878fe3a7 [NFC][InstCombine] Tests for x - ~(y) -> x + y + 1 fold (PR42457)
https://bugs.llvm.org/show_bug.cgi?id=42457
https://rise4fun.com/Alive/iFhE

llvm-svn: 364739
2019-07-01 09:57:53 +00:00
Roman Lebedev f55818e3a7 [InstCombine] Omit 'urem' where possible
This was added to the backend in D63390 / rL364286,
but it makes sense to also handle it in the middle-end.
https://rise4fun.com/Alive/Zsln

llvm-svn: 364738
2019-07-01 09:41:43 +00:00
Roman Lebedev 0f82f64c83 [NFC][InstCombine] Copy test for omit urem when possible from TargetLowering
This was added to the backend in D63390 / rL364286, but it makes sense to also handle it here.
https://rise4fun.com/Alive/Zsln

llvm-svn: 364737
2019-07-01 09:41:27 +00:00
Sanjay Patel 706b48251f [InstCombine] canonicalize fcmp+select to minnum/maxnum intrinsics
This is the opposite direction of D62158 (we have to choose 1 form or the other).
Now that we have FMF on the select, this becomes more palatable. And the benefits
of having a single IR instruction for this operation (fewer chances of missing folds
based on extra uses, etc) overcome my previous comments about the potential advantage
of larger pattern matching/analysis.

Differential Revision: https://reviews.llvm.org/D62414

llvm-svn: 364721
2019-06-30 13:40:31 +00:00
Sanjay Patel 77dc1e8568 [InstCombine] canonicalize fmin/fmax to LLVM intrinsics minnum/maxnum
This transform came up in D62414, but we should deal with it first.
We have LLVM intrinsics that correspond exactly to libm calls (unlike
most libm calls, these libm calls never set errno).
This holds without any fast-math-flags, so we should always canonicalize
to those intrinsics directly for better optimization.
Currently, we convert to fcmp+select only when we have FMF (nnan) because
fcmp+select does not preserve the semantics of the call in the general case.

Differential Revision: https://reviews.llvm.org/D63214

llvm-svn: 364714
2019-06-29 14:28:54 +00:00
Roman Lebedev e3a94ba4a9 [InstCombine] Shift amount reassociation (PR42391)
Summary:
Given pattern:
`(x shiftopcode Q) shiftopcode K`
we should rewrite it as
`x shiftopcode (Q+K)`  iff `(Q+K) u< bitwidth(x)`
This is valid for any shift, but they must be identical.

* https://rise4fun.com/Alive/9E2
* exact on both lshr => exact https://rise4fun.com/Alive/plHk
* exact on both ashr => exact https://rise4fun.com/Alive/QDAA
* nuw on both shl => nuw https://rise4fun.com/Alive/5Uk
* nsw on both shl => nsw https://rise4fun.com/Alive/0plg

Should fix [[ https://bugs.llvm.org/show_bug.cgi?id=42391 | PR42391]].
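
A minimal sketch (invented names; legal iff `%q + %k u< 32`):
```
%t = lshr i32 %x, %q
%r = lshr i32 %t, %k
  =>
%amt = add i32 %q, %k
%r = lshr i32 %x, %amt
```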

Reviewers: spatel, nikic, RKSimon

Reviewed By: nikic

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63812

llvm-svn: 364712
2019-06-29 11:51:50 +00:00
Roman Lebedev 3b4f086df4 [NFC][InstCombine] Shift amount reassociation: revisit flag preservation tests
llvm-svn: 364657
2019-06-28 16:36:53 +00:00
Roman Lebedev 9f1dffdb02 [NFC][InstCombine] Shift amount reassociation: add flag preservation test
As discussed in https://reviews.llvm.org/D63812#inline-569870
* exact on both lshr => exact https://rise4fun.com/Alive/plHk
* exact on both ashr => exact https://rise4fun.com/Alive/QDAA
* nuw on both shl => nuw https://rise4fun.com/Alive/5Uk
* nsw on both shl => nsw https://rise4fun.com/Alive/0plg

So basically, if the same flag is set on both original shifts, set it on the new shift.
I don't think we can do anything with non-matching flags on shl.

llvm-svn: 364652
2019-06-28 15:32:52 +00:00