Commit Graph

10693 Commits

Author SHA1 Message Date
Philip Reames 5b39acd111 [LICM] Compute a must execute property for the prefix of the header as we go
Computing this property within the existing walk ensures that the cost is linear with the size of the block. If we did this from within isGuaranteedToExecute, it would be quadratic without some very fancy caching.

This allows us to reliably catch a hoistable instruction within a header which may throw at some point *after* our hoistable instruction. It doesn't do anything for non-header cases, but given how common single block loops are, this seems very worthwhile.
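To make the shape concrete, here is a rough C sketch of a single-block loop this
helps with (the function names are made up for illustration):

extern int may_throw(int); /* hypothetical external call LLVM must treat as may-throw */

int loop(int n, int a, int b) {
  int sum = 0;
  for (int i = 0; i < n; ++i) {
    int t = a * b;        /* loop-invariant; sits in the header before any potential throw */
    sum += may_throw(t);  /* potentially throwing call *after* the hoistable multiply */
  }
  return sum;
}

Since the multiply precedes the first potentially throwing instruction in the
header, it is known to execute whenever the loop body runs, so it can be hoisted.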

llvm-svn: 331557
2018-05-04 21:35:00 +00:00
Shoaib Meenai 57fadab1cb [ObjCARC] Account for catchswitch in bitcast insertion
A catchswitch is both a pad and a terminator, meaning it must be the
only non-phi instruction in its basic block. When we're inserting a
bitcast in the incoming basic block for a phi, if that incoming block is
a catchswitch, we should go up the dominator tree to find a valid
insertion point rather than attempting to insert before the catchswitch
(which would result in invalid IR).

Differential Revision: https://reviews.llvm.org/D46412

llvm-svn: 331548
2018-05-04 19:03:11 +00:00
Craig Topper a3f39ee33d [LoopIdiomRecognize] Add a test case to show incorrect transformation of an infinite loop with side effects into a countable loop using ctlz.
We currently recognize this idiom where x is signed and thus the shift is an ashr.

int cnt = 0;
while (x) {
  x >>= 1; // arithmetic shift right
  ++cnt;
}

We turn it into (bitwidth - ctlz(x)). And if there is anything else in the loop, we will create a new loop that runs that many times.

If x is initially negative, the shift result will never be 0 and thus the loop is infinite. If you put something with side effects in the loop, that side effect will now only happen bitwidth times instead of an infinite number of times.

So this transform is only safe for logical shift right (which we don't currently recognize) or if we can prove that x cannot be negative before the loop.
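For contrast, a sketch (with a made-up function name) of the logical-shift-right
form that would be safe to transform into bitwidth - ctlz(x):

int count_shifts(unsigned x) { /* unsigned, so >> is a logical shift (lshr) */
  int cnt = 0;
  while (x) {
    x >>= 1; // logical shift right: always reaches 0, so the loop terminates
    ++cnt;
  }
  return cnt;
}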

llvm-svn: 331493
2018-05-03 23:50:29 +00:00
Sanjay Patel e7b6654711 [InstCombine] refine select-of-constants to bitwise ops
Add logic for the special case when a cmp+select can clearly be
reduced to just a bitwise logic instruction, and remove an 
over-reaching chunk of general purpose bit magic. The primary goal 
is to remove cases where we are not improving the IR instruction 
count when doing these select transforms, and in all cases here that 
is true.
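One illustrative case of a cmp+select collapsing to a single bitwise op (a
made-up example, not necessarily one of the exact patterns handled here): for b
known to be 0 or 1, `b ? 3 : 2` is just `2 | b`.

int sel(int b) {     /* b is 0 or 1 */
  return b ? 3 : 2;  /* lowers to an icmp + select of constants */
}

int bitwise(int b) { /* same result for b in {0, 1} */
  return 2 | b;
}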

In the motivating 3-way compare tests, there are further improvements
because we can combine/propagate select values (not sure if that
belongs in instcombine, but it's there for now).

DAGCombiner has folds to turn some of these selects into bit magic,
so there should be no difference in the end result in those cases.
Not all constant combinations are handled there yet, however, so it
is possible that some targets will see more cmov/csel codegen with
this change in IR canonicalization. 

Ideally, we'll go further to *not* turn selects into multiple 
logic/math ops in instcombine, and we'll canonicalize to selects.
But we should make sure that this step does not result in regressions
first (and if it does, we should fix those in the backend).

The general direction for this change was discussed here:
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105373.html
http://lists.llvm.org/pipermail/llvm-dev/2017-July/114885.html

Alive proofs for the new bit magic:
https://rise4fun.com/Alive/XG7

Differential Revision: https://reviews.llvm.org/D46086

llvm-svn: 331486
2018-05-03 21:58:44 +00:00
Piotr Padlewski c77ab8ef2f perform DSE through launder.invariant.group
Summary:
Alias Analysis knows that llvm.launder.invariant.group
returns a pointer that must-alias its argument, but this information
wasn't used; therefore we didn't DSE through launder.invariant.group.

Reviewers: chandlerc, dberlin, bogner, hfinkel, efriedma

Reviewed By: dberlin

Subscribers: amharc, llvm-commits, nlewycky, rsmith

Differential Revision: https://reviews.llvm.org/D31581

llvm-svn: 331449
2018-05-03 11:03:53 +00:00
Piotr Padlewski 5dde809404 Rename invariant.group.barrier to launder.invariant.group
Summary:
This is one of the initial commit of "RFC: Devirtualization v2" proposal:
https://docs.google.com/document/d/16GVtCpzK8sIHNc2qZz6RN8amICNBtvjWUod2SujZVEo/edit?usp=sharing

Reviewers: rsmith, amharc, kuhar, sanjoy

Subscribers: arsenm, nhaehnle, javed.absar, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D45111

llvm-svn: 331448
2018-05-03 11:03:01 +00:00
Craig Topper 856fd68690 [LoopIdiomRecognize] When looking for 'x & (x -1)' for popcnt, make sure the left hand side of the 'and' matches the left hand side of the 'subtract'
llvm-svn: 331437
2018-05-03 05:48:49 +00:00
Craig Topper a0cba89f86 [LoopIdiomRecognize] Add a test case showing that we transform to ctpop without fully checking the 'x & (x-1)' part.
The code fails to check that the same value is used twice. We only make sure the left hand side of the and is part of the loop recurrence. The 'x' in the subtract can be any value.
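For reference, the classic popcount idiom the recognizer targets looks like this
in C, with the same x on both sides of the 'and'/'subtract':

int popcount_loop(unsigned x) {
  int cnt = 0;
  while (x) {
    x &= x - 1; // clears the lowest set bit; both operands involve the same x
    ++cnt;
  }
  return cnt;
}

Per the note above, a loop that uses some other value inside the subtract should
not be recognized as this idiom.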

llvm-svn: 331436
2018-05-03 05:48:48 +00:00
Max Kazantsev 58fce7e54b Re-enable "[SCEV] Make computeExitLimit more simple and more powerful"
This patch was temporarily reverted because it exposed bug 37229 on the
PowerPC platform. The bug is unrelated to the patch and was just a general
bug in an optimization done for the PowerPC platform only. The bug was fixed
by the patch rL331410.

This patch re-applies the reverted commit since the bug has been fixed.

llvm-svn: 331427
2018-05-03 02:37:55 +00:00
Chandler Carruth 71c3a3fac5 [GCOV] Emit the writeout function as nested loops of global data.
Summary:
Prior to this change, LLVM would in some cases emit *massive* writeout
functions with many 10s of 1000s of function calls in straight-line
code. This is a very wasteful way to represent what are fundamentally
loops and creates a number of scalability issues. Among other things,
register allocating these calls is extremely expensive. While D46127 makes this
less severe, we'll still run into scaling issues eventually, if not in compile
time then in code size.

Now the pass builds up global data structures modeling the inputs to
these functions, and simply loops over the data structures calling the
relevant functions with those values. This ensures that the code size is
fixed and only the data size grows with larger amounts of coverage data.
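A conceptual C sketch of the change in shape (all identifiers are made up; this
is not the actual GCOV runtime API):

#include <stdio.h>

static void emit_arcs(const unsigned long long *counters, unsigned num) {
  for (unsigned i = 0; i < num; ++i)
    printf("%llu\n", counters[i]);
}

static const unsigned long long fn0_counters[3] = {1, 2, 3};
static const unsigned long long fn1_counters[2] = {4, 5};

/* Previously: one straight-line emit_arcs() call per instrumented function.
 * Now: the inputs live in a global table and a small loop walks it. */
struct arcs_info { const unsigned long long *counters; unsigned num; };
static const struct arcs_info arcs_table[] = {
  { fn0_counters, 3 },
  { fn1_counters, 2 },
};

int main(void) {
  for (unsigned i = 0; i < sizeof(arcs_table) / sizeof(arcs_table[0]); ++i)
    emit_arcs(arcs_table[i].counters, arcs_table[i].num);
  return 0;
}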

A trivial change to IRBuilder is included to make it easier to build
the constants that make up the global data.

Reviewers: wmi, echristo

Subscribers: sanjoy, mcrosier, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D46357

llvm-svn: 331407
2018-05-02 22:24:39 +00:00
Daniel Sanders 8d0d1aa229 [reassociate] Fix excessive revisits when processing long chains of reassociatable instructions.
Summary:
Some of our internal testing detected a major compile time regression which I've
tracked down to:
    r278938 - Revert "Reassociate: Reprocess RedoInsts after each inst".
It appears that processing long chains of reassociatable instructions causes
non-linear (potentially exponential) growth in the number of times an
instruction is revisited. For example, the included test revisits instructions
220 times in a 20-instruction test.

It appears that r278938 reversed the order instructions were visited and that
this is preventing scheduled revisits from being cancelled as a result of
visiting the instructions naturally during normal processing. However, simply
reversing the order also harmed the generated code. Upon closer inspection, it
was discovered that revisits occurred in the opposite order to the first pass
(Thanks to escha for spotting that).

This patch makes the revisit order consistent with the first pass which allows
more revisits to be cancelled. This does appear to have a small impact on the
generated code in few cases but it significantly reduces compile-time.

After this patch, our internal test that was most affected by the regression
dropped from ~2 million revisits to ~4k resulting in Reassociate having 0.46%
of the runtime it had before (99.54% improvement).

Here are the summaries reported by lnt for the LLVM test-suite with --benchmarking-only:
| metric         | geomean before patch | geomean after patch | delta   |
| -----          | -----                | -----               | -----   |
| compile time   | 0.1956               | 0.1261              | -35.54% |
| execution time | 0.3240               | 0.3237              | -       |
| code size      | 7365.4459            | 7365.6079           | -       |

The results have a few wins and losses on compile-time, mostly in the +/- 2.5% range. There was one outlier though:
| Performance Regressions - compile_time | Δ | Previous | Current |
| ----- | ----- | ----- | ----- |
| MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk | 9.82% | 2.0473 | 2.2483 |

Reviewers: javed.absar, dberlin

Reviewed By: dberlin

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D45734

llvm-svn: 331381
2018-05-02 17:59:16 +00:00
Farhana Aleen e2dfe8a853 [AMDGPU] Support horizontal vectorization.
Author: FarhanaAleen

Reviewed By: rampitec, arsenm

Subscribers: llvm-commits, AMDGPU

Differential Revision: https://reviews.llvm.org/D46213

llvm-svn: 331313
2018-05-01 21:41:12 +00:00
Sanjay Patel d2025a2e31 [AggressiveInstCombine] convert a chain of 'or-shift' bits into masked compare
and (or (lshr X, C), ...), 1 --> (X & C') != 0

I initially thought about implementing the minimal pattern in instcombine as mentioned here:
https://bugs.llvm.org/show_bug.cgi?id=37098#c6

...but we need to do better to catch the more general sequence from the motivating test 
(more than 2 bits in the compare). And a test-suite run with statistics showed that this 
pattern only happened 2 times currently. It would potentially happen more often if 
reassociation worked better (D45842), but it's probably still not too frequent?
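A small made-up C function with the kind of "or of shifted bits" test this fold
targets:

int any_of_bits_1_to_3(unsigned x) {
  return ((x >> 1) | (x >> 2) | (x >> 3)) & 1;  /* and (or (lshr x, C), ...), 1 */
}
/* The fold rewrites the chain into a single masked compare,
 * roughly: (x & 0xE) != 0. */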

This is small enough that I didn't see a need to create a whole new class/file within 
AggressiveInstCombine. There are likely other relatively small matchers like what was 
discussed in D44266 that would slide under foldUnusualPatterns() (name suggestions welcome). 
We could potentially also consolidate matchers for ctpop, bswap, etc under here.

Differential Revision: https://reviews.llvm.org/D45986

llvm-svn: 331311
2018-05-01 21:02:09 +00:00
Sanjay Patel f70671582d [AggressiveInstCombine] add more bitfield test patterns; NFC
Add another baseline for D45986 and a pattern that won't be
matched with that patch.

llvm-svn: 331309
2018-05-01 20:55:03 +00:00
Sanjay Patel 2b36e95d45 [PhaseOrdering] add tests for bittest patterns from bitfields; NFC
As mentioned in D45986, there's a potential ordering dependency 
between instcombine and aggressive-instcombine for detecting these,
so I'm adding a few tests to confirm that the expected folds occur
using -O3 (because aggressive-instcombine only runs at -O3 currently).

llvm-svn: 331308
2018-05-01 20:53:44 +00:00
Daniel Neilson 1091ca4640 [LV] Move test/Transforms/LoopVectorize/pr23997.ll
Summary:
This fixes a build break with r331269.

test/Transforms/LoopVectorize/pr23997.ll

should be in:

test/Transforms/LoopVectorize/X86/pr23997.ll

llvm-svn: 331281
2018-05-01 16:40:45 +00:00
Matthew Simpson 661e6a02bd [SLP] Add additional test for transposable binary operations with reuse
llvm-svn: 331274
2018-05-01 15:59:26 +00:00
Daniel Neilson 9e4bbe801a [LV] Preserve inbounds on created GEPs
Summary:
This is a fix for PR23997.

The loop vectorizer is not preserving the inbounds property of GEPs that it creates.
This is inhibiting some optimizations. This patch preserves the inbounds property in
the case where a load/store is being fed by an inbounds GEP.

Reviewers: mkuper, javed.absar, hsaito

Reviewed By: hsaito

Subscribers: dcaballe, hsaito, llvm-commits

Differential Revision: https://reviews.llvm.org/D46191

llvm-svn: 331269
2018-05-01 15:35:08 +00:00
Wei Mi eec5ba9fae Fix the issue that ComputeValueKnownInPredecessors only handles the case when
phi is on lhs of a comparison op.

For the following testcase,
L1:
  %t0 = add i32 %m, 7
  %t3 = icmp eq i32* %t2, null
  br i1 %t3, label %L3, label %L2

L2:
  %t4 = load i32, i32* %t2, align 4
  br label %L3

L3:
  %t5 = phi i32 [ %t0, %L1 ], [ %t4, %L2 ]
  %t6 = icmp eq i32 %t0, %t5
  br i1 %t6, label %L4, label %L5

We know that if we go through the path L1 --> L3, %t6 should always be true.
However, currently, if the rhs of the eq comparison is a phi, JumpThreading
fails to evaluate %t6 to true. And we know that InstCombine cannot guarantee
to always canonicalize the phi to the left hand side of the comparison,
because of its operand priority comparison mechanism. The patch handles the
case when the rhs of the comparison op is a phi.

Differential Revision: https://reviews.llvm.org/D46275

llvm-svn: 331266
2018-05-01 14:47:24 +00:00
Omer Paparo Bivas 54596e1c1a [InstCombine] new testcases for OverflowingBinaryOperators and PossiblyExactOperators transformations; NFC
instcombine should transform the relevant cases if the OverflowingBinaryOperator/PossiblyExactOperator can be proven to be safe.

Change-Id: I7aec62a31a894e465e00eb06aed80c3ea0c9dd45
llvm-svn: 331265
2018-05-01 14:27:10 +00:00
Omer Paparo Bivas 82ef8e19ef [InstCombine] Adjusting bswap pattern matching to hold for And/Shift mixed case
Differential Revision: https://reviews.llvm.org/D45731

Change-Id: I85d4226504e954933c41598327c91b2d08192a9d
llvm-svn: 331257
2018-05-01 12:25:46 +00:00
Sanjay Patel 1934cb1c49 [InstCombine] fix test to restore intent
This test had values that differed only in capitalization,
and that causes problems for the script that auto-generates
the check lines. So I changed that in rL331226, but I accidentally
forgot to change a subsequent use of a param.

llvm-svn: 331228
2018-04-30 21:28:18 +00:00
Sanjay Patel dab8515370 [InstCombine] add tests, update checks; NFC
llvm-svn: 331226
2018-04-30 21:03:36 +00:00
Roman Lebedev aa4faec114 [InstCombine] Unfold masked merge with constant mask
Summary:
As discussed in D45733, we want to do this in InstCombine.

https://rise4fun.com/Alive/LGk

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: chandlerc, xbolva00, llvm-commits

Differential Revision: https://reviews.llvm.org/D45867

llvm-svn: 331205
2018-04-30 17:59:33 +00:00
Roman Lebedev b7ae4ef82b [InstCombine][NFC] Add tests for unfolding masked merge with constant mask
Summary: As discussed in D45733, we want to do this in InstCombine.

Differential Revision: https://reviews.llvm.org/D45866

llvm-svn: 331204
2018-04-30 17:59:26 +00:00
Davide Italiano bd3bf1660b [SLPVectorizer] Debug info shouldn't impact spill cost computation.
<rdar://problem/39794738>

(Also, PR32761).

Differential Revision:  https://reviews.llvm.org/D46199

llvm-svn: 331199
2018-04-30 16:57:33 +00:00
Roman Lebedev 136867931a [InstCombine] Canonicalize variable mask in masked merge
Summary:
Masked merge has the pattern `((x ^ y) & M) ^ y`.
But there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`,
so we should canonicalize the pattern to the non-inverted mask.
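For reference, a small C illustration of why the two forms are interchangeable
(this is the standard bit-blend identity):

unsigned merge_noninverted(unsigned x, unsigned y, unsigned m) {
  return ((x ^ y) & m) ^ y;   /* bits of x where m is 1, bits of y where m is 0 */
}

unsigned merge_inverted(unsigned x, unsigned y, unsigned m) {
  return ((x ^ y) & ~m) ^ x;  /* the same blend, written with the inverted mask */
}

/* Both compute (x & m) | (y & ~m); the canonical form is the first one. */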

https://rise4fun.com/Alive/Yol

Reviewers: spatel, craig.topper

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45664

llvm-svn: 331112
2018-04-28 15:45:07 +00:00
Roman Lebedev 6b1e66b188 [InstCombine][NFC] Add tests for variable mask canonicalization in masked merge
Summary:
Masked merge has the pattern `((x ^ y) & M) ^ y`.
But there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`,
so we should canonicalize the pattern to the non-inverted mask.

Differential Revision: https://reviews.llvm.org/D45663

llvm-svn: 331111
2018-04-28 15:45:00 +00:00
Max Kazantsev 303572f3df [NFC] Add some tests that demonstrate unrecognized three-way comparison patterns
llvm-svn: 331100
2018-04-28 04:38:21 +00:00
Philip Reames 502d4481d4 [LoopGuardWidening] Make PostDomTree optional
The effect of doing so is to avoid disrupting the LoopPassManager when mixing this pass with other loop passes.  This should help locality of access substantially and avoids the cost of computing PostDom.

The assumption here is that the full GuardWidening (which does use PostDom) is run as a canonicalization before loop opts and that this version just catches cases exposed by other loop passes (e.g. LoopPredication, IndVarSimplify, LoopUnswitch, etc.).

llvm-svn: 331094
2018-04-27 23:15:56 +00:00
Adrian Prantl 210a29de7b Fix a bug in GlobalOpt's handling of DIExpressions.
This patch adds support for fragment expressions in
TryToShrinkGlobalToBoolean(), which were previously just dropped.

Thanks to Reid Kleckner for providing me a reproducer!

llvm-svn: 331086
2018-04-27 21:41:36 +00:00
Roman Lebedev 6959b8e76f [PatternMatch] Stabilize the matching order of commutative matchers
Summary:
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the `LHS` and `RHS` matchers:
1. match `RHS` matcher to the `first` operand of binary operator,
2. and then match `LHS` matcher to the `second` operand of binary operator.

This works ok.
But it complicates writing of commutative matchers, where one would like to match
(`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side.

This is additionally complicated by the fact that `m_Specific()` stores the `Value *`,
not `Value **`, so it won't work at all out of the box.

The last problem is trivially solved by adding a new `m_c_Specific()` that stores the
`Value **`, not `Value *`. I'm choosing to add a new matcher rather than change the
existing one because I assume all the current users are fine with the existing behavior,
and this additional pointer indirection may have performance drawbacks.
Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason
it did not work with a reference.

The first one appears trivial, too.
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.

Surprisingly, `$ ninja check-llvm` still passes with this.
But I expect the bots will disagree...

The motivational unittest is included.
I'd like to use this in D45664.

Reviewers: spatel, craig.topper, arsenm, RKSimon

Reviewed By: craig.topper

Subscribers: xbolva00, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D45828

llvm-svn: 331085
2018-04-27 21:23:20 +00:00
Sanjay Patel 2677038cc0 [Reassociate] add a test with debug info; NFC
As suggested in D45842 
(although still not sure if we're going to advance that),
we must invalidate references to instructions that have
been recycled (operands were changed, so result is different).

llvm-svn: 331083
2018-04-27 21:14:15 +00:00
Philip Reames e4ec473b3f [MustExecute/LICM] Special case first instruction in throwing header
We currently have a hard to solve analysis problem around the order of instructions within a potentially throwing block.  We can't cheaply determine whether a given instruction is before the first potential throw in the block.  While we're working on that in the background, special case the first instruction within the header.

Why this particular special case? Well, headers are guaranteed to execute if the loop does, and it turns out we tend to produce this form in practice.

In a follow-on patch, I intend to extend LICM with an alternate approach which works for any instruction in the header before the first throw, but this is the best I can come up with for other users of the analysis (such as store promotion).

Note: I can't show the difference in the analysis result since we're ORing in the expensive instruction walk used by SCEV.  Using the full walk is not suitable for a general solution.
llvm-svn: 331079
2018-04-27 20:44:01 +00:00
Philip Reames 9258e9d190 [LoopGuardWidening] Split out a loop pass version of GuardWidening
The idea is to have a pass which performs the same transformation as GuardWidening, but can be run within a loop pass manager without disrupting the pass manager structure.  As demonstrated by the test case, this doesn't quite get there because of issues with post dom, but it is a good step in the right direction.  The motivation is purely to reduce compile time since we can now preserve locality during the loop walk.

This patch only includes a legacy pass.  A follow up will add a new style pass as well.

llvm-svn: 331060
2018-04-27 17:29:10 +00:00
Florian Hahn f3fea0f11f [LoopInterchange] Allow some loops with PHI nodes in the exit block.
We currently support LCSSA PHI nodes in the outer loop exit, if their
incoming values do not come from the outer loop latch or if the
outer loop latch has a single predecessor. In that case, the outer loop latch
will be executed only if the inner loop gets executed. If we have multiple
predecessors for the outer loop latch, it may be executed even if the inner
loop does not get executed.

This is a first step to support the case described in
https://bugs.llvm.org/show_bug.cgi?id=30472

Reviewers: efriedma, karthikthecool, mcrosier

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D43237

llvm-svn: 331037
2018-04-27 13:52:51 +00:00
Benjamin Kramer 733c7fc55d [NVPTX] Turn on Loop/SLP vectorization
Since PTX has grown a <2 x half> datatype, vectorization has become more
important. The late LoadStoreVectorizer intentionally only does loads
and stores, but now arithmetic has to be vectorized for optimal
throughput too.

This is still very limited: SLP vectorization happily creates <2 x half>
if it's a legal type, but there's still a lot of register moving
happening to get that fed into a vectorized store. Overall it's a small
performance win by reducing the number of arithmetic instructions.

I haven't really checked what the loop vectorizer does to PTX code, the
cost model there might need some more tweaks. I didn't see it causing
harm though.

Differential Revision: https://reviews.llvm.org/D46130

llvm-svn: 331035
2018-04-27 13:36:05 +00:00
Matt Morehouse 1ae1febfde Revert "[SimplifyLibcalls] Replace locked IO with unlocked IO"
This reverts r331002 due to sanitizer bot breakage.

llvm-svn: 331011
2018-04-27 01:48:09 +00:00
Eli Friedman e06539456c [LowerTypeTests] Mark .cfi.jumptable nounwind.
It doesn't unwind, and the wrong marking leads to the creation of an
.eh_frame section when it isn't necessary.

Differential Revision: https://reviews.llvm.org/D46082

llvm-svn: 331008
2018-04-27 00:32:24 +00:00
David Bolvansky 2c9cc9c731 [SimplifyLibcalls] Replace locked IO with unlocked IO
Summary: If the file stream argument is not captured and the stream comes from fopen, we can replace IO calls with unlocked IO (the "_unlocked" function variants) to gain better speed.
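A rough C sketch of the shape of code that becomes eligible (function name made
up; the exact unlocked variant chosen is illustrative):

#include <stdio.h>

long count_bytes(const char *path) {
  FILE *f = fopen(path, "r");  /* stream comes from fopen and never escapes */
  if (!f)
    return -1;
  long n = 0;
  while (fgetc(f) != EOF)      /* candidate to become an unlocked variant,
                                  e.g. getc_unlocked(f), skipping per-call locking */
    ++n;
  fclose(f);
  return n;
}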

Reviewers: efriedma, RKSimon, spatel, sanjoy, hfinkel, majnemer

Subscribers: lebedev.ri, llvm-commits

Differential Revision: https://reviews.llvm.org/D45736

llvm-svn: 331002
2018-04-26 22:31:43 +00:00
Roman Lebedev 33095e3610 [InstCombine][NFC] Regenerate checks in or-xor.ll
llvm-svn: 330996
2018-04-26 21:41:56 +00:00
Roman Lebedev cabaeac29c [InstCombine][NFC] Regenerate checks in and-or-not.ll
llvm-svn: 330994
2018-04-26 21:13:09 +00:00
Sanjoy Das 6f1937b10f [InstCombine] Simplify Add with remainder expressions as operands.
Summary:
Simplify integer add expression X % C0 + (( X / C0 ) % C1) * C0 to
X % (C0 * C1).  This is a common pattern seen in code generated by the XLA
GPU backend.
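A quick sanity check of the identity with made-up constants (C0 = 4, C1 = 8,
X = 37):

#include <assert.h>

int main(void) {
  unsigned X = 37, C0 = 4, C1 = 8;                         /* made-up values */
  assert(X % C0 + ((X / C0) % C1) * C0 == X % (C0 * C1));  /* 1 + 4 == 5 */
  return 0;
}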

Add test cases for this new optimization.

Patch by Bixia Zheng!

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: efriedma, craig.topper, lebedev.ri, llvm-commits, jlebar

Differential Revision: https://reviews.llvm.org/D45976

llvm-svn: 330992
2018-04-26 20:52:28 +00:00
Sanjoy Das 0e643db48f Add test cases to prepare for the optimization that simplifies Add with
remainder expressions as operands.

Summary:
Add test cases to prepare for the new optimization that simplifies the integer
add expression X % C0 + (( X / C0 ) % C1) * C0 to X % (C0 * C1).

Patch by Bixia Zheng!

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: jlebar, llvm-commits

Differential Revision: https://reviews.llvm.org/D46017

llvm-svn: 330991
2018-04-26 20:52:27 +00:00
Roman Lebedev 7cc56f1599 [InstCombine][NFC] add2.ll: add a few commutative checks.
Fixes some missing test coverage in InstCombineAddSub.cpp, visitAdd()

llvm-svn: 330986
2018-04-26 20:07:17 +00:00
Roman Lebedev 1efe879641 [InstCombine][NFC] Autogenerate checks in add2.ll
llvm-svn: 330985
2018-04-26 20:07:12 +00:00
Roman Lebedev 3d7b22621c [NFC][InstCombine] rem.ll: add a few commutative tests.
This closes a gap in test coverage in
isKnownToBeAPowerOfTwo() from ValueTracking.cpp

llvm-svn: 330975
2018-04-26 18:44:37 +00:00
Roman Lebedev e117e1a440 [NFC][InstCombine] Regenerate rem.ll test
llvm-svn: 330974
2018-04-26 18:44:32 +00:00
Matthew Simpson cfdec0ff70 [SLP] Add tests for transposable binary operations
These test cases are vectorizable, but we are currently unable to vectorize
them effectively.

llvm-svn: 330945
2018-04-26 14:50:04 +00:00
Alex Bradbury 130b8b3f2b [RISCV] Implement isTruncateFree
Adapted from ARM's implementation introduced in r313533 and r314280.

llvm-svn: 330940
2018-04-26 13:37:00 +00:00