Commit Graph

9213 Commits

Author SHA1 Message Date
Simon Pilgrim 54822d1432 [BBVectorize] Regenerate simple tests
llvm-svn: 306571
2017-06-28 17:40:20 +00:00
Geoff Berry b0573547f6 [LoopUnroll] Fix bug in computeUnrollCount causing it to not honor MaxCount
Reviewers: sanjoy, anna, reames, apilipenko, igor-laevsky, mkuper

Subscribers: mcrosier, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D34532

llvm-svn: 306564
2017-06-28 17:01:15 +00:00
Sanjay Patel 1a132d27c6 [InstCombine] add tests for icmp with bswapped operands; NFC
llvm-svn: 306563
2017-06-28 16:56:45 +00:00
Nikolai Bozhenov 6710ba07c7 Revert r306528
llvm-svn: 306536
2017-06-28 12:15:13 +00:00
Nikolai Bozhenov 77b5536e4e [ValueTracking] Enabling existing ValueTracking patch by default.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but was disabled by
default due to performance regressions. The performance impact has since
improved, so this change enables the patch by default.

Reviewers: reames

Differential Revision: https://reviews.llvm.org/D34101

Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 306528
2017-06-28 10:08:08 +00:00
Nikolai Bozhenov b01e6b5a52 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNs, and the former NAN/NUM
nodes are illegal, so selection does not happen. I therefore decided to do
this kind of transformation in IR (InstCombine) instead of
complicating the logic in the backend.
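
A minimal IR sketch of the kind of pattern involved (constants and operand
names are illustrative, not taken from the patch):

  %cmp1 = fcmp fast olt float %x, 1.000000e+00
  %min  = select i1 %cmp1, float %x, float 1.000000e+00     ; min(x, 1.0)
  %cmp2 = fcmp fast ogt float %min, 0.000000e+00
  %r    = select i1 %cmp2, float %min, float 0.000000e+00   ; clamp(x, 0.0, 1.0)

With fast-math flags on the compares, matchSelectPattern can recognize this
as a min/max (clamp) pattern, just as it already does for integers.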

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 306525
2017-06-28 09:26:20 +00:00
Nikolai Bozhenov 4ec1bb6f39 Add tests to document current InstCombine behavior for clamp pattern.
Summary:
This commit adds the tests for clamp pattern as a prerequisite of
D33186 to make the impact of that fix more clear and also to document
current behavior.

Reviewers: spatel, jmolloy

Reviewed By: spatel

Subscribers: n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D34350

llvm-svn: 306524
2017-06-28 09:22:58 +00:00
Craig Topper 8fe3603ff1 [InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC
llvm-svn: 306510
2017-06-28 06:45:36 +00:00
Craig Topper 7f124694c5 Revert r306508 "[InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC"
I accidentally had an extra change in there.

llvm-svn: 306509
2017-06-28 06:43:58 +00:00
Craig Topper 1d5b4b634b [InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst->icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC
llvm-svn: 306508
2017-06-28 06:42:48 +00:00
Sanjay Patel 4b23fa0abf [CGP] add specialization for memcmp expansion with only one basic block
llvm-svn: 306485
2017-06-27 23:15:01 +00:00
Easwaran Raman c5fa6358ba [NewPM/Inliner] Reduce threshold for cold callsites in the non-PGO case
Differential Revision: https://reviews.llvm.org/D34312

llvm-svn: 306484
2017-06-27 23:11:18 +00:00
Florian Hahn 2665febb54 [AArch64] Inline callee if its target-features are a subset of the caller
Summary:
Similar to X86, it should be safe to inline callees if their target-features
are a subset of the caller's. This change matches GCC's inlining behavior
with respect to attributes [1].

[1] https://gcc.gnu.org/onlinedocs/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes
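
A hedged IR sketch of the attribute relationship being checked (the feature
strings are made up for illustration):

  define void @callee() #0 { ret void }
  define void @caller() #1 {
    call void @callee()   ; OK to inline: #0's features are a subset of #1's
    ret void
  }
  attributes #0 = { "target-features"="+neon" }
  attributes #1 = { "target-features"="+neon,+crc" }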

Reviewers: kristof.beyls, javed.absar, rengolin, t.p.northover

Reviewed By: t.p.northover

Subscribers: aemerson, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D34698

llvm-svn: 306478
2017-06-27 22:27:32 +00:00
Dehao Chen 920d022519 re-commit r306336: Enable vectorizer-maximize-bandwidth by default.
Differential Revision: https://reviews.llvm.org/D33341

llvm-svn: 306473
2017-06-27 22:05:58 +00:00
Sanjay Patel 70b36f193d [CGP] eliminate a sub instruction in memcmp expansion
As noted in D34071, there are some IR optimization opportunities that could be 
handled by normal IR passes if this expansion wasn't happening so late in CGP.

Regardless of that, it seems wasteful to knowingly produce suboptimal IR here, 
so I'm proposing this change:
  %s = sub i32 %x, %y
  %r = icmp ne %s, 0
    =>
  %r = icmp ne %x, %y

Changing the predicate to 'eq' mimics what InstCombine would do, so that's just
an efficiency improvement if we decide this expansion should happen sooner.

The fact that the PowerPC backend doesn't eliminate the 'subf.' might be 
something for PPC folks to investigate separately.

Differential Revision: https://reviews.llvm.org/D34416

llvm-svn: 306471
2017-06-27 21:46:34 +00:00
Xinliang David Li 241f020657 Clean up a test case
llvm-svn: 306468
2017-06-27 21:35:49 +00:00
Craig Topper 5fe0197622 [InstCombine] Propagate nsw flag when turning mul by pow2 into shift when the constant is a vector splat or the scalar bit width is larger than 64-bits
The check to see if we can propagate the nsw flag used m_ConstantInt(uint64_t*&), which doesn't work with splat vectors and has a restriction that the bitwidth of the ConstantInt must be 64 bits or less.

This patch changes it to use m_APInt to remove both of these issues.
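
For illustration, the splat-vector shape that now keeps its flag (a sketch,
not a test from the patch):

  %r = mul nsw <2 x i64> %x, <i64 8, i64 8>
    =>
  %r = shl nsw <2 x i64> %x, <i64 3, i64 3>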

Differential Revision: https://reviews.llvm.org/D34699

llvm-svn: 306457
2017-06-27 19:57:53 +00:00
Serge Guelton 7bc405aa4c [CodeExtractor] Prevent extraction of block involving blockaddress
A BlockAddress is only valid within its function context, which does not
interact well with CodeExtractor. Detect this case and prevent it.

Differential Revision: https://reviews.llvm.org/D33839

llvm-svn: 306448
2017-06-27 18:57:53 +00:00
Yaxun Liu 7c44f340de [SROA] Fix APInt size when alloca address space is not 0
SROA assumes the alloca address space is 0, which causes an assertion. This patch fixes that.

Differential Revision: https://reviews.llvm.org/D34104

llvm-svn: 306440
2017-06-27 18:26:06 +00:00
Sanjay Patel 7227276d41 [InstCombine] canonicalize icmp predicate feeding select
This canonicalization was suggested in D33172 as a way to make InstCombine behavior more uniform. 
We have this transform for icmp+br, so unless there's some reason that icmp+select should be 
treated differently, we should do the same thing here.
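
A hedged sketch of the shape of the canonicalization (which predicate form
InstCombine prefers is not the point here; flipping the predicate only
requires swapping the select arms):

  %cmp = icmp sgt i32 %a, %b
  %r   = select i1 %cmp, i32 %t, i32 %f
    =>
  %cmp = icmp sle i32 %a, %b
  %r   = select i1 %cmp, i32 %f, i32 %t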

The benefit comes from increasing the chances of creating identical instructions. This is shown in
the tests in logical-select.ll (PR32791). InstCombine doesn't fold those directly, but EarlyCSE 
can simplify the identical cmps, and then InstCombine can fold the selects together.

The possible regression for the tests in select.ll raises questions about poison/undef:
http://lists.llvm.org/pipermail/llvm-dev/2017-May/113261.html

...but that transform is just as likely to be triggered by this canonicalization as it is to be 
missed, so we're just pointing out a commutation deficiency in the pattern matching:
https://reviews.llvm.org/rL228409

Differential Revision: https://reviews.llvm.org/D34242

llvm-svn: 306435
2017-06-27 17:53:22 +00:00
Craig Topper 9512332bcb [InstCombine] Add test case demonstrating that we don't propagate nsw flag when converting mul by pow2 to shl when the type is larger than 64-bits. NFC
llvm-svn: 306427
2017-06-27 17:16:03 +00:00
Craig Topper d068fb8104 [InstCombine] Add test cases to show that we don't propagate 'nsw' flags when converting mul by pow2 constant to shl for splat vectors. NFC
llvm-svn: 306426
2017-06-27 17:16:01 +00:00
Craig Topper 81cbb0c237 [PatternMatch] Remove 64-bit or less restriction from m_SpecificInt
Not sure why this restriction existed, but it seems like we should support any size Constant here.

The particular pattern in the tests is not the only use of this matcher in the tree. There's one in CodeGenPrepare and one in InstSimplify as well.

Differential Revision: https://reviews.llvm.org/D34666

llvm-svn: 306417
2017-06-27 15:39:40 +00:00
Craig Topper 0a60d85811 [JumpThreading] Add test case that was supposed to go with r306085.
Looks like I forgot to 'git add' when I submitted the commit. Thanks to Chandler for noticing.

llvm-svn: 306416
2017-06-27 15:26:47 +00:00
Chandler Carruth 3f81d8024c [SROA] Fix PR32902 by more carefully propagating !nonnull metadata.
This is based heavily on the work done in D34285. I mostly wanted to do
test cleanup for the author to save them some time, but I had a really
hard time understanding why it was so hard to write better test cases
for these issues.

The problem is that because SROA does a second rewrite of the loads and
because we *don't* propagate !nonnull for non-pointer loads, we first
introduced invalid !nonnull metadata and then stripped it back off just
in time to avoid most ways of this PR manifesting. Moving to the more
careful utility only fixes this by changing the predicate to look at the
new load's type rather than the target type. However, that *does* fix
the bug, and the utility is much nicer including adding range metadata
to model the nonnull property after a conversion to an integer.
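
A hedged sketch of that utility's effect when a !nonnull pointer load is
rewritten as an integer load (the metadata ids and the exact range used to
express "nonzero" are my assumption):

  %p = load i8*, i8** %slot, !nonnull !0
    =>
  %i = load i64, i64* %slot.int, !range !1   ; !1 = !{i64 1, i64 0}, i.e. nonzero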

However, we have bigger problems because we don't actually propagate
*range* metadata, and the utility to do this extracted from instcombine
isn't really in good shape to do this currently. It *only* handles the
case of copying range metadata from an integer load to a pointer load.
It doesn't even handle the trivial cases of propagating from one integer
load to another when they are the same width! This utility will need to
be beefed up prior to using in this location to get the metadata to
fully survive.

And even then, we need to go and teach things to turn the range metadata
into an assume the way we do with nonnull so that when we *promote* an
integer we don't lose the information.

All of this will require a new test case that looks kind-of like
`preserve-nonnull.ll` does here but focuses on range metadata. It will
also likely require more testing because it needs to correctly handle
changes to the integer width, especially as SROA actively tries to
change the integer width!

Last but not least, I'm a little worried about hooking the range
metadata up here because the instcombine logic for converting from
a range metadata *to* a nonnull metadata node seems broken in the face
of non-zero address spaces where null is not mapped to the integer `0`.
So that probably needs to get fixed with test cases both in SROA and in
instcombine to cover it.

But this *does* extract the core PR fix from D34285 of preventing the
!nonnull metadata from being propagated in a broken state just long
enough to feed into promotion and crash value tracking.

On D34285 there is some discussion of zero-extend handling because it
isn't necessary. First, the new load size covers all of the non-undef
(ie, possibly initialized) bits. This may even extend past the original
alloca if loading those bits could produce valid data. The only way it's
valid for us to zero-extend an integer load in SROA is if the original
code had a zero extend or those bits were undef. And we get to assume
things like undef *never* satisfies nonnull, so non-undef bits can
participate here. No need to special case the zero-extend handling, it
just falls out correctly.

The original credit goes to Ariel Ben-Yehuda! I'm mostly landing this to
save a few rounds of trivial edits fixing style issues and test case
formulation.

Differential Revision: D34285

llvm-svn: 306379
2017-06-27 08:32:03 +00:00
Mikael Holmen 37b5120a9a [Reassociate] Make sure EraseInst sets MadeChange
Summary:
EraseInst didn't report that it made IR changes through MadeChange.

It is essential that changes to the IR are reported correctly,
since for example ReassociatePass::run() will indicate that all
analyses are preserved otherwise.
And the CGPassManager determines if the CallGraph is up-to-date
based on status from InstructionCombiningPass::runOnFunction().

Reviewers: craig.topper, rnk, davide

Reviewed By: rnk, davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34616

llvm-svn: 306368
2017-06-27 05:32:13 +00:00
Craig Topper f9319a78a5 [InstCombine] Add test cases demonstrating that we don't optimize select+cmp+cttz/ctlz when the bitwidth is larger than 64 bits.
llvm-svn: 306365
2017-06-27 04:50:47 +00:00
Chandler Carruth c691623585 [SROA] Further test cleanup and add a test for the actual propagation of
the nonnull attribute distinct from rewriting it into an assume.

llvm-svn: 306358
2017-06-27 03:08:45 +00:00
Chandler Carruth 3b978394ba [SROA] Clean up a test case a bit prior to adding more testing for
nonnull as part of fixing PR32902.

llvm-svn: 306353
2017-06-27 02:23:15 +00:00
Dehao Chen 8b7effb344 revert r306336 for breaking ppc test.
llvm-svn: 306344
2017-06-26 23:05:35 +00:00
Dehao Chen 79655792cc Enable vectorizer-maximize-bandwidth by default.
Summary:
vectorizer-maximize-bandwidth is generally useful in terms of performance. I've tested the impact of changing this to default on speccpu benchmarks on sandybridge machines. The result shows non-negative impact:

spec/2006/fp/C++/444.namd                 26.84  -0.31%
spec/2006/fp/C++/447.dealII               46.19  +0.89%
spec/2006/fp/C++/450.soplex               42.92  -0.44%
spec/2006/fp/C++/453.povray               38.57  -2.25%
spec/2006/fp/C/433.milc                   24.54  -0.76%
spec/2006/fp/C/470.lbm                    41.08  +0.26%
spec/2006/fp/C/482.sphinx3                47.58  -0.99%
spec/2006/int/C++/471.omnetpp             22.06  +1.87%
spec/2006/int/C++/473.astar               22.65  -0.12%
spec/2006/int/C++/483.xalancbmk           33.69  +4.97%
spec/2006/int/C/400.perlbench             33.43  +1.70%
spec/2006/int/C/401.bzip2                 23.02  -0.19%
spec/2006/int/C/403.gcc                   32.57  -0.43%
spec/2006/int/C/429.mcf                   40.35  +0.27%
spec/2006/int/C/445.gobmk                 26.96  +0.06%
spec/2006/int/C/456.hmmer                  24.4  +0.19%
spec/2006/int/C/458.sjeng                 27.91  -0.08%
spec/2006/int/C/462.libquantum            57.47  -0.20%
spec/2006/int/C/464.h264ref               46.52  +1.35%

geometric mean                                   +0.29%

The regression on 453.povray seems real, but is due to secondary effects as all hot functions are bit-identical with and without the flag.

I started this patch to consult upstream opinions on this. It will be greatly appreciated if the community can help test the performance impact of this change on other architectures so that we can decide if this should be target-dependent.

Reviewers: hfinkel, mkuper, davidxl, chandlerc

Reviewed By: chandlerc

Subscribers: rengolin, sanjoy, javed.absar, bjope, dorit, magabari, RKSimon, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D33341

llvm-svn: 306336
2017-06-26 21:41:09 +00:00
Dehao Chen 38f1bc7834 Fix the bug when handling shufflevector for aarch64.
Summary: This fixes https://bugs.llvm.org/show_bug.cgi?id=33600

Reviewers: mssimpso, davidxl, Carrot

Reviewed By: mssimpso

Subscribers: aemerson, rengolin, sanjoy, javed.absar, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D34641

llvm-svn: 306334
2017-06-26 21:33:51 +00:00
Wei Mi 71f06420e4 [GVN] Recommit the patch "Add phi-translate support in scalarpre".
The recommit fixes three bugs: The first one is to use CurrentBlock instead of
PREInstr's Parent as param of performScalarPREInsertion because the Parent
of a clone instruction may be uninitialized. The second one is to stop PRE when
the edge between CurrentBlock and its predecessor is a backedge and an operand of
CurInst is defined inside of CurrentBlock; the same value defined inside the loop
in the last iteration cannot be regarded as available. The third one is an
out-of-bounds array access in a flipped if guard.

Right now scalarpre doesn't have phi-translate support, so it will miss some
simple PRE opportunities. In the following testcase, for example, current scalarpre cannot
recognize that the last "a * b" is fully redundant because a and b used by the last
"a * b" expr are both defined by phis.

long a[100], b[100], g1, g2, g3;
__attribute__((pure)) long goo();

void foo(long a, long b, long c, long d) {

  g1 = a * b;
  if (__builtin_expect(g2 > 3, 0)) {
    a = c;
    b = d;
    g2 = a * b;
  }
  g3 = a * b;      // fully redundant.

}

The patch adds phi-translate support in scalarpre. This is only a temporary
solution before the newpre based on newgvn is available.

llvm-svn: 306313
2017-06-26 18:16:10 +00:00
Michael Zuckerman ce7e187f84 [X86][LLVM][test] Expanding support for lowerInterleavedStore() in X86InterleavedAccess test.
Adding base test (to trunk) for store stride=4 vf=32.

llvm-svn: 306286
2017-06-26 13:27:32 +00:00
Chandler Carruth 4a000883c7 [LoopSimplify] Re-instate r306081 with a bug fix w.r.t. indirectbr.
This was reverted in r306252, but I already had the bug fixed and was
just trying to form a test case.

The original commit factored the logic for forming dedicated exits
inside of LoopSimplify into a helper that could be used elsewhere and
with an approach that required fewer intermediate data structures. See
that commit for full details including the change to the statistic, etc.

The code looked fine to me and my reviewers, but in fact didn't handle
indirectbr correctly -- it left the 'InLoopPredecessors' vector dirty.

If you have code that looks *just* right, you can end up leaking these
predecessors into a subsequent rewrite, and crash deep down when trying
to update PHI nodes for predecessors that don't exist.

I've added an assert that makes the bug much more obvious, and then
changed the code to reliably clear the vector so we don't get this bug
again in some other form as the code changes.

I've also added a test case that *does* manage to catch this while also
giving some nice positive coverage in the face of indirectbr.

The real code that found this came out of what I think is CPython's
interpreter loop, but any code with really "creative" interpreter loops
mixing indirectbr and other exit paths could manage to tickle the bug.
It was hard to reduce the original test case because, in addition to
having a particular pattern of IR, the whole thing depends on the order
of the predecessors, which in turn depends on use list order. The test
case added here was designed so that in multiple different predecessor
orderings it should always end up going down the same path and tripping
the same bug. I hope. At least, it tripped it for me without
manipulating the use list order which is better than anything bugpoint
could do...

llvm-svn: 306257
2017-06-25 22:45:31 +00:00
Chandler Carruth 73367b6a09 [LoopSimplify] Improve a test for loop simplify minorly. NFC.
I did some basic testing while looking for a bug in my recent change to
loop simplify, and even though it didn't find the bug, it seems like
a useful improvement anyway.

llvm-svn: 306256
2017-06-25 22:24:02 +00:00
Daniel Jasper 4c6cd4ccb7 Revert "[LoopSimplify] Factor the logic to form dedicated exits into a utility."
This leads to a segfault. Chandler already has a test case and should be
able to recommit with a fix soon.

llvm-svn: 306252
2017-06-25 17:58:25 +00:00
Sanjay Patel 2f3ead7adc [InstCombine] add (sext i1 X), 1 --> zext (not X)
http://rise4fun.com/Alive/i8Q

A narrow bitwise logic op is obviously better than math for value tracking, 
and zext is better than sext. Typically, the 'not' will be folded into an 
icmp predicate.

The IR difference would even survive through codegen for x86, so we would see 
worse code:

https://godbolt.org/g/C14HMF

one_or_zero(int, int):                      # @one_or_zero(int, int)
        xorl    %eax, %eax
        cmpl    %esi, %edi
        setle   %al
        retq

one_or_zero_alt(int, int):                  # @one_or_zero_alt(int, int)
        xorl    %ecx, %ecx
        cmpl    %esi, %edi
        setg    %cl
        movl    $1, %eax
        subl    %ecx, %eax
        retq

llvm-svn: 306243
2017-06-25 14:15:28 +00:00
Xinliang David Li b67530e9b9 [PGO] Implement profile counter register promotion
Differential Revision: http://reviews.llvm.org/D34085

llvm-svn: 306231
2017-06-25 00:26:43 +00:00
Hiroshi Inoue b300824ee7 fix trivial typos in comment, NFC
dereferencable -> dereferenceable

llvm-svn: 306210
2017-06-24 15:43:33 +00:00
Vitaly Buka df19ad456e [InstCombine] Don't replace allocas with smaller globals
Summary:
InstCombine replaces large allocas with small global constants, causing buffer overflows
in valid code; see PR33372.

This fix permits this optimization only if the global is dereferenceable for the alloca size.

Fixes PR33372
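
A rough C sketch of the shape of the problem (not the actual PR33372
reproducer; names are made up):

static const char init[4] = {1, 2, 3, 4};

void f(char *out) {
  char buf[16];
  __builtin_memcpy(buf, init, sizeof(init)); /* buf is only written from the global */
  __builtin_memcpy(out, buf, sizeof(buf));   /* reads all 16 bytes of buf; if buf   */
                                             /* were replaced by the 4-byte global, */
                                             /* this would read past its end        */
}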

Reviewers: eugenis, majnemer, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34311

llvm-svn: 306194
2017-06-24 01:35:19 +00:00
Jun Bum Lim 506cfb7ab7 [InlineCost] Do not take INT_MAX when Cost is negative
Summary: visitSwitchInst should not take INT_MAX when Cost is negative. Instead of INT_MAX, we also use a valid upper-bound cost when overflow occurs in Cost.

Reviewers: hans, echristo, dmgreen

Reviewed By: dmgreen

Subscribers: mcrosier, javed.absar, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D34436

llvm-svn: 306118
2017-06-23 16:12:37 +00:00
Anna Thomas d67165c93c [InstCombine] Recognize and simplify three way comparison idioms
Summary:
Many languages have a three way comparison idiom where comparing two values
produces not a boolean, but a tri-state value. Typical values (e.g. as used in
the lcmp/fcmp bytecodes from Java) are -1 for less than, 0 for equality, and +1
for greater than.

We actually do a great job already of converting three way comparisons into
binary comparisons when the result produced has a single use. Unfortunately,
such values can have more than one use, and in that case, our existing
optimizations break down.

The patch adds a peephole which converts a three-way compare + test idiom into a
binary comparison on the original inputs. It focused on replacing the test on
the result of the three way compare and does nothing about removing the three
way compare itself. That's left to other optimizations (which do actually kick
in commonly.)
We currently recognize one idiom on signed integer compare. In the future, we
plan to recognize and simplify other comparison idioms on
other signed/unsigned datatypes such as floats, vectors etc.
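
A hedged IR sketch of the signed-integer idiom (the exact pattern matched by
the peephole may differ):

  %lt   = icmp slt i32 %a, %b
  %gt   = icmp sgt i32 %a, %b
  %sel1 = select i1 %gt, i32 1, i32 0
  %cmp3 = select i1 %lt, i32 -1, i32 %sel1   ; the -1 / 0 / +1 three-way result
  %test = icmp sgt i32 %cmp3, 0              ; test on the three-way result
    =>
  %test = icmp sgt i32 %a, %b                ; binary compare on the original inputs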

This is a resurrection of Philip Reames' original patch:
https://reviews.llvm.org/D19452

Reviewers: majnemer, apilipenko, reames, sanjoy, mkazantsev

Reviewed by: mkazantsev

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34278

llvm-svn: 306100
2017-06-23 13:41:45 +00:00
Chandler Carruth 4ab0f4910a [LoopSimplify] Factor the logic to form dedicated exits into a utility.
I want to use the same logic as LoopSimplify to form dedicated exits in
another pass (SimpleLoopUnswitch) so I wanted to factor it out here.

I also noticed that there is a pretty significantly more efficient way
to implement this than the way the code in LoopSimplify worked. We don't
need to actually retain the set of unique exit blocks, we can just
rewrite them as we find them and use only a set to deduplicate.

This did require changing one part of LoopSimplify to not re-use the
unique set of exits, but it only used it to check that there was
a single unique exit. That part of the code is about to walk the exiting
blocks anyways, so it seemed better to rewrite it to use those exiting
blocks to compute this property on-demand.

I also had to ditch a statistic, but it doesn't seem terribly valuable.

Differential Revision: https://reviews.llvm.org/D34049

llvm-svn: 306081
2017-06-23 04:03:04 +00:00
Craig Topper b60f866a8b [LVI] Teach LVI to reason about ORs of icmps similar to how it reasons about ANDs of icmps
Summary: LVI can reason about an AND of icmps on the true dest of a branch. I believe we can do similar for the false dest of ORs. This allows us to get the same answer for the demorganed versions of some of the AND test cases as you can see.
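
A hedged sketch of the kind of fact this exposes (the ranges are illustrative):

  %c1 = icmp slt i32 %x, 0
  %c2 = icmp sgt i32 %x, 100
  %or = or i1 %c1, %c2
  br i1 %or, label %out_of_range, label %in_range
  ; on the false edge both %c1 and %c2 must be false, so %x is known to lie in [0, 100]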

Reviewers: anna, reames

Reviewed By: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34431

llvm-svn: 306076
2017-06-23 01:08:16 +00:00
Andrew Kaylor d49711996f Restrict the definition of loop preheader to avoid EH blocks
Differential Revision: https://reviews.llvm.org/D34487

llvm-svn: 306070
2017-06-22 23:27:16 +00:00
whitequark 08b20356c3 Define behavior of "stack-probe-size" attribute when inlining.
Also document the attribute, since "probe-stack" already is.

Reviewed By: majnemer

Differential Revision: https://reviews.llvm.org/D34528

llvm-svn: 306069
2017-06-22 23:22:36 +00:00
Farhana Aleen 4b652a5335 Supported lowerInterleavedStore() in X86InterleavedAccess.
Reviewers: RKSimon, DavidKreitzer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32658

llvm-svn: 306068
2017-06-22 22:59:04 +00:00
Eric Christopher 5a7c2f1700 Remove the LoadCombine pass. It was never enabled and is unsupported.
Based on discussions with the author on mailing lists.

llvm-svn: 306067
2017-06-22 22:58:12 +00:00
Anna Thomas 72c90c87f8 [LoopDeletion] Update exits correctly when multiple duplicate edges from an exiting block
Summary:
Currently, we incorrectly update exit blocks of loops when there are multiple
edges from a single exiting block to the exit block. This can happen when we
have switches as the terminator of the exiting blocks.
The fix here is to correctly update the phi nodes in the exit block, and remove
all incoming values *except* for one which is from the preheader.

Note: Currently, this error can manifest only while deleting non-executed loops. However, it
is possible to trigger this error in invariant loops, once we enhance the logic
around the exit conditions for the loop check.

Reviewers: chandlerc, dberlin, sanjoy, efriedma

Reviewed by: efriedma

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D34516

llvm-svn: 306048
2017-06-22 20:20:56 +00:00
Craig Topper dffbbcb3fd [InstCombine] Teach foldSelectICmpAndOr to recognize (select (icmp slt (trunc (X)), 0), Y, (or Y, C2))
Summary:
InstCombine likes to turn (icmp eq (and X, C1), 0) into (icmp slt (trunc (X)), 0) sometimes. This breaks foldSelectICmpAndOr's ability to recognize (select (icmp eq (and X, C1), 0), Y, (or Y, C2))->(or (shl (and X, C1), C3), y).

This patch tries to recover this. I had to flip around some of the early-out checks so that I could create a new And instruction during the compare processing without the risk of it never getting used.

Reviewers: spatel, majnemer, davide

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34184

llvm-svn: 306029
2017-06-22 16:23:30 +00:00
Craig Topper 0de5e6a729 [InstCombine] Add one use checks to or/and->xnor folding
If the components of the and/or had multiple uses, this transform created an additional instruction.

This patch makes sure we remove one of the components.

Differential Revision: https://reviews.llvm.org/D34498

llvm-svn: 306027
2017-06-22 16:12:02 +00:00
Sanjay Patel d1e811979c [InstCombine] reverse bitcast + bitwise-logic canonicalization (PR33138)
There are 2 parts to this patch made simultaneously to avoid a regression.

We're reversing the canonicalization that moves bitwise vector ops before bitcasts. 
We're moving bitwise vector ops *after* bitcasts instead. That's the 1st and 3rd hunks 
of the patch. The motivation is that there's only one fold that currently depends on 
the existing canonicalization (see next), but there are many folds that would 
automatically benefit from the new canonicalization. 
PR33138 ( https://bugs.llvm.org/show_bug.cgi?id=33138 ) shows why/how we have these 
patterns in IR.
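
A hedged before/after sketch of the new preference (types chosen only for
illustration):

  ; old canonical form: logic in the vector type, bitcast afterwards
  %and = and <2 x i32> %x, %y
  %r   = bitcast <2 x i32> %and to i64
    =>
  ; new canonical form: bitcast the operands first, then do the logic
  %bx = bitcast <2 x i32> %x to i64
  %by = bitcast <2 x i32> %y to i64
  %r  = and i64 %bx, %by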

There's an or(and,andn) pattern that requires an adjustment in order to continue matching
to 'select' because the bitcast changes position. This match is unfortunately complicated 
because it requires 4 logic ops with optional bitcast and sext ops.

Test diffs:

  1. The bitcast.ll and bitcast-bigendian.ll changes show the most basic difference - 
     bitcast comes before logic.
  2. There are also tests with no diffs in bitcast.ll that verify that we're still doing 
     folds that were enabled by the previous canonicalization.
  3. icmp-xor-signbit.ll shows the payoff. We don't need to adjust existing icmp patterns 
     to look through bitcasts.
  4. logical-select.ll contains several tests for the or(and,andn) --> select fold to 
     verify that we are still handling those cases. The lone diff shows the movement of 
     the bitcast from the new canonicalization rule.

Differential Revision: https://reviews.llvm.org/D33517

llvm-svn: 306011
2017-06-22 15:46:54 +00:00
Diana Picus b512e91515 Revert "Enable vectorizer-maximize-bandwidth by default."
This reverts commit r305960 because it broke self-hosting on AArch64.

llvm-svn: 305990
2017-06-22 10:00:28 +00:00
Craig Topper 71e2c1611e [InstCombine] Add test cases to demonstrate that and->xnor and or->xnor folding can create more instructions than it removed when there are multiple uses. NFC
llvm-svn: 305985
2017-06-22 05:20:39 +00:00
Dehao Chen 014db29b89 Enable vectorizer-maximize-bandwidth by default.
Summary:
vectorizer-maximize-bandwidth is generally useful in terms of performance. I've tested the impact of changing this to default on speccpu benchmarks on sandybridge machines. The result shows non-negative impact:

spec/2006/fp/C++/444.namd                 26.84  -0.31%
spec/2006/fp/C++/447.dealII               46.19  +0.89%
spec/2006/fp/C++/450.soplex               42.92  -0.44%
spec/2006/fp/C++/453.povray               38.57  -2.25%
spec/2006/fp/C/433.milc                   24.54  -0.76%
spec/2006/fp/C/470.lbm                    41.08  +0.26%
spec/2006/fp/C/482.sphinx3                47.58  -0.99%
spec/2006/int/C++/471.omnetpp             22.06  +1.87%
spec/2006/int/C++/473.astar               22.65  -0.12%
spec/2006/int/C++/483.xalancbmk           33.69  +4.97%
spec/2006/int/C/400.perlbench             33.43  +1.70%
spec/2006/int/C/401.bzip2                 23.02  -0.19%
spec/2006/int/C/403.gcc                   32.57  -0.43%
spec/2006/int/C/429.mcf                   40.35  +0.27%
spec/2006/int/C/445.gobmk                 26.96  +0.06%
spec/2006/int/C/456.hmmer                  24.4  +0.19%
spec/2006/int/C/458.sjeng                 27.91  -0.08%
spec/2006/int/C/462.libquantum            57.47  -0.20%
spec/2006/int/C/464.h264ref               46.52  +1.35%

geometric mean                                   +0.29%

The regression on 453.povray seems real, but is due to secondary effects as all hot functions are bit-identical with and without the flag.

I started this patch to consult upstream opinions on this. It will be greatly appreciated if the community can help test the performance impact of this change on other architectures so that we can decide if this should be target-dependent.

Reviewers: hfinkel, mkuper, davidxl, chandlerc

Reviewed By: chandlerc

Subscribers: rengolin, sanjoy, javed.absar, bjope, dorit, magabari, RKSimon, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D33341

llvm-svn: 305960
2017-06-21 22:01:32 +00:00
whitequark ed54b4a798 Add a "probe-stack" attribute
This attribute is used to ensure the guard page is triggered on stack
overflow. Stack frames larger than the guard page size will generate
a call to __probestack to touch each page so the guard page won't
be skipped.
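
A hedged IR sketch of opting a function in (my assumption here is that the
attribute value names the probe function described above):

  define void @big_frame() #0 {
  entry:
    %buf = alloca [8192 x i8]        ; frame larger than a typical 4 KiB guard page
    ret void
  }
  attributes #0 = { "probe-stack"="__probestack" }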

Reviewed By: majnemer

Differential Revision: https://reviews.llvm.org/D34386

llvm-svn: 305939
2017-06-21 18:46:50 +00:00
Dehao Chen 50f2aa19e8 Do not inline recursive direct calls in sample loader pass.
Summary: r305009 disables recursive inlining for indirect calls in sample loader pass. The same logic applies to direct recursive calls.

Reviewers: iteratee, davidxl

Reviewed By: iteratee

Subscribers: sanjoy, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D34456

llvm-svn: 305934
2017-06-21 17:57:43 +00:00
Sanjay Patel deed579140 [x86] set the datalayout to match the RUN line triple; NFC
I don't think there's any visible difference from having the wrong layout
for the 32-bit case at this point, but that could change in the future.

llvm-svn: 305931
2017-06-21 17:06:24 +00:00
Craig Topper 5b173f2bb3 [InstCombine] Add range metadata to cttz/ctlz/ctpop intrinsic calls based on known bits
Summary:
I noticed that passing known bits across these intrinsics isn't great at capturing the information we really know. Turning known bits of the input into known bits of a count output isn't able to convey a lot of what we really know.

This patch adds range metadata to these intrinsics based on the known bits.

Currently the patch punts if we already have range metadata present.
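
A hedged sketch of the effect (metadata id and bounds are illustrative):

  %x = and i32 %a, 15                                ; at most 4 bits can be set
  %p = call i32 @llvm.ctpop.i32(i32 %x), !range !0   ; range derived from known bits
  !0 = !{i32 0, i32 5}                               ; i.e. %p is in [0, 4]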

Reviewers: spatel, RKSimon, davide, majnemer

Reviewed By: RKSimon

Subscribers: sanjoy, hfinkel, llvm-commits

Differential Revision: https://reviews.llvm.org/D32582

llvm-svn: 305927
2017-06-21 16:32:35 +00:00
Craig Topper ae86cc725d [InstCombine] Don't let folding (select (icmp eq (and X, C1), 0), Y, (or Y, C2)) create more instructions than it removes
Summary:
Previously this folding had no checks to see if it was going to result in fewer instructions. This was pointed out during the review of D34184.

This patch adds code to count how many instructions it's going to create vs. how many it's going to remove so we can make a proper decision.

Reviewers: spatel, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34437

llvm-svn: 305926
2017-06-21 16:07:13 +00:00
Craig Topper cbac691c4b [Reassociate] Support xor reassociating for splat vectors
Summary: This patch adds support for xors of splat vectors.

Reviewers: mcrosier

Reviewed By: mcrosier

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34354

llvm-svn: 305925
2017-06-21 16:07:09 +00:00
Max Kazantsev eac01d4c62 [SCEV] Make MulOpsInlineThreshold lower to avoid excessive compilation time
The MulOpsInlineThreshold option of SCEV defaults to 1000, which is unreasonably high.
When constructing SCEVs of expressions like:

  x1 = a * a
  x2 = x1 * x1
  x3 = x2 * x2
    ...

We actually get huge SCEVs with the maximum allowed number of operands inlined.
Such expressions are easy to get from unrolling of loops looking like

  x = a
  for (i = 0; i < n; i++)
    x = x * x

Or more tricky cases where big powers are involved. If some non-linear analysis
tries to work with a SCEV that has 1000 operands, it may lead to excessively long
compilation. The attached test does not pass within 1 minute with default threshold.

This patch decreases its default value to 32, which looks much more reasonable if we
use analyses with complexity O(N^2) or O(N^3) working with SCEV.

Differential Revision: https://reviews.llvm.org/D34397

llvm-svn: 305882
2017-06-21 07:28:13 +00:00
Matt Arsenault 67cd347e93 AMDGPU: Allow vectorization of packed types
llvm-svn: 305844
2017-06-20 20:38:06 +00:00
Sanjay Patel 0656629b87 [x86] enable CGP memcmp() expansion for 2/4/8 byte sizes
There are a couple of potential improvements as seen in the IR and asm:
1. We're unnecessarily extending to a larger type to compare values.
2. The codegen for (select cond, 1, -1) could avoid a cmov.
(or we could change the order of the compares, so we have a select with 0 operand)
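
A hedged IR sketch of the simplest (equality-only) 4-byte case:

  %c  = call i32 @memcmp(i8* %a, i8* %b, i64 4)
  %eq = icmp eq i32 %c, 0
    =>
  %pa = bitcast i8* %a to i32*
  %pb = bitcast i8* %b to i32*
  %la = load i32, i32* %pa
  %lb = load i32, i32* %pb
  %eq = icmp eq i32 %la, %lb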

llvm-svn: 305802
2017-06-20 15:58:30 +00:00
Sanjay Patel 4ccbd58d70 [InstCombine] fix code/test comments for r305792; NFC
These diffs were in the last version of the patch in D33342,
but I accidentally committed the previous rev. 

llvm-svn: 305793
2017-06-20 12:45:46 +00:00
Sanjay Patel adca825dc1 [InstCombine] try to canonicalize xor-of-icmps to and-of-icmps
We have a large portfolio of folds for and-of-icmps and or-of-icmps in InstSimplify and InstCombine, 
but hardly anything for xor-of-icmps. Rather than trying to rethink and translate all of those folds, 
we can use the truth table definition of xor:

X ^ Y --> (X | Y) & !(X & Y)

...to see if we can convert the xor to and/or and then use the existing folds.
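
Spelled out in IR, the truth-table expansion being tried looks like this (a sketch):

  %c1 = icmp sgt i32 %a, 0
  %c2 = icmp slt i32 %b, 0
  %x  = xor i1 %c1, %c2
    =>
  %or  = or  i1 %c1, %c2
  %and = and i1 %c1, %c2
  %not = xor i1 %and, true
  %x   = and i1 %or, %not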

http://rise4fun.com/Alive/J9v

Differential Revision: https://reviews.llvm.org/D33342

llvm-svn: 305792
2017-06-20 12:40:55 +00:00
Ana Pazos f731bde064 [PATCH] [PGO] Fixed cast operation in MemIntrinsicVisitor::instrumentOneMemIntrinsic.
Reviewers: xur, efriedma, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34293

llvm-svn: 305737
2017-06-19 20:04:33 +00:00
Taewook Oh 9083547ae3 Improve profile-guided heuristics to use estimated trip count.
Summary:
Existing heuristic uses the ratio between the function entry
frequency and the loop invocation frequency to find cold loops. However,
even if the loop executes frequently, if it has a small trip count per
each invocation, vectorization is not beneficial. On the other hand,
even if the loop invocation frequency is much smaller than the function
invocation frequency, if the trip count is high it is still beneficial
to vectorize the loop.

This patch uses estimated trip count computed from the profile metadata
as a primary metric to determine coldness of the loop. If the estimated
trip count cannot be computed, it falls back to the original heuristics.

Reviewers: Ayal, mssimpso, mkuper, danielcdh, wmi, tejohnson

Reviewed By: tejohnson

Subscribers: tejohnson, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D32451

llvm-svn: 305729
2017-06-19 18:48:58 +00:00
Bjorn Pettersson 475fcd9cd8 [InstCombine] Make sure AddReachableCodeToWorklist sets MadeIRChange
Summary:
Some optimizations in AddReachableCodeToWorklist did not update
the MadeIRChange state. This could happen both when removing
trivially dead instructions (DCE) and at constant folds.

It is essential that changes to the IR are reported correctly,
since for example InstCombinePass::run() will indicate that all
analyses are preserved otherwise.
And the CGPassManager determines if the CallGraph is up-to-date
based on status from InstructionCombiningPass::runOnFunction().

The new test case early_dce_clobbers_callgraph.ll is a reproducer
for some asserts that started to trigger after changes in the
inliner in r305245. With this patch the test case passes again.

Reviewers: sanjoy, craig.topper, dblaikie

Reviewed By: craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34346

llvm-svn: 305725
2017-06-19 18:00:27 +00:00
Hans Wennborg ca69fc1cb7 Revert r304824 "Fix PR23384 (part 3 of 3)"
This seems to be interacting badly with ASan somehow, causing false reports of
heap-buffer overflows: PR33514.

> Summary:
> The patch makes instruction count the highest priority for
> LSR solution for X86 (previously registers had highest priority).
>
> Reviewers: qcolombet
>
> Differential Revision: http://reviews.llvm.org/D30562
>
> From: Evgeny Stupachenko <evstupac@gmail.com>

llvm-svn: 305720
2017-06-19 17:57:15 +00:00
Craig Topper a7529b68cc [InstCombine] Cleanup some duplicated one use checks
Summary:
These 4 patterns have the same one-use check repeated twice for each: once without a cast and once with. But the cast has no effect on what method is called.

For the OR case I believe it is always profitable regardless of the number of uses since we'll never increase the instruction count.

For the AND case I believe it is profitable if the pair of xors has one use such that we'll get rid of it completely. Or if the C value is something freely invertible, in which case the not doesn't cost anything.

Reviewers: spatel, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34308

llvm-svn: 305705
2017-06-19 16:23:49 +00:00
Craig Topper ef85498e05 [Reassociate] Support some reassociation of vector xors
Summary:
Currently we don't try to do anything with vector xors.

This patch adds support for removing duplicate pairs from a chain of vector xors, as it's pretty easy to support. We still don't try to combine the xors with and/ors, but I might try that in a future patch.
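
For illustration, the kind of cancellation this enables (a sketch):

  %t = xor <2 x i32> %x, %y
  %r = xor <2 x i32> %t, %x     ; %x appears twice in the xor chain
    =>
  ; the duplicate pair cancels, so %r simplifies to %y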

Reviewers: mcrosier, davide, resistor

Reviewed By: mcrosier

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34338

llvm-svn: 305704
2017-06-19 16:23:46 +00:00
Xin Tong b412831d11 [TRE] Improve code motion in TRE, use AA to tell whether a load can be moved before a call that writes to memory.
Summary: use AA to tell whether a load can be moved before a call that writes to memory.

Reviewers: dberlin, davide, sanjoy, hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, llvm-commits

Differential Revision: https://reviews.llvm.org/D34115

llvm-svn: 305698
2017-06-19 15:21:18 +00:00
Max Kazantsev 35b2a18eb9 [SCEV] Teach SCEVExpander to expand BinPow
Current implementation of SCEVExpander demonstrates a very naive behavior when
it deals with power calculation. For example, a SCEV for x^8 looks like

  (x * x * x * x * x * x * x * x)

If we try to expand it, it generates a very straightforward sequence of muls, like:

  x2 = mul x, x
  x3 = mul x2, x
  x4 = mul x3, x
      ...
  x8 = mul x7, x

This is an inefficient way of doing that. A better way is to generate a sequence of
binary power calculation. In this case the expanded calculation will look like:

  x2 = mul x, x
  x4 = mul x2, x2
  x8 = mul x4, x4

In some cases the code size reduction for such SCEVs is dramatic. If we had a loop:

  x = a;
  for (int i = 0; i < 3; i++)
    x = x * x;

And if this loop has been fully unrolled, we have something like:

  x = a;
  x2 = x * x;
  x4 = x2 * x2;
  x8 = x4 * x4;

The SCEV for x8 is the same as in the example above, and if we for some reason
want to expand it, we will naively generate 7 multiplications instead of 3.
The BinPow expansion algorithm here allows us to keep the code size reasonable.

This patch teaches SCEV Expander to generate a sequence of BinPow multiplications
if we have repeating arguments in SCEVMulExpressions.

Differential Revision: https://reviews.llvm.org/D34025

llvm-svn: 305663
2017-06-19 06:24:53 +00:00
Daniel Berlin 36b08b2088 NewGVN: Fix PR 33461, caused by slightly overzealous verification.
llvm-svn: 305657
2017-06-19 00:24:00 +00:00
Xin Tong 9d2a5b1cf7 Add argmemonly attribute to strlen and wcslen, i.e. they only read the memory (string) passed to them.
Summary:
This allows strlen to be moved out of the loop by LICM in case its argument is
not modified in the loop.
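
A small C sketch of the motivating case (function and variable names are made up):

#include <stddef.h>
#include <string.h>

size_t count_bytes(const char *s, size_t iters) {
  size_t n = 0;
  for (size_t i = 0; i < iters; i++)
    n += strlen(s); /* s is never written in the loop, so once strlen is known
                       to only read its argument, LICM can hoist this call out */
  return n;
}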

Reviewers: hfinkel, davide, sanjoy, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34323

llvm-svn: 305641
2017-06-18 03:10:26 +00:00
Sanjoy Das b70ddd8901 [SROA] Add support for non-integral pointers
Summary: C.f. http://llvm.org/docs/LangRef.html#non-integral-pointer-type

Reviewers: chandlerc, loladiro

Reviewed By: loladiro

Subscribers: reames, loladiro, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D32203

llvm-svn: 305639
2017-06-17 20:28:13 +00:00
Davide Italiano 64f94fe02a [InstCombine] Make FPMathOperator working with ConstantExpression(s).
Fixes PR33453.

Differential Revision:  https://reviews.llvm.org/D34303

llvm-svn: 305618
2017-06-17 00:07:22 +00:00
Wei Mi c7ba876323 Revert rL305578. There is still some buildbot failure to be fixed.
llvm-svn: 305603
2017-06-16 23:14:35 +00:00
Anna Thomas 6bc14c65ad [InstCombine] Set correct insertion point for selects generated while folding phis
Summary:
When we fold vector constants that are operands of phi's that feed into select,
we need to set the correct insertion point for the *new* selects that get generated.
The correct insertion point is the incoming block for the phi.
Such cases can occur with patch r298845, which fixed folding of
vector constants, but the new selects could be inserted incorrectly (as the added
test case shows).

Reviewers: majnemer, spatel, sanjoy

Reviewed by: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34162

llvm-svn: 305591
2017-06-16 21:08:37 +00:00
Evgeniy Stepanov 69e2b1d4a7 Change YAML traits for vector<string> to flow_vector.
This is a workaround for an ODR conflict with the definition in
AMDGPUCodeObjectMetadata.cpp.

llvm-svn: 305584
2017-06-16 20:50:41 +00:00
Wei Mi a2493b6ad9 [GVN] Recommit the patch "Add phi-translate support in scalarpre".
The recommit fixes two bugs: The first one is to use CurrentBlock instead of
PREInstr's Parent as param of performScalarPREInsertion because the Parent
of a clone instruction may be uninitialized. The second one is to stop PRE when
the edge between CurrentBlock and its predecessor is a backedge and an operand of
CurInst is defined inside of CurrentBlock; the same value defined inside the loop
in the last iteration cannot be regarded as available.

Right now scalarpre doesn't have phi-translate support, so it will miss some
simple pre opportunities. Like the following testcase, current scalarpre cannot
recognize the last "a * b" is fully redundent because a and b used by the last
"a * b" expr are both defined by phis.

long a[100], b[100], g1, g2, g3;
__attribute__((pure)) long goo();

void foo(long a, long b, long c, long d) {

  g1 = a * b;
  if (__builtin_expect(g2 > 3, 0)) {
    a = c;
    b = d;
    g2 = a * b;
  }
  g3 = a * b;      // fully redundant.

}
The patch adds phi-translate support in scalarpre. This is only a temporary
solution before the newpre based on newgvn is available.

Differential Revision: https://reviews.llvm.org/D32252

llvm-svn: 305578
2017-06-16 20:21:01 +00:00
Craig Topper 31399b7024 [InstCombine] Add test cases to show missed opportunities due to overly conservative single use checks. NFC
llvm-svn: 305562
2017-06-16 16:44:36 +00:00
Daniel Neilson 3faabbbe85 [Atomics] Rename and change prototype for atomic memcpy intrinsic
Summary:

Background: http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html

This change alters the prototype for the atomic memcpy intrinsic. The prototype itself is being changed to more closely resemble the semantics and parameters of the llvm.memcpy intrinsic -- to ease later combination of the llvm.memcpy and atomic memcpy intrinsics. Furthermore, the name of the atomic memcpy intrinsic is being changed to make it clear that it is not a generic atomic memcpy, but specifically a memcpy that is unordered atomic.

Reviewers: reames, sanjoy, efriedma

Reviewed By: reames

Subscribers: mzolotukhin, anna, llvm-commits, skatkov

Differential Revision: https://reviews.llvm.org/D33240

llvm-svn: 305558
2017-06-16 14:43:59 +00:00
Craig Topper da6ea0d3e8 [InstCombine] Fold (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2) if K1 and K2 are a 1-bit mask
Summary: This is the demorganed version of the case we already handle for the OR of iszero.
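
With K1 = 1 and K2 = 2, for example, the fold looks like this (a sketch):

  %a1 = and i32 %A, 1
  %c1 = icmp ne i32 %a1, 0
  %a2 = and i32 %A, 2
  %c2 = icmp ne i32 %a2, 0
  %r  = and i1 %c1, %c2
    =>
  %m  = and i32 %A, 3
  %r  = icmp eq i32 %m, 3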

Reviewers: spatel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34244

llvm-svn: 305548
2017-06-16 05:10:37 +00:00
Evgeniy Stepanov 4d4ee93d25 [cfi] CFI-ICall for ThinLTO.
Implement ControlFlowIntegrity for indirect function calls in ThinLTO.
Design follows the RFC in llvm-dev, see
https://groups.google.com/d/msg/llvm-dev/MgUlaphu4Qc/kywu0AqjAQAJ

llvm-svn: 305533
2017-06-16 00:18:29 +00:00
Craig Topper ca73ba019b [InstCombine] Add test cases to demonstrate instcombine increasing instruction count when trying to fold (select (icmp eq (and X, C1), 0), Y, (or Y, C2))->(or (shl (and X, C1), C3), y) when the pieces have multiple uses.
llvm-svn: 305509
2017-06-15 21:38:44 +00:00
Craig Topper 51dc5327c2 [InstCombine] Pre-commit test cases for the transform proposed in D34244.
llvm-svn: 305492
2017-06-15 18:56:05 +00:00
Craig Topper 6eec9e21a5 [InstCombine] Handle (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2) when the one of the Ands is commuted relative to the other
Currently we expect A to be on the same side in both Ands but nothing guarantees that.

While there, also switch to using matchers for some of the code.

Differential Revision: https://reviews.llvm.org/D34230

llvm-svn: 305487
2017-06-15 17:55:20 +00:00
Craig Topper 6ec5a63073 [BasicAA] Add test case that goes with r305481.
Forgot to 'git add' the file.

llvm-svn: 305483
2017-06-15 17:27:56 +00:00
Sanjay Patel 4749669583 [InstCombine] auto-generate complete checks; NFC
llvm-svn: 305474
2017-06-15 15:13:19 +00:00
Craig Topper 6ea89f2497 [InstCombine] Add a test case to show a case where don't handle a partially commuted IR. NFC
llvm-svn: 305438
2017-06-15 01:01:01 +00:00
Daniel Berlin 6d2db9edb2 PredicateInfo: Don't insert conditional info when a conditional branch jumps to the same target regardless of condition
llvm-svn: 305416
2017-06-14 21:19:52 +00:00
Davide Italiano 0dc4778067 [EarlyCSE] Make PhiToCheck in removeMSSA() a set.
This way we end up not looking at PHI args already removed.
MemSSA now goes through the updater so we can prune
it to avoid having redundant MemoryPHI arguments, but that
doesn't quite work for the general case.

Discussed with Daniel Berlin, fixes PR33406.

llvm-svn: 305409
2017-06-14 19:29:53 +00:00
Craig Topper f93b7b1c1f [ValueTracking] Correct early out in computeKnownBitsFromOperator to work with non power of 2 bit widths
There's an early out that's trying to detect when we don't know any bits that make up the legal range of a shift. The code subtracts one from BitWidth, which creates a mask in the lower bits for power-of-2 bit widths. This is then ANDed with the known bits to see if any of those bits are known. If the bit width isn't a power of 2, this creates a nonsensical mask.

This patch corrects this by rounding up to a power of 2 before doing the subtract and mask.

Differential Revision: https://reviews.llvm.org/D34165

llvm-svn: 305400
2017-06-14 17:04:59 +00:00
Florian Hahn ffc498dfcc Align definition of DW_OP_plus with DWARF spec [3/3]
Summary:
This patch is part of 3 patches that together form a single patch, but must be introduced in stages in order not to break things.
 
The way that LLVM interprets DW_OP_plus in DIExpression nodes is basically that of the DW_OP_plus_uconst operator since LLVM expects an unsigned constant operand. This unnecessarily restricts the DW_OP_plus operator, preventing it from being used to describe the evaluation of runtime values on the expression stack. These patches try to align the semantics of DW_OP_plus and DW_OP_minus with that of the DWARF definition, which pops two elements off the expression stack, performs the operation and pushes the result back on the stack.
 
This is done in three stages:
• The first patch (LLVM) adds support for DW_OP_plus_uconst.
• The second patch (Clang) changes all its uses from DW_OP_plus to DW_OP_plus_uconst.
• The third patch (LLVM) changes the semantics of DW_OP_plus and DW_OP_minus to be in line with its DWARF meaning. This patch includes the bitcode upgrade from legacy DIExpressions.

Patch by Sander de Smalen.

Reviewers: echristo, pcc, aprantl

Reviewed By: aprantl

Subscribers: fhahn, javed.absar, aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D33894

llvm-svn: 305386
2017-06-14 13:14:38 +00:00
Craig Topper 82e4c2dfd2 [InstCombine] Add test cases demonstrating failure to handle (select (icmp eq (and X, C1), 0), Y, (or Y, C2)) when the icmp portion gets turned into a truncate and a signed compare with 0.
InstCombine has an optimization that recognizes an and with the sign bit of legal type size and turns it into a truncate and compare that checks the sign bit. But the select handling code doesn't recognize this idiom.

llvm-svn: 305338
2017-06-13 23:30:41 +00:00
Teresa Johnson 8015f88525 [PGO] Update VP metadata after memory intrinsic optimization
Summary:
Leave an updated VP metadata on the fallback memcpy intrinsic after
specialization. This can be used for later possible expansion based on
the average of the remaining values.

Reviewers: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34164

llvm-svn: 305321
2017-06-13 20:44:08 +00:00
David Blaikie ae8c4af4ac Inliner: Don't remove calls to readnone+nounwind (but not always_inline) functions in the AlwaysInliner
llvm-svn: 305245
2017-06-12 23:01:17 +00:00
Anna Thomas 4b027e8f89 [RS4GC] Drop invalid metadata after pointers are relocated
Summary:
After RS4GC, we should drop metadata that is no longer valid. This metadata
is used by optimizations scheduled after RS4GC, and can cause a miscompile.
One such metadata is invariant.load, which is used by the LICM sinking transform.
After rewriting statepoints, the address of a load may be relocated. With
invariant.load metadata on a load instruction, LICM sinking assumes the
loaded value (from a dereferenceable address) to be invariant, and
rematerializes the load operand and the load at the exit block.
This transforms the IR to have an unrelocated use of the
address after a statepoint, which is incorrect.
Other metadata we conservatively remove are related to
dereferenceability and noalias metadata.

This patch drops such metadata on store and load instructions after
rewriting statepoints.

Reviewers: reames, sanjoy, apilipenko

Reviewed by: reames

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33756

llvm-svn: 305234
2017-06-12 21:26:53 +00:00
Sanjay Patel 2e33bbaff0 [InstCombine] lshr (sext iM X to iN), N-M --> zext (ashr X, min(N-M, M-1)) to iN
This is a follow-up to https://reviews.llvm.org/D33879 / https://reviews.llvm.org/rL304939 ,
and was discussed in https://reviews.llvm.org/D33338.

We prefer this form because a narrower shift may be cheaper, and we can more easily fold a
zext than a sext.

http://rise4fun.com/Alive/slVe

Name: shz
%s = sext i8 %x to i12
%r = lshr i12 %s, 4
=>
%a = ashr i8 %x, 4
%r = zext i8 %a to i12 

llvm-svn: 305190
2017-06-12 14:23:43 +00:00
Xinliang David Li 7ed6cd32ea [PartialInlining] Support shrinkwrap life_range markers
Differential Revision: http://reviews.llvm.org/D33847

llvm-svn: 305170
2017-06-11 20:46:05 +00:00
Simon Pilgrim 448c2290f1 [X86][SLM] Add SLM arithmetic vectorization tests
As discussed on D33983, since SLM has so many custom costs it's worth testing as well.

llvm-svn: 305151
2017-06-10 19:16:09 +00:00
Andrew Kaylor 647025f9e1 [InstSimplify] Don't constant fold or DCE calls that are marked nobuiltin
Differential Revision: https://reviews.llvm.org/D33737

llvm-svn: 305132
2017-06-09 23:18:11 +00:00
Yaxun Liu 6455b0dbf3 [SROA] Fix APInt size when load/store have different address space
Currently there is a bug in SROA::presplitLoadsAndStores which causes an assertion in
GEPOperator::accumulateConstantOffset.

Basically it does not consider the situation where the pointer operand of a load or store
may be in a non-zero address space and its size may be different from the size of
a pointer in address space 0.

This patch fixes an assertion when compiling Blender Cycles kernels for the amdgpu backend.

Differential Revision: https://reviews.llvm.org/D33298

llvm-svn: 305107
2017-06-09 20:46:29 +00:00
Keno Fischer 5329174cb1 [Sink] Fix predicate in legality check
Summary:
isSafeToSpeculativelyExecute is the wrong predicate to use here.
All that checks for is whether it is safe to hoist a value due to
unaligned/un-dereferenceable accesses. However, not only are we doing
sinking rather than hoisting, our concern is that the location
we're loading from may have been modified. Instead forbid sinking
any load across a critical edge.

Reviewers: majnemer

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D33179

llvm-svn: 305102
2017-06-09 19:31:10 +00:00
Serguei Katkov 38414b57f9 [IndVars] Add an option to be able to disable LFTR
This change adds an option, disable-lftr, to be able to disable the Linear Function Test Replace optimization.
By default the option is off, so current behavior is not changed.

Reviewers: reames, sanjoy, wmi, andreadb, apilipenko
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33979

llvm-svn: 305055
2017-06-09 06:11:59 +00:00
George Burgess IV a20352e13e [LoopVectorize] Don't preserve nsw/nuw flags on shrunken ops.
If we're shrinking a binary operation, it may be the case that the new
operation wraps where the old one didn't. If this happens, the behavior
should be well-defined. So, we can't always carry wrapping flags with us
when we shrink operations.

If we do, we get incorrect optimizations in cases like:

void foo(const unsigned char *from, unsigned char *to, int n) {
  for (int i = 0; i < n; i++)
    to[i] = from[i] - 128;
}

which gets optimized to:

void foo(const unsigned char *from, unsigned char *to, int n) {
  for (int i = 0; i < n; i++)
    to[i] = from[i] | 128;
}

Because:
- InstCombine turned `sub i32 %from.i, 128` into
  `add nuw nsw i32 %from.i, 128`.
- LoopVectorize vectorized the add to be `add nuw nsw <16 x i8>` with a
  vector full of `i8 128`s
- InstCombine took advantage of the fact that the newly-shrunken add
  "couldn't wrap", and changed the `add` to an `or`.

InstCombine seems happy to figure out whether we can add nuw/nsw on its
own, so I just decided to drop the flags. There are already a number of
places in LoopVectorize where we rely on InstCombine to clean up.
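
A minimal sketch (hypothetical IR, not from the patch) of the shrunken operation after this change; note the absence of nuw/nsw:

```
define <16 x i8> @shrunken_add(<16 x i8> %v) {
  ; With the flags dropped, InstCombine can no longer assume the narrow add
  ; cannot wrap and turn it into an 'or'.
  %a = add <16 x i8> %v, <i8 128, i8 128, i8 128, i8 128, i8 128, i8 128, i8 128, i8 128,
                          i8 128, i8 128, i8 128, i8 128, i8 128, i8 128, i8 128, i8 128>
  ret <16 x i8> %a
}
```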

llvm-svn: 305053
2017-06-09 03:56:15 +00:00
David Blaikie cb9327b02d Inliner: Don't touch indirect calls
Other comments/implications are that this isn't intended behavior (nor
preserved/reimplemented in the new inliner) and complicates fixing the
'inlining' of trivially dead calls without consulting the cost function
first.

llvm-svn: 305052
2017-06-09 03:29:20 +00:00
Evgeniy Stepanov d02dbf6b1c [CFI] Remove LinkerSubsectionsViaSymbols.
Since D17854 LinkerSubsectionsViaSymbols is unnecessary.

It is interfering with the ThinLTO implementation of CFI-ICall, where
the aliases used on the !LinkerSubsectionsViaSymbols branch are
needed to export jump tables to ThinLTO backends.

This is the second attempt to land this change after fixing PR33316.

llvm-svn: 305031
2017-06-08 23:38:22 +00:00
Peter Collingbourne e357fbd243 Write summaries for merged modules when splitting modules for ThinLTO.
This is to prepare to allow for dead stripping of globals in the
merged modules.

Differential Revision: https://reviews.llvm.org/D33921

llvm-svn: 305027
2017-06-08 23:01:49 +00:00
Sanjay Patel bc6d8b0f51 [CGP, x86] add tests for potential memcmp expansion; NFC
No IR tests were added with rL304313 ( https://reviews.llvm.org/D28637 ),
so I want these for extra coverage if we enable memcmp expansion for x86.
As shown, nothing is expanded for x86 in CGP yet.

Also fundamentally, we're doing an IR transform, so we should have IR tests 
for just that part. If something goes wrong, we need to know if the bug is
in CGP or later lowering.
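
A hypothetical C-level candidate for such expansion (illustrative only, not one of the new tests): a small, constant-length memcmp that could be lowered to wide integer loads and compares instead of a library call.

```
#include <string.h>

/* Illustrative only: with expansion enabled, a compare like this could become
   a couple of 64-bit loads plus integer comparisons in CGP. */
int is_equal16(const void *a, const void *b) {
  return memcmp(a, b, 16) == 0;
}
```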

llvm-svn: 305011
2017-06-08 20:40:39 +00:00
Dehao Chen e2a428bad7 Do not early-inline recursive calls in sample profile loader.
Summary: Early-inlining of recursive calls makes the code size bloat exponentially. We should not do it.

Reviewers: davidxl, dnovillo, iteratee

Reviewed By: iteratee

Subscribers: iteratee, llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D34017

llvm-svn: 305009
2017-06-08 20:11:57 +00:00
Galina Kistanova e128958552 Changed a comparison operator for std::stable_sort to implement strict weak ordering.
This is a temporary fix which needs additional work, as it triggers a test3 failure.
test3 is commented out until then.
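
For context, an illustrative (not LLVM's actual) comparator pair: std::stable_sort requires a strict weak ordering, so a '<='-style predicate is invalid while a '<'-style one is fine.

```
#include <algorithm>
#include <vector>

struct Item { int Priority; };

// Invalid for std::stable_sort: cmp(a, a) returns true, violating irreflexivity.
static bool badLess(const Item &A, const Item &B) { return A.Priority <= B.Priority; }

// A valid strict weak ordering.
static bool goodLess(const Item &A, const Item &B) { return A.Priority < B.Priority; }

int main() {
  std::vector<Item> V = {{3}, {1}, {2}};
  std::stable_sort(V.begin(), V.end(), goodLess); // using badLess here would be UB
  (void)badLess;
  return 0;
}
```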

llvm-svn: 304993
2017-06-08 17:27:40 +00:00
Nirav Dave 62fb8498d3 InferAddressSpaces: Avoid assertion failure when replacing identical
cloned constexpr

Have cloneConstantExprWithNewAddressSpaces return nullptr when it would
return the initial ConstantExpr.

Reviewers: arsenm

Subscribers: jholewinski, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D33995

llvm-svn: 304975
2017-06-08 13:20:55 +00:00
Simon Pilgrim 631109c09c Regenerate test
llvm-svn: 304973
2017-06-08 10:24:49 +00:00
Sanjay Patel 66f7fdb300 [InstCombine] fold lshr (sext X), C1 --> zext (lshr X, C2)
This was discussed in D33338. We have larger pattern-matching ending in a truncate that 
we can reduce or remove by handling these smaller patterns first. Further motivation is 
that narrower shift ops are easier for value tracking and zext is better than sext.

http://rise4fun.com/Alive/rhh

Name: boolshift
%sext = sext i1 %x to i8
%r = lshr i8 %sext, 7

=>

%r = zext i1 %x to i8

Name: noboolshift
%sext = sext i3 %x to i8
%r = lshr i8 %sext, 7

=>

%sh = lshr i3 %x, 2
%r = zext i3 %sh to i8

Differential Revision: https://reviews.llvm.org/D33879

llvm-svn: 304939
2017-06-07 20:32:08 +00:00
Xinliang David Li 4f49bee764 Fix builtin_expect lowering bug
PR33346

Skip cases where the expected value is not a constant int.
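
A hedged sketch of the problem shape in IR (hypothetical, not the PR's exact reproducer): the expected-value operand of llvm.expect is not a ConstantInt, so lowering must skip it rather than assert.

```
declare i64 @llvm.expect.i64(i64, i64)

define i64 @no_constant_expected(i64 %x, i64 %hint) {
  ; %hint is not a constant integer; the lowering now leaves this call alone.
  %e = call i64 @llvm.expect.i64(i64 %x, i64 %hint)
  ret i64 %e
}
```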

llvm-svn: 304933
2017-06-07 18:32:24 +00:00
Peter Collingbourne aaae7eed5c LowerTypeTests: Generate simpler IR for br(llvm.type.test, then, else).
This makes it so that the code quality for CFI checks when compiling
with -O2 and linking with --lto-O0 is similar to that of the rest of
the code.

Reduces the size of a chrome binary built with -O2/--lto-O0 by
about 750KB.
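
The targeted pattern, sketched as hypothetical IR (not taken from the patch's tests): a conditional branch taken directly on the result of llvm.type.test.

```
declare i1 @llvm.type.test(i8*, metadata)
declare void @llvm.trap()

define void @cfi_check(i8* %vtable) {
entry:
  ; Branch directly on the type test; LowerTypeTests can emit simpler IR here.
  %ok = call i1 @llvm.type.test(i8* %vtable, metadata !"_ZTS1A")
  br i1 %ok, label %cont, label %trap

trap:
  call void @llvm.trap()
  unreachable

cont:
  ret void
}
```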

Differential Revision: https://reviews.llvm.org/D33925

llvm-svn: 304921
2017-06-07 15:49:14 +00:00
NAKAMURA Takumi b34fd817ba Introduce the new feature "abi-breaking-checks" to satisfy -reverse-iterate in llvm/test/Transforms/Util/PredicateInfo/
A few tests in llvm/test/Transforms/Util/PredicateInfo/ are using -reverse-iterate.
The option -reverse-iterate is normally enabled with +Asserts, but it can be turned on/off regardless of LLVM_ENABLE_ASSERTIONS.

I wonder whether this is incompatible with https://reviews.llvm.org/D33908 (r304757).
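
A sketch of how an affected test might be gated on the new feature (the pass invocation below is an assumption for illustration; only the feature and flag names come from this change):

```
; REQUIRES: abi-breaking-checks
; RUN: opt < %s -print-predicateinfo -reverse-iterate -analyze 2>&1 | FileCheck %s
```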

Differential Revision: https://reviews.llvm.org/D33854

llvm-svn: 304851
2017-06-07 00:22:52 +00:00
Evgeny Stupachenko ff9d80cc7a Added tests for X86InterleavedStore.
Reviewers: RKSimon, DavidKreitzer

Differential Revision: https://reviews.llvm.org/D33684

Patch by: Aleen Farhana <Farhana.aleen@gmail.com>

llvm-svn: 304834
2017-06-06 21:08:00 +00:00
Alexey Bataev 0c2fca94a3 [SLP] Change extension of the test, NFC.
llvm-svn: 304829
2017-06-06 20:27:45 +00:00
Alexey Bataev 7266fec0d4 [SLP] Add a test for fix of PR32164, NFC.
llvm-svn: 304826
2017-06-06 20:11:35 +00:00
Evgeny Stupachenko 3b88291581 Fix PR23384 (part 3 of 3)
Summary:
The patch makes instruction count the highest priority in the
 LSR solution for X86 (previously, register count had the highest priority).

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D30562

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 304824
2017-06-06 20:04:16 +00:00
Anna Thomas 2f1556726c [LoopIdiom] Move X86 specific atomic memcpy test to the X86 directory
Patch https://reviews.llvm.org/rL304806 was causing failures on AArch64
and multiple other targets, since the test should be run on X86 only.

Specifying the target triple is not enough, so move the testcase to the
X86 target directory in LoopIdiom.

llvm-svn: 304809
2017-06-06 17:46:41 +00:00
Daniel Berlin eafdd862e5 NewGVN: Fix PR/33187. This is a bug caused by two things:
1. When there is no perfect iteration order, we can't let phi nodes
put themselves in terms of things that come later in the iteration
order, or we will endlessly cycle (the normal RPO algorithm clears the
hashtable to avoid this issue).
2. We are sometimes erasing the wrong expression (causing pessimism)
because our equality says loads and stores are the same.
We introduce an exact equality function and use it when erasing to
make sure we erase only identical expressions, not equivalent ones.

llvm-svn: 304807
2017-06-06 17:15:28 +00:00
Anna Thomas b2a212c070 [Atomics][LoopIdiom] Recognize unordered atomic memcpy
Summary:
Expanding the loop idiom recognition for memcpy to also recognize
unordered atomic memcpy. The only difference between recognizing
an unordered atomic memcpy and a normal memcpy is
that the loads and/or stores involved are unordered atomic operations.
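
A hedged sketch (not one of the patch's tests) of the loop shape now recognized; the only change from a plain memcpy idiom is the 'atomic ... unordered' on the element accesses.

```
define void @copy(i32* %dst, i32* %src, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %s = getelementptr inbounds i32, i32* %src, i64 %i
  %d = getelementptr inbounds i32, i32* %dst, i64 %i
  %v = load atomic i32, i32* %s unordered, align 4
  store atomic i32 %v, i32* %d unordered, align 4
  %i.next = add nuw nsw i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```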

Background:  http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html

Patch by Daniel Neilson!

Reviewers: reames, anna, skatkov

Reviewed By: reames, anna

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D33243

llvm-svn: 304806
2017-06-06 16:45:25 +00:00
Anna Thomas 7218032019 [IRCE] Canonicalize pre/post loops after the blocks are added into parent loop
Summary:
We were canonicalizing the pre loop (into loop-simplify form) before
the post loop blocks were added into the parent loop. This is incorrect when IRCE is
done on a subloop. The post-loop blocks are created, but not yet added to the
parent loop, so loop-simplification on the pre-loop incorrectly updates
LoopInfo.

This patch corrects the ordering so that pre and post loop blocks are added to
parent loop (if any), and then the loops are canonicalized to LCSSA and
LoopSimplifyForm.

Reviewers: reames, sanjoy, apilipenko

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33846

llvm-svn: 304800
2017-06-06 14:54:01 +00:00
Joey Gouly 61eaa63b65 [InstSimplify] Constant fold the new GEP in SimplifyGEPInst.
llvm-svn: 304784
2017-06-06 10:17:14 +00:00
Sven van Haastregt 78819e0fd4 [InstCombine] Fix extractelement use before def
This fixes a bug that can cause extractelements with operands that
haven't been defined yet to be inserted at the wrong point when
optimising insertelements.

Patch by Karl Hylen.

Differential Revision: https://reviews.llvm.org/D33449

llvm-svn: 304701
2017-06-05 09:18:10 +00:00
Craig Topper 2b54baeb96 [X86] Replace 'REQUIRES: x86' in tests with 'REQUIRES: x86-registered-target' which seems to be the correct way to make them run on an x86 build.
llvm-svn: 304682
2017-06-04 08:21:58 +00:00
Craig Topper fe9ad82e44 [ConstantFolding] Properly support constant folding of vector powi intrinsic. The second argument is not a vector so needs special treatment.
llvm-svn: 304679
2017-06-04 07:30:28 +00:00
Craig Topper 97f113e795 [InstSimplify] Add test case demonstrating that we fail to constant fold vector llvm.powi intrinsics due to the second argument not being a vector.
llvm-svn: 304678
2017-06-04 07:30:23 +00:00
Craig Topper 0799ff9e64 [InstCombine] Add support for simplifying ctlz/cttz intrinsics based on known bits.
llvm-svn: 304669
2017-06-03 18:50:32 +00:00
Craig Topper 7c553edced [ConstantFolding] Fix constant folding for vector cttz and ctlz intrinsics to understand that the second argument is still a scalar.
llvm-svn: 304668
2017-06-03 18:50:29 +00:00
Craig Topper 36fa2f0dee [InstCombine][InstSimplify] Add various tests for ctlz/cttz with vectors, some showing missed optimizations. NFC
llvm-svn: 304667
2017-06-03 18:50:26 +00:00
Craig Topper 622c0f89ec [InstCombine] Use cttz instead of ctlz in the cttz_cmp_vec test case. Looks like a copy paste mistake.
llvm-svn: 304666
2017-06-03 18:50:23 +00:00
Evgeniy Stepanov 704003ea3d Revert "[CFI] Remove LinkerSubsectionsViaSymbols."
This reverts commit r304582: breaks cfi-devirt :: anon-namespace.cpp on Darwin.

llvm-svn: 304626
2017-06-03 00:46:27 +00:00
Xinliang David Li 0b7d858fa3 [PartialInlining] Minor cost analysis tuning
Also added a test option and 2 cost analysis related tests.

llvm-svn: 304599
2017-06-02 22:08:04 +00:00
Jun Bum Lim 2960d41e68 [InlineCost] Enable the new switch cost heuristic
Summary:
This enables the new switch inline cost heuristic (r301649) by removing the
old heuristic as well as the flag itself.
In my experiments with the LLVM test suite and spec2000/2006, a +17.82% performance gain and an
8% code size reduction were observed in spec2000/vortex with O3 LTO on AArch64.
No significant code size / performance regression was found in O3/O2/Os. No
significant complaints were reported from the llvm-dev thread.

Reviewers: hans, chandlerc, eraman, haicheng, mcrosier, bmakam, eastig, ddibyend, echristo

Reviewed By: echristo

Subscribers: javed.absar, kristof.beyls, echristo, aemerson, rengolin, mehdi_amini

Differential Revision: https://reviews.llvm.org/D32653

llvm-svn: 304594
2017-06-02 20:42:54 +00:00
Keno Fischer 514a6a54e7 [SROA] Fix crash due to bad bitcast
Summary:
As shown in the test case, SROA was crashing when trying to split
stores (to the alloca) of loads (from anywhere), because it assumed
the pointer operand to the loads and stores had to have the same
address space. This isn't the case. Make sure to use the correct
pointer type for both the load and the store.
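
A hedged sketch of that shape (hypothetical IR, not the actual test case): the value stored to the alloca is loaded through a pointer in a different address space.

```
define void @store_from_other_as(i64 addrspace(1)* %src) {
  ; The load's pointer is in addrspace(1) while the alloca slot is addressed
  ; in addrspace(0); a split store must use the load's own pointer type.
  %a = alloca i64
  %v = load i64, i64 addrspace(1)* %src
  store i64 %v, i64* %a
  ret void
}
```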

Reviewed By: yaxunl
Differential Revision: https://reviews.llvm.org/D32593

llvm-svn: 304585
2017-06-02 19:04:17 +00:00
Evgeniy Stepanov 63f056327d [CFI] Remove LinkerSubsectionsViaSymbols.
Since D17854 LinkerSubsectionsViaSymbols is unnecessary.

It is interfering with the ThinLTO implementation of CFI-ICall, where
the aliases used on the !LinkerSubsectionsViaSymbols branch are
needed to export jump tables to ThinLTO backends.

llvm-svn: 304582
2017-06-02 18:45:14 +00:00
Evgeniy Stepanov b933ad3a77 Skip CFI for dead functions.
Differential Revision: https://reviews.llvm.org/D33805

llvm-svn: 304578
2017-06-02 18:24:23 +00:00
Sanjay Patel ce241f48c5 [InstCombine] fix icmp with not op and constant to work with splat vector constant
llvm-svn: 304562
2017-06-02 16:29:41 +00:00
Craig Topper b23e7c78a5 [InstSimplify][ConstantFolding] Teach constant folding how to handle icmp null, (inttoptr x) as well as it handles icmp (inttoptr x), null
Summary:
The constant folding code currently assumes that the constant expression will always be on the left and the simple null will be on the right. But that's not true at least on the path from InstSimplify.

This patch adds support to ConstantFolding to detect the reversed case.
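
A hedged sketch of the reversed form now handled: the simple null sits on the left and the inttoptr constant expression on the right.

```
define i1 @null_on_lhs() {
  ; Both operands are constants; folding should succeed no matter which side
  ; the constant expression appears on.
  %c = icmp eq i8* null, inttoptr (i64 42 to i8*)
  ret i1 %c
}
```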

Reviewers: spatel, dberlin, majnemer, davide, joey

Reviewed By: joey

Subscribers: joey, llvm-commits

Differential Revision: https://reviews.llvm.org/D33801

llvm-svn: 304559
2017-06-02 16:17:32 +00:00
Sanjay Patel 630a524e8d [InstCombine] fix/add tests for icmp with not ops; NFC
The existing test was not minimal, and there was no coverage
for the variants with a constant or vector types.

llvm-svn: 304555
2017-06-02 15:35:45 +00:00
Gor Nishanov 053d2d24f7 [coroutines] PR33271: Remove stray coro.save intrinsics during CoroSplit
Summary:
Optimization passes may remove an llvm.coro.suspend intrinsic while leaving the matching llvm.coro.save intrinsic orphaned.
Make sure we clean up orphaned coro.saves. The bug manifested as a crash similar to this:

```
    llvm_unreachable("Unknown type!");
    llvm::MVT::getVT (Ty=0x489518, HandleUnknown=false)
    llvm::EVT::getEVT
    llvm::TargetLoweringBase::getValueType
    llvm::ComputeValueVTs
    llvm::SelectionDAGBuilder::visitTargetIntrinsic
```

Reviewers: GorNishanov

Subscribers: EricWF, llvm-commits

Differential Revision: https://reviews.llvm.org/D33817

llvm-svn: 304518
2017-06-02 02:18:36 +00:00
Xinliang David Li 621e8dcf1f [Profile] Enhance expect lowering to handle correlated branches
builtin_expect applied to && or || expressions was not
handled properly before. With this patch, the problem is fixed.
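
A hypothetical C snippet of the now-handled pattern: the expectation wraps a logical && expression rather than a single comparison, so it should be propagated to the branches generated for the sub-conditions.

```
/* Illustrative only: the hint applies to the combined condition. */
int both_positive(int a, int b) {
  if (__builtin_expect(a > 0 && b > 0, 1))
    return 1;
  return 0;
}
```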

Differential Revision: http://reviews.llvm.org/D33164

llvm-svn: 304517
2017-06-02 02:09:31 +00:00
Mandeep Singh Grang fce1f464ac [PredicateInfo] Enable -reverse-iterate tests only for +Asserts builds
Summary: The flag -reverse-iterate is present only on +Asserts builds.

Reviewers: dberlin, davide, RKSimon, efriedma, chapuni

Reviewed By: efriedma, chapuni

Subscribers: chapuni, llvm-commits

Differential Revision: https://reviews.llvm.org/D33795

llvm-svn: 304498
2017-06-01 23:52:59 +00:00