Commit Graph

780 Commits

Thomas Lively d47b5c7bed [ValueTracking] Allow select patterns to work on FP vectors
Summary:
This CL allows constant vectors of floats to be recognized as non-NaN
and non-zero in select patterns. This change makes
`matchSelectPattern` more powerful generally, but was motivated
specifically because I wanted fminnan and fmaxnan to be created for
vector versions of the scalar patterns they are created for.

Tested with check-all on all targets. A testcase in the WebAssembly
backend that tests the non-nan codepath is in an upcoming CL.
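
Illustrative IR sketch (hypothetical, not from the patch): the constant operand is a non-NaN, non-zero FP splat, so matchSelectPattern can now classify this vector select as an FP max pattern.

  define <4 x float> @max_with_one(<4 x float> %x) {
    %cmp = fcmp ogt <4 x float> %x, <float 1.0, float 1.0, float 1.0, float 1.0>
    ; the splat of 1.0 is recognized as non-NaN and non-zero
    %sel = select <4 x i1> %cmp, <4 x float> %x,
                  <4 x float> <float 1.0, float 1.0, float 1.0, float 1.0>
    ret <4 x float> %sel
  }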

Reviewers: aheejin, dschuff

Subscribers: sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D52324

llvm-svn: 343364
2018-09-28 21:36:43 +00:00
JF Bastien 73d8e4e531 Merge clang's isRepeatedBytePattern with LLVM's isBytewiseValue
Summary:
This code was in CGDecl.cpp and really belongs in LLVM's isBytewiseValue. Teach isBytewiseValue the tricks clang's isRepeatedBytePattern had, including merging undef properly, and recursing on more types.

clang part of this patch: D51752

Subscribers: dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D51751

llvm-svn: 342709
2018-09-21 05:17:42 +00:00
Max Kazantsev 3c284bde3f Re-enable "[NFC] Unify guards detection"
rL340921 has been reverted by rL340923 due to a linkage dependency
from Transforms/Utils to Analysis, which is not allowed. In this patch
this has been fixed: the new utility function has been moved to Analysis.

Differential Revision: https://reviews.llvm.org/D51152

llvm-svn: 341014
2018-08-30 03:39:16 +00:00
Hans Wennborg 2c390c54f6 Revert r340921 "[NFC] Unify guards detection"
This broke the build, see e.g.

http://lab.llvm.org:8011/builders/clang-cmake-armv8-lnt/builds/4626/
http://lab.llvm.org:8011/builders/clang-ppc64be-linux-lnt/builds/18647/
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux/builds/5856/
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/22800/

> We have multiple places in code where we try to identify whether or not
> some instruction is a guard. This patch factors out this logic into a separate
> utility function which works uniformly in all places.
>
> Differential Revision: https://reviews.llvm.org/D51152
> Reviewed By: fedor.sergeev

llvm-svn: 340923
2018-08-29 12:21:32 +00:00
Max Kazantsev 1dafaa87d9 [NFC] Unify guards detection
We have multiple places in code where we try to identify whether or not
some instruction is a guard. This patch factors out this logic into a separate
utility function which works uniformly in all places.
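
For reference, this is the shape of a guard that those call sites look for (illustrative IR, not from the patch):

  declare void @llvm.experimental.guard(i1, ...)

  define void @maybe_deopt(i1 %cond) {
    ; a guard: deoptimize if %cond is false
    call void (i1, ...) @llvm.experimental.guard(i1 %cond) [ "deopt"() ]
    ret void
  }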

Differential Revision: https://reviews.llvm.org/D51152
Reviewed By: fedor.sergeev

llvm-svn: 340921
2018-08-29 11:37:34 +00:00
Craig Topper dfa176e813 [ValueTracking] Fix assert message and add test case for r340546 and PR38677.
The bug was already fixed. This just adds a test case for it.

llvm-svn: 340556
2018-08-23 17:45:53 +00:00
Craig Topper 15f8692381 [ValueTracking] Fix an assert from r340480.
We need to allow ConstantExpr Selects in addition to SelectInst.

I'll try to put together a test case, but I wanted to fix the issues being reported.

Fixes PR38677

llvm-svn: 340546
2018-08-23 17:15:02 +00:00
Craig Topper bec15b6516 [ValueTracking] Teach computeNumSignBits to understand min/max clamp patterns with constant/splat values
If we have a min/max pair we can do a better job of counting sign bits if we look at them together. This is similar to what is done in the SelectionDAG version of computeNumSignBits for ISD::SMAX/SMIN.
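
A hypothetical sketch of such a clamp: looking at the smax/smin selects as a pair, the result is known to lie in [-128, 127], so the i32 value has 25 sign bits, which neither select reveals on its own.

  define i32 @clamp_to_i8_range(i32 %x) {
    %c1 = icmp sgt i32 %x, -128
    %lo = select i1 %c1, i32 %x, i32 -128    ; smax(%x, -128)
    %c2 = icmp slt i32 %lo, 127
    %r  = select i1 %c2, i32 %lo, i32 127    ; smin(%lo, 127)
    ret i32 %r
  }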

Differential Revision: https://reviews.llvm.org/D51112

llvm-svn: 340480
2018-08-22 23:27:50 +00:00
Matt Arsenault 450fcc77a7 ValueTracking: Handle more instructions in isKnownNeverNaN
llvm-svn: 340187
2018-08-20 16:51:00 +00:00
Florian Hahn 19f9e32f07 [InstrSimplify,NewGVN] Add option to ignore additional instr info when simplifying.
NewGVN uses InstructionSimplify for simplifications of leaders of
congruence classes. It is not guaranteed that the metadata or other
flags/keywords (like nsw or exact) of the leader is available for all members
in a congruence class, so we cannot use it for simplification.
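
A hypothetical illustration of why the leader's flags cannot be trusted: the two adds below are congruent, but only one carries nsw, so a simplification justified by the leader's nsw would be unsound for the other member.

  define i32 @congruent_adds(i32 %a, i32 %b, i1 %c) {
  entry:
    br i1 %c, label %left, label %right
  left:
    %x = add nsw i32 %a, %b     ; the leader may be this nsw add
    br label %merge
  right:
    %y = add i32 %a, %b         ; congruent, but without nsw
    br label %merge
  merge:
    %p = phi i32 [ %x, %left ], [ %y, %right ]
    ret i32 %p
  }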

This patch adds an InstrInfoQuery struct with a boolean field
UseInstrInfo (which defaults to true to keep the current behavior as
default) and a set of helper methods to get metadata/keywords for a
given instruction, if UseInstrInfo is true. The whole thing might need a
better name, to avoid confusion with TargetInstrInfo but I am not sure
what a better name would be.

The current patch threads InstrInfoQuery through to the required
places, which is messier than it would need to be if
InstructionSimplify and ValueTracking shared the same Query struct.

The reason I added it as a separate struct is that it can be shared
between InstructionSimplify and ValueTracking's query objects. Also,
some places do not need a full query object, just the InstrInfoQuery.

It also updates some interfaces that do not take a Query object but a
set of optional parameters, to take an additional boolean UseInstrInfo.

See https://bugs.llvm.org/show_bug.cgi?id=37540.

Reviewers: dberlin, davide, efriedma, sebpop, hiraditya

Reviewed By: hiraditya

Differential Revision: https://reviews.llvm.org/D47143

llvm-svn: 340031
2018-08-17 14:39:04 +00:00
Matt Arsenault d54b7f0592 ValueTracking: Start enhancing isKnownNeverNaN
llvm-svn: 339399
2018-08-09 22:40:08 +00:00
Matt Arsenault 56b31d8d75 ValueTracking: Handle canonicalize in CannotBeNegativeZero
Also fix apparently missing test coverage for any of the
handling here.

llvm-svn: 339023
2018-08-06 15:16:26 +00:00
Max Kazantsev 2dbbd64cb7 Re-enable "[ValueTracking] Teach isKnownNonNullFromDominatingCondition about AND"
The patch was reverted because of a bug detected by the sanitizer. The bug has been
fixed and the respective tests added.

Differential Revision: https://reviews.llvm.org/D50172

llvm-svn: 339005
2018-08-06 11:14:18 +00:00
Max Kazantsev 3271f379a9 Revert rL338990 to see if it causes sanitizer failures
Multiple failures reported by sanitizer-x86_64-linux seem to be caused by this
patch. Reverting to see if they persist without it.

Differential Revision: https://reviews.llvm.org/D50172

llvm-svn: 338994
2018-08-06 08:10:28 +00:00
Max Kazantsev 34b0666be9 [ValueTracking] Teach isKnownNonNullFromDominatingCondition about AND
`isKnownNonNullFromDominatingCondition` is able to prove non-nullness based on a `br` or `guard`
on a `%p != null` condition, but is unable to do so based on `(%p != null) && %other_cond`.
This patch allows it to do so.
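
A hypothetical sketch of the newly handled shape, where the dominating branch condition is an `and` that includes the null check:

  define i32 @load_if_checked(i32* %p, i1 %other_cond) {
  entry:
    %nonnull = icmp ne i32* %p, null
    %cond = and i1 %nonnull, %other_cond
    br i1 %cond, label %taken, label %exit
  taken:                        ; here %p can now be proven non-null
    %v = load i32, i32* %p
    ret i32 %v
  exit:
    ret i32 0
  }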

Differential Revision: https://reviews.llvm.org/D50172
Reviewed By: reames

llvm-svn: 338990
2018-08-06 06:11:36 +00:00
Sanjay Patel f9a0d593e9 [ValueTracking] fix maxnum miscompile for cannotBeOrderedLessThanZero (PR37776)
This adds the NAN checks suggested in PR37776:
https://bugs.llvm.org/show_bug.cgi?id=37776

If both operands to maxnum are NAN, that should get constant folded, so we don't 
have to handle that case. This is the same assumption as other FP ops in this
function. Returning 'false' is always conservatively correct.

Copying from the bug report:

Currently, we have this for "when is cannotBeOrderedLessThanZero 
(mustBePositiveOrNaN) true for maxnum":
               L
        -------------------
        | Pos | Neg | NaN |
   ------------------------
   |Pos |  x  |  x  |  x  |
   ------------------------
 R |Neg |  x  |     |  x  |
   ------------------------
   |NaN |  x  |  x  |  x  |
   ------------------------


The cases with (Neg & NaN) are wrong. We should have:

                L
        -------------------
        | Pos | Neg | NaN |
   ------------------------
   |Pos |  x  |  x  |  x  |
   ------------------------
 R |Neg |  x  |     |     |
   ------------------------
   |NaN |  x  |     |  x  |
   ------------------------

Differential Revision: https://reviews.llvm.org/D50081

llvm-svn: 338716
2018-08-02 13:46:20 +00:00
Fangrui Song f78650a8de Remove trailing space
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}

llvm-svn: 338293
2018-07-30 19:41:25 +00:00
Stanislav Mekhanoshin b8269a9589 Fix llvm::ComputeNumSignBits with some operations and llvm.assume
Currently ComputeNumSignBits does an early exit while processing some
of the operations (add, sub, mul, and select). This prevents the
function from using the AssumptionCacheTracker if one is passed.

Differential Revision: https://reviews.llvm.org/D49759

llvm-svn: 337936
2018-07-25 16:39:24 +00:00
Chen Zheng 69bb064539 [InstrSimplify] fold sdiv if two operands are negated and non-overflow
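
A plausible instance of the described fold (hypothetical sketch; the nsw on the negation supplies the non-overflow requirement): dividing a value by its own negation yields -1.

  define i32 @div_by_own_negation(i32 %x) {
    %neg = sub nsw i32 0, %x
    %d = sdiv i32 %x, %neg      ; simplifies to -1 under this fold
    ret i32 %d
  }
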
Differential Revision: https://reviews.llvm.org/D49382

llvm-svn: 337642
2018-07-21 12:27:54 +00:00
Chen Zheng ccc8422464 [InstCombine] add more SPFofSPF folding
Differential Revision: https://reviews.llvm.org/D49238

llvm-svn: 337143
2018-07-16 02:23:00 +00:00
Fangrui Song 9bb6c392e3 [InstCombine] Simplify isKnownNegation
llvm-svn: 336957
2018-07-12 22:56:23 +00:00
Chen Zheng fdf13ef342 [InstSimplify] simplify add instruction if two operands are negative
Differential Revision: https://reviews.llvm.org/D49216

llvm-svn: 336881
2018-07-12 03:06:04 +00:00
Manoj Gupta 77eeac3d9e llvm: Add support for "-fno-delete-null-pointer-checks"
Summary:
Support for this option is needed for building Linux kernel.
This is a very frequently requested feature by kernel developers.

More details : https://lkml.org/lkml/2018/4/4/601

GCC option description for -fdelete-null-pointer-checks:
Assume that programs cannot safely dereference null pointers,
and that no code or data element resides at address zero.

-fno-delete-null-pointer-checks is the inverse of this implying that
null pointer dereferencing is not undefined.

This feature is implemented in this CL as the LLVM IR function attribute
"null-pointer-is-valid"="true" (under review at D47894).
The CL updates several passes that assumed null pointer dereferencing is
undefined to not optimize when the "null-pointer-is-valid"="true"
attribute is present.
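
A rough sketch of the attribute in IR (illustrative only): with it present, passes may no longer fold away null checks on %p just because %p is dereferenced.

  define i32 @f(i32* %p) "null-pointer-is-valid"="true" {
    %v = load i32, i32* %p      ; no longer implies %p != null
    %c = icmp eq i32* %p, null  ; must not be folded to false
    %r = select i1 %c, i32 0, i32 %v
    ret i32 %r
  }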

Reviewers: t.p.northover, efriedma, jyknight, chandlerc, rnk, srhines, void, george.burgess.iv

Reviewed By: efriedma, george.burgess.iv

Subscribers: eraman, haicheng, george.burgess.iv, drinkcat, theraven, reames, sanjoy, xbolva00, llvm-commits

Differential Revision: https://reviews.llvm.org/D47895

llvm-svn: 336613
2018-07-09 22:27:23 +00:00
Vedant Kumar b3091da3af Use Type::isIntOrPtrTy where possible, NFC
It's a bit neater to write T.isIntOrPtrTy() over `T.isIntegerTy() ||
T.isPointerTy()`.

I used Python's re.sub with this regex to update users:

  r'([\w.\->()]+)isIntegerTy\(\)\s*\|\|\s*\1isPointerTy\(\)'

llvm-svn: 336462
2018-07-06 20:17:42 +00:00
Sanjay Patel 284ba0c18f [ValueTracking] allow undef elements when matching vector abs
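
A hypothetical sketch of an abs pattern whose vector constant contains an undef lane, which matching can now tolerate:

  define <2 x i32> @abs_with_undef_lane(<2 x i32> %x) {
    %neg = sub <2 x i32> zeroinitializer, %x
    %cmp = icmp sgt <2 x i32> %x, <i32 -1, i32 undef>
    %abs = select <2 x i1> %cmp, <2 x i32> %x, <2 x i32> %neg
    ret <2 x i32> %abs
  }
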
llvm-svn: 336111
2018-07-02 14:43:40 +00:00
Piotr Padlewski 5b3db45e8f Implement strip.invariant.group
Summary:
This patch introduce new intrinsic -
strip.invariant.group that was described in the
RFC: Devirtualization v2

Reviewers: rsmith, hfinkel, nlopes, sanjoy, amharc, kuhar

Subscribers: arsenm, nhaehnle, JDevlieghere, hiraditya, xbolva00, llvm-commits

Differential Revision: https://reviews.llvm.org/D47103

Co-authored-by: Krzysztof Pszeniczny <krzysztof.pszeniczny@gmail.com>
llvm-svn: 336073
2018-07-02 04:49:30 +00:00
John Brawn c5a6392be3 [ValueTracking] Match select abs pattern when there's an sext involved
When checking a select to see if it matches an abs, allow the true/false values
to be a sign-extension of the comparison value instead of requiring that they're
directly the comparison value, as all the comparison cares about is the sign of
the value.

This fixes a regression due to r333702, where we were no longer generating ctlz
due to isKnownNonNegative failing to match such a pattern.
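
A hypothetical example of the newly matched shape, where the selected values are a sign-extension of the compared value and its negation:

  define i64 @abs_of_sext(i32 %x) {
    %ext = sext i32 %x to i64
    %neg = sub nsw i64 0, %ext
    %cmp = icmp sgt i32 %x, -1
    %abs = select i1 %cmp, i64 %ext, i64 %neg
    ret i64 %abs
  }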

Differential Revision: https://reviews.llvm.org/D47631

llvm-svn: 333927
2018-06-04 16:53:57 +00:00
Karl-Johan Karlsson ebaaa2ddae [ValueTracking] Fix endless recursion in isKnownNonZero()
Summary:
The isKnownNonZero() function has checks that abort the recursion when
it reaches the specified max depth. However, one of the recursive calls
was placed before the max depth check, resulting in an endless
recursion that eventually triggered a segmentation fault.

Fixed the problem by moving the max depth check above the first
recursive call.

Reviewers: Prazek, nlopes, spatel, craig.topper, hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, bjope, llvm-commits

Differential Revision: https://reviews.llvm.org/D47531

llvm-svn: 333557
2018-05-30 15:56:46 +00:00
Craig Topper 8f77dcadff Recommit r333226 "[ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive."
Libfuzzer tests have been fixed to prevent being optimized.

Original commit message:

If the nsw flag is used in the absolute value then it is undefined for INT_MIN. For all other values it will produce a positive number. So we can assume the result is positive.

This breaks some InstCombine abs/nabs combining tests because we simplify the second compare from known bits rather than as the whole pattern. Looks like we can probably fix it by adding a neg+abs/nabs combine to just swap the select operands. Need to check alive to make sure there are no corner cases.

Differential Revision: https://reviews.llvm.org/D47041

llvm-svn: 333300
2018-05-25 19:18:09 +00:00
Craig Topper 8174281b93 Revert r333226 "[ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive."
This breaks some libFuzzer tests. http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fuzzer/builds/15589/steps/check-fuzzer/logs/stdio

Reverting to investigate

llvm-svn: 333253
2018-05-25 04:01:56 +00:00
Craig Topper 49f23fe349 [ValueTracking] Teach computeKnownBits that the result of an absolute value pattern that uses nsw flag is always positive.
If the nsw flag is used in the absolute value then it is undefined for INT_MIN. For all other values it will produce a positive number. So we can assume the result is positive.

This breaks some InstCombine abs/nabs combining tests because we simplify the second compare from known bits rather than as the whole pattern. Looks like we can probably fix it by adding a neg+abs/nabs combine to just swap the select operands. Need to check alive to make sure there are no corner cases.
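
A hypothetical illustration: because the negation carries nsw, the abs result is known non-negative, so computeKnownBits can clear its sign bit and the trailing compare becomes known true.

  define i1 @abs_is_nonnegative(i32 %x) {
    %neg = sub nsw i32 0, %x
    %cmp = icmp sgt i32 %x, -1
    %abs = select i1 %cmp, i32 %x, i32 %neg
    %nonneg = icmp sgt i32 %abs, -1   ; known true with this change
    ret i1 %nonneg
  }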

Differential Revision: https://reviews.llvm.org/D47041

llvm-svn: 333226
2018-05-24 21:22:51 +00:00
Piotr Padlewski d6f7346a4b Fix aliasing of launder.invariant.group
Summary:
Patch for capture tracking broke bootstrap of clang with
-fstrict-vtable-pointers, which resulted in a debugging nightmare.
It was fixed in https://reviews.llvm.org/D46900, but as it turned
out there were other parts, like the inliner (computing of
noalias metadata), that I found after bootstrapping with
assertions enabled.

Reviewers: hfinkel, rsmith, chandlerc, amharc, kuhar

Subscribers: JDevlieghere, eraman, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D47088

llvm-svn: 333070
2018-05-23 09:16:44 +00:00
David Bolvansky 1f343fa0e0 [InstCombine] Remove calloc transformations
Summary: The previous patch does not account for the value being changed between the calloc and the strlen. This needs to be removed from InstCombine and maybe moved to DSE later after some rework.

Reviewers: efriedma

Reviewed By: efriedma

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D47218

llvm-svn: 333022
2018-05-22 20:27:36 +00:00
David Bolvansky 41f4b64ee1 [InstCombine] Calloc-ed strings optimizations
Summary:
Example cases:
strlen(calloc(...)) -> 0
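
A minimal sketch of the fold's shape (hypothetical; as the follow-up revert above notes, intervening writes between the two calls must be ruled out):

  declare i8* @calloc(i64, i64)
  declare i64 @strlen(i8*)

  define i64 @len_of_fresh_calloc() {
    %p = call i8* @calloc(i64 1, i64 16)    ; zero-initialized buffer
    %len = call i64 @strlen(i8* %p)         ; folds to 0
    ret i64 %len
  }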

Reviewers: efriedma, bkramer

Reviewed By: bkramer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D47059

llvm-svn: 332990
2018-05-22 15:41:23 +00:00
Craig Topper f14e62c9a5 [EarlyCSE] Improve EarlyCSE of some absolute value cases.
Change matchSelectPattern to return X and -X for ABS/NABS in a well-defined order. Adjust EarlyCSE to account for this. Ensure the SPF result is some kind of min/max and not abs/nabs in one place in InstCombine that made me nervous.

Previously we returned the two operands of the compare part of the abs pattern. The RHS is always going to be a 0, 1, or -1 constant. This isn't a very meaningful thing to return for anyone. There's also some freedom in the abs pattern as to what happens when the value is equal to 0. This freedom led to EarlyCSE failing to match when different constants were used in otherwise equivalent operations. By returning the input and its negation in a defined order we can ensure an exact match. This also makes sure both patterns use the exact same subtract instruction for the negation. I believe CSE should eventually make this happen and properly merge the nsw/nuw flags. But I'm not familiar with CSE and what order it does things in, so it seemed like it might be good to really enforce that they were the same.

Differential Revision: https://reviews.llvm.org/D47037

llvm-svn: 332865
2018-05-21 18:42:42 +00:00
Piotr Padlewski 5642a42442 Propagate nonnull and dereferenceable through launder
Summary:
invariant.group.launder should not stop propagation
of nonnull and dereferenceable, because e.g. we would not be
able to hoist loads speculatively.
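
An illustrative sketch (the intrinsic mangling follows contemporaneous tests; treat the exact names as an assumption): attributes such as nonnull and dereferenceable on %p can now be used for the laundered value as well.

  declare i8* @llvm.launder.invariant.group.p0i8(i8*)

  define i8* @launder(i8* nonnull dereferenceable(8) %p) {
    %q = call i8* @llvm.launder.invariant.group.p0i8(i8* %p)
    ret i8* %q                  ; %q inherits nonnull/dereferenceable facts
  }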

Reviewers: rsmith, amharc, kuhar, xbolva00, hfinkel

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D46972

llvm-svn: 332788
2018-05-18 23:54:33 +00:00
Omer Paparo Bivas fbb83deef7 [InstCombine] Moving overflow computation logic from InstCombine to ValueTracking; NFC
Differential Revision: https://reviews.llvm.org/D46704

Change-Id: Ifabcbe431a2169743b3cc310f2a34fd706f13f02
llvm-svn: 332026
2018-05-10 19:46:19 +00:00
Shiva Chen 2c864551df [DebugInfo] Add DILabel metadata and intrinsic llvm.dbg.label.
In order to set breakpoints on labels and list source code around
labels, we need to collect debug information for labels, i.e., the label
name, the function the label belongs to, the line number in the file,
and the address where the label is located. In order to keep this
information in LLVM IR and to allow the backend to generate debug
information correctly, we create a new kind of metadata for labels,
DILabel. The format of DILabel is

!DILabel(scope: !1, name: "foo", file: !2, line: 3)

We hope to keep as much debug information as possible even when the
code is optimized. So, we create a new kind of intrinsic for label
metadata to avoid the metadata being eliminated with its basic block.
The intrinsic will keep existing as long as it is not optimized out.
The format of the intrinsic is

llvm.dbg.label(metadata !1)

It has only one argument, the DILabel metadata. The intrinsic
immediately follows the label. The backend can get the label metadata
through the intrinsic's parameter.

We also create a DIBuilder API for labels to be used by the frontend.
The frontend can use createLabel() to allocate DILabel objects, and use
insertLabel() to insert the llvm.dbg.label intrinsic into LLVM IR.

Differential Revision: https://reviews.llvm.org/D45024

Patch by Hsiangkai Wang.

llvm-svn: 331841
2018-05-09 02:40:45 +00:00
Adrian Prantl 5f8f34e459 Remove \brief commands from doxygen comments.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

  for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46290

llvm-svn: 331272
2018-05-01 15:54:18 +00:00
Roman Lebedev 6959b8e76f [PatternMatch] Stabilize the matching order of commutative matchers
Summary:
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the `LHS` and `RHS` matchers:
1. match `RHS` matcher to the `first` operand of binary operator,
2. and then match `LHS` matcher to the `second` operand of binary operator.

This works ok.
But it complicates writing of commutative matchers, where one would like to match
(`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side.

This is additionally complicated by the fact that `m_Specific()` stores the `Value *`,
not `Value **`, so it won't work at all out of the box.

The last problem is trivially solved by adding a new `m_c_Specific()` that stores the
`Value **`, not `Value *`. I'm choosing to add a new matcher, not change the existing
one because i guess all the current users are ok with existing behavior,
and this additional pointer indirection may have performance drawbacks.
Also, i'm storing pointer, not reference, because for some mysterious-to-me reason
it did not work with the reference.

The first one appears trivial, too.
Currently, we
1. match `LHS` matcher to the `first` operand of binary operator,
2. and then match `RHS` matcher to the `second` operand of binary operator.
If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.

Surprisingly, `$ ninja check-llvm` still passes with this.
But i expect the bots will disagree..

The motivational unittest is included.
I'd like to use this in D45664.

Reviewers: spatel, craig.topper, arsenm, RKSimon

Reviewed By: craig.topper

Subscribers: xbolva00, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D45828

llvm-svn: 331085
2018-04-27 21:23:20 +00:00
Roman Lebedev 620b3da38f [InstCombine] Simplify 'add' to 'or' if no common bits are set.
Summary:
In order to get the whole fold as specified in [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]],
let's first handle the simple straightforward things.
Let's start with the `add` -> `or` simplification.

The one obvious thing missing here: the constant mask is not handled.
I have an idea how to handle it, but it will require some thinking,
and is not strictly required here, so i've left that for later.

https://rise4fun.com/Alive/Pkmg
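
A hypothetical example of the basic fold: the masks below select disjoint bits, so the add has no carries and is equivalent to an or.

  define i32 @combine_disjoint_bits(i32 %x, i32 %y) {
    %lo = and i32 %x, 15        ; bits 0..3
    %hi = and i32 %y, -16       ; bits 4..31
    %r  = add i32 %lo, %hi      ; no common bits set -> same as 'or'
    ret i32 %r
  }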

Reviewers: spatel, craig.topper, eli.friedman, jingyue

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45631

llvm-svn: 330101
2018-04-15 18:59:33 +00:00
Sanjay Patel 93e64dd9a1 [PatternMatch] allow undef elements when matching vector FP +0.0
This continues the FP constant pattern matching improvements from:
https://reviews.llvm.org/rL327627
https://reviews.llvm.org/rL327339
https://reviews.llvm.org/rL327307

Several integer constant matchers also have this ability. I'm
separating matching of integer/pointer null from FP positive zero
and renaming/commenting to make the functionality clearer.

llvm-svn: 328461
2018-03-25 21:16:33 +00:00
Philip Reames fbffd126b8 [NFC] Factor out a helper function for checking if a block has a potential early implicit exit.
llvm-svn: 327065
2018-03-08 21:25:30 +00:00
Sanjay Patel 7ed0bc26ac [ValueTracking] move helpers for SelectPatterns from InstCombine to ValueTracking
Most of the folds based on SelectPatternResult belong in InstSimplify rather than
InstCombine, so the helper code should be available to other passes/analysis.

llvm-svn: 326812
2018-03-06 16:57:55 +00:00
Vedant Kumar 1a8456da17 Fix more spelling mistakes in comments of LLVM Analysis passes
Patch by Reshabh Sharma!

Differential Revision: https://reviews.llvm.org/D43939

llvm-svn: 326601
2018-03-02 18:57:02 +00:00
Vedant Kumar d319674a81 Fixed spelling mistake in comments of LLVM Analysis passes
Patch by Reshabh Sharma!

Differential Revision: https://reviews.llvm.org/D43861

llvm-svn: 326352
2018-02-28 19:08:52 +00:00
Craig Topper 301991080e [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to look through ExtractElement.
This is similar to what's done in computeKnownBits and computeSignBits. Don't do anything fancy just collect information valid for any element.

Differential Revision: https://reviews.llvm.org/D43789

llvm-svn: 326237
2018-02-27 19:53:45 +00:00
Craig Topper 69c8972fd1 [ValueTracking] Teach cannotBeOrderedLessThanZeroImpl to handle vector constants.
Summary: This allows vector fabs to be removed in more cases.

Reviewers: spatel, arsenm, RKSimon

Reviewed By: spatel

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D43739

llvm-svn: 326138
2018-02-26 22:33:17 +00:00
Elena Demikhovsky 945b7e5aa6 Adding a width of the GEP index to the Data Layout.
Make the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout.
p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits.
The index size parameter is optional, if not specified, it is equal to the pointer size.

Until now, the InstCombiner normalized GEPs and extended the index operand to the pointer width.
That works fine if you can convert a pointer to an integer for address calculation, and all registered targets do this.
But some ISAs have a very restricted instruction set for pointer calculation. During discussions it was decided to retrieve the information for the GEP index from the Data Layout.
http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html

I added an interface to the Data Layout and I changed the InstCombiner and some other passes to take the Index width into account.
This change does not affect any in-tree target. I added tests to cover data layouts with explicitly specified index size.

Differential Revision: https://reviews.llvm.org/D42123

llvm-svn: 325102
2018-02-14 06:58:08 +00:00
Sanjay Patel a60aec1ab7 [ValueTracking] don't crash when assumptions conflict (PR36270)
The last assume in the test says that %B12 is 0. 
The first assume says that %and1 is less than %B12. 
Therefore, %and1 is unsigned less than 0...does not compute.

That means this line:
Known.Zero.setHighBits(RHSKnown.countMinLeadingZeros() + 1);
...tries to set more bits than exist.

Differential Revision: https://reviews.llvm.org/D43052

llvm-svn: 324610
2018-02-08 14:52:40 +00:00
Simon Pilgrim 9f2ae7e2d1 [InstCombine][ValueTracking] Match non-uniform constant power-of-two vectors
Generalize existing constant matching to work with non-uniform constant vectors as well.

Differential Revision: https://reviews.llvm.org/D42818

llvm-svn: 324369
2018-02-06 18:39:23 +00:00
Sanjay Patel 1d91ec34b2 [ValueTracking] add recursion depth param to matchSelectPattern
We're getting bug reports:
https://bugs.llvm.org/show_bug.cgi?id=35807
https://bugs.llvm.org/show_bug.cgi?id=35840
https://bugs.llvm.org/show_bug.cgi?id=36045
...where we blow up the stack in value tracking because other passes are sending 
in selects that have an operand that is itself the select.

We don't currently have a reliable way to avoid analyzing dead code that may take 
non-standard forms, so bail out when things go too far.

This mimics the recursion depth limitations in other parts of value tracking.

Unfortunately, this pushes the underlying problems for other passes (jump-threading,
simplifycfg, correlated-propagation) into hiding. If someone wants to uncover those
again, the first draft of this patch on Phab would do that (it would assert rather
than bail out).

Differential Revision: https://reviews.llvm.org/D42442

llvm-svn: 323331
2018-01-24 15:20:37 +00:00
Sanjay Patel e63d8dda5a [ValueTracking] recognize min/max-of-min/max with notted ops (PR35875)
This was originally planned as the fix for:
https://bugs.llvm.org/show_bug.cgi?id=35834
...but simpler transforms handled that case, so I implemented a 
lesser solution. It turns out we need to handle the case with 'not'
ops too because the real code example that we are trying to solve:
https://bugs.llvm.org/show_bug.cgi?id=35875
...has extra uses of the intermediate values, so we can't rely on 
smaller canonicalizations to get us to the goal.

As with rL321672, I've tried to show every possibility in the
codegen tests because that's the simplest way to prove we're doing
the right thing in the wide variety of permutations of this pattern.

We can also show an InstCombine win because we added a fold for
this case in:
rL321998 / D41603

An Alive proof for one variant of the pattern to show that the 
InstCombine and codegen results are correct:
https://rise4fun.com/Alive/vd1

Name: min3_nots
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %cmpxyz = icmp slt i8 %minxz, %ny
  %r = select i1 %cmpxyz, i8 %minxz, i8 %ny

Name: min3_nots_alt
  %nx = xor i8 %x, -1
  %ny = xor i8 %y, -1
  %nz = xor i8 %z, -1
  %cmpxz = icmp slt i8 %nx, %nz
  %minxz = select i1 %cmpxz, i8 %nx, i8 %nz
  %cmpyz = icmp slt i8 %ny, %nz
  %minyz = select i1 %cmpyz, i8 %ny, i8 %nz
  %cmpyx = icmp slt i8 %y, %x
  %r = select i1 %cmpyx, i8 %minxz, i8 %minyz
=>
  %xz = icmp sgt i8 %x, %z
  %maxxz = select i1 %xz, i8 %x, i8 %z
  %xyz = icmp sgt i8 %maxxz, %y
  %maxxyz = select i1 %xyz, i8 %maxxz, i8 %y
  %r = xor i8 %maxxyz, -1

llvm-svn: 322283
2018-01-11 15:13:47 +00:00
Sanjay Patel 7dfe96ad16 [ValueTracking] remove overzealous assert
The test is derived from a failing fuzz test:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5008

Credit to @rksimon for pointing out the problem.

llvm-svn: 322016
2018-01-08 18:31:13 +00:00
Sanjay Patel 7811430588 [ValueTracking] recognize min/max of min/max patterns
This is part of solving PR35717:
https://bugs.llvm.org/show_bug.cgi?id=35717

The larger IR optimization is proposed in D41603, but we can show 
the improvement in ValueTracking using codegen tests because 
SelectionDAG creates min/max nodes based on ValueTracking. 

Any target with min/max ops should show wins here. I chose AArch64
vector ops because they're clean and uniform.

Some Alive proofs for the tests (can't put more than 2 tests in 1 
page currently because the web app says it's too long):
https://rise4fun.com/Alive/WRN
https://rise4fun.com/Alive/iPm
https://rise4fun.com/Alive/HmY
https://rise4fun.com/Alive/CNm
https://rise4fun.com/Alive/LYf

llvm-svn: 321672
2018-01-02 20:56:45 +00:00
Simon Pilgrim 6720726d27 [ValueTracking] Don't assume shift values are in range
Reduced (as best I could...) from oss-fuzz #4857 test case

llvm-svn: 321634
2018-01-01 22:44:59 +00:00
Sanjay Patel 9a39979dd2 [ValueTracking] ignore FP signed-zero when detecting a casted-to-integer fmin/fmax pattern
This is a preliminary step for the patch discussed in D41136 (and denoted here with the FIXME comment).

When we match an FP min/max that is cast to integer, any intermediate difference between +0.0 or -0.0 
should be muted in the result by the conversion (either fptosi or fptoui) of the result. Thus, we can 
enable 'nsz' for the purpose of matching fmin/fmax.

Note that there's probably room to generalize this more, possibly by fixing the current calls to the
weak version of isKnownNonZero() in matchSelectPattern() to the more powerful recursive version.
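
A hypothetical sketch of the pattern in question: the select is an FP min, and because its only use is the fptosi, any difference between +0.0 and -0.0 is invisible in the final integer, so nsz can be assumed while matching.

  define i32 @int_of_fmin(float %a, float %b) {
    %cmp = fcmp olt float %a, %b
    %min = select i1 %cmp, float %a, float %b
    %r = fptosi float %min to i32   ; only use of %min
    ret i32 %r
  }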

Differential Revision: https://reviews.llvm.org/D41333

llvm-svn: 321456
2017-12-26 15:09:19 +00:00
Haicheng Wu a446151552 [InlineCost] Find repeated loads in the callee
SROA analysis of InlineCost can figure out that some stores can be removed
after inlining and then the repeated loads clobbered by these stores are also
free.  This patch finds these clobbered loads and adjust the inline cost
accordingly.

Differential Revision: https://reviews.llvm.org/D33946

llvm-svn: 320814
2017-12-15 14:34:41 +00:00
Simon Dardis 70dbd5fbd0 Infer lowest bits of an integer Multiply when the low bits of the operands are known
When the lowest bits of the operands to an integer multiply are known, the low bits of the result are deducible.
Code to deduce known-zero bottom bits already existed, but this change improves on that by deducing known-ones.
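
A small illustration of the known-ones case (hypothetical): both operands are forced odd, and since odd times odd is odd, the low bit of the product is known to be one.

  define i32 @product_of_odds(i32 %x, i32 %y) {
    %a = or i32 %x, 1           ; low bit known 1
    %b = or i32 %y, 1           ; low bit known 1
    %m = mul i32 %a, %b         ; low bit of %m is known 1
    ret i32 %m
  }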

Patch by: Pedro Ferreira

Reviewers: craig.topper, sanjoy, efriedma

Differential Revision: https://reviews.llvm.org/D34029

llvm-svn: 320269
2017-12-09 23:25:57 +00:00
Evgeniy Stepanov c667c1f47a Hardware-assisted AddressSanitizer (llvm part).
Summary:
This is LLVM instrumentation for the new HWASan tool. It is basically
a stripped down copy of ASan at this point, w/o stack or global
support. Instrumentation adds a global constructor + runtime callbacks
for every load and store.

HWASan comes with its own IR attribute.

A brief design document can be found in
clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).

Reviewers: kcc, pcc, alekseyshl

Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D40932

llvm-svn: 320217
2017-12-09 00:21:41 +00:00
Igor Laevsky cec8f47e77 [InstCombine] Don't crash on out of bounds shifts
Differential Revision: https://reviews.llvm.org/D40649

llvm-svn: 319761
2017-12-05 12:18:15 +00:00
Sam McCall d0d43e6f14 Revert "[ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI"
This reverts commit r319624, which seems to cause a miscompile (breaks the
multistage PPC buildbots)

llvm-svn: 319652
2017-12-04 12:51:49 +00:00
Craig Topper 199acd88e3 [ValueTracking] Pass only a single lambda to computeKnownBitsFromShiftOperator by using KnownBits struct instead of separate APInts. NFCI
llvm-svn: 319624
2017-12-02 23:42:17 +00:00
Sanjay Patel 20df88a754 [ValueTracking] use 'auto' with 'dyn_cast'; NFC
llvm-svn: 318058
2017-11-13 17:56:23 +00:00
Sanjay Patel 9e3d8f4b39 [ValueTracking] simplify code in CannotBeNegativeZero() with match(); NFCI
llvm-svn: 318055
2017-11-13 17:40:47 +00:00
Dan Gohman 2c74fe977d Add an @llvm.sideeffect intrinsic
This patch implements Chandler's idea [0] for supporting languages that
require support for infinite loops with side effects, such as Rust, providing
part of a solution to bug 965 [1].

Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual
effect, but which appears to optimization passes to have obscure side effects,
such that they don't optimize away loops containing it. It also teaches
several optimization passes to ignore this intrinsic, so that it doesn't
significantly impact optimization in most cases.
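
A minimal sketch of its intended use (illustrative): the intrinsic keeps an otherwise empty infinite loop from being deleted.

  declare void @llvm.sideeffect()

  define void @spin_forever() {
  entry:
    br label %loop
  loop:
    call void @llvm.sideeffect()  ; treated as having obscure side effects
    br label %loop
  }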

As discussed on llvm-dev [2], this patch is the first of two major parts.
The second part, to change LLVM's semantics to have defined behavior
on infinite loops by default, with a function attribute for opting into
potential-undefined-behavior, will be implemented and posted for review in
a separate patch.

[0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
[1] https://bugs.llvm.org/show_bug.cgi?id=965
[2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html

Differential Revision: https://reviews.llvm.org/D38336

llvm-svn: 317729
2017-11-08 21:59:51 +00:00
Craig Topper 81d772c28a [ValueTracking] Use APInt::isNullValue/isOneValue which are more efficient for large APInts.
llvm-svn: 317712
2017-11-08 19:38:45 +00:00
Sanjay Patel 86d24f1668 [ValueTracking] readonly (const) is a requirement for converting sqrt to llvm.sqrt; nnan is not
As discussed in D39204, this is effectively a revert of rL265521 which required nnan 
to vectorize sqrt libcalls based on the old LangRef definition of llvm.sqrt. Now that
the definition has been updated so the libcall and intrinsic have the same semantics
apart from potentially setting errno, we can remove the nnan requirement.

We have the right check to know that errno is not set:

if (!ICS.onlyReadsMemory())

...ahead of the switch.

This will solve https://bugs.llvm.org/show_bug.cgi?id=27435 assuming that's being 
built for a target with -fno-math-errno.

Differential Revision: https://reviews.llvm.org/D39642

llvm-svn: 317519
2017-11-06 22:40:09 +00:00
Artur Gainullin af7ba8ff6b Improve clamp recognition in ValueTracking.
Summary:
ValueTracking was not recognizing all variations of clamp. Swapping of
the true and false values of the select was added to fix this problem. The
first patch was reverted because it caused a miscompile in the NVPTX target.
Corresponding test cases were added.

Reviewers: spatel, majnemer, efriedma, reames

Subscribers: llvm-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D39240

llvm-svn: 316795
2017-10-27 20:53:41 +00:00
Craig Topper 8e8b6efdc8 [ValueTracking] Remove unnecessary temporary APInt from computeNumSignBitsVectorConstant.
We can just use getNumSignBits instead of inverting negative numbers.

llvm-svn: 316266
2017-10-21 16:35:41 +00:00
Craig Topper b98ee58511 [ValueTracking] Simplify the known bits code for constant vectors a little.
Neither of these cases really require a temporary APInt outside the loop. For the ConstantDataSequential case the APInt will never be larger than 64-bits so its fine to just call getElementAsAPInt. For ConstantVector we can get the APInt by reference and only make a copy where the inversion is needed.

llvm-svn: 316265
2017-10-21 16:35:39 +00:00
Nikolai Bozhenov fa8c5514c5 [ValueTracking] Enabling ValueTracking patch by default
(recommit #2 after checking for timeout issue). 

The original patch was an improvement to IR ValueTracking on
non-negative integers. It was checked in to trunk (D18777, r284022)
but was disabled by default due to performance regressions.
The perf impact has since improved, so the patch is now enabled by default.

Reviewers: reames, hfinkel
 
Differential Revision: https://reviews.llvm.org/D34101
 
Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 316208
2017-10-20 10:08:47 +00:00
Nikolai Bozhenov 8dcab54cb4 Revert r315992 because of a found miscompilation failure
llvm-svn: 316164
2017-10-19 15:36:18 +00:00
Nikolai Bozhenov 9723f12491 Fixup patch for revision rL316070.
Added a check that the type of CmpConst and the source type of the trunc
are equal, for correct matching of the case when we can set the widened C
constant equal to CmpConst.

  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %tr, iK C

Patch by: Gainullin, Artur <artur.gainullin@intel.com>

llvm-svn: 316082
2017-10-18 14:24:50 +00:00
Nikolai Bozhenov 74c047eabb Improve lookThroughCast function.
Summary:
When we have the following case:

  %cond = cmp iN %x, CmpConst
  %tr = trunc iN %x to iK
  %narrowsel = select i1 %cond, iK %tr, iK C

After looking through the cast we may only be able to match the min/max pattern.
So it is more profitable if the widened C constant is equal to CmpConst.
That is why we just set the widened C constant equal to CmpConst, because there
is a further check in this function that trunc(CmpConst) == C.

Also a description for the lookThroughCast function was added.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38536

Patch by: Artur Gainullin <artur.gainullin@intel.com>

llvm-svn: 316070
2017-10-18 09:28:09 +00:00
Nikolai Bozhenov 346f4329c4 Improve clamp recognition in ValueTracking.
Summary:
ValueTracking was not recognizing all variations of clamp. Swapping of
the true and false values of the select was added to fix this problem. This
change breaks the canonical form of the cmp inside the matchMinMax function,
which is why additional checks of the compare predicates are needed.
Corresponding test cases were added.

Reviewers: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38531

Patch by: Artur Gainullin <artur.gainullin@intel.com>

llvm-svn: 315992
2017-10-17 11:50:48 +00:00
Sanjay Patel b7d1238cfc [ValueTracking] fix typos, formatting; NFC
llvm-svn: 315909
2017-10-16 14:46:37 +00:00
Sanjay Patel e272be7c9a [ValueTracking] return zero when there's conflict in known bits of a shift (PR34838)
Poison allows us to return a better result than undef.

llvm-svn: 315595
2017-10-12 17:31:46 +00:00
Hiroshi Inoue b49b015bed [ScheduleDAGInstrs] fix behavior of getUnderlyingObjectsForCodeGen when no identifiable object found
This patch fixes the bug introduced in https://reviews.llvm.org/D35907; the bug is reported by http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20171002/491452.html.

Before D35907, when GetUnderlyingObjects failed to find an identifiable object, the allMMOsOkay lambda in getUnderlyingObjectsForInstr returned false and the Objects vector was cleared. This behavior was unintentionally changed by D35907.

This patch makes the behavior for such case same as the previous behavior.
Since D35907 introduced a wrapper function getUnderlyingObjectsForCodeGen around GetUnderlyingObjects, getUnderlyingObjectsForCodeGen is modified to return a boolean value to ask the caller to clear the Objects vector.

Differential Revision: https://reviews.llvm.org/D38735

llvm-svn: 315565
2017-10-12 06:26:04 +00:00
Vivek Pandya 9590658fb8 [NFC] Convert OptimizationRemarkEmitter old emit() calls to new closure
parameterized emit() calls

Summary: This is not a functional change; it adopts the new emit() API added in r313691.

Reviewed By: anemet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D38285

llvm-svn: 315476
2017-10-11 17:12:59 +00:00
Adam Nemet 0965da2055 Rename OptimizationDiagnosticInfo.* to OptimizationRemarkEmitter.*
Sync it up with the name of the class actually defined here.  This has been
bothering me for a while...

llvm-svn: 315249
2017-10-09 23:19:02 +00:00
Nuno Lopes 404f106d71 Merge isKnownNonNull into isKnownNonZero
It now knows the tricks of both functions.
Also, fix a bug that considered allocas in a non-zero address space to always be non-null.

Differential Revision: https://reviews.llvm.org/D37628

llvm-svn: 312869
2017-09-09 18:23:11 +00:00
Sanjay Patel 6840c5ff75 [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145

In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.

But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.

By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.
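
A hypothetical example of the canonicalization: an ord/uno compare against any non-NaN constant only tests whether %x is NaN, so it is rewritten to compare against 0.0.

  define i1 @is_ordered(float %x) {
    %c = fcmp ord float %x, 42.0
    ; canonicalized to: fcmp ord float %x, 0.0
    ret i1 %c
  }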

Differential Revision: https://reviews.llvm.org/D37427

llvm-svn: 312591
2017-09-05 23:13:13 +00:00
Eugene Zelenko 75075efe5e [Analysis, Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 312383
2017-09-01 21:37:29 +00:00
Craig Topper 7227ebad9c [ValueTracking] Add assertions that the starting Depth in isKnownToBeAPowerOfTwo and ComputeNumSignBitsImpl is not above MaxDepth
The function does an equality check later to terminate the recursion, but that won't work if it starts out too high. A similar assert already exists in computeKnownBits.

llvm-svn: 311400
2017-08-21 22:56:12 +00:00
Amjad Aboud 88ffa3afe2 [InstCombine] Teach ComputeNumSignBitsImpl to handle integer multiply instruction.
Differential Revision: https://reviews.llvm.org/D36679

llvm-svn: 311206
2017-08-18 22:56:55 +00:00
Hal Finkel b03dd4be70 [ValueTracking] Don't delete assumes of side-effectful instructions
ValueTracking has to strike a balance when attempting to propagate information
backwards from assumes, because if the information is trivially propagated
backwards, it can appear to LLVM that the assumption is known to be true, and
therefore can be removed.

This is sound (because an assumption has no semantic effect except for causing
UB), but prevents the assume from allowing further optimizations.

The isEphemeralValueOf check exists to try and prevent this issue by not
removing the source of an assumption. This tries to make it a little bit more
general to handle the case of side-effectful instructions, such as in

  %0 = call i1 @get_val()
  %1 = xor i1 %0, true
  call void @llvm.assume(i1 %1)

Patch by Ariel Ben-Yehuda, thanks!

Differential Revision: https://reviews.llvm.org/D36590

llvm-svn: 310859
2017-08-14 17:11:43 +00:00
Chandler Carruth 37c7b08710 [ValueTracking] Revert r310583 which enabled functionality that still is
causing compile time issues.

Moreover, the patch *deleted* the flag in addition to changing the
default, and links to a code review that doesn't even discuss the flag
and just has an update to a Clang test case.

I've followed up on the commit thread to ask for numbers on compile time
at this point, leaving the flag in place until things stabilize, and
pointing at specific code that seems to exhibit excessive compile time
with this patch.

Original commit message for r310583:
"""
[ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.

The original patch was an improvement to IR ValueTracking on
non-negative integers. It has been checked in to trunk (D18777,
r284022). But was disabled by default due to performance regressions.
Perf impact has improved. The patch would be enabled by default.
""""

llvm-svn: 310816
2017-08-14 07:03:24 +00:00
Nikolai Bozhenov d97136c182 [ValueTracking] Enabling ValueTracking patch by default (recommit). Part 2.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but was disabled by
default due to performance regressions.
The perf impact has since improved, so the patch is now enabled by default.
 
Reviewers: reames, hfinkel
 
Differential Revision: https://reviews.llvm.org/D34101
 
Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 310583
2017-08-10 11:24:57 +00:00
Davide Italiano 1a943a90f5 [ValueTracking] Turn a test into an assertion.
As discussed with Chad, this should never happen, but this
assertion is basically free, so, keep it around just in case.

llvm-svn: 310493
2017-08-09 16:06:54 +00:00
Davide Italiano 30e5194287 [ValueTracking] Honour recursion limit.
The recently improved support for `icmp` in ValueTracking
(r307304) exposes the fact that `isImpliedCondition` doesn't
really bail out if we hit the recursion limit (and calls
`computeKnownBits` which increases the depth and asserts).

Differential Revision:  https://reviews.llvm.org/D36512

llvm-svn: 310481
2017-08-09 15:13:50 +00:00
Craig Topper b498a23f0e [KnownBits][ValueTracking] Move the math for calculating known bits for add/sub into a static method in KnownBits object
I want to reuse this code in SimplifyDemandedBits handling of Add/Sub. This will make that easier.

Wonder if we should use it in SelectionDAG's computeKnownBits too.

Differential Revision: https://reviews.llvm.org/D36433

llvm-svn: 310378
2017-08-08 16:29:35 +00:00
Nikolai Bozhenov 1545eb3408 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNS and the former NAN/NUM
nodes are illegal thus selection is not happening. So I decided to do
such kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.
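
A hypothetical shape of such a clamp, with fast-math flags on the compares providing the FMF context mentioned above:

  define float @clamp_to_unit(float %x) {
    %c1 = fcmp fast olt float %x, 0.0
    %lo = select i1 %c1, float 0.0, float %x      ; fmax(%x, 0.0)
    %c2 = fcmp fast ogt float %lo, 1.0
    %r  = select i1 %c2, float 1.0, float %lo     ; fmin(%lo, 1.0)
    ret float %r
  }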

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 310054
2017-08-04 12:22:17 +00:00
Hiroshi Inoue 0bd906ec8f [StackColoring] Update AliasAnalysis information in stack coloring pass (part 2)
This patch is update after the first patch (https://reviews.llvm.org/rL309651) based on the post-commit comments.

The stack coloring pass needs to maintain AliasAnalysis information when merging stack slots of different types.
Actually, there is a FIXME comment in StackColoring.cpp

// FIXME: In order to enable the use of TBAA when using AA in CodeGen,
// we'll also need to update the TBAA nodes in MMOs with values
// derived from the merged allocas.

But, TBAA has been already enabled in CodeGen without fixing this pass.
The incorrect TBAA metadata results in recent failures in bootstrap test on ppc64le (PR33928) by allowing unsafe instruction scheduling.
Although we observed the problem on ppc64le, this is a platform neutral issue.

This patch makes the stack coloring pass maintain AliasAnalysis information when merging multiple stack slots.

This patch fixes PR33928.

llvm-svn: 309849
2017-08-02 18:16:32 +00:00
Chad Rosier dfd1de687d [Value Tracking] Default argument to true and rename accordingly. NFC.
IMHO this is a bit more readable.

llvm-svn: 309739
2017-08-01 20:18:54 +00:00
Chad Rosier f73a10d2df [Value Tracking] Refactor and/or logic into helper. NFC.
llvm-svn: 309726
2017-08-01 19:22:36 +00:00
Hiroshi Inoue b9417dbd48 [StackColoring] Update AliasAnalysis information in stack coloring pass
The stack coloring pass needs to maintain AliasAnalysis information when merging stack slots of different types.
Actually, there is a FIXME comment in StackColoring.cpp

// FIXME: In order to enable the use of TBAA when using AA in CodeGen,
// we'll also need to update the TBAA nodes in MMOs with values
// derived from the merged allocas.

But, TBAA has been already enabled in CodeGen without fixing this pass.
The incorrect TBAA metadata results in recent failures in bootstrap test on ppc64le (PR33928) by allowing unsafe instruction scheduling.
Although we observed the problem on ppc64le, this is a platform neutral issue.

This patch makes the stack coloring pass maintain AliasAnalysis information when merging multiple stack slots.

llvm-svn: 309651
2017-08-01 03:32:15 +00:00
Chad Rosier 2f49803c1f [Value Tracking] Refactor icmp comparison logic into helper. NFC.
llvm-svn: 309417
2017-07-28 18:47:43 +00:00
Chad Rosier e42b44b87d [ValueTracking] Remove a number of unused arguments. NFC.
llvm-svn: 309385
2017-07-28 14:39:06 +00:00
NAKAMURA Takumi 76bab1f20b Revert r307581, "Avoid doing conservative phi checks in aliasSameBasePointerGEPs() if no phis have been visited yet."
It broke stage2 tests in selfhosting.

llvm-svn: 307613
2017-07-11 02:31:51 +00:00
Farhana Aleen 2ff973f2a5 Avoid doing conservative phi checks in aliasSameBasePointerGEPs() if no phis have been visited yet.
Reviewers: Daniel Berlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D34478

llvm-svn: 307581
2017-07-10 20:15:40 +00:00
Craig Topper fde4723ebe [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors.
llvm-svn: 307492
2017-07-09 07:04:03 +00:00
Craig Topper 95d2347ae1 [IR] Make use of Type::isPtrOrPtrVectorTy/isIntOrIntVectorTy/isFPOrFPVectorTy to shorten code. NFC
llvm-svn: 307491
2017-07-09 07:04:00 +00:00
Chad Rosier 3f02123f7c [ValueTracking] Fix the identity case (LHS => RHS) when the LHS is false.
Prior to this commit both of the added test cases were passing.  However, in the
latter case (test7) we were doing a lot more work to arrive at the same answer
(i.e., we were using isImpliedCondMatchingOperands() to determine the
implication).

llvm-svn: 307400
2017-07-07 13:55:55 +00:00
Chad Rosier a72a9ff557 [ValueTracking] Support icmps fed by 'and' and 'or'.
This patch adds support for handling some forms of ands and ors in
ValueTracking's isImpliedCondition API.
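
A hypothetical illustration: on the taken edge the whole `and` is known true, so each conjunct is too, and `%x > 10` implies the weaker `%x > 5`.

  define i1 @implied_by_and(i32 %x, i32 %y) {
  entry:
    %a = icmp sgt i32 %x, 10
    %b = icmp sgt i32 %y, 10
    %both = and i1 %a, %b
    br i1 %both, label %taken, label %not_taken
  taken:
    %c = icmp sgt i32 %x, 5     ; implied true by %a
    ret i1 %c
  not_taken:
    ret i1 false
  }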

PR33611
https://reviews.llvm.org/D34901

llvm-svn: 307304
2017-07-06 20:00:25 +00:00
Craig Topper 79ab643da8 [Constants] If we already have a ConstantInt*, prefer to use isZero/isOne/isMinusOne instead of isNullValue/isOneValue/isAllOnesValue inherited from Constant. NFCI
Going through the Constant methods requires redetermining that the Constant is a ConstantInt and then calling isZero/isOne/isMinusOne.

llvm-svn: 307292
2017-07-06 18:39:47 +00:00
Nikolai Bozhenov bde9b14c6f Revert of r306525: "Canonicalize clamp of float types to minmax"
llvm-svn: 306815
2017-06-30 10:39:09 +00:00
Nikolai Bozhenov 6710ba07c7 Revert r306528
llvm-svn: 306536
2017-06-28 12:15:13 +00:00
Nikolai Bozhenov 77b5536e4e [ValueTracking] Enabling existing ValueTracking patch by default.
The original patch was an improvement to IR ValueTracking on non-negative
integers. It was checked in to trunk (D18777, r284022) but was disabled by
default due to performance regressions.
The perf impact has since improved, so the patch is now enabled by default.

Reviewers: reames

Differential Revision: https://reviews.llvm.org/D34101

Patch by: Olga Chupina <olga.chupina@intel.com>

llvm-svn: 306528
2017-06-28 10:08:08 +00:00
Nikolai Bozhenov b01e6b5a52 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNS and the former NAN/NUM
nodes are illegal thus selection is not happening. So I decided to do
such kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 306525
2017-06-28 09:26:20 +00:00
Craig Topper 7b66ffe875 [ValueTracking][InstCombine] Use m_Shr instead m_CombineOr(m_LShr, m_AShr). NFC
llvm-svn: 306205
2017-06-24 06:24:04 +00:00
Craig Topper f93b7b1c1f [ValueTracking] Correct early out in computeKnownBitsFromOperator to work with non power of 2 bit widths
There's an early out that's trying to detect when we don't know any bits that make up the legal range of a shift. The code subtracts one from BitWidth which creates a mask in the lower bits for power of 2 bit widths. This is then ANDed with the known bits to see if any of those bits are known. If the bit width isn't a power of 2 this creates a non-sensical mask.

This patch corrects this by rounding up to a power of 2 before doing the subtract and mask.

Differential Revision: https://reviews.llvm.org/D34165

llvm-svn: 305400
2017-06-14 17:04:59 +00:00
Sanjay Patel 2ad88f81f0 fix typos/formatting; NFC
llvm-svn: 305243
2017-06-12 22:34:37 +00:00
Sanjay Patel fef83e8fb9 [ValueTracking] fix typo; NFC
llvm-svn: 305080
2017-06-09 14:21:18 +00:00
Chandler Carruth 6bda14b313 Sort the remaining #include lines in include/... and lib/....
I did this a long time ago with a janky python script, but now
clang-format has built-in support for this. I fed clang-format every
line with a #include and let it re-sort things according to the precise
LLVM rules for include ordering baked into clang-format these days.

I've reverted a number of files where the results of sorting includes
isn't healthy. Either places where we have legacy code relying on
particular include ordering (where possible, I'll fix these separately)
or where we have particular formatting around #include lines that
I didn't want to disturb in this patch.

This patch is *entirely* mechanical. If you get merge conflicts or
anything, just ignore the changes in this patch and run clang-format
over your #include lines in the files.

Sorry for any noise here, but it is important to keep these things
stable. I was seeing an increasing number of patches with irrelevant
re-ordering of #include lines because clang-format was used. This patch
at least isolates that churn, makes it easy to skip when resolving
conflicts, and gets us to a clean baseline (again).

llvm-svn: 304787
2017-06-06 11:49:48 +00:00
Craig Topper 3002d5b0bf [ValueTracking] Remove scalar only restriction from isKnownNonEqual. The computeKnownBits and isKnownNonZero calls this code relies on should work fine for vectors.
This will be used by another commit to remove some code from InstSimplify that is redundant for scalars, but was needed for vectors due to this issue.

llvm-svn: 304774
2017-06-06 07:13:15 +00:00
Craig Topper 8e662f7f81 [ValueTracking] Use the computeKnownBits version that returns a KnownBits object instead of taking one by reference. NFC
llvm-svn: 304772
2017-06-06 07:13:11 +00:00
Craig Topper 8365df825e [ValueTracking] Use APInt::intersects to avoid some temporary APInts. NFC
llvm-svn: 304771
2017-06-06 07:13:09 +00:00
Galina Kistanova 244621faad Added LLVM_FALLTHROUGH to address warning: this statement may fall through. NFC.
llvm-svn: 304361
2017-05-31 22:16:24 +00:00
Zaara Syeda 3a7578c658 [PPC] Inline expansion of memcmp
This patch does an inline expansion of memcmp.
It changes the memcmp library call into an inline expansion when the size is
known at compile time and is under a target-specified threshold.
This expansion is implemented in CodeGenPrepare and expands into straight line
code. The target specifies a maximum load size and the expansion works by using
this size to load the two sources, compare, and exit early if a difference is
found. It also has a special case when the memcmp result is used in a compare
to zero equality.
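
For intuition, here is a hand-written C++ model of the shape of the
expanded code for a 16-byte compare with an 8-byte max load size; this is
only an illustration, not the actual CodeGenPrepare output:

  #include <cstdint>
  #include <cstring>

  int memcmp16(const void *A, const void *B) {
    for (int Off = 0; Off < 16; Off += 8) {
      uint64_t X, Y;
      std::memcpy(&X, static_cast<const char *>(A) + Off, 8);
      std::memcpy(&Y, static_cast<const char *>(B) + Off, 8);
      if (X != Y) {
        // A faithful expansion has to account for endianness here so the
        // ordering is lexicographic; the compare-to-zero equality special
        // case can skip that and simply report "not equal".
        return X < Y ? -1 : 1;
      }
    }
    return 0;
  }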

Differential Revision: https://reviews.llvm.org/D28637

llvm-svn: 304313
2017-05-31 17:12:38 +00:00
Craig Topper a2025eaaef [ValueTracking] Add OptimizationRemarkEmitter to the other signature for commuteKnownBits.
This is needed for an upcoming patch.

llvm-svn: 303772
2017-05-24 16:53:03 +00:00
Matthias Braun 50ec0b5dce SimplifyLibCalls: Optimize wcslen
Refactor the strlen optimization code to work for both strlen and wcslen.

This especially helps with programs in the wild where people pass
L"string"s to const std::wstring& function parameters and the wstring
constructor gets inlined.

This also fixes a lingering API problem/bug in getConstantStringInfo()
where zeroinitializers would always give you an empty string (without a
length) back, regardless of the actual length of the initializer. That did
not work well in the TrimAtNul==false case, causing the PR mentioned below.
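
A hedged usage sketch of the fixed helper; the signature shown is the one
from this era, so treat the exact parameters as an assumption:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Analysis/ValueTracking.h"
  using namespace llvm;

  static size_t constantInitializerLength(const Value *V) {
    StringRef Str;
    // With TrimAtNul=false the returned data now reflects the real
    // initializer length, even for zeroinitializer globals, instead of
    // always coming back as an empty string.
    if (!getConstantStringInfo(V, Str, /*Offset=*/0, /*TrimAtNul=*/false))
      return 0;
    return Str.size();
  }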

Note that the fixed getConstantStringInfo() needed fixes to SelectionDAG
memcpy lowering and may lead to some cases of out-of-bounds zeroinitializer
accesses not getting optimized anymore. So some code with UB may now
produce out-of-bounds memory reads instead of just producing zeros.

The refactoring "accidentally" fixes http://llvm.org/PR32124

Differential Revision: https://reviews.llvm.org/D32839

llvm-svn: 303461
2017-05-19 22:37:09 +00:00
Craig Topper 1a36b7d836 [ValueTracking] Replace all uses of ComputeSignBit with computeKnownBits.
This patch finishes off the conversion of ComputeSignBit to computeKnownBits.

Differential Revision: https://reviews.llvm.org/D33166

llvm-svn: 303035
2017-05-15 06:39:41 +00:00
Craig Topper bb9737247a [InstCombine] Merge duplicate functionality between InstCombine and ValueTracking
Summary:
Merge overflow computation for signed add,
appearing both in InstCombine and ValueTracking.

As part of the merge,
cleanup the interface for overflow checks in InstCombine.

Patch by Yoav Ben-Shalom.

Reviewers: craig.topper, majnemer

Reviewed By: craig.topper

Subscribers: takuto.ikuta, llvm-commits

Differential Revision: https://reviews.llvm.org/D32946

llvm-svn: 303029
2017-05-15 02:44:08 +00:00
Craig Topper 8df66c602a [KnownBits] Add bit counting methods to KnownBits struct and use them where possible
This patch adds min/max population count, leading/trailing zero/one bit counting methods.

The min methods return answers based on bits that are known without considering unknown bits. The max methods give answers taking into account the largest count that unknown bits could give.
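
A small usage sketch; the method names are the ones this patch adds, and
the KnownBits value is made up:

  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  void countingExample() {
    KnownBits Known(32);
    Known.Zero.setHighBits(16);  // top 16 bits known to be 0, rest unknown
    // "Min" methods count only what the known bits guarantee...
    unsigned MinLZ = Known.countMinLeadingZeros();   // 16
    // ...while "Max" methods let every unknown bit take the worst case.
    unsigned MaxPop = Known.countMaxPopulation();    // 16
    (void)MinLZ; (void)MaxPop;
  }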

Differential Revision: https://reviews.llvm.org/D32931

llvm-svn: 302925
2017-05-12 17:20:30 +00:00
Craig Topper 868813ffbb [ValueTracking] Use KnownOnes to provide a better bound on known zeros for ctlz/cttz intrinsics
This patch uses KnownOnes of the input of ctlz/cttz to bound the value that can be returned from these intrinsics. This makes these intrinsics more similar to the handling for ctpop which already uses known bits to produce a similar bound.

Differential Revision: https://reviews.llvm.org/D32521

llvm-svn: 302444
2017-05-08 17:22:34 +00:00
Craig Topper 6e11a05e7e [ValueTracking] Introduce a version of computeKnownBits that returns a KnownBits struct. Begin using it to replace internal usages of ComputeSignBit
This introduces a new interface for computeKnownBits that returns the KnownBits object instead of requiring it to be pre-constructed and passed in by reference.

This is a much more convenient interface as it doesn't require the caller to figure out the BitWidth to pre-construct the object. It's so convenient that I believe we can use this interface to remove the special ComputeSignBit flavor of computeKnownBits.

As a step towards that idea, this patch replaces all of the internal usages of ComputeSignBit with this new interface. As you can see from the patch there were a couple places where we called ComputeSignBit which really called computeKnownBits, and then called computeKnownBits again directly. I've reduced those places to only making one call to computeKnownBits. I bet there are probably external users that do it too.

A future patch will update the external users and remove the ComputeSignBit interface. I'll also work on moving more locations to the KnownBits-returning interface for computeKnownBits.
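
A hedged sketch of the new call style (parameter lists abbreviated; the
surrounding function is hypothetical):

  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/IR/DataLayout.h"
  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  static bool isNonNegativeValue(const Value *V, const DataLayout &DL) {
    // New style: the analysis sizes and returns the struct itself, which
    // also covers what the separate ComputeSignBit query used to answer.
    KnownBits Known = computeKnownBits(V, DL);
    return Known.isNonNegative();
  }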

Differential Revision: https://reviews.llvm.org/D32848

llvm-svn: 302437
2017-05-08 16:22:48 +00:00
Craig Topper f0aeee01c3 [KnownBits] Add wrapper methods for setting and clear all bits in the underlying APInts in KnownBits.
This adds routines for resetting KnownBits to unknown and for making the value all zeros or all ones. It also adds methods for querying whether the value is zero, all ones, or unknown.

Differential Revision: https://reviews.llvm.org/D32637

llvm-svn: 302262
2017-05-05 17:36:09 +00:00
Craig Topper 6b3940a4b3 [ValueTracking] Remove handling for BitWidth being 0 in ComputeSignBit and isKnownNonZero.
I don't believe it's possible to get a bit width of 0 here since DataLayout became required. The APInt constructor inside of the KnownBits object will assert if this ever happens.

llvm-svn: 302089
2017-05-03 22:25:19 +00:00
Craig Topper d938fd1397 [KnownBits] Add zext, sext, and trunc methods to KnownBits
This patch adds zext, sext, and trunc methods to KnownBits and uses them where possible.
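
A hedged usage sketch using the method names as added here:

  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  void extendExample() {
    KnownBits Known8(8);
    Known8.One.setSignBit();              // sign bit known to be 1
    KnownBits Wide = Known8.sext(32);     // high bits become known 1
    KnownBits Narrow = Wide.trunc(8);     // back to the original known bits
    (void)Narrow;
  }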

Differential Revision: https://reviews.llvm.org/D32784

llvm-svn: 302088
2017-05-03 22:07:25 +00:00
Matt Arsenault 6a288c1e32 Replace hardcoded intrinsic list with speculatable attribute.
No change in which intrinsics should be speculated.

llvm-svn: 301995
2017-05-03 02:26:10 +00:00
Sanjoy Das 08989c7ecd Rename isKnownNotFullPoison to programUndefinedIfPoison; NFC
Summary:
programUndefinedIfPoison makes more sense, given what the function
does; and I'm about to add a function with a name similar to
isKnownNotFullPoison (so do the rename to avoid confusion).

Reviewers: broune, majnemer, bjarke.roune

Reviewed By: broune

Subscribers: mcrosier, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D30444

llvm-svn: 301776
2017-04-30 19:41:19 +00:00
Craig Topper ca48af3c87 [KnownBits] Add methods for determining if the known bits represent a negative/nonnegative number and add methods for changing the negative/nonnegative state
Summary: This patch adds isNegative, isNonNegative for querying whether the sign bit is known. It also adds makeNegative and makeNonNegative for controlling the sign bit.
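
Hedged sketch of how the new queries compose; the surrounding logic is
hypothetical:

  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  void signExample(KnownBits &Known, bool ProvedNegative) {
    if (!Known.isNegative() && !Known.isNonNegative() && ProvedNegative) {
      // The sign bit was unknown, but some other analysis proved the value
      // is negative, so record that by setting the sign bit in One.
      Known.makeNegative();
    }
  }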

Reviewers: RKSimon, spatel, davide

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32651

llvm-svn: 301747
2017-04-29 16:43:11 +00:00
Matt Arsenault cf5e7fe358 [ValueTracking] Teach isSafeToSpeculativelyExecute() about the speculatable attribute
Patch by Tom Stellard

llvm-svn: 301688
2017-04-28 21:13:09 +00:00
Daniel Berlin 4d0fe64ae3 Kill off the old SimplifyInstruction API by converting remaining users.
llvm-svn: 301673
2017-04-28 19:55:38 +00:00
Craig Topper 9eb2d72a1d [ValueTracking] Use APInt::isSubsetOf and APInt::intersects. NFC
llvm-svn: 301654
2017-04-28 16:57:55 +00:00
Craig Topper f42b23f7d8 [ValueTracking] Convert computeKnownBitsFromRangeMetadata to use KnownBits struct.
llvm-svn: 301626
2017-04-28 06:28:56 +00:00
Craig Topper b45eabcf82 [ValueTracking] Introduce a KnownBits struct to wrap the two APInts for computeKnownBits
This patch introduces a new KnownBits struct that wraps the two APInt used by computeKnownBits. This allows us to treat them as more of a unit.

Initially I've just altered the signatures of computeKnownBits and InstCombine's simplifyDemandedBits to pass a KnownBits reference instead of two separate APInt references. I'll do the same for the SelectionDAG versions of computeKnownBits/simplifyDemandedBits in a separate patch.

I've added a constructor that allows initializing both APInts to the same bit width with a starting value of 0. This reduces the repeated pattern of initializing both APInts. One place default-constructed the APInts, so I added a default constructor for those cases.

Going forward I would like to add more methods that will work on the pairs. For example trunc, zext, and sext occur on both APInts together in several places. We should probably add a clear method that can be used to clear both pieces. Maybe a method to check for conflicting information. A method to return (Zero|One) so we don't write it out everywhere. Maybe a method for (Zero|One).isAllOnesValue() to determine if all bits are known. I'm sure there are many other methods we can come up with.
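
For reference, a rough, simplified shape of the struct as described above;
see llvm/Support/KnownBits.h for the real definition:

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  struct KnownBits {
    APInt Zero;  // bits known to be 0
    APInt One;   // bits known to be 1

    KnownBits() = default;                // for the default-constructed uses
    KnownBits(unsigned BitWidth)          // both APInts start at 0
        : Zero(BitWidth, 0), One(BitWidth, 0) {}

    unsigned getBitWidth() const { return Zero.getBitWidth(); }
  };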

Differential Revision: https://reviews.llvm.org/D32376

llvm-svn: 301432
2017-04-26 16:39:58 +00:00
Craig Topper f3dbd17d0a [APInt] Use isSubsetOf, intersects, and bit counting methods to reduce temporary APInts
This patch uses various APInt methods to reduce temporary APInt creation.

This should be all of the unrelated cleanups that got buried in D32376 (creating a KnownBits struct) as well as some pointed out by Simon during the review of that, plus a few improvements to use counting instead of masking.

I've left out any places where we do something like (KnownZero & KnownOne) != 0 as I plan to add a helper method to KnownBits to ask that question and didn't want to thrash that code an additional time.
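
Two representative rewrites; the KnownZero/KnownOne/Mask values here are
placeholders:

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void predicateExample(const APInt &KnownZero, const APInt &KnownOne,
                        const APInt &Mask) {
    // Before: ((KnownZero & Mask) == Mask) materializes a temporary APInt.
    bool AllMaskBitsKnownZero = Mask.isSubsetOf(KnownZero);
    // Before: ((KnownOne & Mask) != 0) also built a temporary.
    bool SomeMaskBitKnownOne = KnownOne.intersects(Mask);
    (void)AllMaskBitsKnownZero; (void)SomeMaskBitKnownOne;
  }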

Differential Revision: https://reviews.llvm.org/D32495

llvm-svn: 301338
2017-04-25 17:46:30 +00:00
Craig Topper 2d9afa7745 [ValueTracking] Use APInt::operator|=(uint64_t) instead of creating a temporary APInt. NFC
llvm-svn: 301325
2017-04-25 16:48:14 +00:00
Craig Topper da8ff4181c [ValueTracking] Use APInt instead of auto. NFC
This is a pre-commit for a patch I'm working on to turn KnownZero/One into a struct. Once I do that the type here will be less obvious.

llvm-svn: 301324
2017-04-25 16:48:09 +00:00
Craig Topper 9c932d31e1 [ValueTracking] Use BitWidth local variable instead of re-reading it from KnownZero. NFC
This is a pre-commit for a patch that I'm working on to merge KnownZero/KnownOne into a KnownBits struct which would have had to touch this line.

llvm-svn: 301323
2017-04-25 16:48:03 +00:00
Craig Topper 72f31a8381 [ValueTracking] Use APInt::setAllBits and APInt::intersects to simplify some code. NFC
llvm-svn: 300997
2017-04-21 16:43:32 +00:00
Craig Topper bcfd2d1789 [APInt] Rename getSignBit to getSignMask
getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact, several places use it and then store the result in an APInt named SignMask.

Differential Revision: https://reviews.llvm.org/D32108

llvm-svn: 300856
2017-04-20 16:56:25 +00:00
Craig Topper 9b71a402c2 [APInt] Cast calls to add/sub/mul overflow methods to void if only their overflow bool out param is used.
This is preparation for a clang change to improve the [[nodiscard]] warning to not be ignored on methods that return a class marked [[nodiscard]] that are defined in the class itself. See D32207.

We should consider adding wrapper methods to APInt that return the overflow flag directly and discard the APInt result. This would eliminate the void casts and the need to create a bool before the call to pass to the out param.

llvm-svn: 300758
2017-04-19 21:09:45 +00:00
Craig Topper fc947bcfba [APInt] Use lshrInPlace to replace lshr where possible
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result.

This adds an lshrInPlace(const APInt &) version as well.
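
Hedged usage sketch:

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void shiftExample() {
    APInt X(64, 0xF0);
    X.lshrInPlace(4);   // shifts X itself; no temporary result APInt
    // Before this change the same spot would read: X = X.lshr(4);
    // which builds a shifted copy and then assigns it back.
  }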

Differential Revision: https://reviews.llvm.org/D32155

llvm-svn: 300566
2017-04-18 17:14:21 +00:00
Craig Topper d23004c37b Introduce APInt::isSignBitSet/isSignBitClear. Use isSignBitSet in place of isNegative in known bits tracking.
This makes statements like KnownZero.isNegative() (which means the value we're tracking is positive) less confusing.

llvm-svn: 300457
2017-04-17 16:38:20 +00:00
Craig Topper da886c665b [InstCombine][ValueTracking] When computing known bits for Srem make sure we don't compute known bits for the LHS twice.
If we already called computeKnownBits for the RHS being a constant power of 2, we've already computed everything we can and should just stop. I think previously we would still recurse if we had determined the result was negative or had not determined the sign bit at all.

llvm-svn: 300432
2017-04-16 21:46:12 +00:00
Craig Topper 66df10ff63 [ValueTracking] Calculate the KnownZeros for Intrinsic::ctpop without using a temporary APInt to count leading zeros on.
The APInt was created from an 'unsigned', and we just wanted to know how many bits were needed to represent the value. We can use Log2_32 from MathExtras.h to get that info.
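
The replacement is essentially this one-liner (variable and function names
assumed):

  #include "llvm/Support/MathExtras.h"

  // ctpop of a BitWidth-wide value fits in Log2_32(BitWidth) + 1 bits, so
  // every bit above that is known zero; no temporary APInt is needed.
  static unsigned bitsNeededForPopCount(unsigned BitWidth) {
    return llvm::Log2_32(BitWidth) + 1;
  }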

llvm-svn: 300309
2017-04-14 06:43:34 +00:00
Craig Topper 1281deaa00 [ValueTracking] Use APInt::isNegative(). NFC
llvm-svn: 300308
2017-04-14 06:43:32 +00:00
Craig Topper f8631cd1de [ValueTracking] Use APInt::sext instead of zext and setBitsFrom. NFC
llvm-svn: 300307
2017-04-14 06:43:29 +00:00
Craig Topper e953dec673 [ValueTracking] Remove duplicate call to computeKnownBits for the operands of Select.
We call it unconditionally on the operands of the select, then decide whether it's a min/max and call it on the min/max operands or on the select operands again. Either of those second calls overwrites the results of the initial call, so we can just delete the first call.

llvm-svn: 300256
2017-04-13 20:39:37 +00:00
Craig Topper a80f2041f7 [ValueTracking] Prevent a call to computeKnownBits if we already know the state of the bit we would calculate. Also reuse a temporary APInt instead of creating a new one.
llvm-svn: 300239
2017-04-13 19:04:45 +00:00
Craig Topper 9ce07b6a15 [ValueTracking] Move a temporary APInt instead of copying it.
llvm-svn: 300233
2017-04-13 18:25:53 +00:00
Craig Topper 854824139e [ValueTracking] Teach GetUnderlyingObject to stop when it reaches an alloca instruction.
Previously it tried to call SimplifyInstruction, which doesn't know anything about alloca and so defers to constant folding, which also doesn't do anything with alloca. This results in wasted cycles making calls that won't do anything. Given the frequency with which this function is called, this time adds up.

llvm-svn: 300118
2017-04-12 22:29:23 +00:00
Craig Topper 885fa12e8a [APInt] Remove shift functions from APIntOps namespace. Replace the few users with the APInt class methods. NFCI
llvm-svn: 299248
2017-03-31 20:01:16 +00:00
Craig Topper 8fbb74b5b2 Revert r298711 "[InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits"
Tsan bot is failing.

llvm-svn: 298745
2017-03-24 22:12:10 +00:00
Craig Topper d4521c2fc2 [InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits
SimplifyDemandedUseBits for Add/Sub already recursed down LHS and RHS for simplifying bits. If that didn't provide any simplifications we fall back to calling computeKnownBits which will recurse again. Instead just take the known bits for LHS and RHS we already have and call into a new function in ValueTracking that can calculate the known bits given the LHS/RHS bits.

llvm-svn: 298711
2017-03-24 16:56:51 +00:00
Craig Topper 059b98e044 [ValueTracking] Use uint64_t for CarryIn in computeKnownBitsAddSub instead of a creating a temporary APInt. NFC
llvm-svn: 298688
2017-03-24 05:38:09 +00:00
Craig Topper 2bd9514e8c [ValueTracking] Convert more places to use setHighBits/setLowBits/setSignBit. NFCI
llvm-svn: 298683
2017-03-24 03:57:24 +00:00
Craig Topper 93683b6aff [ValueTracking] Use APInt::isNegative instead of using operator[BitWidth-1]. NFCI
llvm-svn: 298584
2017-03-23 07:06:42 +00:00
Craig Topper d73c6b4ef8 [ValueTracking] Use setAllBits/setSignBit/setLowBits/setHighBits. NFCI
llvm-svn: 298583
2017-03-23 07:06:39 +00:00
Craig Topper ad5c2d04f7 [ValueTracking] Make sure we keep range metadata information when calculating known bits for calls to bitreverse intrinsic.
llvm-svn: 298488
2017-03-22 07:22:49 +00:00
Craig Topper 57d8ca72d1 [ValueTracking] use setLowBits/setHighBits/setBitsFrom to replace |= getHighBits/getLowBits. NFCI
llvm-svn: 298486
2017-03-22 06:19:37 +00:00
Craig Topper 7cfd4a9d7a [ValueTracking] Remove deadish code from computeKnownBitsAddSub.
The code assigned to KnownZero, but later code unconditionally assigned over it. I'm pretty sure the later code can handle the same cases, and more, equally well.

llvm-svn: 298190
2017-03-18 18:21:46 +00:00
Craig Topper 3eb0d80de0 [ValueTracking] Add APInt::setSignBit and use it to replace ORing with getSignBit which will malloc if the bit width is larger than 64.
llvm-svn: 298180
2017-03-18 04:01:29 +00:00
Oliver Stannard 062041113f [ValueTracking] Out of range shifts might be undef
If it is possible for the RHS of a shift operation to be greater than or equal
to the bit-width, then the result might be undef, and we can't report any known
bits.
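
A hedged sketch of the conservative guard, written against the
pre-KnownBits APInt pair; the names are assumed. The caller would then
clear both KnownZero and KnownOne for the result instead of propagating
anything:

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // KnownZeroAmt describes the shift amount operand.
  static bool shiftMayBeOutOfRange(const APInt &KnownZeroAmt,
                                   unsigned BitWidth) {
    // Any bit not known to be zero could be one, so ~KnownZeroAmt is the
    // largest value the shift amount might take. If that can reach
    // BitWidth, the shift may produce undef.
    return (~KnownZeroAmt).uge(BitWidth);
  }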

In some cases, this was allowing a transformation in instcombine which widened
an undef value from i1 to i32, increasing the range of values that a function
could return.

Differential revision: https://reviews.llvm.org/D30781

llvm-svn: 297724
2017-03-14 10:13:17 +00:00
Sebastian Pop 4a4d245b19 Handle UnreachableInst in isGuaranteedToTransferExecutionToSuccessor
A block with an UnreachableInst does not transfer execution to a successor.
The problem was exposed by GVN-hoist. This patch fixes bug 32153.

Patch by Aditya Kumar.

Differential Revision: https://reviews.llvm.org/D30667

llvm-svn: 297254
2017-03-08 01:54:50 +00:00
Sanjoy Das 39a684d117 [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits
Summary:
Previously we used to return a bogus result, 0, for IR like `ashr %val,
-1`.

I've also added an assert checking that `ComputeNumSignBits` at least
returns 1.  That assert found an already checked in test case where we
were returning a bad result for `ashr %val, -1`.

Fixes PR32045.

Reviewers: spatel, majnemer

Reviewed By: spatel, majnemer

Subscribers: efriedma, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30311

llvm-svn: 296273
2017-02-25 20:30:45 +00:00
Sanjoy Das 5cd6c5cacf [ValueTracking] Make poison propagation more aggressive
Summary:
Motivation: fix PR31181 without regression (the actual fix is still in
progress).  However, the actual content of PR31181 is not relevant
here.

This change makes poison propagation more aggressive in the following
cases:

 1. poison * Val == poison, for any Val.  In particular, this changes
    existing intentional and documented behavior in these two cases:
     a. Val is 0
     b. Val is 2^k * N
 2. poison << Val == poison, for any Val
 3. getelementptr is poison if any input is poison

I think all of these are justified (and are axiomatically true in the
new poison / undef model):

1a: we need poison * 0 to be poison to allow transforms like these:

  A * (B + C) ==> A * B + A * C

If poison * 0 were 0 then the above transform could not be allowed
since e.g. we could have A = poison, B = 1, C = -1, making the LHS

  poison * (1 + -1) = poison * 0 = 0

and the RHS

  poison * 1 + poison * -1 = poison + poison = poison

1b: we need e.g. poison * 4 to be poison since we want to allow

  A * 4 ==> A + A + A + A

If poison * 4 were a value with all of its bits poison except the two low
bits, then we'd not be able to do this transform, since if A were poison
the LHS would only be "partially" poison while the RHS would be "fully"
poison.

2: Same reasoning as (1b); we'd like the following kinds of transforms to
be legal:

  A << 1 ==> A + A

Reviewers: majnemer, efriedma

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D30185

llvm-svn: 295809
2017-02-22 06:52:32 +00:00
Sanjoy Das 7b0b408973 [ValueTracking] clang-format a section I'm about to touch; NFC
(Whitespace only change)

llvm-svn: 295690
2017-02-21 02:42:42 +00:00
Sanjay Patel 97e4b98749 [ValueTracking] use nonnull argument attribute to eliminate null checks
Enhancing value tracking's analysis of null-ness was suggested in D27855, so here's a first attempt at that.

This is part of solving:
https://llvm.org/bugs/show_bug.cgi?id=28430

Differential Revision: https://reviews.llvm.org/D28204

llvm-svn: 294897
2017-02-12 15:35:34 +00:00
Sanjay Patel 54656ca7db [ValueTracking] emit a remark when we detect a conflicting assumption (PR31809)
This is a follow-up to D29395 where we try to be good citizens and let the user know that
we've probably gone off the rails.

This should allow us to resolve:
https://llvm.org/bugs/show_bug.cgi?id=31809

Differential Revision: https://reviews.llvm.org/D29404

llvm-svn: 294208
2017-02-06 18:26:06 +00:00
Sanjay Patel 52e4e6594e [ValueTracking] remove a FIXME for something we don't want to do; NFC
The comment was added with:
https://reviews.llvm.org/rL293773
...but there would be a cost to implement this and possibly no payoff.

llvm-svn: 293823
2017-02-01 22:27:34 +00:00
Sanjay Patel 25f6d710d9 [ValueTracking] avoid crashing from bad assumptions (PR31809)
A program may contain llvm.assume info that disagrees with other analysis. 
This may be caused by UB in the program, so we must not crash because of that.

As noted in the code comments:
https://llvm.org/bugs/show_bug.cgi?id=31809
...we can do better, but this at least avoids the assert/crash in the bug report.

Differential Revision: https://reviews.llvm.org/D29395

llvm-svn: 293773
2017-02-01 15:41:32 +00:00
Sanjay Patel 14a4b8185f [ValueTracking] clean up lookThroughCast; NFCI
1. Use auto with dyn_cast.
2. Don't use else after return.
3. Convert chain of 'else if' to switch.
4. Improve variable names.

llvm-svn: 293432
2017-01-29 16:34:57 +00:00
Justin Lebar 322c127bee [ValueTracking] Add comment that CannotBeOrderedLessThanZero does the wrong thing for powi.
Summary:
CannotBeOrderedLessThanZero(powi(x, exp)) returns true if
CannotBeOrderedLessThanZero(x).  But powi(-0, exp) is negative if exp is
odd, so we actually want to return SignBitMustBeZero(x).

Except that also isn't right, because we want to return true if x is
NaN, even if x has a negative sign bit.

What we really need in order to fix this is a consistent approach in
this function to handling the sign bit of NaNs.  Without this it's very
difficult to say what the correct behavior here is.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28927

llvm-svn: 293243
2017-01-27 00:58:34 +00:00
Justin Lebar 7e3184c412 [ValueTracking] Implement SignBitMustBeZero correctly for sqrt.
Summary:
Previously we assumed that the result of sqrt(x) always had 0 as its
sign bit.  But sqrt(-0) == -0.

Reviewers: hfinkel, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28928

llvm-svn: 293115
2017-01-26 00:10:26 +00:00
whitequark 16f1e5f1ca Mark @llvm.powi.* as safe to speculatively execute.
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the @llvm.powi.* intrinsics do not correspond to any libm
function and lack any defined error-handling semantics in LangRef.
They most certainly do not alter errno.

llvm-svn: 293041
2017-01-25 09:32:30 +00:00
David L. Jones d21529fa0d [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
Summary:
The LibFunc::Func enum holds enumerators named for libc functions.
Unfortunately, there are real situations, including libc implementations, where
function names are actually macros (musl uses "#define fopen64 fopen", for
example; any other transitively visible macro would have similar effects).

Strictly speaking, a conforming C++ Standard Library should provide any such
macros as functions instead (via <cstdio>). However, there are some "library"
functions which are not part of the standard, and thus not subject to this
rule (fopen64, for example). So, in order to be both portable and consistent,
the enum should not use the bare function names.

The old enum naming used a namespace LibFunc and an enum Func, with bare
enumerators. This patch changes LibFunc to be an enum with enumerators prefixed
with "LibFFunc_". (Unfortunately, a scoped enum is not sufficient to override
macros.)

There are additional changes required in clang.

Reviewers: rsmith

Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D28476

llvm-svn: 292848
2017-01-23 23:16:46 +00:00
Sanjay Patel 24c6f88e4c [ValueTracking] tighten up matchMinMax(); NFCI
This is similar to what the caller (matchSelectPattern()) does. In all
cases where we succeed in matching a min/max pattern, the values in
that pattern will be the values of the 'select', so hoist that and
remove a bunch of duplicated code.

llvm-svn: 292725
2017-01-21 17:51:25 +00:00
Sanjay Patel 0c1c70aef4 [ValueTracking] recognize variations of 'clamp' to improve codegen (PR31693)
By enhancing value tracking, we allow an existing min/max canonicalization to
kick in and improve codegen for several targets that have min/max instructions.

Unfortunately, recognizing min/max in value tracking may cause us to hit
a hack in InstCombiner::visitICmpInst() more often:
http://lists.llvm.org/pipermail/llvm-dev/2017-January/109340.html
...but I'm hoping we can remove that soon.

Correctness proofs based on Alive:

Name: smaxmin
Pre: C1 < C2
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp slt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp sgt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1

Name: sminmax
Pre: C1 > C2
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp sgt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp slt i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1

---------------------------------------- 
Optimization: smaxmin 
Done: 1 
Optimization is correct! 
---------------------------------------- 
Optimization: sminmax 
Done: 1 
Optimization is correct!



Name: umaxmin
Pre: C1 u< C2
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ult i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ugt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1

Name: uminmax
Pre: C1 u> C2
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ugt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ult i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1

---------------------------------------- 
Optimization: umaxmin 
Done: 1 
Optimization is correct! 
---------------------------------------- 
Optimization: uminmax 
Done: 1 
Optimization is correct! 
 

llvm-svn: 292660
2017-01-20 22:18:47 +00:00
Sanjay Patel 9666996563 [ValueTracking] recognize a 'not' of an assumed condition as false
Also, add the corresponding match to the AssumptionCache's 'Affected Values' list.

Differential Revision: https://reviews.llvm.org/D28485

llvm-svn: 292239
2017-01-17 18:15:49 +00:00
Chad Rosier 8520429bdd [ValueTracking] Extend known bits to understand @llvm.bitreverse.
Differential Revision: https://reviews.llvm.org/D28780

llvm-svn: 292233
2017-01-17 17:23:51 +00:00
Malcolm Parsons 17d266bc96 Remove unused lambda captures. NFC
llvm-svn: 291916
2017-01-13 17:12:16 +00:00
Hal Finkel 8a9a783f2c Make processing @llvm.assume more efficient - Add affected values to the assumption cache
Here's my second try at making @llvm.assume processing more efficient. My
previous attempt, which leveraged operand bundles, r289755, didn't end up
working: it did make assume processing more efficient but eliminating the
assumption cache made ephemeral value computation too expensive. This is a
more-targeted change. We'll keep the assumption cache, but extend it to keep a
map of affected values (i.e. values about which an assumption might provide
some information) to the corresponding assumption intrinsics. This allows
ValueTracking and LVI to find assumptions relevant to the value being queried
without scanning all assumptions in the function. The O(number of
assumptions in the function) work that ValueTracking was doing for every
known-bits query had become prohibitively expensive in some cases.
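
The consumer side then looks roughly like this; it mirrors the query
pattern used in ValueTracking, but treat the details as approximate:

  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  void walkRelevantAssumes(AssumptionCache &AC, const Value *V) {
    // Only the assumptions recorded as affecting V are visited, instead
    // of every llvm.assume in the function.
    for (auto &AssumeVH : AC.assumptionsFor(V)) {
      if (!AssumeVH)
        continue;                    // the assume was deleted
      CallInst *Assume = cast<CallInst>(AssumeVH);
      (void)Assume;                  // ...use it to refine known bits...
    }
  }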

As discussed during the review, this is a pragmatic fix that, longer term, will
likely be replaced by a more-principled solution (perhaps based on an extended
SSA form).

Differential Revision: https://reviews.llvm.org/D28459

llvm-svn: 291671
2017-01-11 13:24:24 +00:00
Matt Arsenault 1e0edbf03c InstSimplify: Eliminate fabs on known positive
llvm-svn: 291624
2017-01-11 00:33:24 +00:00
Xin Tong c13a8e84d1 Intrinsic::Bitreverse is safe to speculate
Summary: Intrinsic::Bitreverse is safe to speculate

Reviewers: hfinkel, mkuper, arsenm, jmolloy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D28471

llvm-svn: 291456
2017-01-09 17:57:08 +00:00
Sanjay Patel 4382997a13 [ValueTracking] remove stale comments; NFC
The checks were improved with:
https://reviews.llvm.org/rL290194

llvm-svn: 290826
2017-01-02 19:04:07 +00:00
Sanjoy Das 3bb2dbd665 Fix an issue with isGuaranteedToTransferExecutionToSuccessor
I'm not sure if this was intentional, but today
isGuaranteedToTransferExecutionToSuccessor returns true for readonly and
argmemonly calls that may throw.  This commit changes the function to
not implicitly infer nounwind this way.

Even if we eventually specify readonly calls as not throwing,
isGuaranteedToTransferExecutionToSuccessor is not the best place to
infer that.  We should instead teach FunctionAttrs or some other such
pass to tag readonly functions / calls as nounwind instead.

llvm-svn: 290794
2016-12-31 22:12:34 +00:00
Sanjoy Das 0945530d4d Avoid const_cast; NFC
llvm-svn: 290793
2016-12-31 22:12:31 +00:00
Sanjay Patel 7fd779f09f [ValueTracking] make dominator tree requirement explicit for isKnownNonNullFromDominatingCondition(); NFCI
I don't think this hole is currently exposed, but I crashed regression tests for
jump-threading and loop-vectorize after I added calls to isKnownNonNullAt() in
InstSimplify as part of trying to solve PR28430:
https://llvm.org/bugs/show_bug.cgi?id=28430

That's because they call into value tracking with a context instruction, but no
other parts of the query structure filled in.

For more background, see the discussion in:
https://reviews.llvm.org/D27855

llvm-svn: 290786
2016-12-31 17:37:01 +00:00
Matt Arsenault cb2a7eb1aa Use MaxDepth instead of repeating its value
llvm-svn: 290194
2016-12-20 19:06:15 +00:00
Daniel Jasper aec2fa352f Revert @llvm.assume with operator bundles (r289755-r289757)
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).

llvm-svn: 290086
2016-12-19 08:22:17 +00:00
Hal Finkel 3ca4a6bcf1 Remove the AssumptionCache
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and also we need much less
code...

llvm-svn: 289756
2016-12-15 03:02:15 +00:00
Hal Finkel cb9f78e1c3 Make processing @llvm.assume more efficient by using operand bundles
There was an efficiency problem with how we processed @llvm.assume in
ValueTracking (and other places). The AssumptionCache tracked all of the
assumptions in a given function. In order to find assumptions relevant to
computing known bits, etc. we searched every assumption in the function. For
ValueTracking, that means that we did O(#assumes * #values) work in InstCombine
and other passes (with a constant factor that can be quite large because we'd
repeat this search at every level of recursion of the analysis).

Several of us discussed this situation at the last developers' meeting, and
this implements the discussed solution: Make the values that an assume might
affect into operands of the assume itself. To avoid exposing this detail to
frontends and passes that need not worry about it, I've used the new
operand-bundle feature to add these extra call "operands" in a way that does
not affect the intrinsic's signature. I think this solution is relatively
clean. InstCombine adds these extra operands based on what ValueTracking, LVI,
etc. will need and then those passes need only search the users of the values
under consideration. This should fix the computational-complexity problem.

At this point, no passes depend on the AssumptionCache, and so I'll remove
that as a follow-up change.

Differential Revision: https://reviews.llvm.org/D27259

llvm-svn: 289755
2016-12-15 02:53:42 +00:00
Peter Collingbourne 235c275b20 IR, X86: Understand !absolute_symbol metadata on global variables.
Summary:
Attaching !absolute_symbol to a global variable does two things:
1) Marks it as an absolute symbol reference.
2) Specifies the value range of that symbol's address.
Teach the X86 backend to allow absolute symbols to appear in place of
immediates by extending the relocImm and mov64imm32 matchers. Start using
relocImm in more places where it is legal.

As previously proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/105800.html

Differential Revision: https://reviews.llvm.org/D25878

llvm-svn: 289087
2016-12-08 19:01:00 +00:00
Peter Collingbourne ab85225be4 IR: Change the gep_type_iterator API to avoid always exposing the "current" type.
Instead, expose whether the current type is an array or a struct, if an array
what the upper bound is, and if a struct the struct type itself. This is
in preparation for a later change which will make PointerType derive from
Type rather than SequentialType.

Differential Revision: https://reviews.llvm.org/D26594

llvm-svn: 288458
2016-12-02 02:24:42 +00:00
Yaxun Liu 02f75f31e0 Fix known zero bits for addrspacecast.
Currently LLVM assumes that a pointer addrspacecasted to a different addr space is equivalent to trunc or zext bitwise, which is not true. For example, in amdgcn target, when a null pointer is addrspacecasted from addr space 4 to 0, its value is changed from i64 0 to i32 -1.

This patch teaches LLVM not to assume known bits of addrspacecast instruction to its operand.

Differential Revision: https://reviews.llvm.org/D26803

llvm-svn: 287545
2016-11-21 15:42:31 +00:00
Sanjay Patel cfcc42bdc2 [ValueTracking] recognize even more variants of smin/smax
Similar to:
https://reviews.llvm.org/rL285499
https://reviews.llvm.org/rL286318

We can't minimally expose this in IR tests because we don't have min/max intrinsics,
but the difference is visible in codegen because SelectionDAGBuilder::visitSelect() 
uses matchSelectPattern().

We're not canonicalizing these patterns in IR (yet), so I don't expect there to be any
regressions as noted here:
http://lists.llvm.org/pipermail/llvm-dev/2016-November/106868.html

llvm-svn: 286776
2016-11-13 20:04:52 +00:00