Commit Graph

17700 Commits

Author SHA1 Message Date
Alex Shlyapnikov 09171aa31f Revert "[asan] Put ctor/dtor in comdat."
Speculative revert, some libfuzzer tests are affected.

This reverts commit r298756.

llvm-svn: 298889
2017-03-27 23:11:47 +00:00
Matthew Simpson b8ff4a4a70 [LV] Transform truncations of non-primary induction variables
The vectorizer tries to replace truncations of induction variables with new
induction variables having the smaller type. After r295063, this optimization
was applied to all integer induction variables, including non-primary ones.
When optimizing the truncation of a non-primary induction variable, we still
need to transform the new induction so that it has the correct start value.
This should fix PR32419.
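For illustration only (this loop and its names are not from the patch), `j` below is a non-primary induction variable with a non-zero start value, and the cast is the kind of truncation that now gets a correctly started narrow induction:

```cpp
// Hypothetical example: 'i' is the primary induction variable, 'j' a
// non-primary one starting at 'start'; the (short) cast is the truncation
// the vectorizer replaces with a narrower induction variable.
void store_narrow(short *out, int n, int start) {
  int j = start;
  for (int i = 0; i < n; ++i) {
    out[i] = static_cast<short>(j);
    ++j;
  }
}
```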

Reference: https://bugs.llvm.org/show_bug.cgi?id=32419
llvm-svn: 298882
2017-03-27 20:07:38 +00:00
Anna Thomas f57ae33381 [InstCombine] Avoid incorrect folding of select into phi nodes when incoming element is a vector type
Summary:
We are incorrectly folding selects into phi nodes when the incoming value of a phi
node is a constant vector. This optimization is done in `FoldOpIntoPhi` when the
select condition is a phi node with constant incoming values.
Without the fix, we are miscompiling (i.e. incorrectly folding the
select into the phi node) when the vector contains non-zero
elements.
This patch fixes the miscompile and we will correctly fold based on the
select vector operand (see added test cases).

Reviewers: majnemer, sanjoy, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31189

llvm-svn: 298845
2017-03-27 13:52:51 +00:00
Serge Pavlov b71bb80c2d [LoopUnroll] Remap references in peeled iteration
References in cloned blocks must be remapped prior to dominator
calculation.

Differential Revision: https://reviews.llvm.org/D31281

llvm-svn: 298811
2017-03-26 16:46:53 +00:00
Joerg Sonnenberger fa7367428a Split the SimplifyCFG pass into two variants.
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.

The switch-to-lookup-table conversion results in code that is more
difficult to analyze and optimize by other passes. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.
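As a purely illustrative example (not taken from the patch), a dense switch like the one below is a candidate for the lookup-table conversion; keeping it as explicit control flow until late lets inlining plus DCE prune cases once the argument's range is known:

```cpp
// Illustrative only: a dense switch that SimplifyCFG's late variant may turn
// into a constant lookup table.
int kind_to_size(int kind) {
  switch (kind) {
  case 0: return 1;
  case 1: return 2;
  case 2: return 4;
  case 3: return 8;
  default: return 0;
  }
}

// After inlining into a caller where 'kind' can only be 0 or 1, the remaining
// cases are dead; that is easier for DCE to exploit while the switch is still
// a switch rather than a table load.
int caller(bool b) { return kind_to_size(b ? 1 : 0); }
```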

Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a 2.6%
smaller binary.

Differential Revision: https://reviews.llvm.org/D30333

llvm-svn: 298799
2017-03-26 06:44:08 +00:00
Chandler Carruth 0d256c0f5d [IR] Make SwitchInst::CaseIt almost a normal iterator.
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.

Also make the semantics of iterating match what every other iterator
uses and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing or removing and assigning fresh values to the
iterator and the end.
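A minimal sketch of the resulting loop pattern, assuming the case iterators and the removeCase return value described above; the removal condition is left as a placeholder:

```cpp
// Sketch only; assumes SwitchInst::case_begin()/case_end() and a removeCase()
// that returns the iterator to the next case, as described in this commit.
#include "llvm/IR/Instructions.h"
using namespace llvm;

void pruneCases(SwitchInst *SI) {
  for (auto I = SI->case_begin(); I != SI->case_end();) {
    bool Remove = false; // hypothetical condition on *I goes here
    if (Remove)
      I = SI->removeCase(I); // removal invalidates I; continue from the result
    else
      ++I;
  }
}
```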

In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output and I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyways.

llvm-svn: 298791
2017-03-26 02:49:23 +00:00
Craig Topper 47596dd4cc [InstCombine] Change the interface of SimplifyDemandedBits so that it takes the instruction and operand instead of the Use.
The first thing it did was get the User for the Use to get the instruction back. This requires looking through the Uses for the User using the waymarking walk. That's pretty fast, but it's probably still better to just pass the Instruction we already had.

llvm-svn: 298772
2017-03-25 06:52:52 +00:00
Davide Italiano e9781e7b2f [NewGVN] Adjust NDEBUG markers.
This avoids 'used but not defined' warnings in Release builds
with GCC.

llvm-svn: 298760
2017-03-25 02:40:02 +00:00
Evgeniy Stepanov 71bb8f1ad0 [asan] Put ctor/dtor in comdat.
When possible, put ASan ctor/dtor in comdat.

The only reason not to is global registration, which can be
TU-specific. This is not the case when there are no instrumented
globals. This is also limited to ELF targets, because MachO does
not have comdat, and COFF linkers may GC comdat constructors.

The benefit of this is far fewer __asan_init() calls: one per DSO
instead of one per TU. It's also necessary for the upcoming
gc-sections-for-globals change on Linux, where multiple references to
section start symbols trigger quadratic behaviour in the gold linker.

llvm-svn: 298756
2017-03-25 01:01:11 +00:00
Craig Topper 8fbb74b5b2 Revert r298711 "[InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits"
Tsan bot is failing.

llvm-svn: 298745
2017-03-24 22:12:10 +00:00
Ivan Krasin c2124e185c Revert r298620: [LV] Vectorize GEPs
Reason: breaks linking Chromium with LLD + ThinLTO (a pass crashes)
LLVM bug: https://bugs.llvm.org//show_bug.cgi?id=32413

Original change description:

[LV] Vectorize GEPs

This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.

Original Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298735
2017-03-24 20:49:43 +00:00
Evgeniy Stepanov 64e872a91f [asan] Delay creation of asan ctor.
Create the constructor in the module pass.
This is needed for the GC-friendly globals change, where the constructor can be
put in a comdat in some cases, but we don't know about that in the function
pass.

llvm-svn: 298731
2017-03-24 20:42:15 +00:00
Matt Arsenault 4c7795dd31 AMDGPU: Fold rcp/rsq of undef to undef
llvm-svn: 298725
2017-03-24 19:04:57 +00:00
Matt Arsenault 18bb24a1be TTI: Split IsSimple in MemIntrinsicInfo
All this did before was assert in EarlyCSE.

llvm-svn: 298724
2017-03-24 18:56:43 +00:00
Teresa Johnson 428b9e0627 [ThinLTO] Correct counting of functions in inliner stats
Summary: Declarations need to be filtered out when counting functions.

Reviewers: eraman

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31336

llvm-svn: 298720
2017-03-24 17:59:06 +00:00
Craig Topper d4521c2fc2 [InstCombine] Provide a way to calculate KnownZero/One for Add/Sub in SimplifyDemandedUseBits without recursing into ComputeKnownBits
SimplifyDemandedUseBits for Add/Sub already recursed down LHS and RHS for simplifying bits. If that didn't provide any simplifications we fall back to calling computeKnownBits which will recurse again. Instead just take the known bits for LHS and RHS we already have and call into a new function in ValueTracking that can calculate the known bits given the LHS/RHS bits.
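As a toy illustration of why the operands' known bits are enough (this is not the ValueTracking code): when the low bits of both operands are known zero, the corresponding low bits of the sum are known zero without any further recursion:

```cpp
// Toy demonstration, not the actual implementation.
#include <cassert>
#include <cstdint>

int main() {
  uint32_t lhs = 0x100; // low two bits known zero
  uint32_t rhs = 0xabc; // low two bits known zero
  assert(((lhs + rhs) & 0x3u) == 0); // low two bits of the sum are zero too
  return 0;
}
```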

llvm-svn: 298711
2017-03-24 16:56:51 +00:00
Benjamin Kramer 46f5e2c47b Make GCC happy again.
llvm-svn: 298702
2017-03-24 14:15:35 +00:00
Daniel Berlin ffc30781f4 NewGVN: Small cleanup of two dominance related functions to make
them easier to understand.

llvm-svn: 298692
2017-03-24 06:33:51 +00:00
Daniel Berlin 0e9001131d NewGVN: Small cleanup of useless expression deletion, and don't uselessly create two expressions in symbolic store evaluation.
llvm-svn: 298691
2017-03-24 06:33:48 +00:00
Daniel Berlin 9d0796e5d0 NewGVN: Fix PR32403 - Handling of undef in phis was not quite correct
due to LLVM's view of phi nodes.  It would cause NewGVN not to fixpoint
in some interesting edge cases.

llvm-svn: 298687
2017-03-24 05:30:34 +00:00
Craig Topper 36f2e0eee8 [InstCombine] Use range-based for loop. NFC
llvm-svn: 298680
2017-03-24 02:58:02 +00:00
Craig Topper df73e7c5b7 [InstCombine] Fix 80 column violation I accidentally introduced. NFC
llvm-svn: 298679
2017-03-24 02:57:59 +00:00
Reid Kleckner 392f062675 [sancov] Don't instrument blocks with no insertion point
This prevents crashes when attempting to instrument functions containing
C++ try.

Sanitizer coverage will still fail at runtime when an exception is
thrown through a sancov instrumented function, but that seems marginally
better than what we have now. The full solution is to color the blocks
in LLVM IR and only instrument blocks that have an unambiguous color,
using the appropriate token.
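A minimal illustration (not from the patch) of the kind of function involved; on funclet-based EH targets some of its blocks have no valid insertion point, which sancov now tolerates:

```cpp
// Minimal C++ 'try' example; sancov now skips blocks with no insertion point
// instead of crashing while instrumenting functions like this.
void may_throw();

int guarded() {
  try {
    may_throw();
  } catch (...) {
    return -1;
  }
  return 0;
}
```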

llvm-svn: 298662
2017-03-23 23:30:41 +00:00
Dehao Chen 722e94061b Set the prof weight correctly for call instructions in DeadArgumentElimination.
Summary: In DeadArgumentElimination, the call instructions will be replaced. We also need to set the prof weights so that function inlining can find the correct profile.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31143

llvm-svn: 298660
2017-03-23 23:26:00 +00:00
Bryant Wong def79b21e4 [MetaRenamer] Don't rename library functions.
Library functions can have specific semantics that affect the behavior of
certain passes. DSE, for instance, gives special treatment to malloc-ed pointers
but not to pointers returned from an equivalently typed (but differently named)
function.

MetaRenamer ought not to alter program semantics, so library functions must
remain untouched.
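A small illustration (not from the patch) of the semantics at stake: DSE can delete the store below only because it recognizes malloc and free by name, which is exactly what renaming would break:

```cpp
#include <cstdlib>

void touch_and_drop() {
  int *p = static_cast<int *>(std::malloc(sizeof(int)));
  if (!p)
    return;
  *p = 42;       // dead store: the memory is freed without ever being read
  std::free(p);  // DSE relies on knowing these are really malloc/free
}
```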

Reviewers: mehdi_amini, majnemer, chandlerc, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D31304

llvm-svn: 298659
2017-03-23 23:21:07 +00:00
Dehao Chen 8c88671985 Disable loop unrolling and icp in SamplePGO ThinLTO compile phase
Summary:
loop unrolling and icp will make the sample profile annotation much harder in the backend, so disable these two optimizations in the ThinLTO compile phase.
Will add a test in cfe in a separate patch.

Reviewers: tejohnson

Reviewed By: tejohnson

Subscribers: mehdi_amini, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D31217

llvm-svn: 298646
2017-03-23 21:20:05 +00:00
Craig Topper 74494d0179 [InstCombine] Remove some code from visitAnd that dealt with trying to reduce the LHS of a sub to 0. This should now be fully handled by SimplifyDemandedInstructionBits.
Now that we call ShrinkDemandedConstant on the RHS of sub this should be taken care of. This code doesn't trigger on any in-tree regression tests, but did before ShrinkDemandedConstant was added to the RHS.

llvm-svn: 298644
2017-03-23 21:00:13 +00:00
Teresa Johnson 0c6a4ff8dc [ThinLTO] Add support for emitting minimized bitcode for thin link
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.

The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.

Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.

However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.

Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin linked bitcode, and typically is automatically generated from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).

Added various tests to ensure that we get identical index files out of
the thin link step.

Reviewers: mehdi_amini, pcc

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31027

llvm-svn: 298638
2017-03-23 19:47:39 +00:00
Matthew Simpson 4e7b71bc86 [LV] Vectorize GEPs
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.
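As a hypothetical illustration (not from the patch), each iteration below stores a pointer, so the address computation itself has to become a vector value:

```cpp
// Hypothetical loop: the stored value is a pointer, so the GEP computing
// &data[i] must be vectorized; previously it was scalarized and rebuilt
// lane-by-lane with insertelement before the store.
void collect_addresses(int *data, int **slots, int n) {
  for (int i = 0; i < n; ++i)
    slots[i] = &data[i];
}
```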

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.

Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298620
2017-03-23 16:29:58 +00:00
Matthew Simpson 1fb4064531 [LV] Delete unneeded scalar GEP creation code
The code for generating scalar base pointers in vectorizeMemoryInstruction is
not needed. We currently scalarize all GEPs and maintain the scalarized values
in VectorLoopValueMap. The GEP cloning in this unneeded code is the same as
that in scalarizeInstruction. The test cases that changed as a result of this
patch changed because we were able to reuse the scalarized GEP that we
previously generated instead of cloning a new one.

Differential Revision: https://reviews.llvm.org/D30587

llvm-svn: 298615
2017-03-23 16:07:21 +00:00
Dehao Chen 53a0c082d2 Do not set branch weight if the branch weight annotation is present.
Summary: ThinLTO will annotate the CFG twice. If the branch weight is set by the first annotation, we should not set the branch weight again in the second annotation because the first annotation is more accurate as there is less optimization that could affect debug info accuracy.

Reviewers: tejohnson, davidxl

Reviewed By: tejohnson

Subscribers: mehdi_amini, aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D31228

llvm-svn: 298602
2017-03-23 14:43:10 +00:00
Luqman Aden 3f807c91dc Preserve nonnull metadata on Loads through SROA & mem2reg.
Summary:
https://llvm.org/bugs/show_bug.cgi?id=31142 :

SROA was dropping the nonnull metadata on loads from allocas that got optimized out. This patch simply preserves nonnull metadata on loads through SROA and mem2reg.

Reviewers: chandlerc, efriedma

Reviewed By: efriedma

Subscribers: hfinkel, spatel, efriedma, arielb1, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D27114

llvm-svn: 298540
2017-03-22 19:16:39 +00:00
Peter Collingbourne f7691d8b41 IPO: Const correctness for summaries passed into passes.
Pass const qualified summaries into importers and unqualified summaries into
exporters. This lets us const-qualify the summary argument to thinBackend.

Differential Revision: https://reviews.llvm.org/D31230

llvm-svn: 298534
2017-03-22 18:22:59 +00:00
Peter Collingbourne 9a3f97977f IR: Fix a race condition in type id clients of ModuleSummaryIndex.
Add a const version of the getTypeIdSummary accessor that avoids
mutating the TypeIdMap.

Differential Revision: https://reviews.llvm.org/D31226

llvm-svn: 298531
2017-03-22 18:04:39 +00:00
Sanjay Patel 2f602cea41 [InstCombine] canonicalize insertelement of scalar constant ahead of insertelement of variable
insertelement (insertelement X, Y, IdxC1), ScalarC, IdxC2 -->
insertelement (insertelement X, ScalarC, IdxC2), Y, IdxC1

As noted in the code comment and seen in the test changes, the motivation is that by pulling
constant insertion up, we may be able to constant fold some insertelement instructions.

Differential Revision: https://reviews.llvm.org/D31196

llvm-svn: 298520
2017-03-22 17:10:44 +00:00
Evgeny Astigeevich 7823c66e05 r286814 made it possible for CallPenalty to be subtracted twice:
- First time, during calculation of the cost in InlineCost.cpp
- Second time, during calculation of the cost in Inliner.cpp

This patch fixes this.

Differential Revision: https://reviews.llvm.org/D31137

llvm-svn: 298496
2017-03-22 12:01:57 +00:00
Craig Topper 07f2915ad8 [InstCombine] Teach SimplifyDemandedUseBits to shrink Constants on the left side of subtracts
Summary: Subtracts can have constants on the left side, but we don't shrink them based on demanded bits. This patch fixes that to match the right hand side.
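A toy demonstration (not from the patch) of why shrinking the left-hand constant is sound: when only the low bits of the difference are demanded, the high bits of that constant cannot influence them, because borrows only propagate upward:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0x12345678u;
  uint8_t wide   = static_cast<uint8_t>(0xABCD0040u - x); // full constant
  uint8_t shrunk = static_cast<uint8_t>(0x40u - x);       // shrunk constant
  assert(wide == shrunk); // the demanded low byte is unchanged
  return 0;
}
```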

Reviewers: davide, majnemer, spatel, sanjoy, hfinkel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31119

llvm-svn: 298478
2017-03-22 04:03:53 +00:00
George Burgess IV 56c7e88c2c Let llvm.objectsize be conservative with null pointers
This adds a parameter to @llvm.objectsize that makes it return
conservative values if it's given null.

This fixes PR23277.

Differential Revision: https://reviews.llvm.org/D28494

llvm-svn: 298430
2017-03-21 20:08:59 +00:00
Dehao Chen 9907e9d860 Do not inline hot callsites for samplepgo in thinlto compile phase.
Summary: SamplePGO passes will be invoked twice in a ThinLTO build: once in the compile phase, the other in the backend. We want to make sure the IR in the second phase matches the hot part of the profile, so we do not want to inline hot callsites in the first phase.

Reviewers: tejohnson, eraman

Reviewed By: tejohnson

Subscribers: mehdi_amini, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D31201

llvm-svn: 298428
2017-03-21 19:55:36 +00:00
Reid Kleckner b518054b87 Rename AttributeSet to AttributeList
Summary:
This class is a list of AttributeSetNodes corresponding the function
prototype of a call or function declaration. This class used to be
called ParamAttrListPtr, then AttrListPtr, then AttributeSet. It is
typically accessed by parameter and return value index, so
"AttributeList" seems like a more intuitive name.

Rename AttributeSetImpl to AttributeListImpl to follow suit.

It's useful to rename this class so that we can rename AttributeSetNode
to AttributeSet later. AttributeSet is the set of attributes that apply
to a single function, argument, or return value.

Reviewers: sanjoy, javed.absar, chandlerc, pete

Reviewed By: pete

Subscribers: pete, jholewinski, arsenm, dschuff, mehdi_amini, jfb, nhaehnle, sbc100, void, llvm-commits

Differential Revision: https://reviews.llvm.org/D31102

llvm-svn: 298393
2017-03-21 16:57:19 +00:00
Yi Kong 019e1c4f99 Test commit access
Remove some trailing whitespaces.

llvm-svn: 298379
2017-03-21 14:49:19 +00:00
Artur Pilipenko 4cc6130f52 NFC. InstCombiner::visitFAdd extract LHSIntVal/RHSIntVal local variables
llvm-svn: 298359
2017-03-21 11:32:15 +00:00
Matt Arsenault 6b00d40900 InstCombine: Check source value precision when reducing cast intrinsic
Missed this check when porting from the libcall version.

llvm-svn: 298312
2017-03-20 21:59:24 +00:00
Evgeniy Stepanov c440572715 Revert r298158.
Revert "[asan] Fix dead stripping of globals on Linux."

OOM in gold linker.

llvm-svn: 298288
2017-03-20 18:45:34 +00:00
David Blaikie 795dc94614 Fix UB found by -Wtautological-undefined-compare
llvm-svn: 298279
2017-03-20 18:01:07 +00:00
Dehao Chen e593049fb0 Updates branch_weights annotation for call instructions during inlining.
Summary: The inliner should update the branch_weights annotation to scale it to the proper value.

Reviewers: davidxl, eraman

Reviewed By: eraman

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D30767

llvm-svn: 298270
2017-03-20 16:40:44 +00:00
Adrian Prantl 6d80a262d5 Use isa<> instead of dyn_cast<> (NFC).
llvm-svn: 298268
2017-03-20 16:39:41 +00:00
Craig Topper d92d2fc763 [InstCombine] Print a debug message when we constant fold an operand during worklist creation
InstCombine tries to constant fold instruction operands during worklist building, but we don't print that we're doing this.

We also set a change flag here that causes us to rebuild and rerun the worklist one more time even if processing the worklist itself created no additional changes. So in the log I saw two inst combine runs that visited all instructions without printing that anything was changed. I may be submitting another patch to remove the change flag unless I can find some reason why we should be doing that.

Differential Revision: https://reviews.llvm.org/D31091

llvm-svn: 298264
2017-03-20 16:31:14 +00:00
Daniel Berlin 12883b1673 Templatize parts of VNCoercion, and add constant-only versions of the functions to be used in NewGVN.
NFCI.

Summary:
This is ground work for the changes to enable coercion in NewGVN.
GVN doesn't care if they end up constant because it eliminates as it goes.
NewGVN cares.

IRBuilder and ConstantFolder deliberately present the same interface,
so we use this to our advantage to templatize our functions to make
them either constant only or not.

Reviewers: davide

Subscribers: llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30928

llvm-svn: 298262
2017-03-20 16:08:29 +00:00
Simon Pilgrim 00b34996b4 Use MutableArrayRef for APFloat::convertToInteger
As discussed on D31074, use MutableArrayRef for destination integer buffers to help assert before stack overflows happen.

llvm-svn: 298253
2017-03-20 14:40:12 +00:00
Simon Pilgrim 610ad9b53f Strip trailing whitespace
llvm-svn: 298249
2017-03-20 13:55:35 +00:00
Craig Topper b5c2bfa869 [IR] Remove some unneeded includes from Operator.h and fix cpp files that were transitively depending on it. NFC
llvm-svn: 298235
2017-03-20 05:08:41 +00:00
Xin Tong cbf04d95e6 Remove unnecessary IDom check
Summary: This IDom check seems unnecessary. In this case, a node on the dominator tree is always the IDom of its immediate children.

Reviewers: hfinkel, majnemer, dberlin

Reviewed By: dberlin

Subscribers: dberlin, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D26954

llvm-svn: 298232
2017-03-20 00:30:19 +00:00
Craig Topper ff9749f759 [InstCombine] Remove duplicate code in SimplifyDemandedUseBits for URem. NFC
llvm-svn: 298231
2017-03-19 21:45:57 +00:00
Xin Tong bcb17ecf04 Correct a rebase mistake.
Left out AA in jumpthreading SimplifyPartiallyRedundantLoad

llvm-svn: 298219
2017-03-19 15:41:46 +00:00
Xin Tong d67fb1b66e [JumpThreading] Perform phi-translation in SimplifyPartiallyRedundantLoad.
Summary:
When the load in SimplifyPartiallyRedundantLoad is from a phi pointer,
try to phi-translate it into the incoming values in the predecessors before
we search for available loads.
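A hypothetical C++ shape of the situation (names invented): the pointer loaded at the join block is a phi, and translating the load into each predecessor turns it into a value that is already available there:

```cpp
int sum_twice(bool c, int *a, int *b) {
  int *p;
  int x;
  if (c) {
    p = a;
    x = *a;
  } else {
    p = b;
    x = *b;
  }
  // 'p' is a phi of 'a' and 'b'; phi-translating '*p' into the predecessors
  // yields '*a' and '*b', both already loaded above.
  return x + *p;
}
```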

This needs https://reviews.llvm.org/D30524

Reviewers: davide, sanjoy, efriedma, dberlin, rengolin

Reviewed By: dberlin

Subscribers: junbuml, llvm-commits

Differential Revision: https://reviews.llvm.org/D30543

llvm-svn: 298217
2017-03-19 15:30:53 +00:00
Craig Topper 3a86a04404 [InstCombine] Use setHighBits/setLowBits/setBitsFrom in place of getLowBitsSet/getHighBitsSet.
llvm-svn: 298204
2017-03-19 05:49:16 +00:00
Daniel Berlin 46b72e6de6 NewGVN: Now that we have a better verifier, we can prove that we can erase the predicateuser set each time we mark it touched
llvm-svn: 298199
2017-03-19 00:07:32 +00:00
Daniel Berlin d43f0ee7e1 NewGVN: Remove dead code (for now)
llvm-svn: 298198
2017-03-19 00:07:27 +00:00
Craig Topper d55e153b87 [GVN] Fix accidental double storage of the function BasicBlock list in iterateOnFunction
Summary:
iterateOnFunction creates a ReversePostOrderTraversal object which does a post order traversal in its constructor and stores the results in an internal vector. Iteration over it just reads from the internal vector in reverse order.

The GVN code seems to be unaware of this and iterates over ReversePostOrderTraversal object and makes a copy of the vector into a local vector. (I think at one point in time we used a DFS here instead which would have required the local vector).

The net effect of this is that we have two vectors containing the basic block list. As I didn't want to expose the implementation detail of ReversePostOrderTraversal's constructor to GVN, I've changed the code to do an explicit post order traversal, storing into the local vector and then reverse-iterating over that.

I've also removed the reserve(256) since the ReversePostOrderTraversal wasn't doing that. I can add it back if we think it's important. Though it seemed weird that it wasn't based on the size of the function.
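A rough sketch of the traversal change described above; the helper names are assumed from the LLVM ADT headers rather than quoted from the patch:

```cpp
// Sketch only: one explicit post-order walk into a local vector, then a
// reverse iteration over it to get reverse post order.
#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
using namespace llvm;

void walkInRPO(Function &F) {
  SmallVector<BasicBlock *, 32> Blocks;
  for (BasicBlock *BB : post_order(&F))
    Blocks.push_back(BB);
  for (BasicBlock *BB : reverse(Blocks))
    (void)BB; // per-block GVN work would go here
}
```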

Reviewers: davide, anemet, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31084

llvm-svn: 298191
2017-03-18 18:24:41 +00:00
Daniel Berlin 06329a98e3 NewGVN: Greatly enhance the ability of the NewGVN verifier to detect
issues, subsuming previous verifier.

llvm-svn: 298188
2017-03-18 15:41:40 +00:00
Daniel Berlin 41b39169e2 NewGVN: Fix PHI evaluation bug exposed by new verifier. We were checking whether the incoming block was reachable instead of whether the specific edge was reachable
llvm-svn: 298187
2017-03-18 15:41:36 +00:00
Craig Topper 0f5063c754 [BuildLibCalls] emitPutChar should infer function attributes for putchar
When InstCombine calls into SimplifyLibCalls and it creates putchar calls, we don't infer the attributes. And since SimplifyLibCalls doesn't use InstCombine's IRBuilder, the calls don't end up in the worklist on this iteration of InstCombine. So they get picked up on the next iteration, where they cause an IR change. This of course causes InstCombine to run another iteration.

So this patch just gets the attributes right the first time. We already did this for puts and some other libcalls.
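For illustration (an assumed motivating case, not quoted from the patch), a call like the one below may be simplified by SimplifyLibCalls into a putchar call; with this change the newly created call gets its inferred attributes immediately instead of on the next InstCombine iteration:

```cpp
#include <cstdio>

int main() {
  std::printf("x"); // single-character printf, a candidate for putchar
  return 0;
}
```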

Differential Revision: https://reviews.llvm.org/D31094

llvm-svn: 298171
2017-03-17 23:48:02 +00:00
Evgeniy Stepanov c5aa6b9411 [asan] Fix dead stripping of globals on Linux.
Use a combination of !associated, comdat, @llvm.compiler.used and
custom sections to allow dead stripping of globals and their asan
metadata. Sometimes.

Currently this works on LLD, which supports SHF_LINK_ORDER with
sh_link pointing to the associated section.

This also works on BFD, which seems to treat comdats as
all-or-nothing with respect to linker GC. There is a weird quirk
where the "first" global in each link is never GC-ed because of the
section symbols.

At this moment it does not work on Gold (as in the globals are never
stripped).

Differential Revision: https://reviews.llvm.org/D30121

llvm-svn: 298158
2017-03-17 22:17:29 +00:00
Vassil Vassilev c70a8f2df1 [coverity] Fix uninit variable.
Patch by John Harvey!

llvm-svn: 298122
2017-03-17 20:58:08 +00:00
Rong Xu 8e06e80b87 [PGO] Change the internal options description. nfc.
llvm-svn: 298120
2017-03-17 20:51:44 +00:00
Rong Xu e60343d6b0 [PGO] Value profile for size of memory intrinsic calls
This patch annotates the value-site profile onto memory intrinsics.

Differential Revision: http://reviews.llvm.org/D31002

llvm-svn: 298110
2017-03-17 18:07:26 +00:00
Stanislav Mekhanoshin ee2dd785f6 Only unswitch loops with uniform conditions
Loop unswitching can be extremely harmful for a SIMT target. If the
hoisted condition is not uniform, a SIMT machine will execute both
clones of the loop sequentially. Therefore LoopUnswitch checks that the
condition is non-divergent.
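For illustration (a hypothetical kernel-style loop, not from the patch), the condition below is loop-invariant but varies per lane, so unswitching on it would force a SIMT machine to run both loop clones:

```cpp
// Hypothetical: 'flags[lane]' is hoistable out of the loop but divergent
// across lanes, so LoopUnswitch now leaves this loop alone on SIMT targets.
void scale(const bool *flags, float *out, int lane, int n) {
  for (int i = 0; i < n; ++i) {
    if (flags[lane])
      out[i] *= 2.0f;
    else
      out[i] *= 0.5f;
  }
}
```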

Since DivergenceAnalysis adds an expensive PostDominatorTree analysis
not needed for non-SIMT targets, a new option is added to avoid unneeded
analysis initialization. The method getAnalysisUsage is called when
TargetTransformInfo is not yet available, so we cannot use it there.
For that reason a new field, DivergentTarget, is added to PassManagerBuilder
to control the behavior, and a target sets this field.

Differential Revision: https://reviews.llvm.org/D30796

llvm-svn: 298104
2017-03-17 17:13:41 +00:00
Sanjoy Das c4e4dcdf64 [RSForGC] Handle vector GEPs
We were not handling getelementptr instructions of vector type before.
Since getelementptr instructions for vector types follow the same rule as
getelementptr instructions for non-vector types, we can just handle them
in the same way.

llvm-svn: 298028
2017-03-17 00:55:53 +00:00
Reid Kleckner 45707d4d5a Remove getArgumentList() in favor of arg_begin(), args(), etc
Users often call getArgumentList().size(), which is a linear way to get
the number of function arguments. arg_size(), on the other hand, is
constant time.

In general, the fact that arguments are stored in an iplist is an
implementation detail, so I've removed it from the Function interface
and moved all other users to the argument container APIs (arg_begin(),
arg_end(), args(), arg_size()).

Reviewed By: chandlerc

Differential Revision: https://reviews.llvm.org/D31052

llvm-svn: 298010
2017-03-16 22:59:15 +00:00
Rong Xu 60faea19f8 Resubmit r297897: [PGO] Value profile for size of memory intrinsic calls
r297897 inadvertently enabled annotation for memop profiling. This new patch
fixes it.

llvm-svn: 297996
2017-03-16 21:15:48 +00:00
Adrian Prantl 47ea6478ed Salvage debug info from instructions about to be deleted
[Reapplies r297971 and punting on finding a better API for findDbgValues()]

This patch improves debug info quality in InstCombine by looking at
values that are about to be deleted, checking whether there are any
dbg.value intrinsics referring to them, and potentially encoding the
semantics of the deleted instruction into the dbg.value's
DIExpression.

In the example in the testcase (which was extracted from XNU) there is a sequence of

 %4 = load %struct.entry*, %struct.entry** %next2, align 8, !dbg !41
 %5 = bitcast %struct.entry* %4 to i8*, !dbg !42
 %add.ptr4 = getelementptr inbounds i8, i8* %5, i64 -8, !dbg !43
 %6 = bitcast i8* %add.ptr4 to %struct.entry*, !dbg !44
 call void @llvm.dbg.value(metadata %struct.entry* %6, i64 0, metadata !20, metadata !21), !dbg !34

When these instructions are eliminated by instcombine one after
another, we can still salvage the otherwise dead debug info:

- Bitcasts have no effect, so have the dbg.value point to operand(0)
- Loads can be expressed via a DW_OP_deref
- Constant gep instructions can be replaced by DWARF expression arithmetic

The API introduced by this patch is not specific to instcombine and
can be useful in other places, too.

rdar://problem/30725338

Differential Revision: https://reviews.llvm.org/D30919

llvm-svn: 297994
2017-03-16 21:14:09 +00:00
Michael Kuperstein 2da2bfa088 [LoopUnroll] Don't peel loops where the latch isn't the exiting block
Peeling assumed this doesn't happen, but didn't check it.
This fixes PR32178.

Differential Revision: https://reviews.llvm.org/D30757

llvm-svn: 297993
2017-03-16 21:07:48 +00:00
Sanjay Patel 6105bb5eaf [InstCombine] avoid breaking up bitcasted vector min/max patterns (PR32306)
As the related tests show, we're not canonicalizing to this form for scalars or vectors yet,
but this solves the immediate problem in:
https://bugs.llvm.org/show_bug.cgi?id=32306

llvm-svn: 297989
2017-03-16 20:42:45 +00:00
Adrian Prantl fa9e84eb6d Revert commit r297971 because of issues reported by msan.
llvm-svn: 297982
2017-03-16 20:11:54 +00:00
Adrian Prantl 4a7781aa38 Fix unused variable warnings.
llvm-svn: 297973
2017-03-16 18:33:01 +00:00
Adrian Prantl 4377314a98 Salvage debug info from instructions about to be deleted
This patch improves debug info quality in InstCombine by looking at
values that are about to be deleted, checking whether there are any
dbg.value intrinsics referring to them, and potentially encoding the
semantics of the deleted instruction into the dbg.value's
DIExpression.

In the example in the testcase (which was extracted from XNU) there is a sequence of

  %4 = load %struct.entry*, %struct.entry** %next2, align 8, !dbg !41
  %5 = bitcast %struct.entry* %4 to i8*, !dbg !42
  %add.ptr4 = getelementptr inbounds i8, i8* %5, i64 -8, !dbg !43
  %6 = bitcast i8* %add.ptr4 to %struct.entry*, !dbg !44
  call void @llvm.dbg.value(metadata %struct.entry* %6, i64 0, metadata !20, metadata !21), !dbg !34

When these instructions are eliminated by instcombine one after
another, we can still salvage the otherwise dead debug info:

- Bitcasts have no effect, so have the dbg.value point to operand(0)
- Loads can be expressed via a DW_OP_deref
- Constant gep instructions can be replaced by DWARF expression arithmetic

The API introduced by this patch is not specific to instcombine and
can be useful in other places, too.

rdar://problem/30725338

Differential Revision: https://reviews.llvm.org/D30919

llvm-svn: 297971
2017-03-16 18:22:52 +00:00
Aditya Kumar 24f6ad51bb Fix: Refactor SimplifyCFG:canSinkInstructions [NFC]
Differential Revision: https://reviews.llvm.org/D30116

llvm-svn: 297955
2017-03-16 14:09:18 +00:00
Bjorn Pettersson c98dabb1a0 [InstCombine] Liberate assert in InstCombiner::visitZExt
Summary:
The call to canEvaluateZExtd in InstCombiner::visitZExt may
return with BitsToClear == SrcTy->getScalarSizeInBits(), but
there is an assert that BitsToClear should be smaller than
SrcTy->getScalarSizeInBits().

I have a test case that triggers the assert, but it only happens
for my downstream target. I've not been able to trigger it for
any upstream target.

The assert triggered for a piece of code such as this
  %shr1 = lshr i16 undef, 15
  ...
  %shr2 = lshr i16 %shr1, 1
  %conv = zext i16 %shr2 to i32

Normally the lshr instructions are constant folded before we
visit the zext (that is why it is so hard to reproduce).
The original pattern, before instcombine, is of course a lot more
complicated in my test case. The shift count in the second lshr
is for example determined by the outcome of a PHI instruction.
It seems like other rewrites by instcombine lead up to
the pattern above. And then the zext is pulled from the
worklist and visited (hitting the assert) before we detect
that the lshr instructions can be constant folded.

Anyway, since canEvaluateZExtd may return with BitsToClear
equal to SrcTy->getScalarSizeInBits(), and since the rewrite
that converts the expression type to avoid a zero extend also
works for the case where SrcBitsKept ends up being zero,
it should be OK to liberate the assert to
  assert(BitsToClear <= SrcTy->getScalarSizeInBits() &&
         "Unreasonable BitsToClear");

Reviewers: hfinkel

Reviewed By: hfinkel

Subscribers: hfinkel, llvm-commits

Differential Revision: https://reviews.llvm.org/D30993

llvm-svn: 297952
2017-03-16 13:22:01 +00:00
Eric Liu 971de62291 Revert "[PGO] Value profile for size of memory intrinsic calls"
This commit reverts r297897 and r297909.

llvm-svn: 297951
2017-03-16 13:16:35 +00:00
Chandler Carruth 814e0df1c5 [PM/Inliner] Fix a bug in r297374 where we would leave stale calls in
the work queue and crash when trying to visit them after deleting the
function containing those calls.

llvm-svn: 297940
2017-03-16 10:45:42 +00:00
Tobias Grosser 115c022282 [ADCE] Remove redundent code [NFC]
Summary:
In commit r289548 ([ADCE] Add code to remove dead branches) a redundant loop
nest was accidentally introduced which implements exactly the same
functionality as was already available right after it. This redundancy was
found while inspecting the ADCE code in the context of our recent
discussions on post-dominator modeling. The redundant code was also eliminated
by r296535 (which sparked the discussion), but only as part of a larger semantic
change to the post-dominance modeling. As this redundancy in [ADCE] is really
just an oversight completely independent of the post-dominance changes under
discussion, we remove it independently.

Reviewers: dberlin, david2050

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31023

llvm-svn: 297929
2017-03-16 03:59:23 +00:00
Vitaly Buka ca6ecd213a Revert "Revert "[PGO] Minor cleanup for count instruction in SelectInstVisitor.""
Previously reverted wrong revision.

This reverts commit r297910.

llvm-svn: 297911
2017-03-15 23:07:41 +00:00
Vitaly Buka de85ad895d Revert "[PGO] Minor cleanup for count instruction in SelectInstVisitor."
Fails LLVMFuzzer.LLVMFuzzer.value-profile-strncmp.test

This reverts commit r297892.

llvm-svn: 297910
2017-03-15 23:06:22 +00:00
Rong Xu 8acf76b6b9 Fix build failure from r297897.
llvm-svn: 297909
2017-03-15 23:00:19 +00:00
Rong Xu 4ed52798ce [PGO] Value profile for size of memory intrinsic calls
This patch adds the value profile support to profile the size parameter of
memory intrinsic calls: memcpy, memcmp, and memmove.

Differential Revision: http://reviews.llvm.org/D28965

llvm-svn: 297897
2017-03-15 21:47:27 +00:00
Rong Xu d709b0fe95 [PGO] Minor cleanup for count instruction in SelectInstVisitor.
Summary:
NSIs can be double-counted by different operations in
SelectInstVisitor. Sink the update to VM_counting mode only.
Also reset the value for each counting operation.

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: xur, llvm-commits

Differential Revision: https://reviews.llvm.org/D30999

llvm-svn: 297892
2017-03-15 21:05:24 +00:00
Sanjay Patel f1e1fba1b0 [EarlyCSE] reduce indent; NFCI
llvm-svn: 297886
2017-03-15 20:25:05 +00:00
Rong Xu a3bbf96eba [PGO] Refactor the code for value profile annotation
This patch refactors the code for value profile annotation to facilitate
adding other kinds of value profiles.

Differential Revision: http://reviews.llvm.org/D30989

llvm-svn: 297870
2017-03-15 18:23:39 +00:00
Eric Liu 8c7d28b2f1 Revert "Refactor SimplifyCFG:canSinkInstructions [NFC]"
This reverts commit r297839, which breaks Transforms/SimplifyCFG/sink-common-code.ll

llvm-svn: 297845
2017-03-15 15:29:42 +00:00
Aditya Kumar ee55bf3e34 Refactor SimplifyCFG:canSinkInstructions [NFC]
llvm-svn: 297839
2017-03-15 14:26:45 +00:00
Fiona Glaser a9bd572b6f MemCpyOptimizer: don't create new addrspace casts
This isn't safe on all targets, and since we don't have a way
to know it's safe, avoid doing it for now.

llvm-svn: 297788
2017-03-14 22:37:38 +00:00
Dehao Chen 4a435e0896 SamplePGO ThinLTO ICP fix for local functions.
Summary:
In SamplePGO, if the profile is collected from a non-LTO binary and used to drive ThinLTO, indirect call promotion may fail because ThinLTO adjusts local function names to avoid conflicts. There are two places where the mismatch can happen:

1. thin-link prepends SourceFileName to the front of FuncName to build the GUID (GlobalValue::getGlobalIdentifier). Unlike instrumentation FDO, SamplePGO does not use the PGOFuncName scheme and therefore the indirect call target profile data contains a hash of the OriginalName.
2. backend compiler promotes some local functions to global and appends .llvm.{$ModuleHash} to the end of the FuncName to derive PromotedFunctionName

This patch makes a best effort to find the GUID from the original local function name (in the profile) and use that in ICP promotion, and in the SamplePGO matching that happens in the backend after importing/inlining:

1. in thin-link, it builds the map from OriginalName to GUID so that when thin-link reads in indirect call target profile (represented by OriginalName), it knows which GUID to import.
2. in backend compiler, if sample profile reader cannot find a profile match for PromotedFunctionName, it will try to find if there is a match for OriginalFunctionName.
3. in the backend compiler, we build a symbol table entry for OriginalFunctionName that points to the same symbol as PromotedFunctionName, so that ICP can find the correct target to promote.

Reviewers: mehdi_amini, tejohnson

Reviewed By: tejohnson

Subscribers: llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30754

llvm-svn: 297757
2017-03-14 17:33:01 +00:00
Sanjay Patel a0a5682d00 [InstCombine] improve readability; NFCI
llvm-svn: 297755
2017-03-14 17:27:27 +00:00
Gil Rapaport 28d0f8ddf7 [LV] Refactor cross-iteration phi's back-patching; NFC
This patch refactors the PHisToFix loop as follows:

- The loop itself now resides in its own method.
- The new method iterates on scalar-loop's header; the PHIsToFix map formerly
  propagated as an output parameter and filled during phi widening is removed.
- The code handling reductions is moved into its own method, similar to the
  existing fixFirstOrderRecurrence().

Differential Revision: https://reviews.llvm.org/D30755

llvm-svn: 297740
2017-03-14 13:50:47 +00:00
Ayal Zaks 928ec40584 [LV] Refactor Cost Model's selectVectorizationFactor(); NFC
Refactoring Cost Model's selectVectorizationFactor() so that it handles only the
selection of the best VF from a pre-computed range of candidate VF's, extracting
early-exit criteria and the computation of a MaxVF upper-bound to other methods,
all driven by a newly introduced LoopVectorizationPlanner.

Differential Revision: https://reviews.llvm.org/D30653

llvm-svn: 297737
2017-03-14 13:07:04 +00:00
Tobias Grosser 335b6bf208 Fix typos in ADCE comments
llvm-svn: 297726
2017-03-14 10:18:11 +00:00
Jonas Paulsson a48ea231c0 [TargetTransformInfo] getIntrinsicInstrCost() scalarization estimation improved
getIntrinsicInstrCost() used to only compute scalarization cost based on types.
This patch improves this so that the actual arguments are checked when they are
available, in order to handle only unique non-constant operands.

Tests updates:

Analysis/CostModel/X86/arith-fp.ll
Transforms/LoopVectorize/AArch64/interleaved_cost.ll
Transforms/LoopVectorize/ARM/interleaved_cost.ll

The improvement in getOperandsScalarizationOverhead() to differentiate on
constants made it necessary to update the interleaved_cost.ll tests even
though they do not relate to intrinsics.

Review: Hal Finkel
https://reviews.llvm.org/D29540

llvm-svn: 297705
2017-03-14 06:35:36 +00:00
Matt Arsenault d81f557fe2 AMDGPU: Fold icmp/fcmp into icmp intrinsic
The typical use is a library vote function which
compares to 0. Fold the user condition into the intrinsic.

llvm-svn: 297650
2017-03-13 18:14:02 +00:00
Adrian Prantl 140a8569ce API gardening: Rename FindAllocaDbgValue to findDbgValue (NFC)
and have it use SmallVectorImpl.

There is nothing specific about allocas in this function.

llvm-svn: 297643
2017-03-13 17:20:47 +00:00
Gil Rapaport 00cb43908c [LV] Set memcheck metadata also for VF==1
This commit is a follow-up on r297580. It fixes the FIXME added temporarily
by that commit to keep the removal of Unroller's specialized version of
scalarizeInstruction() an NFC. See https://reviews.llvm.org/D30715 for details.

llvm-svn: 297610
2017-03-13 10:23:46 +00:00
Gil Rapaport a1e5a37d3f [LV] A unified scalarizeInstruction() for Vectorizer and Unroller; NFC
Unroller's specialized scalarizeInstruction() is mostly duplicating Vectorizer's
variant. OTOH Vectorizer's scalarizeInstruction() already supports the special
case of VF==1 except for avoiding mask-bit extraction in that case. This patch
removes Unroller's specialized version in favor of a unified method.

The only functional difference between the two variants seems to be setting
memcheck metadata for loads and stores only in Vectorizer's variant, which is a
bug in Unroller. To keep this patch an NFC the unified method doesn't set
memcheck metadata for VF==1.

Differential Revision: https://reviews.llvm.org/D30715

llvm-svn: 297580
2017-03-12 12:31:38 +00:00
Ayal Zaks 09cf3121d8 Test commit.
llvm-svn: 297579
2017-03-12 09:48:06 +00:00
Daniel Berlin 64e689938d Split NewGVN class into a legacy pass and an impl, instead of a merged class.
llvm-svn: 297576
2017-03-12 04:46:45 +00:00
Daniel Berlin cd07a0f685 VNCoercion: Make the function signatures all consistent
llvm-svn: 297537
2017-03-11 00:51:01 +00:00
Peter Collingbourne 14dcf02fcb WholeProgramDevirt: Implement export/import support for VCP.
Differential Revision: https://reviews.llvm.org/D30017

llvm-svn: 297503
2017-03-10 20:13:58 +00:00
Peter Collingbourne 59675ba0f8 WholeProgramDevirt: Implement export/import support for unique ret val opt.
Differential Revision: https://reviews.llvm.org/D29917

llvm-svn: 297502
2017-03-10 20:09:11 +00:00
Daniel Berlin 5c338ff7a3 NewGVN: Rename InitialClass to TOP, which is what most people would expect it to be called
llvm-svn: 297494
2017-03-10 19:05:04 +00:00
Michael Kuperstein 5fb39a7966 [SLP] Revert everything that has to do with memory access sorting.
This reverts r293386, r294027, r294029 and r296411.

Turns out the SLP tree isn't actually a "tree" and we don't handle
accessing the same packet of loads in several different orders well,
causing miscompiles.

Revert until we can fix this properly.

llvm-svn: 297493
2017-03-10 18:59:07 +00:00
George Rimar 5d8aea1009 WholeProgramDevirt: Fixed compilation error under MSVS2015.
It was introduced in:

r296945
WholeProgramDevirt: Implement exporting for single-impl devirtualization.
---------------------
r296939
WholeProgramDevirt: Add any unsuccessful llvm.type.checked.load devirtualizations to the list of llvm.type.test users.
---------------------

Microsoft Visual Studio Community 2015
Version 14.0.23107.0 D14REL
Does not compile that code without additional brackets, showing multiple errors like below:

WholeProgramDevirt.cpp(1216): error C2958: the left bracket '[' found at 'c:\access_softek\llvm\lib\transforms\ipo\wholeprogramdevirt.cpp(1216)' was not matched correctly
WholeProgramDevirt.cpp(1216): error C2143: syntax error: missing ']' before '}'
WholeProgramDevirt.cpp(1216): error C2143: syntax error: missing ';' before '}'
WholeProgramDevirt.cpp(1216): error C2059: syntax error: ']'

llvm-svn: 297451
2017-03-10 10:31:56 +00:00
Matt Arsenault a3bdd8f27b AMDGPU: Fix insertion point when reducing load intrinsics
The insertion point may be later than the next instruction,
so it is necessary to set it when replacing the call.

llvm-svn: 297439
2017-03-10 05:25:49 +00:00
Daniel Berlin 5ac9179f6c Move memory coercion functions from GVN.cpp to VNCoercion.cpp so they can be shared between GVN and NewGVN.
Summary:
These are the functions used to determine when values of loads can be
extracted from stores, etc, and to perform the necessary insertions to
do this.  There are no changes to the functions themselves except
reformatting, and one case where memdep was informed of a removed load
(which was pushed into the caller).

Reviewers: davide

Subscribers: mgorny, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30478

llvm-svn: 297438
2017-03-10 04:54:10 +00:00
Daniel Berlin e3e69e1680 NewGVN: Rewrite DCE during elimination so we do it as well as old GVN did.
llvm-svn: 297428
2017-03-10 00:32:33 +00:00
Daniel Berlin c0e008d807 NewGVN: Rename a few things for clarity
llvm-svn: 297427
2017-03-10 00:32:26 +00:00
Daniel Berlin 04d9e746f1 Add support for DenseMap/DenseSet count and find using const pointers
Summary:
Similar to SmallPtrSet, this makes find and count work with both const
references and const pointers.

Reviewers: dblaikie

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D30713

llvm-svn: 297424
2017-03-10 00:25:26 +00:00
Matt Arsenault efe949cc67 AMDGPU: Support for SimplifyDemandedVectorElts for load intrinsics
llvm-svn: 297408
2017-03-09 20:34:27 +00:00
Rong Xu 0a2a1311df Minor format change. nfc.
llvm-svn: 297400
2017-03-09 19:08:55 +00:00
Chandler Carruth 20e588e1af [PM/Inliner] Make the new PM's inliner process call edges across an
entire SCC before iterating on newly-introduced call edges resulting
from any inlined function bodies.

This more closely matches the behavior of the old PM's inliner. While it
wasn't really clear to me initially, this behavior is actually essential
to the inliner behaving reasonably in its current design.

Because the inliner is fundamentally a bottom-up inliner and all of its
cost modeling is designed around that it often runs into trouble within
an SCC where we don't have any meaningful bottom-up ordering to use. In
addition to potentially cyclic, infinite inlining that we block with the
inline history mechanism, it can also take seemingly simple call graph
patterns within an SCC and turn them into *insanely* large functions by
accidentally working top-down across the SCC without any of the
threshold limitations that traditional top-down inliners use.

Consider this diabolical monster.cpp file that Richard Smith came up
with to help demonstrate this issue:
```
template <int N> extern const char *str;

void g(const char *);

template <bool K, int N> void f(bool *B, bool *E) {
  if (K)
    g(str<N>);
  if (B == E)
    return;
  if (*B)
    f<true, N + 1>(B + 1, E);
  else
    f<false, N + 1>(B + 1, E);
}
template <> void f<false, MAX>(bool *B, bool *E) { return f<false, 0>(B, E); }
template <> void f<true, MAX>(bool *B, bool *E) { return f<true, 0>(B, E); }

extern bool *arr, *end;
void test() { f<false, 0>(arr, end); }
```

When compiled with '-DMAX=N' for various values of N, this will create an SCC
with a reasonably large number of functions. Previously, the inliner would try
to exhaust the inlining candidates in a single function before moving on. This,
unfortunately, turns it into a top-down inliner within the SCC. Because our
thresholds were never built for that, we will incrementally decide that it is
always worth inlining and proceed to flatten the entire SCC into that one
function.

What's worse, we'll then proceed to the next function, and do the exact same
thing except we'll skip the first function, and so on. And at each step, we'll
also make some of the constant factors larger, which is awesome.

The fix in this patch is the obvious one which makes the new PM's inliner use
the same technique used by the old PM: consider all the call edges across the
entire SCC before beginning to process call edges introduced by inlining. The
result of this is essentially to distribute the inlining across the SCC so that
every function incrementally grows toward the inline thresholds rather than
allowing the inliner to grow one of the functions vastly beyond the threshold.
The code for this is a bit awkward, but it works out OK.

We could consider in the future doing something more powerful here such as
prioritized order (via lowest cost and/or profile info) and/or a code-growth
budget per SCC. However, both of those would require really substantial work
both to design the system in a way that wouldn't break really useful
abstraction decomposition properties of the current inliner and to be tuned
across a reasonably diverse set of code and workloads. It also seems really
risky in many ways. I have only found a single real-world file that triggers
the bad behavior here and it is generated code that has a pretty pathological
pattern. I'm not worried about the inliner not doing an *awesome* job here as
long as it does *ok*. On the other hand, the cases that will be tricky to get
right in a prioritized scheme with a budget will be more common and idiomatic
for at least some frontends (C++ and Rust at least). So while these approaches
are still really interesting, I'm not in a huge rush to go after them. Staying
even closer to the existing PM's behavior, especially when this easy to do,
seems like the right short to medium term approach.

I don't really have a test case that makes sense yet... I'll try to find a
variant of the IR produced by the monster template metaprogram that is both
small enough to be sane and large enough to clearly show when we get this wrong
in the future. But I'm not confident this exists. And the behavior change here
*should* be unobservable without snooping on debug logging. So there isn't
really much to test.

The test case updates come from two incidental changes:
1) We now visit functions in an SCC in the opposite order. I don't think there
   really is a "right" order here, so I just update the test cases.
2) We no longer compute some analyses when an SCC has no call instructions that
   we consider for inlining.

llvm-svn: 297374
2017-03-09 11:35:40 +00:00
Adam Nemet 8c83386f89 [SLP] Mark values in Dot that need to be extracted
llvm-svn: 297361
2017-03-09 05:48:03 +00:00
Peter Collingbourne 0152c8156b WholeProgramDevirt: Implement importing for uniform ret val opt.
Differential Revision: https://reviews.llvm.org/D29854

llvm-svn: 297350
2017-03-09 01:11:15 +00:00
Peter Collingbourne 6d284fab20 WholeProgramDevirt: Implement importing for single-impl devirtualization.
Differential Revision: https://reviews.llvm.org/D29844

llvm-svn: 297333
2017-03-09 00:21:25 +00:00
Teresa Johnson d820447212 Perform symbol binding for .symver versioned symbols
Summary:
In a .symver assembler directive like:
.symver name, name2@@nodename
"name2@@nodename" should get the same symbol binding as "name".

While the ELF object writer updates the symbol binding for .symver
aliases before emitting the object file, the same is not done when the module
inline assembly is handled by the RecordStreamer, causing the wrong
behavior in *LTO mode.

E.g. when "name" is global, "name2@@nodename" must also be marked as
global. Otherwise, the symbol is skipped when iterating over the LTO
InputFile symbols (InputFile::Symbol::shouldSkip). So, for example,
when performing any *LTO via the gold-plugin, the versioned symbol
definition is not recorded by the plugin and passed back to the
linker. If the object was in an archive, and there were no other symbols
needed from that object, the object would not be included in the final
link and references to the versioned symbol are undefined.

The llvm-lto2 tests added will give an error about an unused symbol
resolution without the fix.

Reviewers: rafael, pcc

Reviewed By: pcc

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D30485

llvm-svn: 297332
2017-03-09 00:19:49 +00:00
Evgeniy Stepanov 8537d9994d Don't merge global constants with non-dbg metadata.
!type metadata can not be dropped. An alternative to this is adding
!type metadata from the replaced globals to the replacement, but that
may weaken type tests and make them slower at the same time.

The merged global gets !dbg metadata from replaced globals, and can
end up with multiple debug locations.

llvm-svn: 297327
2017-03-09 00:03:37 +00:00
George Burgess IV ecb95f58a2 [MemCpyOpt] clang-format + trim the legacy pass. NFC.
None of the declarations below `// Helper functions` seem to have
definitions anymore.

llvm-svn: 297309
2017-03-08 21:28:19 +00:00
Adam Nemet 95da05c3f5 [SLP] Visualize SLP trees with -view-slp-tree
Analyzing larger trees is extremely difficult with the current debug output so
this adds GraphTraits and DOTGraphTraits on top of the VectorizableTree data
structure.  We can now display the SLP trees with Graphviz as in
https://reviews.llvm.org/F3132765.

I decorated the graph where a value needs to be gathered for one reason or
another.  These are the red nodes.

There are other improvement I am planning to make as I work through my case
here.  For example, I would also like to mark nodes that need to be extracted.

Differential Revision: https://reviews.llvm.org/D30731

llvm-svn: 297303
2017-03-08 18:47:50 +00:00
Matthew Simpson 3388de1349 [LV] Select legal insert point when fixing first-order recurrences
Because IRBuilder performs constant-folding, it's not guaranteed that an
instruction in the original loop maps to an instruction in the vector loop. It
could map to a constant vector instead. The handling of first-order recurrences
was incorrectly making this assumption when setting the IRBuilder's insert
point.

llvm-svn: 297302
2017-03-08 18:18:20 +00:00
Jun Bum Lim ac170872b2 [JumpThread] Use AA in SimplifyPartiallyRedundantLoad()
Summary: Use AA when scanning to find an available load value.

Reviewers: rengolin, mcrosier, hfinkel, trentxintong, dberlin

Reviewed By: rengolin, dberlin

Subscribers: aemerson, dberlin, llvm-commits

Differential Revision: https://reviews.llvm.org/D30352

llvm-svn: 297284
2017-03-08 15:22:30 +00:00
Sanjay Patel 62906af379 [InstCombine] avoid crashing on shuffle shrinkage when input type is not same as result type
llvm-svn: 297280
2017-03-08 15:02:23 +00:00
Sam Parker 0f4db38c20 [LoopRotate] Propagate dbg.value intrinsics
Recommitting patch which was previously reverted in r297159. These
changes should address the casting issues.

The original patch enables dbg.value intrinsics to be attached to
newly inserted PHI nodes.

Differential Review: https://reviews.llvm.org/D30701

llvm-svn: 297269
2017-03-08 09:56:22 +00:00
Davide Italiano b6ddd7a437 [SCCP] Merge markOverdefined and markAnythingOverdefined.
There's no need to have two separate APIs.

llvm-svn: 297253
2017-03-08 01:26:37 +00:00
Sanjay Patel fe9705149b [InstCombine] shrink truncated insertelement into undef vector
This is the 2nd part of solving:
http://lists.llvm.org/pipermail/llvm-dev/2017-February/110293.html

D30123 moves the trunc ahead of the shuffle, and this moves the trunc ahead of the insertelement. 
We're limiting this transform to undef rather than any constant to avoid backend problems.

Differential Revision: https://reviews.llvm.org/D30137

llvm-svn: 297242
2017-03-07 23:27:14 +00:00
Evgeniy Stepanov 7a5cfa9a11 Fix one-after-the-end type metadata handling in globalsplit.
Itanium ABI may have an address point one byte after the end of a
vtable. When such vtable global is split, the !type metadata needs to
follow the right vtable.

Differential Revision: https://reviews.llvm.org/D30716

llvm-svn: 297236
2017-03-07 22:18:48 +00:00
Sanjay Patel 53fa17a014 [InstCombine] shrink truncated splat shuffle (2nd try)
This was committed at r297155 and reverted at r297166 because of an
over-reaching clang test. That should be fixed with r297189.

This is one part of solving a recent bug report:
http://lists.llvm.org/pipermail/llvm-dev/2017-February/110293.html

This keeps with our general approach: changing arbitrary shuffles is off-limits,
but changing splat is ok. The transform is very similar to the existing
shrinkBitwiseLogic() canonicalization.

Differential Revision: https://reviews.llvm.org/D30123

llvm-svn: 297232
2017-03-07 21:45:16 +00:00
Gor Nishanov c52006ab09 [coroutines] Add handling for unwind coro.ends
Summary:
The purpose of the coro.end intrinsic is to allow frontends to mark the cleanup and
other code that is only relevant during the initial invocation of the coroutine
and should not be present in resume and destroy parts.

In landing pads coro.end is replaced with an appropriate instruction to unwind to
the caller. The handling of coro.end differs depending on whether the target is
using the landingpad or WinEH exception model.

For landingpad based exception model, it is expected that frontend uses the
`coro.end`_ intrinsic as follows:

```
    ehcleanup:
      %InResumePart = call i1 @llvm.coro.end(i8* null, i1 true)
      br i1 %InResumePart, label %eh.resume, label %cleanup.cont

    cleanup.cont:
      ; rest of the cleanup

    eh.resume:
      %exn = load i8*, i8** %exn.slot, align 8
      %sel = load i32, i32* %ehselector.slot, align 4
      %lpad.val = insertvalue { i8*, i32 } undef, i8* %exn, 0
      %lpad.val29 = insertvalue { i8*, i32 } %lpad.val, i32 %sel, 1
      resume { i8*, i32 } %lpad.val29

```
The `CoroSplit` pass replaces `coro.end` with ``True`` in the resume functions,
thus leading to an immediate unwind to the caller, whereas in the start function it
is replaced with ``False``, thus allowing execution to proceed to the rest of the cleanup
code that is only needed during the initial invocation of the coroutine.

For Windows Exception handling model, a frontend should attach a funclet bundle
referring to an enclosing cleanuppad as follows:

```
    ehcleanup:
      %tok = cleanuppad within none []
      %unused = call i1 @llvm.coro.end(i8* null, i1 true) [ "funclet"(token %tok) ]
      cleanupret from %tok unwind label %RestOfTheCleanup
```

The `CoroSplit` pass, if the funclet bundle is present, will insert
``cleanupret from %tok unwind to caller`` before
the `coro.end`_ intrinsic and will remove the rest of the block.

Reviewers: majnemer

Reviewed By: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D25543

llvm-svn: 297223
2017-03-07 21:00:54 +00:00
Xin Tong ac2b5767af [JumpThread] Simplify CmpInst-as-Condition branch-folding a bit.
Summary: Simplify CmpInst-as-Condition branch-folding a bit.

Reviewers: sanjoy, efriedma

Reviewed By: efriedma

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30429

llvm-svn: 297186
2017-03-07 18:59:09 +00:00
Matthew Simpson c86b2134c7 [LV] Consider users that are memory accesses in uniforms expansion step
When expanding the set of uniform instructions beyond the seed instructions
(e.g., consecutive pointers), we mark a new instruction uniform if all its
loop-varying users are uniform. We should also allow users that are consecutive
or interleaved memory accesses. This fixes cases where we have an instruction
that is used as the pointer operand of a consecutive access but also used by a
non-memory instruction that later becomes uniform as part of the expansion.

llvm-svn: 297179
2017-03-07 18:47:30 +00:00
Sanjay Patel 6d30606168 revert r297155 because there's a clang test that depends on InstCombine:
tools/clang/test/CodeGen/zvector.c

llvm-svn: 297166
2017-03-07 17:41:45 +00:00
Adrian Prantl d4056501fb Revert "Strip debug info when inlining into a nodebug function."
This reverts commit r296488.

As noted by David Blaikie on llvm-commits, I overlooked the case of a
debug function being inlined into a nodebug function being inlined
into a debug function.

llvm-svn: 297163
2017-03-07 17:28:57 +00:00
Nico Weber 3b2f0094d7 Revert r297132, it caused PR32171
llvm-svn: 297159
2017-03-07 17:23:52 +00:00
Sanjay Patel defdb7bed5 [InstCombine] shrink truncated splat shuffle
This is one part of solving a recent bug report:
http://lists.llvm.org/pipermail/llvm-dev/2017-February/110293.html

This keeps with our general approach: changing arbitrary shuffles is off-limits,
but changing splat is ok. The transform is very similar to the existing 
shrinkBitwiseLogic() canonicalization.

Differential Revision: https://reviews.llvm.org/D30123

llvm-svn: 297155
2017-03-07 16:10:36 +00:00
Sam Parker 6ec5fdbc94 [LoopRotate] Update dbg.value intrinsics
Propagate debug info through the newly inserted PHI nodes.

Differential Revision: https://reviews.llvm.org/D30190

llvm-svn: 297132
2017-03-07 09:34:25 +00:00
Sanjoy Das 30c3538e2e [LoopUnrolling] Fix loop size check for peeling
Summary:
We should check if loop size allows us to peel at least one iteration
before we do so.

Patch by Max Kazantsev!

Reviewers: sanjoy, mkuper, efriedma

Reviewed By: mkuper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30632

llvm-svn: 297122
2017-03-07 06:03:15 +00:00
Michael Kuperstein 768d013a03 [SLP] Revert r296863 due to miscompiles.
Details and reproducer are on the email thread for r296863.

llvm-svn: 297103
2017-03-06 23:54:51 +00:00
Sanjay Patel c3b4735b6f [InstCombine] use dyn_cast instead of isa+cast; NFCI
llvm-svn: 297092
2017-03-06 23:25:28 +00:00
Hans Wennborg 254f5fa5f2 Disable gvn-hoist (PR32153)
llvm-svn: 297075
2017-03-06 21:10:40 +00:00
Daniel Berlin 343576a6f4 NewGVN: Remove DebugUnknownExprs, just mark the instructions as unused
llvm-svn: 297047
2017-03-06 18:42:39 +00:00
Daniel Berlin 856fa14e06 NewGVN: Only call isInstructionTrivially dead once per instruction.
llvm-svn: 297046
2017-03-06 18:42:27 +00:00
Dehao Chen c632a393b7 Remove the sample pgo annotation heuristic that uses call count to annotate basic block count.
Summary: We do not need that special handling because the debug info is more accurate now. Performance testing shows no regression on google internal benchmarks.

Reviewers: davidxl, aprantl

Reviewed By: aprantl

Subscribers: llvm-commits, aprantl

Differential Revision: https://reviews.llvm.org/D30658

llvm-svn: 297038
2017-03-06 17:49:59 +00:00
Michael Kruse 811de8a619 [BasicBlockUtils] Check for nullptr before updating LoopInfo.
LoopInfo::getLoopFor returns nullptr if a BB is not in a loop; only when it
returns a loop can that loop be updated to contain the newly created BBs. Add the
missing nullptr check to SplitBlockAndInsertIfThen.
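
An illustrative sketch of the guarded update (the parameter names are
hypothetical, not the actual SplitBlockAndInsertIfThen signature): getLoopFor
returns nullptr for blocks outside any loop, and only a non-null loop may
adopt the newly created blocks.

```
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Update LoopInfo only when the split block actually lives inside a loop.
static void updateLoopInfoIfNeeded(LoopInfo *LI, BasicBlock *Head,
                                   BasicBlock *ThenBlock, BasicBlock *Tail) {
  if (!LI)
    return;
  if (Loop *L = LI->getLoopFor(Head)) {
    L->addBasicBlockToLoop(ThenBlock, *LI);
    L->addBasicBlockToLoop(Tail, *LI);
  }
}
```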

Within LLVM, the only user of this function that also passes a LoopInfo
to be updated is InnerLoopVectorizer::predicateInstructions().
As the method's name implies, the BB operated on will always be within
a loop, but out-of-tree users may also use it differently (here: Polly).

All other uses of LoopInfo::getLoopFor in the file properly check its
return value for nullptr.

llvm-svn: 297016
2017-03-06 15:33:05 +00:00
Craig Topper b9dbd4d596 [SimplifyCFG] Use APInt::operator| instead of APInt::Or. NFC
I'm looking to improve operator| to support rvalue references and may remove APInt::Or.

llvm-svn: 296982
2017-03-05 01:08:19 +00:00
Evgeny Stupachenko d6aa0d02c2 Set the option enabling LSR's alternative way to resolve complex solutions to false.
Differential Revision: http://reviews.llvm.org/D29862

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 296959
2017-03-04 03:14:05 +00:00
Peter Collingbourne f0bb90b126 Fix build.
llvm-svn: 296949
2017-03-04 01:38:05 +00:00
Peter Collingbourne 77a8d563a3 WholeProgramDevirt: Implement exporting for uniform ret val opt.
Differential Revision: https://reviews.llvm.org/D29846

llvm-svn: 296948
2017-03-04 01:34:53 +00:00
Peter Collingbourne 2325bb34c1 WholeProgramDevirt: Implement exporting for single-impl devirtualization.
Differential Revision: https://reviews.llvm.org/D29811

llvm-svn: 296945
2017-03-04 01:31:01 +00:00
Peter Collingbourne b406baaeef WholeProgramDevirt: Add any unsuccessful llvm.type.checked.load devirtualizations to the list of llvm.type.test users.
Any unsuccessful llvm.type.checked.load devirtualizations will be translated
into uses of llvm.type.test, so we need to add the resulting llvm.type.test
intrinsics to the function summaries so that the LowerTypeTests pass will
export them.

Differential Revision: https://reviews.llvm.org/D29808

llvm-svn: 296939
2017-03-04 01:23:30 +00:00
Daniel Berlin 0350a87ce3 NewGVN: Be consistent in what order we compare operands for swapping.
NFC.

llvm-svn: 296935
2017-03-04 00:44:43 +00:00
Sanjoy Das 0a4ec554c1 Fix a compiler warning
llvm-svn: 296903
2017-03-03 18:53:09 +00:00
Sanjoy Das 664c925a57 [LoopUnrolling] Peel loops with invariant backedge Phi input
Summary:
If a loop contains a Phi node which has an invariant input from back
edge, it is profitable to peel such loops (rather than unroll them) to
use the advantage that this Phi is always invariant starting from 2nd
iteration. After the 1st iteration is peeled, other optimizations can
potentially simplify calculations with this invariant.
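
A hypothetical C++ loop with this shape (illustrative only; the names are made up):

```
// The phi for 'x' takes 'init' on entry but the loop-invariant 'inv' from the
// back edge, so after the first (peeled) iteration 'x' is simply 'inv'.
int sum_with_invariant_phi(const int *a, int n, int init, int inv) {
  int sum = 0;
  int x = init;            // phi [init, preheader], [inv, latch]
  for (int i = 0; i < n; ++i) {
    sum += a[i] * x;
    x = inv;               // invariant input from the back edge
  }
  return sum;
}
```

After peeling the first iteration, the remaining loop only ever multiplies by
'inv', which later optimizations can exploit.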

Patch by Max Kazantsev!

Reviewers: sanjoy, apilipenko, igor-laevsky, anna, mkuper, reames

Reviewed By: mkuper

Subscribers: mkuper, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D30161

llvm-svn: 296898
2017-03-03 18:19:15 +00:00
Sanjoy Das eed71b9e1c [LoopUnrolling] Re-prioritize Peeling and Partial unrolling
Summary:
In the current implementation, loop peeling happens after trip-count based partial unrolling and may
sometimes not happen at all because of it (for example, if the trip count is known, but UP.Partial = false). This
is generally bad, all the more so because there are situations where peeling is profitable even if partial
unrolling is disabled.

This patch is an NFC which reorders peeling and partial unrolling application and prepares the code for
implementation of the said optimizations.

Patch by Max Kazantsev!

Reviewers: sanjoy, anna, reames, apilipenko, igor-laevsky, mkuper

Reviewed By: mkuper

Subscribers: mkuper, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D30243

llvm-svn: 296897
2017-03-03 18:19:10 +00:00
Simon Pilgrim e938a152c5 Use APInt::getLowBitsSet instead of APInt::getBitsSet for lower bit mask creation
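
A tiny usage sketch of the two equivalent spellings (the widths and values here are only illustrative):

```
#include "llvm/ADT/APInt.h"
using namespace llvm;

// Both build a mask of the low 8 bits of a 32-bit value; getLowBitsSet
// states the intent directly.
APInt MaskA = APInt::getBitsSet(32, 0, 8); // bits [0, 8)
APInt MaskB = APInt::getLowBitsSet(32, 8); // low 8 bits
```
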
llvm-svn: 296882
2017-03-03 16:56:33 +00:00
Benjamin Kramer 9528f8c2fb Revert "Re-apply "[GVNHoist] Move GVNHoist to function simplification part of pipeline.""
This reverts commit r296759. Miscompiles bash.

llvm-svn: 296872
2017-03-03 14:27:53 +00:00
Mohammad Shahid bdac9f30c0 [SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available
for the VectorizeTree() API. This API uses it for proper mask computation to be used in shufflevector IR.
The fix is to compute the mask for out-of-order memory accesses while building the vectorizable tree
instead of during actual vectorization of the vectorizable tree. It also needs to recompute the proper lane for
external uses of vectorizable scalars based on the shuffle mask.

Reviewers: mkuper

Differential Revision: https://reviews.llvm.org/D30159

Change-Id: Ide8773ce0ad3562f3cf4d1a0ad0f487e2f60ce5d
llvm-svn: 296863
2017-03-03 10:02:47 +00:00
Evgeniy Stepanov d0285f21d0 [msan] Handle x86_sse_stmxcsr and x86_sse_ldmxcsr.
llvm-svn: 296848
2017-03-03 01:12:43 +00:00
Evgeniy Stepanov c00e45eada [msan] Remove stale comments.
ClStoreCleanOrigin flag was removed back in 2014.

llvm-svn: 296844
2017-03-03 00:25:56 +00:00
Peter Collingbourne 3baa72af7d ThinLTOBitcodeWriter: Do not follow operand edges of type GlobalValue when looking for virtual functions.
Such edges may otherwise result in infinite recursion if a pointer to a vtable
is reachable from the vtable itself. This can happen in practice if a TU
defines the ABI types used to implement RTTI, and is itself compiled with RTTI.

Fixes PR32121.

llvm-svn: 296839
2017-03-02 23:10:17 +00:00
Daniel Berlin dcb004fdf1 Move defClobbersUseOrDef to being a protected member of a class since we don't want anyone else using it
llvm-svn: 296838
2017-03-02 23:06:46 +00:00
Nikolai Bozhenov 4a04fb9e90 [BypassSlowDivision] Use ValueTracking to simplify run-time checks
ValueTracking is used for more thorough analysis of operands. Based on the
analysis, either run-time checks can be simplified (e.g. check only one operand
instead of two) or the transformation can be avoided. For example, it is quite
often the case that a divisor is promoted from a shorter type and run-time
checks for it are redundant.

With additional compile-time analysis of values, two special cases naturally
arise and are addressed by the patch:

 1) Both operands are known to be short enough. Then, the long division can be
    simply replaced with a short one without CFG modification.

 2) If a division is unsigned and the dividend is known to be short then the
    long division is not needed at all. Because if the divisor is too big for
    short division then the quotient is obviously zero (and the remainder is
    equal to the dividend). Actually, the division is not needed when
    (divisor > dividend).
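
A rough C++ rendering of the two cases above (purely illustrative; the pass itself
rewrites IR, and the 64-bit/32-bit widths are an assumption):

```
#include <cstdint>

// Case 1: when both operands are known to fit in 32 bits, the guarded 64-bit
// division collapses to the cheap 32-bit one (no run-time check needed).
uint64_t div_with_bypass(uint64_t a, uint64_t b) {
  if ((a | b) <= UINT32_MAX)            // run-time check, simplifiable
    return (uint32_t)a / (uint32_t)b;   // fast path
  return a / b;                         // slow 64-bit path
}

// Case 2: an unsigned division whose dividend is known to be short only needs
// one comparison: if the divisor exceeds the dividend, the quotient is zero.
uint64_t div_known_short_dividend(uint64_t a /* fits in 32 bits */, uint64_t b) {
  if (b > a)
    return 0;
  return (uint32_t)a / (uint32_t)b;
}
```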

Differential Revision: https://reviews.llvm.org/D29897

llvm-svn: 296832
2017-03-02 22:12:15 +00:00
Nikolai Bozhenov d4b12b3348 [BypassSlowDivision] Refactor fast division insertion logic (NFC)
The most important goal of the patch is to break large insertFastDiv function
into separate pieces, so that later a different fast insertion logic can be
implemented using some of these pieces.

Differential Revision: https://reviews.llvm.org/D29896

llvm-svn: 296828
2017-03-02 22:05:07 +00:00
Tobias Grosser f818c3300b Revert "Fix PR 24415 (at least), by making our post-dominator tree behavior sane."
and also "clang-format GenericDomTreeConstruction.h, since the current
formatting makes it look like their is a bug in the loop indentation, and there
is not"

This reverts commit r296535.

There are still some open design questions which I would like to discuss. I
revert this for Daniel (who gave the OK), as he is on vacation.

llvm-svn: 296812
2017-03-02 21:08:37 +00:00
Evgeny Stupachenko 21bef2cb3c The patch turns on epilogue unroll for loops with constant recurrence start.
Summary:

Set unroll remainder to epilog if a loop contains a phi with constant parameter:

  loop:
  pn = phi [Const, PreHeader], [pn.next, Latch]
  ...

Reviewer: hfinkel

Differential Revision: http://reviews.llvm.org/D27004

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 296770
2017-03-02 17:38:46 +00:00
Geoff Berry 484d756583 Re-apply "[GVNHoist] Move GVNHoist to function simplification part of pipeline."
This re-applies r289696, which caused TSan perf regression, which has
since been addressed in separate changes (see PR for details).

See PR31382.

llvm-svn: 296759
2017-03-02 16:16:47 +00:00
Bjorn Pettersson e5027cfbcc [InstCombine] Avoid faulty combines of select-cmp-br
Summary:
When InstCombine is optimizing certain select-cmp-br patterns
it replaces the result of the select in uses outside of the
basic block containing the select. This is only legal if the
path from the select to the outside use is disjoint from all
other paths out from the originating basic block.

The problem found was that InstCombiner::replacedSelectWithOperand
did not consider the case when both edges out from the br pointed
to the same label. In that case the paths aren't disjoint and the
transformation is illegal. This patch avoids the faulty rewrites
by verifying that there is a single flow to the successor where
we want to replace uses.

Reviewers: llvm-commits, spatel, majnemer

Differential Revision: https://reviews.llvm.org/D30455

llvm-svn: 296752
2017-03-02 15:18:58 +00:00
Matthew Simpson 455c2ee394 [LV] Consider non-consecutive but vectorizable accesses for VF selection
When computing the smallest and largest types for selecting the maximum
vectorization factor, we currently ignore loads and stores of pointer types if
the memory access is non-consecutive. We do this because such accesses must be
scalarized regardless of vectorization factor, and thus shouldn't be considered
when determining the factor. This patch makes this check less aggressive by
also considering non-consecutive accesses that may be vectorized, such as
interleaved accesses. Because we don't know at the time of the check if an
accesses will certainly be vectorized (this is a cost model decision given a
particular VF), we consider all accesses that can potentially be vectorized.

Differential Revision: https://reviews.llvm.org/D30305

llvm-svn: 296747
2017-03-02 13:55:05 +00:00
Xin Tong fb0dc6206e Fix typo. NFCI
llvm-svn: 296735
2017-03-02 08:39:11 +00:00
Reid Kleckner d80b69fa3b [Constant Hoisting] Avoid inserting instructions before EH pads
Now that terminators can be EH pads, this code needs to iterate over the
immediate dominators of the EH pad to find a valid insertion point.
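
A minimal sketch of that dominator walk (an assumption about the shape of the
fix, with illustrative names; not the exact code):

```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

// Walk up the immediate dominators until reaching a block whose terminator
// is not an EH pad, so hoisted constants have somewhere legal to go.
static BasicBlock *findHoistBlock(BasicBlock *BB, DominatorTree &DT) {
  while (BB && BB->getTerminator()->isEHPad()) {
    DomTreeNode *Node = DT.getNode(BB);
    DomTreeNode *IDom = Node ? Node->getIDom() : nullptr;
    BB = IDom ? IDom->getBlock() : nullptr;
  }
  return BB;
}
```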

Fix for PR32107

Patch by Robert Olliff!

Differential Revision: https://reviews.llvm.org/D30511

llvm-svn: 296698
2017-03-01 22:41:12 +00:00
Daniel Berlin 283a60875e NewGVN: Add debug counter for value numbering
llvm-svn: 296665
2017-03-01 19:59:26 +00:00
Hans Wennborg cc4ff78c9d Revert r296575 "[SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available"
It caused miscompiles, e.g. in Chromium (PR32109).

llvm-svn: 296654
2017-03-01 18:57:16 +00:00
Hans Wennborg 19c0be90f9 [GVNHoist] Don't hoist unsafe scalars at -Oz (PR31729)
Based on Aditya Kumar's patch:

Differential Revision: https://reviews.llvm.org/D29092

llvm-svn: 296642
2017-03-01 17:15:08 +00:00
Igor Laevsky b40152d5d1 [DeadStoreElimination] Check function modref behavior before considering memory clobbered
Differential Revision: https://reviews.llvm.org/D29996

llvm-svn: 296625
2017-03-01 14:38:29 +00:00
Alexey Bataev 4a45efa431 [SLP] Preserve IR flags when vectorizing horizontal reductions.
Summary:
The SLP vectorizer should propagate IR-level optimization hints/flags
(nsw, nuw, exact, fast-math) when converting scalar horizontal
reductions instructions into vectors, just like for other vectorized
instructions.
It does not include IR flag propagation for extra arguments; we need to handle
the original scalar operations for extra args to propagate correct flags.

Reviewers: mkuper, mzolotukhin, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30418

llvm-svn: 296614
2017-03-01 12:43:39 +00:00
Alexey Bataev 74e5a36856 [SLP] Preserve IR flags for extra args.
Summary:
We should preserve IR flags for extra args. These IR flags should be
taken from original scalar operations, not from the reduction
operations.

Reviewers: mkuper, mzolotukhin, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30447

llvm-svn: 296613
2017-03-01 12:22:33 +00:00
Alexey Bataev dfec81107f [SLP] Fix for PR32038: extra add of PHI node when it is not required.
Summary:
If a horizontal reduction tree starts from a binary operation that is
used in a PHI node, but this PHI is not used in the horizontal reduction, we
may end up with an extra addition of this PHI node after vectorization.
Here is an example:
```
%phi = phi i32 [ %tmp, %end], ...
...
%tmp = add i32 %tmp1, %tmp2
end:
```
after vectorization we always have something like:

```
%phi = phi i32 [ %tmp, %end], ...
...
%red = extractelement <8 x 32> %vec.red, 0
%tmp = add i32 %red, %phi
end:
```
even if `%phi` is not used in the reduction tree. The patch treats these PHI
nodes as extra arguments and takes them into account in the final result iff they
are really used in the reduction.

Reviewers: mkuper, hfinkel, mzolotukhin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30409

llvm-svn: 296606
2017-03-01 10:50:44 +00:00
Mikael Holmen 760dc9aba7 Remove sometimes faulty rewrite of memcpy in instcombine.
Summary:
Solves PR 31990.

The bad rewrite could replace a memcpy of one word with
 store i4 -1
while it should actually be
 store i8 -1

Hopefully opt and llc have improved enough that the original optimization
done by the code isn't needed anymore.

One already existing testcase is affected. It originally tested that
the memcpy was replaced with
 load double
but since we now remove that rewrite it will be
 load i64
instead.

Patch suggestion by Eli Friedman.

Reviewers: eli.friedman, majnemer, efriedma

Reviewed By: efriedma

Subscribers: efriedma, llvm-commits

Differential Revision: https://reviews.llvm.org/D30254

llvm-svn: 296585
2017-03-01 06:45:20 +00:00
Adam Nemet 15032a0455 [LV] These remarks should have been missed remarks
The practice in LV is that we emit analysis remarks and then finally report
either a missed or applied remark on the final decision whether vectorization
is taking place.  On this code path, we were closing with an analysis remark.

llvm-svn: 296578
2017-03-01 04:31:15 +00:00
Mohammad Shahid 175ffa8c35 [SLP] Fixes the bug due to absence of in order uses of scalars which needs to be available
for the VectorizeTree() API. This API uses it for proper mask computation to be used in shufflevector IR.
The fix is to compute the mask for out-of-order memory accesses while building the vectorizable tree
instead of during actual vectorization of the vectorizable tree.

Reviewers: mkuper

Differential Revision: https://reviews.llvm.org/D30159

Change-Id: Id1e287f073fa4959713ba545fa4254db5da8b40d
llvm-svn: 296575
2017-03-01 03:51:54 +00:00
Adam Nemet abe46080d4 Revert "(HEAD, origin/master, origin/HEAD, master) [LV] These should missed remarks"
This reverts commit r296544.

This got committed by accident.

llvm-svn: 296546
2017-02-28 23:54:27 +00:00
Adam Nemet e193da191d [LV] These should missed remarks
llvm-svn: 296544
2017-02-28 23:48:58 +00:00
Daniel Berlin 03f6938edc Fix PR 24415 (at least), by making our post-dominator tree behavior sane.
Summary:
Currently, our post-dom tree tries to ignore and remove the effects of
infinite loops.  It fails miserably at this, because it tries to do it
ahead of time, and thus can only detect self-loops; any other type
of infinite loop it pretends doesn't exist at all.

This can, in a bunch of cases, lead to wrong answers and a completely
empty post-dom tree.

Wrong answer:

```
declare void @foo()
define internal void @f() {
entry:
  br i1 undef, label %bb35, label %bb3.i

bb3.i:
  call void @foo()
  br label %bb3.i

bb35.loopexit3:
  br label %bb35

bb35:
  ret void
}
```
We get:
```
Inorder PostDominator Tree:
  [1]  <<exit node>> {0,7}
    [2] %bb35 {1,6}
      [3] %bb35.loopexit3 {2,3}
      [3] %entry {4,5}
```

This is a trivial modification of the testcase for PR 6047
Note that we pretend bb3.i doesn't exist.
We also pretend that bb35 post-dominates entry.

While it's true that it does not exit in a theoretical sense, it's not
really helpful to try to ignore the effect and pretend that bb35
post-dominates entry.  Worse, we pretend the infinite loop does
nothing (it's usually considered a side-effect), and doesn't even
exist, even when it calls a function.  Sadly, this makes it impossible
to use when you are trying to move code safely.  All compilers also
create virtual or real single exit nodes (including us), and connect
infinite loops there (which this patch does).  In fact, others have
worked around our behavior here, to the point of building their own
post-dom trees:
https://zneak.github.io/fcd/2016/02/17/structuring.html and pointing
out the region infrastructure is near-useless for them with postdom in
this state :(

Completely empty post-dom tree:
```
define void @spam() #0 {
bb:
  br label %bb1

bb1:                                              ; preds = %bb1, %bb
  br label %bb1

bb2:                                              ; No predecessors!
  ret void
}
```
Printing analysis 'Post-Dominator Tree Construction' for function 'foo':
=============================--------------------------------
Inorder PostDominator Tree:
  [1]  <<exit node>> {0,1}

:(

(note that even if you ignore the effects of infinite loops, bb2
should be present as an exit node that post-dominates nothing).

This patch changes post-dom to properly handle infinite loops and does
root finding during calculation to prevent empty tress in such cases.

We match gcc's (and the canonical theoretical) behavior for infinite
loops (find the backedge, connect it to the exit block).

Testcases coming as soon as i finish running this on a ton of random graphs :)

Reviewers: chandlerc, davide

Subscribers: bryant, llvm-commits

Differential Revision: https://reviews.llvm.org/D29705

llvm-svn: 296535
2017-02-28 22:57:50 +00:00
Dehao Chen a60cdd3881 Add function importing info from samplepgo profile to the module summary.
Summary: For SamplePGO, the profile may contain cross-module inline stacks. As we need to make sure the profile annotation happens when all the hot inline stacks are expanded, we need to pass this info to the module importer so that it can import the proper functions if necessary. This patch implements this feature by emitting cross-module targets as part of the function entry metadata. In the module-summary phase, the metadata is used to build call edges that point to functions that need to be imported.

Reviewers: mehdi_amini, tejohnson

Reviewed By: tejohnson

Subscribers: davidxl, llvm-commits

Differential Revision: https://reviews.llvm.org/D30053

llvm-svn: 296498
2017-02-28 18:09:44 +00:00
Adrian Prantl 80d0c93436 Strip debug info when inlining into a nodebug function.
The LLVM backend cannot produce any debug info for an llvm::Function
without a DISubprogram attachment. When inlining a debug-info-carrying
function into a nodebug function, there is therefore no reason to keep
any debug info intrinsic calls or debug locations on the instructions.

This fixes a problem discovered in PR32042.

rdar://problem/30679307

llvm-svn: 296488
2017-02-28 16:58:13 +00:00
Xin Tong c647dcfba0 Empty line. NFCI
llvm-svn: 296438
2017-02-28 05:30:48 +00:00
Xin Tong 01feb29ad2 [LoopUnswitch] Common pushing LIC's user to worklist.
llvm-svn: 296432
2017-02-28 03:32:41 +00:00
Matt Arsenault cdb468c0f9 AMDGPU: Basic folds for fmed3 intrinsic
Constant fold, canonicalize constants to RHS,
reduce to minnum/maxnum when inputs are nan/undef.

llvm-svn: 296409
2017-02-27 23:08:49 +00:00
Petr Hosek 6f16857167 [AddressSanitizer] Put shadow at 0 for Fuchsia
The Fuchsia ASan runtime reserves the low part of the address space.

Patch by Roland McGrath

Differential Revision: https://reviews.llvm.org/D30426

llvm-svn: 296405
2017-02-27 22:49:37 +00:00
Hans Wennborg 2d5841fa73 Revert r296366 "[InlineFunction] add nonnull assumptions based on argument attributes"
It causes miscompiles e.g. during self-host of Clang (PR32082).

llvm-svn: 296398
2017-02-27 22:33:02 +00:00
Xin Tong fe422f7462 Empty line. NFCI
llvm-svn: 296392
2017-02-27 21:51:48 +00:00
Sanjay Patel 40975e05eb [InlineFunction] add nonnull assumptions based on argument attributes
This was suggested in D27855: have the inliner add assumptions, so we don't 
lose nonnull info provided by argument attributes.

This still doesn't solve PR28430 (dyn_cast), but this gets us closer.

https://reviews.llvm.org/D29999

llvm-svn: 296366
2017-02-27 18:13:48 +00:00
Xin Tong 16b85a6601 Fix a bug when unswitching on partial LIV for SwitchInst
Summary: Fix a bug when unswitching on partial LIV for SwitchInst.

Reviewers: hfinkel, efriedma, sanjoy

Reviewed By: sanjoy

Subscribers: david2050, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D29107

llvm-svn: 296363
2017-02-27 18:00:13 +00:00
Artur Pilipenko 0860bfc676 Loop predication expand both sides of the widened condition
This is a fix for a loop predication bug which resulted in malformed IR generation.

The loop-invariant side of the widened condition is not guaranteed to be available in the preheader as is, so we need to expand it as well. See the added unsigned_loop_0_to_n_hoist_length test for an example.

Reviewed By: sanjoy, mkazantsev

Differential Revision: https://reviews.llvm.org/D30099

llvm-svn: 296345
2017-02-27 15:44:49 +00:00
Xin Tong 3ca169f3a9 Update comments. NFCI
llvm-svn: 296298
2017-02-26 19:08:44 +00:00
Davide Italiano 49a0aac1aa [LoopDeletion] Modernize and simplify a bit. NFCI.
llvm-svn: 296294
2017-02-26 07:08:20 +00:00
Xin Tong 42ef2177af [SCCP] Remove manual folding of terminator instructions.
Summary:
A BranchInst or SwitchInst (with a non-default case) with Undef as input is not
possible at this point, as we always default-fold the terminator to one target in
ResolvedUndefsIn and set the input accordingly.

So we should only have constantint/blockaddress here.

If ConstantFoldTerminator fails, that could mean 2 things.

1. ConstantFoldTerminator is doing something unexpected, i.e. not folding on constantint
or blockaddress and not making blocks that should be dead dead.
2. This is not a terminator on constantint or blockaddress. It's on a constant or
an overdefined value, in which case this block should not be dead.

In both cases, we should assert.

Reviewers: davide, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30381

llvm-svn: 296281
2017-02-26 02:11:24 +00:00
Xin Tong b529c66ef3 Empty line. NFCI
llvm-svn: 296250
2017-02-25 08:10:28 +00:00
Akira Hatanaka f003cd3344 Remove redundant code. NFC.
llvm-svn: 296219
2017-02-25 00:59:49 +00:00
Akira Hatanaka 2b882050ce Clean up ObjCARCOpts.cpp. NFC.
I removed unused functions and variables and moved variables closer to
their uses.

llvm-svn: 296218
2017-02-25 00:53:38 +00:00
Yaxun Liu e6d1ce59c0 [InstCombine] Fix bug in pointer replacement
This optimisation was crashing when there was a chain of more than one bitcast
instruction to replace, as a result of the changes in D27283.

Patch by James Price.

Differential Revision: https://reviews.llvm.org/D30347

llvm-svn: 296163
2017-02-24 20:27:25 +00:00
Matthew Simpson bdc9c78880 [LV] Merge floating-point and integer induction widening code
This patch merges the existing floating-point induction variable widening code
into the integer induction variable widening code, creating a single set of
functions for both kinds of inductions. The primary motivation for doing this
is to enable vector phi node creation for floating-point induction variables.

Differential Revision: https://reviews.llvm.org/D30211

llvm-svn: 296145
2017-02-24 18:20:12 +00:00
Sanjay Patel ec9a8de0e6 [InstCombine] don't try SimplifyDemandedInstructionBits from zext/sext because it's slow and unnecessary
This one seems even more obvious than D30270 in that it can't make improvements, because an extension always needs
all of the incoming bits. There's one specific transform in SimplifyDemandedInstructionBits of converting
a sext to a zext when the sign-bit is known zero, but that is handled explicitly in visitSext() with
ComputeSignBit().

Like D30270, there are no IR differences (other than instruction names) for the case in PR32037:
https://bugs.llvm.org//show_bug.cgi?id=32037
...and no regression test differences.

Zext/sext are a smaller part of the profile, but this still appears to shave off another 0.5% or so from
'opt -O2'.

Differential Revision: https://reviews.llvm.org/D30280

llvm-svn: 296129
2017-02-24 15:18:42 +00:00
Xin Tong f51d804a9f Fix an iterator invalidation bug when simplifying LIC user.
LoopUnswitch/simplify-with-nonvalness.ll is the test case for this.
The LIC has 2 users and deleting the 1st user when it can be simplified
invalidated the iterator for the 2nd user.

llvm-svn: 296069
2017-02-24 01:43:36 +00:00
Evgeniy Stepanov d1daf631f4 [msan] Fix instrumentation of array allocas.
Before this, MSan poisoned exactly one element of any array alloca,
even if the number of elements was zero.

llvm-svn: 296050
2017-02-24 00:13:17 +00:00
Xin Tong df4dff3928 Delete outdated comment. NFC
llvm-svn: 296043
2017-02-23 23:47:10 +00:00
Xin Tong ec6f90bec9 LoopUnswitch - Simplify based on known not to a be constant.
Summary: In case we do not know what the condition is in an unswitched loop, but we know it is definitely NOT a particular known constant, we can perform simplifications based on this information.
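
A hypothetical source-level picture of what this enables (the exact
simplifications the pass performs are broader; this is just an illustration):

```
// If the loop is unswitched on the invariant 'c == 5', the clone taken when
// c != 5 can treat the matching switch case as dead and fold it away.
int sum_cases(const int *a, int n, int c) {
  int sum = 0;
  for (int i = 0; i < n; ++i) {
    switch (c) {                       // c is loop-invariant
    case 5:  sum += 2 * a[i]; break;   // unreachable in the "c != 5" clone
    default: sum += a[i];     break;
    }
  }
  return sum;
}
```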

Reviewers: sanjoy, hfinkel, chenli, efriedma

Reviewed By: efriedma

Subscribers: david2050, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D28968

llvm-svn: 296041
2017-02-23 23:42:19 +00:00
Adam Nemet de53bfb94d [OptDiag] Hide legacy remark ctors
These are only used when emitting remarks without ORE directly using the free
functions emitOptimizationRemark*.

llvm-svn: 296037
2017-02-23 23:11:11 +00:00
Hans Wennborg 5cd9a9bf8c Revert r282872 "CVP. Turn marking adds as no wrap on by default"
While not CVP's fault, this caused miscompiles (PR31181). Reverting
until those are resolved.

(This also reverts the follow-ups r288154 and r288161 which removed the
flag.)

llvm-svn: 296030
2017-02-23 22:29:00 +00:00
Dehao Chen cc75d2441d Add call branch annotation for ICP promoted direct call in SamplePGO mode.
Summary: SamplePGO uses the branch_weight annotation to represent callsite hotness. When ICP promotes an indirect call to a direct call, we need to make sure the direct call is annotated with branch_weight in SamplePGO mode, so that the downstream function inliner can use the hot callsite heuristic.

Reviewers: davidxl, eraman, xur

Reviewed By: davidxl, xur

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D30282

llvm-svn: 296028
2017-02-23 22:15:18 +00:00
Adam Nemet 41b019a39c [LAA] Remove unused LoopAccessReport
The need for this was removed when I converted everything to use the opt-remark
classes directly with the streaming interface.

llvm-svn: 296017
2017-02-23 21:17:36 +00:00
Adam Nemet bf0e69532c [LV] Remove unused VectorizationReport
The need for this was removed when I converted everything to use the opt-remark
classes directly with the streaming interface.

llvm-svn: 296016
2017-02-23 21:17:31 +00:00
Chad Rosier 95abfa35d6 [Reassociate] Add negated value of negative constant to the Duplicates list.
In OptimizeAdd, we scan the operand list to see if there are any common factors
between operands that can be factored out to reduce the number of multiplies
(e.g., 'A*A+A*B*C+D' -> 'A*(A+B*C)+D'). For each operand of the operand list, we
only consider unique factors (which is tracked by the Duplicate set). Now if we
find a factor that is a negative constant, we add the negated value as a factor
as well, because we can percolate the negate out. However, we mistakenly don't
add this negated constant to the Duplicates set.
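
The factoring described above, written at the source level (a sketch of the
intent, not the pass's code):

```
// Common factor A is pulled out of the first two addends, saving a multiply.
int unfactored(int A, int B, int C, int D) { return A * A + A * B * C + D; }
int factored  (int A, int B, int C, int D) { return A * (A + B * C) + D; }
```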

Consider the expression A*2*-2 + B. Obviously, nothing to factor.

For the added value A*2*-2 we over count 2 as a factor without this change,
which causes the assert reported in PR30256.  The problem is that this code is
assuming that all the multiply operands of the add are already reassociated.
This change avoids the issue by making OptimizeAdd tolerate multiplies which
haven't been completely optimized; this sort of works, but we're doing wasted
work: we'll end up revisiting the add later anyway.

Another possible approach would be to enforce RPO iteration order more strongly.
If we have RedoInsts, we process them immediately in RPO order, rather than
waiting until we've finished processing the whole function. Intuitively, it
seems like the natural approach: reassociation works on expression trees, so
the optimization only works in one direction. That said, I'm not sure how
practical that is given the current Reassociate; the "optimal" form for an
expression depends on its use list (see all the uses of "user_back()"), so
Reassociate is really an iterative optimization of sorts, so any changes here
would probably get messy.

PR30256

Differential Revision: https://reviews.llvm.org/D30228

llvm-svn: 296003
2017-02-23 18:49:03 +00:00
Dehao Chen 533bc6ea8e Use base discriminator in sample pgo profile matching.
Summary: The discriminator has been encoded, and only the base discriminator should be used during profile matching.

Reviewers: dblaikie, davidxl

Reviewed By: dblaikie, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30218

llvm-svn: 295999
2017-02-23 18:27:45 +00:00
Filipe Cabecinhas 33dd486f1d [AddressSanitizer] Add PS4 offset
llvm-svn: 295994
2017-02-23 17:10:28 +00:00
Sanjay Patel 68e4cb3c86 [InstCombine] use loop instead of recursion to peek through FPExt; NFCI
llvm-svn: 295992
2017-02-23 16:39:51 +00:00
Sanjay Patel adf2ab16e4 [InstCombine] use 'match' to reduce code; NFCI
llvm-svn: 295991
2017-02-23 16:26:03 +00:00
Alexey Bataev f77d1656af [SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong
result

Summary:
If the same value is used several times as an extra value, the SLP
vectorizer takes it into account only once instead of the actual number of
uses.
For example:
```
int val = 1;
for (int y = 0; y < 8; y++) {
  for (int x = 0; x < 8; x++) {
    val = val + input[y * 8 + x] + 3;
  }
}
```
We have 2 extra arguments: `1` - the initial value of the horizontal reduction,
and `3`, which is added 8*8 times to the reduction. Before the patch we
added `1` to the reduction value and added `3` only once, though it must be
added 64 times.

Reviewers: mkuper, mzolotukhin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30262

llvm-svn: 295972
2017-02-23 13:37:09 +00:00
Alexey Bataev 14b370c1bf Revert "[SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong"
This reverts commit 7c5141e577d9efd1c8e3087566a38ce6b3a41a84.

llvm-svn: 295957
2017-02-23 11:09:35 +00:00
Alexey Bataev 7ae653285d [SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong
result

Summary:
If the same value is used several times as an extra value, the SLP
vectorizer takes it into account only once instead of the actual number of
uses.
For example:
```
int val = 1;
for (int y = 0; y < 8; y++) {
  for (int x = 0; x < 8; x++) {
    val = val + input[y * 8 + x] + 3;
  }
}
```
We have 2 extra arguments: `1` - the initial value of the horizontal reduction,
and `3`, which is added 8*8 times to the reduction. Before the patch we
added `1` to the reduction value and added `3` only once, though it must be
added 64 times.

Reviewers: mkuper, mzolotukhin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30262

llvm-svn: 295956
2017-02-23 10:57:15 +00:00
Alexey Bataev 7337212e83 Revert "[SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong"
This reverts commit d83c81ee6a8dea662808ac22b396d1bb0595c89d.

llvm-svn: 295951
2017-02-23 09:59:29 +00:00
Alexey Bataev 68f2402c61 [SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong
result

Summary:
If the same value is used several times as an extra value, the SLP
vectorizer takes it into account only once instead of the actual number of
uses.
For example:
```
int val = 1;
for (int y = 0; y < 8; y++) {
  for (int x = 0; x < 8; x++) {
    val = val + input[y * 8 + x] + 3;
  }
}
```
We have 2 extra arguments: `1` - the initial value of the horizontal reduction,
and `3`, which is added 8*8 times to the reduction. Before the patch we
added `1` to the reduction value and added `3` only once, though it must be
added 64 times.

Reviewers: mkuper, mzolotukhin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30262

llvm-svn: 295949
2017-02-23 09:40:38 +00:00
Matt Arsenault f0a88dbaab LoadStoreVectorizer: Split even sized illegal chains properly
Implement isLegalToVectorizeLoadChain for AMDGPU to avoid
producing private address space accesses that will need to be
split up later. This was doing the wrong thing in the case
where the queried chain had an even number of elements.

A possible <4 x i32> store was being split into
store <2 x i32>
store i32
store i32

rather than
store <2 x i32>
store <2 x i32>

when legal.

llvm-svn: 295933
2017-02-23 03:58:53 +00:00
Matt Arsenault d4bca1e9ef AMDGPU: Replace disabled exp inputs with undef
llvm-svn: 295914
2017-02-23 00:44:03 +00:00
Michael Kuperstein 6181c62b95 Revert r295868 because it breaks a different SLP lit test.
llvm-svn: 295906
2017-02-22 23:35:13 +00:00
Matt Arsenault f5262256a1 AMDGPU: Add replacement bfe intrinsics
llvm-svn: 295899
2017-02-22 23:04:58 +00:00
Sanjay Patel 4805ce0b17 [InstCombine] don't try SimplifyDemandedInstructionBits from add/sub because it's slow and unlikely to succeed
Notably, no regression tests change when we remove these calls, and these are expensive calls.

The motivation comes from the general acknowledgement that the compiler is getting slower:
http://lists.llvm.org/pipermail/llvm-dev/2017-January/109188.html
http://lists.llvm.org/pipermail/llvm-dev/2016-December/108279.html

And specifically the test case attached to PR32037:
https://bugs.llvm.org//show_bug.cgi?id=32037

Profiling the middle-end (opt) part of the compile:
$ ./opt -O2 row_common.bc -o /dev/null

...visitAdd and visitSub are near the top of the instcombine list, and the calls to SimplifyDemandedInstructionBits()
are high within each of those. Those calls account for 1%+ of the opt time in either debug or release profiles. And 
that's the rough win I see from this patch when testing opt built release from r295864 on an iMac with Haswell 4GHz
(model 4790K).

It seems unlikely that we'd be able to eliminate add/sub or change their operands given that add/sub normally affect
all bits, and the PR32037 example shows no IR difference after this change using -O2.

Also worth noting - the code comment in visitAdd:
// This handles stuff like (X & 254)+1 -> (X&254)|1
...isn't true. That transform is handled later with a call to haveNoCommonBitsSet().

Differential Revision: https://reviews.llvm.org/D30270

llvm-svn: 295898
2017-02-22 23:01:12 +00:00
Daniel Berlin fccbda967a PredicateInfo: Support switch statements
Summary:
Depends on D29606 and D29682

Makes us pass GVN's edge.ll (we will also pass a few other testcases;
they just need cleaning up).

Thoughts on the Predicate* hierarchy of classes are especially welcome :)
(it's not clear to me how best to organize it, and currently, the getBlock* seems ... uglier than maybe wasting a field somewhere or something).

Reviewers: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D29747

llvm-svn: 295889
2017-02-22 22:20:58 +00:00
Daniel Berlin 17e8d0eae2 Move updating functions to MemorySSAUpdater.
Add updater to passes that now need it.
Move around code in MemorySSA to expose needed functions.

Summary: Mostly cleanup

Reviewers: george.burgess.iv

Subscribers: llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30221

llvm-svn: 295887
2017-02-22 22:19:55 +00:00
Wei Mi 74d5a90fa6 [LSR] Canonicalize formula and put recursive Reg related with current loop in ScaledReg.
After rL294814, an LSR formula can have multiple SCEVAddRecExprs inside of its BaseRegs.
The previous canonicalization would swap the first SCEVAddRecExpr in BaseRegs with ScaledReg.
But now we want to swap the SCEVAddRecExpr Reg related to the current loop with ScaledReg.
Otherwise, we may generate code like this: RegA + lsr.iv + RegB, where the loop-invariant
parts RegA and RegB are not grouped together and cannot be promoted outside of the loop.
With this patch, lsr.iv is ensured to be generated later in the expr:
RegA + RegB + lsr.iv, so that RegA + RegB can be promoted outside of the loop.
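
An illustrative C++ analogue of why the grouping matters (names are made up;
LSR itself works on SCEV-based formulae):

```
// When the two loop-invariant registers are summed first, that sum can be
// hoisted out of the loop and only the induction value varies per iteration.
long sum_rows(const long *base, long regA, long regB, int n) {
  long total = 0;
  long invariant = regA + regB;        // hoistable: RegA + RegB
  for (int iv = 0; iv < n; ++iv)
    total += base[invariant + iv];     // (RegA + RegB) + lsr.iv
  return total;
}
```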

Differential Revision: https://reviews.llvm.org/D26781

llvm-svn: 295884
2017-02-22 21:47:08 +00:00
Alexey Bataev b551a81c28 [SLP] Fix for PR32036: Vectorized horizontal reduction returning wrong result
Summary:
If the same value is used several times as an extra value, the SLP
vectorizer takes it into account only once instead of the actual number of
uses.
For example:
```
int val = 1;
for (int y = 0; y < 8; y++) {
  for (int x = 0; x < 8; x++) {
    val = val + input[y * 8 + x] + 3;
  }
}
```
We have 2 extra arguments: `1` - the initial value of the horizontal reduction,
and `3`, which is added 8*8 times to the reduction. Before the patch we
added `1` to the reduction value and added `3` only once, though it must be
added 64 times.

Reviewers: mkuper, mzolotukhin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30262

llvm-svn: 295868
2017-02-22 20:06:40 +00:00
Karl-Johan Karlsson 6eaed7aceb [LoopVectorize] Added address space check when analysing interleaved accesses
Prevent memory objects of different address spaces from being part of
the same load/store groups when analysing interleaved accesses.

This fixes PR31900.

Reviewers: HaoLiu, mssimpso, mkuper

Reviewed By: mssimpso, mkuper

Subscribers: llvm-commits, efriedma, mzolotukhin

Differential Revision: https://reviews.llvm.org/D29717

This reverts r295042 (re-applies r295038) with an additional fix for the
buildbot problem.

llvm-svn: 295858
2017-02-22 18:37:36 +00:00
Alexey Bataev 2e0031b371 [SLP] Remove unused initial value from the variable, NFC.
llvm-svn: 295826
2017-02-22 12:57:58 +00:00
Sean Silva 9011aca5f4 Use const-ref in range-loop for to avoid copying pairs of std::string
No reason to create temporaries.

Differential Revision: https://reviews.llvm.org/D29871

Patch by sergio.martins!

llvm-svn: 295807
2017-02-22 06:34:04 +00:00
Matt Arsenault 1f17c66890 AMDGPU: Add cvt.pkrtz intrinsic
Convert llvm.SI.packf16 test uses

llvm-svn: 295797
2017-02-22 00:27:34 +00:00
Michael Kuperstein c2af82b4b7 [LoopUnroll] Enable PGO-based loop peeling by default.
This enables peeling of loops with low dynamic iteration count by default,
when profile information is available.

Differential Revision: https://reviews.llvm.org/D27734

llvm-svn: 295796
2017-02-22 00:27:34 +00:00
Xin Tong ccee0e0c05 Make default value for disable-licm-promotion in licm explicit.
llvm-svn: 295767
2017-02-21 20:53:48 +00:00
Sanjay Patel cb731f1538 [InstCombine] canonicalize non-obivous forms of integer min/max
This is part of trying to clean up our handling of min/max patterns in IR.
By converting these to canonical form, we're more likely to recognize them
because there are various places in InstCombine that don't use 
matchSelectPattern or m_SMax and friends.
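
One plausible example of a non-obvious form (an illustrative guess at the kind
of pattern meant, not necessarily the exact ones this change handles):

```
// Both functions compute max(a, b); the second selects on the inverted
// predicate with swapped arms, which is harder for simple matchers to spot.
int max_canonical(int a, int b)  { return a > b ? a : b; }
int max_roundabout(int a, int b) { return a <= b ? b : a; }
```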

The backend fixups referenced in the now deleted TODO comment were added with:
https://reviews.llvm.org/rL291392
https://reviews.llvm.org/rL289738

If there's any codegen fallout from this change, we should be able to address
it in DAGCombiner or target-specific lowering. 

llvm-svn: 295758
2017-02-21 19:33:53 +00:00
Xin Tong ebfe01c121 [LoopSimplify] Simplify how we compute UniqueExit
Summary: Simplify how we compute UniqueExit. Reuse ExitBlockSet.

Reviewers: sanjoy, efriedma, hfinkel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30182

llvm-svn: 295751
2017-02-21 19:10:58 +00:00
Anna Thomas ec36f3b79a [InstCombine] Do not exercise nested max/min pattern on abs
Summary:
This is a fix for an assertion failure in
`getInverseMinMaxSelectPattern` when ABS is passed in as a select pattern.

We should not be invoking the simplification rule for
ABS(MIN(~x,y)) or ABS(MAX(~x,y)) combinations.

Added a test case which would cause an assertion failure without the patch.

Reviewers: sanjoy, majnemer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30051

llvm-svn: 295719
2017-02-21 14:40:28 +00:00
Evgeny Stupachenko 9909872e30 The patch introduces a new way of narrowing complex (>UINT16 variants) solutions.
The new method is introduced under the "-lsr-exp-narrow" option (currently set to true).

Summary:

The method is based on the mathematical expectation of the number of registers and should be
generally closer to the optimal solution.
Please see details in comments to
 "LSRInstance::NarrowSearchSpaceByDeletingCostlyFormulas()" function
 (in lib/Transforms/Scalar/LoopStrengthReduce.cpp).

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D29862

From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 295704
2017-02-21 07:34:40 +00:00
Sanjoy Das 90208720e3 Add a wrapper around copy_if in STLExtras; NFC
I will add one more use for this in a later change.

llvm-svn: 295685
2017-02-21 00:38:44 +00:00
Sanjoy Das 85cd132068 [IndVars] Add an assert
We've already checked that the loop is in simplify form before, but a
little paranoia never hurt anyone.

llvm-svn: 295680
2017-02-20 23:37:11 +00:00
Daniel Berlin 78cbd28102 MemorySSA: Add support for renaming uses in the updater.
Summary:
This lets one add aliasing stores to the updater.
(I'm next going to move the creation/etc. functions to the updater)

Reviewers: george.burgess.iv

Subscribers: llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30154

llvm-svn: 295677
2017-02-20 22:26:03 +00:00
Alexey Bataev f96465b9b8 [SLP] nullptr'ize initial value in `findBuildAggregate()`, NFC.
The initial value of V is set to nullptr, as it is not used.

llvm-svn: 295642
2017-02-20 08:04:11 +00:00