Commit Graph

8583 Commits

Rong Xu 48596b6f7a [PGO] Memory intrinsic calls optimization based on profiled size
This patch optimizes two memory intrinsic operations, memset and memcpy, based
on the profiled size of the operation. The high-level transformation looks like:
  mem_op(..., size)
  ==>
  switch (size) {
    case s1:
       mem_op(..., s1);
       goto merge_bb;
    case s2:
       mem_op(..., s2);
       goto merge_bb;
    ...
    default:
       mem_op(..., size);
       goto merge_bb;
    }
  merge_bb:
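
For illustration, a hand-written sketch of the transformation in the IR of
this era (the profiled size of 8 and all names are made up for the example):

  define void @copy(i8* %dst, i8* %src, i64 %size) {
  entry:
    ; value profiling says %size is almost always 8
    switch i64 %size, label %default [
      i64 8, label %case8
    ]
  case8:
    ; constant-size copy: the backend can expand this inline
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 8, i32 1, i1 false)
    br label %merge
  default:
    ; fallback keeps the original variable-size call
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %size, i32 1, i1 false)
    br label %merge
  merge:
    ret void
  }

  declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1)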

Differential Revision: http://reviews.llvm.org/D28966

llvm-svn: 299446
2017-04-04 16:42:20 +00:00
Jonas Hahnfeld 1f9b00117c Align all scalar numbers to LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR
Otherwise, yamlize in YAMLTraits.h might be wrongly defined.
This makes some AMDGPU tests fail when LLVM_LINK_LLVM_DYLIB is set.

Differential Revision: https://reviews.llvm.org/D30508

llvm-svn: 299415
2017-04-04 06:02:32 +00:00
Zvi Rackover 8f460655a2 InstSimplify: Add a hook for shufflevector
Summary:
Add a hook for simplification of shufflevectors with the following rules:
- Constant folding - NFC, as it was already being done by the default handler.
- If only one of the operands is constant, constant fold the shuffle if the
  mask does not select elements from the variable operand; this shows the
  hook is firing and affecting the test cases (see the sketch below).
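
For instance (a hand-written sketch, not a test from the patch), a shuffle
whose mask reads only the constant operand simplifies to that constant:

  ; The mask <0,1,2,3> selects only from the first (constant) operand,
  ; so the result does not depend on %v at all.
  define <4 x i32> @fold_constant_side(<4 x i32> %v) {
    %s = shufflevector <4 x i32> <i32 1, i32 2, i32 3, i32 4>, <4 x i32> %v, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
    ret <4 x i32> %s
  }
  ; simplifies to: ret <4 x i32> <i32 1, i32 2, i32 3, i32 4>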

Reviewers: RKSimon, craig.topper, spatel, sanjoy, nlopes, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31525

llvm-svn: 299393
2017-04-03 22:05:30 +00:00
Matt Arsenault b600e138cc AMDGPU: Remove llvm.SI.vs.load.input
llvm-svn: 299391
2017-04-03 21:45:13 +00:00
Craig Topper 8698fc09de [InstCombine] Add test cases showing how we fail to fold vector constants into selects the way we do with scalars.
llvm-svn: 299369
2017-04-03 17:49:15 +00:00
Daniel Berlin 07daac8a36 NewGVN: Handle coercion of constant stores, loads, memory insts.
Summary:
Depends on D30928.

This adds support for coercion of stores and other memory instructions that can be processed without inserting new instructions.
Another few tests down.
I added the relevant tests from rle.ll
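
A minimal sketch of the kind of pattern this covers (hand-written, not taken
from rle.ll; assumes a little-endian target):

  define i8 @coerce(i8* %p) {
    %pi = bitcast i8* %p to i32*
    store i32 305419896, i32* %pi   ; 0x12345678
    %v = load i8, i8* %p            ; low byte of the stored constant
    ret i8 %v                       ; can be replaced by: ret i8 120 (0x78)
  }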

Reviewers: davide

Subscribers: llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D30929

llvm-svn: 299330
2017-04-02 13:23:44 +00:00
Nikolai Bozhenov fca527af5c [BypassSlowDivision] Do not bypass division of hash-like values
Disable bypassing if one of the operands looks like a hash value. Slow
division often occurs in hashtable implementations and fast division is
never taken there because a hash value is extremely unlikely to have
enough upper bits set to zero.

A value is considered to be hash-like if it is produced by

1) XOR operation
2) Multiplication by a constant wider than the shorter type
3) PHI node with all incoming values being hash-like
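
For illustration, a hand-written sketch of the pattern now excluded (function
and value names are made up):

  define i64 @bucket_index(i64 %k1, i64 %k2, i64 %nbuckets) {
    ; %hash is produced by an xor, so it is treated as hash-like and the
    ; urem below is no longer bypassed with a narrower fast-path division.
    %hash = xor i64 %k1, %k2
    %idx = urem i64 %hash, %nbuckets
    ret i64 %idx
  }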

Differential Revision: https://reviews.llvm.org/D28200

llvm-svn: 299329
2017-04-02 13:14:30 +00:00
Zvi Rackover e479980686 Add another interesting shufflevector test case for InstSimplify. NFC.
The test case shows an opportunity to constant fold a shuffle with one
variable input vector operand.

llvm-svn: 299327
2017-04-02 10:42:21 +00:00
Sanjay Patel 8b5ad3f00e [InstSimplify] add constant folding for fdiv/frem
Also, add a helper function so we don't have to repeat this code for each binop.
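
For example (a trivial hand-written sketch):

  define double @fold_fdiv() {
    %r = fdiv double 1.000000e+01, 4.000000e+00
    ret double %r   ; folds to 2.500000e+00
  }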

llvm-svn: 299309
2017-04-01 19:05:11 +00:00
Sanjay Patel ee0f5cc41f [InstSimplify] add tests for missed constant folding; NFC
llvm-svn: 299308
2017-04-01 18:44:03 +00:00
Peter Collingbourne 3420e4bf94 Fix a test to check assembly output instead of bitcode.
llvm-svn: 299279
2017-03-31 23:22:19 +00:00
Craig Topper 0d8801f991 [InstCombine] Add test case demonstrating missed opportunities for removing add/sub when the LSBs of one input are known to be 0 and MSBs of the output aren't consumed.
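
One such missed case, sketched by hand (not necessarily one of the added tests):

  define i8 @add_lsb_known_zero(i8 %a, i8 %b) {
    %x = shl i8 %b, 4      ; low 4 bits of %x are known zero
    %r = add i8 %a, %x
    %t = and i8 %r, 15     ; only the low 4 bits of the add are consumed
    ret i8 %t              ; could fold to: and i8 %a, 15
  }
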
llvm-svn: 299263
2017-03-31 21:08:37 +00:00
Joerg Sonnenberger 28bed106e0 Do not translate rint into nearbyint, but truncate it like nearbyint.
A common way to implement nearbyint is by fiddling with the floating
point environment and calling rint. This is used at least by the BSD
libm and musl. As such, canonicalizing rint into nearbyint will create
infinite loops for libm and generally pessimize performance, at least
when the generic C versions are used.

This change preserves the rint in the libcall translation and also
handles the domain truncation logic, so that a rint with a float
argument will be reduced to rintf, etc.
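
A hand-written sketch of the shrinking that is preserved:

  declare double @rint(double)

  define float @shrink_rint(float %x) {
    %ext = fpext float %x to double
    %call = call double @rint(double %ext)
    %trunc = fptrunc double %call to float
    ret float %trunc
  }
  ; the call is still shrunk to: %call = call float @rintf(float %x)
  ; but rint is no longer rewritten into nearbyint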

llvm-svn: 299247
2017-03-31 19:58:07 +00:00
Piotr Padlewski c050e6e91f [MSSA] Small test fix
llvm-svn: 299235
2017-03-31 17:39:07 +00:00
Dehao Chen fed890ea3a Fix InstCombine to preserve the VP metadata and set the correct call count.
Summary: Currently, the VP metadata is dropped when InstCombine converts an indirect call to a direct call. This patch converts the VP metadata to branch_weights so that the call's hotness is recorded.

Reviewers: eraman, davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31344

llvm-svn: 299228
2017-03-31 15:59:52 +00:00
Zvi Rackover 38ba75c238 InstSimplify: Adding shufflevector test. NFC.
Adding some test-cases demonstrating cases that need to be improved.
To be followed by patches that improve these cases.

llvm-svn: 299189
2017-03-31 07:46:02 +00:00
Mikael Holmen 79235bd4d8 [Scalarizer] Handle scalar arguments in vector GEP
Summary:
Triggered by commit r298620: "[LV] Vectorize GEPs".

If we encounter a vector GEP with scalar arguments, we splat the scalar
into a vector of appropriate size before we scatter the argument.
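
A sketch of the input shape (hypothetical names): the base pointer is scalar
and the index is a vector, so the GEP produces a vector of pointers, and the
scalarizer must splat %base before scattering:

  define <4 x i32*> @vector_gep(i32* %base, <4 x i64> %idx) {
    %p = getelementptr i32, i32* %base, <4 x i64> %idx
    ret <4 x i32*> %p
  }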

Reviewers: arsenm, mehdi_amini, bkramer

Reviewed By: arsenm

Subscribers: bjope, mssimpso, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D31416

llvm-svn: 299186
2017-03-31 06:29:49 +00:00
Matt Arsenault 79f837c254 AMDGPU: Add all atomicrmw fields to atomic.inc/dec
Add scope, order, isVolatile

llvm-svn: 299122
2017-03-30 22:21:40 +00:00
Hongbin Zheng bfd7c38de7 [SimplifyIndvar] Replace the sdiv used by IV if we can prove both of its operands are non-negative
Since there is no sdiv in SCEV, a 'udiv' is a better canonical form than an 'sdiv' as the user of an induction variable
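
A minimal hand-written sketch (assuming the pass can prove %i is non-negative
from the induction):

  define i32 @last_quotient(i32 %n) {
  entry:
    br label %loop
  loop:
    %i = phi i32 [ 0, %entry ], [ %inc, %loop ]
    ; %i starts at 0 and only increases, so both sdiv operands are
    ; non-negative and the sdiv can be replaced by an equivalent udiv,
    ; which (unlike sdiv) has a direct SCEV representation.
    %d = sdiv i32 %i, 4
    %inc = add nsw i32 %i, 1
    %cmp = icmp slt i32 %inc, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret i32 %d
  }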

Differential Revision: https://reviews.llvm.org/D31488

llvm-svn: 299118
2017-03-30 21:56:56 +00:00
Adrian Prantl 346dcaf1fa Teach stripNonLineTableDebugInfo() to remap DILocations in !llvm.loop nodes.
llvm-svn: 299107
2017-03-30 20:10:56 +00:00
Matthew Simpson c8f0aeccda [InstCombine] Correct the check for vector GEPs
Some of the GEP combines (e.g., descaling) can't handle vector GEPs. We have an
existing check that attempts to bail out if given a vector GEP. However, the
check only tests the GEP's pointer operand. A GEP results in a vector of
pointers if at least one of its operands is vector-typed (e.g., its pointer
operand could be a scalar, but its index could be a vector). We should just
check the type of the GEP itself. This should fix PR32414.

Reference: https://bugs.llvm.org/show_bug.cgi?id=32414
Differential Revision: https://reviews.llvm.org/D31470

llvm-svn: 299017
2017-03-29 18:23:08 +00:00
Anna Thomas ba04f4e925 rename instcombine test file. NFC
llvm-svn: 298904
2017-03-28 08:34:07 +00:00
Matthew Simpson b8ff4a4a70 [LV] Transform truncations of non-primary induction variables
The vectorizer tries to replace truncations of induction variables with new
induction variables having the smaller type. After r295063, this optimization
was applied to all integer induction variables, including non-primary ones.
When optimizing the truncation of a non-primary induction variable, we still
need to transform the new induction so that it has the correct start value.
This should fix PR32419.
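
A hand-written sketch of a loop with a truncated non-primary induction
(names are hypothetical):

  define void @f(i64 %start, i32* %p, i64 %n) {
  entry:
    br label %loop
  loop:
    %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]        ; primary IV
    %j = phi i64 [ %start, %entry ], [ %j.next, %loop ]   ; non-primary IV
    ; the truncation below becomes a new i32 induction; it must start at
    ; trunc(%start), not at 0
    %t = trunc i64 %j to i32
    %gep = getelementptr i32, i32* %p, i64 %i
    store i32 %t, i32* %gep
    %i.next = add nuw nsw i64 %i, 1
    %j.next = add i64 %j, 1
    %cmp = icmp ult i64 %i.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }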

Reference: https://bugs.llvm.org/show_bug.cgi?id=32419
llvm-svn: 298882
2017-03-27 20:07:38 +00:00
Anna Thomas f57ae33381 [InstCombine] Avoid incorrect folding of select into phi nodes when incoming element is a vector type
Summary:
We are incorrectly folding selects into phi nodes when the incoming value of a phi
node is a constant vector. This optimization is done in `FoldOpIntoPhi` when the
select condition is a phi node with constant incoming values.
Without the fix, we are miscompiling (i.e. incorrectly folding the
select into the phi node) when the vector contains non-zero
elements.
This patch fixes the miscompile and we will correctly fold based on the
select vector operand (see added test cases).
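
A hand-written sketch of the problematic shape: the select condition is a phi
whose incoming values are constant vectors, and one of them is neither
all-zero nor all-ones:

  define <2 x i32> @sel(i1 %c, <2 x i32> %a, <2 x i32> %b) {
  entry:
    br i1 %c, label %then, label %merge
  then:
    br label %merge
  merge:
    %cond = phi <2 x i1> [ <i1 true, i1 false>, %entry ], [ zeroinitializer, %then ]
    ; <i1 true, i1 false> picks one lane from %a and one from %b, so the
    ; select must not be folded as if that incoming value were all-true.
    %sel = select <2 x i1> %cond, <2 x i32> %a, <2 x i32> %b
    ret <2 x i32> %sel
  }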

Reviewers: majnemer, sanjoy, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31189

llvm-svn: 298845
2017-03-27 13:52:51 +00:00
Serge Pavlov b71bb80c2d [LoopUnroll] Remap references in peeled iteration
References in cloned blocks must be remapped prior to dominator
calculation.

Differential Revision: https://reviews.llvm.org/D31281

llvm-svn: 298811
2017-03-26 16:46:53 +00:00
Joerg Sonnenberger fa7367428a Split the SimplifyCFG pass into two variants.
The first variant contains all current transformations except
transforming switches into lookup tables. The second variant
contains all current transformations.

The switch-to-lookup-table conversion results in code that is more
difficult to analyze and optimize by other passes. Most importantly,
it can inhibit Dead Code Elimination. As such it is often beneficial to
only apply this transformation very late. A common example is inlining,
which can often result in range restrictions for the switch expression.
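
For reference, a hand-written sketch of the conversion that the first variant
now skips:

  define i32 @map(i32 %x) {
  entry:
    switch i32 %x, label %default [
      i32 0, label %a
      i32 1, label %b
      i32 2, label %c
    ]
  a:
    br label %merge
  b:
    br label %merge
  c:
    br label %merge
  default:
    br label %merge
  merge:
    %r = phi i32 [ 10, %a ], [ 20, %b ], [ 30, %c ], [ 0, %default ]
    ret i32 %r
  }
  ; late SimplifyCFG may replace the switch with a bounds check plus a load
  ; from a constant table, which hides the per-case control flow from
  ; passes such as DCE that run afterwards.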

Changes in execution time according to LNT:
SingleSource/Benchmarks/Misc/fp-convert +3.03%
MultiSource/Benchmarks/ASC_Sequoia/CrystalMk/CrystalMk -11.20%
MultiSource/Benchmarks/Olden/perimeter/perimeter -10.43%
and a couple of smaller changes. For perimeter it also results in a
2.6% smaller binary.

Differential Revision: https://reviews.llvm.org/D30333

llvm-svn: 298799
2017-03-26 06:44:08 +00:00
Chandler Carruth 0d256c0f5d [IR] Make SwitchInst::CaseIt almost a normal iterator.
This moves it to the iterator facade utilities giving it full random
access semantics, etc. It can also now be used with standard algorithms
like std::all_of and std::any_of and range adaptors like llvm::reverse.

Also make the semantics of iterating match what every other iterator
uses and forbid decrementing past the begin iterator. This was used as
a hacky way to work around iterator invalidation. However, every
instance trying to do this failed to actually avoid touching invalid
iterators despite the clear documentation that the removed and all
subsequent iterators become invalid including the end iterator. So I've
added a return of the next iterator to removeCase and rewritten the
loops that were doing this to correctly follow the iterator pattern of
either incrementing or removing and assigning fresh values to the
iterator and the end.

In one case we were trying to go backwards to make this cleaner but it
doesn't actually work. I've made that code match the code we use
everywhere else to remove cases as we iterate. This changes the order of
cases in one test output and I moved that test to CHECK-DAG so it
wouldn't care -- the order isn't semantically meaningful anyways.

llvm-svn: 298791
2017-03-26 02:49:23 +00:00
Eric Christopher 0935875c40 Change the default attributes for llvm.prefetch to inaccessiblemem_or_argmemonly
so that we can perform some optimizations across it.

Fixes PR32365

llvm-svn: 298781
2017-03-25 20:20:23 +00:00
Ivan Krasin c2124e185c Revert r298620: [LV] Vectorize GEPs
Reason: breaks linking Chromium with LLD + ThinLTO (a pass crashes)
LLVM bug: https://bugs.llvm.org//show_bug.cgi?id=32413

Original change description:

[LV] Vectorize GEPs

This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.

Original Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298735
2017-03-24 20:49:43 +00:00
Matt Arsenault 4c7795dd31 AMDGPU: Fold rcp/rsq of undef to undef
llvm-svn: 298725
2017-03-24 19:04:57 +00:00
Teresa Johnson 428b9e0627 [ThinLTO] Correct counting of functions in inliner stats
Summary: Declarations need to be filtered out when counting functions.

Reviewers: eraman

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31336

llvm-svn: 298720
2017-03-24 17:59:06 +00:00
Daniel Berlin 9d0796e5d0 NewGVN: Fix PR32403 - Handling of undef in phis was not quite correct
due to LLVM's view of phi nodes. It would cause NewGVN not to reach a
fixpoint in some interesting edge cases.

llvm-svn: 298687
2017-03-24 05:30:34 +00:00
Dehao Chen 722e94061b Set the prof weight correctly for call instructions in DeadArgumentElimination.
Summary: In DeadArgumentElimination, call instructions are replaced with new ones. We also need to set the prof weights on the new calls so that function inlining can find the correct profile.

Reviewers: eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31143

llvm-svn: 298660
2017-03-23 23:26:00 +00:00
Bryant Wong def79b21e4 [MetaRenamer] Don't rename library functions.
Library functions can have specific semantics that affect the behavior of
certain passes. DSE, for instance, gives special treatment to malloc-ed pointers
but not to pointers returned from an equivalently typed (but differently named)
function.

MetaRenamer ought not to alter program semantics, so library functions must
remain untouched.
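
For illustration (a hand-written sketch, not from the patch):

  declare i8* @malloc(i64)

  define void @f() {
    ; DSE knows @malloc returns fresh memory, so this store is dead and
    ; removable. If @malloc were renamed to an arbitrary identifier, that
    ; reasoning would be lost and the program's optimized form would change.
    %p = call i8* @malloc(i64 16)
    store i8 0, i8* %p
    ret void
  }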

Reviewers: mehdi_amini, majnemer, chandlerc, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D31304

llvm-svn: 298659
2017-03-23 23:21:07 +00:00
Dehao Chen 775341a14c Use isFunctionHotInCallGraph to set the function section prefix.
Summary: The current prefix-based function layout algorithm only looks at a function's entry count, which is not sufficient. A function should be grouped with other hot functions if its entry count or any of its call edge counts is hot.

Reviewers: davidxl, eraman

Reviewed By: eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31225

llvm-svn: 298656
2017-03-23 23:14:11 +00:00
Gil Rapaport 638d4538cd [LV] Add regression test for r297610
The new test asserts that scalarized memory operations get memcheck metadata
added even if the loop is only unrolled.

Differential Revision: https://reviews.llvm.org/D30972

llvm-svn: 298641
2017-03-23 20:02:23 +00:00
Teresa Johnson 0c6a4ff8dc [ThinLTO] Add support for emitting minimized bitcode for thin link
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.

The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.

Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.

However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.

Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin linked bitcode, and typically is automatically generated from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).

Added various tests to ensure that we get identical index files out of
the thin link step.

Reviewers: mehdi_amini, pcc

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31027

llvm-svn: 298638
2017-03-23 19:47:39 +00:00
Matthew Simpson 4e7b71bc86 [LV] Vectorize GEPs
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.

With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.

The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.
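
A hand-written sketch of the new output shape (hypothetical names, VF = 4):
where a vector of pointers is needed, the vectorizer now emits a single
vector GEP instead of four scalar GEPs feeding an insertelement chain:

  define void @store_pointers(double* %base, <4 x i64> %offsets, <4 x double*>* %out) {
    %vgep = getelementptr inbounds double, double* %base, <4 x i64> %offsets
    store <4 x double*> %vgep, <4 x double*>* %out
    ret void
  }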

Differential Revision: https://reviews.llvm.org/D30710

llvm-svn: 298620
2017-03-23 16:29:58 +00:00
Matthew Simpson 1fb4064531 [LV] Delete unneeded scalar GEP creation code
The code for generating scalar base pointers in vectorizeMemoryInstruction is
not needed. We currently scalarize all GEPs and maintain the scalarized values
in VectorLoopValueMap. The GEP cloning in this unneeded code is the same as
that in scalarizeInstruction. The test cases that changed as a result of this
patch changed because we were able to reuse the scalarized GEP that we
previously generated instead of cloning a new one.

Differential Revision: https://reviews.llvm.org/D30587

llvm-svn: 298615
2017-03-23 16:07:21 +00:00
Dehao Chen 53a0c082d2 Do not set branch weight if the branch weight annotation is present.
Summary: ThinLTO will annotate the CFG twice. If the branch weight is set by the first annotation, we should not set it again in the second annotation, because the first annotation is more accurate: less optimization has run at that point that could degrade debug info accuracy.

Reviewers: tejohnson, davidxl

Reviewed By: tejohnson

Subscribers: mehdi_amini, aprantl, llvm-commits

Differential Revision: https://reviews.llvm.org/D31228

llvm-svn: 298602
2017-03-23 14:43:10 +00:00
Luqman Aden 3f807c91dc Preserve nonnull metadata on Loads through SROA & mem2reg.
Summary:
https://llvm.org/bugs/show_bug.cgi?id=31142 :

SROA was dropping the nonnull metadata on loads from allocas that got optimized out. This patch simply preserves nonnull metadata on loads through SROA and mem2reg.
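
A minimal hand-written sketch of the pattern:

  define i8* @promote(i8* %p) {
  entry:
    %slot = alloca i8*
    store i8* %p, i8** %slot
    %v = load i8*, i8** %slot, !nonnull !0
    ret i8* %v
  }
  !0 = !{}

  ; after mem2reg, %v becomes %p; the patch keeps the nonnull fact instead
  ; of silently dropping it along with the load.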

Reviewers: chandlerc, efriedma

Reviewed By: efriedma

Subscribers: hfinkel, spatel, efriedma, arielb1, davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D27114

llvm-svn: 298540
2017-03-22 19:16:39 +00:00
Sanjay Patel 2f602cea41 [InstCombine] canonicalize insertelement of scalar constant ahead of insertelement of variable
insertelement (insertelement X, Y, IdxC1), ScalarC, IdxC2 -->
insertelement (insertelement X, ScalarC, IdxC2), Y, IdxC1

As noted in the code comment and seen in the test changes, the motivation is that by pulling
constant insertion up, we may be able to constant fold some insertelement instructions.
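
A hand-written sketch of the payoff when the base vector is already constant:

  define <4 x i32> @hoist_constant_insert(i32 %y) {
    %i1 = insertelement <4 x i32> zeroinitializer, i32 %y, i32 1
    %i2 = insertelement <4 x i32> %i1, i32 7, i32 2
    ret <4 x i32> %i2
  }
  ; after the canonicalization, inserting 7 into the constant folds away,
  ; leaving a single insert:
  ;   insertelement <4 x i32> <i32 0, i32 0, i32 7, i32 0>, i32 %y, i32 1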

Differential Revision: https://reviews.llvm.org/D31196

llvm-svn: 298520
2017-03-22 17:10:44 +00:00
Craig Topper ad5c2d04f7 [ValueTracking] Make sure we keep range metadata information when calculating known bits for calls to bitreverse intrinsic.
llvm-svn: 298488
2017-03-22 07:22:49 +00:00
Craig Topper 07f2915ad8 [InstCombine] Teach SimplifyDemandedUseBits to shrink Constants on the left side of subtracts
Summary: Subtracts can have constants on the left side, but we don't shrink them based on demanded bits. This patch fixes that to match the right hand side.
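
For example (a hand-built sketch):

  define i8 @shrink_lhs(i32 %x) {
    %sub = sub i32 305419896, %x   ; 0x12345678 on the left
    %t = trunc i32 %sub to i8      ; only the low 8 bits are demanded
    ret i8 %t                      ; the constant can shrink to 120 (0x78)
  }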

Reviewers: davide, majnemer, spatel, sanjoy, hfinkel

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31119

llvm-svn: 298478
2017-03-22 04:03:53 +00:00
Matt Arsenault 3dbeefa978 AMDGPU: Mark all unspecified CC functions in tests as amdgpu_kernel
Currently the default C calling convention functions are treated
the same as compute kernels. Make this explicit so the default
calling convention can be changed to a non-kernel.

Converted with perl -pi -e 's/define void/define amdgpu_kernel void/'
on the relevant test directories (and undoing in one place that actually
wanted a non-kernel).

llvm-svn: 298444
2017-03-21 21:39:51 +00:00
Sanjay Patel 68c6beabe9 [InstCombine] regenerate checks; NFC
llvm-svn: 298432
2017-03-21 20:14:38 +00:00
George Burgess IV 56c7e88c2c Let llvm.objectsize be conservative with null pointers
This adds a parameter to @llvm.objectsize that makes it return
conservative values if it's given null.
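
A sketch under the assumption that the new flag is a trailing i1 parameter
that, when set, treats a null pointer as having unknown size:

  declare i64 @llvm.objectsize.i64.p0i8(i8*, i1, i1)

  define i64 @size_of_null() {
    ; with the last argument true, a null pointer yields the conservative
    ; answer (-1 here, since min is false) instead of 0
    %sz = call i64 @llvm.objectsize.i64.p0i8(i8* null, i1 false, i1 true)
    ret i64 %sz
  }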

This fixes PR23277.

Differential Revision: https://reviews.llvm.org/D28494

llvm-svn: 298430
2017-03-21 20:08:59 +00:00
Sanjay Patel 00ece756c3 [InstCombine] auto-generate better checks; NFC
llvm-svn: 298377
2017-03-21 14:04:44 +00:00
Matt Arsenault 6b00d40900 InstCombine: Check source value precision when reducing cast intrinsic
Missed this check when porting from the libcall version.

llvm-svn: 298312
2017-03-20 21:59:24 +00:00
Daniel Berlin fa42a23cfc Add missing updated test from VN coercion changes. Instructions were renamed. NFC
llvm-svn: 298280
2017-03-20 18:04:19 +00:00