Commit Graph

6571 Commits

Easwaran Raman f53baca686 Replace the use of MaxFunctionCount module flag
Adds an interface to get ProfileSummary for a module and makes InlineCost use ProfileSummary to get max function count.

Differential Revision: http://reviews.llvm.org/D18622

llvm-svn: 266477
2016-04-15 21:39:58 +00:00
Tim Northover 903f81ba18 ARM: don't try to hoist constant RHS out of a division.
Divisions by a constant can be converted into multiplies which are usually
cheaper, but this isn't possible if the constant gets separated (particularly
in loops). Fix this by telling ConstantHoisting that the immediate in a DIV is
cheap.
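
A hedged sketch of the protected pattern (function and constant invented for illustration): if the immediate were hoisted out of the loop into a register, the udiv below could no longer be lowered to a cheaper multiply.

```
define i32 @sum_div(i32* %p, i32 %n) {
entry:
  br label %loop

loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %acc = phi i32 [ 0, %entry ], [ %acc.next, %loop ]
  %addr = getelementptr inbounds i32, i32* %p, i32 %i
  %v = load i32, i32* %addr
  ; The immediate must stay attached to the udiv to allow a
  ; multiply-by-reciprocal lowering.
  %q = udiv i32 %v, 1000003
  %acc.next = add i32 %acc, %q
  %i.next = add i32 %i, 1
  %done = icmp eq i32 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret i32 %acc.next
}
```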

I considered making the check generic, but neither AArch64 (strangely) nor x86
showed any benefit on the tests I had.

llvm-svn: 266464
2016-04-15 18:17:18 +00:00
David Majnemer 2e02ba78d5 [InstCombine] Don't transform compares of calls to functions named fabs{f,l,}
InstCombine wants to optimize compares of calls to fabs with zero.
However, we didn't have the necessary legality checking to verify that
the function call had the same behavior as fabs.
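
A minimal sketch of the hazard (declaration invented): the fold below is only legal once we have verified that @fabs really behaves like libm's fabs.

```
declare double @fabs(double)

define i1 @fabs_is_zero(double %x) {
  %mag = call double @fabs(double %x)
  ; Only foldable to "fcmp oeq double %x, 0.0" if @fabs is known to be
  ; the real fabs; any function may carry this name.
  %cmp = fcmp oeq double %mag, 0.0
  ret i1 %cmp
}
```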

llvm-svn: 266452
2016-04-15 17:21:03 +00:00
Adrian Prantl 75819aedf6 [PR27284] Reverse the ownership between DICompileUnit and DISubprogram.
Currently each Function points to a DISubprogram and DISubprogram has a
scope field. For member functions the scope is a DICompositeType. DIScopes
point to the DICompileUnit to facilitate type uniquing.

Distinct DISubprograms (with isDefinition: true) are not part of the type
hierarchy and cannot be uniqued. This change removes the subprograms
list from DICompileUnit and instead adds a pointer to the owning compile
unit to distinct DISubprograms. This would make it easy for ThinLTO to
strip unneeded DISubprograms and their transitively referenced debug info.

Motivation
----------

Materializing DISubprograms is currently the most expensive operation when
doing a ThinLTO build of clang.

We want the DISubprogram to be stored in a separate Bitcode block (or the
same block as the function body) so we can avoid having to expensively
deserialize all DISubprograms together with the global metadata. If a
function has been inlined into another subprogram, we need to store a
reference to the block containing the inlined subprogram.

Attached to https://llvm.org/bugs/show_bug.cgi?id=27284 is a python script
that updates LLVM IR testcases to the new format.
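
A rough sketch of the new ownership direction (fields abbreviated and illustrative; the attached script is the authoritative rewrite):

```
!llvm.dbg.cu = !{!0}
!0 = distinct !DICompileUnit(language: DW_LANG_C99, file: !1,
                             emissionKind: FullDebug)
!1 = !DIFile(filename: "t.c", directory: "/tmp")
; The distinct definition now points at its owning CU via "unit:";
; the CU no longer carries a subprograms: list.
!2 = distinct !DISubprogram(name: "f", file: !1, isDefinition: true,
                            unit: !0)
```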

http://reviews.llvm.org/D19034
<rdar://problem/25256815>

llvm-svn: 266446
2016-04-15 15:57:41 +00:00
Sanjay Patel f11ab05bdb [SimplifyCFG] propagate branch metadata when creating select (PR27344)
This is almost identical to:
http://reviews.llvm.org/rL264527

This doesn't solve PR27344; it just allows the profile weights to survive. 
To solve the bug, we need to use the profile weights in the backend.
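
A small sketch of the surviving weights (numbers illustrative): the branch's branch_weights metadata is carried over onto the select.

```
define i32 @pick(i1 %cond, i32 %a, i32 %b) {
  ; The !prof weights that used to annotate the branch now annotate
  ; the select that replaced it.
  %r = select i1 %cond, i32 %a, i32 %b, !prof !0
  ret i32 %r
}

!0 = !{!"branch_weights", i32 97, i32 3}
```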

llvm-svn: 266442
2016-04-15 15:32:12 +00:00
Sanjay Patel 81433e99b9 [SimplifyCFG] add metadata to show failure to propagate (PR27344)
llvm-svn: 266435
2016-04-15 14:53:35 +00:00
Justin Lebar f04e678e36 Move divergent-target test into CodeGen/NVPTX because it requires an NVPTX target.
llvm-svn: 266403
2016-04-15 01:20:52 +00:00
Justin Lebar cad81cf6b3 [Speculation] Add a SpeculativeExecution mode where the pass does nothing unless TTI::hasBranchDivergence() is true.
Summary:
This lets us add this pass to the IR pass manager unconditionally; it
will simply not do anything on targets without branch divergence.

Reviewers: tra

Subscribers: llvm-commits, jingyue, rnk, chandlerc

Differential Revision: http://reviews.llvm.org/D18625

llvm-svn: 266398
2016-04-15 00:32:09 +00:00
Vedant Kumar 4960fbf391 [test] Require 'asserts' for a test which uses -debug-only
Without this line, bots which run check-all on Release compilers will
break.

llvm-svn: 266386
2016-04-14 23:32:40 +00:00
Michael Kuperstein 16f13e252b [AliasSetTracker] Correctly handle changing the size of an entry
If the size of an AST entry changes, we also need to make sure we perform
necessary alias set merges, as the new size may overlap pointers in other sets.
We happen to run into this with memset, because memset allows an entry for
an i8* pointer to have a decidedly non-i8 size.
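
A sketch of the triggering shape, assuming the memset intrinsic signature of this era: the entry is keyed on an i8* but spans 16 bytes, so it may overlap pointers tracked in other alias sets.

```
declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1)

define void @zero16(i32* %p) {
  %raw = bitcast i32* %p to i8*
  ; The AliasSetTracker entry for %raw must record size 16, not 1,
  ; and merge with any alias set it now overlaps.
  call void @llvm.memset.p0i8.i64(i8* %raw, i8 0, i64 16, i32 4, i1 false)
  ret void
}
```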

This fixes PR27262.

Differential Revision: http://reviews.llvm.org/D18939

llvm-svn: 266381
2016-04-14 22:00:11 +00:00
Renato Golin 5cb666add7 [ARM] Adding IEEE-754 SIMD detection to loop vectorizer
Some SIMD implementations are not IEEE-754 compliant, for example ARM's NEON.

This patch teaches the loop vectorizer to only allow transformations of loops
that either contain no floating-point operations or have enough flags
allowing reduced precision (e.g. -ffast-math, Darwin).

For that, the target description now has a method which tells us if the
vectorizer is allowed to handle FP math without falling into unsafe
representations, plus a check on every FP instruction in the candidate loop
to check for the safety flags.

This commit makes LLVM behave like GCC with respect to ARM NEON support, but
it stops short of fixing the underlying problem: sub-normals. Neither GCC
nor LLVM has a flag for allowing sub-normal operations. Before this patch,
GCC only allows it using unsafe-math flags and LLVM allows it by default with
no way to turn it off (short of not using NEON at all).

As a first step, we push this change to make it safe and in sync with GCC.
The second step is to discuss a new sub-normals flag with both communities
and come up with a common solution. The third step is to improve the FastMath
flags in LLVM to encode sub-normals and use those flags to restrict NEON FP.

Fixes PR16275.

llvm-svn: 266363
2016-04-14 20:42:18 +00:00
Sanjay Patel e998b91d86 [InstCombine] remove constant by inverting compare + logic (PR27105)
https://llvm.org/bugs/show_bug.cgi?id=27105

We can check if all bits outside of a constant mask are set with a 
single constant.

As noted in the bug report, although this form should be considered the
canonical IR, backends may want to transform this into an 'andn' / 'andc' 
comparison against zero because that could be a single machine instruction.
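
A sketch of the canonical form described above (mask chosen arbitrarily): all bits outside the mask are set exactly when or'ing the mask in yields all-ones.

```
define i1 @high_bits_all_set(i32 %x) {
  ; Fill in the masked-off low bits...
  %filled = or i32 %x, 255
  ; ...then a single constant compare checks every remaining bit.
  %cmp = icmp eq i32 %filled, -1
  ret i1 %cmp
}
```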

Differential Revision: http://reviews.llvm.org/D18842

llvm-svn: 266362
2016-04-14 20:17:40 +00:00
Dehao Chen 46f8fbbb1b Update discriminator assignment algorithm to handle nested calls correctly.
Summary: Add discriminators for nested calls correctly.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D19127

llvm-svn: 266354
2016-04-14 18:37:18 +00:00
Adam Nemet 7aab648831 Revert "Support arbitrary addrspace pointers in masked load/store intrinsics"
This reverts commit r266086.

It breaks the LTO build of gcc in SPEC2000.

llvm-svn: 266282
2016-04-14 08:47:17 +00:00
Tim Northover 5c02f9ad28 ARM: override cost function to re-enable ConstantHoisting (& fix it).
At some point, ARM stopped getting any benefit from ConstantHoisting because
the pass called a different variant of getIntImmCost. Reimplementing the
correct variant revealed some problems, however:

  + ConstantHoisting was modifying switch statements. This is simply invalid;
    the cases must remain integer constants no matter the notional cost.
  + ConstantHoisting was mangling alloca instructions in the entry block. These
    should be handled by FrameLowering, so constants actually have a cost of 0.
    Worse, the resulting bitcasts meant they became dynamic allocas.

rdar://25707382

llvm-svn: 266260
2016-04-13 23:08:27 +00:00
Easwaran Raman cbd3989742 Test case for r265852.
llvm-svn: 266237
2016-04-13 19:43:31 +00:00
Betul Buyukkurt bf8554c279 [PGO] Remove redundant VP instrumentation
LLVM optimization passes may reduce a profiled target expression
to a constant. Removing runtime calls at such instrumentation points
would help speedup the runtime of the instrumented program.

llvm-svn: 266229
2016-04-13 18:52:19 +00:00
Mehdi Amini b5b289339b Revert "Make aliases explicit in the summary"
Inadvertently committed...

This reverts commit e618ec93786d99df2ddf280ad2d5e02f5516cecf.

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266215
2016-04-13 17:20:07 +00:00
Mehdi Amini ce744a95fd Make aliases explicit in the summary
Summary:
To be able to work accurately on the reference graph when making decisions
about internalizing, promoting, renaming, etc., we need to have the alias
information explicit.

Reviewers: tejohnson

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D18836

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266214
2016-04-13 17:18:42 +00:00
David L Kreitzer 752c1448fe Simplify strlen to a subtraction for certain cases.
Patch by Li Huang (li1.huang@intel.com)
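
A hedged sketch of one such case (string and offset invented for illustration): strlen of a pointer into a constant string can become a subtraction from the known length.

```
@str = constant [12 x i8] c"hello world\00"

declare i64 @strlen(i8*)

define i64 @tail_len(i64 %i) {
  %p = getelementptr inbounds [12 x i8], [12 x i8]* @str, i64 0, i64 %i
  ; Foldable to "sub i64 11, %i" for in-bounds %i.
  %n = call i64 @strlen(i8* %p)
  ret i64 %n
}
```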

Differential Revision: http://reviews.llvm.org/D18230

llvm-svn: 266200
2016-04-13 14:31:06 +00:00
Petar Jovanovic 644b8c1a5d Calculate __builtin_object_size when pointer depends on a condition
This patch fixes the calculation of builtin_object_size when it depends on a
condition. Before this patch, the compiler did not know how to calculate the
object size when it found a condition that could not be eliminated.
This patch enables calculating builtin_object_size even when the condition
cannot be eliminated, by choosing the minimum or maximum value produced by
the condition. Whether the minimum or maximum is chosen is based on the
second argument of __builtin_object_size.
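
A sketch of the idea (allocas and intrinsic mangling illustrative): with the select left in place, objectsize can still answer by taking the minimum (second argument true) or maximum of the two arms.

```
declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

define i64 @size_of_either(i1 %c) {
  %a = alloca [16 x i8]
  %b = alloca [32 x i8]
  %pa = getelementptr inbounds [16 x i8], [16 x i8]* %a, i64 0, i64 0
  %pb = getelementptr inbounds [32 x i8], [32 x i8]* %b, i64 0, i64 0
  %p = select i1 %c, i8* %pa, i8* %pb
  ; min:true should conservatively fold to 16; min:false to 32.
  %sz = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 true)
  ret i64 %sz
}
```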

Patch by Strahinja Petrovic.

Differential Revision: http://reviews.llvm.org/D18438

llvm-svn: 266193
2016-04-13 12:25:25 +00:00
David Majnemer 3ee5f34469 [InstCombine] We folded an fcmp to an i1 instead of a vector of i1
Remove an ad-hoc transform in InstCombine and replace it with more
general machinery (ValueTracking, InstructionSimplify and VectorUtils).

This fixes PR27332.

llvm-svn: 266175
2016-04-13 06:55:52 +00:00
Matt Arsenault b34eea9cb5 AMDGPU: Remove leftover ShaderType attributes in tests
llvm-svn: 266155
2016-04-13 00:39:48 +00:00
Sanjay Patel 5e5056d939 [x86, InstCombine] fix masked load pass-through operand to be a zero vector
This bug was introduced with:
http://reviews.llvm.org/rL262269

AVX masked loads are specified to set vector lanes to zero when the high bit of the mask
element for that lane is zero:
"If the mask is 0, the corresponding data element is set to zero in the load form of these
instructions, and unmodified in the store form." --Intel manual
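
A sketch of the corrected fold (intrinsic mangling abbreviated): masked-off lanes must come from a zero vector rather than undef.

```
declare <4 x float> @llvm.masked.load.v4f32(<4 x float>*, i32, <4 x i1>,
                                            <4 x float>)

define <4 x float> @load4(<4 x float>* %p, <4 x i1> %m) {
  ; The pass-through operand is zeroinitializer, so lanes whose mask
  ; bit is clear read as 0.0, matching the AVX semantics quoted above.
  %v = call <4 x float> @llvm.masked.load.v4f32(<4 x float>* %p, i32 16,
            <4 x i1> %m, <4 x float> zeroinitializer)
  ret <4 x float> %v
}
```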

Differential Revision: http://reviews.llvm.org/D19017

llvm-svn: 266148
2016-04-12 23:16:23 +00:00
Mehdi Amini d5faa267c4 Add a pass to name anonymous/nameless function
Summary:
For correct handling of aliases to nameless
functions, we need to be able to refer to them through a GUID in the summary.
Here we name them using a hash of the non-private global names in the module.

Reviewers: tejohnson

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D18883

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266132
2016-04-12 21:35:28 +00:00
Mehdi Amini 68da426eea Move summary creation out of llvm-as into opt
Summary:
Let's keep llvm-as "dumb": it converts textual IR to bitcode. This
commit removes the dependency from llvm-as to libLLVMAnalysis.
We'll add summary support back to llvm-as if we get to a textual
representation for it at some point. In the meantime, opt seems
like a better place for that.

Reviewers: tejohnson

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D19032

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266131
2016-04-12 21:35:18 +00:00
James Y Knight 19f6cce4e3 Add __atomic_* lowering to AtomicExpandPass.
(Recommit of r266002, with r266011, r266016, and not accidentally
including an extra unused/uninitialized element in LibcallRoutineNames)

AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.

This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.

Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.

This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.

It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.

At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.
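
A sketch assuming a target without native 16-byte atomics: AtomicExpandPass would now rewrite the load below into a generic __atomic_load libcall (seq_cst mapping to memory order 5) instead of leaving it for ISel.

```
define i128 @get(i128* %p) {
  ; Expanded, roughly, to:
  ;   call void @__atomic_load(i64 16, i8* %p8, i8* %tmp, i32 5)
  ; with %tmp a stack slot that is then reloaded as an i128.
  %v = load atomic i128, i128* %p seq_cst, align 16
  ret i128 %v
}
```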

Differential Revision: http://reviews.llvm.org/D18200

llvm-svn: 266115
2016-04-12 20:18:48 +00:00
Artur Pilipenko dbe0bc8df4 Support arbitrary addrspace pointers in masked load/store intrinsics
This is a resubmittion of 263158 change.

This patch fixes the problem which occurs when loop-vectorize tries to use the @llvm.masked.load/store intrinsics for a non-default addrspace pointer. It fails with a "Calling a function with a bad signature!" assertion in the CallInst constructor, because it tries to pass a non-default addrspace pointer to the pointer argument, which has the default addrspace.

The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.
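
A sketch of the overloaded form (the mangled suffix is illustrative): the pointer type now participates in the intrinsic name, so a non-default addrspace type-checks.

```
declare <8 x float> @llvm.masked.load.v8f32.p1v8f32(
    <8 x float> addrspace(1)*, i32, <8 x i1>, <8 x float>)

define <8 x float> @load_as1(<8 x float> addrspace(1)* %p, <8 x i1> %m) {
  %v = call <8 x float> @llvm.masked.load.v8f32.p1v8f32(
      <8 x float> addrspace(1)* %p, i32 32, <8 x i1> %m,
      <8 x float> zeroinitializer)
  ret <8 x float> %v
}
```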

Reviewed By: reames

Differential Revision: http://reviews.llvm.org/D17270

llvm-svn: 266086
2016-04-12 15:58:04 +00:00
Rafael Espindola d41b54be11 This reverts commit r266002, r266011 and r266016.
They broke the msan bot.

Original message:

Add __atomic_* lowering to AtomicExpandPass.

AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.

This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.

Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.

This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.

It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.

At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.

Differential Revision: http://reviews.llvm.org/D18200

llvm-svn: 266062
2016-04-12 12:30:25 +00:00
George Burgess IV 278199f615 Add the allocsize attribute to LLVM.
`allocsize` is a function attribute that allows users to request that
LLVM treat arbitrary functions as allocation functions.

This patch makes LLVM accept the `allocsize` attribute, and makes
`@llvm.objectsize` recognize said attribute.
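
A minimal sketch (@my_alloc is a hypothetical allocator, not a real API): allocsize(0) marks argument 0 as the allocation size in bytes, which @llvm.objectsize can then see through.

```
declare i8* @my_alloc(i32) allocsize(0)
declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

define i64 @known_size() {
  %p = call i8* @my_alloc(i32 64)
  ; Foldable to i64 64 thanks to the allocsize attribute.
  %n = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 false)
  ret i64 %n
}
```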

The review for this was split into two patches for ease of reviewing:
D18974 and D14933. As promised on the revisions, I'm landing both
patches as a single commit.

Differential Revision: http://reviews.llvm.org/D14933

llvm-svn: 266032
2016-04-12 01:05:35 +00:00
JF Bastien 4f43cfd2c2 MergeFunctions: test alloca better
r237193 fixed the handling of alloca size / align in MergeFunctions, but only tested one of them and didn't follow FunctionComparator::cmpOperations' usual comparison pattern. It also didn't update Instruction.cpp:haveSameSpecialState, which I'll do separately.

llvm-svn: 266022
2016-04-12 00:03:26 +00:00
Mehdi Amini ae280e54a9 ThinLTO renaming: use module hash instead of position in the summary
This is more robust to changes in the link ordering.

Differential Revision: http://reviews.llvm.org/D18946

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266018
2016-04-11 23:26:46 +00:00
Evgeniy Stepanov f17120a85f [safestack] Add canary to unsafe stack frames
Add StackProtector to SafeStack. This adds limited protection against
data corruption in the caller frame. The current implementation treats
all stack protector levels as -fstack-protector-all.

llvm-svn: 266004
2016-04-11 22:27:48 +00:00
James Y Knight b91d38c5fe Add __atomic_* lowering to AtomicExpandPass.
AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.

This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.

Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.

This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.

It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.

At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.

Differential Revision: http://reviews.llvm.org/D18200

llvm-svn: 266002
2016-04-11 22:22:33 +00:00
Davide Italiano 0778bec6f9 [DebugInfo/Test] Add CU as required.
llvm-svn: 265999
2016-04-11 21:16:48 +00:00
Matthew Simpson 53207a99f9 [LoopUtils, LV] Fix PR27246 (first-order recurrences)
This patch ensures that when we detect first-order recurrences, we reject a phi
node if its previous value is also a phi node. During vectorization the initial
and previous values of the recurrence are shuffled together to create the value
for the current iteration. However, phi nodes are not widened like other
instructions. This fixes PR27246.

Differential Revision: http://reviews.llvm.org/D18971

llvm-svn: 265983
2016-04-11 19:48:18 +00:00
Davide Italiano 7469644259 [DebugInfo] Fix even more tests to include DICompileunit.
llvm-svn: 265980
2016-04-11 18:53:27 +00:00
Adrian Prantl f95164227e Fix missing DICompileUnits in testcases
llvm-svn: 265974
2016-04-11 18:15:44 +00:00
Sanjay Patel 1dd212f900 [InstCombine] consolidate tests for related bugs
llvm-svn: 265973
2016-04-11 17:58:37 +00:00
Adrian Prantl d1f52b672a Update discriminator testcases to use proper NoDebug CUs instead of omitting
!llvm.dbg.cu.

llvm-svn: 265961
2016-04-11 16:58:35 +00:00
Sanjay Patel 816ec8882a [InstCombine] don't try to shift an illegal amount (PR26760)
This is the straightforward fix for PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

But we still need to make some changes to generalize this helper function
and then send the lshr case into here.

llvm-svn: 265960
2016-04-11 16:50:32 +00:00
Adrian Prantl b01a4d48ac More upgrading of old- and very-old-style debug info in testcases.
llvm-svn: 265953
2016-04-11 15:53:44 +00:00
Sanjoy Das f9d88e650b This reverts commit r265913 and r265912
See PR27315

r265913: "[IndVars] Eliminate op.with.overflow when possible"

r265912: "[SCEV] See through op.with.overflow intrinsics"
llvm-svn: 265950
2016-04-11 15:26:18 +00:00
Sanjay Patel 2d2d67994c [InstCombine] replace test that no longer works as intended
This is step 1 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760

llvm-svn: 265946
2016-04-11 15:19:44 +00:00
Teresa Johnson 2d5487cf44 [ThinLTO] Move summary computation from BitcodeWriter to new pass
Summary:
This is the first step in also serializing the index out to LLVM
assembly.

The per-module summary written to bitcode is moved out of the bitcode
writer and to a new analysis pass (ModuleSummaryIndexWrapperPass).
The pass itself uses a new builder class to compute index, and the
builder class is used directly in places where we don't have a pass
manager (e.g. llvm-as).

Because we are computing summaries outside of the bitcode writer, we no
longer can use value ids created by the bitcode writer's
ValueEnumerator. This required changing the reference graph edge type
to use a new ValueInfo class holding a union between a GUID (combined
index) and Value* (per-module index). The Value* are converted to the
appropriate value ID during bitcode writing.

Also, this enables removal of the BitWriter library's dependence on the
Analysis library that was previously required for the summary computation.

Reviewers: joker.eph

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D18763

llvm-svn: 265941
2016-04-11 13:58:45 +00:00
Sanjoy Das a07ad647ee [IndVars] Eliminate op.with.overflow when possible
Summary:
If we can prove that an op.with.overflow intrinsic does not overflow, we
can get rid of the intrinsic, and replace it with non-wrapping
arithmetic.
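
A sketch of the rewrite when no-overflow is provable (the context that makes it provable is omitted):

```
declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

define i32 @inc(i32 %i) {
  %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %i, i32 1)
  ; If SCEV proves no signed overflow, this pair becomes
  ; "add nsw i32 %i, 1" and "i1 false" respectively.
  %v = extractvalue { i32, i1 } %s, 0
  %o = extractvalue { i32, i1 } %s, 1
  ret i32 %v
}
```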

Reviewers: atrick, regehr

Subscribers: sanjoy, mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18685

llvm-svn: 265913
2016-04-10 22:50:31 +00:00
Elena Demikhovsky 751ed0a06a Loop vectorization with uniform load
The vectorization cost of a uniform load wasn't correctly calculated.
As a result, a simple loop that loads a uniform value wasn't vectorized.

Differential Revision: http://reviews.llvm.org/D18940

llvm-svn: 265901
2016-04-10 16:53:19 +00:00
Sanjoy Das dd77e1e6a5 Maintain calling convention when inlining calls to llvm.deoptimize
The behavior here was buggy -- we'd forget the calling convention after
inlining a callsite calling llvm.deoptimize.

llvm-svn: 265867
2016-04-09 00:22:59 +00:00
Sanjoy Das 87b9e1b727 Propagate Undef in llvm.cos Intrinsic
Summary:
The llvm.cos intrinsic currently does not propagate undefs. This change
transforms cos(undef) to a null value, i.e. 0.

There are 2 test cases added as well.
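
A one-line sketch of the fold:

```
declare double @llvm.cos.f64(double)

define double @cos_undef() {
  ; Any fixed result refines undef; this folds to 0.0.
  %r = call double @llvm.cos.f64(double undef)
  ret double %r
}
```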

Patch by Anna Thomas!

Reviewers: sanjoy

Subscribers: majnemer, llvm-commits

Differential Revision: http://reviews.llvm.org/D18863

llvm-svn: 265825
2016-04-08 18:21:11 +00:00
David Majnemer 56737722e4 [InstCombine] Fix miscompile in FoldSPFofSPF
We had a select of a cast of a select but attempted to replace the outer
select with the inner select despite their incompatible types.

Patch by Anton Korobeynikov!

This fixes PR27236.

llvm-svn: 265805
2016-04-08 16:51:49 +00:00
David Majnemer 60c6abc3cc [LoopVectorize] Register cloned assumptions
InstCombine cannot effectively remove redundant assumptions unless they are
registered in the assumption cache.  The vectorizer can create identical
assumptions but doesn't register them with the cache, resulting in
slower compile times because InstCombine tries to reason about a lot
more assumptions.

Fix this by registering the cloned assumptions.

llvm-svn: 265800
2016-04-08 16:37:10 +00:00
Silviu Baranga 6f444dfd55 Re-commit [SCEV] Introduce a guarded backedge taken count and use it in LAA and LV
This re-commits r265535, which was reverted in r265541 because it
broke the windows bots. The problem was that we had a PointerIntPair
which took a pointer to a struct allocated with new, and new doesn't
provide sufficient alignment guarantees.
This pattern was already present before r265535 and it just happened
to work. To fix this, we now split the PointerIntPair in the
ExitNotTakenInfo struct into a separate pointer and bool.

Original commit message:

Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.

However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.

In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.

We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.

Reviewers: anemet, mzolotukhin, hfinkel, sanjoy

Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D17201

llvm-svn: 265786
2016-04-08 14:29:09 +00:00
Duncan P. N. Exon Smith 4ec55f8ab6 Reapply "ValueMapper: Treat LocalAsMetadata more like function-local Values"
This reverts commit r265765, reapplying r265759 after changing a call from
LocalAsMetadata::get to ValueAsMetadata::get (and adding a unit test).  When a
local value is mapped to a constant (like "i32 %a" => "i32 7"), the new debug
intrinsic operand may no longer be pointing at a local.

    http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/19020/

The previous commit message follows:

--

This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).

This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy.  In particular:

  - MapValue returns nullptr or "metadata !{}" for missing locals in
    MetadataAsValue/LocalAsMetadata bridging pairs, depending on
    the RF_IgnoreMissingLocals flag.

  - MapValue doesn't memoize LocalAsMetadata-related results.

  - MapMetadata no longer deals with LocalAsMetadata or
    RF_IgnoreMissingLocals at all.  (This wasn't in r265631 at all, but
    I realized during testing it would make the patch simpler with no
    loss of generality.)

r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals.  This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:

    define void @foo(i32 %arg) {
      call void @llvm.some.intrinsic(metadata i32 %x)
      %x = add i32 1, i32 %arg
    }

If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.

I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.

As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset.  Otherwise, match
the behaviour of r265631.

Original commit message:

    ValueMapper: Make LocalAsMetadata match function-local Values

    Start treating LocalAsMetadata similarly to function-local members of
    the Value hierarchy in MapValue and MapMetadata.

      - Don't memoize them.
      - Return nullptr if they are missing.

    This also cleans up ConstantAsMetadata to stop listening to the
    RF_IgnoreMissingLocals flag.

llvm-svn: 265768
2016-04-08 03:13:22 +00:00
Duncan P. N. Exon Smith 805873148a Revert "ValueMapper: Treat LocalAsMetadata more like function-local Values"
This reverts commit r265759, since even this limited version breaks some
bots:
  http://lab.llvm.org:8011/builders/clang-ppc64be-linux/builds/3311
  http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-autoconf/builds/17696

This also reverts r265761 "ValueMapper: Unduplicate
RF_NoModuleLevelChanges check, NFC", since I had trouble separating it
from r265759.

llvm-svn: 265765
2016-04-08 00:56:21 +00:00
Sanjoy Das 5ce3272833 Don't IPO over functions that can be de-refined
Summary:
Fixes PR26774.

If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".

Motivation:

I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard.  So transforming:

```
void f(unsigned x) {
  unsigned t = 5 / x;
  (void)t;
}
```

to

```
void f(unsigned x) { }
```

is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).

Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM.  For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).

Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have.  This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.

For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it; since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store.  As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal.  The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal.  Such a
refined variant will look like it is `readonly`.  However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.

Note: this is not just a problem with atomics or with linking
differently optimized object files.  See PR26774 for more realistic
examples that involved neither.

This patch:

This change introduces a new predicate over linkage types,
`GlobalValue::mayBeDerefined`, that returns true if the linkage type
allows a function to be replaced by a differently optimized variant at
link time.  It then changes a set of IPO passes to bail out if they see
such a function.

Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18634

llvm-svn: 265762
2016-04-08 00:48:30 +00:00
Duncan P. N. Exon Smith 267185ec92 ValueMapper: Treat LocalAsMetadata more like function-local Values
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).

This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy.  In particular:

  - MapValue returns nullptr or "metadata !{}" for missing locals in
    MetadataAsValue/LocalAsMetadata bridging pairs, depending on
    the RF_IgnoreMissingLocals flag.

  - MapValue doesn't memoize LocalAsMetadata-related results.

  - MapMetadata no longer deals with LocalAsMetadata or
    RF_IgnoreMissingLocals at all.  (This wasn't in r265631 at all, but
    I realized during testing it would make the patch simpler with no
    loss of generality.)

r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals.  This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:

    define void @foo(i32 %arg) {
      call void @llvm.some.intrinsic(metadata i32 %x)
      %x = add i32 1, i32 %arg
    }

If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.

I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.

As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset.  Otherwise, match
the behaviour of r265631.

Original commit message:

    ValueMapper: Make LocalAsMetadata match function-local Values

    Start treating LocalAsMetadata similarly to function-local members of
    the Value hierarchy in MapValue and MapMetadata.

      - Don't memoize them.
      - Return nullptr if they are missing.

    This also cleans up ConstantAsMetadata to stop listening to the
    RF_IgnoreMissingLocals flag.

llvm-svn: 265759
2016-04-08 00:33:44 +00:00
Ulrich Weigand 6e6966460a [GVN] Fix handling of sub-byte types in big-endian mode
When GVN wants to re-interpret an already available value in a smaller
type, it needs to right-shift the value on big-endian systems to ensure
the correct bytes are accessed.  The shift value is the difference of
the sizes of the two types.

This is correct as long as both types occupy multiples of full bytes.
However, when one of them is a sub-byte type like i1, this no longer
holds true: we still need to shift, but only to access the correct
*byte*.  Accessing bits within the byte requires no shift in either
endianness; e.g. an i1 resides in the least-significant bit of its
containing byte on both big- and little-endian systems.

Therefore, the appropriate shift value to be used is the difference of
the *storage* sizes of the two types.  This is already handled correctly
in one place where such a shift takes place (GetStoreValueForLoad), but
is incorrect in two other places: GetLoadValueForLoad and
CoerceAvailableValueToLoadType.

This patch changes both places to use the storage size as well.

Differential Revision: http://reviews.llvm.org/D18662

llvm-svn: 265684
2016-04-07 15:45:02 +00:00
Michael Zolotukhin 97567e141e [LoopUnroll] Fix the way we update DT after complete unrolling.
Updating dominators for exit-blocks of the unrolled loops is not enough,
as shown in PR27157. The proper way is to update dominators for all
dominance-children of original loop blocks.

llvm-svn: 265605
2016-04-06 21:47:12 +00:00
Sanjay Patel 6cc488004d regenerate checks
llvm-svn: 265591
2016-04-06 19:58:06 +00:00
Fiona Glaser 16332ba861 LoopUnroll: only allow non-modulo Partial unrolling when Runtime=true
Patch by Evgeny Stupachenko <evstupac@gmail.com>.

llvm-svn: 265558
2016-04-06 16:43:45 +00:00
Silviu Baranga a393baf1fd Revert r265535 until we know how we can fix the bots
llvm-svn: 265541
2016-04-06 14:06:32 +00:00
Silviu Baranga 72b4a4a330 [SCEV] Introduce a guarded backedge taken count and use it in LAA and LV
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.

However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.

In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.

We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.

Reviewers: anemet, mzolotukhin, hfinkel, sanjoy

Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D17201

llvm-svn: 265535
2016-04-06 13:18:26 +00:00
David Majnemer 12fd50410d [SLPVectorizer] Vectorizing the libm sqrt to llvm's sqrt intrinsic requires nnan
To quote the langref "Unlike sqrt in libm, however, llvm.sqrt has
undefined behavior for negative numbers other than -0.0 (which allows
for better optimization, because there is no need to worry about errno
being set). llvm.sqrt(-0.0) is defined to return -0.0 like IEEE sqrt."

This means that it's unsafe to replace sqrt with llvm.sqrt unless the
call is annotated with nnan.
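
A sketch of the guarded rewrite (flag placement illustrative):

```
declare double @sqrt(double)

define double @root(double %x) {
  ; Without nnan this must stay a libcall: sqrt(-1.0) is a NaN in libm
  ; but undefined behavior for @llvm.sqrt.
  %r = call nnan double @sqrt(double %x) ; may become @llvm.sqrt.f64
  ret double %r
}
```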

Thanks to Hal Finkel for pointing this out!

llvm-svn: 265521
2016-04-06 07:04:53 +00:00
Duncan P. N. Exon Smith 6f2e37429a ValueMapper: Fix delayed blockaddress handling after r265273
r265273 added Mapper::mapBlockAddress, which delays mapping a
blockaddress value until the function has a body.  The condition was
backwards, and should be checking Function::empty instead of
GlobalValue::isDeclaration.

llvm-svn: 265508
2016-04-06 02:25:12 +00:00
David Majnemer 25d03dbcde [SLPVectorizer] Vectorize libcalls of sqrt
We didn't realize that we could transform the libcall into a vectorized
intrinsic.

llvm-svn: 265493
2016-04-06 00:14:59 +00:00
Davide Italiano ea04026c13 [DebugInfo] Fix tests so that each subprogram belongs to a CU.
llvm-svn: 265490
2016-04-05 23:37:08 +00:00
Sanjoy Das 49e974b33b [RS4GC] Better codegen for deoptimize calls
Don't emit a gc.result for a statepoint lowered from
@llvm.experimental.deoptimize since the call into __llvm_deoptimize is
effectively noreturn.  Instead follow the corresponding gc.statepoint
with an "unreachable".

llvm-svn: 265485
2016-04-05 23:18:35 +00:00
Sanjay Patel fd16e62d56 add tests to show missing optimization from D18230
llvm-svn: 265431
2016-04-05 18:09:36 +00:00
Sanjay Patel 6ecf1b6760 [InstCombine] regenerate checks
utils/update_test_checks.py was improved with:
http://reviews.llvm.org/rL265414
to CHECK-NEXT the first line of the IR function. This ensures that nothing bad
has happened before that.

llvm-svn: 265417
2016-04-05 17:24:54 +00:00
Chuang-Yu Cheng d3fb38cae5 Don't delete empty preheaders in CodeGenPrepare if it would create a critical edge
Presently, CodeGenPrepare deletes all nearly empty (only phi and branch)
basic blocks. This pass can delete loop preheaders which frequently creates
critical edges. A preheader can be a convenient place to spill registers to
the stack. If the entrance to a loop body is a critical edge, then spills
may occur in the loop body rather than immediately before it. This patch
protects loop preheaders from deletion in CodeGenPrepare even if they are
nearly empty.

Since the patch alters the CFG, it affects a large number of test cases.
In most cases, the changes are merely cosmetic (basic blocks have different
names or instruction orders change slightly). I am somewhat concerned about
the test/CodeGen/Mips/brdelayslot.ll test case. If the loop preheader is not
deleted, then the MIPS backend does not take advantage of a branch delay
slot. Consequently, I would like some close review by a MIPS expert.

The patch also partially subsumes D16893 from George Burgess IV. George
correctly notes that CodeGenPrepare does not actually preserve the dominator
tree. I think the dominator tree was usually not valid when CodeGenPrepare
ran, but I am using LoopInfo to mark preheaders, so the dominator tree is
now always valid before CodeGenPrepare.

Author: Tom Jablin (tjablin)
Reviewers: hfinkel george.burgess.iv vkalintiris dsanders kbarton cycheng

http://reviews.llvm.org/D16984

llvm-svn: 265397
2016-04-05 14:06:20 +00:00
David L Kreitzer 188de5ae69 Adds the ability to use an epilog remainder loop during loop unrolling and makes
this the default behavior.

Patch by Evgeny Stupachenko (evstupac@gmail.com).

Differential Revision: http://reviews.llvm.org/D18158

llvm-svn: 265388
2016-04-05 12:19:35 +00:00
Zia Ansari a82a58a4e5 Enable unrolling for constant-bound loops when the trip count is not a multiple of the unroll factor, reducing it to the maximum power-of-2 that satisfies the threshold limit.
Commit for Evgeny Stupachenko (evstupac@gmail.com)

Differential Revision: http://reviews.llvm.org/D18290

llvm-svn: 265337
2016-04-04 19:24:46 +00:00
Teresa Johnson 03e93bab7f Fix bot errors from r265327, exact GUID which depends on path
E.g. http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/21919

The source file path name affects the exact GUID, so don't try to match
its exact value.

llvm-svn: 265334
2016-04-04 19:11:00 +00:00
Betul Buyukkurt 18131c4216 [PGO] Avoid instrumenting direct callee's at value sites.
Direct callees that are cast to other function prototypes
show up in the Call/Invoke instructions as ConstantExprs.
Currently llvm::CallSite's getCalledFunction() fails
to return the callees in such expressions as direct calls.
Value profiling should avoid instrumenting such cases. Mostly NFC.

llvm-svn: 265330
2016-04-04 18:56:36 +00:00
Teresa Johnson 916495d894 [ThinLTO] Add option to dump value name to GUID mapping
Summary:
Useful for debugging since we lose this correlation after the permodule
summary/VST is read and until we later materialize source modules in the
function importer.

Reviewers: joker.eph

Subscribers: llvm-commits, joker.eph

Differential Revision: http://reviews.llvm.org/D18555

llvm-svn: 265327
2016-04-04 18:52:58 +00:00
Peter Zotov 0b6d7bc682 [CodeGenPrepare] Avoid sinking soft-FP comparisons
Sinking comparisons in CGP can undo the job of hoisting them done
earlier by LICM, and soft-FP makes this an expensive mistake.

A common pattern that produces floating point comparisons uniform
over a loop is an explicit check for division by zero. If the divisor
is hoisted out of the loop, the comparison can also be, but hoisting
the function that unwinds is never legal, since it may cause side
effects in the loop body prior to the unwinding to not be executed.
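
A sketch of the pattern (control flow simplified, names invented): the une compare is uniform over the loop, and on a soft-FP target sinking it back into the loop turns one libcall into one per iteration.

```
define float @div_loop(float %d, float %x) {
entry:
  ; Hoisted by LICM; CodeGenPrepare should not sink it back.
  %nonzero = fcmp une float %d, 0.0
  br label %loop

loop:
  %acc = phi float [ %x, %entry ], [ %acc.next, %skip ]
  br i1 %nonzero, label %do.div, label %skip

do.div:
  %q = fdiv float %acc, %d
  br label %skip

skip:
  %acc.next = phi float [ %q, %do.div ], [ %acc, %loop ]
  %again = fcmp ogt float %acc.next, 1.0
  br i1 %again, label %loop, label %exit

exit:
  ret float %acc.next
}
```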

Differential Revision: http://reviews.llvm.org/D18744

llvm-svn: 265264
2016-04-03 16:36:17 +00:00
Peter Zotov 0218d0f383 Mark some FP intrinsics as safe to speculatively execute
Floating point intrinsics in LLVM are generally not speculatively
executed, since most of them are defined to behave the same as libm
functions, which set errno.

However, the only error that can happen when executing ceil, floor,
nearbyint, rint and round libm functions per POSIX.1-2001 is -ERANGE,
and that requires the maximum value of the exponent to be smaller
than the number of mantissa bits, which is not the case with any of
the floating point types supported by LLVM.

The trunc and copysign functions never set errno per POSIX.1-2001.

Differential Revision: http://reviews.llvm.org/D18643

llvm-svn: 265262
2016-04-03 12:30:46 +00:00
David Majnemer fe3f9d1721 [InstCombine] Don't sink an instr after a catchswitch
A catchswitch is a terminator, instructions cannot be inserted after it.

llvm-svn: 265158
2016-04-01 17:28:17 +00:00
David Majnemer 6f1f85f0e1 [SLPVectorizer] Don't insert an extractelement before a catchswitch
A catchswitch cannot be preceded by another instruction in the same
basic block (other than a PHI node).

Instead, insert the extract element right after the materialization of
the vectorized value.  This isn't optimal but is a reasonable compromise
given the constraints of WinEH.

This fixes PR27163.

llvm-svn: 265157
2016-04-01 17:28:15 +00:00
Vedant Kumar 7784171721 [PGOProfile] Rename a test to make it more reusable, NFC
llvm-svn: 265144
2016-04-01 15:45:33 +00:00
Sanjoy Das f83ab6de56 Don't insert stackrestore on deoptimizing returns
They're not necessary (since the stack pointer is trivially restored on
return), and the way LLVM inserts the stackrestore calls breaks the
IR (we get a stackrestore between the deoptimize call and the return).

llvm-svn: 265101
2016-04-01 02:51:30 +00:00
Sanjoy Das 18b92968ea Don't insert lifetime end markers on deoptimizing returns
They're not necessary (since the lifetime of the alloca is trivially
over due to the return), and the way LLVM inserts the lifetime.end
markers breaks the IR (we get a lifetime end marker between the
deoptimize call and the return).

llvm-svn: 265100
2016-04-01 02:51:26 +00:00
Adrian Prantl b8089516a5 testcase gardening: update the emissionKind enum to the new syntax. (NFC)
llvm-svn: 265081
2016-04-01 00:16:49 +00:00
Adrian Prantl b939a25707 Move the DebugEmissionKind enum from DIBuilder into DICompileUnit.
This mostly cosmetic patch moves the DebugEmissionKind enum from DIBuilder
into DICompileUnit. DIBuilder is not the right place for this enum to live
in — a metadata consumer should not have to include DIBuilder.h.
I also added a Verifier check that checks that the emission kind of a
DICompileUnit is actually legal.

http://reviews.llvm.org/D18612
<rdar://problem/25427165>

llvm-svn: 265077
2016-03-31 23:56:58 +00:00
David Majnemer ae272d718e [NVPTX] Infer __nvvm_reflect as nounwind, readnone
This patch simply mirrors the attributes we give to @llvm.nvvm.reflect
to the __nvvm_reflect libdevice call.  This shaves about 30% of the code
in libdevice away because of CSE opportunities.  It also helps us
figure out that libdevice implementations of transcendental functions
don't have side-effects.

llvm-svn: 265060
2016-03-31 21:29:57 +00:00
Sanjoy Das 56df0ec610 [InstCombine] Fix incorrect rule from rL236202
The rule for SMIN introduced in rL236202 doesn't work as advertised: the
check for Pred == ICmpInst::ICMP_SGT was missing.

llvm-svn: 264996
2016-03-31 05:14:34 +00:00
Davide Italiano 936a2b09f3 [DebugInfo] Subprograms should belong to a CU.
Start fixing tests accordingly. There are still
about 35 failures before we can enable this check
in the IR verifier.

llvm-svn: 264990
2016-03-31 03:40:07 +00:00
Sanjoy Das 021de058df Introduce a @llvm.experimental.guard intrinsic
Summary:
As discussed on llvm-dev[1].

This change adds the basic boilerplate code around having this intrinsic
in LLVM:

 - Changes in Intrinsics.td, and the IR Verifier
 - A lowering pass to lower @llvm.experimental.guard to normal
   control flow
 - Inliner support

[1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
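
A minimal sketch of the intrinsic in use (deopt state invented):

```
declare void @llvm.experimental.guard(i1, ...)

define void @checked(i32 %x) {
  %in.range = icmp slt i32 %x, 100
  ; Deoptimizes through the "deopt" state when %in.range is false;
  ; the lowering pass expands this into explicit control flow.
  call void (i1, ...) @llvm.experimental.guard(i1 %in.range) [ "deopt"(i32 %x) ]
  ret void
}
```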

Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo

Subscribers: mcrosier, llvm-commits

Differential Revision: http://reviews.llvm.org/D18527

llvm-svn: 264976
2016-03-31 00:18:46 +00:00
Matt Arsenault 2fe4fbc184 AMDGPU: Add frexp_exp intrinsic
llvm-svn: 264944
2016-03-30 22:28:52 +00:00
Matt Arsenault 5cd4f8f89f AMDGPU: Constant folding for frexp_mant
llvm-svn: 264943
2016-03-30 22:28:26 +00:00
David Majnemer 5d518386b6 [IndVarSimplify] Don't insert after a catchswitch
Widening a PHI requires us to insert a trunc.
The logical place for this trunc is in the same BB as the PHI.
This is not possible if the BB is terminated by a catchswitch.

This fixes PR27133.

llvm-svn: 264926
2016-03-30 21:12:06 +00:00
Justin Bogner a5a6378700 test: Remove a test for a transform that hasn't existed in 5 years.
The TailDup transform was removed in r138841 in 2011, along with most
of the tests for it. This test, however, was missed. Probably because
it had already been XFAIL'd for 3 years at that point (since r52243!)
and continued to fail when the opt flag for -tailduplicate stopped
being valid.

llvm-svn: 264916
2016-03-30 20:36:07 +00:00
Hal Finkel 2e0ff2b244 [LoopVectorize] Don't vectorize loops when everything will be scalarized
This change prevents the loop vectorizer from vectorizing when all of the vector
types it generates will be scalarized. I've run into this problem on the PPC's QPX
vector ISA, which only holds floating-point vector types. The loop vectorizer
will, however, happily vectorize loops with purely integer computation. Here's
an example:

  LV: The Smallest and Widest types: 32 / 32 bits.
  LV: The Widest register is: 256 bits.
  LV: Found an estimated cost of 0 for VF 1 For instruction:   %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
  LV: Found an estimated cost of 0 for VF 1 For instruction:   %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
  LV: Found an estimated cost of 0 for VF 1 For instruction:   %2 = trunc i64 %indvars.iv25 to i32
  LV: Found an estimated cost of 1 for VF 1 For instruction:   store i32 %2, i32* %arrayidx, align 4
  LV: Found an estimated cost of 1 for VF 1 For instruction:   %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
  LV: Found an estimated cost of 1 for VF 1 For instruction:   %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
  LV: Found an estimated cost of 0 for VF 1 For instruction:   br i1 %exitcond27, label %for.cond.cleanup, label %for.body
  LV: Scalar loop costs: 3.
  LV: Found an estimated cost of 0 for VF 2 For instruction:   %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
  LV: Found an estimated cost of 0 for VF 2 For instruction:   %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
  LV: Found an estimated cost of 0 for VF 2 For instruction:   %2 = trunc i64 %indvars.iv25 to i32
  LV: Found an estimated cost of 2 for VF 2 For instruction:   store i32 %2, i32* %arrayidx, align 4
  LV: Found an estimated cost of 1 for VF 2 For instruction:   %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
  LV: Found an estimated cost of 1 for VF 2 For instruction:   %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
  LV: Found an estimated cost of 0 for VF 2 For instruction:   br i1 %exitcond27, label %for.cond.cleanup, label %for.body
  LV: Vector loop of width 2 costs: 2.
  LV: Found an estimated cost of 0 for VF 4 For instruction:   %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
  LV: Found an estimated cost of 0 for VF 4 For instruction:   %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
  LV: Found an estimated cost of 0 for VF 4 For instruction:   %2 = trunc i64 %indvars.iv25 to i32
  LV: Found an estimated cost of 4 for VF 4 For instruction:   store i32 %2, i32* %arrayidx, align 4
  LV: Found an estimated cost of 1 for VF 4 For instruction:   %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
  LV: Found an estimated cost of 1 for VF 4 For instruction:   %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
  LV: Found an estimated cost of 0 for VF 4 For instruction:   br i1 %exitcond27, label %for.cond.cleanup, label %for.body
  LV: Vector loop of width 4 costs: 1.
  ...
  LV: Selecting VF: 8.
  LV: The target has 32 registers
  LV(REG): Calculating max register usage:
  LV(REG): At #0 Interval # 0
  LV(REG): At #1 Interval # 1
  LV(REG): At #2 Interval # 2
  LV(REG): At #4 Interval # 1
  LV(REG): At #5 Interval # 1
  LV(REG): VF = 8

The problem is that the cost model here is not wrong, exactly. Since all of
these operations are scalarized, their cost (aside from the uniform ones) are
indeed VF*(scalar cost), just as the model suggests. In fact, the larger the VF
picked, the lower the relative overhead from the loop itself (and the
induction-variable update and check), and so in a sense, picking the largest VF
here is the right thing to do.

The problem is that vectorizing like this, where all of the vectors will be
scalarized in the backend, isn't really vectorizing, but rather interleaving.
By itself, this would be okay, but then the vectorizer itself also interleaves,
and that's where the problem manifests itself. There aren't actually enough
scalar registers to support the normal interleave factor multiplied by a factor
of VF (8 in this example). In other words, the problem with this is that our
register-pressure heuristic does not account for scalarization.

While we might want to improve our register-pressure heuristic, I don't think
this is the right motivating case for that work. Here we have a more-basic
problem: The job of the vectorizer is to vectorize things (interleaving aside),
and if the IR it generates won't generate any actual vector code, then
something is wrong. Thus, if every type looks like it will be scalarized (i.e.
will be split into VF or more parts), then don't consider that VF.

This is not a problem specific to PPC/QPX, however. The problem comes up under
SSE on x86 too, and as such, this change fixes PR26837 too. I've added Sanjay's
reduced test case from PR26837 to this commit.

Differential Revision: http://reviews.llvm.org/D18537

llvm-svn: 264904
2016-03-30 19:37:08 +00:00
James Molloy 8e46cd05a1 [VectorUtils] Don't try and truncate PHIs to a smaller bitwidth
We already try not to truncate PHIs in computeMinimalBitwidths. LoopVectorize can't handle it and we really don't need to, because both induction and reduction PHIs are truncated by other means.

However, we weren't bailing out in all the places we should have, and we ended up by returning a PHI to be truncated, which has caused PR27018.

This fixes PR27018.

llvm-svn: 264852
2016-03-30 10:11:43 +00:00
George Burgess IV 49cad7d70b [MemorySSA] Make the visitor more careful with calls.
Prior to this patch, the MemorySSA caching visitor would cache all
calls that it visited. When paired with phi optimization, this can be
problematic. Consider:

define void @foo() {
  ; 1 = MemoryDef(liveOnEntry)
  call void @clobberFunction()
  br i1 undef, label %if.end, label %if.then

if.then:
  ; MemoryUse(??)
  call void @readOnlyFunction()
  ; 2 = MemoryDef(1)
  call void @clobberFunction()
  br label %if.end

if.end:
  ; 3 = MemoryPhi(...)
  ; MemoryUse(?)
  call void @readOnlyFunction()
  ret void
}

When optimizing MemoryUse(?), we visit defs 1 and 2, so we note to
cache them later. We ultimately end up not being able to optimize
past the Phi, so we set MemoryUse(?) to point to the Phi. We then
cache the clobbering call for def 1 to be the Phi.

This commit changes this behavior so that we wipe out any calls
added to VisitedCalls while visiting the defs of a phi we couldn't
optimize.

Aside: With this patch, we now can bootstrap clang/LLVM without a
single MemorySSA verifier failure. Woohoo. :)

llvm-svn: 264820
2016-03-30 03:12:08 +00:00
Xinliang David Li a55fd1a9dc [PGO] Handle invoke inst in IR based icall instrumentation
Differential Revision: http://reviews.llvm.org/D18580

llvm-svn: 264818
2016-03-30 02:16:07 +00:00
George Burgess IV 82ee942a8c [MemorySSA] Change how the walker views/walks visited phis.
This patch teaches the caching MemorySSA walker a few things:

1. Not to walk Phis we've walked before. It seems that we tried to do
   this before, but it didn't work so well in cases like:

define void @foo() {
  %1 = alloca i8
  %2 = alloca i8
  br label %begin

begin:
  ; 3 = MemoryPhi({%0,liveOnEntry},{%end,2})
  ; 1 = MemoryDef(3)
  store i8 0, i8* %2
  br label %end

end:
  ; MemoryUse(?)
  load i8, i8* %1
  ; 2 = MemoryDef(1)
  store i8 0, i8* %2
  br label %begin
}

Because we wouldn't put Phis in Q.Visited until we tried to visit them.
So, when trying to optimize MemoryUse(?):
  - We would visit 3 above
    - ...Which would make us put {%0,liveOnEntry} in Q.Visited
    - ...Which would make us visit {%0,liveOnEntry}
    - ...Which would make us put {%end,2} in Q.Visited
    - ...Which would make us visit {%end,2}
      - ...Which would make us visit 3
        - ...Which would realize we've already visited everything in 3
        - ...Which would make us conservatively return 3.

In the added test-case, (@looped_visitedonlyonce) this behavior would
cause us to give incorrect results. Specifically, we'd visit 4 twice
in the same query, but on the second visit, we'd skip while.cond because
it had been visited, visit if.then/if.then2, and cache "1" as the
clobbering def on the way back.

2. If we try to walk the defs of a {Phi,MemLoc} and see it has been
   visited before, just hand back the Phi we're trying to optimize.

I promise this isn't as terrible as it seems. :)

We now insert {Phi,MemLoc} pairs just before walking the Phi's upward
defs. So, we check the cache for the {Phi,MemLoc} pair before checking
if we've already walked the Phi.

The {Phi,MemLoc} pair is (almost?) always guaranteed to have a cache
entry if we've already fully walked it, because we cache as we go.

So, if the {Phi,MemLoc} pair isn't in cache, either:
 (a) we must be in the process of visiting it (in which case, we can't
     give a better answer in a cache-as-we-go DFS walker)

 (b) we visited it, but didn't cache it on the way back (...which seems
     to require `ModifyingAccess` to not dominate `StartingAccess`,
     so I'm 99% sure that would be an error. If it's not an error, I
     haven't been able to get it to happen locally, so I suspect it's
     rare.)

- - - - -

As a consequence of this change, we no longer skip upward defs of phis,
so we can kill the `VisitedOnlyOne` check. This gives us better accuracy
than we had before, at the cost of potentially doing a bit more work
when we have a loop.

llvm-svn: 264814
2016-03-30 00:26:26 +00:00
Adam Nemet 1428d41f9a [LoopDataPrefetch] Centralize the tuning cl::opts under the pass
This is effectively NFC, minus the renaming of the options
(-cyclone-prefetch-distance -> -prefetch-distance).

The change was requested by Tim in D17943.

llvm-svn: 264806
2016-03-29 23:45:52 +00:00
Duncan P. N. Exon Smith e8eb94a9a5 ADCE: Remove debug info intrinsics in dead scopes
During ADCE, track which debug info scopes still have live references
from the code, and delete debug info intrinsics for the dead ones.

These intrinsics describe the locations of variables (in registers or
stack slots).  If there's no code left corresponding to a variable's
scope, then there's no way to reference the variable in the debugger and
it doesn't matter what its value is.

I added a DEBUG printout for when the described location is in an SSA
register, in case it helps someone trying to track down why locations
get lost.
However, we still delete these; the scope itself isn't attached to any
real code, so the ship has already sailed.

llvm-svn: 264800
2016-03-29 22:57:12 +00:00
Adrian Prantl 4a09777b37 Upgrade some wildly anachronistic debug info in testcases.
llvm-svn: 264797
2016-03-29 22:34:30 +00:00
Nirav Dave 2aab7f4358 Add support for no-jump-tables
Add a soft function attribute controlling the generation of jump tables
in CodeGen, as an initial step towards clang support for GCC's
-fno-jump-tables option.
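
A minimal IR sketch (hypothetical, not taken from the patch) of a
function carrying the new soft attribute, whose switch would otherwise
be a jump-table candidate:

define i32 @dispatch(i32 %x) #0 {
entry:
  switch i32 %x, label %default [
    i32 0, label %a
    i32 1, label %b
    i32 2, label %c
    i32 3, label %d
  ]
a:
  ret i32 10
b:
  ret i32 20
c:
  ret i32 30
d:
  ret i32 40
default:
  ret i32 0
}

attributes #0 = { "no-jump-tables"="true" }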

Reviewers: hans, echristo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D18321

llvm-svn: 264756
2016-03-29 17:46:23 +00:00
Elena Demikhovsky 0053eb9479 fixed typo - CHECK-LABEL
llvm-svn: 264702
2016-03-29 06:49:38 +00:00
Hyojin Sung 4673f10568 [SimplifyCFG] Prevent passes from destroying canonical loop structure, especially for nested loops
When eliminating or merging almost-empty basic blocks, the existence of non-trivial PHI nodes
is currently used to recognize potential loops of which the block is the header, and to keep the block.
However, the current algorithm fails if the loop's exit condition is evaluated only with volatile
values, and hence there are no PHI nodes in the header. Especially when such a loop is the outer loop of a
nested loop, the loops are collapsed into a single loop, which prevents later optimizations from being
applied (e.g., transforming nested loops into simplified forms and loop vectorization).

The patch augments the existing PHI-node-based check with a pre-test of whether the BB actually
belongs to the set of loop headers, and does not eliminate the block if so.
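
A minimal sketch (hypothetical, not from the patch) of the shape that
used to break: the outer header has no PHI nodes because the exit
condition comes from a volatile load, yet the block must survive as a
loop header.

define void @nested(i32* %flag) {
entry:
  br label %outer.header

outer.header:                                     ; no PHIs here
  br label %inner.header

inner.header:
  %i = phi i32 [ 0, %outer.header ], [ %i.next, %inner.header ]
  %i.next = add i32 %i, 1
  %inner.done = icmp eq i32 %i.next, 16
  br i1 %inner.done, label %outer.latch, label %inner.header

outer.latch:
  %v = load volatile i32, i32* %flag
  %outer.done = icmp eq i32 %v, 0
  br i1 %outer.done, label %exit, label %outer.header

exit:
  ret void
}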

llvm-svn: 264697
2016-03-29 04:08:57 +00:00
Sanjay Patel 3e9664fd60 regenerate checks
llvm-svn: 264677
2016-03-28 22:12:21 +00:00
Adrian Prantl faebbb053d Add an IR Verifier check for orphaned DICompileUnits.
A DICompileUnit that is not listed in llvm.dbg.cu will cause assertion
failures and/or crashes in the backend. The Verifier should reject this.

rdar://problem/25369499

llvm-svn: 264657
2016-03-28 21:06:26 +00:00
Adam Nemet 64fa75391c [LVers] Change CHECK_LABEL to CHECK-LABEL (underscore->dash)
Thanks to Sanjoy for catching this.

llvm-svn: 264656
2016-03-28 21:04:13 +00:00
Reid Kleckner ba85781f58 Revert "[SimplifyCFG] Prevent passes from destroying canonical loop structure, especially for nested loops"
This reverts commit r264596.

It does not compile.

llvm-svn: 264604
2016-03-28 18:07:40 +00:00
Hyojin Sung 0ada5b0d14 [SimplifyCFG] Prevent passes from destroying canonical loop structure, especially for nested loops
When eliminating or merging almost-empty basic blocks, the existence of non-trivial PHI nodes
is currently used to recognize potential loops of which the block is the header, and to keep the block.
However, the current algorithm fails if the loop's exit condition is evaluated only with volatile
values, and hence there are no PHI nodes in the header. Especially when such a loop is the outer loop of a
nested loop, the loops are collapsed into a single loop, which prevents later optimizations from being
applied (e.g., transforming nested loops into simplified forms and loop vectorization).

The patch augments the existing PHI-node-based check with a pre-test of whether the BB actually
belongs to the set of loop headers, and does not eliminate the block if so.

llvm-svn: 264596
2016-03-28 17:22:25 +00:00
Davide Italiano 6db1dcbf6b [SimplifyLibCalls] Transform printf("%s", "a") -> putchar('a').
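
A before/after sketch (hypothetical IR, assuming the usual C library
semantics for printf and putchar):

@fmt = private constant [3 x i8] c"%s\00"
@str = private constant [2 x i8] c"a\00"

declare i32 @printf(i8*, ...)

define void @before() {
  %f = getelementptr [3 x i8], [3 x i8]* @fmt, i32 0, i32 0
  %s = getelementptr [2 x i8], [2 x i8]* @str, i32 0, i32 0
  call i32 (i8*, ...) @printf(i8* %f, i8* %s)
  ret void
}

; ...becomes:

declare i32 @putchar(i32)

define void @after() {
  call i32 @putchar(i32 97)    ; 'a'
  ret void
}
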
llvm-svn: 264588
2016-03-28 15:54:01 +00:00
NAKAMURA Takumi a51d6ea990 llvm/test/Transforms/FunctionImport/funcimport.ll: -stats REQUIRES +Asserts.
llvm-svn: 264561
2016-03-28 02:14:49 +00:00
Teresa Johnson 569af59b14 Use DAG check to try to appease bot
Try to appease
http://bb.pgr.jp/builders/cmake-llvm-x86_64-linux/builds/34772. This was
the only check that didn't use DAG, and it wasn't being matched.

llvm-svn: 264538
2016-03-27 15:36:43 +00:00
Teresa Johnson d29478f70e [ThinLTO] Add optional import message and statistics
Summary:
Add a statistic to count the number of imported functions. Also, add a
new -print-imports option to emit a trace of imported functions that
works even in an NDEBUG build.

Note that emitOptimizationRemark does not work for the above printing as
it expects a Function object and DebugLoc, neither of which we have
with summary-based importing.

This is part 2 of D18487, the first part was committed separately as
r264536.

Reviewers: joker.eph

Subscribers: llvm-commits, joker.eph

Differential Revision: http://reviews.llvm.org/D18487

llvm-svn: 264537
2016-03-27 15:27:30 +00:00
Michael Kruse ff379b69b2 [Verifier] Reject PHIs using defs from own block.
Reject the following IR as malformed (assuming that %entry, %next are
not in a loop):

    next:
      %y = phi i32 [ 0, %entry ]
      %x = phi i32 [ %y, %entry ]

Such PHI nodes came up in PR26718. While there was no consensus on
whether or not this is valid IR, most opinions on that bug and in a
discussion on the llvm-dev mailing list tended towards a
"strict interpretation" (term by Joseph Tremoulet) of PHI node uses.
Also, the language reference explicitly states that "the use of each
incoming value is deemed to occur on the edge from the corresponding
predecessor block to the current block" and
`DominatorTree::dominates(Instruction*, Use&)` uses this definition as
well.

For the code mentioned in PR15384, clang does not compile to such PHIs
(anymore?). The test case still hangs when replacing `%tmp6` with `%tmp`
in revisions before r176366 (where PR15384 has been fixed). The
occurrence of %tmp6 therefore was probably unintentional. Its value is
not used except in other PHIs.

Reviewers: majnemer, reames, JosephTremoulet, bkramer, grosser, jdoerfert, kparzysz, sanjoy

Differential Revision: http://reviews.llvm.org/D18443

llvm-svn: 264528
2016-03-26 23:32:57 +00:00
Sanjay Patel 796db35f62 [SimplifyCFG] propagate branch metadata when creating select (PR26636)
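
A sketch (hypothetical IR) of the metadata being preserved: when a
conditional branch is turned into a select, the branch_weights move
onto the select.

define i32 @pick(i1 %c, i32 %a, i32 %b) {
entry:
  br i1 %c, label %then, label %end, !prof !0

then:
  br label %end

end:
  %r = phi i32 [ %a, %then ], [ %b, %entry ]
  ret i32 %r
}

; ...becomes roughly:
;   %r = select i1 %c, i32 %a, i32 %b, !prof !0
;   ret i32 %r

!0 = !{!"branch_weights", i32 97, i32 3}
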
llvm-svn: 264527
2016-03-26 23:30:50 +00:00
Sanjay Patel 342f7c7e10 minimize test cases
These are tests for store transforms. 
The loads, adds, and geps were irrelevant.

llvm-svn: 264526
2016-03-26 23:09:25 +00:00
Mehdi Amini 01e321306b ThinLTO: use the callgraph from the combined index to drive the FunctionImporter
Summary:
Now that the summary contains the full reference/call graph, we can
replace the existing function importer, which loads and inspects the IR
to iteratively walk the call graph, with a traversal based purely on the
summary information. This decouples the actual importing decision from
any IR manipulation.

Reviewers: tejohnson

Subscribers: llvm-commits, joker.eph

Differential Revision: http://reviews.llvm.org/D18343

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 264503
2016-03-26 05:40:34 +00:00
Philip Reames b5681138e4 Allow value forwarding past release fences in GVN
A release fence acts as a publication barrier for stores within the current thread to become visible to other threads which might observe the release fence. It does not require the current thread to observe stores performed on other threads. As a result, we can allow store-load and load-load forwarding across a release fence.

We choose to be much more conservative about stores. In theory, nothing prevents us from shifting a store from after a release fence to before it, and then eliminating the preceding (previously fenced) store. Doing this without actually moving the second store is likely also legal, but we chose to be conservative at this time.

The LangRef indicates that only atomic loads and stores are affected by fences. This patch chooses to be far more conservative than that.

This is the GVN companion to http://reviews.llvm.org/D11434 which applied the same logic in EarlyCSE and has been baking in tree for a while now.
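
A minimal sketch (hypothetical IR) of the store-load forwarding now
permitted:

define i32 @forward(i32* %p) {
  store i32 42, i32* %p
  fence release
  %v = load i32, i32* %p       ; may now be forwarded to 42
  ret i32 %v
}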

Differential Revision: http://reviews.llvm.org/D11436

llvm-svn: 264472
2016-03-25 22:40:35 +00:00
Sanjay Patel 69632447b1 [InstSimplify] regenerate checks using a script
I didn't notice any significant changes in the actual checks here;
all of these tests already used FileCheck, so a script can batch
update them in one shot.

This commit is just to show the value of automating this process:
we get uniform formatting as opposed to a mish-mash of check
structure that changes based on individual preferences and the current
fashion. This makes it simpler to update the tests when we find a bug
or make an enhancement.

llvm-svn: 264457
2016-03-25 20:12:25 +00:00
Sanjoy Das d4c783335b [RS4GC] Lower calls to @llvm.experimental.deoptimize
This changes RS4GC to lower calls to ``@llvm.experimental.deoptimize``
to gc.statepoints wrapping ``__llvm_deoptimize``, and changes
``callsGCLeafFunction`` to recognize ``@llvm.experimental.deoptimize``
as a non GC leaf function.

I've had to hard-code the ``"__llvm_deoptimize"`` name in
RewriteStatepointsForGC, since ``TargetLibraryInfo`` is available only
during codegen. This isn't without precedent in the codebase, so I'm
not overly concerned.
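
A sketch of the input side (hypothetical IR; the exact gc.statepoint
encoding that replaces the call is elided):

declare i32 @llvm.experimental.deoptimize.i32(...)

define i32 @bail(i32 %state) gc "statepoint-example" {
  %r = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 %state) ]
  ret i32 %r
}

After RS4GC, the call becomes a gc.statepoint whose call target is the
``__llvm_deoptimize`` symbol.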

llvm-svn: 264456
2016-03-25 20:12:13 +00:00
Sanjay Patel cd7d3ae7cc [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264438
2016-03-25 18:03:40 +00:00
Sanjay Patel d3d1179463 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264437
2016-03-25 18:03:17 +00:00
Sanjay Patel 721fec09b5 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264435
2016-03-25 18:03:01 +00:00
Sanjay Patel 1395cf0d3c [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264434
2016-03-25 18:02:14 +00:00
Sanjay Patel bfbac177d2 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264433
2016-03-25 18:01:55 +00:00
Sanjay Patel 08da4b7cd8 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264432
2016-03-25 18:01:37 +00:00
Sanjay Patel 8f22390137 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264431
2016-03-25 18:01:23 +00:00
Sanjay Patel 5270746978 [InstCombine] use FileCheck for better checking
(testing script for autogeneration of check lines)

llvm-svn: 264430
2016-03-25 18:01:04 +00:00
Sanjay Patel 246e7f7057 [InstCombine] consolidate regression tests of the ancients (2002)
Testing out the check-generator-script that's now in the utils folder.

llvm-svn: 264424
2016-03-25 17:16:32 +00:00
David L Kreitzer 8d441eb936 Enable non-power-of-2 #pragma unroll counts.
Patch by Evgeny Stupachenko.

Differential Revision: http://reviews.llvm.org/D18202

llvm-svn: 264407
2016-03-25 14:24:52 +00:00
Matt Arsenault 8c8fcb2585 AMDGPU: Cost model for basic integer operations
This resolves bug 21148 by preventing promotion to
i64 induction variables.

llvm-svn: 264376
2016-03-25 01:16:40 +00:00
David Majnemer e09d035dad [LoopStrengthReduce] Don't hoist into a catchswitch
We try to hoist the insertion point as high as possible to encourage
sharing.  However, we must be careful not to hoist into a catchswitch as
it is both an EHPad and a terminator.

llvm-svn: 264344
2016-03-24 21:40:22 +00:00
Adam Nemet 7aba60c853 [LLE] Check for mismatching types between the store and the load earlier
isDependenceDistanceOfOne asserts that the store and the load access
memory through the same type.  This function is also used by
removeDependencesFromMultipleStores so we need to make sure we filter
out mismatching types before reaching this point.

Now we do this when the initial candidates are gathered.

This is a refinement of the fix made in r262267.

Fixes PR27048.

llvm-svn: 264313
2016-03-24 17:59:26 +00:00
George Burgess IV 0e4898685f Fix bugs in the MemorySSA walker.
There are a few bugs in the walker that this patch addresses.
Primarily:
- Caching can break when we have multiple BBs without phis
- We weren't optimizing some phis properly
- Because of how the DFS iterator works, there were times where we
  wouldn't cache any results of our DFS

I left the test cases with FIXMEs in, because I'm not sure how much
effort it will take to get those to work (read: We'll probably
ultimately have to end up redoing the walker, or we'll have to come up
with some creative caching tricks), and more test coverage = better.

Differential Revision: http://reviews.llvm.org/D18065

llvm-svn: 264180
2016-03-23 18:31:55 +00:00
George Burgess IV d4febd1612 Keep CodeGenPrepare from preserving the domtree.
CGP modifies the domtree in some cases, so saying that it preserves the
domtree is a lie. We'll be able to selectively preserve it with the new
pass manager.

Differential Revision: http://reviews.llvm.org/D16893

llvm-svn: 264099
2016-03-22 21:25:08 +00:00
Matthias Braun 68bb2931cc Revert "Support arbitrary addrspace pointers in masked load/store intrinsics"
This commit broke LTO builds. Reverting it to unbreak the bots while the
issue is investigated. See also:

http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160321/341002.html

This reverts r263158

llvm-svn: 264088
2016-03-22 20:24:34 +00:00
Adam Nemet 8b47e0d0ea [LoopVersioning] Relax an assert for LCSSA PHIs
When you have multiple LCSSA (single-operand) PHIs that are converted
into two-operand PHIs due to versioning, only assert that the PHI
currently being converted has a single operand.  I.e. we don't want to
check PHIs that were converted earlier in the loop.

Fixes PR27023.

Thanks to Karl-Johan Karlsson for the minimized testcase!

llvm-svn: 264081
2016-03-22 18:38:15 +00:00
Zinovy Nis 07ac2bd4d0 [PATCH] Force LoopReroll to reset the loop trip count value after reroll.
It's a bug fix.
For rerolled loops, the ScalarEvolution (SE) trip count remains unchanged,
which leads to incorrect behavior in later passes. My patch simply resets
the SE info for the rerolled loop, forcing SE to re-evaluate it the next
time it is requested. I also added a verifier call in the existing test to
be sure no invalid SE data remains. Without my fix this test would fail
with -verify-scev.

Differential Revision: http://reviews.llvm.org/D18316

llvm-svn: 264051
2016-03-22 13:50:57 +00:00
George Burgess IV 3887a41725 [MemorySSA] Consider def-only BBs for live-in calculations.
If we have a BB with only MemoryDefs, live-in calculations will ignore
it. This means we get results like this:

define void @foo(i8* %p) {
  ; 1 = MemoryDef(liveOnEntry)
  store i8 0, i8* %p
  br i1 undef, label %if.then, label %if.end

if.then:
  ; 2 = MemoryDef(1)
  store i8 1, i8* %p
  br label %if.end

if.end:
  ; 3 = MemoryDef(1)
  store i8 2, i8* %p
  ret void
}

...When there should be a MemoryPhi in the `if.end` BB.

This patch fixes that behavior.
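
With the fix, the annotations in %if.end look roughly like (numbering
approximate):

if.end:
  ; 3 = MemoryPhi({entry,1},{if.then,2})
  ; 4 = MemoryDef(3)
  store i8 2, i8* %p
  ret void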

llvm-svn: 263991
2016-03-21 21:25:39 +00:00
Matt Arsenault 155dda9134 Implement constant folding for bitreverse
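
A minimal sketch (hypothetical IR) of the new fold:

declare i8 @llvm.bitreverse.i8(i8)

define i8 @fold() {
  %r = call i8 @llvm.bitreverse.i8(i8 1)
  ret i8 %r                    ; now folds to ret i8 -128
}
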
llvm-svn: 263945
2016-03-21 15:00:35 +00:00
Silviu Baranga f875e4fd92 [IndVars] Fix PR26974: make sure replaceCongruentIVs doesn't break LCSSA
Summary:
replaceCongruentIVs can break LCSSA when trying to replace IV increments,
since it tries to replace all uses of a phi node with another phi node
even though neither phi node is necessarily in the loop being processed.
This will cause an assert in IndVars.

To fix this, we add a check to make sure that the replacement maintains
LCSSA.

Reviewers: sanjoy

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D18266

llvm-svn: 263941
2016-03-21 12:44:29 +00:00
David Majnemer abae6b588b [SimplifyLibCalls] Only consider sinpi/cospi functions within the same function
Calls to sinpi/cospi can be replaced with a single sincospi call to
remove unnecessary computations.  However, we need to make sure that
the calls are within the same function!
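
A sketch (hypothetical IR; the fused callee name is target-dependent,
e.g. a __sincospif_stret-style routine on Darwin):

declare float @sinpif(float)
declare float @cospif(float)

define float @both(float %x) {
  %s = call float @sinpif(float %x)
  %c = call float @cospif(float %x)
  %r = fadd float %s, %c
  ret float %r                 ; the two calls may fuse into one
}

A sinpif call in one function paired with a cospif call in a different
function must not be fused; that was the bug.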

This fixes PR26993.

llvm-svn: 263875
2016-03-19 04:53:02 +00:00
David Majnemer cdf2873e36 [InstCombine] Don't insert instructions before a catch switch
CatchSwitches are not splittable, we cannot insert casts, etc. before
them.

This fixes PR26992.

llvm-svn: 263874
2016-03-19 04:39:52 +00:00
Mehdi Amini 8d05185a26 Rework linkInModule(), making it oblivious to ThinLTO
Summary:
ThinLTO relies on linkInModule to import selected functions.
However, a lot of "magic" was hidden in linkInModule and the IRMover,
which would rename and promote global variables on the fly.

This moves to an approach where the steps are decoupled and the
client is responsible for specifying the list of globals to import.
As a consequence some tests are changed, because they were relying on
the previous behavior, which imported the definition of *every*
single global without any control on the client side.
Now the burden is on the client to decide whether a global has to be
imported or not.

Reviewers: tejohnson

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D18122

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263863
2016-03-19 00:40:31 +00:00
Michael Kuperstein 5abc2765fa Have DataLayout::isLegalInteger() accept uint64_t
While not strictly necessary, since we don't support large integer
types, this avoids bugs due to silent truncation from uint64_t to a
32-bit unsigned (e.g. DL.isLegalInteger(DL.getTypeSizeInBits(Ty))).

This fixes PR26972.

Differential Revision: http://reviews.llvm.org/D18258

llvm-svn: 263850
2016-03-18 23:19:29 +00:00
Sanjoy Das 60fb899f28 [IndVars] Pass the right loop to isLoopInvariantPredicate
The loop on IVOperand's incoming values assumes IVOperand to be an
induction variable of the loop over which `S Pred X` is invariant;
otherwise, loop-invariant incoming values to IVOperand are not
guaranteed to dominate the comparison.

This fixes PR26973.

llvm-svn: 263827
2016-03-18 20:37:07 +00:00
Adam Nemet 709e3046ee [LoopDataPrefetch] Add TTI to limit the number of iterations to prefetch ahead
Summary:
It can hurt performance to prefetch ahead too much.  Be conservative for
now and don't prefetch ahead more than 3 iterations on Cyclone.

Reviewers: hfinkel

Subscribers: llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D17949

llvm-svn: 263772
2016-03-18 00:27:43 +00:00
Adam Nemet 6d8beeca53 [LoopDataPrefetch/Aarch64] Allow selective prefetching of large-strided accesses
Summary:
And use this TTI for Cyclone.  As explained in the original RFC
(http://thread.gmane.org/gmane.comp.compilers.llvm.devel/92758), the HW
prefetcher works for strides of up to 2KB.

I am also adding tests for this and the previous change (D17943):

* Cyclone prefetching accesses with a large stride
* Cyclone not prefetching accesses with a small stride
* Generic Aarch64 subtarget not prefetching either

Reviewers: hfinkel

Subscribers: aemerson, rengolin, llvm-commits, mzolotukhin

Differential Revision: http://reviews.llvm.org/D17945

llvm-svn: 263771
2016-03-18 00:27:38 +00:00
Adam Nemet b0c4eae073 [LoopVectorize] Annotate versioned loop with noalias metadata
Summary:
Use the new LoopVersioning facility (D16712) to add noalias metadata in
the vector loop if we versioned with memchecks.  This can enable some
optimization opportunities further down the pipeline (see the included
test or the benchmark improvement quoted in D16712).

The test also covers the bug I had in the initial version in D16712.

The vectorizer did not previously use LoopVersioning.  The reason is
that the vectorizer performs its transformations in a single shot.  It
creates an empty single-block vector loop that it then populates with
the widened, if-converted instructions.  Thus creating an intermediate
versioned scalar loop seems wasteful.

So this patch (rather than bringing in LoopVersioning fully) adds a
special interface to LoopVersioning to allow the vectorizer to add
no-alias annotation while still performing its own versioning.

As the vectorizer propagates metadata from the instructions in the
original loop to the vector instructions, we also check the pointer in
the original instruction and see if LoopVersioning can add no-alias
metadata based on the issued memchecks.

Reviewers: hfinkel, nadav, mzolotukhin

Subscribers: mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D17191

llvm-svn: 263744
2016-03-17 20:32:37 +00:00
Adam Nemet 5eccf07df3 [LoopVersioning] Annotate versioned loop with noalias metadata
Summary:
If we decide to version a loop to benefit a transformation, it makes
sense to record the now non-aliasing accesses in the newly versioned
loop.  This allows non-aliasing information to be used by subsequent
passes.

One example is 456.hmmer in SPECint2006 where after loop distribution,
we vectorize one of the newly distributed loops.  To vectorize we
version this loop to fully disambiguate may-aliasing accesses.  If we
add the noalias markers, we can use the same information in a later DSE
pass to eliminate some dead stores which amounts to ~25% of the
instructions of this hot memory-pipeline-bound loop.  The overall
performance improves by 18% on our ARM64.

The scoped noalias annotation is added in LoopVersioning.  The patch
then enables this for loop distribution.  A follow-on patch will enable
it for the vectorizer.  Eventually this should be run by default when
versioning the loop, but first I'd like to get some feedback on whether
my understanding and application of scoped noalias metadata is correct.

Essentially my approach was to have a separate alias domain for each
versioning of the loop.  For example, if we first version in loop
distribution and then in vectorization of the distributed loops, we have
a different set of memchecks for each versioning.  By keeping the scopes
in different domains they can conveniently be defined independently
since different alias domains don't affect each other.

As written, I also have a separate domain for each loop.  This is not
necessary and we could save some metadata here by using the same domain
across the different loops.  I don't think it's a big deal either way.
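
A sketch (hypothetical IR, not from the patch) of the encoding: one
distinct domain per versioning, one scope per pointer group, and each
access lists its own scope plus the scopes it cannot alias.

define void @versioned.noalias(i32* %a, i32* %b) {
  %v = load i32, i32* %a, !alias.scope !2, !noalias !4
  store i32 %v, i32* %b, !alias.scope !4, !noalias !2
  ret void
}

!0 = distinct !{!0, !"LVerDomain"}   ; alias domain for this versioning
!1 = distinct !{!1, !0}              ; scope for the %a pointer group
!2 = !{!1}
!3 = distinct !{!3, !0}              ; scope for the %b pointer group
!4 = !{!3}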

Probably the best is to review the tests first to see if I mapped this
problem correctly to scoped noalias markers.  I have plenty of comments
in the tests.

Note that the interface is prepared for the vectorizer which needs the
annotateInstWithNoAlias API.  The vectorizer does not use LoopVersioning
so we need a way to pass in the versioned instructions.  This is also
why the maps have to become part of the object state.

Also currently, we only have an AA-aware DSE after the vectorizer if we
also run the LTO pipeline.  Depending how widely this triggers we may
want to schedule a DSE toward the end of the regular pass pipeline.

Reviewers: hfinkel, nadav, ashutosh.nema

Subscribers: mssimpso, aemerson, llvm-commits, mcrosier

Differential Revision: http://reviews.llvm.org/D16712

llvm-svn: 263743
2016-03-17 20:32:32 +00:00
Guozhi Wei 7b390ec4cd [InstCombine] Combine A->B->A BitCast
This patch enhances InstCombine to handle the following case:

        A  ->  B    bitcast
        PHI
        B  ->  A    bitcast
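
A concrete sketch (hypothetical IR) of that pattern:

define float @merge(i1 %c, float %x, float %y) {
entry:
  %xi = bitcast float %x to i32
  br i1 %c, label %other, label %join

other:
  %yi = bitcast float %y to i32
  br label %join

join:
  %p = phi i32 [ %xi, %entry ], [ %yi, %other ]
  %r = bitcast i32 %p to float   ; becomes a float phi of %x and %y
  ret float %r
}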

llvm-svn: 263734
2016-03-17 18:47:20 +00:00