different widths. In a use with a narrower fixup, formulae
may be wider than the fixup, in which case the high bits
aren't necessarily meaningful, so it isn't safe to reuse
them for uses with wider fixups.
This fixes PR7618, though the testcase is too large for a
reasonable regression test, since it heavily depends on
hitting LSR's heuristics in a certain way.
llvm-svn: 108455
the corresponding or-icmp-and pattern. This has the added benefit of doing
the matching earlier, and thus being less susceptible to being confused by
earlier transforms.
llvm-svn: 108429
it *changing* the things it replaces, not just causing them
to drop to null. There is no functionality change yet, but
this is required for a subsequent patch.
llvm-svn: 108414
"bonus" instruction to be speculatively executed. Add a heuristic to
ensure we're not tripping up out-of-order execution by checking that this bonus
instruction only uses values that were already guaranteed to be available.
This allows us to eliminate the short circuit in (x&1)&&(x&2).
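A hedged C-level view of the effect (LLVM performs this on the IR, so the
actual output may differ; function names are hypothetical):

int both_bits(int x) {
  return (x & 1) && (x & 2);  /* before: the second test is short-circuited */
}

int both_bits_folded(int x) {
  return ((x & 1) != 0) & ((x & 2) != 0);  /* after: both tests run, no branch */
}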
llvm-svn: 108351
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant. This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.
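A hedged C sketch of the shape being handled (artificial example, not the
actual testcase):

int f(int n) {
  if (n == 0)
    return 0;   /* one return instruction returns this constant... */
  f(n - 1);     /* tail call whose result is unused */
  return 1;     /* ...followed by a return of a different constant */
}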
llvm-svn: 108241
operation, but the way it's implemented requires the operation to also be
commutative. So add a check for commutativity (and tweak the corresponding
comments). This makes no difference in practice since every associative
LLVM instruction is also commutative! Here's an example to show the need
for commutativity: the accum_recursion.ll testcase calculates the factorial
function. Before the transformation the result of a call is
((((1*1)*2)*3)...)*x
while afterwards it is
(((1*x)*(x-1))...*2)*1
which requires * to be both associative and commutative for the result
to equal the original.
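A hedged C rendering of that example (shape assumed; the actual
accum_recursion.ll testcase is not reproduced here):

/* Before: the products accumulate as ((((1*1)*2)*3)...)*x. */
int fact(int x) {
  if (x <= 1)
    return 1;
  return fact(x - 1) * x;
}

/* After the transformation, an accumulator multiplies in the opposite
   order, (((1*x)*(x-1))...*2)*1, with the final *1 from the base case. */
int fact_accum(int x) {
  int accum = 1;
  while (x > 1) {
    accum *= x;
    --x;
  }
  return accum * 1;
}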
llvm-svn: 108056
(X >s -1) ? C1 : C2 and (X <s 0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.
This optimization could be extended to handle non-constant C1 and C2, but it
is better to stay conservative for now to avoid code size bloat.
For
int sel(int n) {
  return n >= 0 ? 60 : 100;
}
we now generate
sarl $31, %edi
andl $40, %edi
leal 60(%rdi), %eax
instead of
testl %edi, %edi
movl $60, %ecx
movl $100, %eax
cmovnsl %ecx, %eax
llvm-svn: 107866
builds to "Release". The default build is unchanged (optimization on,
assertions on), however it is now called Release+Asserts. The intent
is that future LLVM releases distributed via llvm.org will be Release builds
in the new sense, i.e. will have assertions disabled (currently they have
assertions enabled, for a more than 20% slowdown). This will bring them
in line with MacOS releases, which ship with assertions disabled. It also
means that "Release" now means the same things in make and cmake builds:
cmake already disables assertions for "Release" builds AFAICS.
llvm-svn: 107758
have any effect, and second, deleting stores can potentially invalidate
an AliasAnalysis, and there's currently no notification for this.
llvm-svn: 107496
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.
For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
Currently only supported on Darwin platforms.
llvm-svn: 107433
such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.
Add a new special pass to remove debug info for such symbols.
llvm-svn: 107416
metadata types which should be marked as "weak", but which the linker will
remove upon final linkage. For example, the "objc_msgSend_fixup_alloc" symbol is
defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
llvm-svn: 107205
is stripped off. Currently set unconditionally, since the API
does not provide a way of working out if anything was actually
stripped off.
llvm-svn: 107142
large integers, the first inserted value would always create
an 'or X, 0'. Even though this is trivially zapped by
instcombine, don't bother creating this pointless instruction.
llvm-svn: 106979
the returned value after the tail call if it differs from other return
values. The optimal thing to do would be to introduce a phi node for
the return value, but for the moment just fix the miscompile.
llvm-svn: 106947
SCEVUnknown values which are loop-variant, as LSR can't do anything
interesting with these values in any case. This fixes very slow compile
times on loops which have large numbers of such values.
llvm-svn: 106897
for an "i" constraint should get lowered; PR 6309. While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.
llvm-svn: 106893
Failure to seed metadata in such cases causes trouble when, in a cloned module, metadata from the new module refers to values in the old module. Usually this results in mysterious bugpoint crashes. For example:
Checking to see if we can delete global inits: Unknown constant!
UNREACHABLE executed at /d/g/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp:904!
llvm-svn: 106592
use sharing map. The reconcileNewOffset logic already forces a
separate use if the kinds differ, so incorporating the kind in the
key means we can track more sharing opportunities.
More sharing means fewer total uses to track, which means smaller
problem sizes, which means the conservative throttles don't kick
in as often.
llvm-svn: 106396
Changed directly instead of using a return value.
Rename FilterOutUndesirableDedicatedRegisters's Changed variable to
distinguish it from LSRInstance's Changed member.
llvm-svn: 104269
operand on the left, the interesting operand is on the right. This
fixes a bug where LSR was failing to recognize ICmpZero uses,
which led it to be unable to reverse the induction variable in the
attached testcase.
Delete test/CodeGen/X86/stack-color-with-reg-2.ll, because its test
is extremely fragile and hard to meaningfully update.
llvm-svn: 104262
vector<>::push_back() in:
void foo(vector<int> &a, vector<unsigned> &b) {
  a.push_back(10);
  b.push_back(11);
}
to two calls to the same push_back function, or fold away the two copies of
push_back() in:
struct T { int x; };
struct S { char c; };
vector<T*> t;
vector<S*> s;
void f(T *x) { t.push_back(x); }
void g(S *x) { s.push_back(x); }
but leave f() and g() separate, since they refer to two different global
variables.
llvm-svn: 103698
on RAUW of functions, this is a correctness issue instead of a mere memory
usage problem.
No testcase until the new MergeFunctions can land.
llvm-svn: 103653
when it detects undefined behavior. llvm.trap generally codegens into
something really small (e.g. a 2-byte ud2 instruction on x86), and debugging this
sort of thing is "nontrivial". For example, we now compile:
void foo() { *(int*)0 = 42; }
into:
_foo:
pushl %ebp
movl %esp, %ebp
ud2
Some may even claim that this is a security hole, though that seems dubious
to me. This addresses rdar://7958343 - Optimizing away null dereference
potentially allows arbitrary code execution
llvm-svn: 103356
with a vector input and output into a shuffle vector. This sort of
sequence happens when the input code stores with one type and reloads
with another type and then SROA promotes to i96 integers, which make
everyone sad.
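A hedged C sketch of the kind of input that produces such sequences
(illustrative 128-bit types; the reported case involved a 96-bit aggregate):

typedef float v4sf __attribute__((vector_size(16)));
typedef int   v4si __attribute__((vector_size(16)));

v4si reinterpret_vec(v4sf f) {
  union { v4sf f; v4si i; } u;
  u.f = f;      /* store with one type */
  return u.i;   /* reload with another; SROA turns the alloca into a large
                   integer threaded through insert/extract chains */
}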
This fixes rdar://7896024
llvm-svn: 103354
LSRUse's Regs set after all pruning is done, rather than trying
to do it on the fly, which can produce an incomplete result.
This fixes a case where heuristic pruning was stripping all
formulae from a use, which led the solver to enter an infinite
loop.
Also, add a few asserts to diagnose this kind of situation.
llvm-svn: 103328
indirect branches in all the predecessors. This avoids unnecessarily
splitting edges in cases where load PRE is not possible anyway.
Thanks to Jakub Staszak for pointing this out.
llvm-svn: 103034
halting analysis, it is illegal to delete a call to a read-only function.
The correct solution is almost certainly to add a "must halt" attribute and
only allow deletions in its presence.
XFAIL the relevant testcase for now.
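For illustration, a hedged C example of why the deletion is unsound: the
call below reads and writes no memory, yet removing it changes behavior
because it may never halt.

int spin(unsigned n) {
  while (n != 0)
    n |= 1;     /* never terminates for nonzero n */
  return 0;
}

void caller(void) {
  spin(1);      /* read-only and result unused, but deleting this call
                   erases a non-terminating computation */
}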
llvm-svn: 102831
that can have a big effect :). The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call. In this case, it will rerun all the passes it
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.
The second patch is to add *all* call sites to the
DevirtualizedCalls list the inliner uses. This list is
about to get renamed, but the gist of this is that the
inliner now reconsiders *all* inlined call sites as candidates
for further inlining. The intuition is that in cases
like this:
f() { g(1); } g(int x) { h(x); }
We analyze this bottom up, and may decide that it isn't
profitable to inline H into G. Next step, we decide that it is
profitable to inline G into F, and do so, which means that F
now calls H. Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).
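Spelled out as a hedged C sketch (the inline-cost details here are assumed
purely for illustration):

/* Suppose h is too expensive to inline into g at an unknown x... */
static int h(int x) { return x * x + x / 3; }
static int g(int x) { return h(x); }
/* ...but once g is inlined into f, f contains the call h(1), which the
   cost analysis can see folds down to a constant. */
int f(void) { return g(1); }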
In my spot checks, this doesn't have a big impact on code. For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308 bytes) and 176.gcc actually shrunk by 0.3% (from 1525612
to 1520964 bytes). 252.eon never iterated in the SCC Passmgr,
176.gcc iterated at most 1 time.
llvm-svn: 102823
that appear due to inlining a callee as candidates for
further inlining, but a recent patch made it do this if
those call sites were indirect and became direct.
Unfortunately, in bizarre cases (see testcase) doing this
can cause us to infinitely inline mutually recursive
functions into callers not in the cycle. Fix this by
keeping track of the inline history that call site inline
candidates were inlined from.
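A hedged C sketch of the bizarre case (the actual testcase is not
reproduced):

void a(void);
void b(void) { a(); }
void a(void) { b(); }
void outside(void) { a(); }  /* inline a, then b, then a again, forever */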
This shouldn't affect any "real world" code, but is required
for a follow on patch that is coming up next.
llvm-svn: 102822
add a version of createLowerInvokePass that allows the client
to specify whether it wants "expensive" or "cheap" lowering.
Patch by Alex Mac!
llvm-svn: 102402
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call. The improved inliner behavior is still
disabled for now.
llvm-svn: 102196
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
llvm-svn: 102146
arguments are handled with a new InlineFunctionInfo class. This
makes it easier to extend InlineFunction to return more info in the
future.
llvm-svn: 102137
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}
define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}
The inliner is turning the indirect call to %__f into a direct
call to @f2. Make the call graph more precise when this happens.
The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.
llvm-svn: 102131
condition we're unswitching on. In this case, don't try to
simplify the second copy of the loop which may be dead or not,
but is probably a constant now. This fixes PR6879.
llvm-svn: 101870
Arg promotion was deleting call graph nodes that still had references
from the 'indirect' CGN. Like the inliner, it should only delete the
function if all references are gone.
llvm-svn: 101845
just ask ScalarEvolution for it on demand. This helps IVUsers be more robust
in the case of expressions changing underneath it. This fixes PR6862.
llvm-svn: 101819
to determine where to place PHIs by iteratively comparing reaching definitions
at each block. That was just plain wrong. This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.
llvm-svn: 101612
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>. No functionality change,
but now we have a much tidier interface.
llvm-svn: 101558
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101465
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
The commit "Adding IPSCCP and Internalize passes to the C-bindings" introduced
new dependencies for IPO. Add these to the CMake build as otherwise the
BUILD_SHARED_LIBS=1 build fails.
llvm-svn: 101313
- TryToOptimizeStoreOfMallocToGlobal should check if TargetData is available and bail out if it is not. The transformations being done require TD.
llvm-svn: 101285
it can check whether the visible direct callers are passing in parameters to
dead arguments and replace those with undef.
This reinstates r94322 with bugs fixed.
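A hedged C sketch of the situation (names hypothetical):

static int callee(int used, int dead) {
  return used * 2;            /* 'dead' is never referenced */
}
int caller(int a) {
  return callee(a, a + 42);   /* a + 42 feeds a dead argument, so the
                                 passed value can be replaced with undef */
}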
llvm-svn: 101213
numerator is an induction variable. For example, with code like this:
for (i = 0; i < n; ++i)
  x[i%n] = 0;
IndVarSimplify will now recognize that i is always less than n inside
the loop, and eliminate the remainder.
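That is, the loop becomes (illustrative):

for (i = 0; i < n; ++i)
  x[i] = 0;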
llvm-svn: 101113
expression is a UDiv and it doesn't appear that the UDiv came from
the user's source.
ScalarEvolution has recently figured out how to compute a tripcount
expression for the inner loop in
SingleSource/Benchmarks/Shootout/sieve.c, using a udiv. Emitting a
udiv instruction dramatically slows down the enclosing loop.
llvm-svn: 101068
a ScalarEvolution bug with overflow handling is fixed, the normal analysis
code will automatically decline to operate on the icmp instructions which
are responsible for the loop exit.
llvm-svn: 101032
instead of deleting just the user. This makes it more consistent with
other code in IndVarSimplify, and theoretically can eliminate more users
earlier.
llvm-svn: 101027
the loop exit test. This usually doesn't come up for a variety of
reasons, but it isn't impossible, so make IndVarSimplify handle it
conservatively.
llvm-svn: 101008
variables. For example, with code like this:
for (i = 0; i < n; ++i)
  if (i < n)
    x[i] = 0;
IndVarSimplify will now recognize that i is always less than n inside
the loop, and eliminate the if.
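The result is simply (illustrative):

for (i = 0; i < n; ++i)
  x[i] = 0;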
llvm-svn: 101000
parameters in the CBE by implicitly adding a fixed argument.
This allows eliminating a work-around from DAE. Patch by
Sylvere Teissier!
llvm-svn: 100944
into adjacent loops. Also, ensure that the insert position is
dominated by the loop latch of any loop in the post-inc set which
has a latch.
llvm-svn: 100906
forced constant is changed to a constant, we would end
up adding the instruction to the wrong worklist,
preventing it from being properly revisited. This fixes
rdar://7832370
llvm-svn: 100837
explicitly split into stride-and-offset pairs. Also, add the
ability to track multiple post-increment loops on the same expression.
This refines the concept of "normalizing" SCEV expressions used for
post-increment uses, and introduces a dedicated utility routine for
normalizing and denormalizing expressions.
This fixes the expansion of expressions which are post-increment users
of more than one loop at a time. More broadly, this takes LSR another
step closer to being able to reason about more than one loop at a time.
llvm-svn: 100699
undefs in branches/switches, we have two cases: a branch on a literal
undef or a branch on a symbolic value which is undef. If we have a
literal undef, the code was correct: forcing it to a constant is the
right thing to do.
If we have a branch on a symbolic value that is undef, we should force
the symbolic value to a constant, which then makes the successor block
live. Forcing the condition of the branch to being a constant isn't
safe if later paths become live and the value becomes overdefined. This
is the case that 'forcedconstant' is designed to handle, so just use it.
This fixes rdar://7765019 but there is no good testcase for this, the
one I have is too insane to be useful in the future.
llvm-svn: 100478
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
exits the loop. With this information we can guarantee
the iteration count of the loop is bounded by the
compare. I think this xform is finally safe now.
llvm-svn: 100285
checker. Amusingly, we already had tests in the testsuite
that we should have rejected because they would be
miscompiled.
The remaining issue with this is that we don't check that
the branch causes us to exit the loop if it fails, so we
don't actually know if we remain in bounds.
llvm-svn: 100284
to a signed vs unsigned value depending on the sign of the
constant fp means that we can't distinguish between a
truly negative number and a positive number so large that the
32nd bit is set. So, don't do this!
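A hedged, standalone C illustration of the ambiguity:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t bits = 0x80000000u;  /* the 32nd bit is set */
  /* The same bit pattern is both a large positive unsigned value and the
     most negative signed value. */
  printf("%u vs %d\n", bits, (int32_t)bits);  /* 2147483648 vs -2147483648 */
  return 0;
}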
llvm-svn: 100283
this cleans up a bunch of code and also fixes several crashes and
miscompiles. More to come unfortunately, this optimization
is quite broken.
llvm-svn: 100270
(what was I thinking?) and there's also a problem with LCSSA. I'll try again
later with fixes.
--- Reverse-merging r100263 into '.':
U lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100177 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100148 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100147 into '.':
U include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100131 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100130 into '.':
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100126 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
--- Reverse-merging r100050 into '.':
D test/Transforms/GVN/2010-03-31-RedundantPHIs.ll
--- Reverse-merging r100047 into '.':
G include/llvm/Transforms/Utils/SSAUpdater.h
G lib/Transforms/Utils/SSAUpdater.cpp
llvm-svn: 100264
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191