of a pointer. This allows us to catch more equivalences. For example,
the type_lists_compatible_p function used to require two iterations of
the gvn pass (!) to delete its 18 redundant loads because the first pass
would CSE all the addressing computation cruft, which would unblock the
second memdep/gvn pass, allowing it to recognize them. This change allows
memdep/gvn to catch all 18 when run just once on the function (as is
typical :) instead of just 3.
On all of 403.gcc, this bumps up the number of redundancies found from:
63 gvn - Number of instructions PRE'd
153991 gvn - Number of instructions deleted
50069 gvn - Number of loads deleted
to:
63 gvn - Number of instructions PRE'd
154137 gvn - Number of instructions deleted
50185 gvn - Number of loads deleted
+116 loads deleted isn't bad.
llvm-svn: 60799
MemDep::getNonLocalPointerDependency method. There are
some open issues with this (missed optimizations) and
plenty of future work, but this does allow GVN to eliminate
*slightly* more loads (49246 vs 49033).
Switching over now allows simplification of the other code
path in memdep.
llvm-svn: 60780
doesn't do its own local caching, and is slightly more aggressive about
free/store dse (see testcase). This eliminates the last external client
of MemDep::getDependenceFrom().
llvm-svn: 60619
loops when they can be subsumed into addressing modes.
Change X86 addressing mode check to realize that
some PIC references need an extra register.
(I believe this is correct for Linux, if not, I'm sure
someone will tell me.)
llvm-svn: 60608
1. Merge the 'None' result into 'Normal', making loads
and stores return their dependencies on allocations as Normal.
2. Split the 'Normal' result into 'Clobber' and 'Def' to
distinguish between the cases when memdep knows the value is
produced from the cases where we just know it may be changed
(see the sketch after this list).
3. Move some of the logic for determining whether readonly calls
are CSEs into memdep instead of it being in GVN. This still
leaves verification that the arguments are the same to GVN to
let it know about value equivalences in different contexts.
4. Change memdep's call/call dependency analysis to use
getModRefInfo(CallSite,CallSite) instead of doing something
very weak. This only really matters for things like DSA, but
someday maybe we'll have some other decent context sensitive
analyses :)
5. This reimplements the guts of memdep to handle the new results.
6. This simplifies GVN significantly:
a) readonly call CSE is slightly simpler
b) I eliminated the "getDependencyFrom" chaining for load
elimination, and load CSE doesn't have to worry about
volatile loads (they are always clobbers) anymore.
c) GVN no longer does any 'lastLoad' caching, leaving it to
memdep.
7. The logic in DSE is simplified a bit and sped up. A potentially
unsafe case was eliminated.
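As a rough sketch of how a client consumes the new results (MD is
the MemoryDependenceAnalysis pass; the surrounding GVN logic is elided):
MemDepResult Dep = MD.getDependency(L);   // L: the load being examined
if (Dep.isDef()) {
  // memdep pinned down the instruction producing the value, e.g. a
  // store to the same location: forward the stored value.
} else if (Dep.isClobber()) {
  // the location may be modified here but the value is unknown:
  // give up on eliminating this load.
}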
llvm-svn: 60607
This fixes many bugs. I will add more test cases in a separate check-in.
Some day, the code that manipulates the CFG and updates dominator info could use refactoring help.
llvm-svn: 60554
1) have it fold "br undef", which does occur with
surprising frequency as jump threading iterates.
2) teach j-t to delete dead blocks. This removes the successor
edges, reducing the in-edges of other blocks, allowing
recursive simplification.
3) Fold things like:
br COND, BBX, BBY
BBX:
br COND, BBZ, BBW
which also happens because jump threading iterates.
llvm-svn: 60470
straightforward implementation. This does not require any extra
alias analysis queries beyond what we already do for non-local loads.
Some programs really really like load PRE. For example, SPASS triggers
this ~1000 times, ~300 times in 255.vortex, and ~1500 times on 403.gcc.
The biggest limitation to the implementation is that it does not split
critical edges. This is a huge killer on many programs and should be
addressed after the initial patch is enabled by default.
The implementation of this should incidentally speed up rejection of
non-local loads because it avoids creating the repl densemap in cases
when it won't be used for fully redundant loads.
This is currently disabled by default.
Before I turn this on, I need to fix a couple of miscompilations in
the testsuite, look at compile time performance numbers, and look at
perf impact. This is pretty close to ready though.
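For reference, the shape of code load PRE targets, as a hypothetical
C++ example:
// *P is loaded on only one path into the join, so the load in the
// return is partially redundant; PRE inserts a load on the other path
// and merges the two values with a phi.
int f(int *P, bool C) {
  int X = 0;
  if (C)
    X = *P;        // load available on this path only
  return X + *P;   // partially redundant, made fully redundant by PRE
}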
llvm-svn: 60408
constant. If X is a constant, then this is folded elsewhere.
- Added a note to Target/README.txt to indicate that we'd like to implement
this when we're able.
llvm-svn: 60399
figuring out the base of the IV. This produces better
code in the example. (Addresses use (IV) instead of
(BASE,IV) - a significant improvement on low-register
machines like x86).
llvm-svn: 60374
instead of std::sort. This shrinks the release-asserts LSR.o file
by 1100 bytes of code on my system.
We should start using array_pod_sort where possible.
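Usage is a drop-in replacement for std::sort on POD element types:
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
// array_pod_sort dispatches to one shared qsort-style routine rather
// than instantiating a separate std::sort for each element type.
void sortOffsets(llvm::SmallVectorImpl<unsigned> &Offsets) {
  llvm::array_pod_sort(Offsets.begin(), Offsets.end());
}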
llvm-svn: 60335
buggy rewrite, this notifies ScalarEvolution of a pending instruction
about to be removed and then erases it, instead of erasing it then
notifying.
llvm-svn: 60329
new instructions it simplifies. Because we're threading jumps on edges
with constants coming in from PHIs, we are inherently exposing a lot more
constants to the new block. Folding them and deleting dead conditions
allows the cost model in jump threading to be more accurate as it iterates.
llvm-svn: 60327
elimination: when finding dependent loads/stores, realize that
they are the same if alias analysis reports must-alias, instead of
relying on the pointers being exactly equal. This makes load elimination
more aggressive. For example, on 403.gcc, we had:
< 68 gvn - Number of instructions PRE'd
< 152718 gvn - Number of instructions deleted
< 49699 gvn - Number of loads deleted
< 6153 memdep - Number of dirty cached non-local responses
< 169336 memdep - Number of fully cached non-local responses
< 162428 memdep - Number of uncached non-local responses
now we have:
> 64 gvn - Number of instructions PRE'd
> 153623 gvn - Number of instructions deleted
> 49856 gvn - Number of loads deleted
> 5022 memdep - Number of dirty cached non-local responses
> 159030 memdep - Number of fully cached non-local responses
> 162443 memdep - Number of uncached non-local responses
That's an extra 157 loads deleted and extra 905 other instructions nuked.
This slows down GVN very slightly, from 3.91 to 3.96s.
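Conceptually, the new check is just (a sketch; AA is the AliasAnalysis
interface, sizes in bytes, and the real query site in memdep differs):
// Treat two accesses as the same location when alias analysis proves
// must-alias, not only when the pointer values are identical.
if (AA.alias(APtr, ASize, BPtr, BSize) == AliasAnalysis::MustAlias) {
  // the later load can reuse the earlier value
}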
llvm-svn: 60314
vector instead of a densemap. This shrinks the memory usage of this thing
substantially (the high water mark) as well as making operations like
scanning it faster. This speeds up memdep slightly; gvn goes from
3.9376s to 3.9118s on 403.gcc.
This also splits out the statistics for the cached non-local case to
differentiate between the dirty and clean cached case. Here's the stats
for 403.gcc:
6153 memdep - Number of dirty cached non-local responses
169336 memdep - Number of fully cached non-local responses
162428 memdep - Number of uncached non-local responses
yay for caching :)
llvm-svn: 60313
Note that the FoldOpIntoPhi call is dead because it's impossible for the
first operand of a subtraction to be both a ConstantInt and a PHINode.
llvm-svn: 60306
"For signed integers, the determination of overflow of x*y is not so simple. If
x and y have the same sign, then overflow occurs iff xy > 2**31 - 1. If they
have opposite signs, then overflow occurs iff xy < -2**31."
In this case, x == -1.
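That rule collapses to a single range check on the exact product; a
minimal sketch:
#include <cstdint>
// Signed 32-bit x*y overflows iff the exact product lies outside
// [-2**31, 2**31 - 1]; the same-sign and opposite-sign cases quoted
// above are the two halves of this check.
bool MulOverflows(int32_t X, int32_t Y) {
  int64_t P = (int64_t)X * (int64_t)Y;
  return P > INT32_MAX || P < INT32_MIN;
}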
llvm-svn: 60278
overflowed on negation. This commit checks to make sure that neither C nor X
overflows. This requires that the RHS of X (a subtract instruction) be a
constant integer.
llvm-svn: 60275
If we see that a load depends on the allocation of its memory with no
intervening stores, we now return a 'None' dependency instead of "Normal".
This tweaks GVN to do its optimization with the new result.
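The case in question looks like this hypothetical C++:
#include <cstdlib>
// The load reads freshly allocated memory with no intervening store,
// so its "dependency" is the allocation itself and the loaded value is
// undefined; GVN can exploit that.
int firstWord() {
  int *P = (int *)malloc(sizeof(int));
  return *P;   // no store between the allocation and the load
}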
llvm-svn: 60267
query. This makes it crystal clear what cases can escape from MemDep that
the clients have to handle. This also gives the clients a nice simplified
interface to it that is easy to poke at.
This patch also makes DepResultTy and MemoryDependenceAnalysis::DepType
private, yay.
llvm-svn: 60231
of a pointer/int pair instead of a manually bitmangled pointer.
This forces clients to think a little more about checking the
appropriate pieces and will be useful for internal
implementation improvements later.
I'm not particularly happy with this. After going through this
I don't think that the clients of memdep should be exposed to
the internal type at all. I'll fix this in a subsequent commit.
This has no functionality change.
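For example, the representation is essentially (a sketch; the tag
width here is illustrative):
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/IR/Instruction.h"
// PointerIntPair packs a small integer tag into the known-zero low
// bits of an aligned pointer, replacing the manual bit-mangling.
typedef llvm::PointerIntPair<llvm::Instruction*, 2, unsigned> DepResultTy;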
llvm-svn: 60230
wrappers around the interesting code and use an obscure iterator
abstraction that dates back many many years.
Move EraseDeadInstructions to Transforms/Utils and name it
RecursivelyDeleteTriviallyDeadInstructions.
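Typical use after the move looks like (a sketch):
#include "llvm/Transforms/Utils/Local.h"
// After RAUW the old instruction is trivially dead; this call deletes
// it and recursively deletes operands whose last use just went away.
void replaceAndClean(llvm::Instruction *I, llvm::Value *NewV) {
  I->replaceAllUsesWith(NewV);
  llvm::RecursivelyDeleteTriviallyDeadInstructions(I);
}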
llvm-svn: 60191
1. Make it fold blocks separated by an unconditional branch. This enables
jump threading to see a broader scope.
2. Make jump threading able to eliminate locally redundant loads when they
feed the branch condition of a block. This frequently occurs due to
reg2mem running.
3. Make jump threading able to eliminate *partially redundant* loads when
they feed the branch condition of a block. This is common in code with
lots of loads and stores, like C++ code and 255.vortex (see the
sketch below).
This implements thread-loads.ll and rdar://6402033.
Per the FIXMEs, several pieces of this should be moved into Transforms/Utils.
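For instance, case 3 fires on patterns like this hypothetical C++:
void g();
// On the path through the store the value of *P is known, so the
// partially redundant load feeding the second branch can be threaded
// for that path.
void f(int *P, bool C) {
  if (C)
    *P = 0;
  if (*P == 0)
    g();
}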
llvm-svn: 60148
performance in most cases on the Grawp tester, but does speed some
things up (like shootout/hash by 15%). This also doesn't impact
compile time in a noticeable way on the Grawp tester.
It also, of course, gets the testcase it was designed for right :)
llvm-svn: 60120
heuristic: the value is already live at the new memory operation if
it is used by some other instruction in the memop's block. This is
cheap and simple to compute (more so than full liveness).
This improves the new heuristic even more. For example, it cuts two
out of three new instructions out of 255.vortex:DbmFileInGrpHdr,
which is one of the functions that the heuristic regressed. This
overall eliminates another 40 instructions from 403.gcc and visibly
reduces register pressure in 255.vortex (though this only actually
ends up saving those 2 instructions across the whole program).
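A minimal sketch of the heuristic (the helper name is hypothetical):
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"
// The value counts as already live at the memory operation if some
// other instruction in the memop's block uses it.
static bool usedElsewhereInBlock(llvm::Value *Addr,
                                 llvm::Instruction *MemOp) {
  for (llvm::User *U : Addr->users()) {
    llvm::Instruction *UI = llvm::dyn_cast<llvm::Instruction>(U);
    if (UI && UI != MemOp && UI->getParent() == MemOp->getParent())
      return true;
  }
  return false;
}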
llvm-svn: 60084
phrased in terms of liveness instead of as a horrible hack. :)
In practice, this doesn't change the generated code for either
255.vortex or 403.gcc, but it could cause minor code changes in
theory. This is framework for coming changes.
llvm-svn: 60082
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation. In cases where it works, it can substantially reduce
register pressure.
This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several
things I want to do before enabling it by default:
1. This isn't doing any caching of results, so it is much slower
than it could be. It currently slows down release-asserts llc
by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
across the computation for other reasons.
I'll continue poking at this, and eventually turn it on as llcbeta.
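The basic observation above, as a hypothetical C++ example:
void consume(int *);
// Sinking &A[I] into the load's addressing mode doesn't pay here: the
// address also has a non-memory use, so the computation must stay live
// in a register anyway, and folding it merely duplicates the work.
int f(int *A, long I) {
  int *P = &A[I];   // address computation
  consume(P);       // non-memory use keeps P live
  return *P;        // candidate for sinking into an addressing mode
}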
llvm-svn: 60074
optimize addressing modes. This allows us to optimize things like isel-sink2.ll
into:
movl 4(%esp), %eax
cmpb $0, 4(%eax)
jne LBB1_2 ## F
LBB1_1: ## TB
movl $4, %eax
ret
LBB1_2: ## F
movzbl 7(%eax), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
cmpb $0, 4(%eax)
leal 4(%eax), %eax
jne LBB1_2 ## F
LBB1_1: ## TB
movl $4, %eax
ret
LBB1_2: ## F
movzbl 3(%eax), %eax
ret
This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
it is really testing what it thinks it is.
llvm-svn: 60068
can recursively match things) and scales by 0 by ignoring them.
This triggers once in 403.gcc, saving 1 (!!!!) instruction in the
whole huge app.
llvm-svn: 60013
into a new AddressingModeMatcher class. This makes it easier
to reason about and reduces passing around of stuff, but has
no functionality change.
llvm-svn: 60012
g++ -m32 -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wmissing-format-attribute -fno-common -mdynamic-no-pic -DHAVE_CONFIG_H -Wno-unused -DTARGET_NAME=\"i386-apple-darwin9.5.0\" -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include -DENABLE_LLVM -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/../llvm.src/include -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include ../../llvm-gcc.src/gcc/llvm-types.cpp -o llvm-types.o
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemCpy(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemMove(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemSet(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i64' is not a member of 'llvm::Intrinsic'
make[3]: *** [llvm-convert.o] Error 1
make[3]: *** Waiting for unfinished jobs....
rm fsf-funding.pod gcov.pod gfdl.pod cpp.pod gpl.pod gcc.pod
make[2]: *** [all-stage1-gcc] Error 2
make[1]: *** [stage1-bubble] Error 2
make: *** [all] Error 2
llvm-svn: 59809
The previous patches didn't match correctly. Also, we need to make sure that
the conditional is the same before doing the transformation.
llvm-svn: 58978
original code was matching like this:
if (match(A, m_Not(m_Value(B))))
B was assumed to be a 'select' instruction. However, this isn't matching
what we think it's matching: it matches B as any 'Value', so basically
anything can bind to it. In this case, a Constant matched, B was bound
to that constant, and the wrong value was then used in the
SelectInst::Create statement, causing a crash.
After thinking on this for a moment, and after Nick L. told me how the pattern
matching stuff was supposed to work, the solution was to match NOT an m_Value,
but an m_Select.
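In code, the fix is roughly (a sketch; the matchers are from
PatternMatch.h):
#include "llvm/IR/PatternMatch.h"
using namespace llvm::PatternMatch;
// m_Value(B) binds *anything*, including a Constant; matching the
// select's structure makes the pattern mean what the code assumed.
bool matchesNotOfSelect(llvm::Value *A) {
  llvm::Value *Cond, *TVal, *FVal;
  return match(A, m_Not(m_Select(m_Value(Cond), m_Value(TVal),
                                 m_Value(FVal))));
}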
llvm-svn: 58946
to generate signed ICMP instructions to replace the FCMP. This would violate
the following:
define i1 @test1(i32 %val) {
%1 = uitofp i32 %val to double
%2 = fcmp ole double %1, 0.000000e+00
ret i1 %2
}
would be transformed into:
define i1 @test1(i32 %val) {
%1 = icmp slt i32 %val, 1
ret i1 %1
}
which is obviously wrong. This patch modifies InstCombiner::FoldFCmp_IntToFP_Cst
to handle when the LHS comes from UIToFP.
llvm-svn: 58929
This allows SCEV users to effectively calculate the trip count.
LSR later transforms the integer IVs back into floating point IVs
to avoid int-to-float casts inside the loop.
llvm-svn: 58625
* merge two weak functions by making them both alias a third non-weak fn
* don't reimplement CallSite::hasArgument
* whitelist the safe linkage types
llvm-svn: 58568
This triggers only 60 times in llvm-test (look at .llvm.bc, not .linked.rbc)
and so it probably won't be turned on by default. Also, many of those are likely
to go away when PR2973 is fixed.
llvm-svn: 58557
function.
- This explicitly models the costs for functions which should
"always" or "never" be inlined. This fixes bugs where such costs
were not previously respected.
llvm-svn: 58450
LargeBlockInfo, we can now dramatically simplify their implementation
and speed them up at the same time. Now the code has time proportional
to the number of uses of the alloca, not the size of the block.
This also eliminates code that tried to batch up different allocas which
are used in the same blocks, and eliminates the 'retry list' logic which
was baroque and is now unnecessary. In addition to being a speedup for crazy
cases, this is also a nice cleanup:
PromoteMemoryToRegister.cpp | 270 +++++++++++++++-----------------------------
1 file changed, 96 insertions(+), 174 deletions(-)
llvm-svn: 58229
a trivial dense map. Use this in RewriteSingleStoreAlloca to
avoid aggressively rescanning blocks over and over again. This
fixes PR2925, speeding up mem2reg on the testcase in that bug
from 4.56s to 0.02s in a debug build on my machine.
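Conceptually the cache is just (a sketch of the idea):
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Instruction.h"
// Cache each instruction's ordinal within its block, so questions like
// "does the store precede the load?" become map lookups instead of
// linear rescans of a huge block.
llvm::DenseMap<const llvm::Instruction*, unsigned> InstNumbers;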
llvm-svn: 58227
LoopPass*.
- Although less precise, this means they can be used in clients
without RTTI (who would otherwise need to include LoopPass.h, which
eventually includes things using dynamic_cast). This was the
simplest solution that presented itself, but I am happy to use a
better one if available.
llvm-svn: 58010
to find opportunities for store-to-load forwarding or load CSE,
in the same way that visitStore scans back to do DSE. Also, define
a new helper function for testing whether the addresses of two
memory accesses are known to have the same value, and use it in
both visitStore and visitLoad.
These two changes allow instcombine to eliminate loads in code
produced by front-ends that frequently emit obviously redundant
addressing for memory references.
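The kind of front-end output this catches, as hypothetical C++:
struct S { int X; };
// The field is stored and then reloaded through identical addressing;
// the load is replaced by the stored value (store-to-load forwarding).
int f(S *P, int V) {
  P->X = V;
  return P->X;   // eliminated in favor of V
}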
llvm-svn: 57608
This includes not marking a GEP involving a vector as unsafe, but only when it
has all zero indices. This allows scalarrepl to work in a few more cases.
llvm-svn: 57177
shifting and masking inside a bswap expr. This allows it to handle
the cases from PR2842, which involve the intermediate 'or'
expressions being shifted, not just the input value.
llvm-svn: 57095
when deciding whether to mark a function readnone/readonly.
Since the pass is currently run before SROA, this may be
quite helpful. Requested by Chris on IRC.
llvm-svn: 57050
pointer bitcasts and GEP's", and centralize the
logic in Value::getUnderlyingObject. The
difference with stripPointerCasts is that
stripPointerCasts only strips GEPs if all
indices are zero, while getUnderlyingObject
strips GEPs no matter what the indices are.
llvm-svn: 56922