Commit Graph

72 Commits

Author SHA1 Message Date
Chandler Carruth 59ff93afe6 Refactor insert and extract of sub-integers into static helpers that
operate purely on values. Sink the alloca loading and storing logic into
the rewrite routines that are specific to alloca-integer-rewrite
driving. This is just a refactoring here, but the subsequent step will
be to reuse the insertion and extraction logic when rewriting integer
loads and stores that have been split and decomposed into narrower loads
and stores.

No functionality changed other than different names for instructions.

llvm-svn: 166176
2012-10-18 09:56:08 +00:00
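(A minimal C++ sketch of the value-level extract/insert idea described in the commit above; the helper names and the flat uint64_t representation are illustrative, not the actual SROA helpers.)

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical, simplified analogues of the static helpers: extract a narrow
// integer from a wider value, and insert one back, purely as value-to-value
// operations (no alloca loads or stores involved).
static uint64_t extractInteger(uint64_t Wide, unsigned Width, unsigned Shift) {
  assert(Width + Shift <= 64 && "sub-integer must fit in the wide value");
  uint64_t Mask = Width == 64 ? ~0ULL : ((1ULL << Width) - 1);
  return (Wide >> Shift) & Mask;
}

static uint64_t insertInteger(uint64_t Wide, uint64_t Narrow, unsigned Width,
                              unsigned Shift) {
  assert(Width + Shift <= 64 && "sub-integer must fit in the wide value");
  uint64_t Mask = (Width == 64 ? ~0ULL : ((1ULL << Width) - 1)) << Shift;
  return (Wide & ~Mask) | ((Narrow << Shift) & Mask);
}
```
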
Chandler Carruth e793a50f45 This FIXME was fixed some time ago. =]
llvm-svn: 166175
2012-10-18 09:56:06 +00:00
Chandler Carruth 6fab42aa39 This just in, it is a *bad idea* to use 'udiv' on an offset of
a pointer. A very bad idea. Let's not do that. Fixes PR14105.

Note that this wasn't *that* glaring of an oversight. Originally, these
routines were only called on offsets within an alloca, which are
intrinsically positive. But over the evolution of the pass, they ended
up being called for arbitrary offsets, and things went downhill...

llvm-svn: 166095
2012-10-17 09:23:48 +00:00
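(An illustration, in plain C++ rather than the pass's code, of why an unsigned 'udiv' goes wrong once offsets can be negative; the values here are made up.)

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  int64_t Offset = -8;          // a perfectly legal (negative) pointer offset
  uint64_t ElementSize = 4;

  // Treating the offset as unsigned ("udiv") silently turns -8 into a huge
  // positive number and yields a nonsense element index.
  uint64_t BadIndex = static_cast<uint64_t>(Offset) / ElementSize;
  // Signed division keeps the intended meaning.
  int64_t GoodIndex = Offset / static_cast<int64_t>(ElementSize);

  std::printf("udiv-style: %llu, sdiv-style: %lld\n",
              (unsigned long long)BadIndex, (long long)GoodIndex);
}
```
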
Chandler Carruth 40617f593e Fix a really annoying "bug" introduced in r165941. The change from that
revision makes no sense. We cannot use the address space of the *post
indexed* type to conclude anything about a *pre indexed* pointer type's
size. More importantly, this index can never be over a pointer. We are
indexing over arrays and vectors here.

Of course, I have no test case here. Neither did the original patch. =/

llvm-svn: 166091
2012-10-17 07:22:16 +00:00
Micah Villmow 4bb926d91d Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Chandler Carruth 49c8eea3c0 Update the memcpy rewriting to fully support widened int rewriting. This
includes extracting ints for copying elsewhere and inserting ints when
copying into the alloca. This should fix the CanSROA assertion coming
out of Clang's regression test suite.

llvm-svn: 165931
2012-10-15 10:24:43 +00:00
Chandler Carruth 9d966a2002 Follow-up fix to r165928: handle memset rewriting for widened integers,
and generally clean up the memset handling. It had rotted a bit as the
other rewriting logic got polished more.

llvm-svn: 165930
2012-10-15 10:24:40 +00:00
Chandler Carruth 435c4e0792 First major step toward addressing PR14059. This teaches SROA to handle
cases where we have partial integer loads and stores to an otherwise
promotable alloca to widen[1] those loads and stores to cover the entire
alloca and bitcast them into the appropriate type such that promotion
can proceed.

These partial loads and stores stem from an annoying confluence of ARM's
calling convention and ABI lowering and the FCA pre-splitting which
takes place in SROA. Clang lowers a { double, double } in-register
function argument as a [4 x i32] function argument to ensure it is
placed into integer 32-bit registers (a really unnerving implicit
contract between Clang and the ARM backend I would add). This results in
a FCA load of [4 x i32]* from the { double, double } alloca, and SROA
decomposes this into a sequence of i32 loads and stores. Inlining
proceeds, code gets folded, but at the end of the day, we still have i32
stores to the low and high halves of a double alloca. Widening these to
be i64 operations, and bitcasting them to double prior to loading or
storing allows promotion to proceed for these allocas.

I looked quite a bit at changing the IR which Clang produces for this case

to be more friendly, but small changes seem unlikely to help. I think
the best representation we could use currently would be to pass 4 i32
arguments thereby avoiding any FCAs, but that would still require this
fix. It seems like it might eventually be nice to somehow encode the ABI
register selection choices outside of the parameter type system so that
the parameter can be a { double, double }, but the CC register
annotations indicate that this should be passed via 4 integer registers.

This patch does not address the second problem in PR14059, which is the
reverse: when a struct alloca is loaded as a *larger* single integer.

This patch also does not address some of the code quality issues with
the FCA-splitting. Those don't actually impede any optimizations really,
but they're on my list to clean up.

[1]: Pedantic footnote: for those concerned about memory model issues
here, this is safe. For the alloca to be promotable, it cannot escape or
have any use of its address that could allow these loads or stores to be
racing. Thus, widening is always safe.

llvm-svn: 165928
2012-10-15 08:40:30 +00:00
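(A rough C++ sketch of the shape of the widening transform described above: two 32-bit halves are merged into one 64-bit value and reinterpreted as the double the alloca really holds. The buffer-free formulation and the little-endian ordering of the halves are assumptions for illustration only.)

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
  // Stand-in for an 8-byte double alloca that is only touched through two
  // 32-bit halves (the [4 x i32]-style ABI lowering described above).
  uint32_t Lo = 0x54442D18;     // low  half of pi's bit pattern
  uint32_t Hi = 0x400921FB;     // high half of pi's bit pattern

  // "Widen" the two partial stores into one 64-bit value, combining the
  // halves the way a little-endian target lays them out...
  uint64_t Widened = (uint64_t)Hi << 32 | Lo;

  // ...and "bitcast" the widened integer to the type the alloca really holds.
  double D;
  std::memcpy(&D, &Widened, sizeof D);
  std::printf("%f\n", D);       // prints 3.141593
}
```
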
Chandler Carruth aa6afbb831 Hoist the canConvertValue predicate and the convertValue transform out
into static helper functions. They're really quite generic and are going
to be needed elsewhere shortly.

llvm-svn: 165927
2012-10-15 08:40:22 +00:00
Chandler Carruth ba9319925e Teach SROA to cope with wrapper aggregates. These show up a lot in ABI
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.

The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.

This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring
while looking at stepanov_abstraction-generated code for ARM, and suspect it
happens elsewhere as well.

Note that this doesn't yet address the truly heinous IR productions that
PR14059 is concerned with. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.

llvm-svn: 165870
2012-10-13 10:49:33 +00:00
Chandler Carruth 482c61787c Speculatively harden the conversion logic. I have no idea if this will
help the dragonegg builders, and no test case at this point, but this
was one dimly plausible case I spotted by inspection. Hopefully will get
a testcase from those bots soon-ish, and will tidy this up with proper
testing.

llvm-svn: 165869
2012-10-13 10:49:30 +00:00
Chandler Carruth 0fb8a7787e Silence a warning in -assert builds.
llvm-svn: 165867
2012-10-13 05:09:27 +00:00
Chandler Carruth 891fec0b56 Clean up how we rewrite loads and stores to the whole alloca. When these
are single value types, the load and store should be directly based upon
the alloca and then bitcasting can fix the type as needed afterward.
This might in theory improve some of the IR coming out of SROA, but
I don't expect big changes yet and don't have any test cases on hand.
This is really just a cleanup/refactoring patch. The next patch will
cause this code path to be hit a lot more, actually get SROA to promote
more allocas, and include several more test cases.

llvm-svn: 165864
2012-10-13 02:41:05 +00:00
Micah Villmow 0c61134d8d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow 083189730e Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
Chandler Carruth 503eb2bb49 Fix PR14034, an infloop / heap corruption / crash bug in the new SROA.
Thanks to Benjamin for the raw test case. This one took about 50 times
longer to reduce than to fix. =/

llvm-svn: 165476
2012-10-09 01:58:35 +00:00
Micah Villmow cdfe20b97f Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
NAKAMURA Takumi 605fe78aca SROA.cpp: Fix a warning, [-Wunused-variable]
llvm-svn: 165309
2012-10-05 13:56:23 +00:00
Chandler Carruth e5b7a2ccd2 Teach the new SROA a new trick. Now we zap any memcpy or memmoves which
are in fact identity operations. We detect these and kill their
partitions so that even splitting is unaffected by them. This is
particularly important because Clang relies on emitting identity memcpy
operations for struct copies, and these fold away to constants very
often after inlining.

Fixes the last big performance FIXME I have on my plate.

llvm-svn: 165285
2012-10-05 01:29:09 +00:00
Chandler Carruth 90c4a3ae20 Lift the speculation visitor above all the helpers that are targeted at
the rewrite visitor to make the fact that the speculation is completely
independent a bit more clear.

I promise that this is just a cut/paste of the one visitor and adding
the anonymous namespace wrappings. The diff may look completely
preposterous; it does in git for some reason.

llvm-svn: 165284
2012-10-05 01:29:06 +00:00
Chandler Carruth ac8317fd36 Fix PR13969, a mini-phase-ordering issue with the new SROA pass.
Currently, we re-visit allocas when something changes about the way they
might be *split* to allow better scalarization to take place. However,
we weren't handling the case when the *promotion* is what would change
the behavior of SROA. When an address derived from an alloca is stored
into another alloca, we consider the first to have escaped. If the
second is ever promoted to an SSA value, we will suddenly be able to run
the SROA pass on the first alloca.

This patch adds explicit support for this form of iteration. When we
detect a store of a pointer derived from an alloca, we flag the
underlying alloca for reprocessing after promotion. The logic works hard
to only do this when there is definitely going to be promotion and it
might remove impediments to the analysis of the alloca.

Thanks to Nick for the great test case and Benjamin for some sanity
check review.

llvm-svn: 165223
2012-10-04 12:33:50 +00:00
Chandler Carruth 43c8b46deb Teach the integer-promotion rewrite strategy to be endianness aware.
Sorry for this being broken so long. =/

As part of this, switch all of the existing tests to be Little Endian,
which is the behavior I was asserting in them anyways! Add in a new
big-endian test that checks the interesting behavior there.

Another part of this is to tighten the rules about when we perform the
full-integer promotion. This logic now rejects cases where the fully
promoted integer is a non-multiple-of-8 bitwidth or cases where the
loads or stores touch bits which are in the allocated space of the
alloca but are not loaded or stored when accessing the integer. Sadly,
these aren't really observable today as the rest of the pass will
already ensure the invariants hold. However, the latter situation is
likely to become a potential concern in the future.

Thanks to Benjamin and Duncan for early review of this patch. I'm still
looking into whether there are further endianness issues, please let me
know if anyone sees BE failures persisting past this.

llvm-svn: 165219
2012-10-04 10:39:28 +00:00
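(A hedged sketch of the endianness-dependent shift computation underlying the fix above; the function names and the byte-based interface are illustrative, not the pass's actual code.)

```cpp
#include <cstdint>

// The shift needed to extract a NarrowBytes-wide access at ByteOffset within
// a WideBytes-wide integer depends on endianness: on little-endian targets the
// low-order bytes sit at offset 0, while on big-endian targets they sit at the
// end, so the shift must be computed "from the top".
static unsigned extractShift(unsigned WideBytes, unsigned NarrowBytes,
                             unsigned ByteOffset, bool IsBigEndian) {
  if (IsBigEndian)
    return 8 * (WideBytes - NarrowBytes - ByteOffset);
  return 8 * ByteOffset;
}

static uint64_t extractAt(uint64_t Wide, unsigned WideBytes,
                          unsigned NarrowBytes, unsigned ByteOffset,
                          bool IsBigEndian) {
  unsigned Shift = extractShift(WideBytes, NarrowBytes, ByteOffset, IsBigEndian);
  uint64_t Mask = NarrowBytes == 8 ? ~0ULL : ((1ULL << (8 * NarrowBytes)) - 1);
  return (Wide >> Shift) & Mask;
}
```
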
Chandler Carruth 08e5f49f90 Fix an issue where we failed to adjust the alignment constraint on
a memcpy to reflect that '0' has a different meaning when applied to
a load or store. Now we correctly use underaligned loads and stores for
the test case added.

llvm-svn: 165101
2012-10-03 08:26:28 +00:00
Chandler Carruth 4b2b38d398 Try to use a better set of abstractions for computing the alignment
necessary during rewriting. As part of this, fix a real think-o here
where we might have left off an alignment specification when the address
is in fact underaligned. I haven't come up with any way to trigger this,
as there is always some other factor that reduces the alignment, but it
certainly might have been an observable bug in some way I can't think
of. This also slightly changes the strategy for placing explicit
alignments on loads and stores to only do so when the alignment does not
match that required by the ABI. This causes a few redundant alignments
to go away from test cases.

I've also added a couple of tests that really push on the alignment that
we end up with on loads and stores. More to come here as I try to fix an
underlying bug I have conjectured and produced test cases for, although
it's not clear if this bug is the one currently hitting dragonegg's
gcc47 bootstrap.

llvm-svn: 165100
2012-10-03 08:14:02 +00:00
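(An illustrative sketch of the offset-aware alignment computation alluded to above; the helper names are hypothetical, though the lowest-set-bit trick mirrors the usual MinAlign idea.)

```cpp
#include <cstdint>

// A value at ByteOffset inside an object aligned to BaseAlign is only
// guaranteed the largest power of two that divides both the base alignment
// and the offset.
static uint64_t minAlign(uint64_t A, uint64_t B) {
  return (A | B) & (~(A | B) + 1);   // lowest set bit of A|B
}

static uint64_t alignmentAtOffset(uint64_t BaseAlign, uint64_t ByteOffset) {
  return ByteOffset == 0 ? BaseAlign : minAlign(BaseAlign, ByteOffset);
}
// e.g. a 16-byte-aligned alloca accessed at offset 4 only guarantees 4-byte
// alignment: alignmentAtOffset(16, 4) == 4. The rewriter would then emit an
// explicit alignment only when this differs from the type's ABI alignment.
```
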
Chandler Carruth 3f57b82979 Switch the SetVector::remove_if implementation to use partition which
preserves the values of the relocated entries, unlike remove_if. This
allows walking them and erasing them.

Also flesh out the predicate we are using for this to support the
various constraints actually imposed on a UnaryPredicate -- without this
we can't compose it with std::not1.

Thanks to Sean Silva for the review here and noticing the issue with
std::remove_if.

llvm-svn: 165073
2012-10-03 00:03:00 +00:00
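(A generic C++ illustration of the std::partition vs. std::remove_if distinction the commit relies on: partition keeps the to-be-removed values intact so they can still be walked before being erased. This is not the SetVector code itself.)

```cpp
#include <algorithm>
#include <vector>
#include <cstdio>

int main() {
  std::vector<int> V = {1, 2, 3, 4, 5, 6};
  auto IsOdd = [](int X) { return X % 2 != 0; };

  // std::remove_if only guarantees the kept elements; the tail holds
  // unspecified values, so the "removed" entries can't be inspected.
  // std::partition instead moves the to-be-removed elements to the back
  // with their values preserved, so they can be walked before erasing.
  auto Mid = std::partition(V.begin(), V.end(),
                            [&](int X) { return !IsOdd(X); });
  for (auto I = Mid; I != V.end(); ++I)
    std::printf("removing %d\n", *I);  // the real values (1, 3, 5, in some order)
  V.erase(Mid, V.end());               // now actually erase them
}
```
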
Chandler Carruth b09f0a3c75 Teach the new SROA to handle cases where an alloca that has already been
scheduled for processing on the worklist eventually gets deleted while
we are processing another alloca, fixing the original test case in
PR13990.

To facilitate this, add a remove_if helper to the SetVector abstraction.
It's not easy to use the standard abstractions for this because of the
specifics of SetVectors types and implementation.

Finally, a nice small test case is included. Thanks to Benjamin for the
fantastic reduced test case here! All I had to do was delete some empty
basic blocks!

llvm-svn: 165065
2012-10-02 22:46:45 +00:00
Chandler Carruth 6c3890b680 Fix another crasher in SROA, reported by Joel.
We require that the indices into the use lists are stable in order to
build fast lookup tables to locate a particular partition use from an
operand of a PHI or select. This is (obviously in hindsight)
incompatible with erasing elements from the array. Really, we don't want
to erase anyways. It is expensive, and a rare operation. Instead, simply
weaken the contract of the PartitionUse structure to allow null Use
pointers to represent dead uses. Now we can clear out the pointer to
mark things as dead, and all it requires is adding some 'continue'
checks to the various loops.

I'm still reducing a test case for this, as the test case I have is
huge. I think I can get a nice test case for this one though, as it was
much more deterministic.

llvm-svn: 165032
2012-10-02 18:57:13 +00:00
Chandler Carruth 3903e05244 Fix a silly coding error on my part. The whole point of the speculator
being separate was that it can grow the use list. As a consequence, we
can't use the iterator-pair interface, we need an index based interface.
Expose such an interface from the AllocaPartitioning, and use it in the
speculator.

This should at least fix a use-after-free bug found by Duncan, and may
fix some of the other crashers.

I don't have a nice deterministic test case yet, but if I get a good
one, I'll add it.

llvm-svn: 165027
2012-10-02 17:49:47 +00:00
Chandler Carruth d71ef3a02a Make this plural. Spotted by Duncan in review (and a very old typo, this
is the second time I've moved this comment around...)

llvm-svn: 164939
2012-10-01 12:24:42 +00:00
Chandler Carruth d325f8021b Prune some unnecessary includes.
llvm-svn: 164938
2012-10-01 12:21:54 +00:00
Chandler Carruth 176ca71a82 Fix several issues with alignment. We weren't always accounting for type
alignment requirements of the new alloca. As one consequence which was
reported as a bug by Duncan, we overaligned memcpy calls to ranges of
allocas after they were rewritten to types with lower alignment
requirements. Other consequences are possible, but I don't have any test
cases for them.

llvm-svn: 164937
2012-10-01 12:16:54 +00:00
Chandler Carruth 82a57543d6 Factor the PHI and select speculation into a separate rewriter. This
could probably be factored still further to hoist this logic into
a generic helper, but currently I don't have particularly clean ideas
about how to handle that.

This at least allows us to drop custom load rewriting from the
speculation logic, which in turn allows the existing load rewriting
logic to fire. In theory, this could enable vector promotion or other
tricks after speculation occurs, but I've not dug into such issues. This
is primarily just cleaning up the factoring of the code and the
resulting logic.

llvm-svn: 164933
2012-10-01 10:54:05 +00:00
Chandler Carruth 54e8f0b4cf Refactor the PartitionUse structure to actually use the Use* instead of
a pair of instructions, one for the used pointer and the second for the
user. This simplifies the representation and also makes it more dense.

This was noticed because of the miscompile in PR13926. In that case, we
were running up against a fundamental "bad idea" in the speculation of
PHI and select instructions: the speculation and rewriting are
interleaved, which requires phi speculation to also perform load
rewriting! This is bad, and causes us to miss opportunities to do (for
example) vector rewriting only exposed after PHI speculation, etc etc.
It also, in the old system, required us to insert *new* load uses into
the current partition's use list, which would then be ignored during
rewriting because we had already extracted an end iterator for the use
list. The appending behavior (and many of the other oddities) stems from
the strange de-duplication strategy in the PartitionUse builder.
Amusingly, all this went without notice for so long because it could
only be triggered by having *different* GEPs into the same partition of
the same alloca, where both different GEPs were operands of a single
PHI, and where the GEP which was not encountered first also had multiple
uses within that same PHI node... Hence the insane steps required to
reproduce.

So, step one in fixing this fundamental bad idea is to make the
PartitionUse actually contain a Use*, and to make the builder do proper
deduplication instead of funky de-duplication. This is enough to remove
the appending behavior, and fix the miscompile in PR13926, but there is
more work to be done here. Subsequent commits will lift the speculation
into its own visitor. It'll be a useful step toward potentially
extracting all of the speculation logic into a generic utility
transform.

The existing PHI test case for repeated operands has been made more
extreme to catch even these issues. This test case, run through the old
pass, will exactly reproduce the miscompile from PR13926. ;] We were so
close here!

llvm-svn: 164925
2012-10-01 01:49:22 +00:00
Chandler Carruth 903790eff5 Fix a somewhat surprising miscompile where code relying on an ABI
alignment could lose it due to the alloca type moving down to a much
smaller alignment guarantee.

Now SROA will actively compute a proper alignment, factoring the target
data, any explicit alignment, and the offset within the struct. This
will in some cases lower the alignment requirements, but when we lower
them below those of the type, we drop the alignment entirely to give
freedom to the code generator to align it however is convenient.

Thanks to Duncan for the lovely test case that pinned this down. =]

llvm-svn: 164891
2012-09-29 10:41:21 +00:00
Chandler Carruth 208124f5a2 Analogous fix to memset and memcpy rewriting. Don't have a test case
contrived for these yet, as I spotted them by inspection and the test
cases are a bit more tricky to phrase.

llvm-svn: 164691
2012-09-26 10:59:22 +00:00
Chandler Carruth 3e4273dd0c When rewriting the pointer operand to a load or store which has
alignment guarantees attached, re-compute the alignment so that we
consider offsets which impact alignment.

llvm-svn: 164690
2012-09-26 10:45:28 +00:00
Chandler Carruth 871ba7249c Teach all of the loads, stores, memsets and memcpys created by the
rewriter in SROA to carry a proper alignment. This involves
interrogating various sources of alignment, etc. This is a more complete
and principled fix to PR13920 as well as related bugs pointed out by Eli
in review and by inspection in the area.

Also by inspection fix the integer and vector promotion paths to create
aligned loads and stores. I still need to work up test cases for
these... Sorry for the delay, they were found purely by inspection.

llvm-svn: 164689
2012-09-26 10:27:46 +00:00
Chandler Carruth 4bd8f66ed9 Revert the business end of r164636 and try again. I'll come in again. ;]
This should really, really fix PR13916. For real this time. The
underlying bug is... a bit more subtle than I had imagined.

The setup is a code pattern that leads to an @llvm.memcpy call with two
equal pointers to an alloca in the source and dest. Now, not any pattern
will do. The alloca needs to be formed just so, and both pointers should
be wrapped in different bitcasts etc. When this precise pattern hits,
a funny sequence of events transpires. First, we correctly detect the
potential for overlap, and correctly optimize the memcpy. The first
time. However, we do simplify the set of users of the alloca, and that
causes us to run the alloca back through the SROA pass in case there are
knock-on simplifications. At this point, a curious thing has happened.
If we happen to have an i8 alloca, we have direct i8 pointer values. So
we don't bother creating a cast, we rewrite the arguments to the memcpy
to directly refer to the alloca.

Now, in an unrelated area of the pass, we have clever logic which
ensures that when visiting each User of a particular pointer derived
from an alloca, we only visit that User once, and directly inspect all
of its operands which refer to that particular pointer value. However,
the mechanism used to detect memcpy's with the potential to overlap
relied upon getting visited once per *Use*, not once per *User*. This is
always true *unless* the same exact value is both source and dest. It
turns out that almost nothing actually produces that pattern though.

We can hand craft test cases that more directly test this behavior of
course, and those are included. Also, note that there is a significant
missed optimization here -- we prove in many cases that there is
a non-volatile memcpy call with identical source and dest addresses. We
shouldn't prevent splitting the alloca in that case, and in fact we
should just remove such memcpy calls eagerly. I'll address that in
a subsequent commit.

llvm-svn: 164669
2012-09-26 07:41:40 +00:00
Nick Lewycky d9f7910671 Don't drop the alignment on a memcpy intrinsic when producing a store. This is
only a missed optimization opportunity if the store is over-aligned, but a
miscompile if the store's new type has a higher natural alignment than the
memcpy did. Fixes PR13920!

llvm-svn: 164641
2012-09-25 22:46:21 +00:00
Nick Lewycky a0c16aee0a Revert the business end of r164634, and replace it with a different fix. The
reason we were getting two of the same alloca is because of a memmove/memcpy
which had the same alloca in both the src and dest. Now we detect that case
directly. This has the same testcase as before, but fixes a clang test
CodeGenObjC/exceptions.m which runs clang -O2.

llvm-svn: 164636
2012-09-25 21:50:37 +00:00
Nick Lewycky 9f19349846 Don't try to promote the same alloca twice. Fixes PR13916!
Chandler, it's not obvious that it's okay that this alloca gets into the list
twice to begin with. Please review and see whether this is the fix you really
want, but I wanted to get a fix checked in quickly.

llvm-svn: 164634
2012-09-25 21:15:50 +00:00
Chandler Carruth 8b907e8acb Fix a case where SROA did not correctly detect dead PHI or selects due
to chains or cycles between PHIs and/or selects. Also add a couple of
really nice test cases reduced from Kostya's reports in PR13905 and
PR13906. Both are fixed by this patch.

llvm-svn: 164596
2012-09-25 10:03:40 +00:00
Chandler Carruth 2603a18769 Fix a crash in SROA. This was reported independently by Takumi and
David (I think), but I would appreciate folks verifying that this fixes
the big crasher.

I'm still working on a reduced test case, but because this was causing
problems I wanted to get the fix checked in quickly.

llvm-svn: 164585
2012-09-25 02:42:03 +00:00
Chandler Carruth 92924fd28f Address one of the original FIXMEs for the new SROA pass by implementing
integer promotion analogous to vector promotion. When there is an
integer alloca being accessed both as its integer type and as a narrower
integer type, promote the narrower access to "insert" and "extract" the
smaller integer from the larger one, and make the integer alloca
a candidate for promotion.

In the new formulation, we don't care about target legal integer or use
thresholds to control things. Instead, we only perform this promotion to
an integer type which the frontend has already emitted a load or store
for. This bounds the scope and prevents optimization passes from
coalescing larger and larger entities into a single integer.

llvm-svn: 164479
2012-09-24 00:34:20 +00:00
Chandler Carruth e7a1ba5e8b Switch to a signed representation for the dynamic offsets while walking
across the uses of the alloca. It's entirely possible for negative
numbers to come up here, and in some rare cases simply doing the 2's
complement arithmetic isn't the correct decision. Notably, we can't zext
the index of the GEP. The definition of GEP is that these offsets are
sign extended or truncated to the size of the pointer, and then wrapping
2's complement arithmetic used.

This patch fixes an issue that comes up with *no* input from the
buildbots or bootstrap afaict. The only place where it manifested,
disturbingly, is Clang's own regression test suite. A reduced and
targeted collection of tests are added to cope with this. Note that I've
tried to pin down the potential cases of overflow, but may have missed
some cases. I've tried to add a few cases to test this, but it's hard
because LLVM has quite limited support for >64-bit constructs.

llvm-svn: 164475
2012-09-23 11:43:14 +00:00
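(A small C++ illustration of the sign-extension point above: an i32 GEP index of -1 must become a 64-bit offset of -1, not a zero-extended 0xFFFFFFFF. The concrete values are made up.)

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // A GEP index is sign extended (or truncated) to the pointer width before
  // the 2's complement offset arithmetic is performed.
  int32_t Index = -1;
  uint64_t ElementSize = 8;

  int64_t GoodOffset = (int64_t)Index * (int64_t)ElementSize;    // -8
  uint64_t BadOffset = (uint64_t)(uint32_t)Index * ElementSize;  // zext: huge

  std::printf("signed: %lld, zero-extended: %llu\n",
              (long long)GoodOffset, (unsigned long long)BadOffset);
}
```
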
Chandler Carruth 225d4bdb07 Fix a case where the new SROA pass failed to zap dead operands to
selects with a constant condition. This resulted in the operands
remaining live through the SROA rewriter. Most of the time, this just
caused some dead allocas to persist and get zapped by later passes, but
in one case found by Joerg, it caused a crash when we tried to *promote*
the alloca despite it having this dead use. We already have the
mechanisms in place to handle this, just wire select up to them.

llvm-svn: 164427
2012-09-21 23:36:40 +00:00
Chandler Carruth 3f882d4cf5 Fix the last crasher I've gotten a reproduction for in SROA. This one
from the dragonegg build bots when we turned on the full version of the
pass. Included a much reduced test case for this pesky bug, despite
bugpoint's uncooperative behavior.

Also, I audited all the similar code I could find and didn't spot any
other cases where this mistake cropped up.

llvm-svn: 164178
2012-09-18 22:37:19 +00:00
Chandler Carruth d356fd02a9 Fix getCommonType in a different way from the way I fixed it when
working on FCA splitting. Instead of refusing to form a common type when
there are uses of a subsection of the alloca as well as a use of the
entire alloca, just skip the subsection uses and continue looking for
a whole-alloca use with a type that we can use.

This produces slightly prettier IR I think, and also fixes the other
failure in the test.

llvm-svn: 164146
2012-09-18 17:49:37 +00:00
Benjamin Kramer a59ef5795d Fix build for compilers that don't understand injected class names properly.
llvm-svn: 164142
2012-09-18 17:11:47 +00:00
Benjamin Kramer 73a9e4a1f9 SROA: Use CRTP for OpSplitter to get rid of virtual dispatch and the virtual-dtor warnings that come with it.
llvm-svn: 164140
2012-09-18 17:06:32 +00:00
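(A generic CRTP sketch of the pattern referenced above: the base class dispatches through a static_cast instead of a virtual call, so no vtable, virtual dispatch, or virtual destructor is needed. The splitter names and methods are illustrative, not the actual OpSplitter interface.)

```cpp
#include <cstdio>

// The base class calls into the derived class through a static_cast, which is
// resolved at compile time.
template <typename DerivedT> struct OpSplitterBase {
  void splitAll(int NumOps) {
    for (int I = 0; I < NumOps; ++I)
      static_cast<DerivedT *>(this)->emitSplitOp(I);
  }
};

struct LoadOpSplitter : OpSplitterBase<LoadOpSplitter> {
  void emitSplitOp(int I) { std::printf("split load #%d\n", I); }
};

struct StoreOpSplitter : OpSplitterBase<StoreOpSplitter> {
  void emitSplitOp(int I) { std::printf("split store #%d\n", I); }
};

int main() {
  LoadOpSplitter().splitAll(2);
  StoreOpSplitter().splitAll(2);
}
```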