rewriter in SROA to carry a proper alignment. This involves
interrogating various sources of alignment information. This is a more
complete and principled fix for PR13920, as well as for related bugs
pointed out by Eli in review and found by inspection in the area.
Also fix the integer and vector promotion paths to create aligned loads
and stores. I still need to work up test cases for these; sorry for the
delay, they were found purely by inspection.
llvm-svn: 164689
tables in bitmaps when they fit in a target-legal register.
This saves some space, and it also allows for building tables that would
otherwise be deemed too sparse.
One interesting case that this hits is example 7 from
http://blog.regehr.org/archives/320. We currently generate good code
for this when lowering the switch to the selection DAG: we build a
bitmask to decide whether to jump to one block or the other. My patch
will result in the same bitmask, but it removes the need for the jump,
as the return value can just be retrieved from the mask.
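For illustration, here is a paraphrase of the shape of that example
together with the bitmap technique (the switch body and the constants
below are reconstructed for this sketch, not copied from the blog post):

    // Membership test over a small dense range of characters.
    bool is_special(unsigned char c) {
      switch (c) {
      case '!': case ',': case '.': case ':': case ';': case '?':
        return true;
      default:
        return false;
      }
    }

    // The six one-bit results span indices 0..30 once '!' (33) is
    // subtracted, so the whole table fits in a register-sized bitmap:
    // a shift and a mask replace both the table load and the jump.
    bool is_special_bitmap(unsigned char c) {
      unsigned idx = (unsigned)c - '!';
      if (idx > (unsigned)('?' - '!'))   // out of range: not in the set
        return false;
      const unsigned bits = (1u << 0)             // '!'
                          | (1u << (',' - '!'))   // bit 11
                          | (1u << ('.' - '!'))   // bit 13
                          | (1u << (':' - '!'))   // bit 25
                          | (1u << (';' - '!'))   // bit 26
                          | (1u << ('?' - '!'));  // bit 30
      return (bits >> idx) & 1u;
    }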
llvm-svn: 164684
For example, under a Linux chroot, /proc/ might not be mounted.
Therefore, we test whether this file exists. If it does, we use it (the
current behavior). Otherwise, we fall back to the detection method used
by *BSD.
The issue has been reported initially on the Debian bug tracker:
http://bugs.debian.org/674588
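A sketch of the resulting logic (the helper names here are illustrative
stand-ins, not the actual functions touched by the patch):

    #include <sys/stat.h>

    // Assumed stand-ins for the real Linux and *BSD detection paths.
    long detectViaProcFilesystem(const char *procPath);
    long detectViaBSDInterface();

    long query(const char *procPath) {
      struct stat buf;
      if (stat(procPath, &buf) == 0)               // /proc may be absent,
        return detectViaProcFilesystem(procPath);  // e.g. in a chroot
      return detectViaBSDInterface();              // fall back as on *BSD
    }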
llvm-svn: 164676
This should really, really fix PR13916. For real this time. The
underlying bug is... a bit more subtle than I had imagined.
The setup is a code pattern that leads to an @llvm.memcpy call with two
equal pointers to an alloca in the source and dest. Now, not any pattern
will do. The alloca needs to be formed just so, and both pointers should
be wrapped in different bitcasts etc. When this precise pattern hits,
a funny sequence of events transpires. First, we correctly detect the
potential for overlap, and correctly optimize the memcpy. The first
time. However, we do simplify the set of users of the alloca, and that
causes us to run the alloca back through the SROA pass in case there are
knock-on simplifications. At this point, a curious thing has happened.
If we happen to have an i8 alloca, we have direct i8 pointer values, so
we don't bother creating a cast; we rewrite the arguments to the memcpy
to refer directly to the alloca.
Now, in an unrelated area of the pass, we have clever logic which
ensures that when visiting each User of a particular pointer derived
from an alloca, we only visit that User once, and directly inspect all
of its operands which refer to that particular pointer value. However,
the mechanism used to detect memcpy calls with the potential to overlap
relied upon getting visited once per *Use*, not once per *User*. This is
always true *unless* the same exact value is both source and dest. It
turns out that almost nothing actually produces that pattern though.
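In current LLVM API terms, the distinction looks like this (a minimal
sketch, not code from the patch):

    #include "llvm/IR/Value.h"
    #include <iterator>

    // For a call like memcpy(%p, %p, n), the call is one User of %p but
    // holds two Uses of it (the dest operand and the src operand).
    void illustrate(llvm::Value *Ptr) {
      unsigned NumUsers = std::distance(Ptr->user_begin(), Ptr->user_end());
      unsigned NumUses  = std::distance(Ptr->use_begin(), Ptr->use_end());
      // Self-memcpy: NumUsers == 1, NumUses == 2. Logic keyed off Users
      // fires once; logic that needs every Use must iterate the uses.
    }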
We can, of course, hand-craft test cases that more directly exercise
this behavior, and those are included. Also, note that there is a significant
missed optimization here -- we prove in many cases that there is
a non-volatile memcpy call with identical source and dest addresses. We
shouldn't prevent splitting the alloca in that case, and in fact we
should just remove such memcpy calls eagerly. I'll address that in
a subsequent commit.
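For reference, a hedged reconstruction of the kind of source that
produces a self-memcpy on an i8 alloca (the actual reproducer in PR13916
is more involved, and front-end folding may hide this simplified form):

    #include <cstring>

    void trigger() {
      char c = 0;
      // Both operands lower to the i8 alloca itself: with an i8 alloca
      // there is no bitcast, so source and dest are the exact same Value.
      std::memcpy(&c, &c, 1);
      (void)c;
    }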
llvm-svn: 164669
scalar-to-vector conversion that we cannot handle. For instance, when an invalid
constraint is used in an inline asm statement.
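As an illustration only (a hypothetical input, not the case from the
radar below), a tied constraint can demand such a conversion:

    typedef float v4f32 __attribute__((vector_size(16)));

    // The "0" constraint ties the scalar float input to the vector
    // output, demanding a scalar-to-vector conversion the backend
    // cannot perform.
    v4f32 bad(float f) {
      v4f32 out;
      __asm__("" : "=x"(out) : "0"(f));
      return out;
    }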
<rdar://problem/12284092>
llvm-svn: 164662
- Instead of embedding 'lock' into the mnemonic of each atomic
instruction except 'xchg', teach the X86 assembly printer to output the
'lock' prefix in a manner consistent with the code emitter.
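For example, an atomic RMW such as the following lowers to a
lock-prefixed instruction, and the prefix is now printed by the assembly
printer rather than being baked into a per-instruction mnemonic (the
output shown is approximate):

    // Lowers on x86 to something like:  lock  addl $1, (%rdi)
    void add_one(int *p) {
      __sync_fetch_and_add(p, 1);
    }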
llvm-svn: 164659
typeid (and a couple of other non-standard places where we can
transform an unevaluated expression into an evaluated expression) is
special because it introduces an expression evaluation context, which
conflicts with the mechanism for computing the current lambda mangling
context. PR12123.
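A hedged guess at the shape of the trigger (the actual PR12123 test case
may differ): typeid of a polymorphic glvalue is an evaluated operand, so
a lambda inside it needs a mangling number from the enclosing context.

    #include <typeinfo>

    struct Poly { virtual ~Poly() {} };

    // The lambda sits inside an *evaluated* typeid operand, because
    // the operand is a glvalue of polymorphic type.
    inline const std::type_info &probe(Poly &p) {
      return typeid(*[&p]() -> Poly * { return &p; }());
    }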
I would appreciate it if someone could double-check that we get the
mangling correct with this patch.
llvm-svn: 164658
scalar-to-vector conversion that we cannot handle. For instance, when an invalid
constraint is used in an inline asm statement.
<rdar://problem/12284092>
llvm-svn: 164657
enough information so we can mangle them correctly in cases involving
dependent parameter types. (This specifically impacts cases involving
null pointers and cases involving parameters of reference type.)
Fix the mangler to use this information instead of trying to scavenge
it out of the parameter declaration.
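A sketch of the two cases named above (a hypothetical reduction, not the
commit's test case):

    // Null pointer as a non-type template argument whose parameter
    // type T* is dependent:
    template <typename T, T *P> struct PtrArg {};
    PtrArg<int, nullptr> null_case;

    // Reference-type parameter T& with a dependent type:
    int global_obj;
    template <typename T, T &R> struct RefArg {};
    RefArg<int, global_obj> ref_case;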
<rdar://problem/12296776>.
llvm-svn: 164656
The Apple buildbots are set up to pass --target to configure for both
cross- and non-cross-compile builds, and the standard autoconf response
to this is to set the program prefix to '<target>-'. Until we can figure
out the proper way to handle this (don't pass --target? pass an explicit
--program-prefix=""? don't auto-populate program_prefix with target_alias?)
it's more important to keep the buildbots running.
This reverts r164633 / ba48ceb1a3802e20e781ef04ea2573ffae2ac414.
llvm-svn: 164651
only a missed optimization opportunity if the store is over-aligned, but a
miscompile if the store's new type has a higher natural alignment than the
memcpy had. Fixes PR13920!
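To make the hazard concrete, a sketch (not the PR13920 reproducer):
copying four bytes to a 1-byte-aligned location, where a rewritten i32
store must not assume the type's natural alignment.

    #include <cstring>

    struct __attribute__((packed)) Packed {
      char tag;
      int value;   // at offset 1: only 1-byte aligned
    };

    // If the 4-byte memcpy is rewritten to a plain i32 store, assuming
    // the i32's natural 4-byte alignment would miscompile on targets
    // that trap on misaligned stores.
    void set(Packed *p, int v) {
      std::memcpy(&p->value, &v, sizeof(int));
    }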
llvm-svn: 164641