"%z = %x and %y". If GVN can prove that %y equals %x, then it turns
this into "%z = %x and %x". With the new code, %z will be replaced
with %x everywhere (and then deleted). Previously %z would be value
numbered too, which was a waste of time. Also, while a clever value
numbering algorithm would give %z the same value number as %x, our
current one doesn't (at least I don't think it does). The new logic
has essentially the same effect as giving %z the same value number as
%x, i.e. it should make value numbering smarter. While there, get hold
of the target data once at the start rather than a gazillion times all
over the place.
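A minimal sketch of the replace-and-delete step described above, assuming
roughly the SimplifyInstruction/replaceAllUsesWith interfaces of this era;
the helper name and surrounding structure are illustrative, not the actual
GVN code:

  #include "llvm/Analysis/InstructionSimplify.h"
  #include "llvm/Instruction.h"
  #include "llvm/Target/TargetData.h"

  using namespace llvm;

  // If I folds to an existing value (e.g. "%z = and %x, %x" folds to %x),
  // rewrite every use of I and delete it, so it never needs to be value
  // numbered at all.  TD is fetched once by the caller, not per call.
  static bool replaceWithSimplifiedValue(Instruction *I, const TargetData *TD) {
    if (Value *V = SimplifyInstruction(I, TD)) {
      I->replaceAllUsesWith(V); // every use of %z becomes %x
      I->eraseFromParent();     // the dead instruction goes away
      return true;
    }
    return false;
  }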
llvm-svn: 118923
We only produce debug line information if we have seen a line directive, so
this code is dead. Also, if we want to be bug-for-bug compatible with
gas and sometimes produce "empty" .debug_line sections, this will
match the content produced by gas.
llvm-svn: 118914
catastrophic compilation time in the event of unreasonable LLVM
IR. Code quality is a separate issue; someone upstream needs to do a
better job of reducing to llvm.memcpy. If the situation can be reproduced
with any supported frontend, then it will be a separate bug.
llvm-svn: 118904
With the addition of 2 enum values for Neon vectors, this field must now
hold 6 different values and so requires 3 bits. Make the NumElements field
one bit smaller to compensate.
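A hypothetical sketch of the bitfield tradeoff described above; the struct
and field names are illustrative, not the actual class layout:

  // Six possible vector kinds no longer fit in 2 bits, so the kind field
  // grows to 3 bits and the element count gives up one bit to keep the
  // total width unchanged.
  struct VectorTypeBitfields {
    unsigned VecKind : 3;
    unsigned NumElements : 29;
  };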
llvm-svn: 118900
support for the case where alignment < value size.
These cases were silently miscompiled before this patch.
Now they are handled, though the generated code is overly verbose
(especially for stores), and any front-end should still avoid
misaligned memory accesses as much as possible. The bit-juggling
algorithm added here probably still has some room for improvement.
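A rough model, in plain C++, of the kind of bit juggling involved when a
load's alignment is smaller than the value size: fetch the value one byte
at a time and reassemble it with shifts and ors. This only illustrates the
idea and is not the actual legalizer expansion:

  #include <cstdint>

  // Reassemble a 32-bit value from four byte-sized, byte-aligned loads.
  // Little-endian byte order is assumed here purely for the example.
  static uint32_t loadUnaligned32(const uint8_t *P) {
    uint32_t V = 0;
    for (unsigned i = 0; i != 4; ++i)
      V |= uint32_t(P[i]) << (8 * i);
    return V;
  }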
llvm-svn: 118889
parameters to the Transform*Type functions and instead call out
the specific cases where an object type and the unqualified lookup
results are important. Fixes an assert and failed compile on
a testcase from PR7248.
llvm-svn: 118887
needs to use the current pc and current offset in two ways: to
determine which function we are currently executing, and to decide
how much of that function has executed so far. For the former use,
we need to back up the saved pc value by one byte if we're going to
find the correct function's unwind information -- we may be executing
a CALL instruction at the end of a function, in which case the following
instruction belongs to a different function, or we may be looking at
unwind information which covers only the call instruction and not the
subsequent instruction. But when deciding which row of an UnwindPlan
to apply, we want to use the actual byte offset into the function, not
the byte offset - 1.
Right now RegisterContextLLDB is tracking both the "real" offset and
an "offset minus one"; different parts of the class have to know
which one to use, and the two need to be updated/set in tandem. I want
to revisit this at some point.
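A simplified sketch of the two uses of the saved pc described above; the
names here are invented for illustration and are not LLDB's actual API:

  #include <cstdint>

  struct SavedPCUses {
    uint64_t lookup_pc;  // used to find the function / FDE that covers us
    uint64_t offset_pc;  // used to pick which UnwindPlan row to apply
  };

  // A return address points at the instruction after the call, which may
  // belong to the next function or fall outside the FDE that covers the
  // call, so back it up by one byte before searching for unwind info; but
  // keep the unmodified pc for computing the offset into the function.
  static SavedPCUses classifySavedPC(uint64_t saved_pc, bool is_return_address) {
    SavedPCUses U;
    U.lookup_pc = is_return_address ? saved_pc - 1 : saved_pc;
    U.offset_pc = saved_pc;
    return U;
  }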
The second change is in how eh_frame information is looked up; it was
formerly done by searching for the start address of the function we
are currently executing. But it is possible to have unwind information
for a function which covers only a small section of the function's
address range, in which case looking up by the start pc value may not
find the eh_frame FDE.
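Illustrative only: looking up an FDE by range containment rather than by
exact start-address match. The FDE type and fields here are assumptions,
not the real eh_frame parser:

  #include <cstdint>
  #include <vector>

  struct FDE {
    uint64_t start;   // first address this FDE covers
    uint64_t length;  // number of bytes it covers
  };

  // Accept any FDE whose range contains pc, instead of requiring one whose
  // start equals the function's start address.
  static const FDE *findFDEForPC(const std::vector<FDE> &table, uint64_t pc) {
    for (size_t i = 0; i != table.size(); ++i)
      if (pc >= table[i].start && pc < table[i].start + table[i].length)
        return &table[i];
    return 0;
  }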
The hand-written _sigtramp() unwind info on Mac OS X, which covers
exactly one instruction in the middle of the function, happens to
trigger both of these issues.
I still need to get the UnwindPlan runner to handle arbitrary DWARF
expressions in the FDE, but there's a good chance it will be easy to
reuse the DWARFExpression class to do this.
llvm-svn: 118882
Now we explicitly memset all of its values.
This bug was uncovered by the 'Index/recursive-cxx-member-calls.cpp' test, which triggered an
assertion failure on an i386 Darwin build of clang. Adding that test case back since the assertion is now resolved.
llvm-svn: 118881