instruction. This makes it re-materializable.
Thumb2 will split it back out into two instructions so the IT pass will generate
the right mask. Also, this exposes opportunities to optimize the movw to a 16-bit move.
llvm-svn: 82982
physical registers. This is especially critical for the latter two since they
start the live interval of a super-register. e.g.
%D0<def> = INSERT_SUBREG %D0<undef>, %S0<kill>, 1
If this instruction is eliminated, the register scavenger will not be happy as
D0 is not defined previously.
This fixes PR5055.
llvm-svn: 82968
the PassManager code into a regular verifyAnalysis method.
Also, reorganize loop verification. Make the LoopPass infrastructure
call verifyLoop as needed instead of having LoopInfo::verifyAnalysis
check every loop in the function after each loop pass. Add a new
command-line argument, -verify-loop-info, to enable the expensive
full checking.
llvm-svn: 82952
code that stops the timer doesn't have to search for the timer
object before stopping it. This avoids a lock acquisition
and a few other things done with the timer running.
llvm-svn: 82949
LoopPasses for that loop. This avoids trouble with the PassManager
trying to call verifyAnalysis on them, and frees up some memory
sooner rather than later.
llvm-svn: 82945
used to support GlobalVariables storing MDNodes, back when they were derived
from Constant before the introduction of NamedMDNode, but never removed.
llvm-svn: 82943
that are phi nodes. Also tighten up FoldOpIntoPhi to treat constantexpr
operands to phis just like other variables, avoiding moving constantexpr
computations around.
Patch by Daniel Dunbar.
llvm-svn: 82913
aren't in canonical loop-simplify form, since it doesn't itself depend
on LoopSimplify. This means handling loops without preheaders and loops
with multiple backedges.
llvm-svn: 82905
test whether it properly dominates the loop header. This is equivalent
when the loop has a preheader, and has the advantage of working when
the loop doesn't have a preheader. Since IVUsers doesn't Require
LoopSimplify, the loop isn't guaranteed to have a preheader.
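For reference, a minimal sketch of the check described above, assuming the usual
LLVM Loop and DominatorTree APIs (the helper name is made up, and the header
paths are from the LLVM of this era, so they may differ in other versions):

  #include "llvm/Analysis/Dominators.h"
  #include "llvm/Analysis/LoopInfo.h"

  using namespace llvm;

  // Hypothetical helper: is a value defined in DefBB usable as a
  // loop-invariant operand for loop L?
  static bool definedOutsideAndAbove(BasicBlock *DefBB, Loop *L,
                                     DominatorTree &DT) {
    // Old form: only accept a definition in the preheader, which does not
    // exist when the loop is not in loop-simplify form.
    //   BasicBlock *Preheader = L->getLoopPreheader();
    //   return Preheader && DefBB == Preheader;

    // New form: accept anything that properly dominates the loop header.
    // This is equivalent when a preheader exists and still works when it
    // does not.
    return DT.properlyDominates(DefBB, L->getHeader());
  }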
llvm-svn: 82899
which have no defs anywhere in the function. In particular, this fixes sinking
of instructions that reference RIP on x86-64, which is currently being modeled
as a register.
llvm-svn: 82815
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
This eliminates MachineInstr's std::list member and allows the data to be
created by isel and live for the remainder of codegen, avoiding a lot of
copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
fields with the same information. This introduces some redundancy, but
it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
the greatest alignment (a sketch of this rule follows below).
Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.
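A tiny sketch of the CSE rule from the last bullet above; the names and types
here are made up for illustration, not LLVM's:

  #include <algorithm>

  // Simplified view: two loads are redundant if they read the same location,
  // ignoring their alignment claims; the surviving load keeps the greatest
  // alignment either one carried.
  struct LoadDesc {
    const void *Addr;
    unsigned Size;   // bytes read
    unsigned Align;  // alignment this load claims
  };

  static bool isRedundantLoad(const LoadDesc &A, const LoadDesc &B) {
    return A.Addr == B.Addr && A.Size == B.Size;   // alignment not compared
  }

  static unsigned mergedAlign(const LoadDesc &A, const LoadDesc &B) {
    return std::max(A.Align, B.Align);             // remember the greatest
  }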
llvm-svn: 82794
naming scheme used in SelectionDAG, where there are multiple kinds
of "target" nodes, but "machine" nodes are nodes which represent
a MachineInstr.
llvm-svn: 82790
allows appropriate backends to generate a sqrt instruction.
On x86, this isn't done at -O0 because we go through
FastISel instead. This is a behavior change from before
this series of sqrt patches started. I think this is OK
considering that compile speed is most important at -O0, but
could be convinced otherwise.
llvm-svn: 82778
For the AAPCS ABI, SP must always be 4-byte aligned, and at any "public
interface" it must be 8-byte aligned. For the older ARM APCS ABI, the stack
alignment is just always 4 bytes. For X86, we currently align SP at
entry to a function (e.g., to 16 bytes for Darwin), but no stack alignment
is needed at other times, such as for a leaf function.
After discussing this with Dan, I decided to go with the approach of adding
a new "TransientStackAlignment" field to TargetFrameInfo. This value
specifies the stack alignment that must be maintained even in between calls.
It defaults to 1 except for ARM, where it is 4. (Some other targets may
also want to set this if they have similar stack requirements. It's not
currently required for PPC because it sets targetHandlesStackFrameRounding
and handles the alignment in target-specific code.) The existing StackAlignment
value specifies the alignment upon entry to a function, which is how we've
been using it anyway.
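A small illustration of the two values, with made-up names; this is a sketch of
the idea, not the TargetFrameInfo interface itself:

  #include <cassert>

  // Two alignment requirements per target, mirroring the description above.
  struct FrameAlignments {
    unsigned StackAlignment;          // required at function entry / public interfaces
    unsigned TransientStackAlignment; // must hold even between calls
  };

  // Round a stack size up to a required alignment (power of two).
  static unsigned alignTo(unsigned Size, unsigned Align) {
    assert(Align && (Align & (Align - 1)) == 0 && "alignment must be a power of 2");
    return (Size + Align - 1) & ~(Align - 1);
  }

  // Values matching the text: ARM AAPCS keeps SP 4-byte aligned at all times
  // and 8-byte aligned at public interfaces; most targets only need a
  // transient alignment of 1 (x86/Darwin entry alignment shown as 16).
  static const FrameAlignments ARM_AAPCS = { 8, 4 };
  static const FrameAlignments X86Darwin = { 16, 1 };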
llvm-svn: 82767
interest for this, as it currently reserves a register rather than using
the scavenger for materializing constants as needed.
Instead of scavenging registers on the fly while eliminating frame indices,
new virtual registers are created, and then scavenged collectively in a
post-pass over the function. This isolates the bits that need to interact
with the scavenger, and sets the stage for more intelligent use, and reuse,
of scavenged registers.
For the time being, this is disabled by default. Once the bugs are worked out,
the current scavenging calls in replaceFrameIndices() will be removed and
the post-pass scavenging will be the default. Until then,
-enable-frame-index-scavenging enables the new code. Currently, only the
Thumb1 back end is set up to use it.
llvm-svn: 82734
LocalAreaOffset. (We don't have any of those right now.)
PEI::calculateFrameObjectOffsets includes the absolute value of the
LocalAreaOffset in the cumulative offset value used to calculate the
stack frame size. It then adds the raw value of the LocalAreaOffset
to the stack size. For a StackGrowsDown target, that raw value is negative
and has the effect of cancelling out the absolute value that was added
earlier, but that obviously won't work for a StackGrowsUp target. Change
to subtract the absolute value of the LocalAreaOffset.
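A worked example of the arithmetic, assuming a hypothetical LocalAreaOffset of
+/-8 and 24 bytes of frame objects:

  #include <cstdlib>   // std::abs

  static int frameSizeOld(int ObjectBytes, int LocalAreaOffset) {
    int Offset = std::abs(LocalAreaOffset) + ObjectBytes; // cumulative offset
    return Offset + LocalAreaOffset;                      // old: add the raw value
  }

  static int frameSizeNew(int ObjectBytes, int LocalAreaOffset) {
    int Offset = std::abs(LocalAreaOffset) + ObjectBytes;
    return Offset - std::abs(LocalAreaOffset);            // new: subtract |LocalAreaOffset|
  }

  // StackGrowsDown, LocalAreaOffset = -8: old = 24, new = 24 (both correct).
  // StackGrowsUp,   LocalAreaOffset = +8: old = 40 (wrong), new = 24 (correct).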
llvm-svn: 82693
LiveVariables adds implicit kills to correctly track partial register kills. This works well enough and is fairly accurate. But the coalescer can make it impossible to maintain these markers. e.g.
BL <ga:sss1>, %R0<kill,undef>, %S0<kill>, %R0<imp-def>, %R1<imp-def,dead>, %R2<imp-def,dead>, %R3<imp-def,dead>, %R12<imp-def,dead>, %LR<imp-def,dead>, %D0<imp-def>, ...
...
%reg1031<def> = FLDS <cp#1>, 0, 14, %reg0, Mem:LD4[ConstantPool]
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When reg1031 and S0 are coalesced, the copy (FCPYS) will be eliminated and the implicit-kill of D0 is lost. In this case it's possible to move the marker to the FLDS. But in many cases, this is not possible. Suppose
%reg1031<def> = FOO <cp#1>, %D0<imp-def>
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When FCPYS goes away, the definition of S0 is the "FOO" instruction. However, transferring the D0 implicit-kill to FOO doesn't work since it is the def of D0 itself. We need to fix this at some point by introducing a "kill" pseudo instruction to track liveness.
Disabling the assertion is not ideal, but the machine verifier is doing that job now. It's important to note that a double-def is not a miscomputation: it means a register that should be free is not being tracked as free. It's a performance issue, not a correctness issue.
llvm-svn: 82677
The machine code verifier did not check for explicit operands correctly. It
used MachineInstr::getNumExplicitOperands, but that method may cheat and use
the declared count in the TargetInstrDesc.
Now we check the explicit operands one at a time in visitMachineOperand.
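A rough sketch of the direct count; the helper is made up, and only the
MachineOperand predicates are LLVM's:

  #include "llvm/CodeGen/MachineInstr.h"  // era-appropriate header path

  // Count explicit operands by inspection instead of trusting the declared
  // operand count in the instruction descriptor.
  static unsigned countExplicitOperands(const llvm::MachineInstr &MI) {
    unsigned N = 0;
    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
      const llvm::MachineOperand &MO = MI.getOperand(i);
      // Implicit register operands (imp-use / imp-def) are not explicit.
      if (MO.isReg() && MO.isImplicit())
        continue;
      ++N;
    }
    return N;
  }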
llvm-svn: 82652
default implementation. Update comment on the default version, which made it
sound like most targets override it. Currently only X86 and SystemZ override
this method.
llvm-svn: 82651
of the defs are processed.
Also fix an implicit_def propagation bug: an implicit_def of a physical register
should be applied to uses of the sub-registers.
llvm-svn: 82616
two different places for printing MachineMemOperands.
Drop the virtual from Value::dump and instead give Value a
protected virtual hook that can be overridden by subclasses
to implement custom printing. This lets printing be more
consistent, and simplifies printing of PseudoSourceValue
values.
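The pattern, as a standalone sketch with made-up names (LLVM's actual stream
type is raw_ostream, not std::ostream):

  #include <iostream>
  #include <ostream>

  class ValueLike {
  public:
    // Public printing entry points are non-virtual...
    void print(std::ostream &OS) const { printCustom(OS); }
    void dump() const { print(std::cerr); std::cerr << '\n'; }
    virtual ~ValueLike() {}

  protected:
    // ...and subclasses customize output through a protected virtual hook.
    virtual void printCustom(std::ostream &OS) const { OS << "<value>"; }
  };

  class PseudoSourceValueLike : public ValueLike {
  protected:
    void printCustom(std::ostream &OS) const { OS << "<pseudo source value>"; }
  };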
llvm-svn: 82599
- This also fixes a dereference of std::string::end, which makes MSVC unhappy and was causing all the static analyzer clang tests to fail.
llvm-svn: 82517
static const class member into each translation unit, with external linkage???
- If someone understands this issue better, please clue me in, I haven't
consulted the standard yet.
llvm-svn: 82516
This is designed for tracking a value even when it might move (like WeakVH), but it is an error to delete the referenced value (unlike WeakVH). TrackingVH is templated, like AssertingVH, on the tracked Value subclass; it is an error to RAUW a tracked value to an incompatible type.
For implementation reasons the latter error is only diagnosed on accesses to a mis-RAUWed TrackingVH, because we don't want a virtual interface in a templated class.
The former error is also only diagnosed on access, so that clients are allowed to delete a tracked value, as long as they don't use it. This makes it easier for the client to reason about destruction.
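A conceptual stand-in for the semantics described above (not LLVM's ValueHandle
machinery; the names and the indirection are made up):

  #include <cassert>

  struct Value { int Kind; };   // stand-in for llvm::Value with a fake RTTI tag

  // The handle reads its pointee through an indirection, so the tracked value
  // may be moved or replaced behind its back. Deletion and incompatible RAUW
  // are only diagnosed when the handle is actually accessed.
  template <typename ValueTy>
  class TrackingHandle {
    Value **Slot;        // where the "current" value lives
    int ExpectedKind;    // the subclass the handle was instantiated for

  public:
    TrackingHandle(Value **S, int Kind) : Slot(S), ExpectedKind(Kind) {}

    ValueTy *get() const {
      Value *V = *Slot;
      assert(V && "tracked value was deleted and then accessed");
      assert(V->Kind == ExpectedKind &&
             "tracked value was RAUWed to an incompatible type");
      return static_cast<ValueTy *>(V);
    }
  };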
llvm-svn: 82506
%S0<def> = EXTRACT_SUBREG %Q0<kill>, 1
to
%S0<def> = IMPLICIT_DEF %Q0<imp-use,kill>
An implicit_def does not *read* any register, so the operand should be marked "implicit". The missing "implicit" marker on the operand is wrong, but it doesn't actually break anything.
llvm-svn: 82503
take into consideration that the result of an invoke is only valid in
the normal dest, not the unwind dest. This caused 'PHINode::hasConstantValue'
to return true in an invalid situation, causing mem2reg to delete a phi that
was actually needed. This caused a crash building 483.xalancbmk.
llvm-svn: 82491
variable increment / decrement a slightly higher priority.
This has a major impact on some micro-benchmarks. On MultiSource/Applications
and SPEC tests, it's a minor win. It also reduces 256.bzip2's instruction count
by 8%, and 164.gzip's by 55, on i386 / Darwin.
llvm-svn: 82485
And fix a bug with the behavior of min/max instructions formed from
fcmp uge comparisons.
Also, use FiniteOnlyFPMath() for this code instead of UnsafeFPMath,
as it is more specific.
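A small standalone example of why NaNs matter here (illustrative, not the
transformed IR):

  #include <cmath>
  #include <cstdio>
  #include <limits>

  // "uge" is an unordered comparison: it is true whenever either operand is
  // NaN, so "x uge y ? x : y" picks the NaN, unlike a libm-style max.
  static double selectUGE(double x, double y) {
    bool uge = !(x < y);   // !(ordered less-than) == unordered-or-greater-equal
    return uge ? x : y;
  }

  int main() {
    double nan = std::numeric_limits<double>::quiet_NaN();
    std::printf("select: %f  fmax: %f\n", selectUGE(nan, 1.0), std::fmax(nan, 1.0));
    // Prints nan for the select and 1.000000 for fmax; the two only agree
    // when NaNs cannot occur, hence FiniteOnlyFPMath.
    return 0;
  }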
llvm-svn: 82466
from a piece of a large store when both are in the same block.
This allows clang to compile the testcase in PR4216 to this code:
_test_bitfield:
movl 4(%esp), %eax
movl %eax, %ecx
andl $-65536, %ecx
orl $32962, %eax
andl $40186, %eax
orl %ecx, %eax
ret
This is not ideal, but is a whole lot better than the code produced
by llvm-gcc:
_test_bitfield:
movw $-32574, %ax
orw 4(%esp), %ax
andw $-25350, %ax
movw %ax, 4(%esp)
movw 7(%esp), %cx
shlw $8, %cx
movzbl 6(%esp), %edx
orw %cx, %dx
movzwl %dx, %ecx
shll $16, %ecx
movzwl %ax, %eax
orl %ecx, %eax
ret
and dramatically better than that produced by gcc 4.2:
_test_bitfield:
pushl %ebx
call L3
"L00000000001$pb":
L3:
popl %ebx
movl 8(%esp), %eax
leal 0(,%eax,4), %edx
sarb $7, %dl
movl %eax, %ecx
andl $7168, %ecx
andl $-7201, %ebx
movzbl %dl, %edx
andl $1, %edx
sall $5, %edx
orl %ecx, %ebx
orl %edx, %ebx
andl $24, %eax
andl $-58336, %ebx
orl %eax, %ebx
orl $32962, %ebx
movl %ebx, %eax
popl %ebx
ret
llvm-svn: 82439
feature, either build the JIT in debug mode to enable it by default or pass
-jit-emit-debug to lli.
Right now, the only debug information that this communicates to GDB is call
frame information, since it's already being generated to support exceptions in
the JIT. Eventually, when DWARF generation isn't tied so tightly to AsmPrinter,
it will be easy to push that information to GDB through this interface.
Here's a step-by-step breakdown of how the feature works:
- The JIT generates the machine code and DWARF call frame info
(.eh_frame/.debug_frame) for a function into memory.
- The JIT copies that info into an in-memory ELF file with a symbol for the
function.
- The JIT creates a code entry pointing to the ELF buffer and adds it to a
linked list hanging off of a global descriptor at a special symbol that GDB
knows about (the descriptor layout is sketched after this list).
- The JIT calls a function marked noinline that GDB knows about and has put an
internal breakpoint in.
- GDB catches the breakpoint and reads the global descriptor to look for new
code.
- When it sees there is new code, GDB reads the ELF from the inferior's memory and
adds it to itself as an object file.
- The JIT continues, and the next time we stop the program, we are able to
produce a proper backtrace.
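The "global descriptor at a special symbol" and the noinline hook in the list
above are the standard GDB JIT registration interface. A sketch of that
interface, following GDB's documentation rather than the LLVM code itself:

  #include <cstdint>

  extern "C" {

  typedef enum {
    JIT_NOACTION = 0,
    JIT_REGISTER_FN,
    JIT_UNREGISTER_FN
  } jit_actions_t;

  struct jit_code_entry {
    jit_code_entry *next_entry;
    jit_code_entry *prev_entry;
    const char *symfile_addr;       // in-memory ELF image for one function
    std::uint64_t symfile_size;
  };

  struct jit_descriptor {
    std::uint32_t version;
    std::uint32_t action_flag;      // what just changed (register / unregister)
    jit_code_entry *relevant_entry;
    jit_code_entry *first_entry;    // head of the linked list GDB walks
  };

  // GDB looks up these two symbols by name in the inferior.
  jit_descriptor __jit_debug_descriptor = { 1, JIT_NOACTION, 0, 0 };

  // Kept out of line (GCC/Clang attribute) so GDB can reliably place its
  // internal breakpoint here; the JIT calls it after updating the descriptor.
  __attribute__((noinline)) void __jit_debug_register_code() {}

  } // extern "C"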
Consider running the following program through the JIT:
#include <stdio.h>

void baz(short z) {
  long w = z + 1;
  printf("%ld, %x\n", w, *((int*)NULL)); // SEGFAULT here
}

void bar(short y) {
  int z = y + 1;
  baz(z);
}

void foo(char x) {
  short y = x + 1;
  bar(y);
}

int main(int argc, char** argv) {
  char x = 1;
  foo(x);
}
Here is a backtrace before this patch:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x2aaaabdfbd10 (LWP 25476)]
0x00002aaaabe7d1a8 in ?? ()
(gdb) bt
#0 0x00002aaaabe7d1a8 in ?? ()
#1 0x0000000000000003 in ?? ()
#2 0x0000000000000004 in ?? ()
#3 0x00032aaaabe7cfd0 in ?? ()
#4 0x00002aaaabe7d12c in ?? ()
#5 0x00022aaa00000003 in ?? ()
#6 0x00002aaaabe7d0aa in ?? ()
#7 0x01000002abe7cff0 in ?? ()
#8 0x00002aaaabe7d02c in ?? ()
#9 0x0100000000000001 in ?? ()
#10 0x00000000014388e0 in ?? ()
#11 0x00007fff00000001 in ?? ()
#12 0x0000000000b870a2 in llvm::JIT::runFunction (this=0x1405b70,
F=0x14024e0, ArgValues=@0x7fffffffe050)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/JIT/JIT.cpp:395
#13 0x0000000000baa4c5 in llvm::ExecutionEngine::runFunctionAsMain
(this=0x1405b70, Fn=0x14024e0, argv=@0x13f06f8, envp=0x7fffffffe3b0)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/ExecutionEngine.cpp:377
#14 0x00000000007ebd52 in main (argc=2, argv=0x7fffffffe398,
envp=0x7fffffffe3b0) at /home/rnk/llvm-gdb/tools/lli/lli.cpp:208
And a backtrace after this patch:
Program received signal SIGSEGV, Segmentation fault.
0x00002aaaabe7d1a8 in baz ()
(gdb) bt
#0 0x00002aaaabe7d1a8 in baz ()
#1 0x00002aaaabe7d12c in bar ()
#2 0x00002aaaabe7d0aa in foo ()
#3 0x00002aaaabe7d02c in main ()
#4 0x0000000000b870a2 in llvm::JIT::runFunction (this=0x1405b70,
F=0x14024e0, ArgValues=...)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/JIT/JIT.cpp:395
#5 0x0000000000baa4c5 in llvm::ExecutionEngine::runFunctionAsMain
(this=0x1405b70, Fn=0x14024e0, argv=..., envp=0x7fffffffe3c0)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/ExecutionEngine.cpp:377
#6 0x00000000007ebd52 in main (argc=2, argv=0x7fffffffe3a8,
envp=0x7fffffffe3c0) at /home/rnk/llvm-gdb/tools/lli/lli.cpp:208
llvm-svn: 82418
so that nonlocal and partially redundant loads can use it as well.
The testcase shows examples of craziness this can handle. This triggers
*many* times in 176.gcc.
llvm-svn: 82403
(and load -> load) when the base pointers must alias but when
they are different types. This occurs very very frequently in
176.gcc and other code that uses bitfields a lot.
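For illustration (a made-up example, not the 176.gcc code), the pattern is a
store and a narrower load through the same base pointer but at different
types, which is what bitfield accesses typically lower to:

  #include <cstdint>
  #include <cstring>

  struct Flags { std::uint32_t word; };

  static std::uint8_t firstByte(Flags *F, std::uint32_t NewWord) {
    F->word = NewWord;               // 32-bit store
    std::uint8_t B;
    std::memcpy(&B, F, sizeof(B));   // 1-byte load of the same address,
                                     // i.e. a load of a different type
    return B;                        // forwardable from the stored value
  }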
llvm-svn: 82399
U lib/CodeGen/AsmPrinter/DwarfException.cpp
U lib/CodeGen/AsmPrinter/DwarfException.h
--- Reverse-merging r82274 into '.':
U lib/Target/TargetLoweringObjectFile.cpp
G lib/CodeGen/AsmPrinter/DwarfException.cpp
These revisions were breaking everything.
llvm-svn: 82396
1. Change some "\n" -> '\n'.
2. eliminate some std::string's by using raw_ostream::indent.
3. move a bunch of code out of the main arg parser routine into
a new static HandlePrefixedOrGroupedOption function.
4. Greatly simplify the implementation of getOptionPred, and make
it avoid splitting prefix options at = when that doesn't match
a non-prefix option.
llvm-svn: 82362