stack slots and giving them different PseudoSourceValue's did not fix the
problem of post-alloc scheduling miscompiling llvm itself.
- Apply Dan's conservative workaround by assuming any non-fixed stack slot can
alias other memory locations. This means a load from spill slot #1 cannot
move above a store to spill slot #2 (see the sketch below).
- Enable post-alloc scheduling for x86 at optimization level Default and above.
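As a rough illustration of the workaround (hypothetical types, not the actual
PseudoSourceValue interfaces), the dependence check only disambiguates fixed
stack objects:

  struct StackRef {
    bool IsFixedSlot;   // fixed stack object (e.g. incoming argument area)
    int  FrameIndex;
  };

  // Conservative rule: only two fixed slots can be proven disjoint. Anything
  // rooted at an ordinary spill slot is assumed to alias all other memory,
  // so the scheduler keeps the load-from-#1 / store-to-#2 dependence.
  bool mayAlias(const StackRef &A, const StackRef &B) {
    if (A.IsFixedSlot && B.IsFixedSlot)
      return A.FrameIndex == B.FrameIndex;
    return true;
  }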
llvm-svn: 84424
I can see with the original code was that I forgot that this runs after
type legalization and hence the result type will always be i32. (Custom
legalization of EXTRACT_VECTOR_ELT is only enabled for vector types with
8- and 16-bit elements.)
Regarding the FIXME comment: any information about sign and zero-extension
should be captured by separate extension operations. The DAG combiner should
handle those to produce either VGETLANEu or VGETLANEs, and that seems to be
working now. If there are cases that we're missing, let me know.
llvm-svn: 84218
1. Emit external function type information for all COFF targets since it's
a feature of the object format
2. Emit linker directives only for cygming (since this is ld-specific stuff)
llvm-svn: 84214
In the case where there are no good places to put constants and we fall back
upon inserting unconditional branches to make new blocks, allow all constant
pool references in range of those blocks to put constants there, even if that
means resetting the "high water marks" for those references. This will still
terminate because you can't keep splitting blocks forever, and in the bad
cases where we have to split blocks, it is important to avoid splitting more
than necessary.
llvm-svn: 84202
as expressions, code for parsing a few ARM-specific directives (still needs
the MCStreamer calls for these). Some cleanup of the operand parsing code
and some added comments.
llvm-svn: 84201
When ARMConstantIslandPass cannot find any good locations (i.e., "water") to
place constants, it falls back to inserting unconditional branches to make a
place to put them. My recent change exposed a problem in this area. We may
sometimes append more than one unconditional branch to the same block. The
symptoms are a branch to an undefined label in the generated assembly and a
segfault when running llc with -debug.
This happens more easily since my change to prevent CPEs from moving from
lower to higher addresses as the algorithm iterates, but it could have
happened before. The end of the block may be in range for various constant
pool references, but the insertion point for new CPEs is not right at the end
of the block -- it is at the end of the CPEs that have already been placed
at the end of the block. The insertion point could be out of range. When
that happens, the fallback code will always append another unconditional
branch if the end of the block is in range.
The fix is to only append an unconditional branch if the block does not
already end with one. I also removed a check to see if the constant pool load
instruction is at the end of the block, since that is redundant with
checking if the end of the block is in-range.
There is more to be done here, but I think this fixes the immediate problem.
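A minimal sketch of the fixed check (a toy model, not the actual
ARMConstantIslands code):

  #include <vector>

  enum class InstKind { Other, CondBranch, UncondBranch };

  struct Block { std::vector<InstKind> Insts; };

  // Only split by appending another unconditional branch (to make room for a
  // new island) when the block does not already end with one.
  bool needsNewUncondBranch(const Block &B) {
    return B.Insts.empty() || B.Insts.back() != InstKind::UncondBranch;
  }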
llvm-svn: 84172
(for uses marked kill and defs marked dead) a few instructions in
addition to forwards. Also, increase the maximum number of instructions
to scan, as it appears to help in a fair number of cases.
llvm-svn: 84061
Also fixed a couple of coding style things that crept in. And added more
to the temporary hacked up ARMAsmParser::MatchInstruction() method for testing.
llvm-svn: 84040
before its reference is only supported on ARM has not been true for a while.
In fact, until recently, that was only supported for Thumb. Besides that,
CPEs are always a multiple of 4 bytes in size, so inserting a CPE should have
no effect on Thumb alignment.
llvm-svn: 83916
MultiSource/Benchmarks/MiBench/automotive-susan test. The failure has
since been masked by an unrelated change (just randomly), so I don't have
a testcase for this now. Radar 7291928.
The situation where this happened is that a constant pool entry (CPE) was
placed at a lower address than the load that referenced it. There were in
fact 2 CPEs placed at adjacent addresses and referenced by 2 loads that were
close together in the code. The distance from the loads to the CPEs was
right at the limit of what they could handle, so that only one of the CPEs
could be placed within range. On every iteration, the first CPE was found
to be out of range, causing a new CPE to be inserted. The second CPE had
been in range but the newly inserted entry pushed it too far away. Thus the
second CPE was also replaced by a new entry, which in turn pushed the first
CPE out of range. Etc.
Judging from some comments in the code, the initial implementation of this
pass did not support CPEs placed _before_ their references. In the case
where the CPE is placed at a higher address, the key to making the algorithm
terminate is that new CPEs are only inserted at the end of a group of adjacent
CPEs. This is implemented by removing a basic block from the "WaterList"
once it has been used, and then adding the newly inserted CPE block to the
list so that the next insertion will come after it. This avoids the ping-pong
effect where CPEs are repeatedly moved to the beginning of a group of
adjacent CPEs. This does not work when going backwards, however, because the
entries at the end of an adjacent group of CPEs are closer than the CPEs
earlier in the group.
To make this pass terminate, we need to maintain a property that changes can
only happen in some sort of monotonic fashion. The fix used here is to require
that the CPE for a particular constant pool load can only move to lower
addresses. This is a very simple change to the code and should not cause
any significant degradation in the results.
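A minimal sketch of the monotonicity rule (hypothetical names; the offsets
stand in for the pass's placement bookkeeping):

  #include <cstdint>

  // When choosing a new home for the CPE that serves a given constant pool
  // load, only accept placements at strictly lower addresses than the entry's
  // current one, so the iteration cannot ping-pong back and forth.
  bool acceptNewPlacement(uint64_t CurrentCPEOffset, uint64_t NewCPEOffset) {
    return NewCPEOffset < CurrentCPEOffset;
  }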
llvm-svn: 83902
bootstrap of FSF-style PPC, so there is some
reason to believe the original bug (which was
never analyzed) has been fixed, probably by
82266.
llvm-svn: 83871
it to hold the address of an sret return value, for x86-64 ABI purposes.
Also, fix the test that was originally intended to test this to actually
test it, using FileCheck.
llvm-svn: 83853
lists. Changed ARMAsmParser::MatchRegisterName to return -1 instead of 0 on
errors so 0-15 values could be returned as register numbers. Also added the
rest of the arm register names to the currently hacked up version to allow more
testing. Some changes to ARMAsmParser::ParseOperand to give different errors
for things not yet supported and some additions to the hacked
ARMAsmParser::MatchInstruction to allow more testing for now.
llvm-svn: 83673
when one of the bits being tested would end up being the sign bit in the
narrower type, and a signed comparison is being performed, since this would
change the result of the signed comparison. This fixes PR5132.
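A standalone illustration of the hazard (not the PR5132 test case): once the
tested bit becomes the sign bit of the narrower type, a signed comparison can
flip.

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint32_t x = 0x8000;  // bit 15 set
    // Wide signed comparison: 0x8000 is positive as a 32-bit value.
    bool wide = (int32_t)(x & 0x8000) > 0;    // true
    // Narrowed to 16 bits, that same bit is the sign bit, so the masked
    // value is negative and the comparison flips.
    bool narrow = (int16_t)(x & 0x8000) > 0;  // false
    std::printf("%d %d\n", wide, narrow);     // prints "1 0"
    return 0;
  }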
llvm-svn: 83670
with writeback, things like "sp!", etc. Also added some more stuff to the
temporarily hacked methods ARMAsmParser::MatchRegisterName and
ARMAsmParser::MatchInstruction to allow more parser testing.
llvm-svn: 83477
implementations with a new MachineInstr::isInvariantLoad, which uses
MachineMemOperands and is target-independent. This brings MachineLICM
and other functionality to targets which previously lacked an
isInvariantLoad implementation.
llvm-svn: 83475
a virtual register to eliminate a frame index, it can return that register
and the constant stored there to PEI to track. When scavenging to allocate
for those registers, PEI then tracks the last-used register and value, and
if it is still available and matches the value for the next index, reuses
the existing value rather than re-materializing it, and removes the
re-materialization instructions.
Fancier tracking and adjustment of scavenger allocations to keep more
values live for longer is possible, but not yet implemented and would likely
be better done via a different, less special-purpose, approach to the
problem.
eliminateFrameIndex() is modified so the target implementations can return
the registers they wish to be tracked for reuse.
ARM Thumb1 implements and utilizes the new mechanism. All other targets are
simply modified to adjust for the changed eliminateFrameIndex() prototype.
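A minimal sketch of the reuse policy (hypothetical types; the real bookkeeping
lives in PEI alongside the target's eliminateFrameIndex()):

  #include <cstdint>

  // The last scavenged register and the constant re-materialized into it.
  struct TrackedMaterialization {
    unsigned Reg = 0;   // 0 means nothing is tracked
    int64_t  Value = 0;
  };

  // If the next frame index needs the same constant and the register has not
  // been clobbered since, drop the fresh re-materialization and reuse it.
  bool canReuse(const TrackedMaterialization &T, int64_t NeededValue,
                bool RegStillAvailable) {
    return T.Reg != 0 && RegStillAvailable && T.Value == NeededValue;
  }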
llvm-svn: 83467
operands. Some parsing of ARM memory operands for preindexing and postindexing
forms, including with register-controlled shifts. This is a work in progress.
llvm-svn: 83424
verbose-asm mode, print comments instead. This eliminates a non-comment
difference between verbose-asm mode and non-verbose-asm mode.
Also, factor out the relevant code out of all the targets and into
target-independent code.
llvm-svn: 83392
spill slot. When frame references are via the frame pointer, they will be
negative, but Thumb1 load/store instructions only allow positive immediate
offsets. Instead, Thumb1 will spill to R12.
llvm-svn: 83336
the new predicates I added) instead of going through a context and doing a
pointer comparison. Besides being cheaper, this allows a smart compiler
to turn the if sequence into a switch.
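A generic illustration of the shape change (not the commit's diff): comparing
an enumerated kind lets the optimizer build a switch, whereas comparing
pointers fetched through a context cannot be turned into one.

  enum class TypeKind { Float, Double, FP128, Integer, Pointer, Other };

  // A chain of kind tests like this can be lowered to a switch/jump table.
  bool isFloatingPointKind(TypeKind K) {
    return K == TypeKind::Float || K == TypeKind::Double ||
           K == TypeKind::FP128;
  }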
llvm-svn: 83297
to emit target-specific things at the beginning of the asm output. This
fixes a problem for PPC, where the text sections are not being kept together
as expected. The base class doInitialization code calls DW->BeginModule()
which emits a bunch of DWARF section directives. The PPC doInitialization
code then emits all the TEXT section directives, with the intention that they
will be kept together. But as I understand it, the Darwin assembler treats
the default TEXT section as a special case and moves it to the beginning of
the file, which means that all those DWARF sections are in the middle of
the text. With this change, the EmitStartOfAsmFile hook is called before
the DWARF section directives are emitted, so that all the PPC text section
directives come out right at the beginning of the file.
llvm-svn: 83176
section directives. This causes the assembler to put the text sections at
the beginning of the object file, which helps work around a limitation of the
Darwin ARM relocations. Radar 7255355.
llvm-svn: 83127
unused DECLARE instruction.
KILL is not yet used anywhere, it will replace TargetInstrInfo::IMPLICIT_DEF
in the places where IMPLICIT_DEF is just used to alter liveness of physical
registers.
llvm-svn: 83006
instruction. This makes it re-materializable.
Thumb2 will split it back out into two instructions so the IT pass will generate
the right mask. Also, this exposes opportunities to optimize the movw to a
16-bit move.
llvm-svn: 82982
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
This eliminates MachineInstr's std::list member and allows the data to be
created by isel and live for the remainder of codegen, avoiding a lot of
copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
fields with the same information. This introduces some redundancy, but
it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
the greatest alignment.
Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.
llvm-svn: 82794
naming scheme used in SelectionDAG, where there are multiple kinds
of "target" nodes, but "machine" nodes are nodes which represent
a MachineInstr.
llvm-svn: 82790
For the AAPCS ABI, SP must always be 4-byte aligned, and at any "public
interface" it must be 8-byte aligned. For the older ARM APCS ABI, the stack
alignment is just always 4 bytes. For X86, we currently align SP at
entry to a function (e.g., to 16 bytes for Darwin), but no stack alignment
is needed at other times, such as for a leaf function.
After discussing this with Dan, I decided to go with the approach of adding
a new "TransientStackAlignment" field to TargetFrameInfo. This value
specifies the stack alignment that must be maintained even in between calls.
It defaults to 1 except for ARM, where it is 4. (Some other targets may
also want to set this if they have similar stack requirements. It's not
currently required for PPC because it sets targetHandlesStackFrameRounding
and handles the alignment in target-specific code.) The existing StackAlignment
value specifies the alignment upon entry to a function, which is how we've
been using it anyway.
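A sketch of what the two values express (assumed field names mirroring the
description above, not the exact TargetFrameInfo declaration):

  struct FrameAlignmentInfo {
    unsigned StackAlignment;           // required on entry to a function
    unsigned TransientStackAlignment;  // required at all times, even between calls
  };

  // x86 Darwin: 16 bytes at entry, no requirement in between.
  const FrameAlignmentInfo X86Darwin = { 16, 1 };
  // ARM AAPCS: 8 bytes at any public interface, 4 bytes at all times.
  const FrameAlignmentInfo ARMAAPCS = { 8, 4 };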
llvm-svn: 82767
interest for this, as it currently reserves a register rather than using
the scavenger for materializing constants as needed.
Instead of scavenging registers on the fly while eliminating frame indices,
new virtual registers are created and then scavenged collectively in a
post-pass over the function. This isolates the bits that need to interact
with the scavenger, and sets the stage for more intelligent use, and reuse,
of scavenged registers.
For the time being, this is disabled by default. Once the bugs are worked out,
the current scavenging calls in replaceFrameIndices() will be removed and
the post-pass scavenging will be the default. Until then,
-enable-frame-index-scavenging enables the new code. Currently, only the
Thumb1 back end is set up to use it.
llvm-svn: 82734
default implementation. Update comment on the default version, which made it
sound like most targets override it. Currently only X86 and SystemZ override
this method.
llvm-svn: 82651
And fix a bug with the behavior of min/max instructions formed from
fcmp uge comparisons.
Also, use FiniteOnlyFPMath() for this code instead of UnsafeFPMath,
as it is more specific.
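A standalone illustration (not the commit's code) of why the unordered "uge"
form needs care when folding a compare+select into a max: with a NaN operand
the ordered and unordered comparisons disagree, so the fold is only safe when
NaNs are known to be absent (FiniteOnlyFPMath), not merely under UnsafeFPMath.

  #include <cmath>
  #include <cstdio>

  int main() {
    float x = NAN, y = 1.0f;
    // Ordered comparison is false for NaN, so this select yields y (1.0).
    float ordered = (x >= y) ? x : y;
    // "uge" (unordered or greater-or-equal) is true when either operand is
    // NaN, so the same select shape yields x (NaN) instead.
    bool uge = !(x < y);
    float unordered = uge ? x : y;
    std::printf("%f %f\n", ordered, unordered);  // 1.000000 nan
    return 0;
  }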
llvm-svn: 82466
feature, either build the JIT in debug mode to enable it by default or pass
-jit-emit-debug to lli.
Right now, the only debug information that this communicates to GDB is call
frame information, since it's already being generated to support exceptions in
the JIT. Eventually, when DWARF generation isn't tied so tightly to AsmPrinter,
it will be easy to push that information to GDB through this interface.
Here's a step-by-step breakdown of how the feature works:
- The JIT generates the machine code and DWARF call frame info
(.eh_frame/.debug_frame) for a function into memory.
- The JIT copies that info into an in-memory ELF file with a symbol for the
function.
- The JIT creates a code entry pointing to the ELF buffer and adds it to a
linked list hanging off of a global descriptor at a special symbol that GDB
knows about (see the sketch after this breakdown).
- The JIT calls a function marked noinline that GDB knows about and has put an
internal breakpoint in.
- GDB catches the breakpoint and reads the global descriptor to look for new
code.
- When it sees there is new code, it reads the ELF from the inferior's memory and
adds it to itself as an object file.
- The JIT continues, and the next time we stop the program, we are able to
produce a proper backtrace.
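The descriptor and code-entry layout follows GDB's documented JIT interface;
here is a rough C++ sketch (the standard structure and symbol names are taken
as an assumption about this commit's exact definitions):

  #include <cstdint>

  struct jit_code_entry {
    jit_code_entry *next_entry;
    jit_code_entry *prev_entry;
    const char *symfile_addr;   // in-memory ELF image for one function
    uint64_t symfile_size;
  };

  struct jit_descriptor {
    uint32_t version;           // 1
    uint32_t action_flag;       // register / unregister / no action
    jit_code_entry *relevant_entry;
    jit_code_entry *first_entry;
  };

  extern "C" {
    // GDB puts an internal breakpoint on this noinline, do-nothing function.
    void __attribute__((noinline)) __jit_debug_register_code() {}
    // The global descriptor GDB reads when that breakpoint fires.
    jit_descriptor __jit_debug_descriptor = { 1, 0, nullptr, nullptr };
  }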
Consider running the following program through the JIT:
#include <stdio.h>
void baz(short z) {
  long w = z + 1;
  printf("%ld, %x\n", w, *((int*)NULL)); // SEGFAULT here
}
void bar(short y) {
  int z = y + 1;
  baz(z);
}
void foo(char x) {
  short y = x + 1;
  bar(y);
}
int main(int argc, char** argv) {
  char x = 1;
  foo(x);
}
Here is a backtrace before this patch:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x2aaaabdfbd10 (LWP 25476)]
0x00002aaaabe7d1a8 in ?? ()
(gdb) bt
#0 0x00002aaaabe7d1a8 in ?? ()
#1 0x0000000000000003 in ?? ()
#2 0x0000000000000004 in ?? ()
#3 0x00032aaaabe7cfd0 in ?? ()
#4 0x00002aaaabe7d12c in ?? ()
#5 0x00022aaa00000003 in ?? ()
#6 0x00002aaaabe7d0aa in ?? ()
#7 0x01000002abe7cff0 in ?? ()
#8 0x00002aaaabe7d02c in ?? ()
#9 0x0100000000000001 in ?? ()
#10 0x00000000014388e0 in ?? ()
#11 0x00007fff00000001 in ?? ()
#12 0x0000000000b870a2 in llvm::JIT::runFunction (this=0x1405b70,
F=0x14024e0, ArgValues=@0x7fffffffe050)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/JIT/JIT.cpp:395
#13 0x0000000000baa4c5 in llvm::ExecutionEngine::runFunctionAsMain
(this=0x1405b70, Fn=0x14024e0, argv=@0x13f06f8, envp=0x7fffffffe3b0)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/ExecutionEngine.cpp:377
#14 0x00000000007ebd52 in main (argc=2, argv=0x7fffffffe398,
envp=0x7fffffffe3b0) at /home/rnk/llvm-gdb/tools/lli/lli.cpp:208
And a backtrace after this patch:
Program received signal SIGSEGV, Segmentation fault.
0x00002aaaabe7d1a8 in baz ()
(gdb) bt
#0 0x00002aaaabe7d1a8 in baz ()
#1 0x00002aaaabe7d12c in bar ()
#2 0x00002aaaabe7d0aa in foo ()
#3 0x00002aaaabe7d02c in main ()
#4 0x0000000000b870a2 in llvm::JIT::runFunction (this=0x1405b70,
F=0x14024e0, ArgValues=...)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/JIT/JIT.cpp:395
#5 0x0000000000baa4c5 in llvm::ExecutionEngine::runFunctionAsMain
(this=0x1405b70, Fn=0x14024e0, argv=..., envp=0x7fffffffe3c0)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/ExecutionEngine.cpp:377
#6 0x00000000007ebd52 in main (argc=2, argv=0x7fffffffe3a8,
envp=0x7fffffffe3c0) at /home/rnk/llvm-gdb/tools/lli/lli.cpp:208
llvm-svn: 82418
U lib/CodeGen/AsmPrinter/DwarfException.cpp
U lib/CodeGen/AsmPrinter/DwarfException.h
--- Reverse-merging r82274 into '.':
U lib/Target/TargetLoweringObjectFile.cpp
G lib/CodeGen/AsmPrinter/DwarfException.cpp
These revisions were breaking everything.
llvm-svn: 82396
internal, they shouldn't use the indirect pointer stuff. In the case of
throw_rethrow_test, it was marked as 'internal' and calculated its own offset to
its contents.
llvm-svn: 82354
into the __DATA section. At launch time, dyld has to update most of the section
to fix up the type info pointers. It's better to place it into the __TEXT
section and use pc-rel indirect pointer encodings. Similar to the personality
routine.
llvm-svn: 82274
getSymbolForDwarfGlobalReference is smart enough to know that it
needs to register the stub it references with MachineModuleInfoMachO,
so that it gets emitted at the end of the file.
Move stub emission from X86ATTAsmPrinter::doFinalization to the
new X86ATTAsmPrinter::EmitEndOfAsmFile asmprinter hook. The important
thing here is that EmitEndOfAsmFile is called *after* the ehframes are
emitted, so we get all the stubs.
This allows us to remove a gross hack from the asmprinter where it would
"just know" that it needed to output stubs for personality functions.
Now this is all driven from a consistent interface.
The testcase change is just reordering the expected output now that the
stubs come out after the ehframe instead of before.
This also unblocks other changes that Bill wants to make.
llvm-svn: 82269
trying to create RMW opportunities in the x86 backend. This can cause a
cycle to appear in the graph, since the other uses may eventually feed into
the TokenFactor we are sinking the load below.
llvm-svn: 81996
the Intel instruction tables.
The patterns will stay blank because ADD reg, reg
is faster, but having the encoding available is
useful for the disassembler.
llvm-svn: 81994
Eliminate the PersonalityPrefix/Suffix & NeedsIndirectEncoding
fields from MAI: they aren't part of the asm syntax, they are
related to the structure of the object file.
To replace their functionality, add a new
TLOF::getSymbolForDwarfGlobalReference method which asks targets
to decide how to reference a global from EH in a pc-relative way.
The default implementation just returns the symbol. The default
darwin implementation references the symbol through an indirect
$non_lazy_ptr stub. The bizarro x86-64 darwin specialization
handles the weird "foo@GOTPCREL+4" hack.
DwarfException.cpp now uses this to emit the reference to the
symbol in the right way, and this also eliminates another
horrible hack from DwarfException.cpp:
- if (strcmp(MAI->getPersonalitySuffix(), "+4@GOTPCREL"))
- O << "-" << MAI->getPCSymbol();
llvm-svn: 81991
All of these do not have patterns (they're for the
disassembler).
Many of the floating-point instructions will probably
be rolled into definitions that have patterns, and may
eventually be superseded by mdefs. So I put them
together and left a comment.
llvm-svn: 81979
and use PersonalityPrefix/Suffix to achieve the same effect (like
the x86 backend).
This changes the code generated for ppc static mode, but guess what,
we were generating this before:
.byte 0x9B ; Personality (indirect pcrel sdata4)
.long ___gxx_personality_v0-. ; Personality
which is not correct! (it is not an 'indirect' reference).
llvm-svn: 81965
interrupt instruction, which shouldn't arise any other way). 0xcd is
also used by JITMemoryManager to initialize the buffer to garbage,
which means it could appear following a noreturn call even when
that is not a stub, confusing X86CompilationCallback2. PR 4929.
llvm-svn: 81888
VLDM/VSTM instructions, and without this check, the code assumes that an
offset is allowed, as it would be with VLDR/VSTR. The asm printer,
however, silently drops the offset, producing incorrect code. Since the
address register in this case is either the stack or frame pointer, the
spill location ends up conflicting with some other stack slot or with
outgoing arguments on the stack.
llvm-svn: 81879