* remove some old code that was needed when we'd put ESP in the scale instead
of the base of some instructions (see the addressing-mode note below).
* Fix a bug with the P modifier in inline asm that caused us to drop it.
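For reference (not part of the original commit text): an x86 memory operand has
the form disp(base,index,scale), and %esp may only appear as the base, never as
the scaled index:
movl 8(%esp,%eax,4), %ecx    # ok: %esp is the base, %eax the index
movl 8(%eax,%esp,4), %ecx    # invalid: %esp cannot be the scaled index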
llvm-svn: 75077
DWARF requires frame moves be specified at specific times. If you have a
prologue like this:
__Z3fooi:
Leh_func_begin1:
LBB1_0: ## entry
pushl %ebp
Llabel1:
movl %esp, %ebp
Llabel2:
pushl %esi
Llabel3:
subl $20, %esp
call "L1$pb"
"L1$pb":
popl %esi
The "pushl %ebp" needs a table entry specifying the offset. The "movl %esp,
%ebp" makes %ebp the new stack frame register, so that needs to be specified in
DWARF. And "pushl %esi" saves the callee-saved %esi register, which also needs
to be specified in DWARF.
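For illustration only (the commit builds the equivalent FDE entries by hand via
MachineMove objects; modern .cfi directives are used here as a readable
stand-in), the frame moves at each label would be roughly:
pushl %ebp
.cfi_def_cfa_offset 8        # Llabel1: stack grew by 4, CFA is now %esp+8
.cfi_offset %ebp, -8         #          old %ebp is saved at CFA-8
movl %esp, %ebp
.cfi_def_cfa_register %ebp   # Llabel2: CFA is now computed from %ebp
pushl %esi
.cfi_offset %esi, -12        # Llabel3: callee-saved %esi is saved at CFA-12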
Before, all of this logic was in one method. This didn't work too well, because
as you can see there are multiple FDE line entries that need to be created.
This fix creates the "MachineMove" objects directly when they're needed,
instead of waiting until the end and losing information.
There is some ugliness where we generate code like this:
LBB22_0: ## entry
pushl %ebp
Llabel280:
movl %esp, %ebp
Llabel281:
Llabel284:
pushl %ebp <----------
pushl %ebx
pushl %edi
pushl %esi
Llabel282:
subl $328, %esp
Notice the extra "pushl %ebp". If we generate a "machine move" instruction in
the FDE for that pushl, the linker may get very confused about what value %ebp
should have when exiting the function. I.e., it'll give it the value %esp
instead of the %ebp value from the first "pushl". Not to mention that, in this
case, %ebp isn't modified in the function (that's a separate bug). I put a small
hack in to get it to work. It might be the only solution, but should be
revisited once the above case is fixed.
llvm-svn: 75047
U lib/Target/X86/X86RegisterInfo.cpp
U lib/Target/X86/X86RegisterInfo.h
Temporarily revert. This was causing an infinite loop in the linker on Leopard.
llvm-svn: 74970
DWARF requires frame moves be specified at specific times. If you have a
prologue like this:
__Z3fooi:
Leh_func_begin1:
LBB1_0: ## entry
pushl %ebp
Llabel1:
movl %esp, %ebp
Llabel2:
pushl %esi
Llabel3:
subl $20, %esp
call "L1$pb"
"L1$pb":
popl %esi
The "pushl %ebp" needs a table entry specifying the offset. The "movl %esp,
%ebp" makes %ebp the new stack frame register, so that needs to be specified in
DWARF. And "pushl %esi" saves the callee-saved %esi register, which also needs
to be specified in DWARF.
Before, all of this logic was in one method. This didn't work too well, because
as you can see there are multiple FDE line entries that need to be created.
This fix creates the "MachineMove" objects directly when they're needed,
instead of waiting until the end and losing information.
llvm-svn: 74952
With the SVR4 ABI on PowerPC, vector arguments for vararg calls are passed
differently depending on whether they are a fixed or a variable argument.
Variable vector arguments always go into memory; fixed vector arguments are put
into vector registers. If there are no free vector registers available, fixed
vector arguments are put on the stack instead.
The NumFixedArgs attribute makes it possible to decide, for an argument in a
vararg call, whether it belongs to the fixed or the variable portion of the
parameter list.
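A hypothetical PPC32 SVR4 call site (function name and register choices are
illustrative, not taken from the commit):
# void vsum(vector int fixed, ...) called with one fixed and one
# variable vector argument:
vor    v2, v13, v13    # fixed vector argument -> first vector arg register v2
stvx   v14, r1, r9     # variable vector argument always goes to the outgoing
                       # argument area on the stack (r1 + offset in r9)
bl     vsum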
llvm-svn: 74764
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.
This allows for future work that would allow for precise no-op placement and the
like.
llvm-svn: 74564
Avoid unnecessary duplication of operand 0 of X86::FpSET_ST0_80. This
duplication would cause one register to remain on the floating-point stack at
function return.
llvm-svn: 74534
The register allocator, when it allocates a register to a virtual register
defined by an implicit_def, can allocate any physical register without worrying
about overlapping live ranges. It should mark all operands of said virtual
register so later passes will do the right thing.
This is not the best solution, but it should be a lot less fragile than having
the scavenger try to track what is defined by an implicit_def.
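A rough sketch of the idea in machine-instruction dump form (syntax
approximate, register numbers made up):
; before allocation: the virtual register has no real definition
%reg1024 = IMPLICIT_DEF
MOV32mr <fi#0>, %reg1024
; after allocation: any physical register will do, and the use is marked
; <undef> so later passes know there is no live value to preserve
MOV32mr <fi#0>, %EAX<undef>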
llvm-svn: 74518
fence-atomic-fence down to just the atomic op. This is possible thanks to
X86's relatively strong memory model, which guarantees that locked instructions
(which are used to implement atomics) are implicit fences.
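An illustration (not from the commit): a locked read-modify-write on x86 is
already a full barrier, so the surrounding fences add nothing:
mfence                # fence
lock incl (%ecx)      # atomic increment
mfence                # fence
# ...can be reduced to just:
lock incl (%ecx)      # the lock prefix already acts as a full fence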
llvm-svn: 74435
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not. Instead, those decisions are made by isel lowering
and propagated through to the asm printer. To achieve this, we:
1. Represent RIP relative addresses by setting the base of the X86 addr
mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
X86ISD::WrapperRIP. When it is unsafe to use RIP, it lowers to
X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
when to emit (%rip), they just print the symbol.
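As a concrete illustration of the result (assembly chosen for illustration,
not copied from a testcase): with a basereg of X86::RIP the printer simply
emits the address, and the (%rip) form falls out of the operand; which form is
chosen is now entirely up to isel lowering:
leaq L_.str(%rip), %rdi           # WrapperRIP: base register is RIP
movq _x@GOTPCREL(%rip), %rax      # RIP-relative GOT load
movl _x, %eax                     # plain Wrapper: absolute address, no RIP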
I think this is a big improvement over the previous situation. It does have
two small caveats though:
1. I implemented a horrible "no-rip" modifier for the inline asm "P"
   constraint modifier. This is a short-term hack; there is a much better,
   but more involved, solution.
2. I had to xfail an -aggressive-remat testcase because it isn't handling
   the use of RIP in the constant-pool reading instruction. This specific
   test is easy to fix without -aggressive-remat, which I intend to do next.
llvm-svn: 74372