instead of calling lower_bound or upper_bound directly.
This cleans up the search logic a bit because {lower,upper}_bound compare
against LR->start by default, and it is usually simpler to search on LR->end.
Funnelling all searches through one function also makes it possible to replace
the search algorithm with something faster than binary search.
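A minimal sketch of the shape such a funnel might take, assuming a sorted
range vector; the names are illustrative, not the in-tree signature:

  // Return the first live range that ends after Pos; callers then test
  // whether Pos is actually inside it (I->start <= Pos).
  iterator find(SlotIndex Pos) {
    return std::upper_bound(begin(), end(), Pos,
                            [](SlotIndex P, const LiveRange &LR) {
                              return P < LR.end;
                            });
  }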
llvm-svn: 114448
into OptimizeCompareInstr.
This necessitates passing CmpValue around,
so widen the virtual functions to accommodate it.
No functionality changes.
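The widened hooks presumably look something like the sketch below; the exact
parameter lists are an assumption based on this description, not a quote of
the tree:

  // TargetInstrInfo hooks, widened to carry CmpValue (sketch):
  virtual bool AnalyzeCompare(const MachineInstr *MI, unsigned &SrcReg,
                              int &CmpMask, int &CmpValue) const;
  virtual bool OptimizeCompareInstr(MachineInstr *CmpInstr, unsigned SrcReg,
                                    int CmpMask, int CmpValue,
                                    const MachineRegisterInfo *MRI) const;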
llvm-svn: 114428
"getFixedStack" on the MachinePointerInfo class. While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so let's go with it.
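A hypothetical use of the new accessor when building a memory operand for a
fixed stack slot (the flag, size, and alignment arguments are illustrative):

  // Describe a load from fixed stack slot FI without a PseudoSourceValue:
  MachineMemOperand *MMO =
    MF.getMachineMemOperand(MachinePointerInfo::getFixedStack(FI),
                            MachineMemOperand::MOLoad, Size, Alignment);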
llvm-svn: 114406
MachinePointerInfo struct, no functionality change.
This also adds an assert to MachineMemOperand::MachineMemOperand
that verifies that the Value* is either null or is an IR pointer type.
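The added assertion amounts to something like this sketch:

  // In MachineMemOperand's constructor: the IR value, when present, must
  // be pointer-typed.
  assert((!V || V->getType()->isPointerTy()) &&
         "MachineMemOperand Value must be null or an IR pointer");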
llvm-svn: 114389
CombinerAA cannot assume that different FrameIndexes never alias, but can instead use
MachineFrameInfo to get the actual offsets of these slots and check for actual aliasing.
This fixes CodeGen/X86/2010-02-19-TailCallRetAddrBug.ll and CodeGen/X86/tailcallstack64.ll
when CombinerAA is enabled, modulo a different register allocation sequence.
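A hedged sketch of the overlap test this enables; getObjectOffset and
getObjectSize are the real MachineFrameInfo accessors, the surrounding logic
is illustrative:

  // Two frame slots alias only if their [offset, offset+size) intervals
  // overlap:
  int64_t O1 = MFI->getObjectOffset(FI1), O2 = MFI->getObjectOffset(FI2);
  int64_t S1 = MFI->getObjectSize(FI1),   S2 = MFI->getObjectSize(FI2);
  bool MayAlias = O1 + S1 > O2 && O2 + S2 > O1;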
llvm-svn: 114348
For now the allocator still uses the old (internal) construction mechanism by default. This will be phased out soon, assuming
no issues with the builder system come up.
To invoke the new construction mechanism, just pass '-regalloc=pbqp -pbqp-builder' to llc. To provide custom constraints, a
Target just needs to extend PBQPBuilder and pass an instance of their derived builder to the RegAllocPBQP constructor.
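A sketch of what a derived builder might look like; the build() signature is
an assumption drawn from this description, not a quote of the tree:

  // Hypothetical target-specific builder adding custom constraints:
  class MyTargetPBQPBuilder : public PBQPBuilder {
  public:
    virtual std::auto_ptr<PBQPRAProblem> build(MachineFunction *MF,
                                               const LiveIntervals *LIS,
                                               const RegSet &VRegs) {
      std::auto_ptr<PBQPRAProblem> P = PBQPBuilder::build(MF, LIS, VRegs);
      // ... add target-specific node/edge costs to P's graph here ...
      return P;
    }
  };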
llvm-svn: 114272
NO path to the destination containing side effects, not that SOME path contains no side effects.
In practice, this only manifests with CombinerAA enabled, because otherwise the chain has little
to no branching, so "any" is effectively equivalent to "all".
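In quantifier form, the fix is the difference between these two checks
(Paths and hasSideEffects are purely illustrative):

  // Old (wrong): some path to the destination has no side effects.
  bool OldCheck = std::any_of(Paths.begin(), Paths.end(),
                              [](const Path &P) { return !P.hasSideEffects(); });
  // New (right): no path to the destination has side effects.
  bool NewCheck = std::none_of(Paths.begin(), Paths.end(),
                               [](const Path &P) { return P.hasSideEffects(); });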
llvm-svn: 114268
1) Do forward copy propagation. This makes it easier to estimate the cost of the
instruction being sunk.
2) Break critical edges on demand, including cases where the value is used by
PHI nodes.
Critical edge splitting is not yet enabled by default.
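For reference, the property that forces splitting, as a minimal predicate
over the real MachineBasicBlock API:

  // An edge From->To is critical when the source branches and the
  // destination merges; sinking onto it requires inserting a new block.
  static bool isCriticalEdge(const MachineBasicBlock *From,
                             const MachineBasicBlock *To) {
    return From->succ_size() > 1 && To->pred_size() > 1;
  }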
llvm-svn: 114227
great deal because we don't have to worry about maintaining SSA form.
Unconditionally copy back to dupli when the register is live out of the split
range, even if the live-out value was defined outside the range. Skipping the
back-copy only makes sense when the live range is going to spill outside the
split range, and we don't know that it will. Besides, this was a hack to avoid
SSA update issues.
Clear up some confusion about the end point of a half-open LiveRange. Methinks
LiveRanges need to be closed so both start and end are included in the range.
The low bits of a SlotIndex are symbolic, so a half-open range doesn't really
make sense. This would be a pervasive change, though.
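For context, the half-open convention in question, as a sketch rather than
the in-tree code:

  // [start, end): end is the first index *not* in the range.
  bool contains(SlotIndex Pos) const { return start <= Pos && Pos < end; }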
llvm-svn: 114043
iterator when an optimization took place. This allows us to do more insane
things with the code than just remove an instruction or two.
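The pattern being enabled, sketched generically with a hypothetical hook:

  // Returning the next valid iterator lets the optimization erase or
  // rewrite any number of instructions without invalidating the walk:
  for (auto I = Insts.begin(); I != Insts.end(); )
    I = tryOptimize(I);  // hypothetical; returns the next iterator to visit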
llvm-svn: 113640
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should not treat them as if they were
simple non-micro-coded instructions.
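A hedged sketch of the accounting this implies; getNumMicroOps is the
itinerary-based query of roughly this era, but its exact signature and the
loop shape are assumptions:

  // Count decode cycles for a block being predicated; a micro-coded
  // instruction contributes more than one:
  unsigned NumCycles = 0;
  for (MachineBasicBlock::iterator I = BB->begin(), E = BB->end();
       I != E; ++I)
    NumCycles += TII->getNumMicroOps(ItinData, &*I);  // >= 1 per instruction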
llvm-svn: 113570
LiveIntervals already adds <imp-def> operands for super-registers when a subreg
def defines the whole register. Thus, it is not necessary to do it again when
rewriting.
In fact, the super-register imp-defs caused miscompilations because the late
scheduler couldn't see that the super-register was read.
We still add super-reg <imp-use,kill> operands when rewriting virtuals to
physicals.
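A sketch of the operand the rewriter still adds, using the real
MachineOperand factory:

  // When a virtual register living in a subregister becomes a physical
  // subreg, record a read and kill of the containing super-register:
  MI->addOperand(MachineOperand::CreateReg(SuperReg, /*isDef=*/false,
                                           /*isImp=*/true, /*isKill=*/true));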
llvm-svn: 113299
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, before this change, this code:
int foo(int x, int y, int z) {
return x+y+z;
}
used to compile into:
_foo: ## @foo
subq $12, %rsp
movl %edi, 8(%rsp)
movl %esi, 4(%rsp)
movl %edx, (%rsp)
movl 8(%rsp), %edx
movl 4(%rsp), %esi
addl %edx, %esi
movl (%rsp), %edx
addl %esi, %edx
movl %edx, %eax
addq $12, %rsp
ret
Now we produce:
_foo: ## @foo
subq $12, %rsp
movl %edi, 8(%rsp)
movl %esi, 4(%rsp)
movl %edx, (%rsp)
movl 8(%rsp), %edx
addl 4(%rsp), %edx ## Folded load
addl (%rsp), %edx ## Folded load
movl %edx, %eax
addq $12, %rsp
ret
Fewer instructions and less register use = faster compiles.
llvm-svn: 113102
overload UserInInstr. Explicitly check Allocatable. The early exit in the
condition means the performance impact of the extra test should be
minimal.
llvm-svn: 113016
solve the root problem, but it corrects the bug in the code I added to
support legalizing in the case where the non-extended type is also legal.
llvm-svn: 112997
slot.
Teach it to also check for early-clobbered aliases, and for early-clobber
operands following the current operand.
This fixes the miscompilation in PR8044 where EC registers eax and ecx were
being used for inputs.
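The widened check amounts to something like this sketch; isEarlyClobber and
regsOverlap are real APIs, while PhysReg, OpNo, and the control flow around
them are illustrative:

  // Reject PhysReg if a later operand early-clobbers it or an alias:
  for (unsigned i = OpNo + 1, e = MI->getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI->getOperand(i);
    if (MO.isReg() && MO.isEarlyClobber() &&
        TRI->regsOverlap(MO.getReg(), PhysReg))
      return false;  // the input would be clobbered before it is read
  }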
llvm-svn: 112988
Original commit message:
Use the SSAUpdater to turn calls to eh.exception that are not in a
landing pad into uses of registers rather than loads from a stack
slot. Doesn't touch the 'orrible hack code - Bill needs to persuade
me harder :)
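The SSAUpdater pattern being used, sketched with the current API shape
(block and value names are illustrative):

  SSAUpdater SSA;
  SSA.Initialize(ExnVal->getType(), "exn");
  SSA.AddAvailableValue(LPadBB, ExnVal);   // the landing pad defines it
  // Each eh.exception call outside a landing pad becomes a register use:
  Value *V = SSA.GetValueInMiddleOfBlock(CallBB);
  Call->replaceAllUsesWith(V);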
llvm-svn: 112952
there are clearly no stores between the load and the store. This fixes
the miscompile reported as PR7833.
This breaks the test/CodeGen/X86/narrow_op-2.ll optimization, which is
safe, but awkward to prove safe. Move it to X86's README.txt.
llvm-svn: 112861
landing pad into uses of registers rather than loads from a stack
slot. Doesn't touch the 'orrible hack code - Bill needs to persuade
me harder :)
llvm-svn: 112702
Reserved registers are unpredictable, and are treated as always live by machine
DCE.
Allocatable registers are never reserved, and can be used for virtual registers.
Unreserved, unallocatable registers cannot be used for virtual registers, but
otherwise behave like a normal allocatable register. Most targets only have
the flag register in this set.
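The three classes, expressed as a sketch over the real getReservedRegs hook
(the Allocatable set here is illustrative):

  BitVector Reserved = TRI->getReservedRegs(MF);
  bool IsReserved    = Reserved.test(Reg);     // always live; DCE keeps defs
  bool IsAllocatable = Allocatable.test(Reg);  // may back virtual registers
  // Invariant from this change: the two sets are disjoint.
  assert(!(IsReserved && IsAllocatable) &&
         "allocatable registers are never reserved");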
llvm-svn: 112649
1. Allocate them in the entry block of the function to enable function-wide
re-use. The instructions to create them should be re-materializable, so
there shouldn't be additional cost compared to creating them locally
in the basic blocks where they are used.
2. Collect all of the frame index references for the function and sort them
by the local offset referenced. Iterate over the sorted list to
allocate the virtual base registers. This enables creation of base
registers optimized for positive-offset access of frame references (see
the sketch after this list).
(Note: This may be appropriate to later be a target hook to do the
sorting in a target appropriate manner. For now it's done here for
simplicity.)
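A hedged sketch of step 2; isFrameOffsetLegal is a real target hook (exact
parameters vary), while FrameRef and the two helpers are hypothetical:

  // Sort references by local offset, then greedily reuse a base register
  // for as long as the (positive) displacement stays encodable:
  std::sort(FrameRefs.begin(), FrameRefs.end(),
            [](const FrameRef &A, const FrameRef &B) {
              return A.LocalOffset < B.LocalOffset;
            });
  int64_t BaseOffset = 0;
  unsigned BaseReg = 0;
  for (FrameRef &FR : FrameRefs) {
    int64_t Disp = FR.LocalOffset - BaseOffset;
    if (!BaseReg || !TRI->isFrameOffsetLegal(FR.MI, Disp)) {
      BaseOffset = FR.LocalOffset;               // start a new base here
      BaseReg = materializeBaseReg(BaseOffset);  // hypothetical helper
      Disp = 0;
    }
    rewriteFrameRef(FR, BaseReg, Disp);          // hypothetical helper
  }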
llvm-svn: 112609
any more. I plan to reimplement alloca promotion using SSAUpdater later.
It looks like Bill's URoR logic really always needs domtree, so the pass
now always asks for domtree info.
llvm-svn: 112597
Eventually, we want to disable physreg coalescing completely, and let the
register allocator do its job using hints.
This option makes it possible to measure the impact of disabling physreg
coalescing.
llvm-svn: 112567