set and eliminating the need to iterate whenever something is removed (which
can be really slow in some cases). Thanks to Jim for pointing out something silly
I was getting stuck on. :)
llvm-svn: 24241
alignment information appropriately. Includes code for PowerPC to support
fixed-size allocas with alignment larger than the stack alignment. Support for
arbitrarily aligned dynamic allocas coming soon.
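As a rough illustration (my own example, not from the patch), a local like the
one below needs a fixed-size alloca whose alignment exceeds the usual 16-byte
PowerPC stack alignment, so the frame layout has to be rounded up to honor it:

struct __attribute__((aligned(64))) Vec { double d[4]; };

int sumEnds(void) {
  Vec v = {{1.0, 2.0, 3.0, 4.0}};   // 64-byte-aligned, fixed-size stack object
  return (int)(v.d[0] + v.d[3]);
}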
llvm-svn: 24224
a special case hack for X86, make the hack more general: if an incoming argument
register is not used in any block other than the entry block, don't copy it to
a vreg. This helps us compile code like this:
%struct.foo = type { int, int, [0 x ubyte] }
int %test(%struct.foo* %X) {
%tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
%tmp = load ubyte* %tmp1 ; <ubyte> [#uses=1]
%tmp2 = cast ubyte %tmp to int ; <int> [#uses=1]
ret int %tmp2
}
to:
_test:
lbz r3, 108(r3)
blr
instead of:
_test:
lbz r2, 108(r3)
or r3, r2, r2
blr
The (dead) copy emitted to copy r3 into a vreg for extra-block uses was
increasing the live range of r3 past the load, preventing coalescing.
This implements CodeGen/PowerPC/reg-coallesce-simple.ll
llvm-svn: 24115
generating results in vregs that will need them. In the case of something
like this: CopyToReg((add X, Y), reg1024), we no longer emit code like
this:
reg1025 = add X, Y
reg1024 = reg1025
Instead, we emit:
reg1024 = add X, Y
Whoa! :)
llvm-svn: 24111
pointer marking the end of the list, the zero *must* be cast to the pointer
type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will
not extend the zero to 64 bits, thus allowing the upper 32 bits to be
random junk.
The new END_WITH_NULL macro may be used to annotate such a function
so that GCC (version 4 or newer) will detect the use of an un-cast zero
at compile time.
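A minimal self-contained sketch of the pattern (the function and call are
illustrative, not from the patch; END_WITH_NULL is shown with roughly the
definition LLVM uses, a wrapper around GCC 4's sentinel attribute):

#include <cstdarg>
#include <cstdio>

#if defined(__GNUC__) && __GNUC__ >= 4
#define END_WITH_NULL __attribute__((sentinel))   // roughly how LLVM defines it
#else
#define END_WITH_NULL
#endif

// Hypothetical variadic function that expects a null-pointer-terminated list.
void printAll(const char *First, ...) END_WITH_NULL;

void printAll(const char *First, ...) {
  va_list Args;
  va_start(Args, First);
  for (const char *P = First; P; P = va_arg(Args, const char *))
    std::printf("%s\n", P);
  va_end(Args);
}

int main() {
  printAll("a", "b", (const char *)0);   // ok: the terminator has pointer type
  // printAll("a", "b", 0);              // bad: 0 is a 32-bit int; GCC 4 warns
  return 0;
}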
llvm-svn: 23888
the input is that type; this caused a failure on gs on X86 last night.
Move the hard checks into Build[US]Div since that is where decisions like
this should be made.
llvm-svn: 23881
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2)
with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range.
The resultant range ends up being [0-30:0)[31-60:1).
This fires a lot throughout the test suite (e.g. shrinking bc from
19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about
50 copies eliminated from crafty).
llvm-svn: 23866
(an unused method).
Fix the merger so that it can merge ranges like this [10:12)[16:40) with
[12:38) into [10:40) instead of bogus ranges. This sort of input will be
possible with the merger changes coming shortly.
llvm-svn: 23865
Add a new flag to TargetLowering indicating if the target has really cheap
signed division by powers of two, and make PPC use it. This will probably go
away in the future.
Implement some more ISD::SDIV folds in the dag combiner
Remove now dead code in the x86 backend.
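For illustration (my sketch, not from the patch): on PPC, a signed divide by a
power of two is just a shift plus a carry fix-up, which is why it is worth
telling the DAG combiner the operation is cheap rather than expanding it further:

int div8(int x) { return x / 8; }

// with the cheap pow2-sdiv path this becomes roughly:
//   srawi r3, r3, 3    ; arithmetic shift right; CA set if a negative value lost bits
//   addze r3, r3       ; add the carry back in, rounding toward zero
//   blr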
llvm-svn: 23853
Fix a *bug* in the extendIntervalEndTo method. In particular, if adding
[2:10) to an interval containing [0:2),[10:30), we produced [0:10),[10:30),
which is not the smartest thing to do. Now we produce [0:30).
llvm-svn: 23841
allows us to lower legal return types to something else, to meet ABI
requirements (such as that i64 be returned in two i32 regs on Darwin/ppc).
llvm-svn: 23802
a lot throughout many programs. In particular, specfp triggers it a bunch for
constant FP nodes when you have code like cond ? 1.0 : -1.0.
If the PPC ISel exposed the loads implicit in pic references to external globals,
we would be able to eliminate a load in cases like this as well:
%X = external global int
%Y = external global int
int* %test4(bool %C) {
%G = select bool %C, int* %X, int* %Y
ret int* %G
}
Note that this breaks things that use SrcValues (see the fixme), but since nothing
uses them yet, this is ok.
Also, simplify some code to use hasOneUse() on an SDOperand instead of hasNUsesOfValue directly.
llvm-svn: 23781
you could be AND'ing with the result of a shift that shifts out all the
bits you care about, in addition to a constant.
Also, move over an add/sub_parts fold from legalize to the dag combiner,
where it works for things other than constants. Woot!
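For illustration (my sketch, not from the patch), add_parts/sub_parts are what
64-bit adds and subtracts get expanded into on a 32-bit target, e.g.:

long long add64(long long a, long long b) { return a + b; }

// on 32-bit PPC this lowers to roughly:
//   addc r4, r4, r6    ; low halves, sets the carry
//   adde r3, r3, r5    ; high halves plus the carry
//   blr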
llvm-svn: 23720
out, where after the first CombineTo() call, the node the second CombineTo
wishes to replace may no longer exist.
Fix a very real bug with the truncated load optimization on little endian
targets, which do not need a byte offset added to the load.
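A small sketch of the offset computation involved (names are mine, not the
actual DAG combiner code): when a load is narrowed to the low-order part of a
wider value, only big-endian targets need the address bumped forward:

#include <cassert>

// Byte offset to add when narrowing a load of OrigBytes down to NarrowBytes,
// keeping the low-order part of the value.
unsigned narrowedLoadOffset(unsigned OrigBytes, unsigned NarrowBytes,
                            bool IsLittleEndian) {
  assert(NarrowBytes <= OrigBytes);
  // Little endian: the low bytes already sit at the start of the object.
  // Big endian: they sit at the end, so the address must be adjusted.
  return IsLittleEndian ? 0 : OrigBytes - NarrowBytes;
}

// e.g. truncating an i32 load to i16: offset 0 on x86, offset 2 on PowerPC.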
llvm-svn: 23704
like turning:
_foo:
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
rlwinm r3, r2, 0, 16, 31
blr
into:
_foo:
fctiwz f0,f1
stfd f0,-8(r1)
lhz r3,-2(r1)
blr
Also removed an unnecessary constraint from the sra -> srl conversion, which
should take care of the only reason we would ever need to handle sra in
MaskedValueIsZero, AFAIK.
llvm-svn: 23703
location, replace them with a new store of the last value. This occurs
in the same neighborhood in 197.parser, speeding it up by about 1.5%.
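A trivial C-level example of the pattern (illustrative only): two stores to
the same location with no intervening load, where only the last one matters:

void redundantStore(int *P, int A, int B) {
  *P = A;       // dead: overwritten below before *P is ever read
  *P = A + B;   // only this store needs to survive
}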
llvm-svn: 23691
multiple results.
Use this support to implement trivial store->load forwarding, implementing
CodeGen/PowerPC/store-load-fwd.ll. Though this is the simplest case and
can be extended in the future, it is still useful. For example, it speeds
up 197.parser by 6.2% by avoiding an LSU reject in xalloc:
stw r6, lo16(l5_end_of_array)(r2)
addi r2, r5, -4
stwx r5, r4, r2
- lwzx r5, r4, r2
- rlwinm r5, r5, 0, 0, 30
stwx r5, r4, r2
lwz r2, -4(r4)
ori r2, r2, 1
llvm-svn: 23690
creating a new vreg and inserting a copy: just use the input vreg directly.
This speeds up the compile (e.g. about 5% on mesa with a debug build of llc)
by not adding a bunch of copies and vregs to be coalesced away. On mesa,
for example, this reduces the number of intervals from 168601 to 129040
going into the coalescer.
llvm-svn: 23671
previous copy elisions and we discover we need to reload a register, make
sure to use the regclass of the original register for the reload, not the
class of the current register. This avoids using 16-bit loads to reload 32-bit
values.
llvm-svn: 23645
store r12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = load [ss#2]
R4 = load [ss#1]
and turn it into this code:
store R12 -> [ss#2]
R3 = load [ss#1]
use R3
R3 = R12
R4 = R3 <- oops!
The problem was that promoting R3 = load[ss#2] to a copy missed the fact that
the instruction invalidated R3 at that point.
llvm-svn: 23638
that testcase still does not pass with the dag combiner. This is because
not all forms of br* are folded yet.
Also, when we combine a node into another one, delete the node immediately
instead of waiting for the node to potentially come up in the future.
llvm-svn: 23632
Since calls return more than one value, don't bail if one of their uses
happens to be a node that's not an MVT::Other when following the chain
from CALLSEQ_START to CALLSEQ_END.
Once we've found a CALLSEQ_START, we can just return; there's no need to
tail-recurse further up the graph.
Most importantly, just because something only has one use doesn't mean we
should use its one use to follow from start to end. This faulty logic
caused us to follow a chain of one-use FP operations back to a much earlier
call, putting a cycle in the graph from a later start to an earlier end.
This is a better fix than reverting to the workaround committed earlier
today.
llvm-svn: 23620
Though I have done extensive testing, it is possible that this will break
things in configs I can't test. Please let me know if this causes a problem
and I'll fix it ASAP.
llvm-svn: 23504
for testing and will require target machine info to do proper scheduling.
The simple scheduler can be turned on using -sched=simple (it defaults
to -sched=none).
llvm-svn: 23455
This happens all the time on PPC for bool values, e.g. eliminating an xori
in inverted-bool-compares.ll.
This should be added to the dag combiner as well.
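A C-level illustration of the kind of code involved (my example, not the
actual test): negating a boolean compare should fold into the inverted
compare rather than leaving a compare followed by an xori:

bool notEqual(int A, int B) {
  bool Eq = (A == B);
  return !Eq;   // should become a single A != B compare, with no xori
}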
llvm-svn: 23403
when possible, avoiding the load (and avoiding the copy if the value is already
in the right register).
This patch came about when I noticed code like the following being generated:
store R17 -> [SS1]
...blah...
R4 = load [SS1]
This was causing an LSU reject on the G5. This problem was due to the register
allocator folding spill code into a reg-reg copy (producing the load), which
prevented the spiller from being able to rewrite the load into a copy, despite
the fact that the value was already available in a register. In the case
above, we now rip out the R4 load and replace it with an R4 = R17 copy.
This speeds up several programs on X86 (which spills a lot :) ), e.g.
smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from
68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc. This may have a larger
impact in some cases on the G5 (by avoiding LSU rejects), though it probably
won't trigger as often (less spilling in general).
Targets that implement folding of loads/stores into copies should implement
the isLoadFromStackSlot hook to get this.
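A rough, self-contained model of what such a hook does (the stand-in types,
opcode, and operand layout below are mine for illustration; the real hook lives
on the target's instruction info class and its exact signature has changed over
time):

#include <vector>

struct Operand {
  enum Kind { Reg, Imm, FrameIdx } K;
  long Val;
  bool isImmediate() const { return K == Imm; }
  bool isFrameIndex() const { return K == FrameIdx; }
  long getImmedValue() const { return Val; }
  int getFrameIndex() const { return (int)Val; }
  unsigned getReg() const { return (unsigned)Val; }
};

struct MachineInstr {
  unsigned Opcode;
  std::vector<Operand> Ops;
  unsigned getOpcode() const { return Opcode; }
  const Operand &getOperand(unsigned i) const { return Ops[i]; }
};

enum { LOAD_WORD = 1 };   // hypothetical "load word" opcode: dst, offset, frame index

// If MI is a simple reload of the form "LOAD_WORD dst, 0, <frame index>",
// return the destination register and report the stack slot in FrameIndex;
// otherwise return 0 so the spiller leaves the instruction alone.
unsigned isLoadFromStackSlot(const MachineInstr &MI, int &FrameIndex) {
  if (MI.getOpcode() == LOAD_WORD && MI.getOperand(1).isImmediate() &&
      MI.getOperand(1).getImmedValue() == 0 &&
      MI.getOperand(2).isFrameIndex()) {
    FrameIndex = MI.getOperand(2).getFrameIndex();
    return MI.getOperand(0).getReg();
  }
  return 0;
}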
llvm-svn: 23388
are, simplify the logic, and reduce how deeply things are nested. This also
uses MRI->areAliases instead of an explicit loop.
No functionality change, just code cleanup.
llvm-svn: 23296
only add a reload live range once for the instruction. This is one step
towards fixing a regalloc pessimization that Nate noticed, but it is later undone
by the spiller (so no code is changed).
llvm-svn: 23293
we were losing a node, causing an assertion to fail. Now we eagerly delete
discovered CSEs, and provide an optional vector to keep track of these
discovered equivalences.
llvm-svn: 23255
i64 values on targets that need that expanded to 32-bit registers. This fixes
PowerPC/2005-09-02-LegalizeDuplicatesCalls.ll and speeds up 189.lucas from
taking 122.72s to 81.96s on my desktop.
llvm-svn: 23228
from the binary ops map, even if they had multiple results. This latent bug
caused a few failures with the dag isel last night.
To prevent stuff like this from happening in the future, add some really
strict checking to make sure that the CSE maps always match up with reality!
llvm-svn: 23221
instead of ZERO_EXTEND to eliminate extraneous extensions. This eliminates
dead zero extensions on formal arguments and other cases on PPC, implementing
the newly tightened up test/Regression/CodeGen/PowerPC/small-arguments.ll test.
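A C-level illustration of the formal-argument case (my example, not the actual
test): the 32-bit PPC calling conventions have the caller extend sub-word
arguments to a full register, so zero-extending them again inside the callee is
dead work that using ANY_EXTEND lets the combiner remove:

// 'x' already arrives zero-extended to 32 bits per the ABI, so no extra
// masking (rlwinm) should be emitted in the function body.
unsigned short identity(unsigned short x) { return x; }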
llvm-svn: 23205
over to DAGCombiner.cpp
1. Don't assume that SetCC returns i1 when folding (xor (setcc) constant)
2. Don't duplicate code in folding AND with AssertZext that is handled by
MaskedValueIsZero
llvm-svn: 23196
them. This allows for elimination of redundant extends in the entry
blocks of functions on PowerPC.
Add support for i32 x i32 -> i64 multiplies, by recognizing when the inputs
to ISD::MUL in ExpandOp are actually just extended i32 values and not real
i64 values. This allows us to codegen
int mulhs(int a, int b) { return ((long long)a * b) >> 32; }
as:
_mulhs:
mulhw r3, r4, r3
blr
instead of:
_mulhs:
mulhwu r2, r4, r3
srawi r5, r3, 31
mullw r5, r4, r5
add r2, r2, r5
srawi r4, r4, 31
mullw r3, r4, r3
add r3, r2, r3
blr
with a similar improvement on x86.
llvm-svn: 23147
token chains first. For this C function:
int test() {
int i;
for (i = 0; i < 100000; ++i)
foo();
}
Instead of emitting this (condition before call):
.LBB_test_1: ; no_exit
addi r30, r30, 1
lis r2, 1
ori r2, r2, 34464
cmpw cr2, r30, r2
bl L_foo$stub
bne cr2, .LBB_test_1 ; no_exit
Emit this:
.LBB_test_1: ; no_exit
bl L_foo$stub
addi r30, r30, 1
lis r2, 1
ori r2, r2, 34464
cmpw cr0, r30, r2
bne cr0, .LBB_test_1 ; no_exit
Which makes it so we don't have to save/restore cr2 in the prolog/epilog of
the function.
This also makes the code much more similar to what the pattern isel produces.
llvm-svn: 23135
putting it into the constant pool. This allows the isel machinery to
create constants that it will end up deciding are not needed, without them
ending up in the resulting function's constant pool.
llvm-svn: 23081
select. Also teach it that the bit count instructions can only set the low bits
of the result, depending on the size of the input.
This allows us to compile this:
int %eq0(int %a) {
%tmp.1 = seteq int %a, 0 ; <bool> [#uses=1]
%tmp.2 = cast bool %tmp.1 to int ; <int> [#uses=1]
ret int %tmp.2
}
To this:
_eq0:
cntlzw r2, r3
srwi r3, r2, 5
blr
instead of this:
_eq0:
cntlzw r2, r3
rlwinm r3, r2, 27, 31, 31
blr
when setcc is marked illegal on ppc (which restores parity to non-illegal
setcc). Thanks to Nate for pointing this out.
llvm-svn: 23013
Use this information to avoid doing expensive interval intersections for
registers that could not possibly be interesting. This speeds up linscan
on ia64 compiling kc++ in release mode from taking 7.82s to 4.8s(!), total
itanium llc time on this program is 27.3s now. This marginally speeds up
PPC and X86, but they appear to be limited by other parts of linscan, not
this code.
On this program, on itanium, live intervals now takes 41% of llc time.
llvm-svn: 22986
number of regs (e.g. most riscs), many functions won't need to use callee
clobbered registers. Do a speculative check to see if we can get a free
register without processing the fixed list (which has all of these). This
saves a lot of time on machines with lots of callee clobbered regs (e.g.
ppc and itanium, also x86).
This reduces ppc llc compile time from 184s -> 172s on kc++. This is probably
worth FAR FAR more on itanium though.
llvm-svn: 22972
we spill out of the fast path. The scan of active_ and the calls to
updateSpillWeights don't need to happen unless a spill occurs. This reduces
debug llc time of kc++ with ppc from 187.3s to 183.2s.
llvm-svn: 22971
the old condition to a one bit value. The incoming value must have been
promoted, and the top bits are undefined. This causes us to generate:
_test:
rlwinm r2, r3, 0, 31, 31
li r3, 17
cmpwi cr0, r2, 0
bne .LBB_test_2 ;
.LBB_test_1: ;
li r3, 1
.LBB_test_2: ;
blr
instead of:
_test:
rlwinm r2, r3, 0, 31, 31
li r2, 17
cmpwi cr0, r3, 0
bne .LBB_test_2 ;
.LBB_test_1: ;
li r2, 1
.LBB_test_2: ;
or r3, r2, r2
blr
for:
int %test(bool %c) {
%retval = select bool %c, int 17, int 1
ret int %retval
}
llvm-svn: 22947
This gets us this for the previous testcase:
_test:
lis r2, 0
ori r3, r2, 65535
blr
Note that we actually write to r3 (the return reg) correctly now :)
llvm-svn: 22933