directly out of R1 (without using a CopyFromReg, which uses a chain), multiple
allocas were getting CSE'd together, producing bogus code. For this:
int %foo(bool %X, int %A, int %B) {
        br bool %X, label %T, label %F
F:
        %G = alloca int
        %H = alloca int
        store int %A, int* %G
        store int %B, int* %H
        %R = load int* %G
        ret int %R
T:
        ret int 0
}
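In C terms, the IR is roughly the following sketch (the translation is mine): when %X
is false, the function must return the value stored through %G, namely %A.

        int foo(int X, int A, int B) {
          int G, H;          /* two distinct stack slots, %G and %H */
          if (X) return 0;   /* the T block: ret int 0 */
          G = A;             /* store int %A, int* %G */
          H = B;             /* store int %B, int* %H */
          return G;          /* must reload %A, not %B */
        }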
We were generating (note that only one alloca survives, so both stores hit the
same slot and the load returns %B instead of %A):
_foo:
        stwu r1, -16(r1)
        stw r31, 4(r1)
        or r31, r1, r1
        stw r1, 12(r31)
        cmpwi cr0, r3, 0
        bne cr0, .LBB_foo_2     ; T
.LBB_foo_1:                     ; F
        li r2, 16
        subf r2, r2, r1         ;; One alloca
        or r1, r2, r2
        or r3, r1, r1
        or r1, r2, r2
        or r2, r1, r1
        stw r4, 0(r3)
        stw r5, 0(r2)
        lwz r3, 0(r3)
        lwz r1, 12(r31)
        lwz r31, 4(r31)
        lwz r1, 0(r1)
        blr
.LBB_foo_2:                     ; T
        li r3, 0
        lwz r1, 12(r31)
        lwz r31, 4(r31)
        lwz r1, 0(r1)
        blr
Now we generate:
_foo:
        stwu r1, -16(r1)
        stw r31, 4(r1)
        or r31, r1, r1
        stw r1, 12(r31)
        cmpwi cr0, r3, 0
        bne cr0, .LBB_foo_2     ; T
.LBB_foo_1:                     ; F
        or r2, r1, r1
        li r3, 16
        subf r2, r3, r2         ;; Alloca 1
        or r1, r2, r2
        or r2, r1, r1
        or r6, r1, r1
        subf r3, r3, r6         ;; Alloca 2
        or r1, r3, r3
        or r3, r1, r1
        stw r4, 0(r2)
        stw r5, 0(r3)
        lwz r3, 0(r2)
        lwz r1, 12(r31)
        lwz r31, 4(r31)
        lwz r1, 0(r1)
        blr
.LBB_foo_2:                     ; T
        li r3, 0
        lwz r1, 12(r31)
        lwz r31, 4(r31)
        lwz r1, 0(r1)
        blr
This fixes Povray and SPASS with the dag isel, the last two failing cases.
Tomorrow we will hopefully turn it on by default! :)
llvm-svn: 23190
could cause a miscompile. Fixing this didn't fix the two programs that fail
though. :(
This also changes the implementation to follow the pattern selector more
closely, causing us to select 0 as li instead of lis.
llvm-svn: 23189
this, it is a requirement on PPC, which can have an f32 value in r3 at one
point in a function and an f64 value in r3 at another point. :(
llvm-svn: 23160
Remove code (last hunk) that miscompiled immediate and's, such as
        and uint %tmp.30, 4294958079
into
        andi. r8, r8, 56319
        andis. r8, r8, 65535
instead of:
        li r9, -9217
        and r8, r8, r9
The first sequence always generates zero: the andi. mask has a zero upper
halfword and the andis. mask has a zero lower halfword, so ANDing by both
clears every bit.
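In C, the arithmetic looks like this (a sketch with an arbitrary input value,
not from the original message):

        #include <assert.h>
        #include <stdint.h>

        int main(void) {
          uint32_t x = 0xDEADBEEFu;         /* arbitrary input */
          uint32_t t = x & 0x0000DBFFu;     /* andi. r8, r8, 56319: immediate is zero-extended */
          uint32_t bad = t & 0xFFFF0000u;   /* andis. r8, r8, 65535: immediate is shifted left 16 */
          uint32_t good = x & 0xFFFFDBFFu;  /* li r9, -9217 sign-extends to 0xFFFFDBFF */
          assert(bad == 0);                 /* the two-instruction form always yields zero */
          assert(good == (x & 4294958079u));
          return 0;
        }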
This fixes espresso.
llvm-svn: 23155
them. This allows for elimination of redundant extends in the entry
blocks of functions on PowerPC.
Add support for i32 x i32 -> i64 multiplies, by recognizing when the inputs
to ISD::MUL in ExpandOp are actually just extended i32 values and not real
i64 values. This allows us to codegen
        int mulhs(int a, int b) { return ((long long)a * b) >> 32; }
as:
_mulhs:
        mulhw r3, r4, r3
        blr
instead of:
_mulhs:
        mulhwu r2, r4, r3
        srawi r5, r3, 31
        mullw r5, r4, r5
        add r2, r2, r5
        srawi r4, r4, 31
        mullw r3, r4, r3
        add r3, r2, r3
        blr
with a similar improvement on x86.
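For reference, the longer expansion is the standard signed-from-unsigned
high-multiply correction; a minimal C sketch of the identity it implements
(helper name is mine, and a >> 31 assumes arithmetic shift):

        #include <assert.h>
        #include <stdint.h>

        /* mulhwu computes the unsigned high word; each negative operand then
           subtracts the other operand from it (the srawi/mullw/add pairs). */
        static int32_t mulhs_expanded(int32_t a, int32_t b) {
          uint32_t hi = (uint32_t)(((uint64_t)(uint32_t)a * (uint32_t)b) >> 32); /* mulhwu */
          hi += (uint32_t)(a >> 31) * (uint32_t)b;  /* a >> 31 is 0 or -1 */
          hi += (uint32_t)(b >> 31) * (uint32_t)a;
          return (int32_t)hi;
        }

        int main(void) {
          assert(mulhs_expanded(-7, 9) == (int32_t)(((int64_t)-7 * 9) >> 32));  /* -1 */
          assert(mulhs_expanded(0x40000000, 8) == 2);  /* 2^30 * 8 = 2^33: high word 2 */
          return 0;
        }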
llvm-svn: 23147
linking them to calls when appropriate; this prevents the scheduler from
pulling these copies away from the call.
This fixes Ptrdist/yacr2.
llvm-svn: 23143