long long test2(unsigned A, unsigned B) {
  return ((unsigned long long)A << 32) + B;
}
is equivalent to this:
long long test1(unsigned A, unsigned B) {
  return ((unsigned long long)A << 32) | B;
}
Now they are both codegen'd to this on ppc:
_test2:
        blr
or this on x86:
test2:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret
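The two forms really are interchangeable here: the low 32 bits of (unsigned long long)A << 32 are all zero, and B is zero-extended into only the low 32 bits, so the operands of the +/| share no set bits and the addition can never carry. A minimal standalone check of the equivalence (just a sketch; the function names are made up for the example):
#include <cassert>

// Both forms assemble a 64-bit value from two 32-bit halves; because the
// operands occupy disjoint bit ranges, '+' and '|' give identical results.
static unsigned long long viaAdd(unsigned A, unsigned B) {
  return ((unsigned long long)A << 32) + B;
}
static unsigned long long viaOr(unsigned A, unsigned B) {
  return ((unsigned long long)A << 32) | B;
}

int main() {
  assert(viaAdd(0xDEADBEEFu, 0x12345678u) == viaOr(0xDEADBEEFu, 0x12345678u));
  assert(viaAdd(0xFFFFFFFFu, 0xFFFFFFFFu) == viaOr(0xFFFFFFFFu, 0xFFFFFFFFu));
  return 0;
}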
llvm-svn: 21231
masking shifts.
This fixes the miscompilation of this:
long long test1(unsigned A, unsigned B) {
  return ((unsigned long long)A << 32) | B;
}
into this:
test1:
        movl 4(%esp), %edx
        movl %edx, %eax
        orl 8(%esp), %eax
        ret
allowing us to generate this instead:
test1:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret
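For context, a "masking" shift is one whose only effect is to clear a known range of bits, so it can be reasoned about like the equivalent AND mask; that view is what lets the combiner see that the shifted A and the zero-extended B occupy disjoint bits, making the orl above redundant. A tiny illustration of the mask view (made-up function names, not the combiner code):
#include <cassert>

// A shift pair that only clears the low bits behaves exactly like an AND
// with the corresponding mask.
static unsigned maskLowBitsViaShift(unsigned X) { return (X >> 5) << 5; }
static unsigned maskLowBitsViaAnd(unsigned X)   { return X & ~31u; }

int main() {
  assert(maskLowBitsViaShift(0x12345678u) == maskLowBitsViaAnd(0x12345678u));
  assert(maskLowBitsViaShift(0xFFFFFFFFu) == maskLowBitsViaAnd(0xFFFFFFFFu));
  return 0;
}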
llvm-svn: 21230
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.
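For reference, the expansion builds the 64-bit product out of 32-bit multiplies, with MULHU supplying the high half of the unsigned low*low product. A rough sketch of the arithmetic (illustrative only, not the legalizer code):
#include <cstdint>

// Expanding a 64-bit multiply into 32-bit pieces when the target has a
// "high half of an unsigned 32x32 multiply" operation (ISD::MULHU).
static uint64_t mul64(uint32_t ALo, uint32_t AHi, uint32_t BLo, uint32_t BHi) {
  uint32_t Lo    = ALo * BLo;                                // low 32 bits
  uint32_t Carry = (uint32_t)(((uint64_t)ALo * BLo) >> 32);  // MULHU(ALo, BLo)
  uint32_t Hi    = Carry + ALo * BHi + AHi * BLo;            // high 32 bits (mod 2^32)
  return ((uint64_t)Hi << 32) | Lo;
}

int main() {
  uint64_t A = 0x123456789ABCDEF0ULL, B = 0x0FEDCBA987654321ULL;
  uint64_t Got = mul64((uint32_t)A, (uint32_t)(A >> 32),
                       (uint32_t)B, (uint32_t)(B >> 32));
  return Got == A * B ? 0 : 1;
}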
llvm-svn: 21214
don't regen the whole dag if unnecessary. Second, fix an ugly bug with
the _PARTS nodes that caused legalize to produce multiple copies of them.
Finally, implement initial support for FABS and FNEG. Currently FNEG is
the only one to be trusted though.
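As background, FNEG and FABS on IEEE floats are pure sign-bit operations, which is why they can be lowered without any FP arithmetic. A sketch of the bit manipulation involved (assumes 64-bit IEEE doubles; the helper names are invented):
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// FNEG flips the sign bit; FABS clears it.  Neither rounds or traps.
static double fnegBits(double X) {
  uint64_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));
  Bits ^= 0x8000000000000000ULL;
  std::memcpy(&X, &Bits, sizeof(X));
  return X;
}
static double fabsBits(double X) {
  uint64_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));
  Bits &= 0x7FFFFFFFFFFFFFFFULL;
  std::memcpy(&X, &Bits, sizeof(X));
  return X;
}

int main() {
  assert(fnegBits(1.5) == -1.5);
  assert(fabsBits(-2.25) == std::fabs(-2.25));
  return 0;
}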
llvm-svn: 21009
Teach the SelectionDAG code how to expand and promote it
Have PPC32 LowerCallTo generate ISD::UNDEF for int arg regs used up by fp
arguments, but not shadowing their value. This allows us to do the right
thing with both fixed and vararg floating point arguments.
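The distinction matters because, under a Darwin/AIX-style PPC32 convention (assumed here), a floating-point argument still consumes the GPR slots it would have occupied: a fixed argument only needs its value in an FPR, so the shadowed GPRs can be left undefined (hence ISD::UNDEF), while a vararg double must also be copied into those GPRs so the callee's va_arg can find it. A rough illustration of the two call shapes (made-up functions, only meant to show the calls):
#include <cstdarg>
#include <cstdio>

// Made-up callees purely to illustrate the two argument-passing cases.
static void fixedFP(int A, double D) { std::printf("%d %f\n", A, D); }
static void varargFP(const char *Fmt, ...) {
  va_list Args;
  va_start(Args, Fmt);
  std::vprintf(Fmt, Args);
  va_end(Args);
}

int main() {
  fixedFP(1, 3.14);       // double travels in an FPR; its shadow GPRs are undef
  varargFP("%f\n", 3.14); // double must also be placed in the shadow GPRs
  return 0;
}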
llvm-svn: 20988
Changing 'op' here caused us to not enter the store into a map, causing
reemission of the code!! In practice, a simple loop like this:
no_exit: ; preds = %no_exit, %entry
        %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ] ; <uint> [#uses=3]
        %tmp.4 = getelementptr "complex long double"* %P, uint %indvar, uint 0 ; <double*> [#uses=1]
        store double 0.000000e+00, double* %tmp.4
        %indvar.next = add uint %indvar, 1 ; <uint> [#uses=2]
        %exitcond = seteq uint %indvar.next, %N ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit
was being code gen'd to:
.LBBtest_1: # no_exit
        movl %edx, %esi
        shll $4, %esi
        movl $0, 4(%eax,%esi)
        movl $0, (%eax,%esi)
        incl %edx
        movl $0, (%eax,%esi)
        movl $0, 4(%eax,%esi)
        cmpl %ecx, %edx
        jne .LBBtest_1 # no_exit
Note that we are doing 4 32-bit stores instead of 2. Now we generate:
.LBBtest_1: # no_exit
        movl %edx, %esi
        incl %esi
        shll $4, %edx
        movl $0, (%eax,%edx)
        movl $0, 4(%eax,%edx)
        cmpl %ecx, %esi
        movl %esi, %edx
        jne .LBBtest_1 # no_exit
This is much happier, though it would be even better if the increment of ESI
was scheduled after the compare :-/
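The underlying fix is plain memoization: once a value has been selected, it has to be recorded in the map that CSEs emitted code, otherwise every later use re-selects it and the stores get duplicated as above. A toy sketch of the pattern (not the actual instruction selector code):
#include <iostream>
#include <map>
#include <string>

// Toy version of the bug: if a selected value is not entered into the map,
// each later use re-emits its code instead of reusing the first copy.
static std::map<std::string, int> SelectedValues;
static int NumEmitted = 0;

static int selectValue(const std::string &Name) {
  auto It = SelectedValues.find(Name);
  if (It != SelectedValues.end())
    return It->second;            // already emitted; reuse it
  int Reg = ++NumEmitted;         // "emit code" for the value
  SelectedValues[Name] = Reg;     // the step the original bug skipped
  return Reg;
}

int main() {
  selectValue("store");
  selectValue("store");                          // reuses the first emission
  std::cout << NumEmitted << " emission(s)\n";   // prints "1 emission(s)"
  return 0;
}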
llvm-svn: 20265
select operations or to shifts that are by a constant. This automatically
implements (with no special code) all of the special cases for shift by 32,
shift by < 32 and shift by > 32.
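Concretely, a 64-bit shift-left by a constant on a 32-bit target splits into those three cases, and they all fall out of the same arithmetic. A sketch of what the special cases encode (illustrative only; assumes a shift amount between 1 and 63):
#include <cassert>
#include <cstdint>

// Expanding a 64-bit shl by a constant C into 32-bit halves:
//   C == 32 : high = low,                              low = 0
//   C <  32 : high = (high << C) | (low >> (32 - C)),  low = low << C
//   C >  32 : high = low << (C - 32),                  low = 0
static uint64_t shl64(uint32_t Lo, uint32_t Hi, unsigned C) {
  uint32_t NewLo, NewHi;
  if (C == 32)     { NewHi = Lo;                           NewLo = 0; }
  else if (C < 32) { NewHi = (Hi << C) | (Lo >> (32 - C)); NewLo = Lo << C; }
  else             { NewHi = Lo << (C - 32);               NewLo = 0; }
  return ((uint64_t)NewHi << 32) | NewLo;
}

int main() {
  const uint64_t X = 0x0123456789ABCDEFULL;
  const unsigned Amounts[] = {1, 31, 32, 33, 63};
  for (unsigned C : Amounts)
    assert(shl64((uint32_t)X, (uint32_t)(X >> 32), C) == (X << C));
  return 0;
}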
llvm-svn: 19679
do it. This results in better code on X86 for floats (because if strict
precision is not required, we can elide some more expensive double -> float
conversions like the old isel did), and allows other targets to emit
CopyFromRegs that are not legal for arguments.
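For what it's worth, the x86 win comes from x87 keeping float computations at higher precision in the FP stack: strictly rounding a result back to float requires a store/reload pair, and with relaxed precision that round-trip can be skipped. A trivial example of the kind of function affected (illustration only):
#include <cstdio>

// With x87 codegen, A + B is computed at extended precision in ST(0).
// Returning it as a float strictly requires rounding via a store/reload;
// without strict precision that conversion can be elided.
static float addFloats(float A, float B) { return A + B; }

int main() {
  std::printf("%f\n", addFloats(1.0f, 2.0f));
  return 0;
}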
llvm-svn: 19668
track of how to deal with it, and provide the target with a hook that it
can use to legalize arbitrary operations in arbitrary ways.
Implement custom lowering for a couple of ops, implement promotion for select
operations (which x86 needs).
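The shape of that hook is a per-(operation, type) table plus a callback: the target marks each entry as legal, to be promoted, to be expanded, or custom, and the legalizer calls back into the target for the custom entries. A generic sketch of the idea (all names invented here, not the actual interfaces of the time):
#include <map>
#include <utility>

// Generic sketch of a target legalization hook: a table of per-(op, type)
// actions plus a callback the legalizer invokes for Custom entries.
enum class Action { Legal, Promote, Expand, Custom };
enum class Op { Select, Shl, FNeg };
enum class Type { i8, i32, i64, f64 };

struct TargetLoweringSketch {
  std::map<std::pair<Op, Type>, Action> Actions;

  void setAction(Op O, Type T, Action A) { Actions[{O, T}] = A; }

  Action getAction(Op O, Type T) const {
    auto It = Actions.find({O, T});
    return It == Actions.end() ? Action::Legal : It->second;
  }

  // The legalizer calls this for any node whose action is Custom.
  virtual void lowerCustom(Op O, Type T) = 0;
  virtual ~TargetLoweringSketch() = default;
};

struct HypotheticalX86 : TargetLoweringSketch {
  HypotheticalX86() {
    setAction(Op::Select, Type::i8, Action::Promote); // x86 wants small selects promoted
    setAction(Op::Shl, Type::i64, Action::Custom);    // lowered by the target itself
  }
  void lowerCustom(Op, Type) override { /* build target-specific nodes here */ }
};

int main() {
  HypotheticalX86 TLI;
  return TLI.getAction(Op::Select, Type::i8) == Action::Promote ? 0 : 1;
}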
llvm-svn: 19613