Andrew Lenharth
30db2ec59f
allow custom lowering to return null for legal results
...
llvm-svn: 25007
2005-12-25 01:07:37 +00:00
Andrew Lenharth
7259426d88
Support Custom lowering of a few more operations.
...
Alpha needs to custom lower *DIV and *REM
llvm-svn: 25006
2005-12-24 23:42:32 +00:00
Jim Laskey
bdba3e2a46
Remove redundant debug locations.
...
llvm-svn: 24995
2005-12-23 20:08:28 +00:00
Chris Lattner
c7037abc5b
unbreak the build :-/
...
llvm-svn: 24992
2005-12-23 16:12:20 +00:00
Evan Cheng
31d15fa093
Allow custom lowering of LOAD, EXTLOAD, ZEXTLOAD, STORE, and TRUNCSTORE. Not
...
currently used.
llvm-svn: 24988
2005-12-23 07:29:34 +00:00
Chris Lattner
26943b9691
Simplify store(bitconv(x)) to store(x). This allows us to compile this:
...
void bar(double Y, double *X) {
*X = Y;
}
to this:
bar:
save -96, %o6, %o6
st %i1, [%i2+4]
st %i0, [%i2]
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -104, %o6, %o6
st %i1, [%i6+-4]
st %i0, [%i6+-8]
ldd [%i6+-8], %f0
std %f0, [%i2]
restore %g0, %g0, %g0
retl
nop
on sparcv8.
llvm-svn: 24983
2005-12-23 05:48:07 +00:00
Chris Lattner
54560f6887
fold (conv (load x)) -> (load (conv*)x).
...
This allows us to compile this:
void foo(double);
void bar(double *X) { foo(*X); }
To this:
bar:
save -96, %o6, %o6
ld [%i0+4], %o1
ld [%i0], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -104, %o6, %o6
ldd [%i0], %f0
std %f0, [%i6+-8]
ld [%i6+-4], %o1
ld [%i6+-8], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
on SparcV8.
llvm-svn: 24982
2005-12-23 05:44:41 +00:00
Chris Lattner
efbbedbf4a
Fold bitconv(bitconv(x)) -> x. We now compile this:
...
void foo(double);
void bar(double X) { foo(X); }
to this:
bar:
save -96, %o6, %o6
or %g0, %i0, %o0
or %g0, %i1, %o1
call foo
nop
restore %g0, %g0, %g0
retl
nop
instead of this:
bar:
save -112, %o6, %o6
st %i1, [%i6+-4]
st %i0, [%i6+-8]
ldd [%i6+-8], %f0
std %f0, [%i6+-16]
ld [%i6+-12], %o1
ld [%i6+-16], %o0
call foo
nop
restore %g0, %g0, %g0
retl
nop
on V8.
llvm-svn: 24981
2005-12-23 05:37:50 +00:00
Chris Lattner
a187460552
constant fold bits_convert in getNode and in the dag combiner for fp<->int
...
conversions. This allows V8 to compile this:
void %test() {
call float %test2( float 1.000000e+00, float 2.000000e+00, double 3.000000e+00, double* null )
ret void
}
into:
test:
save -96, %o6, %o6
sethi 0, %o3
sethi 1049088, %o2
sethi 1048576, %o1
sethi 1040384, %o0
or %g0, %o3, %o4
call test2
nop
restore %g0, %g0, %g0
retl
nop
instead of:
test:
save -112, %o6, %o6
sethi 0, %o4
sethi 1049088, %l0
st %o4, [%i6+-12]
st %l0, [%i6+-16]
ld [%i6+-12], %o3
ld [%i6+-16], %o2
sethi 1048576, %o1
sethi 1040384, %o0
call test2
nop
restore %g0, %g0, %g0
retl
nop
llvm-svn: 24980
2005-12-23 05:30:37 +00:00
Chris Lattner
884eb3adc3
Fix a pasto
...
llvm-svn: 24973
2005-12-23 00:52:30 +00:00
Chris Lattner
9eae8d5d03
fix a thinko in the bit_convert handling code
...
llvm-svn: 24972
2005-12-23 00:50:25 +00:00
Chris Lattner
36e663d6e1
add very simple support for the BIT_CONVERT node
...
llvm-svn: 24970
2005-12-23 00:16:34 +00:00
Chris Lattner
177d7af5d5
remove dead code
...
llvm-svn: 24965
2005-12-22 21:16:08 +00:00
Chris Lattner
1408c05a8b
The 81st column doesn't like code in it.
...
llvm-svn: 24943
2005-12-22 05:23:45 +00:00
Reid Spencer
2335fc2f44
Add an eol at the end to shut gcc up.
...
llvm-svn: 24926
2005-12-22 01:41:00 +00:00
Evan Cheng
9cdc16c6d3
* Fix a GlobalAddress lowering bug.
...
* Teach DAG combiner about X86ISD::SETCC by adding a TargetLowering hook.
llvm-svn: 24921
2005-12-21 23:05:39 +00:00
Jim Laskey
9e296bee9a
Disengage DEBUG_LOC from non-PPC targets.
...
llvm-svn: 24919
2005-12-21 20:51:37 +00:00
Evan Cheng
c1583dbd63
* Added support for X86 RET with an additional operand to specify number of
...
bytes to pop off stack.
* Added support for X86 SETCC.
llvm-svn: 24917
2005-12-21 20:21:51 +00:00
Jim Laskey
7b52a923b8
Start of Dwarf framework.
...
llvm-svn: 24914
2005-12-21 19:48:16 +00:00
Chris Lattner
0fab459362
make sure to relegalize all cases
...
llvm-svn: 24911
2005-12-21 19:40:42 +00:00
Chris Lattner
44c07ed61a
enable the gep isel opt
...
llvm-svn: 24910
2005-12-21 19:36:36 +00:00
Chris Lattner
ac12f68424
fix a bug I introduced that broke recursive expansion of nodes (e.g. scalarizing vectors)
...
llvm-svn: 24905
2005-12-21 18:02:52 +00:00
Chris Lattner
803a575616
Lower ConstantAggregateZero into zeros
...
llvm-svn: 24890
2005-12-21 02:43:26 +00:00
Chris Lattner
434ffe49a9
Don't emit a null terminator, nor anything after it, to the ctor/dtor list
...
llvm-svn: 24887
2005-12-21 01:17:37 +00:00
Evan Cheng
6af02635a7
Added a hook to print out names of target specific DAG nodes.
...
llvm-svn: 24877
2005-12-20 06:22:03 +00:00
Chris Lattner
2af3ee4bdd
Fix a nasty latent bug in the legalizer that was triggered by my patch
...
last night, breaking crafty and twolf. Make sure that the newly found
legal nodes are themselves not re-legalized until the next iteration.
Also, since this functionality exists now, we can reduce the number of legalizer
iterations by depending on this behavior instead of having to misuse 'do
another iteration' to get the same effect.
llvm-svn: 24875
2005-12-20 00:53:54 +00:00
Evan Cheng
6fc31046aa
X86 conditional branch support.
...
llvm-svn: 24870
2005-12-19 23:12:38 +00:00
Evan Cheng
9fd9541367
Print out opcode number if it's an unknown target node.
...
llvm-svn: 24869
2005-12-19 23:11:49 +00:00
Chris Lattner
50b2d302d5
Fix a case where the DAG Combiner would accidentally CSE flag-producing nodes,
...
creating graphs that cannot be scheduled.
llvm-svn: 24866
2005-12-19 22:21:21 +00:00
Jim Laskey
9b9688aeb8
Amend comment.
...
llvm-svn: 24861
2005-12-19 16:32:26 +00:00
Jim Laskey
ce23987e6b
Create a strong dependency for loads following stores. This will leave a
...
latency period between the two.
llvm-svn: 24860
2005-12-19 16:30:13 +00:00
Chris Lattner
c06da626b4
Make sure to relegalize new nodes
...
llvm-svn: 24843
2005-12-18 23:54:29 +00:00
Jeff Cohen
c7cb351aac
Keep VC++ happy.
...
llvm-svn: 24835
2005-12-18 22:20:05 +00:00
Chris Lattner
ebcfa0c210
More corrections for flagged copyto/from reg
...
llvm-svn: 24828
2005-12-18 15:36:21 +00:00
Chris Lattner
e3c67e97c7
legalize copytoreg and copyfromreg nodes that have flag operands correctly.
...
llvm-svn: 24826
2005-12-18 15:27:43 +00:00
Jim Laskey
c97b7d0be9
Fix a bug Sabre was having where the DAG root was a group. The group dominator
...
needed to be added to the ordering list, not the first member of the group.
llvm-svn: 24816
2005-12-18 04:40:52 +00:00
Jim Laskey
e220821deb
Groups were not emitted if the dominator node and the node in the ordering list
...
were not the same node. Ultimately the test was bogus.
llvm-svn: 24815
2005-12-18 03:59:21 +00:00
Chris Lattner
cf12118965
Simplify code
...
llvm-svn: 24806
2005-12-18 01:03:46 +00:00
Chris Lattner
bf0bd99e03
allow custom expansion of BR_CC
...
llvm-svn: 24804
2005-12-17 23:46:46 +00:00
Evan Cheng
225a4d0d6d
X86 lowers SELECT to a cmp / test followed by a conditional move.
...
llvm-svn: 24754
2005-12-17 01:21:05 +00:00
Jim Laskey
7c462768ed
Added source file/line correspondence for dwarf (PowerPC only at this point.)
...
llvm-svn: 24748
2005-12-16 22:45:29 +00:00
Chris Lattner
83e4407379
Don't create SEXTLOAD/ZEXTLOAD instructions that the target doesn't support
...
if after legalize. This fixes IA64 failures.
llvm-svn: 24725
2005-12-15 19:02:38 +00:00
Chris Lattner
d39c60fcc8
When folding loads into ops, immediately replace uses of the op with the
...
load. This reduces the number of worklist iterations and avoids missing optimizations
depending on folding of things into sext_inreg nodes (which aren't supported by
all targets).
Tested by Regression/CodeGen/X86/extend.ll:test2
llvm-svn: 24712
2005-12-14 19:25:30 +00:00
Chris Lattner
7dac1083da
Fix the (zext (zextload)) case to trigger, similarly for sign extends.
...
Allow (zext (truncate)) to apply after legalize if the target supports
AND (which all do).
This compiles
short %foo() {
%tmp.0 = load ubyte* %X ; <ubyte> [#uses=1]
%tmp.3 = cast ubyte %tmp.0 to short ; <short> [#uses=1]
ret short %tmp.3
}
to:
_foo:
movzbl _X, %eax
ret
instead of:
_foo:
movzbl _X, %eax
movzbl %al, %eax
ret
thanks to Evan for pointing this out.
llvm-svn: 24709
2005-12-14 19:05:06 +00:00
Chris Lattner
f753d1a574
Fix a miscompilation in crafty due to a recent patch
...
llvm-svn: 24706
2005-12-14 07:58:38 +00:00
Evan Cheng
bce7c47306
Fold (zext (load x)) to (zextload x).
...
llvm-svn: 24702
2005-12-14 02:19:23 +00:00
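As an illustration of the pattern this fold targets (a hypothetical example, not from the commit): widening a narrow load produces (zext (load x)) in the DAG, which the fold collapses into a single zero-extending load.
/* The byte load and the zero extension become one zextload
   (e.g. a single movzbl on x86). */
unsigned int widen(const unsigned char *p) {
  return *p;
}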
Chris Lattner
5d4e61dd87
Don't lump the filename and working dir together
...
llvm-svn: 24697
2005-12-13 17:40:33 +00:00
Chris Lattner
f0e9aef954
Add a couple more fields, move ctor init list to .cpp file, add support
...
for emitting the ctor/dtor list for common targets.
llvm-svn: 24694
2005-12-13 06:32:10 +00:00
Nate Begeman
956aef45c9
Lowering constant pool entries on ppc exposed a bug in the recently added
...
ConstantVec legalizing code, which would return constantpool nodes that
were not of the target's pointer type.
llvm-svn: 24691
2005-12-13 03:03:23 +00:00
Chris Lattner
9e8b633ec1
Accept and ignore prefetches for now
...
llvm-svn: 24678
2005-12-12 22:51:16 +00:00
Chris Lattner
b42ce7ca63
Fix CodeGen/Generic/2005-12-12-ExpandSextInreg.ll
...
llvm-svn: 24677
2005-12-12 22:27:43 +00:00
Chris Lattner
f1a54c0d14
Minor tweak to get isel opt
...
llvm-svn: 24663
2005-12-11 09:05:13 +00:00
Nate Begeman
4e56db674c
Add support for TargetConstantPool nodes to the dag isel emitter, and use
...
them in the PPC backend, to simplify some logic out of Select and
SelectAddr.
llvm-svn: 24657
2005-12-10 02:36:00 +00:00
Evan Cheng
dadc1057ac
Added new getNode and getTargetNode variants for X86 stores.
...
llvm-svn: 24653
2005-12-10 00:37:58 +00:00
Chris Lattner
a6f835f5a0
Avoid emitting two tabs when switching to a named section
...
llvm-svn: 24646
2005-12-09 19:28:49 +00:00
Chris Lattner
268d457b69
Teach legalize how to promote sext_inreg to fix a problem Andrew pointed
...
out to me.
llvm-svn: 24644
2005-12-09 17:32:47 +00:00
Chris Lattner
be73d6eece
improve code insertion in two ways:
...
1. Only forward subst offsets into loads and stores, not into arbitrary
things, where it will likely become a load.
2. If the source is a cast from pointer, forward subst the cast as well,
allowing us to fold the cast away (improving cases when the cast is
from an alloca or global).
This hasn't been fully tested, but does appear to further reduce register
pressure and improve code. Let's let the testers grind on it a bit. :)
llvm-svn: 24640
2005-12-08 08:00:12 +00:00
Nate Begeman
ae89d862f5
Fix a crash where ConstantVec nodes were being generated with the wrong
...
type when the target did not support them. Also teach Legalize how to
expand ConstantVecs.
This allows us to generate
_test:
lwz r2, 12(r3)
lwz r4, 8(r3)
lwz r5, 4(r3)
lwz r6, 0(r3)
addi r2, r2, 4
addi r4, r4, 3
addi r5, r5, 2
addi r6, r6, 1
stw r2, 12(r3)
stw r4, 8(r3)
stw r5, 4(r3)
stw r6, 0(r3)
blr
For:
void %test(%v4i *%P) {
%T = load %v4i* %P
%S = add %v4i %T, <int 1, int 2, int 3, int 4>
store %v4i %S, %v4i * %P
ret void
}
On PowerPC.
llvm-svn: 24633
2005-12-07 19:48:11 +00:00
Chris Lattner
57c882edf8
Only transform (sext (truncate x)) -> (sextinreg x) if before legalize or
...
if the target supports the resultant sextinreg
llvm-svn: 24632
2005-12-07 18:02:05 +00:00
Chris Lattner
cbd3d01a43
Teach the dag combiner to turn a truncate/sign_extend pair into a sextinreg
...
when the types match up. This allows the X86 backend to compile:
sbyte %toggle_value(sbyte* %tmp.1) {
%tmp.2 = load sbyte* %tmp.1
ret sbyte %tmp.2
}
to this:
_toggle_value:
mov %EAX, DWORD PTR [%ESP + 4]
movsx %EAX, BYTE PTR [%EAX]
ret
instead of this:
_toggle_value:
mov %EAX, DWORD PTR [%ESP + 4]
movsx %EAX, BYTE PTR [%EAX]
movsx %EAX, %AL
ret
noticed in Shootout/objinst.
-Chris
llvm-svn: 24630
2005-12-07 07:11:03 +00:00
Nate Begeman
41b1cdc771
Teach the SelectionDAG ISel how to turn ConstantPacked values into
...
constant nodes with vector types. Also teach the asm printer how to print
ConstantPacked constant pool entries. This allows us to generate altivec
code such as the following, which adds a vector constant to a packed float.
LCPI1_0: <4 x float> < float 0.0e+0, float 0.0e+0, float 0.0e+0, float 1.0e+0 >
.space 4
.space 4
.space 4
.long 1065353216 ; float 1
.text
.align 4
.globl _foo
_foo:
lis r2, ha16(LCPI1_0)
la r2, lo16(LCPI1_0)(r2)
li r4, 0
lvx v0, r4, r2
lvx v1, r4, r3
vaddfp v0, v1, v0
stvx v0, r4, r3
blr
For the llvm code:
void %foo(<4 x float> * %a) {
entry:
%tmp1 = load <4 x float> * %a;
%tmp2 = add <4 x float> %tmp1, < float 0.0, float 0.0, float 0.0, float 1.0 >
store <4 x float> %tmp2, <4 x float> *%a
ret void
}
llvm-svn: 24616
2005-12-06 06:18:55 +00:00
Chris Lattner
3539778883
Fix the #1 code quality problem that I have seen on X86 (and it also affects
...
PPC and other targets). In particular, consider code like this:
struct Vector3 { double x, y, z; };
struct Matrix3 { Vector3 a, b, c; };
double dot(Vector3 &a, Vector3 &b) {
return a.x * b.x + a.y * b.y + a.z * b.z;
}
Vector3 mul(Vector3 &a, Matrix3 &b) {
Vector3 r;
r.x = dot( a, b.a );
r.y = dot( a, b.b );
r.z = dot( a, b.c );
return r;
}
void transform(Matrix3 &m, Vector3 *x, int n) {
for (int i = 0; i < n; i++)
x[i] = mul( x[i], m );
}
we compile transform to a loop with all of the GEP instructions for indexing
into 'm' pulled out of the loop (9 of them). Because isel occurs a bb at a time
we are unable to fold the constant index into the loads in the loop, leading to
PPC code that looks like this:
LBB3_1: ; no_exit.preheader
li r2, 0
addi r6, r3, 64 ;; 9 values live across the loop body!
addi r7, r3, 56
addi r8, r3, 48
addi r9, r3, 40
addi r10, r3, 32
addi r11, r3, 24
addi r12, r3, 16
addi r30, r3, 8
LBB3_2: ; no_exit
lfd f0, 0(r30)
lfd f1, 8(r4)
fmul f0, f1, f0
lfd f2, 0(r3) ;; no constant indices folded into the loads!
lfd f3, 0(r4)
lfd f4, 0(r10)
lfd f5, 0(r6)
lfd f6, 0(r7)
lfd f7, 0(r8)
lfd f8, 0(r9)
lfd f9, 0(r11)
lfd f10, 0(r12)
lfd f11, 16(r4)
fmadd f0, f3, f2, f0
fmul f2, f1, f4
fmadd f0, f11, f10, f0
fmadd f2, f3, f9, f2
fmul f1, f1, f6
stfd f0, 0(r4)
fmadd f0, f11, f8, f2
fmadd f1, f3, f7, f1
stfd f0, 8(r4)
fmadd f0, f11, f5, f1
addi r29, r4, 24
stfd f0, 16(r4)
addi r2, r2, 1
cmpw cr0, r2, r5
or r4, r29, r29
bne cr0, LBB3_2 ; no_exit
uh, yuck. With this patch, we now sink the constant offsets into the loop, producing
this code:
LBB3_1: ; no_exit.preheader
li r2, 0
LBB3_2: ; no_exit
lfd f0, 8(r3)
lfd f1, 8(r4)
fmul f0, f1, f0
lfd f2, 0(r3)
lfd f3, 0(r4)
lfd f4, 32(r3) ;; much nicer.
lfd f5, 64(r3)
lfd f6, 56(r3)
lfd f7, 48(r3)
lfd f8, 40(r3)
lfd f9, 24(r3)
lfd f10, 16(r3)
lfd f11, 16(r4)
fmadd f0, f3, f2, f0
fmul f2, f1, f4
fmadd f0, f11, f10, f0
fmadd f2, f3, f9, f2
fmul f1, f1, f6
stfd f0, 0(r4)
fmadd f0, f11, f8, f2
fmadd f1, f3, f7, f1
stfd f0, 8(r4)
fmadd f0, f11, f5, f1
addi r6, r4, 24
stfd f0, 16(r4)
addi r2, r2, 1
cmpw cr0, r2, r5
or r4, r6, r6
bne cr0, LBB3_2 ; no_exit
This is much nicer as it reduces register pressure in the loop a lot. On X86,
this takes the function from having 9 spilled registers to 2. This should help
some spec programs on X86 (gzip?)
This is currently only enabled with -enable-gep-isel-opt to allow perf testing
tonight.
llvm-svn: 24606
2005-12-05 07:10:48 +00:00
Chris Lattner
8782b782cd
dbg.stoppoint returns a value, don't forget to init it
...
llvm-svn: 24583
2005-12-03 18:50:48 +00:00
Andrew Lenharth
f9b27d7011
bah, must generate all results
...
llvm-svn: 24574
2005-12-02 06:08:08 +00:00
Andrew Lenharth
73420b3795
cycle counter fix
...
llvm-svn: 24573
2005-12-02 04:56:24 +00:00
Chris Lattner
0142afd6c1
Don't remove two-operand, two-result nodes from the binary ops map. These
...
should come from the arbitrary ops map.
This fixes Regression/CodeGen/PowerPC/2005-12-01-Crash.ll
llvm-svn: 24571
2005-12-01 23:14:50 +00:00
Chris Lattner
05b0b4575b
Promote line and column number information for our friendly 64-bit targets.
...
llvm-svn: 24568
2005-12-01 18:21:35 +00:00
Chris Lattner
9d0d715e83
This is a bugfix for SelectNodeTo. In certain situations, we could be
...
selecting a node using a mix of getTargetNode() and SelectNodeTo. Because
SelectNodeTo didn't check the CSE maps for a preexisting node and didn't insert
its result into the CSE maps, we would sometimes miss a CSE opportunity.
This is extremely rare, but worth fixing for completeness.
llvm-svn: 24565
2005-12-01 18:00:57 +00:00
Nate Begeman
006bb04f3a
Support multiple ValueTypes per RegisterClass, needed for upcoming vector
...
work. This change has no effect on generated code.
llvm-svn: 24563
2005-12-01 04:51:06 +00:00
Chris Lattner
be5dd5da19
Make SelectNodeTo return N
...
llvm-svn: 24548
2005-11-30 22:45:14 +00:00
Chris Lattner
c174048430
CALLSEQ_START/END nodes don't get memoized; do not add them in when
...
replaceAllUses'ing.
llvm-svn: 24539
2005-11-30 18:20:52 +00:00
Andrew Lenharth
6ee8566cae
At long last, you can say that f32 isn't supported for setcc
...
llvm-svn: 24537
2005-11-30 17:12:26 +00:00
Nate Begeman
1064d6ec43
First chunk of actually generating vector code for packed types. These
...
changes allow us to generate the following code:
_foo:
li r2, 0
lvx v0, r2, r3
vaddfp v0, v0, v0
stvx v0, r2, r3
blr
for this llvm:
void %foo(<4 x float>* %a) {
entry:
%tmp1 = load <4 x float>* %a
%tmp2 = add <4 x float> %tmp1, %tmp1
store <4 x float> %tmp2, <4 x float>* %a
ret void
}
llvm-svn: 24534
2005-11-30 08:22:07 +00:00
Andrew Lenharth
8d17c70171
add support for custom lowering SINT_TO_FP
...
llvm-svn: 24531
2005-11-30 06:43:03 +00:00
Reid Spencer
3fd1b4c9bf
Fix a problem with llvm-ranlib that (on some platforms) caused the archive
...
file to become corrupted due to interactions between mmap'd memory segments
and file descriptors closing. The problem is completely avoided by using
a third temporary file.
Patch provided by Evan Jones
llvm-svn: 24527
2005-11-30 05:21:10 +00:00
Evan Cheng
11d61613af
Fixed a bug introduced by my last commit: TargetGlobalValues should key on
...
GlobalValue * and index pair. Update getGlobalAddress() for symmetry.
llvm-svn: 24524
2005-11-30 02:49:21 +00:00
Evan Cheng
0e0de2f3f0
Added an index field to GlobalAddressSDNode so it can represent X+12, etc.
...
llvm-svn: 24523
2005-11-30 02:04:11 +00:00
Chris Lattner
435b402e1f
Add support for a new STRING and LOCATION node for line number support, patch
...
contributed by Daniel Berlin, with a few cleanups here and there by me.
llvm-svn: 24515
2005-11-29 06:21:05 +00:00
Nate Begeman
89b049af90
Add the majority of the vector machine value types we expect to support,
...
and make a few changes to the legalization machinery to support more than
16 types.
llvm-svn: 24511
2005-11-29 05:45:29 +00:00
Nate Begeman
d37c13154a
Check in code to scalarize arbitrarily wide packed types for some simple
...
vector operations (load, add, sub, mul).
This allows us to codegen:
void %foo(<4 x float> * %a) {
entry:
%tmp1 = load <4 x float> * %a;
%tmp2 = add <4 x float> %tmp1, %tmp1
store <4 x float> %tmp2, <4 x float> *%a
ret void
}
on ppc as:
_foo:
lfs f0, 12(r3)
lfs f1, 8(r3)
lfs f2, 4(r3)
lfs f3, 0(r3)
fadds f0, f0, f0
fadds f1, f1, f1
fadds f2, f2, f2
fadds f3, f3, f3
stfs f0, 12(r3)
stfs f1, 8(r3)
stfs f2, 4(r3)
stfs f3, 0(r3)
blr
llvm-svn: 24484
2005-11-22 18:16:00 +00:00
Nate Begeman
07890bbec4
Rather than attempting to legalize 1 x float, make sure the SD ISel never
...
generates it. Make MVT::Vector expand-only, and remove the code in
Legalize that attempts to legalize it.
The plan for supporting N x Type is to continually expand it in ExpandOp
until it gets down to 2 x Type, where it will be scalarized into a pair of
scalars.
llvm-svn: 24482
2005-11-22 01:29:36 +00:00
Duraid Madina
f28b3bd8b4
I think I know what you meant here, but just to be safe I'll let you
...
do it. :)
<_sabre_> excuses excuses
llvm-svn: 24471
2005-11-21 14:09:40 +00:00
Chris Lattner
f2991cee1f
Allow target to customize directive used to switch to arbitrary section in SwitchSection,
...
add generic constant pool emitter
llvm-svn: 24464
2005-11-21 08:25:09 +00:00
Chris Lattner
08adbd13ff
increment the function number in SetupMachineFunction
...
llvm-svn: 24461
2005-11-21 08:13:27 +00:00
Chris Lattner
bb644e39c0
Adjust to capitalized asmprinter method names
...
llvm-svn: 24457
2005-11-21 07:51:36 +00:00
Chris Lattner
2ea5c99eca
Add section switching to common code generator code. Add a couple of
...
asserts.
llvm-svn: 24445
2005-11-21 07:06:27 +00:00
Chris Lattner
44c28c22b7
Legalize MERGE_VALUES, expand READCYCLECOUNTER correctly, so it doesn't
...
break control dependence.
llvm-svn: 24437
2005-11-20 22:56:56 +00:00
Andrew Lenharth
627cbd49b1
The first patch of X86 support for read cycle counter
...
llvm-svn: 24429
2005-11-20 21:32:07 +00:00
Chris Lattner
a8d37d748f
more progress towards bug 291 being finished. Patch by Owen Anderson,
...
HAVE_GV case fixed up by me.
llvm-svn: 24428
2005-11-20 03:45:52 +00:00
Chris Lattner
19baba67b5
Unbreak codegen of bools. This should fix the llc/jit/llc-beta failures
...
from last night.
llvm-svn: 24427
2005-11-19 18:40:42 +00:00
Chris Lattner
377bdbff91
Improve Selection DAG printer portability. Patch by Owen Anderson!
...
llvm-svn: 24425
2005-11-19 07:44:09 +00:00
Chris Lattner
a22eae0163
Teach the graph viewer to handle register operands that are zero.
...
llvm-svn: 24421
2005-11-19 06:58:46 +00:00
Chris Lattner
301015a703
Silence a bogus warning
...
llvm-svn: 24420
2005-11-19 05:51:46 +00:00
Chris Lattner
f090f7eb0e
Add some method variants, patch by Evan Cheng
...
llvm-svn: 24418
2005-11-19 01:44:53 +00:00
Nate Begeman
b2e089c31b
Teach LLVM how to scalarize packed types. Currently, this only works on
...
packed types with an element count of 1, although more generic support is
coming. This allows LLVM to turn the following code:
void %foo(<1 x float> * %a) {
entry:
%tmp1 = load <1 x float> * %a;
%tmp2 = add <1 x float> %tmp1, %tmp1
store <1 x float> %tmp2, <1 x float> *%a
ret void
}
Into:
_foo:
lfs f0, 0(r3)
fadds f0, f0, f0
stfs f0, 0(r3)
blr
llvm-svn: 24416
2005-11-19 00:36:38 +00:00
Nate Begeman
127321b14c
Split out the shift code from visitBinary.
...
llvm-svn: 24412
2005-11-18 07:42:56 +00:00
Chris Lattner
45ca1c0194
Allow targets to custom legalize leaf nodes like GlobalAddress.
...
llvm-svn: 24387
2005-11-17 06:41:44 +00:00
Chris Lattner
4ff65ec745
Teach legalize about targetglobaladdress
...
llvm-svn: 24385
2005-11-17 05:52:24 +00:00
Chris Lattner
f2b62f317c
when debugging, lower dbg intrinsics to calls
...
llvm-svn: 24377
2005-11-16 07:22:30 +00:00
Chris Lattner
bba9c372c1
Remove extraneous parens around constants when using a constant expr cast.
...
llvm-svn: 24357
2005-11-15 00:03:16 +00:00
Chris Lattner
dd8eeed096
Teach emitAlignment to handle explicit alignment requests by globals.
...
llvm-svn: 24354
2005-11-14 19:00:06 +00:00
Jeff Cohen
cf1f782a2f
Fix operator precedence bug caught by VC++.
...
llvm-svn: 24318
2005-11-12 00:59:01 +00:00
Andrew Lenharth
de1b5d6baa
added a chain output
...
llvm-svn: 24306
2005-11-11 22:48:54 +00:00
Andrew Lenharth
01aa56397d
continued readcyclecounter support
...
llvm-svn: 24300
2005-11-11 16:47:30 +00:00
Chris Lattner
4f827446da
nuke blank line
...
llvm-svn: 24278
2005-11-10 18:49:46 +00:00
Chris Lattner
c0a1eba0ab
Get rid of casts by #including the right header
...
llvm-svn: 24275
2005-11-10 18:36:17 +00:00
Chris Lattner
747960d21e
Compile C strings to:
...
l1__2E_str_1: ; '.str_1'
.asciz "foo"
not:
.align 0
l1__2E_str_1: ; '.str_1'
.asciz "foo"
llvm-svn: 24273
2005-11-10 18:09:27 +00:00
Chris Lattner
55a6d9067b
add support for .asciz, and enable it by default. If your target assembler doesn't support .asciz, just set AscizDirective to null in your asmprinter.
...
This compiles C strings to:
l1__2E_str_1: ; '.str_1'
.asciz "foo"
instead of:
l1__2E_str_1: ; '.str_1'
.ascii "foo\000"
llvm-svn: 24272
2005-11-10 18:06:33 +00:00
Chris Lattner
bf4f233214
Switch the allnodes list from a vector of pointers to an ilist of nodes. This eliminates the vector, allows constant time removal of a node from a graph, and makes iteration over the all nodes list stable when adding
...
nodes to the graph.
llvm-svn: 24263
2005-11-09 23:47:37 +00:00
Chris Lattner
cd6f0f47f2
Refactor intrinsic lowering stuff out of visitCall
...
llvm-svn: 24261
2005-11-09 19:44:01 +00:00
Chris Lattner
af3aefa10e
Handle the trivial (but common) two-op case more efficiently
...
llvm-svn: 24259
2005-11-09 18:48:57 +00:00
Chris Lattner
619dfaa42b
Nuke noop copies.
...
llvm-svn: 24258
2005-11-09 18:22:42 +00:00
Chris Lattner
41fd6d5d27
Fix CodeGen/X86/shift-folding.ll:test3 on X86
...
llvm-svn: 24256
2005-11-09 16:50:40 +00:00
Chris Lattner
35ecaa76fa
Disable some overly-aggressive checking code. This speeds up the local
...
allocator from 23s to 11s on kc++ in debug mode.
llvm-svn: 24255
2005-11-09 05:28:45 +00:00
Chris Lattner
b7cad90e55
Avoid creating a token factor node in trivially redundant cases. This
...
eliminates almost one node per block in common cases.
llvm-svn: 24254
2005-11-09 05:03:03 +00:00
Chris Lattner
43535a19b1
Handle GEPs a bit more intelligently. Fold constant indices early and
...
turn power-of-two multiplies into shifts early to improve compile time.
llvm-svn: 24253
2005-11-09 04:45:33 +00:00
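For illustration (a hypothetical example, not from the commit), the address arithmetic this affects, assuming 8-byte doubles:
/* &A[i] lowers to base + i*8; folding the constant index and using a
   shift early gives base + 40 for &A[5] and base + (i << 3) for &A[i]. */
double *elem(double *A, long i) { return &A[i]; }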
Chris Lattner
c4d6050db6
Allocate the right amount of memory for this vector up front.
...
llvm-svn: 24252
2005-11-08 23:32:44 +00:00
Chris Lattner
88fa11c3d5
Change the ValueList array for each node to be shared instead of individually allocated. Further, in the common case where a node has a single value, just reference an element from a small array. This is a small compile-time win.
...
llvm-svn: 24251
2005-11-08 23:30:28 +00:00
Chris Lattner
7e4b5d33cb
Switch the operandlist/valuelist from being vectors to being just an array. This saves 12 bytes from SDNode, but doesn't speed things up substantially
...
(our graphs apparently already fit within the cache on my g5). In any case
this reduces memory usage.
llvm-svn: 24249
2005-11-08 22:07:03 +00:00
Chris Lattner
3ba38cba64
Explicitly initialize some instance vars
...
llvm-svn: 24247
2005-11-08 21:54:57 +00:00
Chris Lattner
aba48dd34c
Clean up RemoveDeadNodes significantly, by eliminating the need for a temporary
...
set and eliminating the need to iterate whenever something is removed (which
can be really slow in some cases). Thx to Jim for pointing out something silly
I was getting stuck on. :)
llvm-svn: 24241
2005-11-08 18:52:27 +00:00
Jim Laskey
1d2f26adcc
Let's try ignoring resource utilization on the backward pass.
...
llvm-svn: 24231
2005-11-07 19:08:53 +00:00
Chris Lattner
629ba44e50
Always compute max align.
...
llvm-svn: 24227
2005-11-06 17:43:20 +00:00
Nate Begeman
3ee3e69556
Add the necessary support to the ISel to allow targets to codegen the new
...
alignment information appropriately. Includes code for PowerPC to support
fixed-size allocas with alignment larger than the stack alignment. Support for
arbitrarily aligned dynamic allocas coming soon.
llvm-svn: 24224
2005-11-06 09:00:38 +00:00
Jim Laskey
904dbb4a27
Fix logic bug in finding retry slot in tally.
...
llvm-svn: 24188
2005-11-05 00:01:25 +00:00
Jim Laskey
ded4759d81
Fix a warning
...
llvm-svn: 24187
2005-11-04 18:26:02 +00:00
Jim Laskey
e682b677c1
Scheduling now uses itinerary data.
...
llvm-svn: 24180
2005-11-04 04:05:35 +00:00
Nate Begeman
ee065281e8
Fix a crash that Andrew noticed, and add a pair of braces to unconfuse
...
XCode's indenting.
llvm-svn: 24159
2005-11-02 18:42:59 +00:00
Chris Lattner
17df608719
Fix a source of undefined behavior when dealing with 64-bit types. This
...
may fix PR652. Thanks to Andrew for tracking down the problem.
llvm-svn: 24145
2005-11-02 01:47:04 +00:00
Jim Laskey
5ce0538253
1. Embed and not inherit vector for NodeGroup.
...
2. Iterate operands and not uses (performance.)
3. Some long pending comment changes.
llvm-svn: 24119
2005-10-31 12:49:09 +00:00
Chris Lattner
6871b23d02
Significantly simplify this code and make it more aggressive. Instead of having
...
a special case hack for X86, make the hack more general: if an incoming argument
register is not used in any block other than the entry block, don't copy it to
a vreg. This helps us compile code like this:
%struct.foo = type { int, int, [0 x ubyte] }
int %test(%struct.foo* %X) {
%tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
%tmp = load ubyte* %tmp1 ; <ubyte> [#uses=1]
%tmp2 = cast ubyte %tmp to int ; <int> [#uses=1]
ret int %tmp2
}
to:
_test:
lbz r3, 108(r3)
blr
instead of:
_test:
lbz r2, 108(r3)
or r3, r2, r2
blr
The (dead) copy emitted to copy r3 into a vreg for extra-block uses was
increasing the live range of r3 past the load, preventing the coalescing.
This implements CodeGen/PowerPC/reg-coallesce-simple.ll
llvm-svn: 24115
2005-10-30 19:42:35 +00:00
Chris Lattner
dd5663dfa0
Reduce the number of copies emitted as machine instructions by
...
generating results in vregs that will need them. In the case of something
like this: CopyToReg((add X, Y), reg1024), we no longer emit code like
this:
reg1025 = add X, Y
reg1024 = reg 1025
Instead, we emit:
reg1024 = add X, Y
Whoa! :)
llvm-svn: 24111
2005-10-30 18:54:27 +00:00
Chris Lattner
a70878d4fb
Codegen mul by negative power of two with a shift and negate.
...
This implements test/Regression/CodeGen/PowerPC/mul-neg-power-2.ll,
producing:
_foo:
slwi r2, r3, 1
subfic r3, r2, 63
blr
instead of:
_foo:
mulli r2, r3, -2
addi r3, r2, 63
blr
llvm-svn: 24106
2005-10-30 06:41:49 +00:00
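The arithmetic behind this, as an illustrative sketch (the identity only, not the commit's actual test case): multiplying by a negative power of two is a shift followed by a negate, and the negate can often be folded into a neighboring add or subtract, as the subfic does above.
/* x * -(1 << k) == -(x << k); e.g. x * -2 is -(x << 1). */
int mul_neg2(int x) { return x * -2; }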
Chris Lattner
4b6d583d7a
Fix DSE to not nuke dead stores unless the redundant store is the same
...
VT as the killing one. This fixes PR491
llvm-svn: 24034
2005-10-27 07:10:34 +00:00
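An illustrative C sketch of why the value type matters here (a hypothetical example, not taken from the PR; the pointer cast is only to get two stores of different widths to the same address): the earlier, wider store is not dead, because the later, narrower store overwrites only part of it.
void f(double *p) {
  *p = 1.0;              /* 8-byte store */
  *(float *)p = 2.0f;    /* 4-byte store to the same address; does not kill the store above */
}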
Chris Lattner
d8c5c066a1
Add a simple xform that is useful for bitfield operations.
...
llvm-svn: 24029
2005-10-27 05:06:38 +00:00
Chris Lattner
3c7974aade
Fix some spello's pointed out by Gabor Greif
...
llvm-svn: 24019
2005-10-26 18:41:41 +00:00
Nate Begeman
d8f2a1a0f3
Allow custom lowered FP_TO_SINT ops in the check for whether a larger
...
FP_TO_SINT is preferred to a larger FP_TO_UINT. This seems to be begging
for a TLI.isOperationCustom() helper function.
llvm-svn: 23992
2005-10-25 23:47:25 +00:00
Chris Lattner
3b409a85eb
Clear a bit in this file that was causing a miscompilation of 178.galgel.
...
llvm-svn: 23980
2005-10-25 18:57:30 +00:00
Chris Lattner
476b8ddd55
Alkis agrees that the iterative scan allocator isn't going to be worked on
...
in the future; remove it.
llvm-svn: 23952
2005-10-24 04:14:30 +00:00
Jeff Cohen
11e26b52b2
When a function takes a variable number of pointer arguments, with a zero
...
pointer marking the end of the list, the zero *must* be cast to the pointer
type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will
not extend the zero to 64 bits, thus allowing the upper 32 bits to be
random junk.
The new END_WITH_NULL macro may be used to annotate such a function
so that GCC (version 4 or newer) will detect the use of un-casted zero
at compile time.
llvm-svn: 23888
2005-10-23 04:37:20 +00:00
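A minimal C sketch of the issue and of the annotation described above (illustrative; the function is hypothetical, and END_WITH_NULL is assumed to expand to GCC's sentinel attribute on GCC 4 and newer):
#include <stdarg.h>
#include <stddef.h>

/* A null pointer terminates the argument list; the sentinel attribute makes
   GCC 4+ warn when the final argument is not a (cast) null pointer. */
void take_ptrs(const char *first, ...) __attribute__((sentinel));

void take_ptrs(const char *first, ...) {
  va_list ap;
  const char *p;
  va_start(ap, first);
  for (p = first; p != NULL; p = va_arg(ap, const char *))
    ;  /* consume pointers up to the null sentinel */
  va_end(ap);
}

void caller(void) {
  /* take_ptrs("a", "b", 0);   wrong on x86_64: bare 0 is a 32-bit int */
  take_ptrs("a", "b", (const char *)NULL);   /* correct: zero cast to a pointer */
}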
Andrew Lenharth
4b3932aa89
add TargetExternalSymbol
...
llvm-svn: 23886
2005-10-23 03:40:17 +00:00
Chris Lattner
9faa5b7a9a
BuildSDIV and BuildUDIV only work for i32/i64, but they don't check that
...
the input is that type; this caused a failure on gs on X86 last night.
Move the hard checks into Build[US]Div since that is where decisions like
this should be made.
llvm-svn: 23881
2005-10-22 18:50:15 +00:00
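For background, a hand-written sketch of the kind of expansion BuildSDIV produces (illustrative only; the function name and the choice of divisor are mine): a signed divide by a constant becomes a multiply by a precomputed magic number plus a fixup, and the magic value is tied to a particular bit width, which is why the i32/i64 restriction matters.
/* Signed division by 3 for 32-bit ints via a magic multiply.
   0x55555556 is ceil(2^32 / 3); the high 32 bits of the 64-bit product
   approximate n/3, and 1 is added back when n is negative. */
int div_by_3(int n) {
  long long prod = (long long)n * 0x55555556LL;
  int q = (int)(prod >> 32);    /* high half of the product */
  q += (unsigned)n >> 31;       /* +1 correction for negative n */
  return q;
}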
Chris Lattner
75ea5b10bf
add a case missing from the dag combiner that exposed the failure on
...
2005-10-21-longlonggtu.ll.
llvm-svn: 23875
2005-10-21 21:23:25 +00:00
Chris Lattner
e95b5745c0
Make the coalescer a bit smarter, allowing it to join more live ranges.
...
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2)
with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range.
The resultant range ends up being [0-30:0)[31-60:1).
This fires a lot throughout the test suite (e.g. shrinking bc from
19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about
50 copies eliminated from crafty).
llvm-svn: 23866
2005-10-21 06:49:50 +00:00
Chris Lattner
76c97afbbc
Fix LiveInterval::getOverlapingRanges to take things in the right order
...
(an unused method).
Fix the merger so that it can merge ranges like this [10:12)[16:40) with
[12:38) into [10:40) instead of bogus ranges. This sort of input will be
possible for the merger coming shortly
llvm-svn: 23865
2005-10-21 06:41:30 +00:00
Nate Begeman
8f62cd32ad
Fix a typo in the dag combiner, so that this can work on i64 targets
...
llvm-svn: 23856
2005-10-21 01:51:45 +00:00
Nate Begeman
4dd383120f
Invert the TargetLowering flag that controls divide by constant expansion.
...
Add a new flag to TargetLowering indicating if the target has really cheap
signed division by powers of two, and make ppc use it. This will probably go
away in the future.
Implement some more ISD::SDIV folds in the dag combiner
Remove now dead code in the x86 backend.
llvm-svn: 23853
2005-10-21 00:02:42 +00:00
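For reference, an illustrative C version of the "really cheap" power-of-two case (my sketch, assuming the usual arithmetic right shift on signed int; PPC gets the same rounding-toward-zero effect with its srawi/addze pair): add divisor-1 when the dividend is negative, then shift, so the result truncates like C division.
/* Truncating signed division by 8 without a divide instruction. */
int div_by_8(int x) {
  return (x + ((x >> 31) & 7)) >> 3;
}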
Chris Lattner
b7b75e1b68
Fix a conditional so we don't access past the end of the range. Thanks to
...
Andrew for bringing this to my attn.
llvm-svn: 23850
2005-10-20 22:50:10 +00:00
Nate Begeman
7efe53d90b
Fix a couple bugs in the const div stuff where we'd generate MULHS/MULHU
...
for types that aren't legal, and where a divisor-is-less-than-zero comparison
failed, causing us to drop a subtract.
llvm-svn: 23846
2005-10-20 17:45:03 +00:00
Chris Lattner
a6efeb01f9
don't use llabs, which apparently VC++ doesn't have
...
llvm-svn: 23845
2005-10-20 17:01:00 +00:00