Commit Graph

18465 Commits

Author SHA1 Message Date
Chris Lattner 0e852afb4c Instead of making ZERO_EXTEND_INREG nodes, use the helper method in
SelectionDAG to do the job with AND.  Don't legalize Z_E_I anymore as
it is gone

llvm-svn: 21266
2005-04-13 02:38:47 +00:00
Chris Lattner 2b4e3fca38 Remove all foldings of ZERO_EXTEND_INREG, moving them to work for AND nodes
instead.  Overall, this increases the amount of folding we can do.

llvm-svn: 21265
2005-04-13 02:38:18 +00:00
Chris Lattner 50b63f7015 Add a new helper method which returns the AND that is equivalent to what
ZERO_EXTEND_INREG was.

llvm-svn: 21264
2005-04-13 02:37:19 +00:00
Chris Lattner 71886d95d5 Remove the ZERO_EXTEND_INREG node which is redundant with AND
llvm-svn: 21263
2005-04-13 02:36:41 +00:00
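
The identity behind this series of commits: zero-extend-in-register of the low N bits is just an AND with the (2^N - 1) mask. A minimal C sketch (illustrative, not from the commits):

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 0xDEADBEEF;
  /* zero_extend_inreg(x, i8): keep the low 8 bits, clear the rest */
  uint32_t via_cast = (uint32_t)(uint8_t)x;
  uint32_t via_and  = x & 0xFF;   /* the AND form the nodes are lowered to */
  assert(via_cast == via_and);
  return 0;
}
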
Nate Begeman ca916ba4a0 Fold shift x, [sz]ext(y) -> shift x, y
llvm-svn: 21262
2005-04-12 23:32:28 +00:00
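
Why this fold is safe: sign- or zero-extending the shift amount never changes its numeric value, so the extend node feeding the shift can simply be dropped. A C illustration with hypothetical values:

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 0x12345678;
  uint8_t  y = 5;
  uint32_t ext_amt = (uint32_t)y;       /* [sz]ext(y): same numeric count */
  assert((x << ext_amt) == (x << y));   /* shift x, ext(y) == shift x, y */
  return 0;
}
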
Nate Begeman af1c0f7a00 Fold shift by size larger than type size to undef
Make llvm undef values generate ISD::UNDEF nodes

llvm-svn: 21261
2005-04-12 23:12:17 +00:00
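
For reference, a shift count at or above the type width has no single portable meaning -- hardware disagrees on the result -- which is why the fold produces ISD::UNDEF. A hedged C sketch (illustrative only):

#include <stdio.h>
#include <stdint.h>

int main(void) {
  uint32_t x = 1, n = 40;   /* count >= 32: undefined for uint32_t */
  /* x86 masks the count mod 32, other targets produce 0; C calls it
     undefined, and the DAG now folds it to ISD::UNDEF.  Guard it. */
  uint32_t r = (n < 32) ? (x << n) : 0;
  printf("%u\n", (unsigned)r);
  return 0;
}
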
Nate Begeman 818eb6ddd2 Implement setcc op, -1 sequences
Remove dead setcc op, 0 sequences
Coming later: generalization of op, imm

llvm-svn: 21260
2005-04-12 21:22:28 +00:00
Chris Lattner 0b73a6d8bc promote extload i1 -> extload i8
llvm-svn: 21258
2005-04-12 20:30:10 +00:00
Chris Lattner 9daef352e9 add an argument to allow avoiding deleting phi nodes.
llvm-svn: 21255
2005-04-12 18:52:14 +00:00
Chris Lattner eb958b0e45 add an argument.
llvm-svn: 21254
2005-04-12 18:51:53 +00:00
Chris Lattner 95f16a3ac4 Get rid of this for_each loop
llvm-svn: 21253
2005-04-12 18:51:33 +00:00
Duraid Madina fd469bddac * OK, after changing to use liveIn/liveOut instead of IDEFs,
to avoid redundant mov out3=r44 type instructions, we need to
tell the register allocator the truth about out? registers.

FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:

out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127

and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)

llvm-svn: 21252
2005-04-12 18:42:59 +00:00
Andrew Lenharth 740f93ca10 Get rid of idefs for arguments (oops)
llvm-svn: 21251
2005-04-12 17:47:57 +00:00
Andrew Lenharth 10c6eb4be2 Get rid of idefs for arguments
llvm-svn: 21250
2005-04-12 17:35:16 +00:00
Chris Lattner 14f72885dd Put out* into the allocation order, allowing the register allocator to
coalesce moves into outgoing args.

llvm-svn: 21249
2005-04-12 15:12:51 +00:00
Chris Lattner 6b91767b77 Make sure to realize that calls use their argument regs
llvm-svn: 21248
2005-04-12 15:12:19 +00:00
Duraid Madina b6dfb227b7 stop emitting IDEFs for args - change to using liveIn/liveOut
llvm-svn: 21247
2005-04-12 14:54:44 +00:00
Nate Begeman f67f3bf627 Initial support for allocating condition registers
llvm-svn: 21246
2005-04-12 07:04:16 +00:00
Chris Lattner 6febe5ef40 Fix a crash analyzing MultiSource/Benchmarks/MallocBench/gs
llvm-svn: 21245
2005-04-12 03:59:27 +00:00
Chris Lattner af5b25f139 Remove some redundant checks, add a couple of new ones. This allows us to
compile this:

int foo (unsigned long a, unsigned long long g) {
  return a >= g;
}

To:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        cmpl $0, 12(%esp)
        sete %cl
        andb %al, %cl
        movzbl %cl, %eax
        ret

instead of:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        movzbw %al, %cx
        movl 12(%esp), %edx
        cmpl $0, %edx
        sete %al
        movzbw %al, %ax
        cmpl $0, %edx
        cmove %cx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21244
2005-04-12 02:54:39 +00:00
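
What the improved sequence computes: with a zero-extended to 64 bits, a >= g holds exactly when the high word of g is zero and a >= lo(g) -- the two setcc results the new code ANDs together. A small C check (the helper name is made up):

#include <assert.h>
#include <stdint.h>

/* a >= g with a zero-extended to 64 bits: true iff the high word of g
   is zero AND the low-word comparison a >= lo(g) holds. */
static int cmp_split(uint32_t a, uint64_t g) {
  uint32_t g_lo = (uint32_t)g, g_hi = (uint32_t)(g >> 32);
  return (g_hi == 0) & (a >= g_lo);
}

int main(void) {
  assert(cmp_split(5, 5ULL) == (5ULL >= 5ULL));
  assert(cmp_split(5, 6ULL) == 0);
  assert(cmp_split(5, 1ULL << 32) == 0);
  return 0;
}
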
Chris Lattner aedcabe8db Emit comparisons against the sign bit better. Codegen this:
bool %test1(long %X) {
        %A = setlt long %X, 0
        ret bool %A
}

like this:

test1:
        cmpl $0, 8(%esp)
        setl %al
        movzbl %al, %eax
        ret

instead of:

test1:
        movl 8(%esp), %ecx
        cmpl $0, %ecx
        setl %al
        movzbw %al, %ax
        cmpl $0, 4(%esp)
        setb %dl
        movzbw %dl, %dx
        cmpl $0, %ecx
        cmove %dx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21243
2005-04-12 02:19:10 +00:00
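
The observation being exploited: X <s 0 for a 64-bit value depends only on the sign bit of the high word, so a single 32-bit compare suffices. A C check (the helper name is made up):

#include <assert.h>
#include <stdint.h>

/* X <s 0 for a 64-bit value depends only on the high word's sign bit. */
static int isneg_split(int64_t x) {
  int32_t hi = (int32_t)((uint64_t)x >> 32);
  return hi < 0;
}

int main(void) {
  assert(isneg_split(-1) == 1);
  assert(isneg_split(0) == 0);
  assert(isneg_split(INT64_MIN) == 1);
  return 0;
}
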
Chris Lattner 71ff44e46c Emit long comparison against -1 better. Instead of this (x86):
test2:
        movl 8(%esp), %eax
        notl %eax
        movl 4(%esp), %ecx
        notl %ecx
        orl %eax, %ecx
        cmpl $0, %ecx
        sete %al
        movzbl %al, %eax
        ret

or this (PPC):

_test2:
        nor r2, r4, r4
        nor r3, r3, r3
        or r2, r2, r3
        cntlzw r2, r2
        srwi r3, r2, 5
        blr

Emit this:

test2:
        movl 8(%esp), %eax
        andl 4(%esp), %eax
        cmpl $-1, %eax
        sete %al
        movzbl %al, %eax
        ret

or this:

_test2:
.LBB_test2_0:   ;
        and r2, r4, r3
        cmpwi cr0, r2, -1
        li r3, 1
        li r2, 0
        beq .LBB_test2_2        ;
.LBB_test2_1:   ;
        or r3, r2, r2
.LBB_test2_2:   ;
        blr

It seems like the PPC isel could do better for the R32 == -1 case.

llvm-svn: 21242
2005-04-12 01:46:05 +00:00
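
The trick: a two-word value equals -1 exactly when every bit is set, i.e. when the AND of its halves is all ones -- one and plus one compare, as in the emitted code. A C check (the helper name is made up):

#include <assert.h>
#include <stdint.h>

/* A 64-bit value equals -1 iff every bit is set, i.e. iff the AND of
   its two 32-bit halves is all-ones. */
static int isallones_split(uint64_t x) {
  uint32_t lo = (uint32_t)x, hi = (uint32_t)(x >> 32);
  return (lo & hi) == 0xFFFFFFFFu;
}

int main(void) {
  assert(isallones_split((uint64_t)-1) == 1);
  assert(isallones_split(0) == 0);
  assert(isallones_split(0xFFFFFFFFu) == 0);   /* only the low half set */
  return 0;
}
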
Chris Lattner 87bd69884a canonicalize x <u 1 -> x == 0. On this testcase:
unsigned long long g;
unsigned long foo (unsigned long a) {
  return (a >= g) ? 1 : 0;
}

It changes the ppc code from:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cmplwi cr0, r4, 1
        li r3, 1
        li r5, 0
        blt .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r3, r5, r5
.LBB_foo_4:     ; entry
        cmpwi cr0, r4, 0
        beq .LBB_foo_6  ; entry
.LBB_foo_5:     ; entry
        or r2, r3, r3
.LBB_foo_6:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr


to:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cntlzw r3, r4
        srwi r3, r3, 5
        cmpwi cr0, r4, 0
        beq .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r2, r3, r3
.LBB_foo_4:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr

llvm-svn: 21241
2005-04-12 00:28:49 +00:00
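
The canonicalization in one line: for unsigned x, x < 1 can only hold at x == 0, and the equality form is what lets the PPC isel emit the branchless cntlzw/srwi sequence above. A C check (illustrative):

#include <assert.h>
#include <stdint.h>

int main(void) {
  /* For unsigned x, x <u 1 and x == 0 are the same predicate. */
  for (uint32_t x = 0; x < 100; x++)
    assert((x < 1u) == (x == 0u));
  assert(((uint32_t)-1 < 1u) == ((uint32_t)-1 == 0u));
  return 0;
}
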
Nate Begeman 79a3bea4ca Implement bitfield clears
Implement divide by negative power of two

llvm-svn: 21240
2005-04-12 00:10:02 +00:00
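
One way to see the divide-by-negative-power-of-two lowering: divide by the positive power, using the usual round-toward-zero bias for signed values, then negate. A hedged C sketch -- an illustration of the arithmetic, not the commit's code, and it assumes arithmetic right shift on signed ints:

#include <assert.h>
#include <stdint.h>

/* x / -(1 << k), rounding toward zero as C requires.  The bias makes
   the arithmetic shift round toward zero for negative x. */
static int32_t sdiv_neg_pow2(int32_t x, int k) {
  int32_t bias = (x >> 31) & ((1 << k) - 1);   /* 2^k - 1 if x < 0 */
  return -((x + bias) >> k);
}

int main(void) {
  assert(sdiv_neg_pow2( 7, 2) ==  7 / -4);
  assert(sdiv_neg_pow2(-7, 2) == -7 / -4);
  assert(sdiv_neg_pow2(-8, 2) == -8 / -4);
  return 0;
}
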
Nate Begeman 08698cf644 Update PPC readme. Remove things that are done or aren't ppc specific
llvm-svn: 21232
2005-04-11 20:48:57 +00:00
Chris Lattner 8ffd004920 Teach the dag mechanism that this:
long long test2(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) + B;
}

is equivalent to this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

Now they are both codegen'd to this on ppc:

_test2:
        blr

or this on x86:

test2:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21231
2005-04-11 20:29:59 +00:00
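
Why ADD and OR coincide here: (A << 32) has all-zero low bits and the zero-extended B has all-zero high bits, so no bit position can carry. A C check (illustrative):

#include <assert.h>
#include <stdint.h>

int main(void) {
  /* The two operands share no set bits, so addition cannot carry
     and behaves exactly like OR. */
  uint32_t a = 0xDEADBEEFu, b = 0xCAFEF00Du;
  uint64_t add = ((uint64_t)a << 32) + b;
  uint64_t orr = ((uint64_t)a << 32) | b;
  assert(add == orr);
  return 0;
}
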
Chris Lattner edd197062f Fix expansion of shifts by exactly NVT bits on arch's (like X86) that have
masking shifts.

This fixes the miscompilation of this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

into this:

test1:
        movl 4(%esp), %edx
        movl %edx, %eax
        orl 8(%esp), %eax
        ret

allowing us to generate this instead:

test1:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21230
2005-04-11 20:08:52 +00:00
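
The bug being fixed, in miniature: when an i64 shift is expanded into i32 halves, a count of exactly 32 (NVT bits) must move the low word into the high word outright -- the hardware shifter can't be used, since x86-style shifts mask the count mod 32 and would shift by zero. A C sketch of the count == 32 case (the helper name is made up):

#include <assert.h>
#include <stdint.h>

/* Expanding a 64-bit left shift by exactly 32 into 32-bit halves: the
   low word becomes the high word, and the low word becomes zero. */
static uint64_t shl64_by32(uint32_t lo, uint32_t hi) {
  (void)hi;                      /* the old high word is shifted out */
  uint32_t new_hi = lo, new_lo = 0;
  return ((uint64_t)new_hi << 32) | new_lo;
}

int main(void) {
  uint64_t x = 0x11223344AABBCCDDull;
  assert(shl64_by32((uint32_t)x, (uint32_t)(x >> 32)) == (x << 32));
  return 0;
}
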
Chris Lattner 607bd26b38 IA64 supports this operation.
llvm-svn: 21228
2005-04-11 18:55:36 +00:00
Chris Lattner 67291ea580 ORo sets CR0
llvm-svn: 21227
2005-04-11 15:03:48 +00:00
Chris Lattner f29cc88210 Revert the previous patch, which I didn't mean to check in.
llvm-svn: 21226
2005-04-11 15:03:41 +00:00
Chris Lattner d3dc31009f Fix a minor bug (ORo didn't mark that it set CR0).
Refactor how . instructions are handled.  In particular, instead of passing
the RC flag all the way up the inheritance hierarchy, just make a new tblgen
class 'DOT' which can be added to an instruction definition.

For example, instead of this:

-def AND  : XForm_6<31,  28, 0, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
-                   "and $rA, $rS, $rB">;
-let Defs = [CR0] in
-def ANDo : XForm_6<31,  28, 1, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
-                   "and. $rA, $rS, $rB">;

We now have this:

+def AND  : XForm_6<31,  28, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
+                   "and $rA, $rS, $rB">;
+def ANDo : XForm_6<31,  28, 1, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
+                   "and. $rA, $rS, $rB">, DOT;

llvm-svn: 21225
2005-04-11 15:01:39 +00:00
Duraid Madina 8de7ac092d hmm, should probably change addImm() to take 64-bit arguments one day anyway.
llvm-svn: 21224
2005-04-11 07:16:39 +00:00
Duraid Madina 247def9c2b rename addU64Imm() to addImm64()
llvm-svn: 21223
2005-04-11 07:14:41 +00:00
Nate Begeman bebefac791 Add recording variants of ISD::AND and ISD::OR. This kills almost 1000
(1.5%) instructions in 186.crafty

llvm-svn: 21222
2005-04-11 06:34:10 +00:00
Duraid Madina fb43ef78c5 assorted fixes:
* clean up immediates (we use 14, 22 and 64 bit immediates now. sane.)
  * fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
  * fix nasty thinko - didn't use two-address form of conditional add
    for extending bools to integers, so occasionally there would be
    garbage in the result. it's amazing how often zeros are just
    sitting around in registers ;) - this should fix a bunch of tests.

llvm-svn: 21221
2005-04-11 05:55:56 +00:00
Reid Spencer 7a763bfbc5 Ensure that the arguments passed to sys::Program::ExecuteAndWait include
the program name as the first argument. Thanks go to Markus Oberhumer for
noticing this problem.

llvm-svn: 21220
2005-04-11 05:48:04 +00:00
Jeff Cohen a3b1458175 Eliminate tabs
llvm-svn: 21216
2005-04-11 03:44:22 +00:00
Jeff Cohen ecbfa98ce7 Eliminate major source of VC++ "possible loss of data" warnings.
llvm-svn: 21215
2005-04-11 03:38:28 +00:00
Nate Begeman add0c63ad2 Fix libcall code to not pass a NULL Chain to LowerCallTo
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
  when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.

llvm-svn: 21214
2005-04-11 03:01:51 +00:00
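
A sketch of that i64 multiply expansion: split into 32-bit halves, take the full 64-bit product of the low halves (the high part of which is exactly what ISD::MULHU computes), and fold the two cross products into the high word, mod 2^64. Illustrative C, helper name made up:

#include <assert.h>
#include <stdint.h>

/* Expanding i64 multiply on a 32-bit target: the low product needs the
   high half of a_lo*b_lo -- what ISD::MULHU provides -- plus the two
   cross products folded into the high word (truncated mod 2^64). */
static uint64_t mul64_expand(uint64_t a, uint64_t b) {
  uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
  uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);
  uint64_t full = (uint64_t)a_lo * b_lo;          /* MUL + MULHU pair */
  uint32_t lo = (uint32_t)full;
  uint32_t hi = (uint32_t)(full >> 32)            /* MULHU result */
              + a_lo * b_hi + a_hi * b_lo;        /* cross terms */
  return ((uint64_t)hi << 32) | lo;
}

int main(void) {
  assert(mul64_expand(0x123456789ABCDEFull, 0xFEDCBA987654321ull)
         == 0x123456789ABCDEFull * 0xFEDCBA987654321ull);
  assert(mul64_expand(0xFFFFFFFFFFFFFFFFull, 3) == 0xFFFFFFFFFFFFFFFDull);
  return 0;
}
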
Chris Lattner e2427c9afc Don't bother sign/zext_inreg'ing the result of an and operation if we know
the result doesn't change as a result of the extend.

This improves codegen for Alpha on this testcase:

int %a(ushort* %i) {
        %tmp.1 = load ushort* %i
        %tmp.2 = cast ushort %tmp.1 to int
        %tmp.4 = and int %tmp.2, 1
        ret int %tmp.4
}

Generating:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        ret $31,($26),1

instead of:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        addl $0,0,$0
        ret $31,($26),1

btw, alpha really should switch to livein/outs for args :)

llvm-svn: 21213
2005-04-10 23:37:16 +00:00
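
The redundancy in miniature: after anding with 1 the high bits are already zero, so the value is unchanged by a further sign- or zero-extend-in-register -- the addl in the old Alpha output did nothing. A C check (illustrative):

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint32_t t = 0xABCDu & 1u;                  /* bits [31:1] known zero */
  assert((uint32_t)(int32_t)(int16_t)t == t); /* sext_inreg i16: no-op */
  assert((t & 0xFFFFu) == t);                 /* zext_inreg i16: no-op */
  return 0;
}
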
Chris Lattner a3b7ef05f4 Teach legalize to deal with targets that don't support some SEXTLOAD/ZEXTLOADs
llvm-svn: 21212
2005-04-10 22:54:25 +00:00
Chris Lattner 672fe7267b The first argument to ExecuteAndWait should be the program name, as pointed
out by Markus F.X.J. Oberhumer.

llvm-svn: 21211
2005-04-10 20:59:38 +00:00
Chris Lattner 751cc5f49f fix this testcase so the regex doesn't match the function name
llvm-svn: 21210
2005-04-10 20:45:35 +00:00
Chris Lattner 391a351ede don't zextload fp values!
llvm-svn: 21209
2005-04-10 17:40:35 +00:00
Duraid Madina 7b0287b78d * store immediate values as int64_t, not int. come on, we should be happy
when there are immediates, let's not worry about the memory overhead of
this :)

* add addU64Imm(uint64_t val) to machineinstrbuilder

(seriously: this seems required to support 64-bit immediates cleanly. if it
_really_ gets on your nerves, feel free to pull it out ;) )

coming up next week: "all your floating point constants are belong to us"

llvm-svn: 21208
2005-04-10 09:18:55 +00:00
Nate Begeman 492370311d Fix another fixme: factor out the constant fp generation code.
llvm-svn: 21207
2005-04-10 06:06:10 +00:00
Nate Begeman 941a01802f Fix 64-bit argument loading that straddles the args-in-regs / args-on-stack
boundary.

llvm-svn: 21206
2005-04-10 05:53:14 +00:00
Chris Lattner c53cd501b5 Until we have a dag combiner, promote using zextload's instead of extloads.
This gives the optimizer a bit of information about the top part of the
value.

llvm-svn: 21205
2005-04-10 04:33:47 +00:00
Chris Lattner f74c794ccf Fold zext_inreg(zextload), likewise for sext's
llvm-svn: 21204
2005-04-10 04:33:08 +00:00
Chris Lattner f2bff92411 add a simple xform
llvm-svn: 21203
2005-04-10 04:04:49 +00:00