Commit Graph

18331 Commits

Author SHA1 Message Date
Chris Lattner 3ff0e11294 implementing shifting of literal integers
llvm-svn: 21336
2005-04-19 01:17:35 +00:00
Chris Lattner 101fc501d0 Add initial lexer and parser support for shifting values. Every use of this
will lead to it being rejected though.

llvm-svn: 21335
2005-04-19 01:11:03 +00:00
Nate Begeman 2331c061ee Next round of PPC CR optimizations. For the following code:
int %bar(float %a, float %b, float %c, float %d) {
entry:
    %tmp.1 = setlt float %a, %d
    %tmp.2 = setlt float %b, %d
    %or = or bool %tmp.1, %tmp.2
    %tmp.3 = setgt float %c, %d
    %tmp.4 = or bool %or, %tmp.3
    %tmp.5 = and bool %tmp.4, true
    %retval = cast bool %tmp.5 to int
    ret int %retval
}

We now emit:

_bar:
.LBB_bar_0:     ; entry
        fcmpu cr0, f1, f4
        fcmpu cr1, f2, f4
        cror 0, 0, 4
        fcmpu cr1, f3, f4
        cror 28, 0, 5
        mfcr r2
        rlwinm r3, r2, 29, 31, 31
        blr

Instead of:

_bar:
.LBB_bar_0:     ; entry
        fcmpu cr7, f1, f4
        mfcr r2
        rlwinm r2, r2, 29, 31, 31
        fcmpu cr7, f2, f4
        mfcr r3
        rlwinm r3, r3, 29, 31, 31
        or r2, r2, r3
        fcmpu cr7, f3, f4
        mfcr r3
        rlwinm r3, r3, 30, 31, 31
        or r3, r2, r3
        blr

llvm-svn: 21321
2005-04-18 07:48:09 +00:00
Chris Lattner ee84413730 silence a bogus warning
llvm-svn: 21320
2005-04-18 05:26:21 +00:00
Chris Lattner b61ecb5875 Fold setcc of MVT::i1 operands into logical operations
llvm-svn: 21319
2005-04-18 04:48:12 +00:00
Chris Lattner 6d40fd01fe Another minor simplification: handle setcc (zero_extend x), c -> setcc(x, c')
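
As a rough C-level illustration (not from the original commit): comparing a
zero-extended value against a constant is the same as comparing the narrow
value against the truncated constant, whenever the constant fits in the
narrow type.

int wide(unsigned char x)   { return (unsigned)x == 200u; }        /* setcc (zero_extend x), 200 */
int narrow(unsigned char x) { return x == (unsigned char)200; }    /* setcc x, 200: same result */
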
llvm-svn: 21318
2005-04-18 04:30:45 +00:00
Chris Lattner 868d473009 Another simple xform
llvm-svn: 21317
2005-04-18 04:11:19 +00:00
Chris Lattner bd22d83d15 Fold:
// (X != 0) | (Y != 0) -> (X|Y != 0)
        // (X == 0) & (Y == 0) -> (X|Y == 0)

Compiling this:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

to this:

_bar:
        or r2, r3, r4
        addic r3, r2, -1
        subfe r3, r3, r2
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

llvm-svn: 21316
2005-04-18 03:59:53 +00:00
Chris Lattner d929f8bcd3 Make the AND elimination operation recursive and significantly more powerful,
eliminating an and for Nate's testcase:

int %bar(int %a, int %b) {
        entry:
        %tmp.1 = setne int %a, 0
        %tmp.2 = setne int %b, 0
        %tmp.3 = or bool %tmp.1, %tmp.2
        %retval = cast bool %tmp.3 to int
        ret int %retval
        }

generating:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r3, r2, r3
        blr

instead of:

_bar:
        addic r2, r3, -1
        subfe r2, r2, r3
        addic r3, r4, -1
        subfe r3, r3, r4
        or r2, r2, r3
        rlwinm r3, r2, 0, 31, 31
        blr

llvm-svn: 21315
2005-04-18 03:48:41 +00:00
Nate Begeman 602a45f415 Change codegen for setcc to read the bit directly out of the condition
register.  Added support in the .td file for the g5-specific variant
  of cr -> gpr moves that executes faster, but we currently don't
  generate it.

llvm-svn: 21314
2005-04-18 02:43:24 +00:00
Chris Lattner ed5189da51 Add support for targets that require stubs for external functions.
llvm-svn: 21313
2005-04-18 01:44:27 +00:00
Chris Lattner f281f791b5 Handle ExternalSymbol operands in the PPC JIT
llvm-svn: 21312
2005-04-18 00:46:10 +00:00
Nate Begeman f24225d4d7 Update dejagnu tests to use the new pattern isel flag
llvm-svn: 21311
2005-04-16 04:25:48 +00:00
Jeff Cohen df4f480498 Add CondPropagate.cpp to VC++ Transforms project
llvm-svn: 21310
2005-04-16 02:51:44 +00:00
Nate Begeman 779c5cbb44 Make pattern isel default for ppc
Add new ppc beta option related to using condition registers
Make pattern isel control flag (-enable-pattern-isel) global and tristate
  0 == off
  1 == on
  2 == target default

llvm-svn: 21309
2005-04-15 22:12:16 +00:00
Chris Lattner e0a9d042e2 new pass
llvm-svn: 21307
2005-04-15 21:13:16 +00:00
Chris Lattner 16a50fd0a0 a new simple pass, which will be extended to be more useful in the future.
This pass forwards branches through conditions when it can show that the
condition is either always true or always false for a predecessor.  This
currently only handles the simplest cases, but is successful at threading
across 2489 branches and 65 switch instructions in 176.gcc, which isn't bad.
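
A minimal C sketch of the kind of branch this threads (setup/do_true/do_false
are hypothetical helpers, used only for illustration):

void setup(void); void do_true(void); void do_false(void);

void example(int p) {
    if (p) {
        setup();
        /* on this edge p is known true, so the test below is redundant and
           control can be forwarded straight to do_true() */
    }
    if (p)
        do_true();
    else
        do_false();
}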

llvm-svn: 21306
2005-04-15 19:28:32 +00:00
Chris Lattner d041242256 add a new prototype
llvm-svn: 21305
2005-04-15 19:24:49 +00:00
Chris Lattner 8579c2c474 new testcase
llvm-svn: 21304
2005-04-15 19:24:36 +00:00
Andrew Lenharth 00ce283b3f fix calls
llvm-svn: 21303
2005-04-14 17:34:20 +00:00
Andrew Lenharth 7ae3aba5aa a 21264 fix, and fix the operator precedence on an and -> zap check (should fix hundreds of test cases)
llvm-svn: 21302
2005-04-14 16:24:00 +00:00
Andrew Lenharth 10b7bceb8e added a random and mask test
llvm-svn: 21301
2005-04-14 16:17:49 +00:00
Duraid Madina 0a7c2b9078 print negative 64 bit immediates as negative numbers, makes things a little
easier on the eyes, not that numbers like 18446744073709541376 are bad or
anything
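
For illustration only (a standalone C snippet, not the asm printer code,
assuming two's complement), the same 64-bit pattern printed both ways:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t imm = (int64_t)18446744073709541376ULL;       /* bit pattern of -10240 */
    printf("unsigned: %llu\n", (unsigned long long)imm);  /* 18446744073709541376 */
    printf("signed:   %lld\n", (long long)imm);           /* -10240 */
    return 0;
}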

llvm-svn: 21300
2005-04-14 10:08:01 +00:00
Duraid Madina dfbbcc098b oops, this stopped us turning movl r4=0xFFFFFFFF;; and rX, r4 into zxt4
llvm-svn: 21299
2005-04-14 10:06:35 +00:00
Nate Begeman 53d3eccbe2 Implement multi-way branches through logical ops on condition registers.
This can generate considerably shorter code, reducing the size of crafty
by almost 1%.  Also fix the printing of mcrf.  The code is currently
disabled until it gets a bit more testing, but should work as-is.

llvm-svn: 21298
2005-04-14 09:45:08 +00:00
Nate Begeman 80c095f422 Add a couple missing transforms in getSetCC that were triggering assertions
in the PPC Pattern ISel

llvm-svn: 21297
2005-04-14 08:56:52 +00:00
Duraid Madina f6b666fb06 we have zextloads, not sextloads!
llvm-svn: 21296
2005-04-14 08:37:32 +00:00
Nate Begeman 65a82c562e Add the necessary support to codegen condition register logical ops with
register allocated condition registers.  Make sure that the printed
  output is gas compatible.

llvm-svn: 21295
2005-04-14 03:20:38 +00:00
Nate Begeman be21011911 Start allocating condition registers. Almost all explicit uses of CR0 are
now gone.  Next step is to get rid of the remaining ones and then start
allocating bools to CRs where appropriate.

llvm-svn: 21294
2005-04-13 23:15:44 +00:00
Nate Begeman dd1bb81d04 Implement the fold shift X, zext(Y) -> shift X, Y at the target level,
where it is safe to do so.

llvm-svn: 21293
2005-04-13 22:14:14 +00:00
Nate Begeman 04ae873648 Add CodeGen tests for the recent SelectionDAG transforms
llvm-svn: 21292
2005-04-13 21:45:13 +00:00
Nate Begeman 4ddd81657b Disable the broken fold of shift + sz[ext] for now
Move the transform for select (a < 0) ? b : 0 into the dag from ppc isel
Enable the dag to fold and (setcc, 1) -> setcc for targets where setcc
  always produces zero or one.
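
A C sketch of the third point (assuming, as stated, that setcc on the target
already yields exactly zero or one):

int f(int a, int b) {
    return (a < b) & 1;   /* the "and 1" is a no-op when the compare already produces 0 or 1 */
}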

llvm-svn: 21291
2005-04-13 21:23:31 +00:00
Chris Lattner 56d177a344 fix an infinite loop
llvm-svn: 21289
2005-04-13 20:06:29 +00:00
Chris Lattner e3d17d8225 fix some serious miscompiles on ia64, alpha, and ppc
llvm-svn: 21288
2005-04-13 19:53:40 +00:00
Chris Lattner 8c3d409dc7 avoid work when possible, perhaps fix the problem nate and andrew are seeing
with != 0 comparisons vanishing.

llvm-svn: 21287
2005-04-13 19:41:05 +00:00
Andrew Lenharth 93341a0f82 WOW, function calls still seem to work after this.
llvm-svn: 21286
2005-04-13 17:17:28 +00:00
Andrew Lenharth c3621316ee prepare for func call optimization
llvm-svn: 21285
2005-04-13 16:19:50 +00:00
Andrew Lenharth c2ff402c84 regression case for faster call sequence
llvm-svn: 21284
2005-04-13 16:16:01 +00:00
Andrew Lenharth 714dd6a0ec check that casts still use zap
llvm-svn: 21283
2005-04-13 13:00:16 +00:00
Duraid Madina 2f2312575b * add the shladd instruction
* fold left shifts of 1, 2, 3 or 4 bits into adds

  This doesn't save much now, but should get a serious workout once
  multiplies by constants get converted to shift/add/sub sequences.
  Hold on! :)
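
A hypothetical C example of the shape this catches (the function is made up
for illustration):

long scaled_add(long base, long idx) {
    return base + (idx << 3);   /* shift by 1-4 feeding an add -> one shladd on IA-64 */
}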

llvm-svn: 21282
2005-04-13 06:12:04 +00:00
Andrew Lenharth c7287c8eda add matches for SxADDL and company, as well as simplify the SxADDQ code
llvm-svn: 21281
2005-04-13 05:19:55 +00:00
Chris Lattner e69ad5fd12 Implement expansion of unsigned i64 -> FP.
Note that this probably only works for little endian targets, but is enough
to get siod working :)

llvm-svn: 21280
2005-04-13 05:09:42 +00:00
Duraid Madina e7ef27bcfe * if ANDing with a constant of the form:
0x00000..00FFF..FF
      ^      ^
      ^      ^
    any number of
    0's followed by
    some number of
    1's

    then we use dep.z to just paste zeros over the input. For the special
    cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
    because we're all about readability!!!
    that's what it's about!! readability!

  *twitch* ;D
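
A sketch of the constant test in C (my formulation, not the selector's actual
code): a mask of that shape is one less than a power of two, so

#include <stdint.h>

/* nonzero if c looks like 0...01...1, i.e. a run of low one bits */
static int isLowOnesMask(uint64_t c) {
    return c != 0 && (c & (c + 1)) == 0;   /* e.g. 0xFF, 0xFFFF, 0x7FFFFFFF */
}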

llvm-svn: 21279
2005-04-13 04:50:54 +00:00
Andrew Lenharth 5cee4ef049 added s4addl matching test
llvm-svn: 21277
2005-04-13 04:41:06 +00:00
Andrew Lenharth 8eb82fb524 added all flavors of zap for anding
llvm-svn: 21276
2005-04-13 03:47:03 +00:00
Chris Lattner 0efd77eda7 Make expansion of uint->fp cast assert out instead of infinitely recurse.
llvm-svn: 21275
2005-04-13 03:42:14 +00:00
Chris Lattner 60c23bd169 Fix some mysteriously missing {}'s which cause the miscompilation of
Olden/mst, Ptrdist/bc, Obsequi, etc.

llvm-svn: 21274
2005-04-13 03:29:53 +00:00
Chris Lattner b1f25ac188 add back the optimization that Nate added for shl X, (zext_inreg y)
llvm-svn: 21273
2005-04-13 02:58:13 +00:00
Chris Lattner 39844ac337 Oops, remove these too.
llvm-svn: 21272
2005-04-13 02:47:57 +00:00
Chris Lattner e0efd1fa72 remove one more occurrence of this that snuck in
llvm-svn: 21271
2005-04-13 02:46:17 +00:00
Chris Lattner 857624f47a Remove support for ZERO_EXTEND_INREG. This pessimizes code, generating stuff
like this:

        ldah $1,1($31)
        lda $1,-1($1)
        and $0,$1,$24

instead of this:

        zap $0,252,$24

To get this back, the selector should recognize the ISD::AND case where this
happens and emit the appropriate ZAP instruction.

llvm-svn: 21270
2005-04-13 02:43:40 +00:00
Chris Lattner 7f4c4179a6 Remove special handling of ZERO_EXTEND_INREG. This pessimizes code, causing
things like this:

       mov r9 = 65535;;
       and r8 = r8, r9;;

To be emitted instead of:

        zxt2 r8 = r8;;

To get this back, the selector for ISD::AND should recognize this case.

llvm-svn: 21269
2005-04-13 02:41:52 +00:00
Chris Lattner 83075510ee Eliminate handling of ZERO_EXTEND_INREG. This causes the PPC backend to emit
andi instructions instead of rlwinm instructions for zero extend, but they
seem like they would take the same time.

llvm-svn: 21268
2005-04-13 02:40:26 +00:00
Chris Lattner 248fe6bda2 Z_E_I is gone
llvm-svn: 21267
2005-04-13 02:39:05 +00:00
Chris Lattner 0e852afb4c Instead of making ZERO_EXTEND_INREG nodes, use the helper method in
SelectionDAG to do the job with AND.  Don't legalize Z_E_I anymore as
it is gone

llvm-svn: 21266
2005-04-13 02:38:47 +00:00
Chris Lattner 2b4e3fca38 Remove all foldings of ZERO_EXTEND_INREG, moving them to work for AND nodes
instead.  Overall, this increases the amount of folding we can do.

llvm-svn: 21265
2005-04-13 02:38:18 +00:00
Chris Lattner 50b63f7015 Add a new helper method which returns the and that is equivalent to what
ZERO_EXTEND_INREG was.
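
In effect (a sketch, not the actual SelectionDAG interface), zero-extending
in-register to N bits is just an AND with the low-N-ones constant:

unsigned zext_inreg8 (unsigned x) { return x & 0xFF;   }   /* ZERO_EXTEND_INREG to i8  */
unsigned zext_inreg16(unsigned x) { return x & 0xFFFF; }   /* ZERO_EXTEND_INREG to i16 */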

llvm-svn: 21264
2005-04-13 02:37:19 +00:00
Chris Lattner 71886d95d5 Remove the ZERO_EXTEND_INREG node which is redundant with AND
llvm-svn: 21263
2005-04-13 02:36:41 +00:00
Nate Begeman ca916ba4a0 Fold shift x, [sz]ext(y) -> shift x, y
llvm-svn: 21262
2005-04-12 23:32:28 +00:00
Nate Begeman af1c0f7a00 Fold shift by size larger than type size to undef
Make llvm undef values generate ISD::UNDEF nodes
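
For reference, the same thing is already undefined at the C level, which is
why folding to undef is legal (illustration only, assuming 32-bit unsigned):

unsigned f(unsigned x, unsigned n) {
    /* if n >= 32, the bit width of x, the result is undefined in C just as
       the folded node is undef in the DAG */
    return x << n;
}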

llvm-svn: 21261
2005-04-12 23:12:17 +00:00
Nate Begeman 818eb6ddd2 Implement setcc op, -1 sequences
Remove dead setcc op, 0 sequences
Coming later: generalization of op, imm

llvm-svn: 21260
2005-04-12 21:22:28 +00:00
Chris Lattner 0b73a6d8bc promote extload i1 -> extload i8
llvm-svn: 21258
2005-04-12 20:30:10 +00:00
Chris Lattner 9daef352e9 add an argument to allow avoiding deleting phi nodes.
llvm-svn: 21255
2005-04-12 18:52:14 +00:00
Chris Lattner eb958b0e45 add an argument.
llvm-svn: 21254
2005-04-12 18:51:53 +00:00
Chris Lattner 95f16a3ac4 Get rid of this for_each loop
llvm-svn: 21253
2005-04-12 18:51:33 +00:00
Duraid Madina fd469bddac * OK, after changing to use liveIn/liveOut instead of IDEFs,
to avoid redundant mov out3=r44 type instructions, we need to
tell the register allocator the truth about out? registers.

FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:

out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127

and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)
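
A sketch of the window computation (names and array layout are mine, purely
illustrative):

enum { NumOuts = 8, WindowSize = 96 };

/* order[] holds out7, out6, ..., out0, r32, r33, ..., r127 (8 + 96 entries).
   With k out registers in use the window starts k slots before r32, so the
   used outs come first and the window still spans 96 registers. */
void takeWindow(const unsigned order[NumOuts + WindowSize], unsigned outsUsed,
                unsigned window[WindowSize]) {
    unsigned start = NumOuts - outsUsed;
    for (unsigned i = 0; i != WindowSize; ++i)
        window[i] = order[start + i];
}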

llvm-svn: 21252
2005-04-12 18:42:59 +00:00
Andrew Lenharth 740f93ca10 Get rid of idefs for arguments (oops)
llvm-svn: 21251
2005-04-12 17:47:57 +00:00
Andrew Lenharth 10c6eb4be2 Get rid of idefs for arguments
llvm-svn: 21250
2005-04-12 17:35:16 +00:00
Chris Lattner 14f72885dd Put out* into the allocation order, allowing the register allocator to
coalesce moves into outgoing args.

llvm-svn: 21249
2005-04-12 15:12:51 +00:00
Chris Lattner 6b91767b77 Make sure to realize that calls use their argument regs
llvm-svn: 21248
2005-04-12 15:12:19 +00:00
Duraid Madina b6dfb227b7 stop emitting IDEFs for args - change to using liveIn/liveOut
llvm-svn: 21247
2005-04-12 14:54:44 +00:00
Nate Begeman f67f3bf627 Initial support for allocating condition registers
llvm-svn: 21246
2005-04-12 07:04:16 +00:00
Chris Lattner 6febe5ef40 Fix a crash analyzing MultiSource/Benchmarks/MallocBench/gs
llvm-svn: 21245
2005-04-12 03:59:27 +00:00
Chris Lattner af5b25f139 Remove some redundant checks, add a couple of new ones. This allows us to
compile this:

int foo (unsigned long a, unsigned long long g) {
  return a >= g;
}

To:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        cmpl $0, 12(%esp)
        sete %cl
        andb %al, %cl
        movzbl %cl, %eax
        ret

instead of:

foo:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setae %al
        movzbw %al, %cx
        movl 12(%esp), %edx
        cmpl $0, %edx
        sete %al
        movzbw %al, %ax
        cmpl $0, %edx
        cmove %cx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21244
2005-04-12 02:54:39 +00:00
Chris Lattner aedcabe8db Emit comparisons against the sign bit better. Codegen this:
bool %test1(long %X) {
        %A = setlt long %X, 0
        ret bool %A
}

like this:

test1:
        cmpl $0, 8(%esp)
        setl %al
        movzbl %al, %eax
        ret

instead of:

test1:
        movl 8(%esp), %ecx
        cmpl $0, %ecx
        setl %al
        movzbw %al, %ax
        cmpl $0, 4(%esp)
        setb %dl
        movzbw %dl, %dx
        cmpl $0, %ecx
        cmove %dx, %ax
        movzbl %al, %eax
        ret

llvm-svn: 21243
2005-04-12 02:19:10 +00:00
Chris Lattner 71ff44e46c Emit long comparison against -1 better. Instead of this (x86):
test2:
        movl 8(%esp), %eax
        notl %eax
        movl 4(%esp), %ecx
        notl %ecx
        orl %eax, %ecx
        cmpl $0, %ecx
        sete %al
        movzbl %al, %eax
        ret

or this (PPC):

_test2:
        nor r2, r4, r4
        nor r3, r3, r3
        or r2, r2, r3
        cntlzw r2, r2
        srwi r3, r2, 5
        blr

Emit this:

test2:
        movl 8(%esp), %eax
        andl 4(%esp), %eax
        cmpl $-1, %eax
        sete %al
        movzbl %al, %eax
        ret

or this:

_test2:
.LBB_test2_0:   ;
        and r2, r4, r3
        cmpwi cr0, r2, -1
        li r3, 1
        li r2, 0
        beq .LBB_test2_2        ;
.LBB_test2_1:   ;
        or r3, r2, r2
.LBB_test2_2:   ;
        blr

it seems like the PPC isel could do better for the R32 == -1 case.

llvm-svn: 21242
2005-04-12 01:46:05 +00:00
Chris Lattner 87bd69884a canonicalize x <u 1 -> x == 0. On this testcase:
unsigned long long g;
unsigned long foo (unsigned long a) {
  return (a >= g) ? 1 : 0;
}

It changes the ppc code from:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cmplwi cr0, r4, 1
        li r3, 1
        li r5, 0
        blt .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r3, r5, r5
.LBB_foo_4:     ; entry
        cmpwi cr0, r4, 0
        beq .LBB_foo_6  ; entry
.LBB_foo_5:     ; entry
        or r2, r3, r3
.LBB_foo_6:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr


to:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
        stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(L_g$non_lazy_ptr-"L00000$pb")
        lwz r2, lo16(L_g$non_lazy_ptr-"L00000$pb")(r2)
        lwz r4, 0(r2)
        lwz r2, 4(r2)
        cmplw cr0, r3, r2
        li r2, 1
        li r3, 0
        bge .LBB_foo_2  ; entry
.LBB_foo_1:     ; entry
        or r2, r3, r3
.LBB_foo_2:     ; entry
        cntlzw r3, r4
        srwi r3, r3, 5
        cmpwi cr0, r4, 0
        beq .LBB_foo_4  ; entry
.LBB_foo_3:     ; entry
        or r2, r3, r3
.LBB_foo_4:     ; entry
        rlwinm r3, r2, 0, 31, 31
        lwz r11, 8(r1)
        mtlr r11
        blr

llvm-svn: 21241
2005-04-12 00:28:49 +00:00
Nate Begeman 79a3bea4ca Implement bitfield clears
Implement divide by negative power of two

llvm-svn: 21240
2005-04-12 00:10:02 +00:00
Nate Begeman 08698cf644 Update PPC readme. Remove things that are done or aren't ppc specific
llvm-svn: 21232
2005-04-11 20:48:57 +00:00
Chris Lattner 8ffd004920 Teach the dag mechanism that this:
long long test2(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) + B;
}

is equivalent to this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

Now they are both codegen'd to this on ppc:

_test2:
        blr

or this on x86:

test2:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21231
2005-04-11 20:29:59 +00:00
Chris Lattner edd197062f Fix expansion of shifts by exactly NVT bits on architectures (like X86) that have
masking shifts.

This fixes the miscompilation of this:

long long test1(unsigned A, unsigned B) {
        return ((unsigned long long)A << 32) | B;
}

into this:

test1:
        movl 4(%esp), %edx
        movl %edx, %eax
        orl 8(%esp), %eax
        ret

allowing us to generate this instead:

test1:
        movl 4(%esp), %edx
        movl 8(%esp), %eax
        ret

llvm-svn: 21230
2005-04-11 20:08:52 +00:00
Chris Lattner 607bd26b38 IA64 supports this operation.
llvm-svn: 21228
2005-04-11 18:55:36 +00:00
Chris Lattner 67291ea580 ORo sets CR0
llvm-svn: 21227
2005-04-11 15:03:48 +00:00
Chris Lattner f29cc88210 Revert the previous patch, which I didn't mean to check in.
llvm-svn: 21226
2005-04-11 15:03:41 +00:00
Chris Lattner d3dc31009f Fix a minor bug (ORo didn't mark that it set CR0).
Refactor how . instructions are handled.  In particular, instead of passing
the RC flag all the way up the inheritance hierarchy, just make a new tblgen
class 'DOT' which can be added to an instruction definition.

For example, instead of this:

-def AND  : XForm_6<31,  28, 0, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
-                   "and $rA, $rS, $rB">;
-let Defs = [CR0] in
-def ANDo : XForm_6<31,  28, 1, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
-                   "and. $rA, $rS, $rB">;

We now have this:

+def AND  : XForm_6<31,  28, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
                    "and $rA, $rS, $rB">;

llvm-svn: 21225
2005-04-11 15:01:39 +00:00
Duraid Madina 8de7ac092d hmm, should probably change addImm() to take 64-bit arguments one day anyway.
llvm-svn: 21224
2005-04-11 07:16:39 +00:00
Duraid Madina 247def9c2b rename addU64Imm() to addImm64()
llvm-svn: 21223
2005-04-11 07:14:41 +00:00
Nate Begeman bebefac791 Add recording variants of ISD::AND and ISD::OR. This kills almost 1000
(1.5%) instructions in 186.crafty

llvm-svn: 21222
2005-04-11 06:34:10 +00:00
Duraid Madina fb43ef78c5 assorted fixes:
* clean up immediates (we use 14, 22 and 64 bit immediates now. sane.)
  * fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
  * fix nasty thinko - didn't use two-address form of conditional add
    for extending bools to integers, so occasionally there would be
    garbage in the result. it's amazing how often zeros are just
    sitting around in registers ;) - this should fix a bunch of tests.

llvm-svn: 21221
2005-04-11 05:55:56 +00:00
Reid Spencer 7a763bfbc5 Ensure that the arguments passed to sys::Program::ExecuteAndWait include
the program name as the first argument. Thanks go to Markus Oberhumer for
noticing this problem.

llvm-svn: 21220
2005-04-11 05:48:04 +00:00
Jeff Cohen a3b1458175 Eliminate tabs
llvm-svn: 21216
2005-04-11 03:44:22 +00:00
Jeff Cohen ecbfa98ce7 Eliminate major source of VC++ "possible loss of data" warnings.
llvm-svn: 21215
2005-04-11 03:38:28 +00:00
Nate Begeman add0c63ad2 Fix libcall code to not pass a NULL Chain to LowerCallTo
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
  when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.
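
A worked C sketch of that last expansion (mulhu32 stands in for the target's
high-half multiply; this is an illustration, not the legalizer code):

#include <stdint.h>

static uint32_t mulhu32(uint32_t a, uint32_t b) {
    return (uint32_t)(((uint64_t)a * b) >> 32);   /* high 32 bits of a 32x32 multiply */
}

/* 64-bit product from 32-bit pieces: low part is xl*yl, high part adds the
   high half of xl*yl and the two cross products (all mod 2^32). */
uint64_t mul64(uint64_t x, uint64_t y) {
    uint32_t xl = (uint32_t)x, xh = (uint32_t)(x >> 32);
    uint32_t yl = (uint32_t)y, yh = (uint32_t)(y >> 32);
    uint32_t lo = xl * yl;
    uint32_t hi = mulhu32(xl, yl) + xl * yh + xh * yl;
    return ((uint64_t)hi << 32) | lo;
}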

llvm-svn: 21214
2005-04-11 03:01:51 +00:00
Chris Lattner e2427c9afc Don't bother sign/zext_inreg'ing the result of an and operation if we know
the result does not change as a result of the extend.

This improves codegen for Alpha on this testcase:

int %a(ushort* %i) {
        %tmp.1 = load ushort* %i
        %tmp.2 = cast ushort %tmp.1 to int
        %tmp.4 = and int %tmp.2, 1
        ret int %tmp.4
}

Generating:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        ret $31,($26),1

instead of:

a:
        ldgp $29, 0($27)
        ldwu $0,0($16)
        and $0,1,$0
        addl $0,0,$0
        ret $31,($26),1

btw, alpha really should switch to livein/outs for args :)

llvm-svn: 21213
2005-04-10 23:37:16 +00:00
Chris Lattner a3b7ef05f4 Teach legalize to deal with targets that don't support some SEXTLOAD/ZEXTLOADs
llvm-svn: 21212
2005-04-10 22:54:25 +00:00
Chris Lattner 672fe7267b The first argument to ExecuteAndWait should be the program name, as pointed
out by Markus F.X.J. Oberhumer.

llvm-svn: 21211
2005-04-10 20:59:38 +00:00
Chris Lattner 751cc5f49f fix this testcase so the regex doesn't match the function name
llvm-svn: 21210
2005-04-10 20:45:35 +00:00
Chris Lattner 391a351ede don't zextload fp values!
llvm-svn: 21209
2005-04-10 17:40:35 +00:00
Duraid Madina 7b0287b78d * store immediate values as int64_t, not int. come on, we should be happy
when there are immediates, let's not worry about the memory overhead of
this :)

* add addU64Imm(uint64_t val) to machineinstrbuilder

(seriously: this seems required to support 64-bit immediates cleanly. if it
_really_ gets on your nerves, feel free to pull it out ;) )

coming up next week: "all your floating point constants are belong to us"

llvm-svn: 21208
2005-04-10 09:18:55 +00:00
Nate Begeman 492370311d Fix another fixme: factor out the constant fp generation code.
llvm-svn: 21207
2005-04-10 06:06:10 +00:00
Nate Begeman 941a01802f Fix 64 bit argument loading that straddles the args in regs / args on stack
boundary.

llvm-svn: 21206
2005-04-10 05:53:14 +00:00
Chris Lattner c53cd501b5 Until we have a dag combiner, promote using zextloads instead of extloads.
This gives the optimizer a bit of information about the top-part of the
value.

llvm-svn: 21205
2005-04-10 04:33:47 +00:00
Chris Lattner f74c794ccf Fold zext_inreg(zextload), likewise for sext's
llvm-svn: 21204
2005-04-10 04:33:08 +00:00
Chris Lattner f2bff92411 add a simple xform
llvm-svn: 21203
2005-04-10 04:04:49 +00:00
Nate Begeman b076731713 Remove unnecessary Implicit Defs. Since r0 is not in allocation, we do not
have to inform the register allocator it might be stepped on.

llvm-svn: 21202
2005-04-10 03:59:42 +00:00
Chris Lattner 2de306ba83 make this harder
llvm-svn: 21201
2005-04-10 03:18:18 +00:00
Chris Lattner d65632a9ca oops add ~
llvm-svn: 21200
2005-04-10 03:07:25 +00:00
Chris Lattner 38b1ae75fc new testcase for previously unsupported unary complex operators
llvm-svn: 21199
2005-04-10 03:06:27 +00:00
Nate Begeman 6566e8ac06 Make sure that BRCOND branches can be converted into long branches too.
llvm-svn: 21198
2005-04-10 01:48:29 +00:00
Nate Begeman 3345eadc37 Don't hand ISD::CALL nodes off to SelectExprFP. This fixes siod.
llvm-svn: 21197
2005-04-10 01:14:13 +00:00
Chris Lattner d8cbfe82ba Fix a thinko. If the operand is promoted, pass the promoted value into
the new zero extend, not the original operand.  This fixes cast bool -> long
on ppc.

Add an unrelated fixme

llvm-svn: 21196
2005-04-10 01:13:15 +00:00
Chris Lattner 9ff4b4190f rename getPPCOpcodeForSetCCNumber -> getPPCOpcodeForSetCCOpcode to be more
correct.  Remove the EmitComparison retvalue, as it is always the first arg.

Fix a place where we incorrectly passed in the setcc opcode instead of the
setcc number, causing us to miscompile crafty.  Crafty now works!

llvm-svn: 21195
2005-04-10 01:03:31 +00:00
Nate Begeman 2121a54868 fix ISD::BRCONDTWOWAY codegen to not dereference the end() iterator
llvm-svn: 21193
2005-04-09 23:35:05 +00:00
Chris Lattner 228fed92e6 Fix CodeGen/Generic/2005-05-09-GlobalInPHI.ll, which was reduced from 254.gap.
This caused the "use before a def" assertion on some programs.

With this patch, 254.gap now passes with the PPC backend.

llvm-svn: 21191
2005-04-09 22:05:17 +00:00
Chris Lattner db32a632c9 new testcase that used to crash the ppc fe. It could affect any simpleisel
that is not careful, so I'm checking it into the generic tests.

llvm-svn: 21190
2005-04-09 22:03:10 +00:00
Chris Lattner da504741da add a little peephole optimization. This allows us to codegen:
int a(short i) {
        return i & 1;
}

as

_a:
        andi. r3, r3, 1
        blr

instead of:

_a:
        rlwinm r2, r3, 0, 16, 31
        andi. r3, r2, 1
        blr

on ppc.  It should also help the other risc targets.

llvm-svn: 21189
2005-04-09 21:43:54 +00:00
Chris Lattner e8e070dbfb do not set the root to null if an argument is dead
llvm-svn: 21188
2005-04-09 21:23:24 +00:00
Nate Begeman 8309a333dd Add rlwnm instruction for variable rotate
Generate rotate left/right immediate
Generate code for brcondtwoway
Use new livein/liveout functionality

llvm-svn: 21187
2005-04-09 20:09:12 +00:00
Chris Lattner 3a7f5768c5 Fix a crash on 173.applu by asking for a constant bigger than 32-bits.
llvm-svn: 21185
2005-04-09 19:47:21 +00:00
Chris Lattner a55a5f2580 Switch this instruction selector over to using liveins and liveouts, eliminating
implicit defs on entry to the function.  yaay :)

llvm-svn: 21184
2005-04-09 16:32:30 +00:00
Chris Lattner 1a44855f8f there is no need to remove this instruction, linscan does it already as it
removes noop moves.

llvm-svn: 21183
2005-04-09 16:24:20 +00:00
Chris Lattner 0b1681bce1 Adjust live intervals to support a livein set
llvm-svn: 21182
2005-04-09 16:17:50 +00:00
Chris Lattner b59006c4a1 Use live out sets for return values instead of imp_defs, which is cleaner and faster.
llvm-svn: 21181
2005-04-09 15:23:56 +00:00
Chris Lattner 4c6ab01a20 Consider the livein/out set for a function, allowing targets to not have to
use ugly imp_def/imp_uses for arguments and return values.

llvm-svn: 21180
2005-04-09 15:23:25 +00:00
Chris Lattner 576db37185 add routines to track the livein/out set for a function
llvm-svn: 21179
2005-04-09 15:22:53 +00:00
Duraid Madina 46aa06cfed ok, the "ia64 has a boatload of registers" joke stopped being funny today ;)
* fix overallocation of integer (stacked) registers: we can't allocate
  registers for local use if they are required as output registers

this fixes 'toast' in the test suite, and all sorts of larger programs
like bzip2 etc.

llvm-svn: 21178
2005-04-09 11:53:00 +00:00
Nate Begeman 2f64122319 Optimize FSEL a bit for fneg arguments. This fixes the recently added test
case so that we emit

_test_fneg_sel:
.LBB_test_fneg_sel_0:   ;
        fsel f1, f1, f3, f2
        blr

instead of:

_test_fneg_sel:
.LBB_test_fneg_sel_0:   ;
        fneg f0, f1
        fneg f0, f0
        fsel f1, f0, f3, f2
        blr

llvm-svn: 21177
2005-04-09 09:33:07 +00:00
Nate Begeman 7d3e44fb12 Add a testcase to make sure that we don't emit two fneg instructions back
to back for certain fsel instructions.

llvm-svn: 21176
2005-04-09 09:30:09 +00:00
Nate Begeman 968e44a900 Add cases to cover the rest of the patterns we should be matching
llvm-svn: 21175
2005-04-09 08:29:59 +00:00
Chris Lattner 888c5fdcc2 Fix CodeGen/SparcV9/2005-05-09-GEP-Crash.ll, a crash on some specfp program.
Let's hope this doesn't break other programs with induced entropy.

llvm-svn: 21174
2005-04-09 06:27:14 +00:00
Chris Lattner 3aa6ec0dda New testcase that the sparc backend crashes on
llvm-svn: 21173
2005-04-09 06:26:27 +00:00
Chris Lattner 6a31b878f8 recognize some patterns as fabs operations, so that fabs at the source level
is deconstructed then reconstructed here.  This catches 19 fabs's in 177.mesa
9 in 168.wupwise, 5 in 171.swim, 3 in 172.mgrid, and 14 in 173.applu out of
specfp2000.

This allows the X86 code generator to make MUCH better code than before for
each of these and saves one instr on ppc.

This depends on the previous CFE patch to expose these correctly.
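
One typical source-level shape that ends up rebuilt as fabs (an assumption
about the exact pattern, shown only for illustration):

double source_abs(double x) {
    return x >= 0 ? x : -x;   /* compare against zero + select of x and -x -> fabs x */
}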

llvm-svn: 21171
2005-04-09 05:15:53 +00:00
Chris Lattner d9748bcae5 make this test more interesting
llvm-svn: 21170
2005-04-09 04:55:14 +00:00
Chris Lattner ec90861662 add a test for fnabs
llvm-svn: 21169
2005-04-09 04:03:16 +00:00
Chris Lattner b9a11b8b7f add a partial test for the fma operations that ppc supports. I'm sure I'm
missing some and not all of these match yet, but I'm sure that Nate will
clean up my mess :)

llvm-svn: 21168
2005-04-09 04:01:32 +00:00
Chris Lattner 8a98c7f337 Emit BRCONDTWOWAY when possible.
llvm-svn: 21167
2005-04-09 03:30:29 +00:00
Chris Lattner fd98678a8a Legalize BRCONDTWOWAY into a BRCOND/BR pair if a target doesn't support it.
llvm-svn: 21166
2005-04-09 03:30:19 +00:00
Chris Lattner b0713c74a2 print and fold BRCONDTWOWAY correctly
llvm-svn: 21165
2005-04-09 03:27:28 +00:00
Chris Lattner a3a135a9f7 This target does not support/want ISD::BRCONDTWOWAY
llvm-svn: 21164
2005-04-09 03:22:37 +00:00
Chris Lattner 4f77badaa3 This target does not yet support ISD::BRCONDTWOWAY
llvm-svn: 21163
2005-04-09 03:22:30 +00:00
Chris Lattner 4b1323e846 Add a new node
llvm-svn: 21162
2005-04-09 03:21:50 +00:00
Nate Begeman e8ce0cda40 64b: Expand S/UREM
32b: No longer pattern match fneg(fsub(fmul)) as fnmsub
     Pattern match fsub a, mul(b, c) as fnmsub
     Pattern match fadd a, mul(b, c) as fmadd
Those changes speed up hydro2d by 2.5%, distray by 6%, and scimark by 8%
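
In C terms the two shapes being matched are simply (illustrative only):

double f_madd (double a, double b, double c) { return a + b * c; }   /* fadd a, mul(b, c) -> fmadd  */
double f_nmsub(double a, double b, double c) { return a - b * c; }   /* fsub a, mul(b, c) -> fnmsub */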

llvm-svn: 21161
2005-04-09 03:05:51 +00:00
Chris Lattner 0ea81f9db4 canonicalize a bunch of operations involving fneg
llvm-svn: 21160
2005-04-09 03:02:46 +00:00
Nate Begeman f50b597f67 Fix 64b shifts
llvm-svn: 21159
2005-04-08 23:45:01 +00:00
Chris Lattner 61b6f04ae9 fix this method for 64-bit constants
llvm-svn: 21158
2005-04-08 21:31:29 +00:00
Nate Begeman 705d3c18e8 Match Mac OS X 64 bit calling conventions
llvm-svn: 21157
2005-04-08 21:26:05 +00:00
Andrew Lenharth de5aed3f12 collect a few statistics, factor constants (constant loading and mult), fix logic operation pattern matches, suppress FP div when int dividing by a constant
llvm-svn: 21156
2005-04-08 17:28:49 +00:00
Andrew Lenharth ce9e043c78 oops
llvm-svn: 21155
2005-04-08 16:55:15 +00:00
Andrew Lenharth 2e184e2522 added some tests to check stupid pattern matching mistakes
llvm-svn: 21154
2005-04-08 16:46:44 +00:00
Duraid Madina 41ff502549 fix bogus division-by-power-of-2 (was wrong for negative input, adds extr insn)
fix hack in division (clean up frcpa instruction)
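
The usual correct sequence, sketched in C (a generic illustration, not the
IA-64 code itself; the extr instruction covers part of this work there):

int div_by_8(int x) {
    int bias = (x >> 31) & 7;   /* 7 when x is negative, 0 otherwise (arithmetic shift assumed) */
    return (x + bias) >> 3;     /* matches C's truncating x / 8 for all x */
}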

llvm-svn: 21153
2005-04-08 10:01:48 +00:00