Commit Graph

2841 Commits

Chris Lattner eea983eb5d simplify code
llvm-svn: 30659
2006-09-28 23:17:41 +00:00
Evan Cheng bcf1b4120f TargetRegisterClass specifies the desired spill alignment. However, it cannot be honored if the stack alignment is smaller.
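
Illustrative sketch (not code from the tree) of the constraint: the spill
slot alignment actually used is the register class's preference clamped to
the stack alignment.

#include <algorithm>

// Hypothetical helper: honor the register class's preferred spill
// alignment only up to what the stack can guarantee.
unsigned chooseSpillAlign(unsigned rcSpillAlign, unsigned stackAlign) {
  return std::min(rcSpillAlign, stackAlign);
}
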
llvm-svn: 30648
2006-09-28 18:52:32 +00:00
Bill Wendling 5d409822d6 "Once more into the breach, dear friends, once more, or fill the wall up
with our English dead."

No! Really! Serious this time... It was how the vreg uses were being
adjusted that was causing the Olden tests to fail. I corrected this and
the Olden and Regression tests all passed.

llvm-svn: 30644
2006-09-28 07:10:24 +00:00
Chris Lattner a9caf95591 refactor critical edge breaking out into the SplitCritEdgesForPHIConstants method.
This is a baby step towards fixing PR925.

llvm-svn: 30643
2006-09-28 06:17:10 +00:00
Chris Lattner 764caf6129 re-re-revert this, back to the right revision. It currently breaks bisort/mst
in Olden, among others.

llvm-svn: 30637
2006-09-28 00:11:54 +00:00
Evan Cheng d31f55c236 PEI now places callee-save spills closest to the address pointed to by the
incoming stack pointer. This allows the X86 backend to use push / pop in the
epilogue / prologue.

llvm-svn: 30636
2006-09-28 00:10:27 +00:00
Chris Lattner 9f7d0d1dc5 re-revert this patch; bisort and mst are still broken in Olden.
llvm-svn: 30634
2006-09-28 00:04:21 +00:00
Bill Wendling 1f29e6c5c5 Reapplying this patch. With the newest commits, the error in Olden/bisort
has disappeared.

llvm-svn: 30633
2006-09-27 22:37:35 +00:00
Chris Lattner 5f6c93701b Temporarily revert this. This breaks Olden/bisort on PPC.
llvm-svn: 30628
2006-09-27 16:59:16 +00:00
Bill Wendling 0a7f617a6c PR878: Instead of calculating the vreg to PHI use count every time we get
a function, do it up front in linear time (going through all of the
instructions once). We create a map out of them. Then it's no problem to
use the information in it during elimination...
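
A minimal sketch of the idea in C++ (types and names here are hypothetical,
not the actual PHI-elimination code): one linear pass builds a use-count map
that later queries hit in constant time.

#include <unordered_map>
#include <vector>

struct Instr {
  bool isPHI;
  std::vector<unsigned> vregOperands;
};

// Single pass over all instructions: count how many times each vreg
// is used by a PHI.  Elimination then just looks the count up.
std::unordered_map<unsigned, unsigned>
countVRegPHIUses(const std::vector<Instr> &fn) {
  std::unordered_map<unsigned, unsigned> uses;
  for (const Instr &i : fn)
    if (i.isPHI)
      for (unsigned vreg : i.vregOperands)
        ++uses[vreg];
  return uses;
}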

llvm-svn: 30624
2006-09-27 09:04:15 +00:00
Chris Lattner 30b4708df2 Add support for ${:private}, which prints "L" on Darwin.
llvm-svn: 30620
2006-09-27 00:06:07 +00:00
Chris Lattner a32814b033 Add support for ${:comment}, which expands to the current target's comment
character, and ${:uid} which expands to a unique ID for the MachineInstr.
More can be added if/when they are needed.
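
For illustration, here is how the escapes can appear in user inline asm
compiled through LLVM (a sketch; the function and label names are made up,
and exact output is target-dependent):

// ${:comment} expands to the target's comment character, ${:uid} to a
// number unique to this asm instance, and ${:private} to the
// private-label prefix ("L" on Darwin).
void emit_marker(void) {
  asm volatile("${:comment} marker\n\t"
               "${:private}tmp${:uid}:");
}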

llvm-svn: 30619
2006-09-26 23:59:50 +00:00
Evan Cheng 5405c06737 Rename function: it determines which callee-save registers to save.
llvm-svn: 30616
2006-09-26 22:29:31 +00:00
Andrew Lenharth c19ef92403 Comments on JumpTableness
llvm-svn: 30615
2006-09-26 20:02:30 +00:00
Jim Laskey 60832693a7 Load chain check is not needed
llvm-svn: 30613
2006-09-26 17:44:58 +00:00
Jim Laskey dde51671e5 Chain can be any operand
llvm-svn: 30611
2006-09-26 09:32:41 +00:00
Jim Laskey 5f3e0af9d0 Wrong size for load
llvm-svn: 30610
2006-09-26 08:14:06 +00:00
Jim Laskey b4a864d533 Can't move a load node if its chain is not used.
llvm-svn: 30609
2006-09-26 07:37:42 +00:00
Chris Lattner 9a1e91b107 print the preds of each MBB
llvm-svn: 30606
2006-09-26 03:41:59 +00:00
Chris Lattner 66af390631 Add support for targets that want to do something with the llvm.used list,
because they have an aggressive linker that does dead code stripping.
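
For example (a hedged sketch of the front-end side, not this commit's code):
marking a symbol "used" places it on the llvm.used list, which tells such a
linker it may not strip the symbol.

// GNU/Clang attribute; the compiler records the symbol in llvm.used so
// an aggressive dead-stripping linker must keep it.
__attribute__((used)) static const char build_tag[] = "debug-build";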

llvm-svn: 30604
2006-09-26 03:38:18 +00:00
Jim Laskey 7aa0638aa9 Accidental enable of bad code
llvm-svn: 30601
2006-09-25 21:11:32 +00:00
Jim Laskey b5534e5c28 Fix chain dropping in load and drop unused stores in ret blocks.
llvm-svn: 30600
2006-09-25 19:32:58 +00:00
Jim Laskey d07be232ba Core antialiasing for load and store.
llvm-svn: 30597
2006-09-25 16:29:54 +00:00
Andrew Lenharth 783a4a9d86 Add support for other relocation bases to jump tables, as well as custom asm directives
llvm-svn: 30593
2006-09-24 19:45:58 +00:00
Evan Cheng 77c0757f8b PIC jump table entries are always 32-bit. This fixes PIC jump table support on X86-64.
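
The idea, sketched with the GNU computed-goto extension (illustrative only,
not how the backend emits tables): entries hold 32-bit offsets from a base,
so they stay 4 bytes wide even when pointers are 8.

#include <cstdint>

int classify(int i) {
  // 32-bit offsets relative to a base label instead of absolute 64-bit
  // addresses; base + entry reconstructs the target at runtime.
  void *base = &&L0;
  int32_t off[] = {0,
                   (int32_t)((char *)&&L1 - (char *)&&L0),
                   (int32_t)((char *)&&L2 - (char *)&&L0)};
  goto *(void *)((char *)base + off[i]);
L0: return 0;
L1: return 1;
L2: return 2;
}
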
llvm-svn: 30590
2006-09-24 05:22:38 +00:00
Evan Cheng 449a0c7e33 Make it work for DAG combine of multi-value nodes.
llvm-svn: 30573
2006-09-21 19:04:05 +00:00
Jim Laskey 35f7eebb49 core corrections
llvm-svn: 30570
2006-09-21 17:35:47 +00:00
Jim Laskey 5d19d59017 Basic "in frame" alias analysis.
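
A hedged sketch of the kind of question such an analysis answers (hypothetical
types, not the committed code): accesses to distinct frame slots never alias;
within a slot, compare byte ranges.

struct FrameAccess {
  int slot;
  unsigned offset, size;
};

// Distinct frame slots never overlap; within one slot, check the ranges.
bool mayAlias(const FrameAccess &a, const FrameAccess &b) {
  if (a.slot != b.slot)
    return false;
  return a.offset < b.offset + b.size && b.offset < a.offset + a.size;
}
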
llvm-svn: 30568
2006-09-21 16:28:59 +00:00
Chris Lattner 082db3f9aa fold (aext (and (trunc x), cst)) -> (and x, cst).
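
A quick C++ check of the identity (sketch, assuming an 8-bit truncate and a
constant that fits in it): masking after the truncate sees the same low bits
as masking the wide value directly.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t x = 0x1234567890abcdefULL;
  uint8_t cst = 0x3c;
  uint32_t orig = (uint32_t)(uint8_t)((uint8_t)x & cst); // aext(and(trunc x), cst)
  uint32_t folded = (uint32_t)(x & cst);                 // and(x, cst)
  assert(orig == folded);
}
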
llvm-svn: 30561
2006-09-21 06:40:43 +00:00
Chris Lattner fa9f92cf65 Check the right value type. This fixes 186.crafty on x86.
llvm-svn: 30560
2006-09-21 06:17:39 +00:00
Chris Lattner 8d8a3bf9c9 Compile:
int %test(ulong *%tmp) {
        %tmp = load ulong* %tmp         ; <ulong> [#uses=1]
        %tmp.mask = shr ulong %tmp, ubyte 50            ; <ulong> [#uses=1]
        %tmp.mask = cast ulong %tmp.mask to ubyte
        %tmp2 = and ubyte %tmp.mask, 3          ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp2 to int         ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        andl $3, %eax
        ret

instead of:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        # TRUNCATE movb %al, %al
        andb $3, %al
        movzbl %al, %eax
        ret

llvm-svn: 30558
2006-09-21 06:14:31 +00:00
Chris Lattner a31f0a622b Generalize (zext (truncate x)) and (sext (truncate x)) folding to work when
the src/dst are not the same size.  This catches things like "truncate
32-bit X to 8 bits, then zext to 16", which happens a bit on X86.
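
The "truncate 32-bit X to 8 bits, then zext to 16" case as a C++ check
(sketch): the extend/truncate pair collapses to a mask.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0xcafe1234;
  uint16_t viaCasts = (uint16_t)(uint8_t)x; // zext(trunc x)
  uint16_t viaMask = (uint16_t)(x & 0xff);  // the folded form
  assert(viaCasts == viaMask);
}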

llvm-svn: 30557
2006-09-21 06:00:20 +00:00
Chris Lattner c8cd62d381 Compile:
int test3(int a, int b) { return (a < 0) ? a : 0; }

to:

_test3:
        srawi r2, r3, 31
        and r3, r2, r3
        blr

instead of:

_test3:
        cmpwi cr0, r3, 1
        li r2, 0
        blt cr0, LBB2_2 ;entry
LBB2_1: ;entry
        mr r3, r2
LBB2_2: ;entry
        blr


This implements: PowerPC/select_lt0.ll:seli32_a_a
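
The branch-free identity behind the srawi/and sequence, as a C++ check (this
relies on arithmetic right shift of negative ints, which mainstream compilers
provide):

#include <cassert>

int main() {
  int vals[] = {-2147483647, -5, -1, 0, 1, 42};
  for (int a : vals) {
    int mask = a >> 31; // all ones if a < 0, else zero (the srawi)
    assert(((a < 0) ? a : 0) == (mask & a));
  }
}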

llvm-svn: 30517
2006-09-20 06:41:35 +00:00
Chris Lattner 8746e2cd57 Fold the full generality of (any_extend (truncate x))
llvm-svn: 30514
2006-09-20 06:29:17 +00:00
Chris Lattner 8b68decb27 Two things:
1. Teach SimplifySetCC that '(srl (ctlz x), 5) == 0' is really x != 0.
2. Teach visitSELECT_CC to use SimplifySetCC instead of calling it and
   ignoring the result.  This allows us to compile:

bool %test(ulong %x) {
  %tmp = setlt ulong %x, 4294967296
  ret bool %tmp
}

to:

_test:
        cntlzw r2, r3
        cmplwi cr0, r3, 1
        srwi r2, r2, 5
        li r3, 0
        beq cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

instead of:

_test:
        addi r2, r3, -1
        cntlzw r2, r2
        cntlzw r3, r3
        srwi r2, r2, 5
        cmplwi cr0, r2, 0
        srwi r2, r3, 5
        li r3, 0
        bne cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

This isn't wonderful, but it's an improvement.
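
Point 1 as a C++ check (sketch; ctlz32 is defined at zero, matching cntlzw,
unlike a raw __builtin_clz):

#include <cassert>

// cntlzw-style count-leading-zeros, defined to give 32 at zero.
unsigned ctlz32(unsigned x) { return x ? (unsigned)__builtin_clz(x) : 32; }

int main() {
  unsigned vals[] = {0u, 1u, 1234u, 0x80000000u};
  for (unsigned x : vals)
    assert(((ctlz32(x) >> 5) == 0) == (x != 0));
}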

llvm-svn: 30513
2006-09-20 06:19:26 +00:00
Chris Lattner 875ea0cdbd Expand 64-bit shifts more optimally if we know that the high bit of the
shift amount is one or zero.  For example, for:

long long foo1(long long X, int C) {
  return X << (C|32);
}

long long foo2(long long X, int C) {
  return X << (C&~32);
}

we get:

_foo1:
        movb $31, %cl
        movl 4(%esp), %edx
        andb 12(%esp), %cl
        shll %cl, %edx
        xorl %eax, %eax
        ret
_foo2:
        movb $223, %cl
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        andb 12(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        ret

instead of:

_foo1:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $32, %bl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        movb %bl, %cl
        orb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret
_foo2:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $223, %cl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        andb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        xorb %bl, %bl
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret
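
Why the foo1 case needs no shldl/cmov, as a C++ check: with bit 5 of the
shift amount known to be set, the low word of the result is zero and the
high word is just the old low word shifted by the low five bits.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t x = 0x0123456789abcdefULL;
  for (unsigned c = 0; c < 32; ++c) {
    unsigned s = c | 32;                   // amount known to be >= 32
    uint32_t hi = (uint32_t)x << (s & 31); // old low word, shifted
    uint64_t fast = (uint64_t)hi << 32;    // low word is zero
    assert(fast == x << s);
  }
}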

llvm-svn: 30506
2006-09-20 03:38:48 +00:00
Chris Lattner 698000b0da Fix UnitTests/2005-05-12-Int64ToFP.c with llc-beta. In particular, do not
allow it to go into an infinite loop, filling up the disk!

llvm-svn: 30494
2006-09-19 18:02:01 +00:00
Chris Lattner 5a42ebcff3 Fold extract_element(cst) to cst
llvm-svn: 30478
2006-09-19 05:02:39 +00:00
Chris Lattner 4c059f4962 Minor speedup for legalize by avoiding some malloc traffic
llvm-svn: 30477
2006-09-19 04:51:23 +00:00
Evan Cheng 1fc7c363e6 Fix a typo.
llvm-svn: 30474
2006-09-18 23:28:33 +00:00
Evan Cheng 4bfaf0bd2c Allow i32 UDIV, SDIV, UREM, SREM to be expanded into libcalls.
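
For reference, a sketch of the expansion targets, assuming the usual
libgcc/compiler-rt helper names (declarations only):

extern "C" {
int __divsi3(int, int);                 // i32 SDIV
unsigned __udivsi3(unsigned, unsigned); // i32 UDIV
int __modsi3(int, int);                 // i32 SREM
unsigned __umodsi3(unsigned, unsigned); // i32 UREM
}
// On a target without a hardware divide, a / b lowers to __divsi3(a, b), etc.
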
llvm-svn: 30470
2006-09-18 21:49:04 +00:00
Andrew Lenharth 51ad73c98c oops
llvm-svn: 30462
2006-09-18 18:00:18 +00:00
Andrew Lenharth c50458fb90 absolute addresses must match pointer size
llvm-svn: 30461
2006-09-18 17:59:35 +00:00
Jim Laskey d30bba331f Sort out mangled names for globals
llvm-svn: 30460
2006-09-18 14:47:26 +00:00
Chris Lattner e50f5d1fb1 Oh yeah, this is needed too
llvm-svn: 30407
2006-09-16 05:08:34 +00:00
Chris Lattner 1b63391fdf simplify control flow, no functionality change
llvm-svn: 30403
2006-09-16 00:21:44 +00:00
Chris Lattner fbadbda6ba Allow custom expand of mul
llvm-svn: 30402
2006-09-16 00:09:24 +00:00
Chris Lattner 8fb3d445f5 Keep track of the start of MBBs in a separate map from instructions. This
is faster and is needed for future improvements.

llvm-svn: 30383
2006-09-15 03:57:23 +00:00
Chris Lattner 46d710e6ea Fold (X & C1) | (Y & C2) -> (X|Y) & C3 when possible.
This implements CodeGen/X86/and-or-fold.ll
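
One case where the fold is clearly valid, as a C++ check (sketch): when the
two constants are equal, distributivity gives C3 == C1 == C2.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0x12345678, y = 0x9abcdef0, c = 0x0ff0f00f;
  assert(((x & c) | (y & c)) == ((x | y) & c));
}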

llvm-svn: 30379
2006-09-14 21:11:37 +00:00
Chris Lattner 97614c86ce Split rotate matching code out to its own function. Make it stronger by
matching things like ((x >> c1) & c2) | ((x << c3) & c4) to (rot x, c5) & c6
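
The shape in C++, with c1 + c3 == 32 so the two shifts are the halves of a
rotate; c5 is then c1, and c6 merges the two masks over the bits each half
contributes (illustrative check, not the matcher itself):

#include <cassert>
#include <cstdint>

uint32_t rotr(uint32_t x, unsigned r) {
  return (x >> r) | (x << ((32 - r) & 31));
}

int main() {
  uint32_t x = 0xdeadbeef;
  unsigned c1 = 8, c3 = 24; // c1 + c3 == 32
  uint32_t c2 = 0x00ff00ff, c4 = 0xff00ff00;
  uint32_t lhs = ((x >> c1) & c2) | ((x << c3) & c4);
  uint32_t lowBits = 0xffffffffu >> c1; // bits produced by the >> half
  uint32_t c6 = (c2 & lowBits) | (c4 & ~lowBits);
  assert(lhs == (rotr(x, c1) & c6));
}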

llvm-svn: 30376
2006-09-14 20:50:57 +00:00