Commit Graph

22609 Commits

Author SHA1 Message Date
Chris Lattner f3aef1b004 Fix a deficiency in the spiller that Evan noticed. In particular, consider
this code:

  store [stack slot #0],  R10
    = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stack slot #0] available
in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values
they make available.  In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
1)      movsd QWORD PTR [%ESP + 48], %XMM1
2)      movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling. On PPC,
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.

On X86, things are much better (because it spills more): we nuke
about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...

llvm-svn: 25917
2006-02-02 23:29:36 +00:00
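
Roughly, the bookkeeping this patch adds can be pictured as an availability map from spill slots to the registers known to hold their values. Below is a minimal standalone C++ sketch of that idea with made-up names; it is not the actual VirtRegMap code, and it only shows the reuse step that makes the later dead-store deletion possible.

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // Toy availability map: which register currently holds each spill slot's value.
    struct SpillSlotAvailability {
        std::map<int, std::string> SlotToReg;

        // A store of Reg into Slot (including a copy folded into a store) makes
        // the slot's value available in Reg.
        void noteStore(int Slot, const std::string &Reg) { SlotToReg[Slot] = Reg; }

        // A reload from Slot can be elided if some register already holds the value.
        std::optional<std::string> availableIn(int Slot) const {
            auto It = SlotToReg.find(Slot);
            if (It == SlotToReg.end())
                return std::nullopt;
            return It->second;
        }

        // A later store to Slot redefines it and invalidates the old mapping.
        void noteSlotRedefined(int Slot) { SlotToReg.erase(Slot); }
    };

    int main() {
        SpillSlotAvailability Avail;
        Avail.noteStore(48, "%XMM1");              // inst #1: movsd [ESP+48], %XMM1
        if (auto Reg = Avail.availableIn(48))      // inst #2: reload of [ESP+48]
            std::cout << "reload elided; value live in " << *Reg << "\n";
        Avail.noteSlotRedefined(48);               // inst #3: [ESP+48] redefined
    }
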
Nate Begeman 4efb328926 add 64b gpr store to the possible list of isStoreToStackSlot opcodes.
llvm-svn: 25916
2006-02-02 21:07:50 +00:00
Chris Lattner 5123346708 fix operand numbers
llvm-svn: 25915
2006-02-02 20:38:12 +00:00
Chris Lattner c327d71e06 implement isStoreToStackSlot for PPC
llvm-svn: 25914
2006-02-02 20:16:12 +00:00
Chris Lattner bb53acd03c Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)
llvm-svn: 25913
2006-02-02 20:12:32 +00:00
Chris Lattner a50eedbcd7 Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo,
a far more logical place.  Other methods should also be moved if anyone
is interested. :)

llvm-svn: 25912
2006-02-02 20:11:55 +00:00
Chris Lattner 246ee44c8f implement isStoreToStackSlot
llvm-svn: 25911
2006-02-02 20:00:41 +00:00
Chris Lattner 0acc90c67e add a method
llvm-svn: 25910
2006-02-02 19:57:16 +00:00
Chris Lattner 76e5863388 add a new isStoreToStackSlot method
llvm-svn: 25909
2006-02-02 19:55:29 +00:00
Chris Lattner d8208c3665 more notes
llvm-svn: 25908
2006-02-02 19:43:28 +00:00
Chris Lattner d3f033e8e0 add a note, I have no idea how important this is.
llvm-svn: 25907
2006-02-02 19:16:34 +00:00
Chris Lattner e10e1024bc %fcc is not an alias for %fcc0
llvm-svn: 25906
2006-02-02 08:02:20 +00:00
Chris Lattner cb34968d19 correct an opcode
llvm-svn: 25905
2006-02-02 07:56:15 +00:00
Chris Lattner 9dd7df7ee7 new example
llvm-svn: 25903
2006-02-02 07:37:11 +00:00
Nate Begeman cd018525f8 Update the README
llvm-svn: 25902
2006-02-02 07:27:56 +00:00
Chris Lattner 49beaf40fc Turn any_extend nodes into zero_extend nodes when it allows us to remove an
and instruction.  This allows us to compile stuff like this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

to this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        ret

instead of this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

This occurs quite a bit with the X86 backend.  For example, 25 times in
lambda, 30 times in 177.mesa, 14 times in galgel,  70 times in fma3d,
25 times in vpr, several hundred times in gcc, ~45 times in crafty,
~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 on gap,
16 times on bzip2, 14 times on twolf, and 1-2 times in many other SPEC2K
programs.

llvm-svn: 25901
2006-02-02 07:17:31 +00:00
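
The reason the trick is sound: a setcc only ever produces 0 or 1, so once the extension is known to zero the high bits, an "and $1" of the result is a no-op. Here is a standalone C++ toy of that argument (not the DAG-combiner code itself; the "undefined" high bits of any_extend are just modeled with a junk constant):

    #include <cassert>
    #include <cstdint>

    // any_extend leaves the high bits unspecified; model that with junk.
    uint32_t any_extend_i8(uint8_t V)  { return V | 0xdead0000u; }
    // zero_extend guarantees the high bits are zero.
    uint32_t zero_extend_i8(uint8_t V) { return V; }

    int main() {
        uint8_t SetccResult = 1;                             // setne produces 0 or 1
        uint32_t Masked  = any_extend_i8(SetccResult) & 1;   // "andl $1" is needed here
        uint32_t ZeroExt = zero_extend_i8(SetccResult);      // "andl $1" would be redundant
        assert(Masked == ZeroExt && ZeroExt == 1);
        return 0;
    }
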
Chris Lattner e0c60d63b1 Implement MaskedValueIsZero for ANY_EXTEND nodes
llvm-svn: 25900
2006-02-02 06:43:15 +00:00
Chris Lattner 4b2ec8af23 implemented, testcase here: test/Regression/CodeGen/X86/compare-add.ll
llvm-svn: 25899
2006-02-02 06:36:48 +00:00
Chris Lattner 49ce35542f add two dag combines:
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1

This allows us to compile this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

into this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

not this:

_X:
        movl $14, %eax
        addl 4(%esp), %eax
        cmpl $12345, %eax
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

Testcase here: Regression/CodeGen/X86/compare-add.ll

nukage of the 'and' coming up next.

llvm-svn: 25898
2006-02-02 06:36:13 +00:00
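
Both folds are plain modular arithmetic: adding (or subtracting from) a constant is invertible, so the equality can be tested against an adjusted constant instead; 12345 - 14 = 12331 is exactly where the cmpl $12331 above comes from. A quick standalone C++ check of the add form under unsigned wraparound (not the combiner code itself):

    #include <cassert>
    #include <cstdint>

    // (X + C1) == C2  is equivalent to  X == C2 - C1  (mod 2^32).
    bool addThenCompare(uint32_t X, uint32_t C1, uint32_t C2) { return X + C1 == C2; }
    bool foldedCompare (uint32_t X, uint32_t C1, uint32_t C2) { return X == C2 - C1; }

    int main() {
        // The commit's example: (X + 14) == 12345 becomes X == 12331.
        for (uint32_t X : {0u, 14u, 12331u, 12345u, 0xffffffffu})
            assert(addThenCompare(X, 14, 12345) == foldedCompare(X, 14, 12345));
        return 0;
    }
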
Chris Lattner 166ea0eda7 new testcase
llvm-svn: 25897
2006-02-02 06:35:38 +00:00
Evan Cheng d3908f79cb Update.
llvm-svn: 25896
2006-02-02 02:40:17 +00:00
Chris Lattner 0bd74558ae make -debug output less newliney
llvm-svn: 25895
2006-02-02 00:38:08 +00:00
Evan Cheng d8fba3a1ee Fix an erroneous comment.
llvm-svn: 25894
2006-02-02 00:28:23 +00:00
Chris Lattner 7f5880b1c7 Implement matching constraints. We can now say things like this:
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

xyz r2, r3, r4, r2

note that the r2's are pinned together.  Yaay for 2-address instructions.

llvm-svn: 25893
2006-02-02 00:25:23 +00:00
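
For comparison, this is the same tying that GCC/Clang extended asm expresses with a digit constraint. A minimal C++ usage sketch (x86 here rather than the PPC example above, with a made-up function name): the "0" on the first input forces it into whatever register output operand 0 was given.

    #include <cassert>

    int addViaMatchingConstraint(int A, int B) {
        int Result;
        asm("addl %2, %0"
            : "=r"(Result)   // operand 0: output register
            : "0"(A),        // operand 1: pinned to the same register as operand 0
              "r"(B));       // operand 2: any register
        return Result;
    }

    int main() {
        assert(addViaMatchingConstraint(3, 4) == 7);
        return 0;
    }
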
Chris Lattner 2f34a9e332 validate matching constraints and remember when we see them.
llvm-svn: 25892
2006-02-02 00:23:53 +00:00
Chris Lattner 4f5ff85ceb add an instance var and argument.
llvm-svn: 25891
2006-02-02 00:23:12 +00:00
Chris Lattner 6132a87cf4 more notes
llvm-svn: 25890
2006-02-01 23:38:08 +00:00
Evan Cheng b3ea2677a4 Tell codegen MOVAPSrr and MOVAPDrr are copies.
llvm-svn: 25889
2006-02-01 23:03:16 +00:00
Evan Cheng f1ed826c2a Added SSE entries to foldMemoryOperand().
llvm-svn: 25888
2006-02-01 23:02:25 +00:00
Evan Cheng 8b40cde148 Rearrange code to my liking. :)
llvm-svn: 25887
2006-02-01 23:01:57 +00:00
Chris Lattner aa23fa9f43 Implement smart printing of inline asm strings, handling variants and
substituted operands.  For this testcase:

int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

we now emit:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz r2, r2, r3  ;; look here
        or r3, r2, r2
        blr

... note the substituted operands. :)

llvm-svn: 25886
2006-02-01 22:41:11 +00:00
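
The printing side is essentially a scan for "$N" placeholders, each replaced with the register assigned to operand N. A standalone C++ sketch of that substitution (a hypothetical helper, not the real asm printer; it only handles single-digit operand indices):

    #include <cctype>
    #include <iostream>
    #include <string>
    #include <vector>

    std::string substituteOperands(const std::string &Asm,
                                   const std::vector<std::string> &Regs) {
        std::string Out;
        for (size_t I = 0; I < Asm.size(); ++I) {
            if (Asm[I] == '$' && I + 1 < Asm.size() &&
                std::isdigit(static_cast<unsigned char>(Asm[I + 1])))
                Out += Regs[Asm[++I] - '0'];   // replace $N with operand N's register
            else
                Out += Asm[I];
        }
        return Out;
    }

    int main() {
        // Reproduces the substituted line from the example above.
        std::cout << substituteOperands("xyz $0, $1, $2", {"r2", "r2", "r3"}) << "\n";
        return 0;
    }
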
Chris Lattner ae8863849b add a new PrintAsmOperand method, move some stuff around for ease of reading.
llvm-svn: 25885
2006-02-01 22:39:30 +00:00
Chris Lattner f7f056751c add a method
llvm-svn: 25884
2006-02-01 22:38:46 +00:00
Chris Lattner 2f7650f9dc another note
llvm-svn: 25883
2006-02-01 21:44:48 +00:00
Andrew Lenharth 4b1c726fbb Add immediate forms of cmov and remove some cruft
llvm-svn: 25882
2006-02-01 19:37:33 +00:00
Andrew Lenharth 14a7d8ff1e test cmov immediate form
llvm-svn: 25881
2006-02-01 19:36:52 +00:00
Chris Lattner 244e800c19 add a note, ya know
llvm-svn: 25880
2006-02-01 19:12:23 +00:00
Nate Begeman 01bd9d9911 *** empty log message ***
llvm-svn: 25879
2006-02-01 19:05:15 +00:00
Chris Lattner 1558fc64f9 Implement simple register assignment for inline asms. This allows us to compile:
int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

into:

 (0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2      ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
2006-02-01 18:59:47 +00:00
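
At this stage the interesting part is handing each "r" constraint a register and inserting the copies around the INLINEASM node. A standalone C++ toy of just the assignment step (hypothetical names, and the register choice is purely illustrative, not the allocator's actual decision):

    #include <cassert>
    #include <string>
    #include <vector>

    // Walk a constraint string like "=r,r,r" and give every register constraint
    // the next free GPR from the list.
    std::vector<std::string> assignRegisters(const std::string &Constraints,
                                             std::vector<std::string> FreeGPRs) {
        std::vector<std::string> Assigned;
        size_t Pos = 0;
        while (Pos < Constraints.size()) {
            size_t Comma = Constraints.find(',', Pos);
            std::string C = Constraints.substr(Pos, Comma - Pos);
            if (C == "r" || C == "=r") {
                Assigned.push_back(FreeGPRs.front());
                FreeGPRs.erase(FreeGPRs.begin());
            }
            if (Comma == std::string::npos)
                break;
            Pos = Comma + 1;
        }
        return Assigned;
    }

    int main() {
        auto Regs = assignRegisters("=r,r,r", {"r2", "r3", "r4"});
        assert(Regs.size() == 3 && Regs[0] == "r2");
        return 0;
    }
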
Chris Lattner ba56b5dc35 Finegrainify namespacification
llvm-svn: 25877
2006-02-01 18:10:56 +00:00
Chris Lattner a983beab37 add a note
llvm-svn: 25876
2006-02-01 17:54:23 +00:00
Nate Begeman 7e7f439f85 Fix some of the stuff in the PPC README file, and clean up legalization
of the SELECT_CC, BR_CC, and BRTWOWAY_CC nodes.

llvm-svn: 25875
2006-02-01 07:19:44 +00:00
Chris Lattner 3da1bb520e add a note, I'll take care of this after nate commits his big patch
llvm-svn: 25873
2006-02-01 06:40:32 +00:00
Evan Cheng 9e350cd6ad - Use xor to clear integer registers (set R, 0).
- Added a new format for instructions where the source register is implied
  and it is same as the destination register. Used for pseudo instructions
  that clear the destination register.

llvm-svn: 25872
2006-02-01 06:13:50 +00:00
Evan Cheng c404b5748c Remove another entry.
llvm-svn: 25871
2006-02-01 06:08:48 +00:00
Evan Cheng 88e616d803 If a pattern's root node is a constant, its size should be 3 rather than 2.
llvm-svn: 25870
2006-02-01 06:06:31 +00:00
Jeff Cohen b24b66f209 Fix VC++ compilation error.
llvm-svn: 25869
2006-02-01 04:37:04 +00:00
Chris Lattner 7975f04e1f new testcase for the 'ret double folding with load' optimization
llvm-svn: 25868
2006-02-01 01:45:02 +00:00
Chris Lattner b0a76b0981 Another regression from the pattern isel
llvm-svn: 25867
2006-02-01 01:44:25 +00:00
Chris Lattner 7ed3101d14 Beef up the interface to inline asm constraint parsing, making it more general, useful, and easier to use.
llvm-svn: 25866
2006-02-01 01:29:47 +00:00