Commit Graph

Chris Lattner 4f3de3e33c Add some more sign propagation cases
llvm-svn: 28154
2006-05-06 23:40:29 +00:00
Jeff Cohen 9978476c14 Apply bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is not a legal path on Windows.
llvm-svn: 28153
2006-05-06 23:25:53 +00:00
Chris Lattner 7e7bcf3a54 Simplify some code, add a couple minor missed folds
llvm-svn: 28152
2006-05-06 23:06:26 +00:00
Chris Lattner 751817c54f constant fold sign_extend_inreg
llvm-svn: 28151
2006-05-06 23:05:41 +00:00
Chris Lattner 2a4d7b845b remove cases handled elsewhere
llvm-svn: 28150
2006-05-06 22:43:44 +00:00
Chris Lattner f86075731d Add some more simple sign bit propagation cases.
llvm-svn: 28149
2006-05-06 22:39:59 +00:00
Jeff Cohen ce9b9fe6eb Fix some loose ends in MASM support.
llvm-svn: 28148
2006-05-06 21:27:14 +00:00
Chris Lattner 0b8c57ec2b new testcase that we now handle correctly.
llvm-svn: 28147
2006-05-06 18:15:50 +00:00
Chris Lattner 1ecb2a2dac Use the new TargetLowering::ComputeNumSignBits method to eliminate
sign_extend_inreg operations.  Though ComputeNumSignBits is still rudimentary,
this is enough to compile this:

short test(short X, short x) {
  int Y = X+x;
  return (Y >> 1);
}
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

into:

_test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28146
2006-05-06 09:30:03 +00:00
Chris Lattner 7206d74f0c Add some really really simple code for computing sign-bit propagation.
This will certainly be enhanced in the future.

llvm-svn: 28145
2006-05-06 09:27:13 +00:00
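
As a rough sketch of the kind of bookkeeping involved (conceptual C, not the LLVM API; the helper names are invented): count how many high bits of a value are copies of the sign bit, and propagate conservatively through an add.

#include <stdint.h>

/* Number of high bits of v that are copies of the sign bit (at least 1). */
static unsigned num_sign_bits_i32(int32_t v) {
    uint32_t u = (uint32_t)v;
    unsigned n = 1;
    while (n < 32 && ((u >> 31) & 1) == ((u >> (31 - n)) & 1))
        ++n;
    return n;
}

/* Conservative rule for an add: the sum can lose at most one sign bit
 * to carry/overflow, so keep min(lhs, rhs) - 1, never less than 1. */
static unsigned num_sign_bits_add(unsigned lhs, unsigned rhs) {
    unsigned m = lhs < rhs ? lhs : rhs;
    return m > 1 ? m - 1 : 1;
}

Two sign-extended shorts each carry at least 17 sign bits, so under this rule their sum keeps at least 16 and the shift right by one at least 17, which is roughly why the trailing extsh disappears in the _test example earlier in this log.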
Chris Lattner 1474f16166 Add some new methods for computing sign bit information.
llvm-svn: 28144
2006-05-06 09:26:22 +00:00
Chris Lattner 21cd99024a When inserting casts, be careful of where we put them. We cannot insert
a cast immediately before a PHI node.

This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll

llvm-svn: 28143
2006-05-06 09:10:37 +00:00
Chris Lattner cbeb178e14 new testcase
llvm-svn: 28142
2006-05-06 09:09:47 +00:00
Chris Lattner 1d441adfbf Move some code around.
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated.  If one or
both doesn't, then this xform doesn't remove a cast.

This fixes Transforms/InstCombine/2006-05-06-Infloop.ll

llvm-svn: 28141
2006-05-06 09:00:16 +00:00
Chris Lattner 44f121abc1 new testcase from ghostscript that caused instcombine to infinite-loop
llvm-svn: 28140
2006-05-06 08:58:06 +00:00
Chris Lattner 6d4a2dc4ad Teach the X86 backend about non-i32 inline asm register classes.
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
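
A hedged illustration (not taken from the commit) of where non-i32 register classes come up: a GCC-style inline asm operand with a 16-bit type must be assigned from the 16-bit register class (%ax, %bx, ...) rather than defaulting to a 32-bit register.

/* A 16-bit "r" operand; the backend must pick a 16-bit register for %0. */
unsigned short swap_bytes(unsigned short x) {
    unsigned short r = x;
    __asm__("rolw $8, %0" : "+r"(r));   /* rotate by 8 = byte swap */
    return r;
}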
Chris Lattner 86a1467fc0 Fold (trunc (srl x, c)) -> (srl (trunc x), c)
llvm-svn: 28138
2006-05-06 00:11:52 +00:00
Chris Lattner 907e392dba Fold trunc(any_ext). This gives stuff like:
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx

llvm-svn: 28137
2006-05-05 22:56:26 +00:00
Chris Lattner 57f8c5a387 Shrink shifts when possible.
llvm-svn: 28136
2006-05-05 22:53:17 +00:00
Chris Lattner 0f64932a5c Implement ComputeMaskedBits/SimplifyDemandedBits for ISD::TRUNCATE
llvm-svn: 28135
2006-05-05 22:32:12 +00:00
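
Conceptually, known-bits information survives a truncate unchanged apart from dropping the high bits. A minimal sketch in C (invented names, not the LLVM interface):

#include <stdint.h>

typedef struct { uint64_t known_zero, known_one; } KnownBits;

/* Truncation keeps the low narrow_bits of the value, so the known-zero
 * and known-one masks of the wide value just get masked down. */
static KnownBits known_bits_truncate(KnownBits wide, unsigned narrow_bits) {
    uint64_t mask = narrow_bits >= 64 ? ~0ULL : ((1ULL << narrow_bits) - 1);
    KnownBits out = { wide.known_zero & mask, wide.known_one & mask };
    return out;
}

Going the other direction, a truncate only demands the low bits of its operand, which is what a demanded-bits simplification can exploit.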
Chris Lattner 8b9e11c110 Print a grouping around inline asm blocks so that we can tell when we are
using them.

llvm-svn: 28134
2006-05-05 21:50:04 +00:00
Chris Lattner c22d4bede5 Print *some* grouping around inline asm blocks so we know where they are.
llvm-svn: 28133
2006-05-05 21:48:50 +00:00
Chris Lattner a633c31319 Indent multiline asm strings more nicely
llvm-svn: 28132
2006-05-05 21:47:05 +00:00
Chris Lattner 44a73e9fa5 Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
Chris Lattner 3d26577396 Fold (fpext (load x)) -> (extload x)
llvm-svn: 28130
2006-05-05 21:34:35 +00:00
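
A minimal C example of the pattern the two entries above address: a float loaded from memory and immediately widened to double. With the (fpext (load x)) -> (extload x) fold plus the cvtss2sd extload pattern, this should select to a single conversion with a memory operand instead of a separate load followed by a convert.

/* Reads a float and widens it; intended to become one cvtss2sd from
 * memory rather than movss followed by cvtss2sd. */
double widen(const float *p) {
    return (double)*p;
}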
Chris Lattner 3e3f2c63c3 More aggressively sink GEP offsets into loops. For example, before we
generated:

        movl 8(%esp), %eax
        movl %eax, %edx
        addl $4316, %edx
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, (%edx)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
        movl %edx, 4460(%eax)
        ret
...

Now we generate:

        movl 8(%esp), %eax
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, 4316(%eax)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
        movl %ecx, 4460(%eax)
        ret

... which uses one fewer register.

llvm-svn: 28129
2006-05-05 21:17:49 +00:00
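
A hypothetical C reconstruction of the shape of code involved (struct layout, field, and function names are invented; only the global symbol names and the 4316/4460 offsets come from the assembly above). The point of the change: the base-plus-4316 address is needed only on the taken path, so the constant offset can be folded into each store's addressing mode instead of being computed into a register before the compare.

extern const int QuantizationTables720[];
extern const int QNOtoQuantTableShift720[];

/* Hypothetical layout (32-bit pointers assumed) chosen so the two pointer
 * fields land at the 4316 and 4460 byte offsets seen in the assembly. */
struct Ctx {
    char       pad1[4316];
    const int *quant_tables;      /* offset 4316 */
    char       pad2[140];
    const int *quant_shift;       /* offset 4460 */
};

void setup(struct Ctx *c, unsigned char mode) {
    if (mode <= 1) {              /* the "cmpb $1, %cl / ja" guard */
        c->quant_tables = QuantizationTables720;
        c->quant_shift  = QNOtoQuantTableShift720;
    }
}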
Chris Lattner e745c7de0e Fix an infinite loop compiling oggenc last night.
llvm-svn: 28128
2006-05-05 20:51:30 +00:00
Evan Cheng 52c22512b9 Need extload patterns after Chris' DAG combiner changes
llvm-svn: 28127
2006-05-05 08:23:07 +00:00
Chris Lattner 3af1053488 Implement InstCombine/cast.ll:test29
llvm-svn: 28126
2006-05-05 06:39:07 +00:00
Chris Lattner c9043ef728 New testcase
llvm-svn: 28125
2006-05-05 06:38:40 +00:00
Chris Lattner 25a5283a86 Fold some common code.
llvm-svn: 28124
2006-05-05 06:32:04 +00:00
Chris Lattner 002ee91457 Implement:

  // fold (and (sext x), (sext y)) -> (sext (and x, y))
  // fold (or  (sext x), (sext y)) -> (sext (or  x, y))
  // fold (xor (sext x), (sext y)) -> (sext (xor x, y))
  // fold (and (aext x), (aext y)) -> (aext (and x, y))
  // fold (or  (aext x), (aext y)) -> (aext (or  x, y))
  // fold (xor (aext x), (aext y)) -> (aext (xor x, y))

llvm-svn: 28123
2006-05-05 06:31:05 +00:00
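
A quick C check of the sign-extend identity behind these folds (any-extend is analogous, since the extended bits are simply unspecified there): sign extension replicates the top bit, and bitwise operations act bit by bit, so extending before or after the operation gives the same result.

#include <assert.h>

int main(void) {
    short a = -12345, b = 0x5a5a;   /* arbitrary values */
    assert(((int)a & (int)b) == (int)(short)(a & b));
    assert(((int)a | (int)b) == (int)(short)(a | b));
    assert(((int)a ^ (int)b) == (int)(short)(a ^ b));
    return 0;
}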
Chris Lattner 5ac4293606 Pull 'and' through and/or/xor. This compiles some bitfield code to:

        mov EAX, DWORD PTR [ESP + 4]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        or EDX, ECX
        and EDX, -2147483648
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        ret

instead of:

        sub ESP, 4
        mov DWORD PTR [ESP], ESI
        mov EAX, DWORD PTR [ESP + 8]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        mov ESI, ECX
        and ESI, -2147483648
        and EDX, -2147483648
        or EDX, ESI
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        mov ESI, DWORD PTR [ESP]
        add ESP, 4
        ret

llvm-svn: 28122
2006-05-05 06:10:43 +00:00
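
A small C check of the underlying identity: an AND with a common mask distributes out of an OR or XOR, which is why the improved code above masks with -2147483648 only once where the original needed two ANDs.

#include <assert.h>

int main(void) {
    unsigned a = 0x12345678u, b = 0x9abcdef0u, m = 0x80000000u;  /* arbitrary */
    assert(((a & m) | (b & m)) == ((a | b) & m));
    assert(((a & m) ^ (b & m)) == ((a ^ b) & m));
    return 0;
}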
Chris Lattner 812646aa0c Implement a variety of simplifications for ANY_EXTEND.
llvm-svn: 28121
2006-05-05 05:58:59 +00:00
Chris Lattner 8d6fc20181 Factor some code, add these transformations:

  // fold (and (trunc x), (trunc y)) -> (trunc (and x, y))
  // fold (or  (trunc x), (trunc y)) -> (trunc (or  x, y))
  // fold (xor (trunc x), (trunc y)) -> (trunc (xor x, y))

llvm-svn: 28120
2006-05-05 05:51:50 +00:00
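
A quick C check of these truncate folds, in the same spirit as the sign-extend check above: truncation drops high bits, and bitwise operations work bit by bit, so truncating before or after the operation gives the same result and the combined form needs only one truncate.

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint32_t x = 0xDEADBEEFu, y = 0x0F0F1234u;   /* arbitrary values */
    assert(((uint16_t)x & (uint16_t)y) == (uint16_t)(x & y));
    assert(((uint16_t)x | (uint16_t)y) == (uint16_t)(x | y));
    assert(((uint16_t)x ^ (uint16_t)y) == (uint16_t)(x ^ y));
    return 0;
}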
Evan Cheng ddb6cc1d8e Better implementation of truncate. ISel matches it to a pseudo instruction
that gets emitted as movl (for r32 to i16, i8) or a movw (for r16 to i8). If
the destination is allocated a subregister of the source operand, then the
instruction will not be emitted at all.

llvm-svn: 28119
2006-05-05 05:40:20 +00:00
Chris Lattner 304bbf3b1d New note, Nate, please check to see if I'm full of it :)
llvm-svn: 28118
2006-05-05 05:36:15 +00:00
Jeff Cohen 78a7f0e05e Fix VC++ compilation error.
llvm-svn: 28117
2006-05-05 01:47:05 +00:00
Nate Begeman dec86e74ff Somehow, I missed this part of the checkin a couple of days ago
llvm-svn: 28116
2006-05-05 01:13:11 +00:00
Chris Lattner 7a3ecf7993 Sink noop copies into the basic block that uses them. This reduces the number
of cross-block live ranges and allows the bb-at-a-time selector to always
coalesce these away at isel time.

This reduces the load on the coalescer and register allocator.  For example
on a codec on X86, we went from:

   1643 asm-printer           - Number of machine instrs printed
    419 liveintervals         - Number of loads/stores folded into instructions
   1144 liveintervals         - Number of identity moves eliminated after coalescing
   1022 liveintervals         - Number of interval joins performed
    282 liveintervals         - Number of intervals after coalescing
   1304 liveintervals         - Number of original intervals
     86 regalloc              - Number of times we had to backtrack
1.90232 regalloc              - Ratio of intervals processed over total intervals
     40 spiller               - Number of values reused
    182 spiller               - Number of loads added
    121 spiller               - Number of stores added
    132 spiller               - Number of register spills
      6 twoaddressinstruction - Number of instructions commuted to coalesce
    360 twoaddressinstruction - Number of two-address instructions

to:

   1636 asm-printer           - Number of machine instrs printed
    403 liveintervals         - Number of loads/stores folded into instructions
   1155 liveintervals         - Number of identity moves eliminated after coalescing
   1033 liveintervals         - Number of interval joins performed
    279 liveintervals         - Number of intervals after coalescing
   1312 liveintervals         - Number of original intervals
     76 regalloc              - Number of times we had to backtrack
1.88998 regalloc              - Ratio of intervals processed over total intervals
      1 spiller               - Number of copies elided
     41 spiller               - Number of values reused
    191 spiller               - Number of loads added
    114 spiller               - Number of stores added
    128 spiller               - Number of register spills
      4 twoaddressinstruction - Number of instructions commuted to coalesce
    356 twoaddressinstruction - Number of two-address instructions

On this testcase, this change provides a modest reduction in spill code,
regalloc iterations, and total instructions emitted.  It increases the number
of register coalesces.

llvm-svn: 28115
2006-05-05 01:04:50 +00:00
Chris Lattner a0c3ec513d Add a helper method.
llvm-svn: 28114
2006-05-05 00:51:42 +00:00
Chris Lattner 34d26c38b2 wrap long line
llvm-svn: 28113
2006-05-04 23:35:31 +00:00
Chris Lattner b6277c510c Adjust to use proper TargetData copy ctor
llvm-svn: 28112
2006-05-04 21:18:40 +00:00
Chris Lattner 660d80effe Fix this to be a proper copy ctor
llvm-svn: 28111
2006-05-04 21:17:35 +00:00
Chris Lattner abdf4d569c Final pass of minor cleanups for MachineInstr
llvm-svn: 28110
2006-05-04 19:36:09 +00:00
Evan Cheng 9add880566 Initial support for register-pressure-aware scheduling. The register reduction
scheduler can go into a "vertical mode" (i.e. traversing up the two-address
chain, etc.) when the register pressure is low.
This does seem to reduce the number of spills in the cases I've looked at, but
on x86 there is no guarantee that the performance of the code improves.
It can be turned on with the -sched-vertically option.

llvm-svn: 28108
2006-05-04 19:16:39 +00:00
Chris Lattner 53af9da363 Remove redundancy and a level of indirection when creating machine operands
llvm-svn: 28107
2006-05-04 19:14:44 +00:00
Chris Lattner 6b18d20c73 Move register numbers out of "extra" into "contents". Other minor cleanup.
llvm-svn: 28106
2006-05-04 18:25:20 +00:00
Chris Lattner 469647bf38 Remove and simplify some more machineinstr/machineoperand stuff.
llvm-svn: 28105
2006-05-04 18:16:01 +00:00
Chris Lattner 10b71c0d08 Rename MO_VirtualRegister -> MO_Register. Clean up immediate handling.
llvm-svn: 28104
2006-05-04 18:05:43 +00:00