Commit Graph

230 Commits

Author SHA1 Message Date
Evan Cheng f851163c53 One more extract_subreg coalescing bug.
llvm-svn: 43644
2007-11-02 17:35:08 +00:00
Evan Cheng 8557603781 - Only perform the unfolding optimization when the folding in question is modref.
- Remove a bogus assertion.

llvm-svn: 43211
2007-10-22 03:01:44 +00:00
Evan Cheng 35ff79370b Local spiller optimization:
Turn a store-folding instruction into a load-folding instruction, e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Evan Cheng b63076504e Local spiller optimization:
Turn this:
movswl  %ax, %eax
movl    %eax, -36(%ebp)
xorl    %edi, -36(%ebp)
into
movswl  %ax, %eax
xorl    %edi, %eax
movl    %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions but reduces the number of times memory is accessed.

Also unfold some load-folding instructions and reuse the value when a similar
situation presents itself.

llvm-svn: 42947
2007-10-13 02:50:24 +00:00
Evan Cheng aa2d6ef81d EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG like
(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are sub-
register uses, which adds subtle complications to load folding, spiller rewrite,
etc.

llvm-svn: 42899
2007-10-12 08:50:34 +00:00
Evan Cheng c1e4e3743b Allow copyRegToReg to emit cross-register-class copies.
Tested with "make check"!

llvm-svn: 42346
2007-09-26 06:25:56 +00:00
Dan Gohman 9da02f5ee2 Remove isReg, isImm, and isMBB, and change all their users to use
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent,
and more popular.

llvm-svn: 41958
2007-09-14 20:33:02 +00:00
David Greene a6d5d2a6a0 Add instruction dump output. This helps find bugs.
llvm-svn: 41744
2007-09-06 16:36:39 +00:00
Evan Cheng 958cf3d43e If the source of a move is in a spill slot, the reload may be folded to essentially a load from the stack slot. It's ok to mark the stack slot value as available for reuse, but it should not be clobbered, since the destination of the move is live.
llvm-svn: 41109
2007-08-15 20:20:34 +00:00
Evan Cheng 3f22fffe94 - If a def is dead, do not spill it.
- If the defs of a spilled rematerializable MI are dead after the spill store is deleted, delete
  the def MI as well.

llvm-svn: 41086
2007-08-14 23:25:37 +00:00
Evan Cheng 6cb9fd7be5 If an MI's def is rematerialized as well as spilled, and the store is later deemed dead, mark the def operand as isDead.
llvm-svn: 41083
2007-08-14 20:23:13 +00:00
Evan Cheng 234386509b If a spilled value is being reused and the use is a kill, that means there are
no more uses within the MBB and the spilled value isn't live out of the MBB.
Then it's safe to delete the spill store.

llvm-svn: 41069
2007-08-14 09:11:18 +00:00
Evan Cheng 78a8806f4f If a rematerializable def is not deleted, i.e. it is also spilled, check if the
spilled value is available for reuse.

llvm-svn: 41067
2007-08-14 05:42:54 +00:00
Evan Cheng 33820da1da Re-implement trivial rematerialization. This allows def MIs whose live intervals are coalesced to be rematerialized.
llvm-svn: 41060
2007-08-13 23:45:17 +00:00
Evan Cheng 6b6d1f685f Missed a couple of places where new instructions are added due to spill / restore.
llvm-svn: 39748
2007-07-11 19:17:18 +00:00
Evan Cheng 74a541024f No longer need to track last def / use.
llvm-svn: 38534
2007-07-11 08:47:44 +00:00
Evan Cheng bec7a20c5e Fix for PR1545: Revamp code that updates kill information due to register reuse.
llvm-svn: 38525
2007-07-11 05:28:39 +00:00
Dan Gohman 9e82064924 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
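The hook described above lets a target make an operand-dependent rematerialization decision. A minimal standalone sketch of that shape, assuming simplified stand-in classes rather than the real TargetInstrInfo API:

    // Illustrative stand-ins only; not the real LLVM classes.
    struct MachineInstrSketch {
      unsigned Opcode = 0;
      bool LoadsFromConstantPool = false; // e.g. a load whose address is a constant-pool index
    };

    struct TargetInstrInfoSketch {
      virtual ~TargetInstrInfoSketch() = default;
      // True if re-executing MI is always safe and cheaper than reloading its value.
      virtual bool isTriviallyReMaterializable(const MachineInstrSketch &MI) const {
        return false; // conservative default
      }
    };

    // A target can accept instructions that are rematerializable only with
    // specific operands, such as loads from the constant pool.
    struct X86LikeInstrInfoSketch : TargetInstrInfoSketch {
      bool isTriviallyReMaterializable(const MachineInstrSketch &MI) const override {
        return MI.LoadsFromConstantPool; // operand-dependent decision
      }
    };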
Dan Gohman 4a4a8eb00e Add a target hook to allow loads from constant pools to be rematerialized, and an
implementation for x86.

llvm-svn: 37576
2007-06-14 20:50:44 +00:00
Evan Cheng 910c80851e Rename findRegisterUseOperand to findRegisterUseOperandIdx to avoid confusion.
llvm-svn: 36483
2007-04-26 19:00:32 +00:00
Evan Cheng 0ba174534c Match MachineFunction::UsedPhysRegs changes.
llvm-svn: 36452
2007-04-25 22:13:27 +00:00
Evan Cheng 8be98c1572 Re-materialize all loads from fixed stack slots.
llvm-svn: 35660
2007-04-04 07:40:01 +00:00
Evan Cheng 9a2a7b174a Don't add the same MI to register reuse "last def/use" twice if it reads the
register more than once.

llvm-svn: 35513
2007-03-30 20:21:35 +00:00
Evan Cheng fdbdf43632 Don't call getOperandConstraint() if operand index is greater than
TID->numOperands.

llvm-svn: 35375
2007-03-27 00:48:28 +00:00
Evan Cheng 4a09b1b5be Fix for PR1266. Don't mark a two address operand IsKill.
llvm-svn: 35365
2007-03-26 22:40:42 +00:00
Evan Cheng 0e3278e505 First cut trivial re-materialization support.
llvm-svn: 35208
2007-03-20 08:13:50 +00:00
Evan Cheng d74cb0e194 Only propagate IsKill if the last use is a kill.
llvm-svn: 34878
2007-03-03 06:32:37 +00:00
Evan Cheng 6605c5dbee - Keep track of all defs and uses of stack slot values available in registers.
- A use of an available value may be deleted (e.g. a noop move).

llvm-svn: 34841
2007-03-02 08:52:00 +00:00
Evan Cheng 08f2f0d145 Invalidate last use of a reused register if the use is a deleted noop copy.
llvm-svn: 34839
2007-03-02 05:41:42 +00:00
Evan Cheng d6450ba1dc When a restore is promoted to a copy (or deleted entirely), remove the kill from the last use of the targeted register.
llvm-svn: 34773
2007-03-01 02:27:30 +00:00
Evan Cheng 38fd9b074f A couple more places where a register's liveness has been extended and its last kill should be updated accordingly.
llvm-svn: 34597
2007-02-25 09:51:27 +00:00
Evan Cheng 520b20d3b7 Reuse extends the liveness of a register. Transfer the kill to the operand that reuses it.
llvm-svn: 34536
2007-02-23 21:47:50 +00:00
Evan Cheng f7e320c9e0 A spill kills the register being stored. But if it is later reused by the spiller, its live range has to be extended.
llvm-svn: 34517
2007-02-23 01:13:26 +00:00
Evan Cheng de037a821a Use BitVector instead. No functionality change.
llvm-svn: 34460
2007-02-21 02:22:03 +00:00
Evan Cheng 61cd0914ed Dead code.
llvm-svn: 34435
2007-02-20 01:29:10 +00:00
Evan Cheng 6ad6fdb70b Fixed a long-standing spiller bug exposed by Thumb:
The code sequence before the spiller is something like:
                 = tMOVrr
        %reg1117 = tMOVrr
        %reg1078 = tLSLri %reg1117, 2

Then it starts spilling:
        %r0 = tRestore <fi#5>, 0
        %r1 = tRestore <fi#7>, 0
        %r1 = tMOVrr %r1<kill>
        tSpill %r1, <fi#5>, 0
        %reg1078 = tLSLri %reg1117, 2

It restores the value while processing the first tMOVrr. At this point, the
spiller remembers fi#5 is available in %r0. Next it processes the second move.
It restores the source before the move and spills the result afterwards. The
move becomes a noop and is deleted. However, a spill has been inserted and that
should invalidate reuse of %r0 for fi#5 and add reuse of %r1 for fi#5.
Therefore, %reg1117 (which is also assigned fi#5) should get %r1, not %r0.

llvm-svn: 34039
2007-02-08 06:04:54 +00:00
Chris Lattner 199818475b Switch this to use SmallSet to avoid mallocs in the common case.
llvm-svn: 33457
2007-01-23 00:59:48 +00:00
Evan Cheng fc74e2de26 GetRegForReload() now keeps track of which registers have been considered and rejected during its quest to find a suitable reload register. This avoids an infinite loop in a case like this:
t1 := op t2, t3
  t2 <- assigned r0 for use by the reload but ended up reuse r1
  t3 <- assigned r1 for use by the reload but ended up reuse r0
  t1 <- desires r1
        sees r1 is taken by t2, tries t2's reload register r0
        sees r0 is taken by t3, tries t3's reload register r1
        sees r1 is taken by t2, tries t2's reload register r0 ...

llvm-svn: 33382
2007-01-19 22:40:14 +00:00
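A minimal sketch of the termination idea in the commit above, with hypothetical names (getRegForReload, the reuse map, and the Reg alias are simplified stand-ins for the spiller's real state):

    #include <map>
    #include <set>

    using Reg = unsigned;
    // Desired reload register -> the reload register of whoever currently reuses it.
    using ReuseMap = std::map<Reg, Reg>;

    // 'Rejected' accumulates every register already considered, so chasing the
    // reuse chain (r1 -> r0 -> r1 -> ...) cannot loop forever.
    Reg getRegForReload(Reg Desired, const ReuseMap &Reuses, std::set<Reg> &Rejected) {
      Rejected.insert(Desired);
      auto It = Reuses.find(Desired);
      if (It == Reuses.end())
        return Desired;                 // not taken: use it directly
      Reg Alt = It->second;             // taken: try that reuser's reload register
      if (Rejected.count(Alt))
        return Alt;                     // every candidate on the cycle was tried;
                                        // the real code gives up on reuse here
      return getRegForReload(Alt, Reuses, Rejected);
    }

Because each step adds a register to Rejected before recursing, the search visits each register at most once and therefore terminates.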
Chris Lattner aee775a6b7 Eliminate static ctors from Statistics
llvm-svn: 32698
2006-12-19 22:41:21 +00:00
Bill Wendling a77f14265b Added an automatic cast to "std::ostream*" etc. from OStream. We can then
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.

llvm-svn: 32636
2006-12-17 05:15:13 +00:00
Evan Cheng 54c4ab8524 Minor clean up.
llvm-svn: 32593
2006-12-15 06:41:01 +00:00
Evan Cheng 4c306ae0c2 Fix a long-standing spiller bug:
If a spillslot value is available in a register, and there is a noop copy that
targets that register, the spiller correctly decides not to invalidate the
spillslot register.

However, even though the noop copy does not clobber the value, it does start a
new intersecting live range. That means the spillslot register is available for
use but should not be reused for a two-address instruction modref operand, which
would clobber the new live range.

When we remove the noop copy, update the available information by clearing the
canClobber bit.

llvm-svn: 32576
2006-12-14 07:54:05 +00:00
Evan Cheng 78cb08d082 Move findTiedToSrcOperand to TargetInstrDescriptor.
llvm-svn: 32366
2006-12-08 18:45:48 +00:00
Evan Cheng bb4e6d4d12 Proper fix for PR1037: to determine if a VR is a modref, check 1) whether it is
tied to another operand, 2) whether it is being tied to by another operand. So
the destination operand of a two-address MI can be correctly identified.

llvm-svn: 32354
2006-12-08 08:02:34 +00:00
Reid Spencer e44aa812b4 Revision 1.83 causes PR1037.
Reverted.

llvm-svn: 32305
2006-12-07 16:21:19 +00:00
Bill Wendling f3baad3ee1 Changed llvm_ostream et al. to OStream. llvm_cerr, llvm_cout, and llvm_null are
now cerr, cout, and NullStream respectively.

llvm-svn: 32298
2006-12-07 01:30:32 +00:00
Evan Cheng e312c152d2 MI keeps a ptr to the TargetInstrDescriptor; use it.
llvm-svn: 32296
2006-12-07 01:21:59 +00:00
Evan Cheng 7074cbd449 getOperandConstraint returns -1 if the operand does have the specific constraint. This bug was causing excessive spills.
llvm-svn: 32295
2006-12-07 00:46:04 +00:00
Chris Lattner 700b873130 Detemplatize the Statistic class. The only type it is instantiated with
is 'unsigned'.

llvm-svn: 32279
2006-12-06 17:46:33 +00:00
Evan Cheng 67fc141db5 Match TargetInstrInfo changes.
llvm-svn: 32098
2006-12-01 21:52:58 +00:00
Bill Wendling 9d46fcd59c More removal of std::cerr and DEBUG, replacing with DOUT instead.
llvm-svn: 31806
2006-11-17 02:09:07 +00:00
Evan Cheng 51733ed4a3 Fixed some spiller bugs exposed by the recent two-address code changes. Now
there may be other def(s) apart from the use&def two-address operand. We need
to check if the register reuse for a use&def operand may conflict with another
def. Provide a means to recover from the conflict if it is detected when the
defs are processed later.

llvm-svn: 31439
2006-11-04 00:21:55 +00:00
Evan Cheng 93cdd149f7 Rename
llvm-svn: 31364
2006-11-01 23:18:32 +00:00
Evan Cheng d8697deca3 Two-address instructions no longer have to be A := A op C. Now any pair of dest / src operands can be tied together.
llvm-svn: 31363
2006-11-01 23:06:55 +00:00
Chris Lattner c040e53372 restore my previous patch, now that the X86 backend bug has been fixed:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20061009/038518.html

llvm-svn: 30906
2006-10-12 17:45:38 +00:00
Evan Cheng c935741b1d Backing out Chris' last commit. It's breaking llvm-gcc bootstrapping.
It's turning:
        movl -24(%ebp), %esp
        subl $16, %esp
        movl -24(%ebp), %ecx
into
        movl -24(%ebp), %esp
        subl $16, %esp
        movl %esp, (%esp)

llvm-svn: 30902
2006-10-12 08:00:47 +00:00
Chris Lattner 86a012ab61 If we see a load from a stack slot into a physreg, consider it as providing
the stack slot value.  This fixes PR943.

llvm-svn: 30898
2006-10-12 02:34:07 +00:00
Chris Lattner 13a5dcddce Fix a long-standing wart in the code generator: two-address instruction lowering
actually *removes* one of the operands, instead of just assigning both operands
the same register.  This makes reasoning about instructions unnecessarily complex,
because you need to know if you are before or after register allocation to match
up operand #'s with the target description file.

Changing this also gets rid of a bunch of hacky code in various places.

This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.

llvm-svn: 30108
2006-09-05 02:12:02 +00:00
Chris Lattner 3d27be1333 s|llvm/Support/Visibility.h|llvm/Support/Compiler.h|
llvm-svn: 29911
2006-08-27 12:54:02 +00:00
Chris Lattner bdf121060c Take advantage of the recent improvements to the liveintervals set (tracking
instructions which define each value#) to simplify and improve the coalescer.
In particular, this patch:

1. Implements iterative coalescing.
2. Reverts an unsafe hack from handlePhysRegDef, superseding it with a
   better solution.
3. Implements PR865, "coalescing" away the second copy in code like:

   A = B
   ...
   B = A

This also includes changes to symbolically print registers in intervals
when possible.

llvm-svn: 29862
2006-08-24 22:43:55 +00:00
Bill Wendling 04f2246400 Added a check so that if we have two machine instructions in this form:
    MOV R0, R1
    MOV R1, R0

the second machine instruction is removed. Added a regression test.

llvm-svn: 29792
2006-08-21 07:33:33 +00:00
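A minimal sketch of the reciprocal-copy check added above, over a flattened list of copies (the Copy struct and function name are hypothetical simplifications, not the actual pass):

    #include <cstddef>
    #include <vector>

    struct Copy { unsigned Dst, Src; };   // Dst = Src

    // If a copy is immediately followed by its mirror image (R0 = R1; R1 = R0),
    // the second copy changes nothing and can be erased.
    void removeRedundantBackCopies(std::vector<Copy> &Copies) {
      for (std::size_t I = 0; I + 1 < Copies.size(); ) {
        if (Copies[I].Dst == Copies[I + 1].Src && Copies[I].Src == Copies[I + 1].Dst)
          Copies.erase(Copies.begin() + I + 1);   // drop "MOV R1, R0"
        else
          ++I;
      }
    }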
Jim Laskey 4b49c23571 Eliminate data relocations by using NULL instead of global empty list.
llvm-svn: 29250
2006-07-21 21:15:20 +00:00
Andrew Lenharth c496b418b5 Reduce number of exported symbols
llvm-svn: 29220
2006-07-20 17:28:38 +00:00
Chris Lattner e097e6f7c7 Shave another 27K off libllvmgcc.dylib with visibility hidden
llvm-svn: 28973
2006-06-28 22:17:39 +00:00
Chris Lattner 10d6341618 Move some methods out of MachineInstr into MachineOperand
llvm-svn: 28102
2006-05-04 17:52:23 +00:00
Chris Lattner fd0a5478a1 Fix a latent bug that my spiller patch last week exposed: we were leaving
instructions in the virtregfolded map that were deleted.  Because they
were deleted, newly allocated instructions could end up at the same address,
magically finding themselves in the map.  The solution is to remove entries
from the map when we delete the instructions.

llvm-svn: 28041
2006-05-01 22:03:24 +00:00
Chris Lattner ab7dbe0cc9 When promoting a load to a reg-reg copy, where the load was a previous
instruction folded with spill code, make sure to remove the load from
the virt reg folded map.

llvm-svn: 28040
2006-05-01 21:17:10 +00:00
Chris Lattner 4dee67c2cd Remove previous patch, which wasn't quite right.
llvm-svn: 28039
2006-05-01 21:16:03 +00:00
Evan Cheng a656242690 Remove temp. option -spiller-check-liveout; it didn't cause any failures or performance regressions.
llvm-svn: 28029
2006-05-01 08:54:57 +00:00
Evan Cheng f71f0f2e0b Local spiller kills a store if the folded restore is turned into a copy.
But this is incorrect if the spilled value's live range extends beyond the
current BB.
It is currently controlled by a temporary option -spiller-check-liveout.

llvm-svn: 28024
2006-04-30 08:41:47 +00:00
Chris Lattner 79c50d96c9 Mapping of physregs can make it so that the designated and input physregs are
the same.  In this case, don't emit a noop copy.

llvm-svn: 28008
2006-04-28 04:43:18 +00:00
Chris Lattner 84e95d00b5 When we have a two-address instruction where the input cannot be clobbered
and is already available, instead of falling back to emitting a load, fall
back to emitting a reg-reg copy.  This generates significantly better code
for some SSE testcases, as SSE has lots of two-address instructions and
none of them are read/modify/write.  As one example, this change does:

        pshufd %XMM5, XMMWORD PTR [%ESP + 84], 255
        xorps %XMM2, %XMM5
        cmpltps %XMM1, %XMM0
-       movaps XMMWORD PTR [%ESP + 52], %XMM0
-       movapd %XMM6, XMMWORD PTR [%ESP + 52]
+       movaps %XMM6, %XMM0
        cmpltps %XMM6, XMMWORD PTR [%ESP + 68]
        movapd XMMWORD PTR [%ESP + 52], %XMM6
        movaps %XMM6, %XMM0
        cmpltps %XMM6, XMMWORD PTR [%ESP + 36]
        cmpltps %XMM3, %XMM0
-       movaps XMMWORD PTR [%ESP + 20], %XMM0
-       movapd %XMM7, XMMWORD PTR [%ESP + 20]
+       movaps %XMM7, %XMM0
        cmpltps %XMM7, XMMWORD PTR [%ESP + 4]
        movapd XMMWORD PTR [%ESP + 20], %XMM7
        cmpltps %XMM4, %XMM0

... which is far better than a store followed by a load!

llvm-svn: 28001
2006-04-28 01:46:50 +00:00
Chris Lattner 7d01f95a57 Fix a bug that Evan exposed with some changes he's making, and that was
exposed with a fastcc problem (breaking pcompress2 on x86 with -enable-x86-fastcc).

When reloading a reused reg, make sure to invalidate the reloaded reg, and
check to see if there are any other pending uses of the same register.

llvm-svn: 26369
2006-02-25 02:17:31 +00:00
Chris Lattner 28a0b8bec7 Remove debugging printout :)
Add a minor compile time win, no codegen change.

llvm-svn: 26368
2006-02-25 02:03:40 +00:00
Chris Lattner 525522e429 Refactor some code from being inline to being out in a new class with methods.
This gets rid of two gotos, which is always nice, and also adds some comments.

No functionality change, this is just a refactor.

llvm-svn: 26367
2006-02-25 01:51:33 +00:00
Jeff Cohen 57a004abfe Fix VC++ warning.
llvm-svn: 25957
2006-02-04 03:27:39 +00:00
Chris Lattner c93403a7fb Handle another case exposed on X86.
llvm-svn: 25949
2006-02-03 23:50:46 +00:00
Chris Lattner 71d20c4e18 Fix a nasty problem on two-address machines in the following situation:
store EAX -> [ss#0]
[ss#0] += 1
...
use(EAX)

In this case, it is not valid to rewrite this as:


store EAX -> [ss#0]
EAX += 1
store EAX -> [ss#0]  ;;; this would also delete the store above
...
use(EAX)

... because EAX is not dead at that point.  Keep track of which registers
we are allowed to clobber, and which ones we aren't, and don't clobber the
ones we're not supposed to.  :)

This should resolve the issues on X86 last night.

llvm-svn: 25948
2006-02-03 23:28:46 +00:00
Chris Lattner 507a3a7bd1 significantly simplify the VirtRegMap code by pulling the SpillSlotsAvailable
and PhysRegsAvailable maps out into a new AvailableSpills struct.  No
functionality change.

This paves the way for a bugfix, coming up next.

llvm-svn: 25947
2006-02-03 23:13:58 +00:00
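A minimal sketch of the bookkeeping such a struct centralizes, assuming simplified names and a plain map; the real AvailableSpills interface differs, and notably one physreg may hold several slot values at once, which this sketch ignores:

    #include <map>

    // Illustrative only: which spill slots are available in which physical
    // registers, and whether the copy holding them may be clobbered
    // (e.g. by a two-address modref operand). One slot per physreg, for brevity.
    class AvailableSpillsSketch {
      std::map<int, unsigned> SpillSlotsAvailable; // frame index -> (physreg << 1) | canClobber
      std::map<unsigned, int> PhysRegsAvailable;   // physreg -> frame index it holds

    public:
      void addAvailable(int Slot, unsigned PhysReg, bool CanClobber = true) {
        SpillSlotsAvailable[Slot] = (PhysReg << 1) | unsigned(CanClobber);
        PhysRegsAvailable[PhysReg] = Slot;
      }

      // Returns the physreg holding Slot, or 0 if none.
      unsigned getSpillSlotPhysReg(int Slot, bool &CanClobber) const {
        auto It = SpillSlotsAvailable.find(Slot);
        if (It == SpillSlotsAvailable.end()) return 0;
        CanClobber = It->second & 1;
        return It->second >> 1;
      }

      // PhysReg was overwritten: whatever slot value it cached is gone.
      void clobberPhysReg(unsigned PhysReg) {
        auto It = PhysRegsAvailable.find(PhysReg);
        if (It == PhysRegsAvailable.end()) return;
        SpillSlotsAvailable.erase(It->second);
        PhysRegsAvailable.erase(It);
      }
    };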
Jeff Cohen 3276ff7ac6 Fix VC++ compilation error caused by using a std::map iterator variable to receive
a std::multimap iterator value.  For some reason, GCC doesn't have a problem with this.

llvm-svn: 25927
2006-02-03 03:48:54 +00:00
Chris Lattner e18ef0d4a6 Remove move copies and dead stuff by not clobbering the result reg of a noop copy.
llvm-svn: 25926
2006-02-03 03:16:14 +00:00
Chris Lattner 774d4a190b Simplify some code
llvm-svn: 25924
2006-02-03 03:06:49 +00:00
Chris Lattner 1ef239afb4 Add code that checks for noop copies, which triggers when either:
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
   instead of to the reloaded reg.

This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks.  This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.

llvm-svn: 25922
2006-02-03 02:02:59 +00:00
Chris Lattner b7f24de4c8 Physregs may hold multiple stack slot values at the same time. Keep track
of this, and use it to our advantage (bwahahah).  This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86).  A common
old-new diff looks like this:

        stw r2, 3304(r1)
-       lwz r2, 3192(r1)
        stw r2, 3300(r1)
-       lwz r2, 3192(r1)
        stw r2, 3296(r1)
-       lwz r2, 3192(r1)
        stw r2, 3200(r1)
-       lwz r2, 3192(r1)
        stw r2, 3196(r1)
-       lwz r2, 3192(r1)
+       or r2, r2, r2
        stw r2, 3188(r1)

and

-       lwz r31, 604(r1)
-       lwz r13, 604(r1)
-       lwz r14, 604(r1)
-       lwz r15, 604(r1)
-       lwz r16, 604(r1)
-       lwz r30, 604(r1)
+       or r31, r30, r30
+       or r13, r30, r30
+       or r14, r30, r30
+       or r15, r30, r30
+       or r16, r30, r30
+       or r30, r30, r30

Removal of the R = R copies is coming next...

llvm-svn: 25919
2006-02-03 00:36:31 +00:00
Chris Lattner f3aef1b004 Fix a deficiency in the spiller that Evan noticed. In particular, consider
this code:

  store [stack slot #0],  R10
    = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stackslot#0] available
in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values
they make available.  In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
1)      movsd QWORD PTR [%ESP + 48], %XMM1
2)      movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling, on PPC
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.

On X86, things are much better (because it spills more), where we nuke
about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...

llvm-svn: 25917
2006-02-02 23:29:36 +00:00
Chris Lattner bb53acd03c Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)
llvm-svn: 25913
2006-02-02 20:12:32 +00:00
Chris Lattner de02d7727f Add explicit #includes of <iostream>
llvm-svn: 25515
2006-01-22 23:41:00 +00:00
Chris Lattner 0511055276 Add an assertion, update DefInst even though no one uses it (dangling pointers
don't help anyone)

llvm-svn: 25081
2006-01-04 06:47:48 +00:00
Chris Lattner fabe55f155 Fix the LLC regressions on X86 last night. In particular, when undoing
previous copy elisions, if we discover we need to reload a register, make
sure to use the regclass of the original register for the reload, not the
class of the current register.  This avoids using 16-bit loads to reload 32-bit
values.

llvm-svn: 23645
2005-10-06 17:19:06 +00:00
Chris Lattner 55149d7835 Fix a bug in the local spiller, where we could take code like this:
  store r12 -> [ss#2]
  R3 = load [ss#1]
  use R3
  R3 = load [ss#2]
  R4 = load [ss#1]

and turn it into this code:

  store R12 -> [ss#2]
  R3 = load [ss#1]
  use R3
  R3 = R12
  R4 = R3    <- oops!

The problem was that promoting R3 = load [ss#2] to a copy missed the fact that
the instruction invalidated R3 at that point.

llvm-svn: 23638
2005-10-05 18:30:19 +00:00
Chris Lattner 5a6199f387 Change this code to pass register classes into the stack slot spiller/reloader
code.  PrologEpilogInserter hasn't been updated yet though, so targets cannot
use this info.

llvm-svn: 23536
2005-09-30 01:29:00 +00:00
Chris Lattner 2f838f2192 Teach the local spiller to turn stack slot loads into register-register copies
when possible, avoiding the load (and avoiding the copy if the value is already
in the right register).

This patch came about when I noticed code like the following being generated:

  store R17 -> [SS1]
  ...blah...
  R4 = load [SS1]

This was causing an LSU reject on the G5.  This problem was due to the register
allocator folding spill code into a reg-reg copy (producing the load), which
prevented the spiller from being able to rewrite the load into a copy, despite
the fact that the value was already available in a register.  In the case
above, we now rip out the R4 load and replace it with a R4 = R17 copy.

This speeds up several programs on X86 (which spills a lot :) ), e.g.
smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from
68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc.  This may have a larger
impact in some cases on the G5 (by avoiding LSU rejects), though it probably
won't trigger as often (less spilling in general).

Targets that implement folding of loads/stores into copies should implement
the isLoadFromStackSlot hook to get this.

llvm-svn: 23388
2005-09-19 06:56:21 +00:00
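A minimal sketch of the rewrite decision described above, assuming a per-block map of which register currently holds each stack slot's value (the names here, including the LoadFromSlot stand-in for what isLoadFromStackSlot reports, are hypothetical):

    #include <map>
    #include <optional>
    #include <utility>

    using Reg = unsigned;
    using FrameIdx = int;
    using SlotContents = std::map<FrameIdx, Reg>; // slot -> register holding its value

    // What the target hook reports for a plain stack-slot reload: "Dst = load [Slot]".
    struct LoadFromSlot { Reg Dst; FrameIdx Slot; };

    // If the slot's value already lives in a register, the reload can become a
    // register-register copy (returned as (CopyDst, CopySrc)); otherwise keep the load.
    std::optional<std::pair<Reg, Reg>>
    rewriteReload(const LoadFromSlot &L, const SlotContents &Avail) {
      auto It = Avail.find(L.Slot);
      if (It == Avail.end())
        return std::nullopt;                       // value not in a register
      return std::make_pair(L.Dst, It->second);    // e.g. "R4 = load [SS1]" -> "R4 = R17"
    }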
Chris Lattner 1410003751 Use continue in the use-processing loop to make it clear what the early exits
are, simplify logic, and cause things to not be nested as deeply.  This also
uses MRI->areAliases instead of an explicit loop.

No functionality change, just code cleanup.

llvm-svn: 23296
2005-09-09 20:29:51 +00:00
Misha Brukman 835702a094 Remove trailing whitespace
llvm-svn: 21420
2005-04-21 22:36:52 +00:00
Chris Lattner 6a6056e93d Make sure to notice that explicit physregs are used in the function
llvm-svn: 21084
2005-04-04 21:35:34 +00:00
Chris Lattner ae09d93b35 Update these register allocators to set the PhysRegUsed info in MachineFunction.
llvm-svn: 19791
2005-01-23 22:45:13 +00:00
Chris Lattner 0ad02bdd3d Improve compatibility with acc
llvm-svn: 19549
2005-01-14 15:54:24 +00:00
Chris Lattner c8b07dd339 Clean up the MachineBasicBlock.h file, percolating #includes into this file.
Patch contributed by Morten Ofstad

llvm-svn: 17251
2004-10-26 15:35:58 +00:00
Chris Lattner 2152236351 This patch fixes the nasty bug that caused 175.vpr to fail for X86 last night.
The problem occurred when trying to reload this instruction:

MOV32mr %reg2326, 8, %reg2297, 4, %reg2295

The value of reg2326 was available in EBX, so it was reused from there, instead
of reloading it into EDX.

The value of reg2297 was available in EDX, so it was reused from there, instead
of reloading it into EDI.

The value of reg2295 was not available, so we tried reloading it into EBX, its
assigned register.  However, we checked and saw that we already reloaded
something into EBX, so we chose what reg2326 was assigned to (EDX) and reloaded
into that register instead.

Unfortunately EDX had already been used by reg2297, so reloading into EDX
clobbered the value used by the reg2326 operand, breaking the program.

The fix for this is to check that the newly picked register is ok.  In this
case we now find that EDX is already used and try using EDI, which succeeds.

llvm-svn: 17006
2004-10-15 03:19:31 +00:00
Chris Lattner 9af0572a37 This patch adds and improves debugging output. No functionality changes.
llvm-svn: 17005
2004-10-15 03:16:29 +00:00
Chris Lattner 00db230c7c Do not repeat the map lookup
llvm-svn: 16633
2004-10-01 23:16:43 +00:00
Chris Lattner 1905ae69c1 When a virtual register is folded into an instruction, keep track of whether
it was a use, def, or both.  This allows us to be less pessimistic in our
analysis of them.  In practice, this doesn't make a big difference, but it
doesn't hurt either.

llvm-svn: 16632
2004-10-01 23:15:36 +00:00
Chris Lattner 04f52079d7 Add a simple little improvement to the local spiller to keep track of stores
and delete them if they turn out to be dead.  This is a useful little hack
that even speeds up some programs.  For example, it speeds up Ptrdist/ks
from 17.53s to 15.59s, and 188.ammp from 149s to 146s.

This also speeds up llc :)

llvm-svn: 16630
2004-10-01 19:47:12 +00:00
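A minimal sketch of the dead-store bookkeeping described above, with hypothetical names (the real code tracks MachineInstr pointers per spill slot):

    #include <map>
    #include <set>

    using FrameIdx = int;
    using InstrId = unsigned; // stand-in for a pointer to the store instruction

    // Remember the last store to each spill slot that nothing has read yet. If the
    // slot is stored to again before being read, the earlier store was dead.
    struct MaybeDeadStoresSketch {
      std::map<FrameIdx, InstrId> LastUnreadStore;
      std::set<InstrId> DeadStores; // stores safe to delete

      void noteStore(FrameIdx Slot, InstrId Store) {
        auto It = LastUnreadStore.find(Slot);
        if (It != LastUnreadStore.end())
          DeadStores.insert(It->second); // overwritten before any reload: dead
        LastUnreadStore[Slot] = Store;
      }

      void noteLoad(FrameIdx Slot) {
        LastUnreadStore.erase(Slot);     // the last store was read, so it is live
      }
    };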
Chris Lattner d3b1f6c703 Substantially revamp the local spiller, causing it to actually improve the
generated code over the simple spiller.  The new local spiller generates
substantially better code than the simple one in some cases, by reusing
values that are loaded out of stack slots and kept available in registers.

This primarily helps programs that are spilling a lot, and there is still
stuff that can be done to improve it.  This patch makes the local spiller
the default, as it's only a tiny bit slower than the simple spiller (it
increases the runtime of llc by < 1%).

Here are some numbers with speedups.

Program    #reuse  old(s)    new(s)  Speedup

Povray:     3452,  16.87 ->  15.93   (5.5%)
177.mesa:   2176,   2.77 ->   2.76   (0%)
179.art:      35,  28.43 ->  28.01   (1.5%)
183.equake:   55,  61.44 ->  61.41   (0%)
188.ammp:    869, 174    -> 149      (15%)

164.gzip:     43,  40.73 ->  40.71   (0%)
175.vpr:     351,  18.54 ->  17.34   (6.5%)
176.gcc:    2471,   5.01 ->   4.92   (1.8%)
181.mcf       42,  79.30 ->  75.20   (5.2%)
186.crafty:  484,  29.73 ->  30.04   (-1%)
197.parser:  251,  10.47 ->  10.67   (-1%)
252.eon:    1501,   1.98 ->   1.75   (12%)
253.perlbm: 1183,  14.83 ->  14.42   (2.8%)
254.gap:     825,   7.46 ->   7.29   (2.3%)
255.vortex:  285,  10.51 ->  10.27   (2.3%)
256.bzip2:    63,  55.70 ->  55.20   (0.9%)
300.twolf:   830,  21.63 ->  22.00   (-1%)

PtrDist/ks    14,  32.75 -> 17.53    (46.5%)
Olden/tsp     46,   8.71 ->  8.24    (5.4%)
Free/distray  70,   1.09 ->  0.99    (9.2%)

llvm-svn: 16629
2004-10-01 19:04:51 +00:00
Chris Lattner b5b4a2f76b Use more efficient map operations. Fix a bug that would affect hypothetical
targets that supported multiple memory operands.

llvm-svn: 16614
2004-09-30 16:35:08 +00:00
Chris Lattner 55c1402f25 There is no need to call MachineInstr::print directly; just send the MI& to an ostream.
llvm-svn: 16613
2004-09-30 16:10:45 +00:00
Chris Lattner c2812121cd Simplify the logic in the simple spiller and capitalize some variables
llvm-svn: 16609
2004-09-30 02:59:33 +00:00
Chris Lattner 1c5942fee9 Switch from defaulting to the 'local' spiller to the 'simple' spiller. The
two spillers produce perfectly identical code (at least on povray and eon),
but the simple spiller is substantially faster than the local spiller. Once
the local spiller is improved, we can switch back.

Switching cuts 5.2% off of the llc time for povray (about 1.3s).

llvm-svn: 16608
2004-09-30 02:40:06 +00:00
Chris Lattner 28bc753cac Don't use a densemap for keeping track of which vregs are already loaded, just
use a simple vector.  This speeds up -spiller=simple from taking 22s to taking
.1s on povray (debug build).  This change does not modify the generated code.

llvm-svn: 16607
2004-09-30 02:33:48 +00:00
Chris Lattner 39fef8df03 Use longer and more explicit names for instance vars (particularly important
data structures).  Fix the print method to send to the right ostream, not
always cerr.  Delete typedefs that are only used once.

llvm-svn: 16606
2004-09-30 02:15:18 +00:00
Chris Lattner e2b77d57c0 Reindent code, improve comments, move huge nested methods out of classes,
prune #includes, add print/dump methods, etc.  No functionality changes.

llvm-svn: 16604
2004-09-30 01:54:45 +00:00
Reid Spencer 7c16caa336 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.

llvm-svn: 16137
2004-09-01 22:55:40 +00:00
Chris Lattner c66f27fd29 Stop using CreateStackObject(RegClass*)
llvm-svn: 15775
2004-08-15 22:02:22 +00:00
Chris Lattner 98de1d7795 These methods no longer take a TargetRegisterClass* operand.
llvm-svn: 15774
2004-08-15 21:56:44 +00:00
Brian Gaeke 902dcf0729 These files don't need to include <iostream> since they include "Support/Debug.h".
llvm-svn: 15089
2004-07-21 20:50:33 +00:00
Chris Lattner 34afafc190 Fix IA64 compatibility
llvm-svn: 14866
2004-07-16 00:06:01 +00:00
Tanya Lattner 23dbc8170c Made a fix so that you can print out MachineInstrs that belong to a MachineBasicBlock that is not yet attached to a MachineFunction. This change includes changing the third operand (TargetMachine) to a pointer for the MachineInstr::print function.
llvm-svn: 14389
2004-06-25 00:13:11 +00:00
Chris Lattner 2150542af9 Adjust to new TargetMachine interface
llvm-svn: 13956
2004-06-02 05:57:12 +00:00
Alkis Evlogimenos fd735bcf28 Add method to assign stack slot to virtual register without creating a
new one.

llvm-svn: 13895
2004-05-29 20:38:05 +00:00
Alkis Evlogimenos 6623cd78f9 Spill explicit physical register defs as well.
llvm-svn: 12260
2004-03-09 08:35:13 +00:00
Alkis Evlogimenos cb98644e9b As I wrote in the docs, simple is the default spiller :-)
llvm-svn: 12189
2004-03-06 23:08:44 +00:00
Alkis Evlogimenos 79850121ad Add simple spiller.
llvm-svn: 12188
2004-03-06 22:38:29 +00:00
Alkis Evlogimenos 31953c7a10 Add a spiller option to llc. A simple spiller will come soon. When we get a CFG in the machine code representation, a global spiller will also be possible. Also document the linear scan register allocator but mark it as experimental for now.
llvm-svn: 12062
2004-03-01 23:18:15 +00:00
Alkis Evlogimenos b76d234ee9 Add the long-awaited memory operand folding support for linear scan
llvm-svn: 12058
2004-03-01 20:05:10 +00:00
Alkis Evlogimenos 941f9310bb Make spiller push stores right after the definition of a register so
that they are as far away from the loads as possible.

llvm-svn: 11895
2004-02-27 04:51:35 +00:00
Alkis Evlogimenos 5a3bab9402 Clear maps right after basic block is processed.
llvm-svn: 11892
2004-02-26 23:22:23 +00:00
Alkis Evlogimenos e62ddd405d Fix bugs found with recent addition of assertions in
MRegisterInfo::is{Physical,Virtual}Register.

llvm-svn: 11849
2004-02-25 23:21:52 +00:00
Alkis Evlogimenos d8bace7f60 Add DenseMap template and actually use it for mapping virtual regs
to objects.

llvm-svn: 11840
2004-02-25 21:55:45 +00:00
Alkis Evlogimenos 1dd872ce94 Move machine code rewriter and spiller outside the register
allocator.

The implementation is completely rewritten and now employs several
optimizations not exercised before. For example, for 164.gzip we have
997 loads and 699 stores vs. the 1221 loads and 880 stores we had
before.

llvm-svn: 11798
2004-02-24 08:58:30 +00:00
Alkis Evlogimenos c794a9060f Refactor VirtRegMap out of RegAllocLinearScan as the first part of bug
251 (providing a generic machine code rewriter/spiller).

llvm-svn: 11780
2004-02-23 23:08:11 +00:00