Commit Graph

223 Commits

Author SHA1 Message Date
Evan Cheng 8c19af1b7e A couple of kill marker maintenance bugs.
llvm-svn: 48653
2008-03-21 19:09:30 +00:00
Evan Cheng 84aec09fdb Fix PR2138. Apparently any modification to a std::multimap (including removing entries for a different key) can invalidate multimap iterators.
llvm-svn: 48371
2008-03-14 20:44:01 +00:00
Dan Gohman 34ae72c435 Change VirtRegMap's dump to dump to cerr, not DOUT, so that it
can be called from within a debugger without having -debug specified
on the command-line.

llvm-svn: 48298
2008-03-12 20:52:10 +00:00
Evan Cheng 105cb3988b Set NextMII after issuing a physical register spill.
llvm-svn: 48263
2008-03-12 00:14:07 +00:00
Evan Cheng b398635456 Minor debug output bug.
llvm-svn: 48261
2008-03-12 00:02:46 +00:00
Evan Cheng a3891365b5 Transfer physical register spill info when load / store folding happens.
llvm-svn: 48246
2008-03-11 21:34:46 +00:00
Evan Cheng e88a625ecd When the register allocator runs out of registers, spill a physical register around the defs and uses of the interval being allocated to make it possible for the interval to target a register and spill it right away and restore a register for uses. This likely generates terrible code but is better than aborting.
llvm-svn: 48218
2008-03-11 07:19:34 +00:00
Evan Cheng 6325446666 Refactor code. Remove duplicated functions that basically do the same thing as
findRegisterUseOperandIdx, findRegisterDefOperandIdx. Fix some naming inconsistencies.

llvm-svn: 47927
2008-03-05 00:59:57 +00:00
Evan Cheng fdc732ab9a Fix a bug in dead spill slot elimination.
llvm-svn: 47687
2008-02-27 19:57:11 +00:00
Bill Wendling 97925ec704 Final de-tabification.
llvm-svn: 47663
2008-02-27 06:33:05 +00:00
Evan Cheng 6d56368caf Spiller now removes unused spill slots.
llvm-svn: 47657
2008-02-27 03:04:06 +00:00
Bill Wendling d7a258d325 Rename PrintableName to Name.
llvm-svn: 47629
2008-02-26 21:47:57 +00:00
Bill Wendling c24ea4fb41 Change "Name" to "AsmName" in the target register info. Gee, a refactoring tool
would have been a Godsend here!

llvm-svn: 47625
2008-02-26 21:11:01 +00:00
Bill Wendling 7bb51dfbb1 De-tabify.
llvm-svn: 47598
2008-02-26 10:51:52 +00:00
Evan Cheng ea1ef87ea2 Make sure reloads of implicit uses are issued before remats.
llvm-svn: 47492
2008-02-22 19:22:06 +00:00
Evan Cheng c373911461 Enable re-materialization of instructions which have virtual register operands if
the definition of the operand also reaches its uses.

llvm-svn: 47475
2008-02-22 09:24:50 +00:00
Anton Korobeynikov 035eaacd1f Update gcc 4.3 warnings fix patch with recent head changes
llvm-svn: 47368
2008-02-20 11:10:28 +00:00
Dan Gohman 3a4be0fdef Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Evan Cheng f2bd1387b0 Forgot these files.
llvm-svn: 46896
2008-02-08 22:05:27 +00:00
Chris Lattner 03ad885039 rename TargetInstrDescriptor -> TargetInstrDesc.
Make MachineInstr::getDesc return a reference instead
of a pointer, since it can never be null.

llvm-svn: 45695
2008-01-07 07:27:27 +00:00
Chris Lattner b0d06b4381 Move a bunch more accessors from TargetInstrInfo to TargetInstrDescriptor
llvm-svn: 45680
2008-01-07 03:13:06 +00:00
Chris Lattner a98c679de0 Rename MachineInstr::getInstrDescriptor -> getDesc(), which reflects
that it is cheap and efficient to get.

Move a variety of predicates from TargetInstrInfo into 
TargetInstrDescriptor, which makes it much easier to query a predicate
when you don't have TII around.  Now you can use MI->getDesc()->isBranch()
instead of going through TII, and this is much more efficient anyway. Not
all of the predicates have been moved over yet.

Update old code that used MI->getInstrDescriptor()->Flags to use the
new predicates in many places.

llvm-svn: 45674
2008-01-07 01:56:04 +00:00
Owen Anderson 0ec92e9d64 Update CodeGen for MRegisterInfo --> TargetInstrInfo changes.
llvm-svn: 45673
2008-01-07 01:35:56 +00:00
Owen Anderson eee14601b1 Move some more instruction creation methods from RegisterInfo into InstrInfo.
llvm-svn: 45484
2008-01-01 21:11:32 +00:00
Owen Anderson 7a73ae9a86 Move copyRegToReg from MRegisterInfo to TargetInstrInfo. This is part of the
Machine-level API cleanup instigated by Chris.

llvm-svn: 45470
2007-12-31 06:32:00 +00:00
Chris Lattner a10fff51d9 Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner 6005589faf More cleanups for MachineOperand:
- Eliminate the static "print" method for operands, moving it
    into MachineOperand::print.
  - Change various set* methods for register flags to take a bool
    for the value to set it to.  Remove unset* methods.
  - Group methods more logically by operand flavor in MachineOperand.h

llvm-svn: 45461
2007-12-30 21:56:09 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng 6766d2fa4f If deleting a reload instruction due to reuse (value is available in register R and reload is targeting R), make sure to invalidate the kill information of the last kill.
llvm-svn: 44894
2007-12-11 23:36:57 +00:00
Evan Cheng 678b86d6ce MachineInstr can change. Store indexes instead.
llvm-svn: 44612
2007-12-05 10:24:35 +00:00
Evan Cheng 06353b48b5 If a split live interval is spilled again, remove the kill marker on its last use.
llvm-svn: 44611
2007-12-05 09:51:10 +00:00
Evan Cheng d7de56ac93 Fix kill info for split intervals.
llvm-svn: 44609
2007-12-05 08:16:32 +00:00
Evan Cheng 269dbd31d0 - Mark last use of a split interval as kill instead of letting spiller track it.
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
  low spill weight so it would not be picked in case spilling is needed (avoid
  pushing other intervals in the same BB to be spilled).

llvm-svn: 44601
2007-12-05 03:22:34 +00:00
Evan Cheng bb26301864 Add an argument to storeRegToStackSlot and storeRegToAddr to specify whether
the stored register is killed.

llvm-svn: 44600
2007-12-05 03:14:33 +00:00
Evan Cheng e412a4427b Remove an unsafe optimization. This fixes 401.bzip2.
llvm-svn: 44587
2007-12-04 23:57:55 +00:00
Evan Cheng cd8a89b3cd Spiller unfold optimization bug: do not clobber a reusable stack slot value unless it can be modified.
llvm-svn: 44575
2007-12-04 19:19:45 +00:00
Evan Cheng 40965448ff Bug fixes.
llvm-svn: 44549
2007-12-03 21:31:55 +00:00
Evan Cheng 85ef9834a6 Update kill info for uses of split intervals.
llvm-svn: 44531
2007-12-03 09:58:48 +00:00
Evan Cheng f45a1d623c Remove redundant foldMemoryOperand variants and other code clean up.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng be255b0650 Fixed various live interval splitting bugs / compile time issues.
llvm-svn: 44428
2007-11-29 01:06:25 +00:00
Evan Cheng c1648b6a0d Recover compile time regression.
llvm-svn: 44386
2007-11-28 01:28:46 +00:00
Evan Cheng 8e22379303 Live interval splitting:
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.

This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note, it does *not* decrease the number of spills
since no copies are inserted; the split intervals are *connected* through
spills and reloads (or rematerialization). A newly created interval can be
spilled again; in that case, since it does not span multiple basic blocks, it's
spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.

This is currently controlled by -split-intervals-at-bb.

llvm-svn: 44198
2007-11-17 00:40:40 +00:00
Evan Cheng 7f02cfa599 Clean up sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Evan Cheng f851163c53 One more extract_subreg coalescing bug.
llvm-svn: 43644
2007-11-02 17:35:08 +00:00
Evan Cheng 8557603781 - Only perform the unfolding optimization when the folding in question is modref.
- Remove a bogus assertion.

llvm-svn: 43211
2007-10-22 03:01:44 +00:00
Evan Cheng 35ff79370b Local spiller optimization:
Turn a store folding instruction into a load folding instruction. e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Evan Cheng b63076504e Local spiller optimization:
Turn this:
movswl  %ax, %eax
movl    %eax, -36(%ebp)
xorl    %edi, -36(%ebp)
into
movswl  %ax, %eax
xorl    %edi, %eax
movl    %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions but reduces the number of times memory is accessed.

Also unfold some load folding instructions and reuse the value when a similar
situation presents itself.

llvm-svn: 42947
2007-10-13 02:50:24 +00:00
Evan Cheng aa2d6ef81d EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG like
(almost) a register copy. However, it always coalesces to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are sub-
register uses which adds subtle complications to load folding, spiller rewrite,
etc.

llvm-svn: 42899
2007-10-12 08:50:34 +00:00
Evan Cheng c1e4e3743b Allow copyRegToReg to emit cross register class copies.
Tested with "make check"!

llvm-svn: 42346
2007-09-26 06:25:56 +00:00
Dan Gohman 9da02f5ee2 Remove isReg, isImm, and isMBB, and change all their users to use
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent,
and more popular.

llvm-svn: 41958
2007-09-14 20:33:02 +00:00
David Greene a6d5d2a6a0 Add instruction dump output. This helps find bugs.
llvm-svn: 41744
2007-09-06 16:36:39 +00:00
Evan Cheng 958cf3d43e If the source of a move is in a spill slot, the reload may be folded to essentially a load from a stack slot. It's ok to mark the stack slot value as available for reuse. But it should not be clobbered since the destination of the move is live.
llvm-svn: 41109
2007-08-15 20:20:34 +00:00
Evan Cheng 3f22fffe94 - If a def is dead, do not spill it.
- If the defs of a spilled rematerializable MI are dead after the spill store is deleted, delete
  the def MI as well.

llvm-svn: 41086
2007-08-14 23:25:37 +00:00
Evan Cheng 6cb9fd7be5 If a MI's def is remat as well as spilled, and the store is later deemed dead, mark the def operand as isDead.
llvm-svn: 41083
2007-08-14 20:23:13 +00:00
Evan Cheng 234386509b If a spilled value is being reused and the use is a kill, that means there are
no more uses within the MBB and the spilled value isn't live out of the MBB.
Then it's safe to delete the spill store.

llvm-svn: 41069
2007-08-14 09:11:18 +00:00
Evan Cheng 78a8806f4f If a rematerializable def is not deleted, i.e. it is also spilled, check if the
spilled value is available for reuse.

llvm-svn: 41067
2007-08-14 05:42:54 +00:00
Evan Cheng 33820da1da Re-implement trivial rematerialization. This allows def MIs whose live intervals are coalesced to be rematerialized.
llvm-svn: 41060
2007-08-13 23:45:17 +00:00
Evan Cheng 6b6d1f685f Missed a couple of places where new instructions are added due to spill / restore.
llvm-svn: 39748
2007-07-11 19:17:18 +00:00
Evan Cheng 74a541024f No longer need to track last def / use.
llvm-svn: 38534
2007-07-11 08:47:44 +00:00
Evan Cheng bec7a20c5e Fix for PR1545: Revamp code that updates kill information due to register reuse.
llvm-svn: 38525
2007-07-11 05:28:39 +00:00
Dan Gohman 9e82064924 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
Dan Gohman 4a4a8eb00e Add a target hook to allow loads from constant pools to be rematerialized, and an
implementation for x86.

llvm-svn: 37576
2007-06-14 20:50:44 +00:00
Evan Cheng 910c80851e Rename findRegisterUseOperand to findRegisterUseOperandIdx to avoid confusion.
llvm-svn: 36483
2007-04-26 19:00:32 +00:00
Evan Cheng 0ba174534c Match MachineFunction::UsedPhysRegs changes.
llvm-svn: 36452
2007-04-25 22:13:27 +00:00
Evan Cheng 8be98c1572 Re-materialize all loads from fixed stack slots.
llvm-svn: 35660
2007-04-04 07:40:01 +00:00
Evan Cheng 9a2a7b174a Don't add the same MI to register reuse "last def/use" twice if it reads the
register more than once.

llvm-svn: 35513
2007-03-30 20:21:35 +00:00
Evan Cheng fdbdf43632 Don't call getOperandConstraint() if the operand index is greater than
TID->numOperands.

llvm-svn: 35375
2007-03-27 00:48:28 +00:00
Evan Cheng 4a09b1b5be Fix for PR1266. Don't mark a two address operand IsKill.
llvm-svn: 35365
2007-03-26 22:40:42 +00:00
Evan Cheng 0e3278e505 First cut trivial re-materialization support.
llvm-svn: 35208
2007-03-20 08:13:50 +00:00
Evan Cheng d74cb0e194 Only propagate IsKill if the last use is a kill.
llvm-svn: 34878
2007-03-03 06:32:37 +00:00
Evan Cheng 6605c5dbee - Keep track of all defs and uses of stack slot values available in registers.
- An available value's use may be deleted (e.g. a noop move).

llvm-svn: 34841
2007-03-02 08:52:00 +00:00
Evan Cheng 08f2f0d145 Invalidate last use of a reused register if the use is a deleted noop copy.
llvm-svn: 34839
2007-03-02 05:41:42 +00:00
Evan Cheng d6450ba1dc When a restore is promoted to a copy (or deleted entirely), remove the kill from the last use of the targeted register.
llvm-svn: 34773
2007-03-01 02:27:30 +00:00
Evan Cheng 38fd9b074f A couple more places where a register's liveness has been extended and its last kill should be updated accordingly.
llvm-svn: 34597
2007-02-25 09:51:27 +00:00
Evan Cheng 520b20d3b7 Reuse extends the liveness of a register. Transfer the kill to the operand that reuses it.
llvm-svn: 34536
2007-02-23 21:47:50 +00:00
Evan Cheng f7e320c9e0 A spill kills the register being stored. But if it is later reused by the spiller, its live range has to be extended.
llvm-svn: 34517
2007-02-23 01:13:26 +00:00
Evan Cheng de037a821a Use BitVector instead. No functionality change.
llvm-svn: 34460
2007-02-21 02:22:03 +00:00
Evan Cheng 61cd0914ed Dead code.
llvm-svn: 34435
2007-02-20 01:29:10 +00:00
Evan Cheng 6ad6fdb70b Fixed a long-standing spiller bug that's exposed by Thumb:
The code sequence before the spiller is something like:
                 = tMOVrr
        %reg1117 = tMOVrr
        %reg1078 = tLSLri %reg1117, 2

Then it starts spilling:
        %r0 = tRestore <fi#5>, 0
        %r1 = tRestore <fi#7>, 0
        %r1 = tMOVrr %r1<kill>
        tSpill %r1, <fi#5>, 0
        %reg1078 = tLSLri %reg1117, 2

It restores the value while processing the first tMOVrr. At this point, the
spiller remembers fi#5 is available in %r0. Next it processes the second move.
It restores the source before the move and spills the result afterwards. The
move becomes a noop and is deleted. However, a spill has been inserted and that
should invalidate reuse of %r0 for fi#5 and add reuse of %r1 for fi#5.
Therefore, %reg1117 (which is also assigned fi#5) should get %r1, not %r0.

llvm-svn: 34039
2007-02-08 06:04:54 +00:00
Chris Lattner 199818475b Switch this to use SmallSet to avoid mallocs in the common case.
llvm-svn: 33457
2007-01-23 00:59:48 +00:00
Evan Cheng fc74e2de26 GetRegForReload() now keeps track of which registers have been considered and rejected during its quest to find a suitable reload register. This avoids an infinite loop in cases like this:
t1 := op t2, t3
  t2 <- assigned r0 for use by the reload but ended up reuse r1
  t3 <- assigned r1 for use by the reload but ended up reuse r0
  t1 <- desires r1
        sees r1 is taken by t2, tries t2's reload register r0
        sees r0 is taken by t3, tries t3's reload register r1
        sees r1 is taken by t2, tries t2's reload register r0 ...

llvm-svn: 33382
2007-01-19 22:40:14 +00:00
Chris Lattner aee775a6b7 Eliminate static ctors from Statistics
llvm-svn: 32698
2006-12-19 22:41:21 +00:00
Bill Wendling a77f14265b Added an automatic cast to "std::ostream*" etc. from OStream. We can then
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.

llvm-svn: 32636
2006-12-17 05:15:13 +00:00
Evan Cheng 54c4ab8524 Minor clean up.
llvm-svn: 32593
2006-12-15 06:41:01 +00:00
Evan Cheng 4c306ae0c2 Fix a long-standing spiller bug:
If a spillslot value is available in a register, and there is a noop copy that
targets that register, the spiller correctly decides not to invalidate the
spillslot register.

However, even though the noop copy does not clobber the value, it does start a
new intersecting live range. That means the spillslot register is available for
use but should not be reused for a two-address instruction modref operand which
would clobber the new live range.

When we remove the noop copy, update the available information by clearing the
canClobber bit.

llvm-svn: 32576
2006-12-14 07:54:05 +00:00
Evan Cheng 78cb08d082 Move findTiedToSrcOperand to TargetInstrDescriptor.
llvm-svn: 32366
2006-12-08 18:45:48 +00:00
Evan Cheng bb4e6d4d12 Proper fix for PR1037: to determine if a VR is a modref, check 1) whether it is
tied to another operand, 2) whether it is being tied to by another operand. So
the destination operand of a two-address MI can be correctly identified.

llvm-svn: 32354
2006-12-08 08:02:34 +00:00
Reid Spencer e44aa812b4 Revision 1.83 causes PR1037.
Reverted.

llvm-svn: 32305
2006-12-07 16:21:19 +00:00
Bill Wendling f3baad3ee1 Changed llvm_ostream et al. to OStream. llvm_cerr, llvm_cout, and llvm_null are
now cerr, cout, and NullStream resp.

llvm-svn: 32298
2006-12-07 01:30:32 +00:00
Evan Cheng e312c152d2 MI keeps a ptr to TargetInstrDescriptor, use it.
llvm-svn: 32296
2006-12-07 01:21:59 +00:00
Evan Cheng 7074cbd449 getOperandConstraint returns -1 if the operand doesn't have the specified constraint. This bug was causing excessive spills.
llvm-svn: 32295
2006-12-07 00:46:04 +00:00
Chris Lattner 700b873130 Detemplatize the Statistic class. The only type it is instantiated with
is 'unsigned'.

llvm-svn: 32279
2006-12-06 17:46:33 +00:00
Evan Cheng 67fc141db5 Match TargetInstrInfo changes.
llvm-svn: 32098
2006-12-01 21:52:58 +00:00
Bill Wendling 9d46fcd59c More removal of std::cerr and DEBUG, replacing with DOUT instead.
llvm-svn: 31806
2006-11-17 02:09:07 +00:00
Evan Cheng 51733ed4a3 Fixed some spiller bugs exposed by the recent two-address code changes. Now
there may be other def(s) apart from the use&def two-address operand. We need
to check if the register reuse for a use&def operand may conflict with another
def. Provide a means to recover from the conflict if it is detected when the
defs are processed later.

llvm-svn: 31439
2006-11-04 00:21:55 +00:00
Evan Cheng 93cdd149f7 Rename
llvm-svn: 31364
2006-11-01 23:18:32 +00:00
Evan Cheng d8697deca3 Two-address instructions no longer have to be A := A op C. Now any pair of dest / src operands can be tied together.
llvm-svn: 31363
2006-11-01 23:06:55 +00:00
Chris Lattner c040e53372 restore my previous patch, now that the X86 backend bug has been fixed:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20061009/038518.html

llvm-svn: 30906
2006-10-12 17:45:38 +00:00
Evan Cheng c935741b1d Backing out Chris' last commit. It's breaking llvm-gcc bootstrapping.
It's turning:
        movl -24(%ebp), %esp
        subl $16, %esp
        movl -24(%ebp), %ecx
into
        movl -24(%ebp), %esp
        subl $16, %esp
        movl %esp, (%esp)

llvm-svn: 30902
2006-10-12 08:00:47 +00:00
Chris Lattner 86a012ab61 If we see a load from a stack slot into a physreg, consider it as providing
the stack slot.  This fixes PR943.

llvm-svn: 30898
2006-10-12 02:34:07 +00:00