Commit Graph

209 Commits

Author SHA1 Message Date
Evan Cheng aa2d6ef81d EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG
(almost) like a register copy. However, it always coalesces to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.

llvm-svn: 42899
2007-10-12 08:50:34 +00:00
David Greene 517d5d8ebe Constify to catch bugs.
llvm-svn: 41751
2007-09-06 19:46:46 +00:00
Evan Cheng d059eed1c1 Fix a memory leak.
llvm-svn: 41739
2007-09-06 01:07:24 +00:00
Evan Cheng db53aef53e Use a pool allocator for all the VNInfos to improve memory access locality. This reduces coalescing time on siod on Mac OS X PPC by 35%. Also remove the back pointer from VNInfo to LiveInterval, along with other tweaks.
llvm-svn: 41729
2007-09-05 21:46:51 +00:00
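
A minimal sketch of the pool-allocation idea above, with made-up field names and a std::deque standing in for a real pool allocator; this is illustrative only, not the actual VNInfo or its allocator:

    #include <deque>

    // Made-up value-number record; the real VNInfo has different members.
    struct ToyVNInfo {
        unsigned id;    // value number
        unsigned def;   // index of the defining instruction
    };

    // Minimal pool: records are carved out of a std::deque, so objects
    // allocated together sit in the same contiguous chunk, and push_back
    // never moves existing elements, so handed-out pointers stay valid.
    class ToyVNInfoPool {
        std::deque<ToyVNInfo> storage;
    public:
        ToyVNInfo *create(unsigned id, unsigned def) {
            storage.push_back(ToyVNInfo{id, def});
            return &storage.back();
        }
    };

Live intervals can then refer to pooled records by pointer rather than owning copies, as the next commit below describes.
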
Evan Cheng 2089a21360 More tweaks to improve compile time.
llvm-svn: 41669
2007-09-01 02:03:17 +00:00
Evan Cheng 91becf4ffa Remove an unnecessary element, saving 4 bytes per LiveInterval.
llvm-svn: 41641
2007-08-31 08:26:44 +00:00
Evan Cheng 1ad4a6117b Change LiveRange so it keeps a pointer to the VNInfo rather than an index.
Change the related modules so VNInfos are not copied. This decreases
copy coalescing time by 45% and overall compilation time by 10% on siod.

llvm-svn: 41579
2007-08-29 20:45:00 +00:00
Evan Cheng a5b10b334f Recover most of the compile time regression due to recent live interval changes.
1. Eliminate the costly live interval "swapping".
2. Change ValueNumberInfo container from SmallVector to std::vector. The former
   performs slowly when the vector size is very large.

llvm-svn: 41536
2007-08-28 08:28:51 +00:00
Evan Cheng 74c69f7588 Kill info update bugs.
llvm-svn: 41064
2007-08-14 01:56:58 +00:00
Evan Cheng 5ca98c657d Kill info update bugs.
llvm-svn: 41043
2007-08-13 07:12:23 +00:00
Evan Cheng 05cc486c7b Code to maintain kill information during register coalescing.
llvm-svn: 41016
2007-08-11 00:59:19 +00:00
Evan Cheng 103947125c Clean up and bug fix.
llvm-svn: 40921
2007-08-08 05:56:18 +00:00
Evan Cheng a8c2f38617 - Each val# can have multiple kills.
- Fix some minor bugs related to special markers on val# def. ~0U means
  undefined, ~1U means dead val#.

llvm-svn: 40916
2007-08-08 03:00:28 +00:00
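
The special markers and per-value kill list described above might look roughly like the toy structure below; the field names and layout are illustrative, not the actual VNInfo definition:

    #include <vector>

    // Illustrative value-number record carrying the markers described above:
    // a def slot of ~0U marks an undefined value, ~1U marks a dead value.
    struct ToyVNInfo {
        static constexpr unsigned Undefined = ~0U;
        static constexpr unsigned Dead      = ~1U;

        unsigned def = Undefined;      // defining instruction number, or a marker
        std::vector<unsigned> kills;   // a value can now have several kill points
    };

    inline bool isUndefined(const ToyVNInfo &v) { return v.def == ToyVNInfo::Undefined; }
    inline bool isDead(const ToyVNInfo &v)      { return v.def == ToyVNInfo::Dead; }
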
Evan Cheng c236617ea0 Remove a dead assertion.
llvm-svn: 40914
2007-08-08 01:00:21 +00:00
Evan Cheng 0d0fee269a - LiveInterval value#'s now have 3 components: def instruction #,
kill instruction #, and source register number (iff the value# is defined by a
copy).
- The def instruction # is now set for every value#, not just copy-defined ones.
- Update some outdated code related to inactive live ranges.
- Kill info is not yet set; that's the next patch.

llvm-svn: 40913
2007-08-07 23:49:57 +00:00
Evan Cheng 57b5214d59 Add a register allocation preference field; add a method to compute size of a live interval.
llvm-svn: 36216
2007-04-17 20:25:11 +00:00
Bill Wendling a77f14265b Added an automatic cast to "std::ostream*" etc. from OStream. We can then
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.

llvm-svn: 32636
2006-12-17 05:15:13 +00:00
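
The automatic cast plus null-check dispatch described above is a general C++ pattern; a minimal stand-alone sketch (ToyOStream is a made-up stand-in, not the retired OStream class) looks like this:

    #include <iostream>

    // A wrapper that may or may not carry a real stream and converts
    // implicitly to std::ostream*.
    class ToyOStream {
        std::ostream *os;                        // null means "discard output"
    public:
        explicit ToyOStream(std::ostream *s = nullptr) : os(s) {}
        operator std::ostream *() const { return os; }   // the automatic cast
    };

    struct Thing {
        // Callees take std::ostream*, check for null, then print.
        void print(std::ostream *out) const {
            if (!out) return;                    // null stream: print nothing
            *out << "Thing\n";
        }
    };

    int main() {
        ToyOStream real(&std::cout), null;       // a live stream and a "null" stream
        Thing t;
        t.print(real);                           // the conversion operator kicks in
        t.print(null);                           // silently discarded
    }
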
Jeff Cohen 29192e6274 The best unbreakage yet, addressing Bill's concerns.
llvm-svn: 32622
2006-12-16 02:15:42 +00:00
Jeff Cohen b82309f1ab An even better unbreakage...
llvm-svn: 32617
2006-12-15 22:57:14 +00:00
Bill Wendling f3baad3ee1 Changed llvm_ostream et al. to OStream. llvm_cerr, llvm_cout, and llvm_null are
now cerr, cout, and NullStream respectively.

llvm-svn: 32298
2006-12-07 01:30:32 +00:00
Bill Wendling 5c3966aa68 Converted to using llvm streams instead of <iostream>s
llvm-svn: 31992
2006-11-29 00:39:47 +00:00
Bill Wendling bc0d5f8bcb Put the #include for a module first.
llvm-svn: 31958
2006-11-28 03:31:29 +00:00
Bill Wendling 3f6f0fd028 Changed to using llvm streams.
llvm-svn: 31954
2006-11-28 02:08:17 +00:00
Reid Spencer de46e48420 For PR786:
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.

llvm-svn: 31380
2006-11-02 20:25:50 +00:00
Chris Lattner 5a56d30906 When joining two intervals where the RHS is really simple, use a lightweight
method for joining the live ranges instead of the fully general one.

llvm-svn: 30049
2006-09-02 05:26:59 +00:00
Chris Lattner aa36808fd3 Avoid calling the virtual isMoveInstr method repeatedly by caching its results.
llvm-svn: 29994
2006-08-31 05:54:43 +00:00
Chris Lattner 34434e97c9 Teach the coalescer to coalesce live intervals joined by an arbitrary
number of copies, potentially defining live ranges that appear to have
differing value numbers that become identical when coalesced.  Among other
things, this fixes CodeGen/X86/shift-coalesce.ll and PR687.

llvm-svn: 29968
2006-08-29 23:18:15 +00:00
Chris Lattner 122f2bcdc2 Simplifications to liveinterval analysis, no functionality change.
llvm-svn: 29896
2006-08-26 01:28:16 +00:00
Chris Lattner f4f0b1995c Completely change the way that joining with physregs is implemented. This
paves the way for future changes, increases coalescing opportunities (in
theory, not witnessed in practice), and eliminates the really expensive
LiveIntervals::overlapsAliases method.

llvm-svn: 29890
2006-08-25 23:41:24 +00:00
Chris Lattner 24d4208c97 When replacing value numbers, make sure to compactify the value # space.
llvm-svn: 29865
2006-08-24 23:22:59 +00:00
Chris Lattner bdf121060c Take advantage of the recent improvements to the liveintervals set (tracking
instructions which define each value#) to simplify and improve the coalescer.
In particular, this patch:

1. Implements iterative coalescing.
2. Reverts an unsafe hack from handlePhysRegDef, superseding it with a
   better solution.
3. Implements PR865, "coalescing" away the second copy in code like:

   A = B
   ...
   B = A

This also includes changes to symbolically print registers in intervals
when possible.

llvm-svn: 29862
2006-08-24 22:43:55 +00:00
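
Iterative coalescing (item 1 above) amounts to re-running the join pass until nothing changes, since merging one copy can make another copy joinable, as in the A = B ... B = A example. A schematic sketch, where Copy and tryJoin are made-up stand-ins rather than the coalescer's real interfaces:

    #include <functional>
    #include <vector>

    struct Copy { unsigned dst, src; };   // a register-to-register copy

    // Keep making passes over the remaining copies until a full pass makes
    // no progress.  tryJoin stands in for the real "can these two registers
    // be merged?" test and the merge itself.
    void coalesceIteratively(std::vector<Copy> &copies,
                             const std::function<bool(const Copy &)> &tryJoin) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (auto it = copies.begin(); it != copies.end(); ) {
                if (tryJoin(*it)) {
                    it = copies.erase(it);   // operands merged; the copy is gone
                    changed = true;          // merging may enable further joins
                } else {
                    ++it;
                }
            }
        }
    }
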
Chris Lattner 2e9f1bc056 Improve the LiveInterval class to keep track of which machine instruction
defines each value# tracked by the interval.  This will be used to improve
coalescing.

llvm-svn: 29830
2006-08-22 18:19:46 +00:00
Chris Lattner 76c97afbbc Fix LiveInterval::getOverlapingRanges to take things in the right order
(an unused method).

Fix the merger so that it can merge ranges like [10:12),[16:40) with
[12:38) into [10:40) instead of producing bogus ranges.  This sort of input
will be possible for the merger coming shortly.

llvm-svn: 23865
2005-10-21 06:41:30 +00:00
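
The behaviour the merger fix above describes, combining [10:12),[16:40) with [12:38) to get [10:40), can be reproduced with a small stand-alone routine over half-open ranges; this is an illustrative sketch, not the LLVM merger itself:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    struct Range { unsigned start, end; };   // half-open [start:end)

    // Union two lists of half-open ranges, gluing together ranges that
    // overlap or touch.
    std::vector<Range> mergeIntervals(std::vector<Range> a, const std::vector<Range> &b) {
        a.insert(a.end(), b.begin(), b.end());
        std::sort(a.begin(), a.end(),
                  [](const Range &x, const Range &y) { return x.start < y.start; });
        std::vector<Range> out;
        for (const Range &r : a) {
            if (!out.empty() && r.start <= out.back().end)            // overlap or abut
                out.back().end = std::max(out.back().end, r.end);
            else
                out.push_back(r);
        }
        return out;
    }

    int main() {
        auto m = mergeIntervals({{10, 12}, {16, 40}}, {{12, 38}});
        assert(m.size() == 1 && m[0].start == 10 && m[0].end == 40);  // [10:40)
    }
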
Chris Lattner b7b75e1b68 Fix a conditional so we don't access past the end of the range.  Thanks to
Andrew for bringing this to my attention.

llvm-svn: 23850
2005-10-20 22:50:10 +00:00
Chris Lattner 35852fc391 Fix an order-of-evaluation problem from when I refactored this into a function.
llvm-svn: 23844
2005-10-20 16:56:40 +00:00
Chris Lattner 3cf40798ab Add a new method and play around with some code.
Fix a *bug* in the extendIntervalEndTo method.  In particular, if adding
[2:10) to an interval containing [0:2),[10:30), we produced [0:10),[10:30),
which is not the smartest thing to do.  Now we produce [0:30).

llvm-svn: 23841
2005-10-20 07:39:25 +00:00
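
The extendIntervalEndTo fix above can be pictured with a toy version of the operation: when a range's end grows, it should absorb any later ranges it now reaches, so adding [2:10) to [0:2),[10:30) yields [0:30) rather than two back-to-back pieces. The names below are illustrative, not the real LiveInterval code:

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct Range { unsigned start, end; };   // half-open [start:end)

    // Extend ranges[i] to newEnd, swallowing any later ranges it now
    // overlaps or touches.
    void extendEndTo(std::vector<Range> &ranges, std::size_t i, unsigned newEnd) {
        ranges[i].end = newEnd;
        while (i + 1 < ranges.size() && ranges[i + 1].start <= ranges[i].end) {
            ranges[i].end = std::max(ranges[i].end, ranges[i + 1].end);
            ranges.erase(ranges.begin() + i + 1);
        }
    }

    int main() {
        std::vector<Range> iv = {{0, 2}, {10, 30}};
        extendEndTo(iv, 0, 10);   // conceptually "add [2:10)" to the interval
        assert(iv.size() == 1 && iv[0].start == 0 && iv[0].end == 30);   // [0:30)
    }
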
Chris Lattner 8816353040 Refactor some code, pulling it out into a function. No functionality change.
llvm-svn: 23839
2005-10-20 06:06:30 +00:00
Chris Lattner b1f8982ff0 Expose the LiveInterval interfaces as public headers.
llvm-svn: 23400
2005-09-21 04:19:09 +00:00
Chris Lattner c08d786ba5 Print the symbolic register name in a register allocator debug dump.
llvm-svn: 22002
2005-05-14 05:34:15 +00:00
Misha Brukman 835702a094 Remove trailing whitespace
llvm-svn: 21420
2005-04-21 22:36:52 +00:00
Chris Lattner 5684f4817f Prevent accessing past the end of the intervals vector; this fixes
Prolang-C/bison in the JIT.

llvm-svn: 18477
2004-12-04 01:22:09 +00:00
Chris Lattner e3b9cb9959 There is no need to check whether j overflowed in this loop, as we're only
incrementing i.

llvm-svn: 17944
2004-11-18 05:28:21 +00:00
Chris Lattner 6e0c3f44ba Moderate head scratching reveals that this conditional is not needed. If
i->start == j->start, then certainly i->end > j->start.

llvm-svn: 17943
2004-11-18 05:19:02 +00:00
Chris Lattner 7598c316e5 Take another .7 seconds off of linear scan time.
llvm-svn: 17936
2004-11-18 04:02:11 +00:00
Chris Lattner cb0c9655bf Add ability to give hints to the overlaps routines.
llvm-svn: 17934
2004-11-18 03:47:34 +00:00
Brian Gaeke a057cd2401 Give a better message for a common assertion failure.
llvm-svn: 17887
2004-11-16 06:52:35 +00:00
Alkis Evlogimenos fc59e0e8a3 Fix includes. Patch contributed by Paolo Invernizzi!
llvm-svn: 16533
2004-09-28 02:38:58 +00:00
Reid Spencer 7c16caa336 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.

llvm-svn: 16137
2004-09-01 22:55:40 +00:00
Chris Lattner bbe845b969 Fix the sense of joinable
llvm-svn: 15196
2004-07-25 07:47:25 +00:00
Chris Lattner ccc75d4f32 This patch makes use of the infrastructure implemented before to safely and
aggressively coalesce live ranges even if they overlap.  Consider this LLVM
code for example:

int %test(int %X) {
        %Y = mul int %X, 1      ;; Codegens to Y = X
        %Z = add int %X, %Y
        ret int %Z
}

The mul is just there to get a copy into the code stream.  This produces
this machine code:

 (0x869e5a8, LLVM BB @0x869b9a0):
        %reg1024 = mov <fi#-2>, 1, %NOREG, 0    ;; "X"
        %reg1025 = mov %reg1024                 ;; "Y"  (subsumed by X)
        %reg1026 = add %reg1024, %reg1025
        %EAX = mov %reg1026
        ret

Note that the life times of reg1024 and reg1025 overlap, even though they
contain the same value.  This results in this machine code:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        add %EAX, %ECX
        ret

Another, worse case involves loops and PHI nodes.  Consider this trivial loop:
testcase:

int %test2(int %X) {
entry:
        br label %Loop
Loop:
        %Y = phi int [%X, %entry], [%Z, %Loop]
        %Z = add int %Y, 1
        %cond = seteq int %Z, 100
        br bool %cond, label %Out, label %Loop
Out:
        ret int %Z
}

Because of interactions between the PHI elimination pass and the register
allocator, this got compiled to this code:

test2:
        mov %ECX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
***     mov %EAX, %ECX
        inc %EAX
        cmp %EAX, 100
***     mov %ECX, %EAX
        jne .LBBtest2_1

        ret

Or on powerpc, this code:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r2, r3, 1
        cmpwi cr0, r2, 100
***     or r3, r2, r2
        bne cr0, .LBB_test2_1

***     or r3, r2, r2
        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0



With this improvement in place, we now generate this code for these two
testcases, which is what we want:


test:
        mov %EAX, DWORD PTR [%ESP + 4]
        add %EAX, %EAX
        ret

test2:
        mov %EAX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
        inc %EAX
        cmp %EAX, 100
        jne .LBBtest2_1 # Loop
        ret

Or on PPC:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r3, r3, 1
        cmpwi cr0, r3, 100
        bne cr0, .LBB_test2_1

        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0


Static numbers for spill code loads/stores/reg-reg copies (smaller is better):

em3d:       before: 47/25/26         after: 44/22/24
164.gzip:   before: 433/245/310      after: 403/231/278
175.vpr:    before: 3721/2189/1581   after: 4144/2081/1423
176.gcc:    before: 26195/8866/9235  after: 25942/8082/8275
186.crafty: before: 4295/2587/3079   after: 4119/2519/2916
252.eon:    before: 12754/7585/5803  after: 12508/7425/5643
256.bzip2:  before: 463/226/315      after: 482/241/309


Runtime perf number samples on X86:

gzip:  before: 41.09s  after: 39.86s
bzip2: before: 56.71s  after: 57.07s
gcc:   before: 6.16s   after: 6.12s
eon:   before: 2.03s   after: 2.00s
llvm-svn: 15194
2004-07-25 07:11:19 +00:00
Chris Lattner c8002d49e3 Make a method const, no functionality changes
llvm-svn: 15193
2004-07-25 06:23:01 +00:00
Chris Lattner af7e898e84 Fix a bug in the range remover
llvm-svn: 15188
2004-07-25 05:43:53 +00:00
Alkis Evlogimenos cf72e7f854 Change std::map<unsigned, LiveInterval*> into a std::map<unsigned,
LiveInterval>. This saves some space and removes a level of pointer
indirection.

llvm-svn: 15167
2004-07-24 11:44:15 +00:00
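
Storing intervals by value instead of by pointer is a common space/indirection trade-off; the toy comparison below (ToyInterval is made up, not the real LiveIntervals map) shows what the change buys:

    #include <map>
    #include <utility>
    #include <vector>

    struct ToyInterval { std::vector<std::pair<unsigned, unsigned>> ranges; };

    // Before: the map stores pointers, so every lookup chases a pointer and
    // each interval is a separate heap allocation that must be deleted.
    std::map<unsigned, ToyInterval *> byPointer;

    // After: the map owns the intervals directly; no separate allocation for
    // the interval, no indirection, and erasing an entry cleans it up too.
    std::map<unsigned, ToyInterval> byValue;

    void touch(unsigned reg) {
        // operator[] default-constructs an interval in place on first use.
        byValue[reg].ranges.emplace_back(0u, 1u);
    }
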
Chris Lattner d9bbbb8484 In the joiner, merge the small interval into the large interval. This gets us
back to taking about 10.5s on gcc, instead of 15.6s!  The net result is that
my big patches have had no significant effect on compile time or code
quality.  heh.

llvm-svn: 15156
2004-07-24 03:41:50 +00:00
Chris Lattner 038747f5c0 Little stuff:
* Fix a comment typo
* add dump() methods
* add a few new methods like getLiveRangeContaining, removeRange & joinable
  (which is currently the same as overlaps)
* Remove the unused operator==

Bigger change:

* In LiveInterval, instead of using a boolean isDefinedOnce to keep track of
  if there are > 1 definitions in a particular interval, keep a counter,
  NumValues to keep track of exactly how many there are.
* In LiveRange, add a new ValId element to indicate which of the numbered
  values each LiveRange belongs to.  We now no longer merge LiveRanges if
  they have differing value IDs, even if they are neighbors.

llvm-svn: 15152
2004-07-24 02:52:23 +00:00
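
The NumValues counter and per-range ValId described above might be modelled roughly like this; the structures are illustrative toys, not the real LiveInterval and LiveRange definitions:

    #include <vector>

    struct ToyLiveRange {
        unsigned start, end;   // half-open [start:end)
        unsigned valId;        // which numbered value this piece of the interval holds
    };

    struct ToyLiveInterval {
        std::vector<ToyLiveRange> ranges;   // sorted by start
        unsigned numValues = 0;             // counter, not just a "defined once" flag
    };

    // Neighbouring ranges may only be folded together when they carry the
    // same value id; touching ranges of different values stay separate.
    bool mergeable(const ToyLiveRange &a, const ToyLiveRange &b) {
        return a.valId == b.valId && a.end == b.start;
    }
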
Chris Lattner b4acba49b1 Change addRange and join to be a little bit smarter. In particular, we don't
want to insert a new range into the middle of the vector, then delete ranges
one at a time next to the inserted one as they are merged.

Instead, if the inserted interval overlaps, just start merging.  The only time
we insert into the middle of the vector is when we don't overlap at all.  Also
delete blocks of live ranges if we overlap with many of them.

This patch speeds up joining by .7 seconds on a large testcase, but more
importantly gets all of the range adding code into addRangeFrom.

llvm-svn: 15141
2004-07-23 19:38:44 +00:00
Chris Lattner 2fcc5e416d Search by the start point, not by the whole interval. This saves some
comparisons, reducing linear scan time by another .1 seconds :)

llvm-svn: 15139
2004-07-23 18:40:00 +00:00
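
Searching by the start point rather than comparing whole intervals is, in effect, a binary search keyed on range starts; the sketch below shows one way to do that with std::upper_bound, as an illustration rather than the actual change:

    #include <algorithm>
    #include <vector>

    struct Range { unsigned start, end; };   // sorted by start, non-overlapping

    // Find the range containing pos, if any, with a single binary search
    // over the start coordinates.
    const Range *findContaining(const std::vector<Range> &ranges, unsigned pos) {
        auto it = std::upper_bound(ranges.begin(), ranges.end(), pos,
                                   [](unsigned p, const Range &r) { return p < r.start; });
        if (it == ranges.begin())
            return nullptr;                  // pos is before the first range
        --it;                                // last range starting at or before pos
        return pos < it->end ? &*it : nullptr;
    }
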
Chris Lattner c96d299569 Instead of searching for a live interval pair, search for a location. This gives
a very modest speedup of .3 seconds compiling 176.gcc (out of 20s).

llvm-svn: 15136
2004-07-23 18:13:24 +00:00
Chris Lattner 78f62e37f3 Pull the LiveRange and LiveInterval classes out of LiveIntervals.h (which
will soon be renamed) into their own file.  The new file should not emit
DEBUG output or have other side effects.  The LiveInterval class also now
doesn't know whether it's working on registers or something else.

In the future we will want to use the LiveInterval class and friends to do
stack packing.  In addition to a code simplification, this will allow us to
do it more easily.

llvm-svn: 15134
2004-07-23 17:49:16 +00:00