Commit Graph

22469 Commits

Evan Cheng ce87cac555 Complex pattern's custom matcher should not call Select() on any operands.
Select them afterwards if it returns true.

llvm-svn: 25968
2006-02-04 08:50:49 +00:00
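
Editor's note: a minimal toy sketch of the protocol 25968 above describes. Node, matchAddr, and selectOperand are hypothetical stand-ins, not LLVM's actual SelectionDAG API; the point is that the complex pattern's custom matcher only decides whether the operands fit and hands them back unselected, and the caller runs selection on them afterwards, once the match is known to have succeeded.

#include <iostream>
#include <string>
#include <vector>

// Toy DAG node; stands in for an unselected SelectionDAG operand.
struct Node {
  const char *Kind;               // e.g. "add", "constant", "register"
  std::vector<Node *> Operands;
  bool Selected;                  // has instruction selection run on it?
};

// Hypothetical complex-pattern matcher: decide whether N can be used as a
// base+offset address.  It must NOT call Select() on its operands -- it only
// reports which nodes would become the base and the offset.
bool matchAddr(Node *N, Node *&Base, Node *&Offset) {
  if (std::string(N->Kind) != "add" || N->Operands.size() != 2)
    return false;
  Base = N->Operands[0];
  Offset = N->Operands[1];
  return true;                    // match found; operands still unselected
}

// Caller: only after the matcher returns true do we select the operands.
Node *selectOperand(Node *N) { N->Selected = true; return N; }

int main() {
  Node Reg{"register", {}, false}, Cst{"constant", {}, false};
  Node Addr{"add", {&Reg, &Cst}, false};

  Node *Base = nullptr, *Offset = nullptr;
  if (matchAddr(&Addr, Base, Offset)) {
    Base = selectOperand(Base);     // selection happens here,
    Offset = selectOperand(Offset); // after the match has succeeded
  }
  std::cout << "base selected: " << Base->Selected
            << ", offset selected: " << Offset->Selected << "\n";
}
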
Chris Lattner ab146eae38 Custom lower VAARG for the case when we are doing vaarg(double). In this
case, the double being loaded may not be 8-byte aligned, so we have to use
our standard bit_convert game.

llvm-svn: 25967
2006-02-04 08:31:30 +00:00
Chris Lattner a1fa8b1c88 Fix a nasty typo that broke functions with big stack frames.
llvm-svn: 25966
2006-02-04 08:04:21 +00:00
Chris Lattner d096b2f3e0 fix a bug in my last checkin
llvm-svn: 25965
2006-02-04 07:48:46 +00:00
Chris Lattner 2959f0003e Fix two significant bugs in LSR:
1. When rewriting code in outer loops, sometimes we would insert code into
   inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.

This is a performance neutral change.

llvm-svn: 25964
2006-02-04 07:36:50 +00:00
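
Editor's note: a worked toy example of the second point in 25964 above, using plain integers rather than the symbolic expressions LSR actually manipulates. Distributing a constant scale over a (constant + x) sum turns 4*(2+x) into 8+4*x.

#include <iostream>

// ConstantPart + Scale * x, i.e. the shape of an LSR-style strided expression.
struct Scaled { long ConstantPart; long Scale; };

// Scale * (Base + x)  ==>  Scale*Base + Scale*x
Scaled distribute(long Scale, long Base) {
  return {Scale * Base, Scale};
}

int main() {
  Scaled S = distribute(4, 2);                                  // 4 * (2 + x)
  std::cout << S.ConstantPart << " + " << S.Scale << " * x\n";  // 8 + 4 * x
}
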
Nate Begeman a1e895cf97 Remove some stuff that now works
llvm-svn: 25963
2006-02-04 07:29:35 +00:00
Chris Lattner 32ed2b45c7 add a note
llvm-svn: 25962
2006-02-04 07:07:31 +00:00
Chris Lattner 2c0956bcea Two changes:
1. Treat FMOVD as a copy instruction, to help with coalescing in V9 mode
2. When in V9 mode, insert FMOVD instead of FpMOVD instructions, as we don't
   ever rewrite FpMOVD instructions into FMOVS instructions, thus we just end
   up with commented out copies!
This should fix a bunch of failures in V9 mode on sparc.

llvm-svn: 25961
2006-02-04 06:58:46 +00:00
Evan Cheng f9adce90bf Get rid of some memory leaks identified by Valgrind
llvm-svn: 25960
2006-02-04 06:49:00 +00:00
Chris Lattner ca05e6a837 add a method
llvm-svn: 25959
2006-02-04 05:49:01 +00:00
Chris Lattner 2d2e2e3c0e Let bugpoint work on sparc with v9 instructions enabled.
llvm-svn: 25958
2006-02-04 05:02:27 +00:00
Jeff Cohen 57a004abfe Fix VC++ warning.
llvm-svn: 25957
2006-02-04 03:27:39 +00:00
Jeff Cohen 0e7aee2382 Keep Visual Studio informed.
llvm-svn: 25956
2006-02-04 03:27:04 +00:00
Chris Lattner 3b48431333 Add initial support for immediates. This allows us to compile this:
int %rlwnm(int %A, int %B) {
  %C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
  ret int %C
}

into:

_rlwnm:
        or r2, r3, r3
        or r3, r4, r4
        rlwnm r2, r2, r3, 4, 17    ;; note the immediates :)
        or r3, r2, r2
        blr

llvm-svn: 25955
2006-02-04 02:26:14 +00:00
Evan Cheng 0a977c95aa Remove an unnecessary predicate.
llvm-svn: 25954
2006-02-04 02:23:01 +00:00
Evan Cheng 11613a5219 Separate FILD and FILD_FLAG; the latter is only used for SSE2. It produces a
flag so it can be flagged to a FST.

llvm-svn: 25953
2006-02-04 02:20:30 +00:00
Chris Lattner 65ad53feb3 Initial support for non-register operands, like immediates
llvm-svn: 25952
2006-02-04 02:16:44 +00:00
Chris Lattner ee1dadbccf implementation of some methods for inlineasm
llvm-svn: 25951
2006-02-04 02:13:02 +00:00
Chris Lattner 3fb81ddda3 Add some methods for inline asm support.
llvm-svn: 25950
2006-02-04 02:12:09 +00:00
Chris Lattner c93403a7fb Handle another case exposed on X86.
llvm-svn: 25949
2006-02-03 23:50:46 +00:00
Chris Lattner 71d20c4e18 Fix a nasty problem on two-address machines in the following situation:
store EAX -> [ss#0]
[ss#0] += 1
...
use(EAX)

In this case, it is not valid to rewrite this as:


store EAX -> [ss#0]
EAX += 1
store EAX -> [ss#0]  ;;; this would also delete the store above
...
use(EAX)

... because EAX is not dead at that point.  Keep track of which registers
we are allowed to clobber, and which ones we aren't, and don't clobber the
ones we're not supposed to.  :)

This should resolve the issues on X86 last night.

llvm-svn: 25948
2006-02-03 23:28:46 +00:00
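
Editor's note: a minimal sketch of the rule 25948 above adds, with hypothetical names (ClobberableRegs, canFoldIntoReg) rather than the real spiller code: before rewriting a stack-slot operation to work directly on a physreg, check that the physreg's current value may legally be clobbered, i.e. has no remaining uses.

#include <iostream>
#include <set>
#include <string>

// Hypothetical clobber set: physregs whose current value has no later use.
std::set<std::string> ClobberableRegs;

// Only fold "[ss#0] += 1" into an in-register increment if clobbering the
// register is legal.
bool canFoldIntoReg(const std::string &Reg) {
  return ClobberableRegs.count(Reg) != 0;
}

int main() {
  // EAX was just stored to [ss#0] but is used again later, so it is *not*
  // clobberable; ECX's old value is dead, so it is.
  ClobberableRegs.insert("ECX");

  std::cout << "fold into EAX? " << canFoldIntoReg("EAX") << "\n"; // 0: keep the memory form
  std::cout << "fold into ECX? " << canFoldIntoReg("ECX") << "\n"; // 1: safe to rewrite in-register
}
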
Chris Lattner 507a3a7bd1 significantly simplify the VirtRegMap code by pulling the SpillSlotsAvailable
and PhysRegsAvailable maps out into a new AvailableSpills struct.  No
functionality change.

This paves the way for a bugfix, coming up next.

llvm-svn: 25947
2006-02-03 23:13:58 +00:00
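
Editor's note: a rough sketch of the shape of the AvailableSpills struct from 25947 above. The two map names come from the commit message; the key/value types and helper methods are guesses for illustration only, not the real VirtRegMap code.

#include <iostream>
#include <map>

struct AvailableSpills {
  // stack slot # -> physreg currently holding that slot's value
  std::map<int, unsigned> SpillSlotsAvailable;
  // physreg -> stack slot # whose value it currently holds
  std::map<unsigned, int> PhysRegsAvailable;

  void addAvailable(int Slot, unsigned PhysReg) {
    SpillSlotsAvailable[Slot] = PhysReg;
    PhysRegsAvailable[PhysReg] = Slot;
  }

  // Returns 0 if no physreg currently holds the slot's value.
  unsigned getSpillSlotPhysReg(int Slot) const {
    std::map<int, unsigned>::const_iterator It = SpillSlotsAvailable.find(Slot);
    return It == SpillSlotsAvailable.end() ? 0u : It->second;
  }
};

int main() {
  AvailableSpills Spills;
  Spills.addAvailable(/*Slot=*/0, /*PhysReg=*/3);
  std::cout << "slot 0 lives in physreg " << Spills.getSpillSlotPhysReg(0) << "\n";
}
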
Nate Begeman 20a894282d Implement some feedback from sabre
llvm-svn: 25946
2006-02-03 22:38:07 +00:00
Nate Begeman dc7bba9ffe Add a framework for eliminating instructions that produce undemanded bits.
llvm-svn: 25945
2006-02-03 22:24:05 +00:00
Chris Lattner 81e66abd1e add a note
llvm-svn: 25944
2006-02-03 22:06:45 +00:00
Chris Lattner d079dbb9b0 another case Nate came up with
llvm-svn: 25943
2006-02-03 22:05:41 +00:00
Chris Lattner 277462e20f add a note
llvm-svn: 25942
2006-02-03 21:25:23 +00:00
Chris Lattner f68fd20286 remove some #ifdef'd out code, which should properly be in the dag combiner anyway.
llvm-svn: 25941
2006-02-03 20:13:59 +00:00
Chris Lattner a1d312c6ea remove an old comment
llvm-svn: 25940
2006-02-03 18:59:39 +00:00
Chris Lattner 23d55f2547 Remove the X86PeepholeOptimizerPass, a truly horrible old hack that is now
obsolete.  yaay :)

llvm-svn: 25939
2006-02-03 18:54:24 +00:00
Chris Lattner c408558638 When rewriting frame instructions, emit the appropriate small-immediate
instruction when possible.

llvm-svn: 25938
2006-02-03 18:20:04 +00:00
Chris Lattner 2bcfc52af6 node predicates add to the complexity of a pattern. This ensures that the
X86 backend attempts to match small-immediate versions of instructions before
the full size immediate versions.

llvm-svn: 25937
2006-02-03 18:06:02 +00:00
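
Editor's note: a toy model of the ordering change in 25937 above. The Pattern struct, the scoring formula, and the pattern names are illustrative, not TableGen's actual complexity computation; the point is simply that an attached node predicate bumps a pattern's complexity, so the more specific small-immediate pattern is tried before the generic full-size one.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Pattern {
  std::string Name;
  unsigned Size;            // number of nodes in the pattern
  unsigned NumPredicates;   // node predicates attached to the pattern
  unsigned complexity() const { return Size + NumPredicates; }
};

int main() {
  std::vector<Pattern> Patterns = {
      {"add-with-32-bit-immediate", 2, 0},
      {"add-with-8-bit-immediate (predicated on the value fitting)", 2, 1},
  };
  // Try more complex (more specific) patterns first, so the small-immediate
  // form is matched before the full-size immediate form.
  std::stable_sort(Patterns.begin(), Patterns.end(),
                   [](const Pattern &A, const Pattern &B) {
                     return A.complexity() > B.complexity();
                   });
  for (const Pattern &P : Patterns)
    std::cout << P.Name << "  (complexity " << P.complexity() << ")\n";
}
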
Chris Lattner ca76917388 Teach sparc to fold loads/stores into copies.
Remove the dead getRegClassForType method.
Minor formatting changes.

llvm-svn: 25936
2006-02-03 07:06:25 +00:00
Chris Lattner 6091407783 remove dead fn
llvm-svn: 25935
2006-02-03 06:51:34 +00:00
Nate Begeman 22e251abf1 Add common code for reassociating ops in the dag combiner
llvm-svn: 25934
2006-02-03 06:46:56 +00:00
Evan Cheng d65461c232 Added a (store (op (load ...) ...) ...) folding test case.
llvm-svn: 25933
2006-02-03 06:46:41 +00:00
Chris Lattner d7d98611ca Implement isLoadFromStackSlot and isStoreToStackSlot
llvm-svn: 25932
2006-02-03 06:44:54 +00:00
Evan Cheng 717db484a5 (store (op (load ...))) folding problem. In the generated matching code,
Chain is initially set to the chain operand of the store node; when the
matcher reaches the load and the load matches, Chain is set to the chain
operand of the load.

However, if the matching code that follows this fails, isel moves on to the
next pattern, but it does not restore Chain to the chain operand of the store.
So when it tries to match the next store / op / load pattern, it fails the
Chain == load.getOperand(0) test.

The solution is for each chain operand to get a unique name, e.g. Chain10.

llvm-svn: 25931
2006-02-03 06:22:41 +00:00
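
Editor's note: a toy illustration, not the generated isel code itself, of why the shared Chain variable described in 25931 above breaks backtracking, and how giving each chain operand its own uniquely named variable (the commit's Chain10) avoids it. Here a "node" is just an int id and chainOf stands in for N.getOperand(0).

#include <iostream>

// chainOf(N) stands in for N.getOperand(0).
int chainOf(int Node) { return Node + 100; }

int main() {
  const int Store = 1, Load = 2;

  // Buggy shape: one Chain variable shared across pattern attempts.
  int Chain = chainOf(Store);   // first attempt: Chain = store's chain operand
  Chain = chainOf(Load);        // the load matched, so Chain was overwritten...
  // ...then the rest of the pattern failed, and the next attempt reuses the
  // stale Chain, so its chain-consistency test compares against the wrong node:
  std::cout << "shared Chain still holds the store's chain? "
            << (Chain == chainOf(Store)) << "\n";   // 0 -- spurious mismatch

  // Fixed shape: each chain operand gets its own uniquely named variable
  // (the commit's "Chain10"), so a failed attempt cannot poison the next one.
  int Chain10 = chainOf(Store);
  std::cout << "fresh Chain10 holds the store's chain? "
            << (Chain10 == chainOf(Store)) << "\n"; // 1 -- correct
}
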
Chris Lattner a23b04acdb remove some target-indep and implemented notes
llvm-svn: 25930
2006-02-03 06:22:11 +00:00
Chris Lattner d1aaee03ce target independent notes
llvm-svn: 25929
2006-02-03 06:21:43 +00:00
Nate Begeman fc567d85d5 Flesh out a couple of the items in the README
llvm-svn: 25928
2006-02-03 05:17:06 +00:00
Jeff Cohen 3276ff7ac6 Fix VC++ compilation error caused by using a std::map iterator variable to receive
a std::multimap iterator value.  For some reason, GCC doesn't have a problem with this.

llvm-svn: 25927
2006-02-03 03:48:54 +00:00
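
Editor's note: a small reconstruction of the portability issue 25927 above describes, under the assumption that the fix was simply to declare the variable with the matching multimap iterator type.

#include <map>

int main() {
  std::multimap<int, int> MM;
  // What the commit describes: a variable declared as a std::map iterator
  // receiving a std::multimap iterator.  VC++ rejects the type mismatch;
  // GCC presumably accepted it because both containers use the same
  // underlying tree iterator type in libstdc++.
  // std::map<int, int>::iterator Wrong = MM.begin();    // error under VC++
  std::multimap<int, int>::iterator Right = MM.begin();  // portable
  (void)Right;
}
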
Chris Lattner e18ef0d4a6 Remove more copies and dead stuff by not clobbering the result reg of a noop copy.
llvm-svn: 25926
2006-02-03 03:16:14 +00:00
Andrew Lenharth 1318240fd0 isStoreToStackSlot
llvm-svn: 25925
2006-02-03 03:07:37 +00:00
Chris Lattner 774d4a190b Simplify some code
llvm-svn: 25924
2006-02-03 03:06:49 +00:00
Chris Lattner a1eac9b978 the X86 backend no longer needs to delete its own noop copies
llvm-svn: 25923
2006-02-03 02:59:58 +00:00
Chris Lattner 1ef239afb4 Add code that checks for noop copies, which triggers when either:
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
   instead of to the reloaded reg.

This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks.  This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.

llvm-svn: 25922
2006-02-03 02:02:59 +00:00
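
Editor's note: a minimal sketch of the noop-copy check 25922 above adds. CopyInstr and isNoopCopy are made-up stand-ins for the machine-instruction types involved.

#include <iostream>
#include <string>

// Toy stand-in for a register-to-register copy instruction.
struct CopyInstr { std::string DstReg, SrcReg; };

// A copy whose source and destination are the same register does nothing
// and can simply be erased.
bool isNoopCopy(const CopyInstr &MI) { return MI.DstReg == MI.SrcReg; }

int main() {
  CopyInstr A{"r3", "r3"};   // e.g. "or r3, r3, r3" left behind by the spiller
  CopyInstr B{"r3", "r2"};
  std::cout << "A is a noop copy: " << isNoopCopy(A) << "\n";  // 1 -> delete it
  std::cout << "B is a noop copy: " << isNoopCopy(B) << "\n";  // 0 -> keep it
}
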
Chris Lattner f0a2d66d1c Add a note
llvm-svn: 25921
2006-02-03 01:49:49 +00:00
Evan Cheng 02b5b9cdd6 Added case HANDLENODE to getOperationName().
llvm-svn: 25920
2006-02-03 01:33:01 +00:00
Chris Lattner b7f24de4c8 Physregs may hold multiple stack slot values at the same time. Keep track
of this, and use it to our advantage (bwahahah).  This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86).  A common
old-new diff looks like this:

        stw r2, 3304(r1)
-       lwz r2, 3192(r1)
        stw r2, 3300(r1)
-       lwz r2, 3192(r1)
        stw r2, 3296(r1)
-       lwz r2, 3192(r1)
        stw r2, 3200(r1)
-       lwz r2, 3192(r1)
        stw r2, 3196(r1)
-       lwz r2, 3192(r1)
+       or r2, r2, r2
        stw r2, 3188(r1)

and

-       lwz r31, 604(r1)
-       lwz r13, 604(r1)
-       lwz r14, 604(r1)
-       lwz r15, 604(r1)
-       lwz r16, 604(r1)
-       lwz r30, 604(r1)
+       or r31, r30, r30
+       or r13, r30, r30
+       or r14, r30, r30
+       or r15, r30, r30
+       or r16, r30, r30
+       or r30, r30, r30

Removal of the R = R copies is coming next...

llvm-svn: 25919
2006-02-03 00:36:31 +00:00
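
Editor's note: a rough sketch of the bookkeeping 25919 above describes, with illustrative names and types only. Because one physreg may hold the values of several stack slots at once, the physreg-to-slot map becomes a multimap; a later reload from any of those slots can then be turned into a cheap reg-reg copy (the "or rX, rY, rY" in the diff above) or removed.

#include <iostream>
#include <map>

int main() {
  std::multimap<unsigned, int> PhysRegsAvailable;  // physreg -> stack slots it holds
  std::map<int, unsigned> SpillSlotsAvailable;     // stack slot -> physreg holding it

  const unsigned R2 = 2;
  // r2 is stored to several slots in a row, so r2 currently holds the value
  // of every one of them at once.
  const int Slots[] = {3304, 3300, 3296};
  for (int Slot : Slots) {
    SpillSlotsAvailable[Slot] = R2;
    PhysRegsAvailable.insert(std::make_pair(R2, Slot));
  }

  // A later reload from slot 3300 finds the value already live in r2, so the
  // load becomes a copy (or disappears entirely).
  std::map<int, unsigned>::iterator It = SpillSlotsAvailable.find(3300);
  if (It != SpillSlotsAvailable.end())
    std::cout << "slot 3300 already lives in physreg r" << It->second
              << "; the reload can be replaced\n";

  // If r2 is clobbered, every slot entry it backs must be invalidated.
  for (std::multimap<unsigned, int>::iterator I = PhysRegsAvailable.lower_bound(R2),
           E = PhysRegsAvailable.upper_bound(R2); I != E; ++I)
    SpillSlotsAvailable.erase(I->second);
  PhysRegsAvailable.erase(R2);
}
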