Commit Graph

22730 Commits

Author SHA1 Message Date
Andrew Lenharth a438ef0ee7 improved zap discovery
llvm-svn: 26148
2006-02-13 18:52:29 +00:00
Andrew Lenharth a3bc7b8a85 test for improved zap discovery
llvm-svn: 26147
2006-02-13 18:52:01 +00:00
Jim Laskey 29483560d0 Sync up the tag numbers with gcc4.
llvm-svn: 26146
2006-02-13 16:56:43 +00:00
Jim Laskey 390c63e9d9 Rename to better reflect usage (current and planned).
llvm-svn: 26145
2006-02-13 12:50:39 +00:00
Chris Lattner 462505fc5f Completely rewrite libcall insertion by the legalizer, providing the
following handy-dandy properties:

1. it is always correct now
2. it is much faster than before
3. it is easier to understand

This implementation builds off of the recent simplifications of the
legalizer that made it single-pass instead of iterative.

This fixes JM/lencod, JM/ldecod, and
CodeGen/Generic/2006-02-12-InsertLibcall.ll (at least on PPC).

llvm-svn: 26144
2006-02-13 09:18:02 +00:00
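
The commit message above describes the design, not the code. As a rough standalone sketch of the single-pass idea (hypothetical Node type and makeLibcall helper, not LLVM's actual SelectionDAG API), lowering an unsupported operation to a libcall in one forward walk might look like this:

```cpp
#include <string>
#include <vector>

// Hypothetical node type used only for illustration; not LLVM's API.
struct Node {
  std::string op;                  // operation name, e.g. "fdiv" or "call"
  std::vector<std::string> args;   // operand names
};

// Rewrite an unsupported operation into a call to a runtime library routine.
static Node makeLibcall(const Node &n, const std::string &callee) {
  Node call;
  call.op = "call " + callee;
  call.args = n.args;              // original operands become call arguments
  return call;
}

// Single-pass legalization: every node is visited exactly once, and anything
// the target cannot handle is rewritten to a libcall on the spot.  Because no
// node is ever revisited, the pass cannot loop, which is what makes it both
// faster and easier to reason about than an iterative legalizer.
void legalize(std::vector<Node> &nodes) {
  for (Node &n : nodes)
    if (n.op == "fdiv")            // pretend the target has no FP divide
      n = makeLibcall(n, "__divsf3");
}
```
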
Chris Lattner 62c3484e43 Switch targets over to using SelectionDAG::getCALLSEQ_START to create
CALLSEQ_START nodes.

llvm-svn: 26143
2006-02-13 09:00:43 +00:00
Chris Lattner 3a0ad47b39 Switch to using getCALLSEQ_START instead of using our own creation calls
llvm-svn: 26142
2006-02-13 08:55:29 +00:00
Chris Lattner 5a30d8367a Add a method
llvm-svn: 26141
2006-02-13 08:54:46 +00:00
Chris Lattner d062f09026 this passes now, due to Nate's recent efforts
llvm-svn: 26140
2006-02-13 07:26:36 +00:00
Chris Lattner 4c388281c1 Reduce this testcase a bit more, with the help of llvm-extract and some hand tweaks
llvm-svn: 26139
2006-02-13 07:02:50 +00:00
Chris Lattner 68e7475777 Be careful not to request or look at bits shifted in from outside the size
of the input.  This fixes the mediabench/gsm/toast failure last night.

llvm-svn: 26138
2006-02-13 06:09:08 +00:00
Evan Cheng 0684e9ae6d Added a test case for a libcall insertion bug.
llvm-svn: 26137
2006-02-12 10:24:00 +00:00
Nate Begeman bc3ec1d37b Add missing patterns for andi. and andis., fixing test/Regression/CodeGen/
PowerPC/and-imm.ll

llvm-svn: 26136
2006-02-12 09:09:52 +00:00
Chris Lattner f5b4ef7f58 remove some more dead special case code
llvm-svn: 26135
2006-02-12 08:07:37 +00:00
Chris Lattner 5b2edb1fca Eliminate special case hacks that are superseded by general purpose hacks
llvm-svn: 26134
2006-02-12 08:02:11 +00:00
Chris Lattner e6ebe1af61 tweaks
llvm-svn: 26133
2006-02-12 08:01:35 +00:00
Chris Lattner ee0f280743 Three changes:
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
   Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
   between the shift and the and.  More aggressive bitfolding for other reasons
   was turning signed shr's into unsigned shr's, leaving the noop cast in
   the way.

llvm-svn: 26131
2006-02-12 02:07:56 +00:00
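
Item 3 is easiest to see as an arithmetic identity: for an unsigned (logical) shift, "(X >> c1) & C2 == 0" is the same test as masking X with the shifted constant, as long as shifting C2 left by c1 drops no set bits. A small self-contained spot check of that equivalence (illustrative only, not the instcombine code; the noop-cast handling is an IR pattern-matching detail and is not shown):

```cpp
#include <cassert>
#include <cstdint>

bool viaShift(uint32_t x, unsigned c1, uint32_t c2) { return ((x >> c1) & c2) == 0; }
bool viaMask (uint32_t x, unsigned c1, uint32_t c2) { return (x & (c2 << c1)) == 0; }

int main() {
  const unsigned c1 = 4;
  const uint32_t c2 = 0x0F;                 // (c2 << c1) == 0xF0, no bits lost
  for (uint64_t x = 0; x <= 0xFFFF; ++x)    // exhaustive over a small range
    assert(viaShift(static_cast<uint32_t>(x), c1, c2) ==
           viaMask (static_cast<uint32_t>(x), c1, c2));
  return 0;
}
```
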
Chris Lattner 2234a136b3 new testcase
llvm-svn: 26130
2006-02-12 02:06:31 +00:00
Chris Lattner 5e488c9cc2 move a failing testcase from bit-tracking.ll to narrow.ll, and move the
xfail marker with it

llvm-svn: 26129
2006-02-12 02:02:43 +00:00
Chris Lattner 02f53ad3a2 Revert my last patch. It too breaks stuff
llvm-svn: 26128
2006-02-12 01:59:10 +00:00
Chris Lattner 8e99b0baca Make these tests fail if opt crashes.
llvm-svn: 26127
2006-02-12 01:32:58 +00:00
Chris Lattner 35248e06bc Fix for my previously reverted patch
llvm-svn: 26126
2006-02-11 21:24:54 +00:00
Chris Lattner f3ed03481a Update comments to be actually accurate
llvm-svn: 26124
2006-02-11 09:37:07 +00:00
Chris Lattner 00601c0c7e This is implemented by the simplify-libcalls pass, not instcombine
llvm-svn: 26123
2006-02-11 09:33:28 +00:00
Chris Lattner 0157e7f55b Port the recent innovations in ComputeMaskedBits to SimplifyDemandedBits.
This allows us to simplify on conditions where bits are not known, but they
are not demanded either!  This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.

In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.

llvm-svn: 26122
2006-02-11 09:31:47 +00:00
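
One concrete instance of the kind of simplification a demanded-bits analysis enables (illustrative C++, not the SimplifyDemandedBits code itself): nothing is known about the low byte of x below, but the final mask never demands it, so the inner "| 0x00FF" can be deleted outright.

```cpp
#include <cassert>
#include <cstdint>

// The '& 0xFF00' demands only bits 8..15 of its operand, so the '| 0x00FF',
// which touches only undemanded bits, contributes nothing to the result.
uint32_t before(uint32_t x) { return (x | 0x00FFu) & 0xFF00u; }
uint32_t after (uint32_t x) { return x & 0xFF00u; }

int main() {
  for (uint64_t x = 0; x <= 0x1FFFF; ++x)   // spot check
    assert(before(static_cast<uint32_t>(x)) == after(static_cast<uint32_t>(x)));
  return 0;
}
```
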
Chris Lattner b24ce3a2a8 revert my previous change, it exposed other problems.
llvm-svn: 26121
2006-02-11 08:47:47 +00:00
Duraid Madina 4698e4f5fe fix storing booleans (grawp missed this one)
llvm-svn: 26120
2006-02-11 07:33:17 +00:00
Duraid Madina 0010a92375 now short immediates will get matched (previously constants were all
triggering movl 64bit imm fat instructions)

llvm-svn: 26119
2006-02-11 07:32:15 +00:00
Chris Lattner 05bf90dddf Make this check stricter. Disallow loop exit blocks from being shared by
loops and their subloops.

llvm-svn: 26118
2006-02-11 02:13:17 +00:00
Evan Cheng a86ba85dc5 Prevent certain nodes that have already been selected from being folded into
the X86 addressing mode. Currently we do not allow any node whose target node
produces a chain, nor any node that is at the root of the addressing-mode
expression tree.

llvm-svn: 26117
2006-02-11 02:05:36 +00:00
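
A minimal sketch of the stated policy (hypothetical types; not the real X86 isel code): an already-selected node is only a candidate for folding into the addressing mode if its selected form produces no chain and it is not the root of the expression being matched.

```cpp
// Hypothetical stand-in for a selected DAG node, for illustration only.
struct SelectedNode {
  bool alreadySelected;  // has instruction selection already run on this node?
  bool producesChain;    // e.g. loads/stores carry a chain result
};

bool canFoldIntoAddressingMode(const SelectedNode &n, bool isRootOfAddrExpr) {
  if (!n.alreadySelected)
    return true;                                 // unselected nodes are unaffected
  return !n.producesChain && !isRootOfAddrExpr;  // the two new restrictions
}
```
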
Chris Lattner a6ae101afa remove dead expr
llvm-svn: 26116
2006-02-11 01:43:37 +00:00
Jim Laskey 5995d0160c Reorg for integration with gcc4. Old-style debug info will not be passed through
to the SelectionDAG.

llvm-svn: 26115
2006-02-11 01:01:30 +00:00
Chris Lattner fbadd7e1ee implement unswitching of loops with switch stmts and selects in them
llvm-svn: 26114
2006-02-11 00:43:37 +00:00
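
A source-level picture of what unswitching a loop-invariant switch does (the pass works on LLVM IR; this C++ before/after and its names are only illustrative):

```cpp
#include <cstddef>

// Before: a switch on the loop-invariant value 'mode' runs on every iteration.
void beforeUnswitch(int *a, const int *b, std::size_t n, int mode) {
  for (std::size_t i = 0; i < n; ++i) {
    switch (mode) {
    case 0:  a[i] = b[i]; break;
    default: a[i] = 0;    break;
    }
  }
}

// After: the loop is cloned per case, and in each clone the switch folds away
// because 'mode' is a known constant there.
void afterUnswitch(int *a, const int *b, std::size_t n, int mode) {
  if (mode == 0)
    for (std::size_t i = 0; i < n; ++i) a[i] = b[i];
  else
    for (std::size_t i = 0; i < n; ++i) a[i] = 0;
}
```
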
Chris Lattner f1b151684d Update PHI nodes in successors of exit blocks.
llvm-svn: 26113
2006-02-10 23:26:14 +00:00
Chris Lattner fe4151efe7 Reform the unswitching code in terms of edge splitting, not block splitting.
llvm-svn: 26112
2006-02-10 23:16:39 +00:00
Evan Cheng 2b6f78b664 Nicer code. :-)
llvm-svn: 26111
2006-02-10 22:46:26 +00:00
Evan Cheng d49cc3634e Added X86 isel debugging stuff.
llvm-svn: 26110
2006-02-10 22:24:32 +00:00
Chris Lattner 975486db5e Remove a level of indirection.
llvm-svn: 26109
2006-02-10 21:32:11 +00:00
Chris Lattner ec6b40a093 Fix a case where UnswitchTrivialCondition broke critical edges with
phi's in the successors

llvm-svn: 26108
2006-02-10 19:08:15 +00:00
Chris Lattner fcb8a3aa76 Use the auto-generated call matcher. Remove a broken impl of the frameaddr/returnaddr
intrinsics.

Autogen frameindex matcher

llvm-svn: 26107
2006-02-10 07:35:42 +00:00
Chris Lattner 0c4dea4cb2 Update to new-style flags usage, simplifying the .td file
llvm-svn: 26106
2006-02-10 06:58:25 +00:00
Evan Cheng 907be3e24c Remove a completed entry; add a new entry about fisttp op
llvm-svn: 26105
2006-02-10 05:48:15 +00:00
Chris Lattner 6e263155a6 add some notes, move some code around. Implement unswitching of loops
with branches on partially invariant computations.

llvm-svn: 26104
2006-02-10 02:30:37 +00:00
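
For the partially invariant case, the branch condition mixes an invariant term with a varying one, and the loop is unswitched on the invariant term alone. A C++ illustration (names invented for the example):

```cpp
#include <cstddef>

// Before: 'enabled' is loop-invariant, 'a[i] > limit' is not.
void beforePartial(int *a, std::size_t n, bool enabled, int limit) {
  for (std::size_t i = 0; i < n; ++i)
    if (enabled && a[i] > limit)
      a[i] = limit;
}

// After unswitching on the invariant term: when 'enabled' is false the body
// can never run, so that copy of the loop vanishes entirely.
void afterPartial(int *a, std::size_t n, bool enabled, int limit) {
  if (enabled)
    for (std::size_t i = 0; i < n; ++i)
      if (a[i] > limit)
        a[i] = limit;
}
```
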
Chris Lattner 4935417a84 Move code around to be more logical, no functionality change.
llvm-svn: 26103
2006-02-10 02:01:22 +00:00
Chris Lattner 3fc3148b85 When unswitching a trivial loop, do admit we are doing it! :)
llvm-svn: 26102
2006-02-10 01:36:35 +00:00
Chris Lattner ed7a67b0de Implement unconditional unswitching of 'trivial' loops, those loops that contain
branches in their entry block that control whether or not the loop is a noop.

llvm-svn: 26101
2006-02-10 01:24:09 +00:00
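
The "trivial" case needs no cloning at all, which is why it can be done unconditionally: the invariant branch in the loop's entry block either lets the body run or makes the whole loop a noop, so the test simply moves in front of the loop. An illustrative C++ before/after:

```cpp
#include <cstddef>

// Before: the header tests an invariant condition whose false side leaves the
// loop immediately, so the loop does nothing unless 'run' is true.
void beforeTrivial(int *a, const int *b, std::size_t n, bool run) {
  for (std::size_t i = 0; i < n; ++i) {
    if (!run)
      break;                 // invariant exit in the entry/header block
    a[i] += b[i];
  }
}

// After trivial unswitching: the invariant test is hoisted; no body is cloned.
void afterTrivial(int *a, const int *b, std::size_t n, bool run) {
  if (run)
    for (std::size_t i = 0; i < n; ++i)
      a[i] += b[i];
}
```
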
Chris Lattner 4f0e66df6a Simplify control flow a bit, note that unswitch preserves canonical loop form
llvm-svn: 26098
2006-02-09 22:15:42 +00:00
Evan Cheng 101e4b916a Match tblgen change.
llvm-svn: 26096
2006-02-09 22:12:53 +00:00
Evan Cheng 5940bc210e Call InsertISelMapEntry rather than the map insertion operator to prevent overly
aggressive inlining. This reduces the Select_store frame size from 24k to 10k.

llvm-svn: 26095
2006-02-09 22:12:27 +00:00
Evan Cheng a1ef3ec5b5 Added SelectionDAG::InsertISelMapEntry(). This is used to work around a gcc
problem where it inlines the map insertion call too aggressively. Before this
change it was producing a frame size of 24k for Select_store(); now it's down
to 10k (by calling this method rather than the map insertion operator).

llvm-svn: 26094
2006-02-09 22:11:03 +00:00
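
These two commits describe a general trick: route the map insertion through an out-of-line helper so gcc cannot expand the insertion machinery into every call site of the huge generated selection routines. A standalone sketch of the idea (hypothetical key/value types, GCC/Clang-specific noinline attribute; not the actual SelectionDAG code):

```cpp
#include <map>
#include <utility>

// Hypothetical stand-ins for the isel map's real key/value types.
using Key   = const void *;
using Value = const void *;

// Out-of-line, never-inlined helper: the temporaries and code for the map
// insertion live in this function's frame, not in the caller's, which is how
// a multi-kilobyte frame in a generated Select_* routine is avoided.
__attribute__((noinline))
void insertMapEntry(std::map<Key, Value> &m, Key k, Value v) {
  m.insert(std::make_pair(k, v));
}
```
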