Commit Graph

12821 Commits

Author SHA1 Message Date
Chris Lattner 47f7319f00 Simplify code, alignment must be specified now.
llvm-svn: 26074
2006-02-09 02:26:04 +00:00
Chris Lattner 4576bb74d5 Make MachineConstantPool entries alignments explicit
llvm-svn: 26071
2006-02-09 02:23:13 +00:00
Chris Lattner 832d78d981 Always pass in an alignment.
llvm-svn: 26070
2006-02-09 02:19:16 +00:00
Chris Lattner d94a3d2c8a provide an explicit alignment for cp entries
llvm-svn: 26069
2006-02-09 02:15:30 +00:00
Evan Cheng 6dc90ca172 Change Select() from
SDOperand Select(SDOperand N);
to
void Select(SDOperand &Result, SDOperand N);

llvm-svn: 26067
2006-02-09 00:37:58 +00:00
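
A small standalone sketch of how call sites change under the new interface described in r26067 above (the SDOperand here is a stand-in type, not the real SelectionDAG class):

struct SDOperand { int id = 0; };                 // stand-in only

// Old interface: the selected value is returned directly.
SDOperand SelectOld(SDOperand N) { return N; }

// New interface: the result comes back through a reference parameter.
void SelectNew(SDOperand &Result, SDOperand N) { Result = N; }

int main() {
  SDOperand N;
  SDOperand A = SelectOld(N);   // old-style call site
  SDOperand B;
  SelectNew(B, N);              // new-style call site
  return A.id + B.id;
}
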
Chris Lattner 2e07d6370a Darwin doesn't support #APP/#NO_APP
llvm-svn: 26066
2006-02-08 23:42:22 +00:00
Chris Lattner ed87dcd45f Add support for assembler directives that wrap inline asm
llvm-svn: 26065
2006-02-08 23:41:56 +00:00
Chris Lattner 26e385a623 Rename BSel -> PPCBSel for the benefit of doxygen users.
Move the methods out of line.
Remove unused Debug.h stuff.
Teach getNumBytesForInstruction to know the size of an inline asm.

llvm-svn: 26064
2006-02-08 19:33:26 +00:00
Chris Lattner b4fc050f0f add a simple optimization
llvm-svn: 26062
2006-02-08 17:47:22 +00:00
Chris Lattner ab2dc4d70d Simplify some code, reducing calls to MaskedValueIsZero. Implement a minor
optimization where we reduce the number of bits in AND masks when possible.

llvm-svn: 26056
2006-02-08 07:34:50 +00:00
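
A standalone C++ illustration (not the InstCombine code) of the AND-mask reduction described in r26056 above: once we know which bits of a value can possibly be nonzero, the AND mask can be intersected with that set without changing the result.

#include <cassert>
#include <cstdint>

// Bits of the mask that fall outside the value's possibly-nonzero bits are
// dead, so they can be dropped from the mask.
uint64_t narrowAndMask(uint64_t Mask, uint64_t PossiblyNonZeroBits) {
  return Mask & PossiblyNonZeroBits;
}

int main() {
  uint64_t X = 0x2B;                 // suppose analysis proved X fits in 6 bits
  uint64_t PossiblyNonZero = 0x3F;
  uint64_t Mask = 0xFF;              // original 8-bit mask
  uint64_t Narrowed = narrowAndMask(Mask, PossiblyNonZero);   // 0x3F
  assert((X & Mask) == (X & Narrowed));
  return 0;
}
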
Chris Lattner b7e074ab9b more email -> README moving
llvm-svn: 26054
2006-02-08 07:12:07 +00:00
Chris Lattner f7b962d7d7 Emit the 'mr' pseudoop for easier reading.
llvm-svn: 26053
2006-02-08 06:56:40 +00:00
Chris Lattner 45bb34b715 Add some random notes, not high-prio
llvm-svn: 26052
2006-02-08 06:52:06 +00:00
Chris Lattner b97142eec0 Move emails from nate into public places
llvm-svn: 26051
2006-02-08 06:43:51 +00:00
Chris Lattner 5997cf9381 Use EraseInstFromFunction in a few cases to put the uses of the removed
instruction onto the worklist (in case they are now dead).

Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:

struct S {
    unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
    S();
};

S::S() : a(0), b(0), c(1), d(0), e(6) {}

to this:

void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}

much earlier (in gccas instead of only in gccld after DSE runs).

llvm-svn: 26050
2006-02-08 03:25:32 +00:00
Chris Lattner 06a0ed1ee0 Implement some more interesting select sccp cases. This implements:
test/Regression/Transforms/SCCP/select.ll

llvm-svn: 26049
2006-02-08 02:38:11 +00:00
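
As a rough standalone illustration of the select cases handled in r26049 above (hypothetical helper, not the SCCP pass itself): a known-constant condition picks one arm, and identical constant arms make the condition irrelevant.

#include <cassert>
#include <optional>

std::optional<int> foldSelect(std::optional<bool> Cond,
                              std::optional<int> TVal,
                              std::optional<int> FVal) {
  if (Cond)                                    // condition is a known constant
    return *Cond ? TVal : FVal;
  if (TVal && FVal && *TVal == *FVal)          // both arms are the same constant
    return TVal;
  return std::nullopt;                         // otherwise: not yet a constant
}

int main() {
  assert(foldSelect(true, 7, 9) == 7);
  assert(foldSelect(std::nullopt, 4, 4) == 4);
  assert(!foldSelect(std::nullopt, 4, 5).has_value());
  return 0;
}
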
Chris Lattner a10e23c19f Compile this:
xori r6, r2, 1
        rlwinm r6, r6, 0, 31, 31
        cmpwi cr0, r6, 0
        bne cr0, LBB1_3 ; endif

to this:

        rlwinm r6, r2, 0, 31, 31
        cmpwi cr0, r6, 0
        beq cr0, LBB1_3 ; endif

llvm-svn: 26047
2006-02-08 02:13:15 +00:00
Chris Lattner ddba3289b5 Fix a problem in my patch from yesterday that caused a miscompilation of 176.gcc

llvm-svn: 26045
2006-02-08 01:20:23 +00:00
Evan Cheng adeb8fb5a2 Fixed a local common symbol bug.
llvm-svn: 26044
2006-02-07 23:32:58 +00:00
Evan Cheng ec212fb66d For ELF, .comm takes the alignment value as the optional 3rd argument. It must be
specified in bytes.

llvm-svn: 26043
2006-02-07 21:54:08 +00:00
Chris Lattner 203b2f1288 Implement getConstraintType for PPC.
llvm-svn: 26042
2006-02-07 20:16:30 +00:00
Chris Lattner 44314827d6 Fix Transforms/InstCombine/2006-02-07-SextZextCrash.ll
llvm-svn: 26040
2006-02-07 19:07:40 +00:00
Evan Cheng 5a76680de1 Darwin ABI issues: weak, linkonce, etc. Dynamic-no-pic support is complete.
Also fixed a function stub bug. Added weak and linkonce support for
x86 Linux.

llvm-svn: 26038
2006-02-07 08:38:37 +00:00
Evan Cheng 227e469c25 Remind myself to add PIC and static asm printer support.
llvm-svn: 26037
2006-02-07 08:35:44 +00:00
Chris Lattner 92a6865321 Generalize MaskedValueIsZero into a ComputeMaskedNonZeroBits function, which
is just as efficient as MVIZ and is also more general.

Fix a few minor bugs introduced in recent patches

llvm-svn: 26036
2006-02-07 08:05:22 +00:00
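
The relationship between the two queries in r26036 above, sketched in standalone C++ (names are illustrative, not LLVM's API): once you can compute the set of bits that may be nonzero, MaskedValueIsZero is just a containment check against the mask.

#include <cassert>
#include <cstdint>

// MaskedValueIsZero(V, Mask) holds exactly when none of V's possibly-nonzero
// bits overlap the mask.
bool maskedValueIsZero(uint64_t MayBeNonZeroBits, uint64_t Mask) {
  return (MayBeNonZeroBits & Mask) == 0;
}

int main() {
  uint64_t MayBeNonZero = 0xFF;   // analysis: only the low 8 bits can be set
  assert(maskedValueIsZero(MayBeNonZero, 0xFFFF0000ull));  // high bits are zero
  assert(!maskedValueIsZero(MayBeNonZero, 0x0F));          // low bits unknown
  return 0;
}
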
Chris Lattner c3ebf40031 Make MaskedValueIsZero take a uint64_t instead of a ConstantIntegral as a
mask.  This allows the code to be simpler and more efficient.

Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.

llvm-svn: 26035
2006-02-07 07:27:52 +00:00
Chris Lattner 77defbae0a Use Type::getIntegralTypeMask() to simplify some code
llvm-svn: 26034
2006-02-07 07:00:41 +00:00
Chris Lattner 2590e511d8 Implement the beginnings of a facility for simplifying expressions based on
'demanded bits', inspired by Nate's work in the dag combiner.  This isn't
complete, but needs two unrelated instcombiner changes to continue.

llvm-svn: 26033
2006-02-07 06:56:34 +00:00
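
A toy standalone example of the 'demanded bits' idea from r26033 above (not the instcombine code): when a user only looks at certain bits of a value, operations that only affect the other bits can be dropped.

#include <cassert>
#include <cstdint>

// If the OR mask never touches a demanded bit, the OR is invisible to the
// user and can be skipped entirely.
uint32_t simplifyOr(uint32_t X, uint32_t OrMask, uint32_t DemandedBits) {
  if ((OrMask & DemandedBits) == 0)
    return X;
  return X | OrMask;
}

int main() {
  uint32_t X = 0x1234;
  uint32_t Demanded = 0xFF;                       // user computes (value & 0xFF)
  uint32_t V = simplifyOr(X, 0xFF00, Demanded);   // the OR is dead here
  assert((V & Demanded) == ((X | 0xFF00) & Demanded));
  return 0;
}
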
Jeff Cohen 2439669c6f The interpreter assumes that the caller of runFunction() must be lli, and
therefore the function being called must be a main() returning an int.  The
consequences when these assumptions are false are not good, so don't assume
them.

llvm-svn: 26031
2006-02-07 05:29:44 +00:00
Jeff Cohen 69e849014c Teach the interpreter to handle global variables that are added to a module after
interpretation has begun.  The JIT already handles this situation correctly, and
the interpreter can already handle new functions being added.

llvm-svn: 26030
2006-02-07 05:11:57 +00:00
Chris Lattner 15a6c4c444 Add the simple PPC integer constraints
llvm-svn: 26027
2006-02-07 00:47:13 +00:00
Chris Lattner d62a3bfa66 Eliminate the printCallOperand method, using a 'call' modifier on
printOperand instead.

llvm-svn: 26025
2006-02-06 23:41:19 +00:00
Chris Lattner 2bf2c8d7e7 Change prototype
llvm-svn: 26022
2006-02-06 22:18:19 +00:00
Chris Lattner 34f74c180a Add support for modifier characters to operand printers
llvm-svn: 26021
2006-02-06 22:17:23 +00:00
Jim Laskey 0458fb76fd Goodbye nasty macro.
llvm-svn: 26019
2006-02-06 21:54:05 +00:00
Jim Laskey b643ff5546 Edit requests from Sabre.
llvm-svn: 26018
2006-02-06 19:12:02 +00:00
Andrew Lenharth f5b7f16259 see what this alignment thing will do
llvm-svn: 26017
2006-02-06 17:15:17 +00:00
Jim Laskey 85263234a8 Changing the model for the construction of debug information.
llvm-svn: 26016
2006-02-06 15:33:21 +00:00
Jim Laskey 58d48c8118 We seem to have settled on __DWARF for the section name.
llvm-svn: 26015
2006-02-06 14:16:15 +00:00
Evan Cheng d5f2ba0d6f - Update load folding checks to match those auto-generated by tblgen.
- Manually select SDOperand's returned by TryFoldLoad which make up the
  load address.

llvm-svn: 26012
2006-02-06 06:02:33 +00:00
Evan Cheng bfa4b7cc75 Complex pattern isel code shouldn't select nodes.
llvm-svn: 26010
2006-02-05 08:45:01 +00:00
Chris Lattner 463fa70eaa Fix the Sparc backend with Evan's recent tblgen changes
llvm-svn: 26009
2006-02-05 08:35:50 +00:00
Chris Lattner 8467e5d6af This xform isn't safe
llvm-svn: 26007
2006-02-05 08:26:16 +00:00
Nate Begeman 8c9cd461df Back out previous commit, it isn't safe.
llvm-svn: 26006
2006-02-05 08:23:00 +00:00
Nate Begeman 3dc8b89493 fold c1 << (x + c2) into (c1 << c2) << x. fix a warning.
llvm-svn: 26005
2006-02-05 08:07:24 +00:00
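
The shift identity in r26005 above can be spot-checked directly; a quick standalone check (valid while the total shift stays below the bit width):

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t C1 = 5, C2 = 3;
  for (uint32_t X = 0; X < 8; ++X)
    assert((C1 << (X + C2)) == ((C1 << C2) << X));   // c1 << (x+c2) == (c1<<c2) << x
  return 0;
}
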
Chris Lattner 4b8fcc229f some stuff is done
llvm-svn: 26004
2006-02-05 07:54:37 +00:00
Chris Lattner 2e90b732fa Turn A % (C << N), where C is 2^k, into A & ((C << N)-1) [urem only].
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].

Tested with: rem.ll:test5, div.ll:test10

llvm-svn: 26003
2006-02-05 07:54:04 +00:00
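
Both rewrites in r26003 above can be sanity-checked with a small standalone program (unsigned values only, total shift below the bit width):

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t N = 2, C2 = 3;
  const uint32_t C = 1u << C2;                       // C = 8, a power of two
  for (uint32_t A = 0; A < 1000; A += 37) {
    assert(A % (C << N) == (A & ((C << N) - 1)));    // urem case
    assert(A / (C << N) == A >> (N + C2));           // udiv case
  }
  return 0;
}
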
Nate Begeman c89fdf1eb3 Handle urem by shifted powers of 2.
llvm-svn: 26001
2006-02-05 07:36:48 +00:00
Nate Begeman 25d178bece handle combining A / (B << N) into A >>u (log2(B)+N) when B is a power of 2
llvm-svn: 26000
2006-02-05 07:20:23 +00:00
Evan Cheng a28b764886 Use SelectRoot() as the entry to any tblgen based isel.
llvm-svn: 25998
2006-02-05 06:51:51 +00:00
Evan Cheng 54cb1833a4 Use SelectRoot() as entry of any tblgen based isel.
llvm-svn: 25997
2006-02-05 06:46:41 +00:00
Chris Lattner 25777c8c25 Remove the SparcV8 backend. It has been renamed to be the Sparc backend.
llvm-svn: 25992
2006-02-05 06:33:29 +00:00
Chris Lattner a3e5b2c61c remove V8 reference
llvm-svn: 25991
2006-02-05 06:32:59 +00:00
Evan Cheng d37645c07d * Added SDNode::isOnlyUse().
* Fix hasNUsesOfValue(), it should be const.

llvm-svn: 25990
2006-02-05 06:29:23 +00:00
Chris Lattner 158e1f519c Rename SPARC V8 target to be the LLVM SPARC target.
llvm-svn: 25985
2006-02-05 05:50:24 +00:00
Chris Lattner c0e48c6c58 add a note
llvm-svn: 25984
2006-02-05 05:27:35 +00:00
Evan Cheng d19d51f414 Re-commit the last bit of change that was backed out.
llvm-svn: 25983
2006-02-05 05:25:07 +00:00
Chris Lattner cbab28414e make sure that global doubles are aligned to 8 bytes
llvm-svn: 25981
2006-02-05 01:46:49 +00:00
Chris Lattner c070cb685d Use getPreferredAlignmentLog.
llvm-svn: 25980
2006-02-05 01:45:04 +00:00
Chris Lattner 1b1a8731c0 Use the asmprinter to find out what the preferred alignment of a global is.
This patch speeds up 172.mgrid from 31.81s to 11.39s on darwin/ppc.
Many many thanks to Nate for tracking down the root cause of the issue.

llvm-svn: 25979
2006-02-05 01:30:45 +00:00
Chris Lattner a9b2525d3e Implement the AsmPrinter::getPreferredAlignmentLog method.
llvm-svn: 25978
2006-02-05 01:29:18 +00:00
Andrew Lenharth 1fcff15f86 linkage fix for weak functions
llvm-svn: 25976
2006-02-04 19:13:09 +00:00
Jeff Cohen 95ae171d5b Fix VC++ warning.
llvm-svn: 25975
2006-02-04 16:20:31 +00:00
Chris Lattner d30c4991a1 Use SCEVExpander::InsertCastOfTo instead of our own code. This reduces
#LLVM LOC, and auto-cse's cast instructions.

llvm-svn: 25974
2006-02-04 09:52:43 +00:00
Chris Lattner a6da69cab0 Pull the InsertCastOfTo out of the header, implement CSE'ing of arguments.
llvm-svn: 25973
2006-02-04 09:51:53 +00:00
Chris Lattner 22b4edfb42 Temporarily revert this patch, which probably breaks with the
tblgen patch reverted.

llvm-svn: 25971
2006-02-04 09:24:16 +00:00
Chris Lattner b6a1865bca Value# select instructions, allowing -gcse to remove duplicates
llvm-svn: 25969
2006-02-04 09:15:29 +00:00
Evan Cheng ce87cac555 Complex pattern's custom matcher should not call Select() on any operands.
Select them afterwards if it returns true.

llvm-svn: 25968
2006-02-04 08:50:49 +00:00
Chris Lattner ab146eae38 Custom lower VAARG for the case when we are doing vaarg(double). In this
case, the double being loaded may not be 8-byte aligned, so we have to use
our standard bit_convert game.

llvm-svn: 25967
2006-02-04 08:31:30 +00:00
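
In C++ terms, the "bit_convert game" from r25967 above amounts to something like the following standalone helper (an illustration, not the lowering code): fetch the bytes without assuming 8-byte alignment, then reinterpret them as a double.

#include <cstdint>
#include <cstring>

double loadPossiblyUnalignedDouble(const void *P) {
  uint64_t Bits;
  std::memcpy(&Bits, P, sizeof(Bits));   // alignment-safe byte copy
  double D;
  std::memcpy(&D, &Bits, sizeof(D));     // reinterpret the integer bits as a double
  return D;
}

int main() {
  unsigned char Buf[16] = {};
  double X = 3.5;
  std::memcpy(Buf + 1, &X, sizeof(X));                  // deliberately misaligned
  return loadPossiblyUnalignedDouble(Buf + 1) == X ? 0 : 1;
}
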
Chris Lattner a1fa8b1c88 Fix a nasty typo that broke functions with big stack frames.
llvm-svn: 25966
2006-02-04 08:04:21 +00:00
Chris Lattner d096b2f3e0 fix a bug in my last checkin
llvm-svn: 25965
2006-02-04 07:48:46 +00:00
Chris Lattner 2959f0003e Fix two significant bugs in LSR:
1. When rewriting code in outer loops, sometimes we would insert code into
   inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.

This is a performance neutral change.

llvm-svn: 25964
2006-02-04 07:36:50 +00:00
Nate Begeman a1e895cf97 Remove some stuff that now works
llvm-svn: 25963
2006-02-04 07:29:35 +00:00
Chris Lattner 32ed2b45c7 add a note
llvm-svn: 25962
2006-02-04 07:07:31 +00:00
Chris Lattner 2c0956bcea Two changes:
1. Treat FMOVD as a copy instruction, to help with coalescing in V9 mode
2. When in V9 mode, insert FMOVD instead of FpMOVD instructions, as we don't
   ever rewrite FpMOVD instructions into FMOVS instructions, thus we just end
   up with commented out copies!
This should fix a bunch of failures in V9 mode on sparc.

llvm-svn: 25961
2006-02-04 06:58:46 +00:00
Evan Cheng f9adce90bf Get rid of some memory leaks identified by Valgrind
llvm-svn: 25960
2006-02-04 06:49:00 +00:00
Chris Lattner 2d2e2e3c0e Let bugpoint work on sparc with v9 instructions enabled.
llvm-svn: 25958
2006-02-04 05:02:27 +00:00
Jeff Cohen 57a004abfe Fix VC++ warning.
llvm-svn: 25957
2006-02-04 03:27:39 +00:00
Chris Lattner 3b48431333 Add initial support for immediates. This allows us to compile this:
int %rlwnm(int %A, int %B) {
  %C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
  ret int %C
}

into:

_rlwnm:
        or r2, r3, r3
        or r3, r4, r4
        rlwnm r2, r2, r3, 4, 17    ;; note the immediates :)
        or r3, r2, r2
        blr

llvm-svn: 25955
2006-02-04 02:26:14 +00:00
Evan Cheng 0a977c95aa Remove an unnecessary predicate.
llvm-svn: 25954
2006-02-04 02:23:01 +00:00
Evan Cheng 11613a5219 Separate FILD and FILD_FLAG, the latter is only used for SSE2. It produces a
flag so it can be flagged to a FST.

llvm-svn: 25953
2006-02-04 02:20:30 +00:00
Chris Lattner 65ad53feb3 Initial early support for non-register operands, like immediates
llvm-svn: 25952
2006-02-04 02:16:44 +00:00
Chris Lattner ee1dadbccf implementation of some methods for inlineasm
llvm-svn: 25951
2006-02-04 02:13:02 +00:00
Chris Lattner c93403a7fb Handle another case exposed on X86.
llvm-svn: 25949
2006-02-03 23:50:46 +00:00
Chris Lattner 71d20c4e18 Fix a nasty problem on two-address machines in the following situation:
store EAX -> [ss#0]
[ss#0] += 1
...
use(EAX)

In this case, it is not valid to rewrite this as:


store EAX -> [ss#0]
EAX += 1
store EAX -> [ss#0]  ;;; this would also delete the store above
...
use(EAX)

... because EAX is not dead at that point.  Keep track of which registers
we are allowed to clobber, and which ones we aren't, and don't clobber the
ones we're not supposed to.  :)

This should resolve the issues on X86 last night.

llvm-svn: 25948
2006-02-03 23:28:46 +00:00
Chris Lattner 507a3a7bd1 significantly simplify the VirtRegMap code by pulling the SpillSlotsAvailable
and PhysRegsAvailable maps out into a new AvailableSpills struct.  No
functionality change.

This paves the way for a bugfix, coming up next.

llvm-svn: 25947
2006-02-03 23:13:58 +00:00
Nate Begeman 20a894282d Implement some feedback from sabre
llvm-svn: 25946
2006-02-03 22:38:07 +00:00
Nate Begeman dc7bba9ffe Add a framework for eliminating instructions that produce undemanded bits.
llvm-svn: 25945
2006-02-03 22:24:05 +00:00
Chris Lattner 81e66abd1e add a note
llvm-svn: 25944
2006-02-03 22:06:45 +00:00
Chris Lattner d079dbb9b0 another case Nate came up with
llvm-svn: 25943
2006-02-03 22:05:41 +00:00
Chris Lattner 277462e20f add a note
llvm-svn: 25942
2006-02-03 21:25:23 +00:00
Chris Lattner f68fd20286 remove some #ifdef'd out code, which should properly be in the dag combiner anyway.
llvm-svn: 25941
2006-02-03 20:13:59 +00:00
Chris Lattner a1d312c6ea remove an old comment
llvm-svn: 25940
2006-02-03 18:59:39 +00:00
Chris Lattner 23d55f2547 Remove the X86PeepholeOptimizerPass, a truly horrible old hack that is now
obsolete.  yaay :)

llvm-svn: 25939
2006-02-03 18:54:24 +00:00
Chris Lattner c408558638 When rewriting frame instructions, emit the appropriate small-immediate
instruction when possible.

llvm-svn: 25938
2006-02-03 18:20:04 +00:00
Chris Lattner ca76917388 Teach sparc to fold loads/stores into copies.
Remove the dead getRegClassForType method
minor formatting changes.

llvm-svn: 25936
2006-02-03 07:06:25 +00:00
Chris Lattner 6091407783 remove dead fn
llvm-svn: 25935
2006-02-03 06:51:34 +00:00
Nate Begeman 22e251abf1 Add common code for reassociating ops in the dag combiner
llvm-svn: 25934
2006-02-03 06:46:56 +00:00
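
The classic payoff of the reassociation added in r25934 above, shown as a quick standalone check: constants separated by a variable can be brought together and folded at compile time.

#include <cassert>

int main() {
  int X = 17, C1 = 5, C2 = 11;
  // (X + C1) + C2  ==  X + (C1 + C2), and the right-hand side folds C1 + C2.
  assert(((X + C1) + C2) == (X + (C1 + C2)));
  return 0;
}
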
Chris Lattner d7d98611ca Implement isLoadFromStackSlot and isStoreToStackSlot
llvm-svn: 25932
2006-02-03 06:44:54 +00:00
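
The general shape of the hooks in r25932 above, sketched with stand-in types (simplified; not the real MachineInstr/TargetInstrInfo API): if the instruction is a plain reload from, or spill to, a stack slot, report the register and frame index so the spiller can reuse or delete redundant accesses.

// Stand-in for a machine instruction; the real API is richer than this.
struct StackAccessInstr {
  bool IsFrameLoad;      // simple load from a stack slot?
  bool IsFrameStore;     // simple store to a stack slot?
  unsigned Reg;          // register loaded or stored
  int FrameIndex;        // which stack slot
};

// Returns the register involved, or 0 if this is not a simple slot access.
unsigned isLoadFromStackSlot(const StackAccessInstr &MI, int &FrameIndex) {
  if (!MI.IsFrameLoad) return 0;
  FrameIndex = MI.FrameIndex;
  return MI.Reg;
}

unsigned isStoreToStackSlot(const StackAccessInstr &MI, int &FrameIndex) {
  if (!MI.IsFrameStore) return 0;
  FrameIndex = MI.FrameIndex;
  return MI.Reg;
}

int main() {
  StackAccessInstr Reload{true, false, /*Reg=*/3, /*FrameIndex=*/0};
  int FI = -1;
  return isLoadFromStackSlot(Reload, FI) == 3 && FI == 0 ? 0 : 1;
}
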
Chris Lattner a23b04acdb remove some target-indep and implemented notes
llvm-svn: 25930
2006-02-03 06:22:11 +00:00
Chris Lattner d1aaee03ce target independent notes
llvm-svn: 25929
2006-02-03 06:21:43 +00:00
Nate Begeman fc567d85d5 Flesh out a couple of the items in the README
llvm-svn: 25928
2006-02-03 05:17:06 +00:00
Jeff Cohen 3276ff7ac6 Fix VC++ compilation error caused by using a std::map iterator variable to receive
a std::multimap iterator value.  For some reason, GCC doesn't have a problem with this.

llvm-svn: 25927
2006-02-03 03:48:54 +00:00
Chris Lattner e18ef0d4a6 Remove move copies and dead stuff by not clobbering the result reg of a noop copy.
llvm-svn: 25926
2006-02-03 03:16:14 +00:00
Andrew Lenharth 1318240fd0 isStoreToStackSlot
llvm-svn: 25925
2006-02-03 03:07:37 +00:00
Chris Lattner 774d4a190b Simplify some code
llvm-svn: 25924
2006-02-03 03:06:49 +00:00
Chris Lattner a1eac9b978 the X86 backend no longer needs to delete its own noop copies
llvm-svn: 25923
2006-02-03 02:59:58 +00:00
Chris Lattner 1ef239afb4 Add code that checks for noop copies, which triggers when either:
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
   instead of to the reloaded reg.

This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks.  This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.

llvm-svn: 25922
2006-02-03 02:02:59 +00:00
Chris Lattner f0a2d66d1c Add a note
llvm-svn: 25921
2006-02-03 01:49:49 +00:00
Evan Cheng 02b5b9cdd6 Added case HANDLENODE to getOperationName().
llvm-svn: 25920
2006-02-03 01:33:01 +00:00
Chris Lattner b7f24de4c8 Physregs may hold multiple stack slot values at the same time. Keep track
of this, and use it to our advantage (bwahahah).  This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86).  A common
old-new diff looks like this:

        stw r2, 3304(r1)
-       lwz r2, 3192(r1)
        stw r2, 3300(r1)
-       lwz r2, 3192(r1)
        stw r2, 3296(r1)
-       lwz r2, 3192(r1)
        stw r2, 3200(r1)
-       lwz r2, 3192(r1)
        stw r2, 3196(r1)
-       lwz r2, 3192(r1)
+       or r2, r2, r2
        stw r2, 3188(r1)

and

-       lwz r31, 604(r1)
-       lwz r13, 604(r1)
-       lwz r14, 604(r1)
-       lwz r15, 604(r1)
-       lwz r16, 604(r1)
-       lwz r30, 604(r1)
+       or r31, r30, r30
+       or r13, r30, r30
+       or r14, r30, r30
+       or r15, r30, r30
+       or r16, r30, r30
+       or r30, r30, r30

Removal of the R = R copies is coming next...

llvm-svn: 25919
2006-02-03 00:36:31 +00:00
Chris Lattner 9b178ce225 update a note
llvm-svn: 25918
2006-02-02 23:50:22 +00:00
Chris Lattner f3aef1b004 Fix a deficiency in the spiller that Evan noticed. In particular, consider
this code:

  store [stack slot #0],  R10
    = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stackslot#0] available
in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values
they make available.  In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
1)      movsd QWORD PTR [%ESP + 48], %XMM1
2)      movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling, on PPC
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.

On X86, things are much better (because it spills more), where we nuke
about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...

llvm-svn: 25917
2006-02-02 23:29:36 +00:00
Nate Begeman 4efb328926 add 64b gpr store to the possible list of isStoreToStackSlot opcodes.
llvm-svn: 25916
2006-02-02 21:07:50 +00:00
Chris Lattner 5123346708 fix operand numbers
llvm-svn: 25915
2006-02-02 20:38:12 +00:00
Chris Lattner c327d71e06 implement isStoreToStackSlot for PPC
llvm-svn: 25914
2006-02-02 20:16:12 +00:00
Chris Lattner bb53acd03c Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)
llvm-svn: 25913
2006-02-02 20:12:32 +00:00
Chris Lattner 246ee44c8f implement isStoreToStackSlot
llvm-svn: 25911
2006-02-02 20:00:41 +00:00
Chris Lattner 0acc90c67e add a method
llvm-svn: 25910
2006-02-02 19:57:16 +00:00
Chris Lattner d8208c3665 more notes
llvm-svn: 25908
2006-02-02 19:43:28 +00:00
Chris Lattner d3f033e8e0 add a note, I have no idea how important this is.
llvm-svn: 25907
2006-02-02 19:16:34 +00:00
Chris Lattner e10e1024bc %fcc is not an alias for %fcc0
llvm-svn: 25906
2006-02-02 08:02:20 +00:00
Chris Lattner cb34968d19 correct an opcode
llvm-svn: 25905
2006-02-02 07:56:15 +00:00
Chris Lattner 9dd7df7ee7 new example
llvm-svn: 25903
2006-02-02 07:37:11 +00:00
Nate Begeman cd018525f8 Update the README
llvm-svn: 25902
2006-02-02 07:27:56 +00:00
Chris Lattner 49beaf40fc Turn any_extend nodes into zero_extend nodes when it allows us to remove an
and instruction.  This allows us to compile stuff like this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

to this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        ret

instead of this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

This occurs quite a bit with the X86 backend.  For example, 25 times in
lambda, 30 times in 177.mesa, 14 times in galgel,  70 times in fma3d,
25 times in vpr, several hundred times in gcc, ~45 times in crafty,
~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 on gap,
16 times on bzip2, 14 times on twolf, and 1-2 times in many other SPEC2K
programs.

llvm-svn: 25901
2006-02-02 07:17:31 +00:00
Chris Lattner e0c60d63b1 Implement MaskedValueIsZero for ANY_EXTEND nodes
llvm-svn: 25900
2006-02-02 06:43:15 +00:00
Chris Lattner 4b2ec8af23 implemented, testcase here: test/Regression/CodeGen/X86/compare-add.ll
llvm-svn: 25899
2006-02-02 06:36:48 +00:00
Chris Lattner 49ce35542f add two dag combines:
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1

This allows us to compile this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

into this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

not this:

_X:
        movl $14, %eax
        addl 4(%esp), %eax
        cmpl $12345, %eax
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

Testcase here: Regression/CodeGen/X86/compare-add.ll

nukage of the and coming up next.

llvm-svn: 25898
2006-02-02 06:36:13 +00:00
Evan Cheng d3908f79cb Update.
llvm-svn: 25896
2006-02-02 02:40:17 +00:00
Chris Lattner 0bd74558ae make -debug output less newliney
llvm-svn: 25895
2006-02-02 00:38:08 +00:00
Evan Cheng d8fba3a1ee Fix an erroneous comment.
llvm-svn: 25894
2006-02-02 00:28:23 +00:00
Chris Lattner 7f5880b1c7 Implement matching constraints. We can now say things like this:
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

xyz r2, r3, r4, r2

note that the r2's are pinned together.  Yaay for 2-address instructions.

llvm-svn: 25893
2006-02-02 00:25:23 +00:00
Chris Lattner 2f34a9e332 validate matching constraints and remember when we see them.
llvm-svn: 25892
2006-02-02 00:23:53 +00:00
Chris Lattner 6132a87cf4 more notes
llvm-svn: 25890
2006-02-01 23:38:08 +00:00
Evan Cheng b3ea2677a4 Tell codegen MOVAPSrr and MOVAPDrr are copies.
llvm-svn: 25889
2006-02-01 23:03:16 +00:00
Evan Cheng f1ed826c2a Added SSE entries to foldMemoryOperand().
llvm-svn: 25888
2006-02-01 23:02:25 +00:00
Evan Cheng 8b40cde148 Rearrange code to my liking. :)
llvm-svn: 25887
2006-02-01 23:01:57 +00:00
Chris Lattner aa23fa9f43 Implement smart printing of inline asm strings, handling variants and
substituted operands.  For this testcase:

int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

we now emit:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz r2, r2, r3  ;; look here
        or r3, r2, r2
        blr

... note the substituted operands. :)

llvm-svn: 25886
2006-02-01 22:41:11 +00:00
Chris Lattner f7f056751c add a method
llvm-svn: 25884
2006-02-01 22:38:46 +00:00
Chris Lattner 2f7650f9dc another note
llvm-svn: 25883
2006-02-01 21:44:48 +00:00
Andrew Lenharth 4b1c726fbb Add immediate forms of cmov and remove some cruft
llvm-svn: 25882
2006-02-01 19:37:33 +00:00
Nate Begeman 01bd9d9911 *** empty log message ***
llvm-svn: 25879
2006-02-01 19:05:15 +00:00
Chris Lattner 1558fc64f9 Implement simple register assignment for inline asms. This allows us to compile:
int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

into:

 (0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2      ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
2006-02-01 18:59:47 +00:00
Chris Lattner ba56b5dc35 Finegrainify namespacification
llvm-svn: 25877
2006-02-01 18:10:56 +00:00
Chris Lattner a983beab37 add a note
llvm-svn: 25876
2006-02-01 17:54:23 +00:00
Nate Begeman 7e7f439f85 Fix some of the stuff in the PPC README file, and clean up legalization
of the SELECT_CC, BR_CC, and BRTWOWAY_CC nodes.

llvm-svn: 25875
2006-02-01 07:19:44 +00:00
Chris Lattner 3da1bb520e add a note, I'll take care of this after nate commits his big patch
llvm-svn: 25873
2006-02-01 06:40:32 +00:00
Evan Cheng 9e350cd6ad - Use xor to clear integer registers (set R, 0).
- Added a new format for instructions where the source register is implied
  and it is the same as the destination register. Used for pseudo instructions
  that clear the destination register.

llvm-svn: 25872
2006-02-01 06:13:50 +00:00
Evan Cheng c404b5748c Remove another entry.
llvm-svn: 25871
2006-02-01 06:08:48 +00:00