Commit Graph

290 Commits

Author SHA1 Message Date
Dan Gohman 7e47cc7cda Define instructions for cmovo and cmovno.
llvm-svn: 61836
2009-01-07 00:35:10 +00:00
Dan Gohman 33e6fcd56f X86_COND_C and X86_COND_NC are alternate mnemonics for
X86_COND_B and X86_COND_AE, respectively.

llvm-svn: 61835
2009-01-07 00:15:08 +00:00
Dan Gohman beac19e299 Revert r42653 and forward-port the code that lets INC64_32r be
converted to LEA64_32r in x86's convertToThreeAddress. This
replaces code like this:
   movl  %esi, %edi
   inc   %edi
with this:
   lea   1(%rsi), %edi
which appears to be beneficial.

llvm-svn: 61830
2009-01-06 23:34:46 +00:00
Dan Gohman 906152a20f Tidy up #includes, deleting a bunch of unnecessary #includes.
llvm-svn: 61715
2009-01-05 17:59:02 +00:00
Dan Gohman d72358cba8 Make the fuse-failed debug output human-readable.
llvm-svn: 61356
2008-12-23 00:19:20 +00:00
Mon P Wang 998fd29ce1 Fixed x86 code generation of multiply for v2i64. It was incorrect for SSE4.1.
llvm-svn: 61211
2008-12-18 21:42:19 +00:00
Evan Cheng 43c0891838 Reason #3 from 60595 doesn't hold true. If we can fold a PIC load from the constant pool into a use, the rewrite happens at spill time (not in VirtRegMap). Later on, if GlobalBaseReg is spilled, the spiller can see that the use depends on GlobalBaseReg and do the right thing.
llvm-svn: 60596
2008-12-05 17:41:31 +00:00
Evan Cheng fd8c4d5975 Effectively undo 60461 in PIC mode, which simply transforms V_SET0 / V_SETALLONES into a load from the constant pool in order to fold it into restores. This is not safe to do when a PIC base is being used, for a number of reasons:
1. GlobalBaseReg may have been spilled.
2. It may not be live at the use.
3. The spiller doesn't know this is happening, so it won't prevent GlobalBaseReg from being spilled later. (That by itself is a nasty hack; it's needed because we don't insert the reload until later.)

llvm-svn: 60595
2008-12-05 17:23:48 +00:00
Dan Gohman 3f86b51333 Split foldMemoryOperand into public non-virtual and protected virtual
parts, and add target-independent code to add/preserve
MachineMemOperands.

llvm-svn: 60488
2008-12-03 18:43:12 +00:00
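
A minimal sketch of the non-virtual/virtual split described above, using simplified stand-in types rather than LLVM's real classes (all names here are illustrative, not the actual API):

    #include <vector>

    struct MemOperand { /* address, size, flags ... */ };
    struct Instr { std::vector<MemOperand> MemOps; };

    class InstrInfoBase {
    public:
      virtual ~InstrInfoBase() = default;
      // Public and non-virtual: the target-independent bookkeeping
      // (carrying the memory operand over) lives here.
      Instr *foldMemoryOperand(Instr *MI, const MemOperand &MO) {
        Instr *Folded = foldMemoryOperandImpl(MI, MO);
        if (Folded)
          Folded->MemOps.push_back(MO);  // add/preserve memory-operand info
        return Folded;
      }
    protected:
      // Protected and virtual: each target overrides only the folding itself.
      virtual Instr *foldMemoryOperandImpl(Instr *MI, const MemOperand &MO) = 0;
    };
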
Dan Gohman cc78cdf275 Mark x86's V_SET0 and V_SETALLONES with isSimpleLoad, and teach X86's
foldMemoryOperand how to "fold" them, by converting them into constant-pool
loads. When they aren't folded, they use xorps/cmpeqd, but for example when
register pressure is high, they may now be folded as memory operands, which
reduces register pressure.

Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will
remat it instead of copying zeros around (V_SETALLONES was already marked).

llvm-svn: 60461
2008-12-03 05:21:24 +00:00
Bill Wendling 122c515809 Reapply r60382. This time, don't mark "ADC" nodes with "implicit EFLAGS".
llvm-svn: 60385
2008-12-02 00:07:05 +00:00
Bill Wendling 351b6659ad Temporarily revert r60382. It caused CodeGen/X86/i2k.ll and others to fail.
llvm-svn: 60383
2008-12-01 23:44:08 +00:00
Bill Wendling a435b1aebc - Have "ADD" instructions return an implicit EFLAGS.
- Add support for seto, setno, setc, and setnc instructions.

llvm-svn: 60382
2008-12-01 23:30:42 +00:00
Bill Wendling 751a694ad3 Generate something sensible for an [SU]ADDO op when the overflow/carry flag is
the condition for the BRCOND statement. For instance, it will generate:

    addl %eax, %ecx
    jo LOF

instead of

    addl %eax, %ecx
    ; About 10 instructions to compare the signs of LHS, RHS, and sum.
    jl LOF

llvm-svn: 60123
2008-11-26 22:37:40 +00:00
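
For reference, a source-level shape that exercises this lowering. This is a sketch, not the commit's testcase; __builtin_add_overflow is a GCC/Clang builtin and is just one way to produce an SADDO node:

    #include <cstdio>

    // Overflow-checked add: the overflow flag feeds the branch, so the
    // backend can emit "addl ...; jo ..." as in the example above.
    bool add_overflows(int a, int b, int *sum) {
      return __builtin_add_overflow(a, b, sum);
    }

    int main() {
      int s;
      if (add_overflows(0x7fffffff, 1, &s))
        std::puts("overflow");
      return 0;
    }
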
Dan Gohman 002a2cb207 Fix kill flag annotations in PUSH instructions.
llvm-svn: 60095
2008-11-26 06:39:12 +00:00
Dan Gohman 0b2732598c Add more const qualifiers. This fixes build breakage from r59540.
llvm-svn: 59542
2008-11-18 19:49:32 +00:00
Evan Cheng f713722975 For now, don't split live intervals around x87 stack register barriers. FpGET_ST0_80 must be right after a call instruction (and ADJCALLSTACKUP) so we need to find a way to prevent reload of x87 registers between them.
llvm-svn: 58230
2008-10-27 07:14:50 +00:00
Nicolas Geoffray db30612fc4 Generate code for TLS instructions.
llvm-svn: 58141
2008-10-25 15:22:06 +00:00
Dan Gohman 97d95d6d85 Optimized FCMP_OEQ and FCMP_UNE for x86.
Where previously LLVM might emit code like this:

        ucomisd %xmm1, %xmm0
        setne   %al
        setp    %cl
        orb     %al, %cl
        jne     .LBB4_2

it now emits this:

        ucomisd %xmm1, %xmm0
        jne     .LBB4_2
        jp      .LBB4_2

It has fewer instructions and uses fewer registers, but it does
have more branches. And in the case that this code is followed by
a non-fallthrough edge, it may be followed by a jmp instruction,
resulting in three branch instructions in sequence. Some effort
is made to avoid this situation.

To achieve this, X86ISelLowering.cpp now recognizes FCMP_OEQ and
FCMP_UNE in lowered form and replaces them with code that emits
two branches, except in the case where it would require converting
a fall-through edge to an explicit branch.

Also, X86InstrInfo.cpp's branch analysis and transform code now
knows how to handle blocks with multiple conditional branches. It
uses loops instead of having fixed checks for up to two
instructions. It can now analyze and transform code generated
from FCMP_OEQ and FCMP_UNE.

llvm-svn: 57873
2008-10-21 03:29:32 +00:00
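
As a standalone illustration (not from the commit), these are the C++ comparisons that lower to FCMP_OEQ and FCMP_UNE. With ucomisd, "ordered and equal" means ZF=1 and PF=0, which no single setcc can express; hence the jne+jp pair above:

    // FCMP_OEQ: ordered and equal (false if either operand is NaN).
    bool feq(double a, double b) { return a == b; }

    // FCMP_UNE: unordered or not equal (true if either operand is NaN).
    bool fne(double a, double b) { return a != b; }
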
Dan Gohman c835458da9 When the coalescer is rematerializing, have it remove
the copy instruction from the instruction list before asking the
target to create the new instruction. This gets the old instruction
out of the way so that it doesn't interfere with the target's
rematerialization code. In the case of x86, this helps it find
more cases where EFLAGS is not live.

Also, in the X86InstrInfo.cpp, teach isSafeToClobberEFLAGS to check
to see if it reached the end of the block after scanning each
instruction, instead of just before. This lets it notice when the
end of the block is only two instructions away, without doing any
additional scanning.

These changes allow rematerialization to clobber EFLAGS in more
cases, for example using xor instead of mov to set the return value
to zero in the included testcase.

llvm-svn: 57872
2008-10-21 03:24:31 +00:00
Dan Gohman a39b0a1f05 Define patterns for shld and shrd that match immediate
shift counts, and patterns that match dynamic shift counts
when the subtract is obscured by a truncate node.

Add DAGCombiner support for recognizing rotate patterns
when the shift counts are defined by truncate nodes.

Fix and simplify the code for commuting shld and shrd
instructions to work even when the given instruction doesn't
have a parent, and when the caller needs a new instruction.

These changes allow LLVM to use the shld, shrd, rol, and ror
instructions on x86 to replace equivalent code using two
shifts and an or in many more cases.

llvm-svn: 57662
2008-10-17 01:23:35 +00:00
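
A sketch of the kind of source that can now become a single rotate instruction; the helper below is illustrative, written as the usual two-shifts-plus-or idiom whose shift amounts are related by a subtraction:

    #include <cstdint>

    // Rotate left by n: recognizable as ROL once the combiner matches the
    // (x << n) | (x >> (32 - n)) pattern, even through truncates of n.
    uint32_t rotl32(uint32_t x, unsigned n) {
      n &= 31;
      return (x << n) | (x >> ((32 - n) & 31));
    }
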
Dan Gohman 33332bce17 Const-ify several TargetInstrInfo methods.
llvm-svn: 57622
2008-10-16 01:49:15 +00:00
Anton Korobeynikov 46a9c01fff Update the size of the instruction correctly when a segment override is present.
llvm-svn: 57414
2008-10-12 10:30:11 +00:00
Anton Korobeynikov b52ef06c8c Revert r56675 - it breaks unwinding runtime everywhere.
llvm-svn: 57048
2008-10-04 11:09:36 +00:00
Dan Gohman 0d1e9a8e04 Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Dan Gohman 6ebe734ca6 Move the GlobalBaseReg field out of X86ISelDAGToDAG.cpp
and X86FastISel.cpp into X86MachineFunction.h, so that it
can be shared, instead of having each selector keep track
of its own.

llvm-svn: 56825
2008-09-30 00:58:23 +00:00
Dan Gohman 7e922aa35d Mark lea fi# as being really rematerializable.
llvm-svn: 56698
2008-09-26 21:30:20 +00:00
Evan Cheng 994dd0bbec Avoid spilling EBP / RBP twice in the prologue.
llvm-svn: 56675
2008-09-26 19:14:21 +00:00
Dan Gohman 2430073657 Move the code for initializing the global base reg out of
X86ISelDAGToDAG.cpp and into X86InstrInfo.cpp. This will allow
it to be reused by FastISel.

llvm-svn: 56494
2008-09-23 18:22:58 +00:00
Dan Gohman 38453eebdc Remove isImm(), isReg(), and friends, in favor of
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.

llvm-svn: 56189
2008-09-13 17:58:21 +00:00
Evan Cheng f93bc7f755 Use static_cast instead of C style cast.
llvm-svn: 55552
2008-08-29 23:21:31 +00:00
Evan Cheng b3ed09703c Backing out 55521. Not safe.
llvm-svn: 55548
2008-08-29 22:13:21 +00:00
Evan Cheng 960b17a3c2 Swap fp comparison operands and change predicate to allow load folding.
llvm-svn: 55521
2008-08-28 23:48:31 +00:00
Owen Anderson 42ccd39689 These assertions should return false instead, allowing the client to detect the failure.
llvm-svn: 55377
2008-08-26 18:50:40 +00:00
Owen Anderson 27fb3dcbc7 Make TargetInstrInfo::copyRegToReg return a bool indicating whether the copy requested
was inserted or not.  This allows bitcast in fast isel to properly handle the case
where an appropriate reg-to-reg copy is not available.

llvm-svn: 55375
2008-08-26 18:03:31 +00:00
Owen Anderson 4f6bf04616 Convert uses of std::vector in TargetInstrInfo to SmallVector. This change had to be propagated down into all the targets and up into all clients of this API.
llvm-svn: 54802
2008-08-14 22:49:33 +00:00
Dan Gohman 4e2f3ace2c Add an EXTRACTPSmr pattern to match the pattern that
X86ISelLowering creates.

llvm-svn: 54544
2008-08-08 18:30:21 +00:00
Dan Gohman 527ca7e253 Re-enable elimination of unnecessary SUBREG_TO_REG instructions in
LowerSubregs, and fix an x86-64 isel bug that this exposed.

SUBREG_TO_REG for x86-64 implicit zero extension is only safe for
isel to generate when the source is known to always have zeros in
the high 32 bits. The EXTRACT_SUBREG instruction does not clear
the high 32 bits.

llvm-svn: 54444
2008-08-07 02:54:50 +00:00
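
A standalone example (not from the commit) of why the high bits matter: on x86-64, writing a 32-bit register implicitly zeroes bits 63:32, so a zero extension like this needs no extra instruction; that is the guarantee SUBREG_TO_REG encodes and EXTRACT_SUBREG does not:

    #include <cstdint>

    // Typically compiles to a single 32-bit register move; the hardware
    // zero-fills the upper 32 bits of the destination register.
    uint64_t widen(uint32_t x) { return x; }
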
Dan Gohman 2ce6f2ad5e Rename SDOperand to SDValue.
llvm-svn: 54128
2008-07-27 21:46:04 +00:00
Evan Cheng e001643358 Use movaps instead of movups to spill 16-byte vector values when default alignment is >= 16. This fixes some massive performance regressions.
llvm-svn: 53844
2008-07-21 06:34:17 +00:00
Anton Korobeynikov b7a49925a1 Use aligned stack spills, where possible. This fixes PR2549.
llvm-svn: 53784
2008-07-19 06:30:51 +00:00
Dan Gohman 1705968102 Add a new function, ReplaceAllUsesOfValuesWith, which handles bulk
replacement of multiple values. This is slightly more efficient
than doing multiple ReplaceAllUsesOfValueWith calls, and theoretically
could be optimized even further. However, an important property of this
new function is that it handles the case where the source value set and
destination value set overlap. This makes it feasible for isel to use
SelectNodeTo in many very common cases, which is advantageous because
SelectNodeTo avoids a temporary node and it doesn't require CSEMap
updates for users of values that don't change position.

Revamp MorphNodeTo, which is what does all the work of SelectNodeTo, to
handle operand lists more efficiently, and to correctly handle a number
of corner cases to which its new wider use exposes it.

This commit also includes a change to the encoding of post-isel opcodes
in SDNodes; now instead of being sandwiched between the target-independent
pre-isel opcodes and the target-dependent pre-isel opcodes, post-isel
opcodes are now represented as negative values. This makes it possible
to test if an opcode is pre-isel or post-isel without having to know
the size of the current target's post-isel instruction set.

These changes speed up llc overall by 3% and reduce memory usage by 10%
on the InstructionCombining.cpp testcase with -fast and -regalloc=local.

llvm-svn: 53728
2008-07-17 19:10:17 +00:00
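
A simplified standalone model (not LLVM's actual enums) of the opcode encoding described above: pre-isel opcodes stay non-negative, post-isel opcodes are stored negated, so the pre/post test is a sign check that needs no knowledge of the target's opcode count:

    #include <cassert>

    // Assumed boundary between target-independent and target-specific
    // pre-isel opcodes; the exact value does not matter for the sign test.
    enum { FIRST_TARGET_OPCODE = 1000 };

    inline int encodePostISelOpcode(int MachineOpcode) {
      return -(MachineOpcode + 1);   // keeps 0 out of the post-isel range
    }
    inline bool isPostISelOpcode(int NodeType) { return NodeType < 0; }

    int main() {
      assert(!isPostISelOpcode(42));                       // pre-isel, generic
      assert(!isPostISelOpcode(FIRST_TARGET_OPCODE + 7));  // pre-isel, target
      assert(isPostISelOpcode(encodePostISelOpcode(5)));   // post-isel
      return 0;
    }
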
Dan Gohman 9a542a4d5f Add a utility function to MachineInstr for testing whether an instruction
has exactly one MachineMemOperand, and change some X86 lowering code to
make use of it.

llvm-svn: 53498
2008-07-12 00:10:52 +00:00
Dan Gohman 3b46030375 Pool-allocation for MachineInstrs, MachineBasicBlocks, and
MachineMemOperands. The pools are owned by MachineFunctions.

This drastically reduces the number of calls to malloc/free made
during the "Emit" phase of scheduling, as well as later phases
in CodeGen. Combined with other changes, this speeds up the
"instruction selection" phase of CodeGen by 10% in some cases.

llvm-svn: 53212
2008-07-07 23:14:23 +00:00
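
A minimal sketch of the idea, under the assumption that the pools are simple bump allocators owned by a parent object (LLVM's actual recycling allocators are more involved; alignment and oversized requests are ignored here):

    #include <cstddef>
    #include <vector>

    // Bump allocator: objects are carved out of large slabs owned by the
    // pool, so allocating many small objects avoids per-object malloc/free,
    // and everything is released at once when the pool is destroyed.
    class Pool {
      std::vector<char *> Slabs;
      size_t Used = 0;
      static const size_t SlabSize = 4096;
    public:
      void *allocate(size_t Size) {
        if (Slabs.empty() || Used + Size > SlabSize) {
          Slabs.push_back(new char[SlabSize]);
          Used = 0;
        }
        void *P = Slabs.back() + Used;
        Used += Size;
        return P;
      }
      ~Pool() {
        for (size_t i = 0; i < Slabs.size(); ++i) delete[] Slabs[i];
      }
    };
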
Dan Gohman 38740a98b2 Make DenseMap's insert return a pair, to more closely resemble std::map.
llvm-svn: 53177
2008-07-07 17:46:23 +00:00
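
The std::map behavior being matched, shown with std::map itself so it runs without LLVM: insert returns an (iterator, inserted) pair, so callers can detect whether the key was already present without a second lookup:

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
      std::map<std::string, int> M;
      auto R1 = M.insert(std::make_pair(std::string("x"), 1));
      std::printf("inserted=%d value=%d\n", R1.second, R1.first->second); // 1 1
      auto R2 = M.insert(std::make_pair(std::string("x"), 2)); // key exists
      std::printf("inserted=%d value=%d\n", R2.second, R2.first->second); // 0 1
      return 0;
    }
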
Evan Cheng 7d98a48f15 - Remove calls to copyKillDeadInfo, which is an N^2 function. Instead, propagate kill / dead markers as new instructions are constructed in foldMemoryOperand, convertToThreeAddress, etc.
- Also remove LiveVariables::instructionChanged, etc. Replace all calls with cheaper calls which update VarInfo kill list.

llvm-svn: 53097
2008-07-03 09:09:37 +00:00
Evan Cheng c939f45fe7 commuteInstruction should preserve dead markers.
llvm-svn: 53060
2008-07-03 00:04:51 +00:00
Owen Anderson 30cc028e4a Make LiveVariables even more optional, by making it optional in the call to TargetInstrInfo::convertToThreeAddressInstruction
Also, if LV isn't around, then TwoAddr doesn't need to update flags, since they won't have been set in the first place.

llvm-svn: 53058
2008-07-02 23:41:07 +00:00
Dan Gohman fb19f9402b Split ISD::LABEL into ISD::DBG_LABEL and ISD::EH_LABEL, eliminating
the need for a flavor operand, and add a new SDNode subclass,
LabelSDNode, for use with them to eliminate the need for a label id
operand.

Change instruction selection to let these label nodes through
unmodified instead of creating copies of them. Teach the MachineInstr
emitter how to emit a MachineInstr directly from an ISD label node.

This avoids the need for allocating SDNodes for the label id and
flavor value, as well as SDNodes for each of the post-isel label,
label id, and label flavor.

llvm-svn: 52943
2008-07-01 00:05:16 +00:00
Evan Cheng 3f2ceac565 If it's determined safe, remat MOV32r0 (i.e. xor r, r) and others as it is instead of using the longer MOV32ri instruction.
llvm-svn: 52670
2008-06-24 07:10:51 +00:00
Evan Cheng 03553bb59a Add option to commuteInstruction() which forces it to create a new (commuted) instruction.
llvm-svn: 52308
2008-06-16 07:33:11 +00:00
Duncan Sands 13237ac3b9 Wrap MVT::ValueType in a struct to get type safety
and better control the abstraction.  Rename the type
to MVT.  To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits().  Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).

llvm-svn: 52044
2008-06-06 12:08:01 +00:00
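
A sketch of the out-of-tree migration the commit describes, for a hypothetical SDNode *N (illustrative only, shown as comments since it cannot compile outside the tree):

    // Before (old API):
    //   MVT::ValueType VT = N->getValueType(0);
    //   unsigned Bits = MVT::getSizeInBits(VT);
    // After (new API):
    //   MVT VT = N->getValueType(0);
    //   unsigned Bits = VT.getSizeInBits();
    //   switch (VT.getSimpleVT()) {   // asserts if VT is an extended type
    //   case MVT::i32: /* ... */ break;
    //   default: break;
    //   }
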
Dan Gohman 3388d022ac Use PMULDQ for v2i64 multiplies when SSE4.1 is available. And add
load-folding table entries for PMULDQ and PMULLD.

llvm-svn: 51489
2008-05-23 17:49:40 +00:00
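
For context (not from the commit), the two SSE4.1 instructions by way of their intrinsics; a v2i64 multiply can be assembled from PMULDQ partial products, while PMULLD is the packed 32-bit low multiply:

    #include <smmintrin.h>   // SSE4.1

    // PMULDQ: signed multiply of the low 32 bits of each 64-bit lane,
    // producing full 64-bit products.
    __m128i mul_lo32_to_i64(__m128i a, __m128i b) {
      return _mm_mul_epi32(a, b);
    }

    // PMULLD: packed 32-bit multiply keeping the low 32 bits of each product.
    __m128i mul_i32_lo(__m128i a, __m128i b) {
      return _mm_mullo_epi32(a, b);
    }
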
Dan Gohman eabd647cd5 Change target-specific classes to use more precise static types.
This eliminates the need for several awkward casts, including
the last dynamic_cast under lib/Target.

llvm-svn: 51091
2008-05-14 01:58:56 +00:00
Bill Wendling 1e11768a4f Constify the machine instruction passed into the
"is{Trivially,Really}ReMaterializable" methods.

llvm-svn: 51001
2008-05-12 20:54:26 +00:00
Evan Cheng fa8f9f937a Undo r50574. We are already ensuring the folded load address is 16-byte aligned.
llvm-svn: 50578
2008-05-02 17:01:01 +00:00
Evan Cheng 50f82f2c8e It's not safe to fold a load + FsXORPSrr into FsXORPSrm. It loads an FR64 value, but the load-folding variant expects a 16-byte aligned address.
llvm-svn: 50574
2008-05-02 07:50:58 +00:00
Nicolas Geoffray 984e7199cc Don't forget to update the current operand when getting the size of an instruction.
llvm-svn: 50007
2008-04-20 23:36:47 +00:00
Evan Cheng 147cb764b5 Don't forget about sub-register indices when rematting instructions.
llvm-svn: 49830
2008-04-16 23:44:44 +00:00
Nicolas Geoffray ae84bbdbed Infrastructure for getting the machine code size of a function and an instruction. X86, PowerPC and ARM are implemented.
llvm-svn: 49809
2008-04-16 20:10:13 +00:00
Dan Gohman 3bc3ddd638 Rename MemOperand to MachineMemOperand. This was suggested by
review feedback from Chris quite a while ago. No functionality
change.

llvm-svn: 49348
2008-04-07 19:35:22 +00:00
Evan Cheng b86595fb0a ReMat of a load from a stub in PIC mode extends the life of the PIC base. Currently the spiller doesn't do a good job of estimating the impact. Disable for now.
llvm-svn: 49059
2008-04-01 23:26:12 +00:00
Evan Cheng 19a6dd9f2a Remove unnecessary and non-deterministic checking code. Re-enable remat of load from gv stub.
llvm-svn: 49054
2008-04-01 21:38:20 +00:00
Evan Cheng 306e3dcff4 Disabling remat of load from gv stub (temporarily) again to fix llvmgcc bootstrap miscompare.
llvm-svn: 49037
2008-04-01 07:33:13 +00:00
Evan Cheng e4f77c69ac It's not safe to fold a load from GV stub or constantpool into a two-address use.
llvm-svn: 49002
2008-03-31 23:19:51 +00:00
Evan Cheng ed6e34fe41 Move reMaterialize() from TargetRegisterInfo to TargetInstrInfo.
llvm-svn: 48995
2008-03-31 20:40:39 +00:00
Evan Cheng 1973a46cd3 Re-apply 48911.
llvm-svn: 48977
2008-03-31 07:54:19 +00:00
Evan Cheng b8654202dd Backing out 48911 for now. It's breaking stuff.
llvm-svn: 48922
2008-03-28 17:49:06 +00:00
Evan Cheng 9ae4d7b719 Load from stub is already re-materializable.
llvm-svn: 48911
2008-03-28 06:49:25 +00:00
Evan Cheng 308e564693 Code clean up.
llvm-svn: 48856
2008-03-27 01:45:11 +00:00
Evan Cheng 29e62a59f3 Allow certain lea instructions to be rematerialized.
llvm-svn: 48855
2008-03-27 01:41:09 +00:00
Evan Cheng 4fb07c6500 Remove an unused command line option.
llvm-svn: 48854
2008-03-27 01:30:24 +00:00
Dan Gohman 883cbfd0ba Add CMP32mr and friends to the load-unfolding table. Among
other things, this allows the scheduler to unfold a load operand
in the 2008-01-08-SchedulerCrash.ll testcase, so it now successfully
clones the comparison to avoid a pushf+popf.

llvm-svn: 48777
2008-03-25 16:53:19 +00:00
Chris Lattner 5abbe6cef5 Add support for calls that return two FP values in
ST(0)/ST(1).

llvm-svn: 48634
2008-03-21 06:38:26 +00:00
Christopher Lamb d3d0ad3f58 Make insert_subreg a two-address instruction, vastly simplifying LowerSubregs pass. Add a new TII, subreg_to_reg, which is like insert_subreg except that it takes an immediate implicit value to insert into rather than a register.
llvm-svn: 48412
2008-03-16 03:12:01 +00:00
Christopher Lamb dd55d3f1b2 Get rid of a pseudo instruction and replace it with a subreg-based operation on real instructions, ridding the asm printers of the hack used to do this previously. In the process, update LowerSubregs to be careful about eliminating copies that have side effects.
Note: the coalescer will have to be careful about this too, when it starts coalescing insert_subreg nodes.
llvm-svn: 48329
2008-03-13 05:47:01 +00:00
Chris Lattner 7b27ccfd5e coalesce away 80-bit floating point copies.
llvm-svn: 48241
2008-03-11 19:30:09 +00:00
Chris Lattner 7930d8e775 convert a massive if statement to a switch.
llvm-svn: 48240
2008-03-11 19:28:17 +00:00
Christopher Lamb 342e4104d3 Missed part of recommit.
llvm-svn: 48224
2008-03-11 10:27:36 +00:00
Chris Lattner a4fa0ad30d abort with an assert instead of a cerr to get line#
llvm-svn: 48199
2008-03-10 23:56:08 +00:00
Evan Cheng d4e1d9eeb2 Revert 48125, 48126, and 48130 for now to unbreak some x86-64 tests.
llvm-svn: 48167
2008-03-10 19:31:26 +00:00
Christopher Lamb 4ba3f0430b Allow insert_subreg into implicit, target-specific values.
Change insert/extract subreg instructions to be able to be used in TableGen patterns.
Use the above features to reimplement an x86-64 pseudo instruction as a pattern.

llvm-svn: 48130
2008-03-10 06:12:08 +00:00
Chris Lattner 86829f0ff7 teach X86InstrInfo::copyRegToReg how to copy into ST(0) from
an RFP register class.

Teach ScheduleDAG how to handle CopyToReg with different src/dst 
reg classes.

This allows us to compile trivial inline asms that expect stuff
on the top of x87-fp stack.

llvm-svn: 48107
2008-03-09 09:15:31 +00:00
Chris Lattner b79bafcec8 add some code to support cross-register class copying from
RST -> RFP{32/64/80}.  We only handle ST(0) for now.

llvm-svn: 48104
2008-03-09 08:46:19 +00:00
Chris Lattner c4c9dde04c rearrange some code, no functionality change.
llvm-svn: 48101
2008-03-09 07:58:04 +00:00
Evan Cheng 42cb72e52c Turning on remat of pic loads.
llvm-svn: 47524
2008-02-23 02:07:42 +00:00
Evan Cheng 4d17671997 No need to recognize a load from a fixed argument slot as re-materializable. LiveIntervalAnalysis already handles it as a special case.
llvm-svn: 47522
2008-02-23 01:47:44 +00:00
Evan Cheng 94ba37f8e3 Allow re-materialization of pic load (controlled by -remat-pic-load for now).
llvm-svn: 47476
2008-02-22 09:25:47 +00:00
Evan Cheng 244183ef0d commuteInstr() can now commute non-ssa machine instrs.
llvm-svn: 47043
2008-02-13 02:46:49 +00:00
Evan Cheng 3b3286d4bc It's not always safe to fold movsd into xorpd, etc. Check the alignment of the load address first to make sure it's 16-byte aligned.
llvm-svn: 46893
2008-02-08 21:20:40 +00:00
Evan Cheng 8d59dd119b Added missing entries in X86 load / store folding tables.
llvm-svn: 46866
2008-02-08 00:12:56 +00:00
Evan Cheng 1bc1cae318 In some cases, e.g. ADD32ri, no transformation is made. Guard against it.
llvm-svn: 46849
2008-02-07 08:29:53 +00:00
Chris Lattner 18df33d0c8 fix a wordo that gordon noticed :)
llvm-svn: 45896
2008-01-12 00:53:16 +00:00
Chris Lattner 6da61c2515 Any x86 instruction that reads from an invariant location is invariant.
This allows us to sink things like:
	cvtsi2sd	32(%esp), %xmm1
when reading from the argument area, for example.

llvm-svn: 45895
2008-01-12 00:35:08 +00:00
Chris Lattner 596875118c rename MachineInstr::setInstrDescriptor -> setDesc
llvm-svn: 45871
2008-01-11 18:10:50 +00:00
Chris Lattner 806dd0e2ac remove xchg and shift-reg-by-1 instructions, which are dead.
llvm-svn: 45870
2008-01-11 18:00:50 +00:00
Chris Lattner c8226f32e9 Simplify the side effect stuff a bit more and make licm/sinking
both work right according to the new flags.

This removes the TII::isReallySideEffectFree predicate, and adds
TII::isInvariantLoad. 

It removes NeverHasSideEffects+MayHaveSideEffects and adds
UnmodeledSideEffects as machine instr flags.  Now the clients
can decide everything they need.

I think isRematerializable can be implemented in terms of the
flags we have now, though I will let others tackle that.

llvm-svn: 45843
2008-01-10 23:08:24 +00:00
Chris Lattner f171482a66 Verify that the frame index is immutable before treating a load from it as rematerializable (still disabled)
or side-effect free.

llvm-svn: 45816
2008-01-10 04:16:31 +00:00
Chris Lattner 9129f51f9b add a testcase
llvm-svn: 45768
2008-01-09 00:37:18 +00:00
Bill Wendling a3bdad153f Operand 1 should be a register. We don't care if it's a preg, vreg, or 0.
llvm-svn: 45699
2008-01-07 08:05:29 +00:00