Commit Graph

17 Commits

Author SHA1 Message Date
Chris Lattner 30c3de6461 fix two problems with machine sinking:
1. Sinking would crash when the first instruction of a block was
   sunk due to iterator problems.
2. Instructions could be sunk to their current block, causing an
   infinite loop.

This fixes PR3968

llvm-svn: 68787
2009-04-10 16:38:36 +00:00
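A minimal sketch of the two hazards the commit above fixes, assuming nothing of LLVM's real API: a std::list stands in for a block's instruction list. Advancing the iterator before the splice keeps the loop valid even when the very first instruction is moved, and the same-block check stops a "sunk" instruction from being reconsidered forever.

#include <list>

// Hypothetical stand-in: a "block" is just a list of instruction ids,
// where even ids mark sinking candidates.
using Block = std::list<int>;

void sinkAll(Block &From, Block &To) {
  for (auto I = From.begin(); I != From.end();) {
    auto Cur = I++;        // advance before any splice so I never refers to
                           // an element that has already moved into To
    if (*Cur % 2 != 0)
      continue;            // not a sinking candidate
    if (&From == &To)
      continue;            // never sink into the current block, otherwise the
                           // instruction keeps getting picked up again
    To.splice(To.begin(), From, Cur);
  }
}

int main() {
  Block A{2, 1, 4, 3}, B;
  sinkAll(A, B);   // the first element of A is sunk without breaking iteration
  sinkAll(A, A);   // the degenerate call terminates instead of looping forever
}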
Evan Cheng 2510436e20 Fix PR3522. It's not safe to sink into landing pad BB's.
llvm-svn: 64582
2009-02-15 08:36:12 +00:00
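A hedged sketch of the guard this commit implies; the Block struct and its IsLandingPad flag are illustrative stand-ins, not MachineBasicBlock's real interface. A landing pad is only reached along an exceptional edge, so moving an ordinary instruction into it would change when, or whether, that instruction runs.

#include <cstdio>

// Illustrative stand-in for a basic block.
struct Block {
  bool IsLandingPad = false;  // reached via an exception edge
};

// Is this successor an acceptable destination for a sunk instruction?
bool isLegalSinkTarget(const Block &Succ) {
  return !Succ.IsLandingPad;
}

int main() {
  Block Normal, LPad;
  LPad.IsLandingPad = true;
  std::printf("normal: %d, landing pad: %d\n",
              isLegalSinkTarget(Normal), isLegalSinkTarget(LPad));
}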
Evan Cheng 47a65a167d Don't sink the instruction if TargetRegisterInfo::isSafeToMoveRegClassDefs doesn't think it's safe. This works around PR1911.
llvm-svn: 63994
2009-02-07 01:21:47 +00:00
Dan Gohman 906152a20f Tidy up #includes, deleting a bunch of unnecessary #includes.
llvm-svn: 61715
2009-01-05 17:59:02 +00:00
Dan Gohman 0d1e9a8e04 Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Dan Gohman 38453eebdc Remove isImm(), isReg(), and friends, in favor of
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.

llvm-svn: 56189
2008-09-13 17:58:21 +00:00
Dan Gohman a79db30d28 Tidy up several unbeseeming casts from pointer to intptr_t.
llvm-svn: 55779
2008-09-04 17:05:41 +00:00
Dan Gohman d78c400b5b Clean up the use of static and anonymous namespaces. This turned up
several things that were neither in an anonymous namespace nor declared
static, yet were not intended to be global.

llvm-svn: 51017
2008-05-13 00:00:25 +00:00
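A quick illustration of the convention this cleanup enforces, in generic C++ rather than the files actually touched: both spellings below give a helper internal linkage, so it cannot collide with, or be reached from, another translation unit.

// An anonymous namespace gives everything declared inside internal linkage.
namespace {
int helperViaAnonymousNamespace(int X) { return X + 1; }
} // end anonymous namespace

// 'static' does the same for a single free function.
static int helperViaStatic(int X) { return X + 2; }

int main() {
  return helperViaAnonymousNamespace(1) + helperViaStatic(1);  // 2 + 3
}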
Evan Cheng 399e1101ba Refactor some code out of MachineSink into a MachineInstr query.
llvm-svn: 48311
2008-03-13 00:44:09 +00:00
Dan Gohman 3a4be0fdef Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Chris Lattner 08af5a9dad implement support for sinking a load out the bottom of a block that
has no stores between the load and the end of the block.  This works
great and sinks hundreds of loads, but we can't turn it on because
machineinstrs don't have volatility information and we don't want to
sink volatile loads :(

llvm-svn: 45894
2008-01-12 00:17:41 +00:00
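A rough sketch of the legality scan the message describes, over a made-up per-instruction summary (the IsLoad/MayStore/IsVolatile fields are assumptions standing in for real MachineInstr queries): the load can only move below the rest of its block if nothing after it may write memory, and never if the load itself is volatile, which is exactly the information that was missing at the time.

#include <cstddef>
#include <vector>

// Illustrative instruction summary.
struct InstInfo {
  bool IsLoad = false;
  bool MayStore = false;
  bool IsVolatile = false;
};

// May the load at index LoadIdx be sunk out the bottom of this block?
bool canSinkLoad(const std::vector<InstInfo> &Block, std::size_t LoadIdx) {
  const InstInfo &Load = Block[LoadIdx];
  if (!Load.IsLoad || Load.IsVolatile)
    return false;  // volatile loads must stay where they are
  // Any store between the load and the end of the block could change the
  // loaded value, so its presence blocks the sink.
  for (std::size_t I = LoadIdx + 1; I < Block.size(); ++I)
    if (Block[I].MayStore)
      return false;
  return true;
}

int main() {
  std::vector<InstInfo> B = {{true, false, false}, {false, true, false}};
  return canSinkLoad(B, 0) ? 0 : 1;  // a later store keeps the load in place
}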
Chris Lattner c8226f32e9 Simplify the side effect stuff a bit more and make licm/sinking
both work right according to the new flags.

This removes the TII::isReallySideEffectFree predicate, and adds
TII::isInvariantLoad. 

It removes NeverHasSideEffects+MayHaveSideEffects and adds
UnmodeledSideEffects as machine instr flags.  Now the clients
can decide everything they need.

I think isRematerializable can be implemented in terms of the
flags we have now, though I will let others tackle that.

llvm-svn: 45843
2008-01-10 23:08:24 +00:00
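A small sketch of how a client might use flags like these; the field names follow the commit message, not any real header, so treat them as assumptions. The point of the change is that each pass decides what to do with the raw facts rather than relying on a canned "is side effect free" predicate.

#include <cstdio>

// Stand-in for per-instruction facts as this commit describes them.
struct InstFlags {
  bool UnmodeledSideEffects = false;  // effects the machine model can't see
  bool InvariantLoad = false;         // load from memory that never changes
};

// One question both LICM and sinking can ask of the raw flags.
bool isSafeToMove(const InstFlags &F) {
  if (F.InvariantLoad)
    return true;                    // rereading invariant memory is harmless
  return !F.UnmodeledSideEffects;   // otherwise require no hidden effects
}

int main() {
  InstFlags Plain, Trap, ConstPoolLoad;
  Trap.UnmodeledSideEffects = true;
  ConstPoolLoad.InvariantLoad = true;
  std::printf("%d %d %d\n", isSafeToMove(Plain), isSafeToMove(Trap),
              isSafeToMove(ConstPoolLoad));
}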
Chris Lattner f3bd2cd37c Clamp down on sinking of lots of instructions.
llvm-svn: 45841
2008-01-10 22:35:15 +00:00
Chris Lattner ee61d14bf6 The current impl is really trivial, add some comments about how it can be made better.
llvm-svn: 45625
2008-01-05 06:47:58 +00:00
Chris Lattner d11ca169e7 don't sink anything with side effects; this makes lots of stuff work, but sinks almost nothing.
llvm-svn: 45617
2008-01-05 02:33:22 +00:00
Chris Lattner 6ec78274df fix a common crash.
llvm-svn: 45614
2008-01-05 01:39:17 +00:00
Chris Lattner f3edc09f9b Add a really quick hack at a machine code sinking pass, enabled with --enable-sinking.
It is missing validity checks, so it is known broken.  However, it is powerful enough
to compile this contrived code:

void test1(int C, double A, double B, double *P) {
  double Tmp = A*A+B*B;
  *P = C ? Tmp : A;
}

into:

_test1:
	movsd	8(%esp), %xmm0
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movsd	16(%esp), %xmm1
	mulsd	%xmm1, %xmm1
	mulsd	%xmm0, %xmm0
	addsd	%xmm1, %xmm0
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm0, (%eax)
	ret

instead of:

_test1:
	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	movsd	8(%esp), %xmm1
	movapd	%xmm1, %xmm2
	mulsd	%xmm2, %xmm2
	addsd	%xmm0, %xmm2
	cmpl	$0, 4(%esp)
	je	LBB1_2	# entry
LBB1_1:	# entry
	movapd	%xmm2, %xmm1
LBB1_2:	# entry
	movl	24(%esp), %eax
	movsd	%xmm1, (%eax)
	ret

woo.

llvm-svn: 45570
2008-01-04 07:36:53 +00:00
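For readers following along, a toy sketch of the decision the example above demonstrates, using made-up Instr/Block structs rather than LLVM's: an instruction whose users all live in a single successor, and which has no side effects, can be moved into that successor so it only executes on the paths that actually need its result.

#include <algorithm>
#include <string>
#include <vector>

// Toy IR; each instruction records the one block holding all of its users.
struct Block;
struct Instr {
  std::string Name;
  Block *OnlyUserBlock = nullptr;  // nullptr: users in several blocks
  bool HasSideEffects = false;
};
struct Block {
  std::string Name;
  std::vector<Instr> Insts;
  std::vector<Block *> Succs;
};

// Sink every eligible instruction in BB into the successor that uses it.
void sinkBlock(Block &BB) {
  for (auto I = BB.Insts.begin(); I != BB.Insts.end();) {
    Block *Dest = I->OnlyUserBlock;
    bool CanSink = Dest && Dest != &BB && !I->HasSideEffects &&
                   std::find(BB.Succs.begin(), BB.Succs.end(), Dest) !=
                       BB.Succs.end();
    if (!CanSink) { ++I; continue; }
    Dest->Insts.insert(Dest->Insts.begin(), *I);  // place at top of successor
    I = BB.Insts.erase(I);                        // erase yields the next one
  }
}

int main() {
  Block Entry{"entry"}, Then{"then"};
  Entry.Succs = {&Then};
  Entry.Insts = {{"mul", &Then, false}, {"volatile_store", nullptr, true}};
  sinkBlock(Entry);  // "mul" moves into "then"; the store stays in "entry"
}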