Commit Graph

35 Commits

Author SHA1 Message Date
Nick Lewycky b59008c694 Make use of the exact bit when optimizing '(X >>exact 3) << 1' to eliminate the
'and' that would zero out the trailing bits, and to produce an exact shift
ourselves.

llvm-svn: 147391
2011-12-31 21:30:22 +00:00
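
A minimal LLVM IR sketch of the pattern described above (function names and the choice of i32 are illustrative):

define i32 @src(i32 %x) {
  ; the exact flag asserts no set bits are shifted out,
  ; i.e. the low 3 bits of %x are zero
  %s = lshr exact i32 %x, 3
  %r = shl i32 %s, 1
  ret i32 %r
}

Without the exact bit this needs an 'and' to clear the trailing bit, i.e. (%x >> 2) & -2; with it, a single exact shift suffices:

define i32 @tgt(i32 %x) {
  %r = lshr exact i32 %x, 2
  ret i32 %r
}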
Chad Rosier 43a33066b4 Fix a few more places where TargetData/TargetLibraryInfo is not being passed.
Add FIXMEs to places that are non-trivial to fix.

llvm-svn: 145661
2011-12-02 01:26:24 +00:00
Eli Friedman 530341d748 Make sure to correctly clear the exact/nuw/nsw flags from shifts when they are combined. <rdar://problem/9859829>
llvm-svn: 136435
2011-07-29 00:18:19 +00:00
Eli Friedman 911e12f505 Clean up includes of llvm/Analysis/ConstantFolding.h so it's included where it's used and not included where it isn't.
llvm-svn: 135628
2011-07-20 21:57:23 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Benjamin Kramer f0e3f04470 Balance parentheses.
llvm-svn: 130489
2011-04-29 08:41:23 +00:00
Benjamin Kramer 16f18ed7b5 InstCombine: turn ((C1 << A) << C2) into ((C1 << C2) << A)
Fixes PR9809.

llvm-svn: 130485
2011-04-29 08:15:41 +00:00
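
Sketched in LLVM IR with arbitrary illustrative constants (C1 = 1, C2 = 3): moving the variable shift outermost lets the two constant shifts fold together:

define i32 @src(i32 %a) {
  %t = shl i32 1, %a        ; C1 << A
  %r = shl i32 %t, 3        ; (C1 << A) << C2
  ret i32 %r
}

; (1 << 3) folds to 8, leaving a single shift:
define i32 @tgt(i32 %a) {
  %r = shl i32 8, %a
  ret i32 %r
}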
Chris Lattner 6b657aed33 Enhance a bunch of transformations in instcombine to start generating
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.

Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck.  I believe that this takes care of a bunch of related
code quality issues attached to PR8862.

llvm-svn: 125267
2011-02-10 05:36:31 +00:00
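
One way the inference can fire, as a hedged sketch in LLVM IR (the mask is just an arbitrary way to make the property provable): after the 'and', the low three bits of %m are known zero, so instcombine can prove the lshr discards nothing and tag it exact:

define i32 @f(i32 %x) {
  %m = and i32 %x, -8       ; low 3 bits known zero
  %s = lshr i32 %m, 3       ; inferred: lshr exact i32 %m, 3
  ret i32 %s
}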
Chris Lattner 9e4aa0259f Teach instsimplify some tricks about exact/nuw/nsw shifts.
Improve interfaces to instsimplify to take this info.

llvm-svn: 125196
2011-02-09 17:15:04 +00:00
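
For instance (a minimal sketch, not taken from the commit's test suite): nuw on the shl guarantees no bits were shifted out, so instsimplify can fold the round trip away without creating any new instruction:

define i32 @f(i32 %x) {
  %a = shl nuw i32 %x, 5    ; nuw: no bits are lost
  %b = lshr i32 %a, 5       ; simplifies to %x
  ret i32 %b
}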
Ted Kremenek 3c4408ceb6 Null initialize a few variables flagged by
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being generally nice cleanups.

llvm-svn: 124073
2011-01-23 17:05:06 +00:00
Duncan Sands 7f60dc1eb0 Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example, if X is 2 then the top two bits of the result must be equal, so
the result cannot be an arbitrary value.  I fixed this in the constant folder
as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.

llvm-svn: 123417
2011-01-14 00:37:45 +00:00
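
To see why the old fold was unsound, a hypothetical snippet: folding the ashr below to undef would claim the result can be any value, but if %x is 2 the top bits of the result are all copies of one sign bit, so a value such as 0x20000000 (bit 29 set, bits 30-31 clear) can never be produced:

define i32 @f(i32 %x) {
  %r = ashr i32 undef, %x   ; must not fold to undef
  ret i32 %r
}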
Owen Anderson 226ac14afb When determining if we can fold (x >> C1) << C2, the bits that we need to verify
to be zero are not the current low bits of x, but the bits that WILL be the low bits after the operation completes.

llvm-svn: 122529
2010-12-23 23:56:24 +00:00
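
A sketch of the corrected check in LLVM IR (the mask is an illustrative way of making the relevant bits provably zero): for (%x >> 6) << 4, the bits that end up in the low positions are bits 2..5 of %x, and only when those are known zero may the pair fold to a single shift:

define i32 @f(i32 %x) {
  %m = and i32 %x, -64      ; bits 0..5 known zero, so the fold is safe
  %a = lshr i32 %m, 6
  %b = shl i32 %a, 4        ; folds to: lshr i32 %m, 2
  ret i32 %b
}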
Dan Gohman a32986e899 Really check that the bits that will become zero are actually already zero
before eliminating the operation that zeros them. This fixes rdar://8739316.

llvm-svn: 121353
2010-12-09 02:52:17 +00:00
Benjamin Kramer 94a622af4c The srem -> urem transform is not safe for any divisor that's not a power of two.
E.g. -5 % 5 is 0 with srem and 1 with urem.

Also addresses Frits van Bommel's comments.

llvm-svn: 120049
2010-11-23 20:33:57 +00:00
Benjamin Kramer b5afa65b0a InstCombine: Reduce "X shift (A srem B)" to "X shift (A urem B)" iff B is positive.
This allows transforming the rem in "1 << ((int)x % 8);" into an and.

llvm-svn: 120028
2010-11-23 18:52:42 +00:00
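
The quoted C expression, sketched in LLVM IR (using a power-of-two divisor, per the follow-up fix above): a negative remainder would make the shl undefined anyway, so the srem can become urem, which in turn folds to a mask:

define i32 @f(i32 %x) {
  %r = srem i32 %x, 8       ; -> urem i32 %x, 8 -> and i32 %x, 7
  %s = shl i32 1, %r
  ret i32 %s
}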
Dale Johannesen 0171dc30ff When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.

llvm-svn: 118665
2010-11-10 01:30:56 +00:00
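
In LLVM IR terms (the mask is illustrative): ((%x << 30) >> 24) equals %x << 6 only when bits 2..25 of %x are known zero, since those are the bits that would appear above the copied field; the bug was testing the wrong bit range:

define i32 @f(i32 %x) {
  %m = and i32 %x, 3        ; only bits 0..1 may be set
  %a = shl i32 %m, 30
  %b = lshr i32 %a, 24      ; folds to: shl i32 %m, 6
  ret i32 %b
}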
Owen Anderson 6186c96765 When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.

llvm-svn: 117953
2010-11-01 21:08:20 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vectors into vector
element insertions from the pieces that feed into the vector.
This handles a pattern that occurs frequently in code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
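
A direct-pair sketch in LLVM IR (using i64 rather than the i128 of the real example, and omitting the intermediate nodes elided by the '...'):

define i64 @f(i64 %x) {
  %t = and i64 %x, 65535    ; would-be-shifted-in bits known zero
  %a = shl i64 %t, 42
  %b = lshr i64 %a, 38      ; folds to: shl i64 %t, 4
  ret i64 %b
}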
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
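
The quoted xform, spelled out in LLVM IR for i32 and an arbitrary c = 8: the logical shift pair keeps the low 24 bits and clears the rest, which is exactly a mask:

define i32 @src(i32 %x) {
  %a = shl i32 %x, 8
  %b = lshr i32 %a, 8
  ret i32 %b
}

; becomes:
define i32 @tgt(i32 %x) {
  %b = and i32 %x, 16777215 ; 0x00FFFFFF
  ret i32 %b
}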
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Gabor Greif 4a39b84a9d use ArgOperand API
llvm-svn: 106707
2010-06-24 00:44:01 +00:00
Eric Christopher 7258dcd77f Revert 101465; it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Gabor Greif f375520f7b reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Gabor Greif 403e9694f9 back out r101423 and r101397; they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Gabor Greif 33ae80bff7 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Gabor Greif 9fd00c7d25 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif aafd209632 rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Chris Lattner e112ff64c5 fix a potential overflow issue Eli pointed out.
llvm-svn: 94336
2010-01-23 23:31:46 +00:00
Chris Lattner 249da5cb73 implement a simple instcombine xform that has been in the
readme forever.

llvm-svn: 94318
2010-01-23 18:49:30 +00:00
Chris Lattner 43f2fa6201 my instcombine transformations to make extension elimination more
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x));
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.

llvm-svn: 93787
2010-01-18 22:19:16 +00:00
Chris Lattner d8509424a4 change the preferred canonical form for a sign extension to be
lshr+ashr instead of trunc+sext.  We want to avoid type
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.

llvm-svn: 93107
2010-01-10 07:08:30 +00:00
Chris Lattner 2b459fe7e1 fix indentation of switch statements, no functionality change.
llvm-svn: 93106
2010-01-10 06:59:55 +00:00
Chris Lattner a1e223ea10 teach instcombine to delete sign extending shift pairs (sra(shl X, C), C) when
the input is already sign extended.

llvm-svn: 93019
2010-01-08 19:04:21 +00:00
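
A minimal LLVM IR illustration (the sext is one way to make the "already sign extended" precondition hold): %x carries a sign-extended i8 value, so shifting up and back down by 24 reproduces %x exactly and the shift pair is deleted:

define i32 @f(i8 %v) {
  %x = sext i8 %v to i32
  %a = shl i32 %x, 24
  %b = ashr i32 %a, 24      ; folds to %x
  ret i32 %b
}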
Chris Lattner dc67e13442 split instcombine of shifts out to its own file.
llvm-svn: 92709
2010-01-05 07:44:46 +00:00