Commit Graph

7182 Commits

Author SHA1 Message Date
Dan Gohman f031792cc6 Fix several areas in LSR to do a better job keeping the main
LSRInstance data structures up to date. This fixes some
pessimizations caused by stale data which will be exposed
in an upcoming change.

llvm-svn: 112440
2010-08-29 16:32:54 +00:00
Dan Gohman e9e0873b08 Refactor the three main groups of code out of
NarrowSearchSpaceUsingHeuristics into separate functions.

llvm-svn: 112439
2010-08-29 16:09:42 +00:00
Dan Gohman 37a0f68036 Delete a bogus check.
llvm-svn: 112438
2010-08-29 15:30:29 +00:00
Dan Gohman b6a520d63c Add some comments.
llvm-svn: 112437
2010-08-29 15:27:08 +00:00
Dan Gohman bf673e0652 Move this debug output into GenerateAllReuseFormula, to declutter
the high-level logic.

llvm-svn: 112436
2010-08-29 15:21:38 +00:00
Dan Gohman d366b6d5c8 Delete an unused declaration.
llvm-svn: 112435
2010-08-29 15:19:11 +00:00
Dan Gohman 4f13bbfefc Do one lookup instead of two.
llvm-svn: 112434
2010-08-29 15:18:49 +00:00
Chris Lattner f94f6bb0ba licm preserves the cfg, so it doesn't have to explicitly say it
preserves domfrontier.  It does preserve AA though.

llvm-svn: 112419
2010-08-29 07:02:56 +00:00
Chris Lattner abe61ef3b4 now that it doesn't use the PromoteMemToReg function, LICM doesn't
require DomFrontier.  Dropping this doesn't actually save any runs
of the pass though.

llvm-svn: 112418
2010-08-29 06:49:44 +00:00
Chris Lattner 1dc98b47b5 completely rewrite the memory promotion algorithm in LICM.
Among other things, this uses SSAUpdater instead of 
PromoteMemToReg.

llvm-svn: 112417
2010-08-29 06:43:52 +00:00
Chris Lattner 9c3931a544 use getUniqueExitBlocks instead of a manual set.
llvm-svn: 112412
2010-08-29 05:12:21 +00:00
Chris Lattner 85bf5421e1 reimplement LICM::sink to use SSAUpdater instead of PromoteMemToReg.
This leads to much simpler code.

llvm-svn: 112410
2010-08-29 04:55:06 +00:00
Chris Lattner c3fb03e289 implement SSAUpdater::RewriteUseAfterInsertions, a helpful form of RewriteUse.
llvm-svn: 112409
2010-08-29 04:54:06 +00:00
Chris Lattner b50407f104 remove dead proto
llvm-svn: 112408
2010-08-29 04:53:24 +00:00
Chris Lattner cd96b4df56 reduce indentation in LICM::sink by using early exits, use
getUniqueExitBlocks instead of getExitBlocks and a manual
set to eliminate dupes.

llvm-svn: 112405
2010-08-29 04:28:20 +00:00
Chris Lattner 188cc5a0fc modernize this pass a bit: use efficient set/map and reduce indentation.
llvm-svn: 112404
2010-08-29 04:23:04 +00:00
Chris Lattner 13ee795c42 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner 50df36ac0a for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner 95bb297c26 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vectors into vector
element insertions from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Benjamin Kramer 83f9ff0452 Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValueInfo to simplify PHIs and selects.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
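
For the shift-propagation change above (llvm-svn 112314), a minimal IR sketch of the kind of pair it targets; the function and value names are hypothetical, the syntax is modern LLVM IR rather than the 2010-era syntax, and the real motivation was long i128 chains:

  define i64 @shift_pair(i64 %x) {
    %a = shl i64 %x, 42
    %b = lshr i64 %a, 38
    ; iff the would-be-shifted-in bits are already zero, the pair
    ; folds to a single shift:  %b = shl i64 %x, 4
    ret i64 %b
  }
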
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
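
A small sketch of the classic fold named above, (x << c) >> c -> and, with a hypothetical function name; the point of llvm-svn 112304 is that the same reasoning now survives intermediate logical operations between the two shifts:

  define i32 @clear_high_bits(i32 %x) {
    %s = shl i32 %x, 8
    %t = lshr i32 %s, 8
    ; equivalent to:  %t = and i32 %x, 16777215   ; i.e. x & 0x00FFFFFF
    ret i32 %t
  }
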
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Owen Anderson 99d4cb861b Fix typos in comments.
llvm-svn: 112286
2010-08-27 20:32:56 +00:00
Chris Lattner 7398434675 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
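
A minimal sketch of the kind of chain llvm-svn 112285 can now narrow (hypothetical names): the zext result type (i32) is wider than the final trunc type (i8), yet the whole computation can be done in i8:

  define i8 @narrow_chain(i8 %x) {
    %e = zext i8 %x to i32
    %a = add i32 %e, 5
    %t = trunc i32 %a to i8
    ; foldable to:  %t = add i8 %x, 5
    ret i8 %t
  }
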
Chris Lattner 90cd746e63 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Owen Anderson 6ebbd92380 Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Chris Lattner bfd2228182 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
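
A sketch of the "integer extraction out of the middle of a vector" pattern that llvm-svn 112232 recognizes, assuming a little-endian element layout and hypothetical names:

  define i32 @extract_mid(<4 x i32> %v) {
    %big = bitcast <4 x i32> %v to i128
    %shr = lshr i128 %big, 32
    %t = trunc i128 %shr to i32
    ; recognized as:  %t = extractelement <4 x i32> %v, i32 1
    ret i32 %t
  }
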
Chris Lattner d4ebd6df5a optimize bitcast(trunc(bitcast(x))), where the result is a float and 'x'
is a vector, into a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Owen Anderson bd2ecc7e68 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Dan Gohman ca26f79051 Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Daniel Dunbar ce45863f0d Revert r111922, "MapValue support for MDNodes. This is similar to r109117,
except ...", it is causing *massive* performance regressions when building Clang
with itself (-O3 -g).

llvm-svn: 112158
2010-08-26 03:48:11 +00:00
Daniel Dunbar 95fe13c720 Revert r112091, "Remap metadata attached to instructions when remapping
individual ...", which depends on r111922, which I am reverting.

llvm-svn: 112157
2010-08-26 03:48:08 +00:00
Chris Lattner 07afbd5a08 zap dead code.
llvm-svn: 112130
2010-08-26 01:13:54 +00:00
Dan Gohman 8f292e7a6d Rewrite ExtractGV, removing a bunch of stuff that didn't fully work
and was over-complicated, and replacing it with a simple implementation.

llvm-svn: 112120
2010-08-26 00:22:55 +00:00
Chris Lattner 8df99b523e remove some llvmcontext arguments that are now dead post-refactoring.
llvm-svn: 112104
2010-08-25 23:00:45 +00:00
Dan Gohman fd824487a3 Remap metadata attached to instructions when remapping individual
instructions, not when remapping modules.

llvm-svn: 112091
2010-08-25 21:36:50 +00:00
Devang Patel 01262e129e DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by FE.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Dan Gohman a209503467 Use MapValue in the Linker instead of having a private function
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.

llvm-svn: 111941
2010-08-24 18:50:07 +00:00
Owen Anderson 7c853e877e Turn LVI on, previously detected failures should be fixed now.
llvm-svn: 111923
2010-08-24 17:21:18 +00:00
Dan Gohman 6901283544 MapValue support for MDNodes. This is similar to r109117, except
that it avoids a lot of unnecessary cloning by avoiding remapping
MDNode cycles when none of the nodes in the cycle actually need to
be remapped. Also it uses the new temporary MDNode mechanism.

llvm-svn: 111922
2010-08-24 17:10:10 +00:00
Owen Anderson 6ffa3f2aea Turn LVI back off, I have a testcase now.
llvm-svn: 111834
2010-08-23 19:59:27 +00:00
Owen Anderson 630add39a6 Re-enable LazyValueInfo. Monitoring for failures.
llvm-svn: 111816
2010-08-23 18:12:23 +00:00
Owen Anderson d31d82d75c Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API.
llvm-svn: 111815
2010-08-23 17:52:01 +00:00
Owen Anderson 84c29a096b Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson 43057cd56a Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson bb723b228a When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only overwrite the affected bytes.

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
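
A minimal sketch of the store narrowing described in llvm-svn 111568 (later reverted and reapplied), assuming a little-endian target, modern opaque-pointer IR syntax, and hypothetical names; the read-modify-write only touches the low byte, so the store can be shrunk:

  define void @set_low_byte(ptr %p) {
    %old = load i32, ptr %p
    %clr = and i32 %old, -256        ; clear the low byte (0xFFFFFF00)
    %new = or i32 %clr, 42           ; write a new low byte
    store i32 %new, ptr %p
    ; narrowed to:  store i8 42, ptr %p
    ret void
  }
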
Owen Anderson aac8cbb261 Disable LVI while I evaluate a failure.
llvm-svn: 111551
2010-08-19 19:47:08 +00:00
Owen Anderson 5c87dd55d3 Tentatively enabled LVI by default. I'll be monitoring for any failures.
llvm-svn: 111543
2010-08-19 19:04:40 +00:00
Dan Gohman 129a816ee6 Process the step before the start, because it's usually the simpler
of the two.

llvm-svn: 111495
2010-08-19 01:02:31 +00:00
Owen Anderson 208636fa33 Inform LazyValueInfo whenever a block is deleted, to avoid dangling pointer issues.
llvm-svn: 111382
2010-08-18 18:39:01 +00:00
Chris Lattner 3c603024bb Fix PR7755: knowing something about an inval for a pred
from the LHS should disable reconsidering that pred on the
RHS.  However, knowing something about the pred on the RHS
shouldn't prevent subsequent additions on the RHS from
happening.

llvm-svn: 111349
2010-08-18 03:14:36 +00:00
Chris Lattner f0b5b67ba5 fit in 80 cols
llvm-svn: 111348
2010-08-18 03:13:35 +00:00
Chris Lattner b45de95345 remove some dead code.
llvm-svn: 111344
2010-08-18 02:41:56 +00:00
Chris Lattner 6aabb66139 remove dead prototype.
llvm-svn: 111342
2010-08-18 02:37:06 +00:00
Eric Christopher 51edc7b7e1 Temporarily revert r110987 as it's causing some miscompares in
vector heavy code.  I'll re-enable when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Dan Gohman 5047ca0c02 When rotating loops, put the original header at the bottom of the
loop, making the resulting loop significantly less ugly.  Also, zap
its trivial PHI nodes, since it's easy.

llvm-svn: 111255
2010-08-17 17:39:21 +00:00
Dan Gohman 941020ed72 Use the getUniquePredecessor() utility function, instead of doing
what it does manually.

llvm-svn: 111248
2010-08-17 17:07:02 +00:00
Evan Cheng 8b637b177c Add an option to disable codegen prepare critical edge splitting. In theory, PHI elimination is already doing all (most?) of the splitting needed. But machine-licm and machine-sink seem to miss some important optimizations when splitting is disabled.
llvm-svn: 111224
2010-08-17 01:34:49 +00:00
Dan Gohman 89fdbaf99a Instead of having CollectSubexprs categorize operands as interesting or
uninteresting, just put all the operands on one list and make
GenerateReassociations make the decision about what's interesting.
This is simpler, and it avoids an extra ScalarEvolution::getAddExpr call.

llvm-svn: 111133
2010-08-16 15:50:00 +00:00
Dan Gohman 9b7632df26 Put add operands in ScalarEvolution-canonical order, when convenient.
This isn't necessary, because ScalarEvolution sorts them anyway,
but it's tidier this way.

llvm-svn: 111132
2010-08-16 15:39:27 +00:00
Dan Gohman 6e964c7fb4 Avoid #include <ScalarEvolution.h> in LoopSimplify.cpp, which doesn't
actually use ScalarEvolution.

llvm-svn: 111124
2010-08-16 14:44:03 +00:00
Dan Gohman 250b754428 Instead, teach SimplifyCFG to trim non-address-taken blocks from
indirectbr destination lists.

llvm-svn: 111122
2010-08-16 14:41:14 +00:00
Dan Gohman aa445c0751 LoopSimplify shouldn't split loop backedges that use indirectbr. PR7867.
llvm-svn: 111061
2010-08-14 00:43:09 +00:00
Dan Gohman 4a63fad976 Teach SimplifyCFG how to simplify indirectbr instructions.
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.

Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.

llvm-svn: 111060
2010-08-14 00:29:42 +00:00
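
A sketch of the two indirectbr simplifications from llvm-svn 111060 in one hypothetical function (modern IR syntax): the duplicate successor is dropped, and a single-destination indirectbr becomes a plain branch:

  define void @one_dest(ptr %addr) {
  entry:
    indirectbr ptr %addr, [label %only, label %only]
    ; simplifies to:  br label %only
  only:
    ret void
  }
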
Dan Gohman 081ffcd00b Fix LSR's ExtractImmediate and ExtractSymbol to avoid calling
ScalarEvolution::getAddExpr, which can be pretty expensive, when nothing
has changed, which is pretty common.

llvm-svn: 111042
2010-08-13 21:17:19 +00:00
Nate Begeman 2a0ca3e937 Reapply this transformation now that it is passing the external test which it previously failed.
llvm-svn: 110987
2010-08-13 00:17:53 +00:00
Chris Lattner 363226dfe8 fix PR7876: If ipsccp decides that a function's address is taken
before it rewrites the code, we need to use that in the post-rewrite pass.

llvm-svn: 110962
2010-08-12 22:25:23 +00:00
Eric Christopher ac40d49c70 Temporarily revert 110737 and 110734, they were causing failures
in an external testsuite.

llvm-svn: 110905
2010-08-12 07:01:22 +00:00
Nate Begeman 265363061e Add the minimal amount of smarts necessary for instcombine of shufflevectors to recognize
patterns generated by clang for transpose of a matrix in generic vectors.  This is made
of two parts:

1) Propagating vector extracts of hi/lo half into their users
2) Recognizing an insertion of even elements followed by the odd elements as an unpack.

Testcase to come, but this shrinks the # of shuffle instructions generated on x86 from ~40 to the minimal 8.

llvm-svn: 110734
2010-08-10 21:38:12 +00:00
Nick Lewycky f0067b668c Fix a use after free error caught by the valgrind builders.
llvm-svn: 110601
2010-08-09 21:03:28 +00:00
Eli Friedman f99e7e6643 PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.

llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Nick Lewycky fbd2757cde Do more to modernize MergeFunctions. Refactor in response to Chris' code review.
llvm-svn: 110538
2010-08-08 05:04:23 +00:00
Owen Anderson 0398607714 Don't attempt to PRE inline asm calls, since we don't value number them yet. Fixes PR7835.
llvm-svn: 110489
2010-08-07 00:20:35 +00:00
Dan Gohman 0f7892b8ae Eliminate PromoteMemoryToRegisterID; just use addPreserved("mem2reg")
instead, as an example of what this looks like.

llvm-svn: 110478
2010-08-06 21:48:06 +00:00
Owen Anderson a7aed18624 Reapply r110396, with fixes to appease the Linux buildbot gods.
llvm-svn: 110460
2010-08-06 18:33:48 +00:00
Nick Lewycky 5a2849e166 Fix uninitialized variable warning.
Also move 'default' case next to a real case to help compiler optimize in
non-Debug builds.
No functionality change.

llvm-svn: 110435
2010-08-06 07:43:46 +00:00
Nick Lewycky f216f69ad9 Work in progress, cleaning up MergeFuncs.
Further clean up the comparison function by removing overly generalized
"domains".
Remove all understanding of ELF aliases and simplify folding code and comments.

llvm-svn: 110434
2010-08-06 07:21:30 +00:00
Owen Anderson bda59bd247 Revert r110396 to fix buildbots.
llvm-svn: 110410
2010-08-06 00:23:35 +00:00
Owen Anderson 755aceb5d0 Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.

llvm-svn: 110396
2010-08-05 23:42:04 +00:00
Owen Anderson 4674dd6cf5 Give JumpThreading+LVI a long-form cl::opt so that it's easier to toggle the default.
llvm-svn: 110384
2010-08-05 22:11:31 +00:00
Owen Anderson 9f2bca02d7 Experiments show that we can safely increase our unrolling threshold without unduly impacting code size, particularly
since unrolling is not enabled at -Os.

llvm-svn: 110233
2010-08-04 18:32:46 +00:00
Dan Gohman ba81fc16a5 Fix whitespace.
llvm-svn: 110223
2010-08-04 17:43:57 +00:00
Dan Gohman 839c972102 Fix a comment.
llvm-svn: 110181
2010-08-04 01:16:35 +00:00
Dan Gohman 5442c71f2e Thread const correctness through a bunch of AliasAnalysis interfaces and
eliminate several const_casts.

Make CallSite implicitly convertible to ImmutableCallSite.

Rename the getModRefBehavior for intrinsic IDs to
getIntrinsicModRefBehavior to avoid overload ambiguity with CallSite,
which happens to be implicitly convertible to bool.

llvm-svn: 110155
2010-08-03 21:48:53 +00:00
Dan Gohman 3619660529 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
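
A tiny sketch of the effect of llvm-svn 110128, with hypothetical names and modern IR syntax; a 4-byte ABI alignment for i32 is assumed here:

  define i32 @load_no_align(ptr %p) {
    ; written with no explicit alignment (encoded as alignment 0):
    %v = load i32, ptr %p
    ; instcombine makes the ABI alignment explicit:
    ;   %v = load i32, ptr %p, align 4
    ret i32 %v
  }
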
Peter Collingbourne ddaaf40d24 Add an atomic lowering pass
llvm-svn: 110113
2010-08-03 16:19:16 +00:00
Dan Gohman 35e8a6209d Use unary + instead of a separate local variable for working
around std::min vs static const friction.

llvm-svn: 110112
2010-08-03 16:15:50 +00:00
Owen Anderson 8f306a779b Re-apply the infamous r108614, with a fix pointed out by Dirk Steinke.
llvm-svn: 110036
2010-08-02 09:32:13 +00:00
Oscar Fuentes 40b31ad3ee Prefix `next' iterator operation with `llvm::'.
Fixes potential ambiguity problems on VS 2010.

Patch by nobled!

llvm-svn: 110029
2010-08-02 06:00:15 +00:00
Daniel Dunbar c1b09c8644 Fix a -Wreorder warning.
llvm-svn: 110022
2010-08-02 05:43:46 +00:00
Nick Lewycky f52bd9cc33 Work in progress.
Start cleaning up MergeFunctions to look more like the rest of LLVM. The
primary change here is to move the methods responsible for comparison into the
new FunctionComparator object. Some comments added. There's more to do.

llvm-svn: 110021
2010-08-02 05:23:03 +00:00
Daniel Dunbar 0b636a24c7 Speculatively revert r108614, "Another attempt at getting the clang self-host to
like my instcombine patch.", in an attempt to fix Clang i386 bootstrap.
 - Also PR7719.

llvm-svn: 109953
2010-07-31 19:51:11 +00:00
Rafael Espindola 40f18838b7 The BlockExtractorPass() constructor was not reading the BlockFile and that was
exactly what bugpoint expected it to do.

There was also only one user of
BlockExtractorPass(const std::vector<BasicBlock*> &B), so just remove it and
make BlockExtractorPass read BlockFile.

This fixes bugpoint's block extraction.

Nick, please review.

llvm-svn: 109936
2010-07-31 00:32:17 +00:00
Dan Gohman d566d2c7b5 Move MaximumAlignment to be a member of the Value class.
llvm-svn: 109891
2010-07-30 21:07:05 +00:00
Nick Lewycky 299c6dfcbf Add missing newline to debug statement.
llvm-svn: 109886
2010-07-30 20:27:01 +00:00
Eli Friedman 0428a61e45 PR7750: !CExpr->isNullValue() only properly computes whether CExpr is nonnull
if CExpr is a ConstantInt.

llvm-svn: 109773
2010-07-29 18:03:33 +00:00
Gabor Greif 62f0aac99d simplify by using CallSite constructors; virtually eliminates CallSite::get from the tree
llvm-svn: 109687
2010-07-28 22:50:26 +00:00
Dan Gohman a7e5a24093 Define a maximum supported alignment value for load, store, and
alloca instructions (constrained by their internal encoding),
and add error checking for it. Fix an instcombine bug which
generated huge alignment values (null is infinitely aligned).
This fixes undefined behavior noticed by John Regehr.

llvm-svn: 109643
2010-07-28 20:12:04 +00:00
Dan Gohman 9cd20bf792 When user code intentionally dereferences null, the alignment of the
dereference is theoretically infinite. Put a cap on the computed
alignment to avoid overflow, noticed by John Regehr.

llvm-svn: 109596
2010-07-28 17:14:23 +00:00
Gabor Greif f0084e1333 simplify
llvm-svn: 109589
2010-07-28 15:52:43 +00:00
Gabor Greif 0a970698da use Value* constructor of CallSite to create potentially improper site, and test that
llvm-svn: 109581
2010-07-28 14:28:18 +00:00
Gabor Greif f159085414 recommit simplification (r109502, backed out in r109509); seems to be innocent
llvm-svn: 109510
2010-07-27 16:44:23 +00:00
Gabor Greif 5f91b7cf3e back out this too to restore the bots
llvm-svn: 109509
2010-07-27 15:56:07 +00:00
Gabor Greif 7b0a5fd2a5 simplify: CallSite::get --> CallSite constructor
llvm-svn: 109506
2010-07-27 15:02:37 +00:00
Gabor Greif 7527b2ed5c simplify
llvm-svn: 109502
2010-07-27 13:31:22 +00:00
Owen Anderson aa7f66ba67 Add an initial implementation of LazyValueInfo updating for JumpThreading. Disabled for now.
llvm-svn: 109424
2010-07-26 18:48:03 +00:00
Dan Gohman 0141c13b22 Remove LCSSA's bogus dependence on LoopSimplify and LoopSimplify's bogus
dependence on DominanceFrontier. Instead, add an explicit DominanceFrontier
pass in StandardPasses.h to ensure that it gets scheduled at the right
time.

Declare that loop unrolling preserves ScalarEvolution, and shuffle some
getAnalysisUsages.

This eliminates one LoopSimplify and one LCSSA run in the standard
compile opts sequence.

llvm-svn: 109413
2010-07-26 18:11:16 +00:00
Dan Gohman a7908ae369 Preserve ScalarEvolution in the loop unroller.
llvm-svn: 109412
2010-07-26 18:02:06 +00:00
Dan Gohman 65b257c9d2 Use DominatorTree::properlyDominates instead of dominates with an
explicit inequality check.

llvm-svn: 109401
2010-07-26 17:37:36 +00:00
Dan Gohman 31f73ef210 A block dominates itself, by definition.
llvm-svn: 109400
2010-07-26 17:35:32 +00:00
Nick Lewycky 7bc0443f2b Revert this because we can't clone cyclic MDNodes which are created during a
build of llvm-gcc.

llvm-svn: 109355
2010-07-24 20:54:02 +00:00
Nick Lewycky 14b69d59dd Whether function-local or not, an MDNode may reference a Function, in which case
it needs to be mapped to refer to the function in the new module, not the old
one. Fixes PR7700.

llvm-svn: 109353
2010-07-24 19:43:25 +00:00
Devang Patel 5fa3813329 Speculatively revert 109117
llvm-svn: 109132
2010-07-22 18:44:00 +00:00
Gabor Greif 59f9970ba5 keep in 80 cols
llvm-svn: 109122
2010-07-22 17:18:03 +00:00
Devang Patel fac440cfb6 Map MDNode correctly.
A non-function-local MDNode can have an operand which is cloned by MapValue().

llvm-svn: 109117
2010-07-22 16:35:00 +00:00
Gabor Greif dde79d8f1a mass elimination of reliance on automatic iterator dereferencing
llvm-svn: 109103
2010-07-22 13:36:47 +00:00
Gabor Greif 84012a93ef simplify
llvm-svn: 109101
2010-07-22 13:07:39 +00:00
Gabor Greif b8686360a1 do not access arguments via low-level interface, do not multiply dereference use_iterators
llvm-svn: 109100
2010-07-22 13:04:32 +00:00
Gabor Greif 10bb1f5462 pass dereferenced iterator to dyn_cast
llvm-svn: 109099
2010-07-22 11:48:35 +00:00
Gabor Greif 36f25dfd33 pass dereferenced iterator to dyn_cast
llvm-svn: 109098
2010-07-22 11:43:44 +00:00
Gabor Greif 3e44ea1917 undo 80 column trespassing I caused
llvm-svn: 109092
2010-07-22 10:37:47 +00:00
Dan Gohman 2637cc1a38 Make NamedMDNode not be a subclass of Value, and simplify the interface
for creating and populating NamedMDNodes.

llvm-svn: 109061
2010-07-21 23:38:33 +00:00
Owen Anderson a57b97e7e7 Fix batch of converting RegisterPass<> to INITIALIZE_PASS().
llvm-svn: 109045
2010-07-21 22:09:45 +00:00
Dan Gohman afbe4a7a10 Make this code a little more readable.
llvm-svn: 108968
2010-07-20 23:49:44 +00:00
Dan Gohman 7373bd9973 Use DebugLocs instead of MDNodes.
llvm-svn: 108967
2010-07-20 23:49:05 +00:00
Dan Gohman b22dd85bb3 Fix a typo.
llvm-svn: 108962
2010-07-20 23:10:36 +00:00
Dan Gohman 5c2e65b7bf Don't look up the "dbg" metadata kind by name.
llvm-svn: 108961
2010-07-20 23:09:34 +00:00
Dan Gohman d2c7e52d05 Use getDebugLoc and setDebugLoc instead of getDbgMetadata and setDbgMetadata,
avoiding MDNode overhead.

llvm-svn: 108909
2010-07-20 20:09:07 +00:00
Dan Gohman 12725c7d46 Remember that the induction variable is always a PHINode and
use getIncomingValueForBlock instead of
LoopInfo::getCanonicalInductionVariableIncrement.

llvm-svn: 108865
2010-07-20 17:18:52 +00:00
Owen Anderson 84774eda4b Tweak per Chris' comments.
llvm-svn: 108736
2010-07-19 19:23:32 +00:00
Owen Anderson 32a58342ed Reimplement r108639 in InstCombine rather than DAGCombine.
llvm-svn: 108687
2010-07-19 08:09:34 +00:00
Owen Anderson 7d2818b073 Another attempt at getting the clang self-host to like my instcombine patch.
llvm-svn: 108614
2010-07-17 06:56:35 +00:00
Chris Lattner 27e997a168 eliminate unlockedRefineAbstractTypeTo; types are all per-llvmcontext,
so there is no locking involved in type refinement.

llvm-svn: 108553
2010-07-16 20:50:13 +00:00
Dan Gohman efd7f9c360 Reorder the contents of various getAnalysisUsage functions, eliminating
a redundant loopsimplify run from the default -O2 sequence.

llvm-svn: 108539
2010-07-16 17:58:45 +00:00
Owen Anderson 8a39c807e2 Remove the rest of my instcombine changes. Back to the drawing board on this one.
llvm-svn: 108530
2010-07-16 16:39:00 +00:00
Gabor Greif 6d673953e3 eliminate CallInst::ArgOffset
llvm-svn: 108522
2010-07-16 09:38:02 +00:00
Nick Lewycky 375efe3157 Arrays and vectors with different numbers of elements are not equivalent.
llvm-svn: 108517
2010-07-16 06:31:12 +00:00
Eric Christopher 15a81cddb4 Also revert 108422, it's causing some test failures.
Working on testcases for Owen.

llvm-svn: 108494
2010-07-16 01:36:12 +00:00
Dan Gohman 1415208292 Don't merge uses when they are targeting fixup sites with
different widths. In a use with a narrower fixup, formulae
may be wider than the fixup, in which case the high bits
aren't necessarily meaningful, so it isn't safe to reuse
them for uses with wider fixups.

This fixes PR7618, though the testcase is too large for a
reasonable regression test, since it heavily depends on
hitting LSR's heuristics in a certain way.

llvm-svn: 108455
2010-07-15 20:24:58 +00:00
Dan Gohman a1501b9c50 Use dbgs() instead of errs() in a DEBUG.
llvm-svn: 108453
2010-07-15 20:12:42 +00:00
Owen Anderson eaf64d5c1e Speculatively revert r108429 to fix the clang self-host.
llvm-svn: 108436
2010-07-15 18:18:57 +00:00
Owen Anderson eb08d01061 Per Chris' suggestion, get rid of the select canonicalization and just add
the corresponding or-icmp-and pattern.  This has the added benefit of doing
the matching earlier, and thus being less susceptible to being confused by
earlier transforms.

llvm-svn: 108429
2010-07-15 17:24:23 +00:00
Owen Anderson 13700ebb02 Remove unneeded check, and correct style.
llvm-svn: 108427
2010-07-15 16:38:22 +00:00
Dan Gohman 4afd412d6b Watch out for a constant offset cancelling out a base register, forming
a zero. This situation arises in Fortran code with induction variables
that start at 1 instead of 0. This fixes PR7651.

llvm-svn: 108424
2010-07-15 15:14:45 +00:00
Owen Anderson 7151dfd48a Reapply r108378, with bugfixes, testcase, and improved comment formatting.
This now passes LIT, the nightly test, and llvm-gcc bootstrap on my machine.

llvm-svn: 108422
2010-07-15 15:00:23 +00:00
Nick Lewycky 485ce5a49c This is a full sentence.
llvm-svn: 108418
2010-07-15 06:51:22 +00:00
Nick Lewycky e6f3287cbb Disable aliases on all platforms.
llvm-svn: 108417
2010-07-15 06:48:56 +00:00
Chris Lattner e41ab07c61 make various clients of ReplaceAndSimplifyAllUses tolerate
it *changing* the things it replaces, not just causing them
to drop to null.  There is no functionality change yet, but 
this is required for a subsequent patch.

llvm-svn: 108414
2010-07-15 06:06:04 +00:00
Eli Friedman a8b4e3732b Speculatively revert r108378; may be causing bootstrap failures.
llvm-svn: 108389
2010-07-15 00:33:00 +00:00
Owen Anderson 37d91d84af Add instcombine transforms to optimize tests of multiple bits of the same value into a single larger comparison.
llvm-svn: 108378
2010-07-14 23:33:51 +00:00
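
A sketch of the kind of multi-bit test that llvm-svn 108378 folds into a single wider comparison (hypothetical names; this is one plausible instance of the pattern, not the exact test from the patch):

  define i1 @both_bits_clear(i32 %x) {
    %a = and i32 %x, 1
    %b = and i32 %x, 2
    %ca = icmp eq i32 %a, 0
    %cb = icmp eq i32 %b, 0
    %r = and i1 %ca, %cb
    ; folds to:  %m = and i32 %x, 3
    ;            %r = icmp eq i32 %m, 0
    ret i1 %r
  }
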
Owen Anderson 2cfe91379b Extend SimplifyCFG's common-destination folding heuristic to allow a single
"bonus" instruction to be speculatively executed.  Add a heuristic to
ensure we're not tripping up out-of-order execution by checking that this bonus
instruction only uses values that were already guaranteed to be available.

This allows us to eliminate the short circuit in (x&1)&&(x&2).

llvm-svn: 108351
2010-07-14 19:52:16 +00:00
Chris Lattner ec0e7b1643 revert r108320, I see the failures now...
llvm-svn: 108322
2010-07-14 06:16:35 +00:00
Chris Lattner 658680b2f5 reapply benjamin's instcombine patch; I don't see anything wrong with it and can't repro any problems with a manual self-host.
llvm-svn: 108320
2010-07-14 05:59:13 +00:00
Eric Christopher ea282034b6 Grammar.
llvm-svn: 108252
2010-07-13 18:27:13 +00:00
Duncan Sands f88a284579 Handle the case of a tail recursion in which the tail call is followed
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant.  This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.

llvm-svn: 108241
2010-07-13 15:41:41 +00:00
Benjamin Kramer 8f36402ac2 Nope, still breaks the release selfhost bots :(
llvm-svn: 108153
2010-07-12 16:38:48 +00:00
Benjamin Kramer 07b695e052 Reapply the "or" half of r108136, which seems to be less problematic.
llvm-svn: 108152
2010-07-12 16:15:48 +00:00
Gabor Greif 1b787df129 cache result of operator*
llvm-svn: 108150
2010-07-12 15:48:26 +00:00
Benjamin Kramer c719e8ae9e Revert r108141 again, sigh.
llvm-svn: 108148
2010-07-12 14:42:04 +00:00
Gabor Greif 96fedcb136 cache result of operator*
llvm-svn: 108147
2010-07-12 14:15:58 +00:00
Gabor Greif f9c38b5a45 cache result of operator*
llvm-svn: 108146
2010-07-12 14:15:10 +00:00
Gabor Greif 88dd73b75e cache result of operator*
llvm-svn: 108145
2010-07-12 14:14:03 +00:00
Gabor Greif a75ed761a9 cache result of operator*
llvm-svn: 108144
2010-07-12 14:13:15 +00:00
Gabor Greif 15445db11b cache results of operator*
llvm-svn: 108143
2010-07-12 14:12:11 +00:00
Gabor Greif a5fa885d47 cache results of operator*
llvm-svn: 108142
2010-07-12 14:10:24 +00:00
Benjamin Kramer f578c36035 Reapply 108136 with an ugly pasto fixed.
llvm-svn: 108141
2010-07-12 13:44:00 +00:00
Benjamin Kramer 11743249e6 Move optimization to avoid redundant matching.
llvm-svn: 108140
2010-07-12 13:34:22 +00:00
Benjamin Kramer 9675e759cf Revert r108136 until I figure out why it broke selfhost.
llvm-svn: 108139
2010-07-12 12:35:49 +00:00
Gabor Greif 782f62412f cache dereferenced iterators
llvm-svn: 108138
2010-07-12 12:03:02 +00:00
Gabor Greif 433b975fe2 recommit r108131 (which had been backed out in r108135) with a fix
llvm-svn: 108137
2010-07-12 12:02:10 +00:00
Benjamin Kramer 35473faa50 instcombine: fold (x & y) | (~x & z) and (x & y) ^ (~x & z) into ((y ^ z) & x) ^ z which is one instruction shorter. (PR6773)
before:
  %and = and i32 %y, %x
  %neg = xor i32 %x, -1
  %and4 = and i32 %z, %neg
  %xor = xor i32 %and4, %and

after:
  %xor1 = xor i32 %z, %y
  %and2 = and i32 %xor1, %x
  %xor = xor i32 %and2, %z

llvm-svn: 108136
2010-07-12 11:54:45 +00:00
Gabor Greif f9610827ce back out r108131 (of TailDuplication.cpp) for now, it causes a buildbot failure
llvm-svn: 108135
2010-07-12 11:32:39 +00:00
Gabor Greif 6143704ac5 cache dereferenced iterators
llvm-svn: 108134
2010-07-12 11:19:24 +00:00
Gabor Greif 8629f12bb8 cache dereferenced iterators
llvm-svn: 108133
2010-07-12 10:59:23 +00:00
Gabor Greif d993402df3 cache dereferenced iterators
llvm-svn: 108132
2010-07-12 10:49:54 +00:00
Gabor Greif 2a464d7308 cache dereferenced iterators
llvm-svn: 108131
2010-07-12 10:36:48 +00:00
Duncan Sands 41b4a6b36a Convert some tab stops into spaces.
llvm-svn: 108130
2010-07-12 08:16:59 +00:00
Chris Lattner 601e390a3b make the prototypes for CreateMalloc and CreateFree more consistent. Patch
by Hans Vandierendonck from PR7605

llvm-svn: 108116
2010-07-12 00:57:28 +00:00
Chris Lattner bbc25ff5cc if jump threading is able to infer interesting values on both
the LHS and RHS of an and/or instruction, don't add known
predecessor values multiple times.  This fixes the crash on the
testcase from PR7498

llvm-svn: 108114
2010-07-12 00:47:34 +00:00
Duncan Sands 82b21c086e The accumulator tail recursion transform claims to work for any associative
operation, but the way it's implemented requires the operation to also be
commutative.  So add a check for commutativity (and tweak the corresponding
comments).  This makes no difference in practice since every associative
LLVM instruction is also commutative!  Here's an example to show the need
for commutativity: the accum_recursion.ll testcase calculates the factorial
function.  Before the transformation the result of a call is
  ((((1*1)*2)*3)...)*x
while afterwards it is
  (((1*x)*(x-1))...*2)*1
which clearly requires both associativity and commutativity of * to be equal
to the original.

llvm-svn: 108056
2010-07-10 20:31:42 +00:00
Gabor Greif 9d5ae03404 cache result of operator*
llvm-svn: 107990
2010-07-09 16:51:20 +00:00
Gabor Greif fd8e7d4a0f cache result of operator*
llvm-svn: 107984
2010-07-09 16:31:08 +00:00
Gabor Greif e7650c7c29 cache result of operator*
llvm-svn: 107983
2010-07-09 16:26:41 +00:00
Gabor Greif 04af1e4f65 cache result of operator*
llvm-svn: 107981
2010-07-09 16:17:52 +00:00
Gabor Greif e82532a1c5 cache result of operator*
llvm-svn: 107976
2010-07-09 15:40:10 +00:00
Gabor Greif 6d8870fc35 cache result of operator*
llvm-svn: 107975
2010-07-09 15:25:42 +00:00
Gabor Greif 329c4d8ed9 cache result of operator*
llvm-svn: 107974
2010-07-09 15:25:09 +00:00
Gabor Greif 0028cc6730 cache result of operator*
llvm-svn: 107972
2010-07-09 15:01:36 +00:00
Gabor Greif d323f5e161 cache result of operator* (found by inspection)
llvm-svn: 107971
2010-07-09 14:48:08 +00:00
Gabor Greif b0d56ffc85 cache result of operator*
llvm-svn: 107969
2010-07-09 14:36:49 +00:00
Gabor Greif 4247949ce9 cache result of operator*
llvm-svn: 107968
2010-07-09 14:29:14 +00:00
Gabor Greif a02f232c1b cache result of operator*
llvm-svn: 107966
2010-07-09 14:18:23 +00:00
Gabor Greif f0821f39ee cache operator*'s result (in multiple functions)
llvm-svn: 107965
2010-07-09 14:02:13 +00:00
Gabor Greif 60a346d0f1 do not repeatedly dereference use_iterator
llvm-svn: 107962
2010-07-09 12:23:50 +00:00
Benjamin Kramer 2321e6a4d4 Teach instcombine to transform
(X >s -1) ? C1 : C2 and (X <s  0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.

This optimization could be extended to take non-const C1 and C2, but we'd better
stay conservative to avoid code size bloat for now.

for
int sel(int n) {
     return n >= 0 ? 60 : 100;
}

we now generate
  sarl  $31, %edi
  andl  $40, %edi
  leal  60(%rdi), %eax

instead of
  testl %edi, %edi
  movl  $60, %ecx
  movl  $100, %eax
  cmovnsl %ecx, %eax

llvm-svn: 107866
2010-07-08 11:39:10 +00:00