Commit Graph

7213 Commits

Bob Wilson ff714f9992 Clarify a comment.
llvm-svn: 114487
2010-09-21 21:44:14 +00:00
Gabor Greif a06741b356 do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114278
2010-09-18 11:55:34 +00:00
Gabor Greif aaa22cf1b6 do not rely on the implicit-dereference semantics of dyn_cast_or_null
llvm-svn: 114277
2010-09-18 11:53:39 +00:00
Owen Anderson d104806575 Use a depth-first iteration in CorrelatedValuePropagation to avoid wasting time trying
to optimize unreachable blocks.

llvm-svn: 114105
2010-09-16 18:35:07 +00:00
Dale Johannesen f95f59a0c2 When substituting sunkaddrs into the indirect arguments of an asm, we were
walking the asm arguments once and stashing their Values.  This is
wrong because the same memory location can be in the list twice, and
if the first one has a sunkaddr substituted, the stashed value for the
second one will be wrong (use-after-free).  PR8154.

llvm-svn: 114104
2010-09-16 18:30:55 +00:00
Chris Lattner 67e534505d fix PR8144, a bug where constant merge would merge globals marked
attribute(used).

llvm-svn: 113911
2010-09-15 00:30:11 +00:00
Owen Anderson d361aac3d0 Remove the option to disable LazyValueInfo in JumpThreading, as it is now
on by default and has received significant testing.

llvm-svn: 113852
2010-09-14 20:57:41 +00:00
Chris Lattner f1144f0929 fix PR8102, a case where we'd copyValue from a value that we already
deleted.  Fix this by doing the copyValue's before we delete stuff!

The testcase only repros the problem on my system with valgrind.

llvm-svn: 113820
2010-09-14 00:19:00 +00:00
Michael J. Spencer 93c9b2ea93 Revert "CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally."
This reverts commit r113632

Conflicts:

	cmake/modules/AddLLVM.cmake

llvm-svn: 113819
2010-09-13 23:59:48 +00:00
Eric Christopher e3a89f9f9c Remove unused variable.
llvm-svn: 113769
2010-09-13 18:27:59 +00:00
John Thompson 1094c80281 Added skeleton for inline asm multiple alternative constraint support.
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Owen Anderson c237a849e3 Re-apply r113679 (reverted in r113720), which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.

llvm-svn: 113763
2010-09-13 17:59:27 +00:00
Eric Christopher 26abd3e0c2 Revert 113679, it was causing an infinite loop in a testcase that I've sent
on to Owen.

llvm-svn: 113720
2010-09-12 06:09:23 +00:00
Owen Anderson 70f4524427 Invert and-of-or into or-of-and when doing so would allow us to clear bits of the and's mask.
This can result in increased opportunities for store narrowing in code generation.  Update a number of
tests for this change.  This fixes <rdar://problem/8285027>.

Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally.  Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order 
of the two ors, to give the non-constant or a chance for simplification instead.

llvm-svn: 113679
2010-09-11 05:48:06 +00:00
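
A minimal before/after sketch of the inversion described above, with hypothetical constants:

; before: the 'and' mask overlaps bits the 'or' already forces to 1
%t0 = or i32 %x, 65280        ; 0xFF00
%r0 = and i32 %t0, 65520      ; 0xFFF0

; after: as or-of-and, the overlapping bits drop out of the mask
%t1 = and i32 %x, 240         ; 0xFFF0 & ~0xFF00 = 0x00F0
%r1 = or i32 %t1, 65280       ; 0xFF00 & 0xFFF0 = 0xFF00
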
Gabor Greif 2f5f696b66 typos
llvm-svn: 113647
2010-09-10 22:25:58 +00:00
Michael J. Spencer dc38d36ccb CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally.
llvm-svn: 113632
2010-09-10 21:14:25 +00:00
Benjamin Kramer 77ab138f84 This transform is also performed by InstructionSimplify, remove the duplicate.
llvm-svn: 113608
2010-09-10 19:52:35 +00:00
Owen Anderson d85c9ccdba Lower the unrolling threshold to 150. Empirical tests indicate that this is a sweet spot in the performance per
code size increase curve.

llvm-svn: 113595
2010-09-10 17:57:00 +00:00
Owen Anderson 04cf3fd761 What the loop unroller cares about is not loops that contain calls per se, but
loops that contain calls that would be better off getting inlined.  This mostly
comes up when an interleaved devirtualization pass has devirtualized a call which the inliner
will inline on a future pass.  Thus, rather than blocking all loops containing calls, add
a metric for "inline candidate calls" and block loops containing those instead.

llvm-svn: 113535
2010-09-09 20:32:23 +00:00
Owen Anderson 6270515918 Revert r113439, which relaxed the requirement that loops containing calls cannot be unrolled. After some discussion,
there seems to be a better way to achieve the same effect.

llvm-svn: 113528
2010-09-09 20:02:23 +00:00
Owen Anderson 11ab204fdc r113526 introduced an unintended change to the loop unrolling threshold. Revert it.
llvm-svn: 113527
2010-09-09 19:11:57 +00:00
Owen Anderson b61b1647e2 Fix typo in code to cap the loop code size reduction calculation.
llvm-svn: 113526
2010-09-09 19:08:59 +00:00
Owen Anderson 62ea1b718c Use code-size reduction metrics to estimate the amount of savings we'll get when we unroll a loop.
Next step is to recalculate the threshold values given this new heuristic.

llvm-svn: 113525
2010-09-09 19:07:31 +00:00
Owen Anderson 8084dbaf8e Relax the "don't unroll loops containing calls" rule. Instead, when a loop contains a call, lower the
unrolling threshold to the optimize-for-size threshold.  Basically, for loops containing calls, unrolling
can still be profitable as long as the loop is REALLY small.

llvm-svn: 113439
2010-09-08 23:10:07 +00:00
Owen Anderson 3fe002dfb5 Generalize instcombine's support for combining multiple bit checks into a single test. Patch by Dirk Steinke!
llvm-svn: 113423
2010-09-08 22:16:17 +00:00
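
To illustrate the kind of fold involved, here is a hypothetical two-bit check (not taken from the patch):

; before: two separate single-bit tests, both of which must pass
%a  = and i32 %x, 4
%c1 = icmp ne i32 %a, 0
%b  = and i32 %x, 8
%c2 = icmp ne i32 %b, 0
%r  = and i1 %c1, %c2

; after: a single mask-and-compare
%m = and i32 %x, 12
%r = icmp eq i32 %m, 12
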
Owen Anderson a4d9c78aa1 Add a separate unrolling threshold when the current function is being optimized for size.
The threshold value of 50 is arbitrary, and I chose it simply by analogy to the inlining thresholds, where
the baseline unrolling threshold is slightly smaller than the baseline inlining threshold.  This could
undoubtedly use some tuning.

llvm-svn: 113306
2010-09-07 23:15:30 +00:00
Chris Lattner 6e27b3e004 Fix a serious performance regression introduced by r108687 on linux:
turning (fptrunc (sqrt (fpext x))) -> (sqrtf x)  is great, but we have
to delete the original sqrt as well.  Not doing so causes us to do 
two sqrts when building with -fmath-errno (the default on linux).

llvm-svn: 113260
2010-09-07 20:01:38 +00:00
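
Sketch of the transform and the bug (hand-written IR; assumes sqrt/sqrtf have their usual libm signatures):

; before
%e = fpext float %x to double
%s = call double @sqrt(double %e)
%t = fptrunc double %s to float

; after the fold
%t = call float @sqrtf(float %x)
; with -fmath-errno, @sqrt is not considered side-effect free, so unless
; the original call is erased explicitly, both sqrt calls survive.
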
Nick Lewycky 71972d45dc Fix major bug in thunk detection. Also verify the calling convention.
Switch from isWeakForLinker to mayBeOverridden which is more accurate.

Add more statistics and debugging info. Add comments. Move static function
outside anonymous namespace.

llvm-svn: 113190
2010-09-07 01:42:10 +00:00
Chris Lattner be9019090e fix PR8067, an over-aggressive assertion in LICM.
llvm-svn: 113146
2010-09-06 05:11:24 +00:00
Chris Lattner b01c24a945 Teach loop rotate to hoist trivially invariant instructions
in the duplicated block instead of duplicating them.  

Duplicating them into the end of the loop and the preheader 
means that we got a phi node in the header of the loop, 
which prevented LICM from hoisting them.  GVN would
usually come around later and merge the duplicated 
instructions so we'd get reasonable output... except that
anything dependent on the shoulda-been-hoisted value can't
be hoisted.  In PR5319 (which this fixes), a memory value
didn't get promoted.

llvm-svn: 113134
2010-09-06 01:10:22 +00:00
Chris Lattner da24b9a49a pull a simple method out of LICM into a new
Loop::hasLoopInvariantOperands method. Remove
a useless and confusing Loop::isLoopInvariant(Instruction)
method, which didn't do what you thought it did.

No functionality change.

llvm-svn: 113133
2010-09-06 01:05:37 +00:00
Chris Lattner 1edf7434cf more cleanups
llvm-svn: 113115
2010-09-05 20:13:07 +00:00
Chris Lattner e6214557e7 Change lower atomic pass to use IntrinsicInst to simplify it a bit.
llvm-svn: 113114
2010-09-05 20:10:47 +00:00
Chris Lattner 05ef361b5e eliminate some non-obvious casts. UndefValue isa Constant.
llvm-svn: 113113
2010-09-05 20:03:09 +00:00
Nick Lewycky e3ac69eca3 Fix warning reported by MSVC++ builder.
llvm-svn: 113106
2010-09-05 09:11:38 +00:00
Nick Lewycky f3a07ec394 Switch FnSet to containing the ComparableFunction instead of a pointer to one.
This reduces malloc traffic (yay!) and removes MergeFunctionsEqualityInfo.

llvm-svn: 113105
2010-09-05 09:00:32 +00:00
Nick Lewycky 0095937b13 Fix many bugs when merging weak-strong and weak-weak pairs. We now merge all
strong functions first to make sure they're the canonical definitions and then
do a second pass looking only for weak functions.

llvm-svn: 113104
2010-09-05 08:22:49 +00:00
Chris Lattner 65b48b5dfc zap dead code.
llvm-svn: 113073
2010-09-04 18:12:00 +00:00
Dan Gohman 487e250109 Fix LoopSimplify to notify ScalarEvolution when splitting a loop backedge
into an inner loop, as the new loop iteration may differ substantially.
This fixes PR8078.

llvm-svn: 113057
2010-09-04 02:42:48 +00:00
Chris Lattner 50506787d1 fix a bug in my licm rewrite when a load from the promoted memory
location is being re-stored to the memory location.  We would get
a dangling pointer from the SSAUpdater data structure and miss a
use.  This fixes PR8068.

llvm-svn: 113042
2010-09-04 00:12:30 +00:00
Owen Anderson c91c1a205a Propagate non-local comparisons. Fixes PR1757.
llvm-svn: 113025
2010-09-03 22:47:08 +00:00
Owen Anderson c725462245 Add support for simplifying a load from a computed value to a load from a global when it
is provable that they're equivalent.  This fixes PR4855.

llvm-svn: 112994
2010-09-03 19:08:37 +00:00
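
A toy sketch of the idea (hypothetical global and branch; the real cases come from PR4855):

@g = global i32 0

  %c = icmp eq i32* %p, @g
  br i1 %c, label %then, label %else
then:
  %v = load i32* %p           ; %p is provably @g on this path,
                              ; so this can become: %v = load i32* @g
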
Chris Lattner affc0e42f0 fix more AST updating bugs, correcting miscompilation in PR8041
llvm-svn: 112878
2010-09-02 22:19:10 +00:00
Duncan Sands 6778149f7e Reapply commit 112699, speculatively reverted by echristo, since
I'm sure it is harmless.  Original commit message:
If PrototypeValue is erased in the middle of using the SSAUpdater
then the SSAUpdater may access freed memory.  Instead, simply pass
in the type and name explicitly, which is all that was used anyway.

llvm-svn: 112810
2010-09-02 08:14:03 +00:00
Chris Lattner 8af45a889d deepen my MMX/SRoA hack to avoid hurting non-x86 codegen.
llvm-svn: 112763
2010-09-01 23:09:27 +00:00
Dan Gohman 0ad7d9c24e Fix loop unswitching's assumption that a code path which either
loops infinitely or exits will eventually exit. This fixes PR5373.

llvm-svn: 112745
2010-09-01 21:46:45 +00:00
Owen Anderson 73f988cafa JumpThreading keeps LazyValueInfo up to date, so we don't need to rerun it
if we schedule another LVI-using pass afterwards.

llvm-svn: 112722
2010-09-01 18:27:22 +00:00
Eric Christopher a5d315c665 Speculatively revert 112699 and 112702, they seem to be causing
self host errors on clang-x86-64.

llvm-svn: 112719
2010-09-01 17:29:10 +00:00
Duncan Sands f7b18437b5 If PrototypeValue is erased in the middle of using the SSAUpdater
then the SSAUpdater may access freed memory.  Instead, simply pass
in the type and name explicitly, which is all that was used anyway.

llvm-svn: 112699
2010-09-01 10:29:33 +00:00
Chris Lattner 34e5361eb5 add a gross hack to work around a problem that Argiris reported
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.

In the short term, force off MMX datatypes.  In the long term,
the X86 backend should not select generic vector types to MMX
registers.  This is being worked on, but won't be done in time
for 2.8.  rdar://8380055

llvm-svn: 112696
2010-09-01 05:14:33 +00:00
Dan Gohman 110ed64fbb Revert 112442 and 112440 until the compile time problems introduced
by 112440 are resolved.

llvm-svn: 112692
2010-09-01 01:45:53 +00:00
Chris Lattner 030f02021b licm is wasting time hoisting constant-foldable operations;
instead of hoisting them, just fold them away.  This occurs in the
testcase for PR8041, for example.

llvm-svn: 112669
2010-08-31 23:00:16 +00:00
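
A trivial sketch of the difference, with a made-up loop body:

loop:
  %k = add i32 40, 2          ; constant-foldable, loop-invariant
  %u = mul i32 %i, %k
; rather than hoisting %k into the preheader, replace its uses:
  %u = mul i32 %i, 42
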
Chris Lattner daca6f3483 tidy up
llvm-svn: 112643
2010-08-31 21:21:25 +00:00
Owen Anderson 3c84ecb067 More cleanups of my JumpThreading transforms, including extracting some duplicated code into a helper function.
llvm-svn: 112634
2010-08-31 20:26:04 +00:00
Owen Anderson 6fdcb172a9 Add an RAII helper to make cleanup of the RecursionSet more fool-proof.
llvm-svn: 112628
2010-08-31 19:24:27 +00:00
Owen Anderson 048efbe225 Only try to clean up the current block if we changed that block already.
llvm-svn: 112625
2010-08-31 18:55:52 +00:00
Owen Anderson cd4de7f399 Refactor my fix for PR5652 to terminate the predecessor lookups after the first failure.
llvm-svn: 112620
2010-08-31 18:48:48 +00:00
Nick Lewycky 68984ede5c Fix an infinite loop; merging two functions will create a new function (if the
two are weak, we make them thunks to a new strong function) so don't iterate
through the function list as we're modifying it.

Also add back the outermost loop which got removed during the cleanups.

llvm-svn: 112595
2010-08-31 08:29:37 +00:00
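
Schematically, the weak-weak case looks something like this (illustrative IR, not the pass's actual output):

; two equivalent weak functions...
define weak i32 @f(i32 %x) { ... }
define weak i32 @h(i32 %x) { ... }

; ...become thunks to a fresh strong function, so each keeps its
; overridable identity while sharing one body:
define internal i32 @merged(i32 %x) { ... }
define weak i32 @f(i32 %x) {
  %r = tail call i32 @merged(i32 %x)
  ret i32 %r
}
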
Owen Anderson ce401be792 Don't perform an extra traversal of the function just to do cleanup. We can safely simplify instructions after each block has been processed without worrying about iterator invalidation.
llvm-svn: 112594
2010-08-31 07:55:56 +00:00
Owen Anderson 48d58ad64c Rename ValuePropagation to a more descriptive CorrelatedValuePropagation.
llvm-svn: 112591
2010-08-31 07:48:34 +00:00
Owen Anderson d2918a07bd Rename file to something more descriptive.
llvm-svn: 112590
2010-08-31 07:41:39 +00:00
Owen Anderson 3997a07fb9 More Chris-inspired JumpThreading fixes: use ConstantExpr to correctly constant-fold undef, and be more careful with its return value.
This actually exposed an infinite recursion bug in ComputeValueKnownInPredecessors which theoretically already existed (in JumpThreading's
handling of and/or of i1's), but never manifested before.  This patch adds a tracking set to prevent this case.

llvm-svn: 112589
2010-08-31 07:36:34 +00:00
Nick Lewycky 0464d1d7ec Switch to DenseSet, simplifying much more code. We now have a single iteration
where we hash, compare and fold, instead of one iteration where we build up
the hash buckets and a second one to fold.

llvm-svn: 112582
2010-08-31 05:53:05 +00:00
Owen Anderson 376597c13e Remove r111665, which implemented store-narrowing in InstCombine. Chris discovered a miscompilation in it, and it's not easily
fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.

llvm-svn: 112575
2010-08-31 04:41:06 +00:00
Owen Anderson b58b3c0dda Fix a typo.
llvm-svn: 112560
2010-08-30 23:59:30 +00:00
Owen Anderson b974dbbdd7 Cleanups suggested by Chris.
llvm-svn: 112553
2010-08-30 23:34:17 +00:00
Owen Anderson c910acb54a Re-apply r112539, being more careful to respect the return values of the constant folding methods. Additionally,
use the ConstantExpr::get*() methods to simplify some constant folding.

llvm-svn: 112550
2010-08-30 23:22:36 +00:00
Owen Anderson 30bacbdfdf Add statistics to evaluate this pass.
llvm-svn: 112545
2010-08-30 22:45:55 +00:00
Owen Anderson 1ddcbbe49c Revert r112539. It accidentally introduced a miscompilation.
llvm-svn: 112543
2010-08-30 22:33:41 +00:00
Owen Anderson 75f6037c7c Fixes and cleanups pointed out by Chris. In general, be careful to handle 0 results from ComputeValueKnownInPredecessors
(indicating undef), and re-use existing constant folding APIs.

llvm-svn: 112539
2010-08-30 22:07:52 +00:00
Chris Lattner c843fca2fd rewrite DwarfEHPrepare to use SSAUpdater to promote its allocas
instead of PromoteMemToReg.  This allows it to stop using DF and DT,
eliminating a computation of DT and DF from clang -O3.  Clang is now
down to 2 runs of DomFrontier.

llvm-svn: 112457
2010-08-29 19:54:28 +00:00
Chris Lattner f58382ed87 two changes: 1) make AliasSet hold the list of call sites with an
assertingvh so we get a violent explosion if the pointer dangles.

2) Fix AliasSetTracker::deleteValue to remove call sites with
   by-pointer comparisons instead of by-alias queries.  Using
   findAliasSetForCallSite can cause alias sets to get merged
   when they shouldn't, and can also miss alias sets when the
   call is readonly.

#2 fixes PR6889, which only repros with a .c file :(

llvm-svn: 112452
2010-08-29 18:42:23 +00:00
Chris Lattner 263f804699 LICM does get dead instructions input to it. Instead of sinking them
out of loops, just delete them.

llvm-svn: 112451
2010-08-29 18:22:25 +00:00
Chris Lattner 6ac0659a1c use moveBefore instead of remove+insert; it avoids some
symtab manipulation, so it's faster (in addition to being
more elegant)

llvm-svn: 112450
2010-08-29 18:18:40 +00:00
Chris Lattner f03b4eac48 revert 112448 for now.
llvm-svn: 112449
2010-08-29 18:11:16 +00:00
Chris Lattner 11f8ad8211 optimize LICM::hoist to use moveBefore. Correct its updating
of AST to remove the hoisted instruction from the AST, since it
is no longer in the loop.

llvm-svn: 112448
2010-08-29 18:03:33 +00:00
Chris Lattner 1a1ed69435 fix some bugs (found by inspection) where LICM would not update
the AST correctly.  When sinking an instruction, it should not add
entries for the sunk instruction to the AST; it should remove
the entry for the sunk instruction.  The blocks being sunk to
are not in the loop, so their instructions shouldn't be in the
AST (yet)!

llvm-svn: 112447
2010-08-29 18:00:00 +00:00
Chris Lattner cc9cbc66a3 rework the ownership of subloop alias information: instead of
keeping them around until the pass is destroyed, keep them
around a) just when useful (not for outer loops) and b) destroy
them right after we use them.  This should reduce memory use
and fixes potential bugs where a loop is deleted and another
loop gets allocated to the same address.

llvm-svn: 112446
2010-08-29 17:46:00 +00:00
Chris Lattner bc1a65ac6c apparently unswitch had the same "feature".  Stop it from
claiming to preserve domfrontier when it doesn't really.

llvm-svn: 112445
2010-08-29 17:23:19 +00:00
Chris Lattner d6f46b8af8 now that loop passes don't use DomFrontier, there is no reason
for the unroller to pretend it supports updating it.  It still
has a horrible hack for DomTree.

llvm-svn: 112444
2010-08-29 17:21:35 +00:00
Dan Gohman 002ff89cbd Optionally rerun dedicated-register filtering after applying
other filtering techniques, as those may allow it to filter
out more obviously unprofitable candidates.

llvm-svn: 112441
2010-08-29 16:39:22 +00:00
Dan Gohman f031792cc6 Fix several areas in LSR to do a better job keeping the main
LSRInstance data structures up to date. This fixes some
pessimizations caused by stale data which will be exposed
in an upcoming change.

llvm-svn: 112440
2010-08-29 16:32:54 +00:00
Dan Gohman e9e0873b08 Refactor the three main groups of code out of
NarrowSearchSpaceUsingHeuristics into separate functions.

llvm-svn: 112439
2010-08-29 16:09:42 +00:00
Dan Gohman 37a0f68036 Delete a bogus check.
llvm-svn: 112438
2010-08-29 15:30:29 +00:00
Dan Gohman b6a520d63c Add some comments.
llvm-svn: 112437
2010-08-29 15:27:08 +00:00
Dan Gohman bf673e0652 Move this debug output into GenerateAllReuseFormula, to declutter
the high-level logic.

llvm-svn: 112436
2010-08-29 15:21:38 +00:00
Dan Gohman d366b6d5c8 Delete an unused declaration.
llvm-svn: 112435
2010-08-29 15:19:11 +00:00
Dan Gohman 4f13bbfefc Do one lookup instead of two.
llvm-svn: 112434
2010-08-29 15:18:49 +00:00
Chris Lattner f94f6bb0ba licm preserves the cfg; it doesn't have to explicitly say it
preserves domfrontier.  It does preserve AA though.

llvm-svn: 112419
2010-08-29 07:02:56 +00:00
Chris Lattner abe61ef3b4 now that it doesn't use the PromoteMemToReg function, LICM doesn't
require DomFrontier.  Dropping this doesn't actually save any runs
of the pass though.

llvm-svn: 112418
2010-08-29 06:49:44 +00:00
Chris Lattner 1dc98b47b5 completely rewrite the memory promotion algorithm in LICM.
Among other things, this uses SSAUpdater instead of 
PromoteMemToReg.

llvm-svn: 112417
2010-08-29 06:43:52 +00:00
Chris Lattner 9c3931a544 use getUniqueExitBlocks instead of a manual set.
llvm-svn: 112412
2010-08-29 05:12:21 +00:00
Chris Lattner 85bf5421e1 reimplement LICM::sink to use SSAUpdater instead of PromoteMemToReg.
This leads to much simpler code.

llvm-svn: 112410
2010-08-29 04:55:06 +00:00
Chris Lattner c3fb03e289 implement SSAUpdater::RewriteUseAfterInsertions, a helpful form of RewriteUse.
llvm-svn: 112409
2010-08-29 04:54:06 +00:00
Chris Lattner b50407f104 remove dead proto
llvm-svn: 112408
2010-08-29 04:53:24 +00:00
Chris Lattner cd96b4df56 reduce indentation in LICM::sink by using early exits, use
getUniqueExitBlocks instead of getExitBlocks and a manual
set to eliminate dupes.

llvm-svn: 112405
2010-08-29 04:28:20 +00:00
Chris Lattner 188cc5a0fc modernize this pass a bit: use efficient set/map and reduce indentation.
llvm-svn: 112404
2010-08-29 04:23:04 +00:00
Chris Lattner 13ee795c42 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner 50df36ac0a for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner 95bb297c26 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Benjamin Kramer 83f9ff0452 Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValueInfo to simplify PHIs and selects.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows eliminate of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
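
The '(x << c) >> c -> and' base case mentioned above, as IR (hypothetical byte extract):

; before: both shifts logical, same amount
%a = shl i32 %x, 24
%b = lshr i32 %a, 24

; after: a mask of the low bits
%b = and i32 %x, 255
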
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Owen Anderson 99d4cb861b Fix typos in comments.
llvm-svn: 112286
2010-08-27 20:32:56 +00:00
Chris Lattner 7398434675 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
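
A sketch with hypothetical widths, where the zext type (i64) is wider than the truncation result type (i32) yet the whole chain still narrows:

; before: computed in i64, though only 16 source bits feed it
%e = zext i16 %x to i64
%a = shl i64 %e, 2
%t = trunc i64 %a to i32

; after: the same computation performed at the result width
%e2 = zext i16 %x to i32
%t  = shl i32 %e2, 2
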
Chris Lattner 90cd746e63 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Owen Anderson 6ebbd92380 Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Chris Lattner bfd2228182 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
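
"Integer extraction out of the middle of a vector" looks roughly like this (hand-written; little-endian element layout assumed):

; before: SRoA-produced bit gymnastics to pull out element 1
%i = bitcast <4 x float> %v to i128
%s = lshr i128 %i, 32
%u = trunc i128 %s to i32
%f = bitcast i32 %u to float

; after
%f = extractelement <4 x float> %v, i32 1
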
Chris Lattner d4ebd6df5a optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Owen Anderson bd2ecc7e68 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Dan Gohman ca26f79051 Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Daniel Dunbar ce45863f0d Revert r111922, "MapValue support for MDNodes. This is similar to r109117,
except ...", it is causing *massive* performance regressions when building Clang
with itself (-O3 -g).

llvm-svn: 112158
2010-08-26 03:48:11 +00:00
Daniel Dunbar 95fe13c720 Revert r112091, "Remap metadata attached to instructions when remapping
individual ...", which depends on r111922, which I am reverting.

llvm-svn: 112157
2010-08-26 03:48:08 +00:00
Chris Lattner 07afbd5a08 zap dead code.
llvm-svn: 112130
2010-08-26 01:13:54 +00:00
Dan Gohman 8f292e7a6d Rewrite ExtractGV, removing a bunch of stuff that didn't fully work,
and was over-complicated, and replacing it with a simple implementation.

llvm-svn: 112120
2010-08-26 00:22:55 +00:00
Chris Lattner 8df99b523e remove some llvmcontext arguments that are now dead post-refactoring.
llvm-svn: 112104
2010-08-25 23:00:45 +00:00
Dan Gohman fd824487a3 Remap metadata attached to instructions when remapping individual
instructions, not when remapping modules.

llvm-svn: 112091
2010-08-25 21:36:50 +00:00
Devang Patel 01262e129e DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by FE.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Dan Gohman a209503467 Use MapValue in the Linker instead of having a private function
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.

llvm-svn: 111941
2010-08-24 18:50:07 +00:00
Owen Anderson 7c853e877e Turn LVI on, previously detected failures should be fixed now.
llvm-svn: 111923
2010-08-24 17:21:18 +00:00
Dan Gohman 6901283544 MapValue support for MDNodes. This is similar to r109117, except
that it avoids a lot of unnecessary cloning by avoiding remapping
MDNode cycles when none of the nodes in the cycle actually need to
be remapped. Also it uses the new temporary MDNode mechanism.

llvm-svn: 111922
2010-08-24 17:10:10 +00:00
Owen Anderson 6ffa3f2aea Turn LVI back off, I have a testcase now.
llvm-svn: 111834
2010-08-23 19:59:27 +00:00
Owen Anderson 630add39a6 Re-enable LazyValueInfo. Monitoring for failures.
llvm-svn: 111816
2010-08-23 18:12:23 +00:00
Owen Anderson d31d82d75c Now that PassInfo and Pass::ID have been separated, move the rest of the passes over to the new registration API.
llvm-svn: 111815
2010-08-23 17:52:01 +00:00
Owen Anderson 84c29a096b Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson 43057cd56a Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson bb723b228a When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only overwrite the affected bytes.

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
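
A schematic of the narrowing (little-endian layout assumed; note this transform was removed again in r112575 after a miscompilation was found):

; before: read-modify-write that only changes the low byte
%old = load i32* %p
%m   = and i32 %old, -256     ; clear the low 8 bits
%new = or i32 %m, 42
store i32 %new, i32* %p

; after: store only the affected byte
%p8 = bitcast i32* %p to i8*
store i8 42, i8* %p8
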
Owen Anderson aac8cbb261 Disable LVI while I evaluate a failure.
llvm-svn: 111551
2010-08-19 19:47:08 +00:00
Owen Anderson 5c87dd55d3 Tentatively enabled LVI by default. I'll be monitoring for any failures.
llvm-svn: 111543
2010-08-19 19:04:40 +00:00
Dan Gohman 129a816ee6 Process the step before the start, because it's usually the simpler
of the two.

llvm-svn: 111495
2010-08-19 01:02:31 +00:00
Owen Anderson 208636fa33 Inform LazyValueInfo whenever a block is deleted, to avoid dangling pointer issues.
llvm-svn: 111382
2010-08-18 18:39:01 +00:00
Chris Lattner 3c603024bb Fix PR7755: knowing something about an inval for a pred
from the LHS should disable reconsidering that pred on the
RHS.  However, knowing something about the pred on the RHS
shouldn't disable subsequent additions on the RHS from
happening.

llvm-svn: 111349
2010-08-18 03:14:36 +00:00
Chris Lattner f0b5b67ba5 fit in 80 cols
llvm-svn: 111348
2010-08-18 03:13:35 +00:00
Chris Lattner b45de95345 remove some dead code.
llvm-svn: 111344
2010-08-18 02:41:56 +00:00
Chris Lattner 6aabb66139 remove dead prototype.
llvm-svn: 111342
2010-08-18 02:37:06 +00:00
Eric Christopher 51edc7b7e1 Temporarily revert r110987 as it's causing some miscompares in
vector-heavy code.  I'll re-enable when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Dan Gohman 5047ca0c02 When rotating loops, put the original header at the bottom of the
loop, making the resulting loop significantly less ugly.  Also, zap
its trivial PHI nodes, since it's easy.

llvm-svn: 111255
2010-08-17 17:39:21 +00:00
Dan Gohman 941020ed72 Use the getUniquePredecessor() utility function, instead of doing
what it does manually.

llvm-svn: 111248
2010-08-17 17:07:02 +00:00
Evan Cheng 8b637b177c Add an option to disable codegen prepare critical edge splitting. In theory, PHI elimination is already doing all (most?) of the splitting needed. But machine-licm and machine-sink seem to miss some important optimizations when splitting is disabled.
llvm-svn: 111224
2010-08-17 01:34:49 +00:00
Dan Gohman 89fdbaf99a Instead of having CollectSubexprs categorize operands as interesting or
uninteresting, just put all the operands on one list and make
GenerateReassociations make the decision about what's interesting.
This is simpler, and it avoids an extra ScalarEvolution::getAddExpr call.

llvm-svn: 111133
2010-08-16 15:50:00 +00:00
Dan Gohman 9b7632df26 Put add operands in ScalarEvolution-canonical order, when convenient.
This isn't necessary, because ScalarEvolution sorts them anyway,
but it's tidier this way.

llvm-svn: 111132
2010-08-16 15:39:27 +00:00
Dan Gohman 6e964c7fb4 Avoid #include <ScalarEvolution.h> in LoopSimplify.cpp, which doesn't
actually use ScalarEvolution.

llvm-svn: 111124
2010-08-16 14:44:03 +00:00
Dan Gohman 250b754428 Instead, teach SimplifyCFG to trim non-address-taken blocks from
indirectbr destination lists.

llvm-svn: 111122
2010-08-16 14:41:14 +00:00
Dan Gohman aa445c0751 LoopSimplify shouldn't split loop backedges that use indirectbr. PR7867.
llvm-svn: 111061
2010-08-14 00:43:09 +00:00
Dan Gohman 4a63fad976 Teach SimplifyCFG how to simplify indirectbr instructions.
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.

Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.

llvm-svn: 111060
2010-08-14 00:29:42 +00:00
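
Both simplifications on a toy indirectbr (hypothetical labels):

; eliminate redundant successors
indirectbr i8* %addr, [label %bb1, label %bb1, label %bb2]
;   => indirectbr i8* %addr, [label %bb1, label %bb2]

; a single successor becomes a direct branch
indirectbr i8* %addr, [label %bb1]
;   => br label %bb1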