Commit Graph

8893 Commits

Author SHA1 Message Date
Bill Wendling 82b90a3804 Add a Fixme.
llvm-svn: 154793
2012-04-16 04:23:52 +00:00
Hal Finkel 8ee309d9b7 Simplify checking for pointer types in BBVectorize (this change was suggested by Duncan).
llvm-svn: 154787
2012-04-16 03:49:42 +00:00
Hal Finkel 83c9796033 Fix an error in BBVectorize important for vectorizing pointer types.
When vectorizing pointer types it is important to realize that potential
pairs cannot be connected via the address pointer argument of a load or store.
This is because even after vectorization, the address is still a scalar because
the address of the higher half of the pair is implicit from the address of the
lower half (it need not be, and should not be, explicitly computed).

llvm-svn: 154735
2012-04-14 07:32:50 +00:00
Hal Finkel f589519a67 Enhance BBVectorize to more-properly handle pointer values and vectorize GEPs.
llvm-svn: 154734
2012-04-14 07:32:43 +00:00
Hal Finkel b2336a79f9 Add support to BBVectorize for vectorizing selects.
llvm-svn: 154700
2012-04-13 20:45:45 +00:00
Dan Gohman 670f93744b Add some comments, and fix a few places that missed setting Changed.
llvm-svn: 154687
2012-04-13 18:57:48 +00:00
Dan Gohman e1e352af2b Consider ObjC runtime calls objc_storeWeak and others which make a copy of
their argument as "escape" points for objc_retainBlock optimization.
This fixes rdar://11229925.

llvm-svn: 154682
2012-04-13 18:28:58 +00:00
Hal Finkel 204bf5352a By default, use Early-CSE instead of GVN for vectorization cleanup.
As has been suggested by Duncan and others, Early-CSE and GVN should
do similar redundancy elimination, but Early-CSE is much less expensive.
Most of my autovectorization benchmarks show a performance regression, but
all of these are < 0.1%, and so I think that it is still worth using
the less expensive pass.

llvm-svn: 154673
2012-04-13 17:15:33 +00:00
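
A minimal sketch of what the change above amounts to in a pass pipeline, assuming the standard createEarlyCSEPass/createGVNPass entry points; the surrounding helper is illustrative, not the actual BBVectorize code:

  #include "llvm/PassManager.h"
  #include "llvm/Transforms/Scalar.h"

  // Cleanup after vectorization: Early-CSE by default, GVN only on request.
  static void addVectorizeCleanup(llvm::PassManager &PM, bool UseGVN) {
    if (UseGVN)
      PM.add(llvm::createGVNPass());      // more thorough, but expensive
    else
      PM.add(llvm::createEarlyCSEPass()); // cheaper redundancy elimination
  }
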
Dan Gohman de8d2c446b Use the new Use-aware dominates method to apply the objc runtime
library return value optimization for phi uses. Even when the
phi itself is not dominated, the specific use may be dominated.

llvm-svn: 154647
2012-04-13 01:08:28 +00:00
Bill Wendling 585583c8dd Code-gen may inject code into the IR before it emits the ASM. The linker
obviously cannot know that this code is present, let alone used. So prevent the
internalize pass from internalizing those global values which code-gen may
insert.

llvm-svn: 154645
2012-04-13 01:06:27 +00:00
Dan Gohman 8478d76d64 Don't move objc_autorelease calls past autorelease pool boundaries when
optimizing autorelease calls on phi nodes with null operands.
This fixes rdar://11207070.

llvm-svn: 154642
2012-04-13 00:59:57 +00:00
Chad Rosier cc899f3b6d Typo.
llvm-svn: 154522
2012-04-11 19:21:58 +00:00
Chandler Carruth 7ae90d4d2d Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.

llvm-svn: 154492
2012-04-11 10:15:10 +00:00
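
For reference, pass statistics like the two mentioned above are declared with the STATISTIC macro and bumped with ++; a hedged sketch (the description strings here are invented):

  #define DEBUG_TYPE "inline-cost"
  #include "llvm/ADT/Statistic.h"

  STATISTIC(NumCallsAnalyzed, "Number of call sites analyzed");
  STATISTIC(NumCallerCallersAnalyzed, "Number of analyzed callers of the caller");

  // ...later, inside the analysis:
  //   ++NumCallerCallersAnalyzed;
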
Kostya Serebryany 5ba61ac651 [tsan] two more compile-time optimizations:
- don't instrument reads from constant globals.
Saves ~1.5% of instrumented instructions on CPU2006
(counting static instructions, not their execution).
- don't instrument reads from the vtable (which is a global constant too).
Saves ~5%.

I did not measure the run-time impact of this,
but it is certainly non-negative.

llvm-svn: 154444
2012-04-10 22:29:17 +00:00
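
A hedged sketch of the first check described above: a load whose address resolves to a constant global can never race, so it need not be instrumented. The helper name is an assumption, not necessarily the actual ThreadSanitizer code:

  #include "llvm/GlobalVariable.h"
  #include "llvm/Value.h"

  // Reads of constant globals cannot race with any write.
  static bool addrIsConstantGlobal(llvm::Value *Addr) {
    if (llvm::GlobalVariable *GV =
            llvm::dyn_cast<llvm::GlobalVariable>(Addr->stripPointerCasts()))
      return GV->isConstant();
    return false;
  }
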
Kostya Serebryany bf2de80be6 [tsan] compile-time instrumentation: do not instrument a read if
a write to the same temp follows in the same BB.
Also add stats printing.

On Spec CPU2006 this optimization saves roughly 4% of instrumented reads
(which is 3% of all instrumented accesses):
Writes            : 161216
Reads             : 446458
Reads-before-write: 18295

llvm-svn: 154418
2012-04-10 18:18:56 +00:00
Andrew Trick 4442bfe559 Fix PR12513: Loop unrolling breaks with indirect branches.
Take this opportunity to generalize the indirectbr bailout logic for
loop transformations. CFG transformations will never get indirectbr
right, and there's no point trying.

llvm-svn: 154386
2012-04-10 05:14:42 +00:00
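
A minimal sketch of the generalized bailout: a loop transformation simply refuses to touch a loop that contains an indirectbr terminator. The helper name is illustrative:

  #include "llvm/Instructions.h"
  #include "llvm/Analysis/LoopInfo.h"

  // CFG transformations will never get indirectbr right, so bail out early.
  static bool loopContainsIndirectBr(const llvm::Loop *L) {
    for (llvm::Loop::block_iterator I = L->block_begin(), E = L->block_end();
         I != E; ++I)
      if (llvm::isa<llvm::IndirectBrInst>((*I)->getTerminator()))
        return true;
    return false;
  }
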
Andrew Trick 4104ed9c76 whitespace
llvm-svn: 154385
2012-04-10 05:14:37 +00:00
Chandler Carruth f82b0e2d29 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

llvm-svn: 154285
2012-04-08 14:36:56 +00:00
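
A hedged sketch of the pattern being detected: an alloca whose only transitive users are GEPs, bitcasts, and stores, i.e. memory that is written but never read. Names are mine, not the actual InstCombine code:

  #include "llvm/Instructions.h"
  #include "llvm/ADT/SmallVector.h"

  // True if AI is only ever stored to (possibly through GEPs/bitcasts) and
  // never read or escaped, so the alloca and all its users are dead.
  static bool isOnlyStoredTo(llvm::AllocaInst *AI) {
    llvm::SmallVector<llvm::Value *, 8> Worklist(1, AI);
    while (!Worklist.empty()) {
      llvm::Value *V = Worklist.pop_back_val();
      for (llvm::Value::use_iterator UI = V->use_begin(), UE = V->use_end();
           UI != UE; ++UI) {
        llvm::User *U = *UI;
        if (llvm::isa<llvm::GetElementPtrInst>(U) ||
            llvm::isa<llvm::BitCastInst>(U))
          Worklist.push_back(U);
        else if (llvm::StoreInst *SI = llvm::dyn_cast<llvm::StoreInst>(U)) {
          if (SI->getValueOperand() == V)
            return false; // the address itself escapes by being stored
        } else
          return false;   // any other use might read the memory
      }
    }
    return true;
  }
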
Hongbin Zheng 5758f495da Refactor: Use positive field names in VectorizeConfig.
llvm-svn: 154249
2012-04-07 03:56:23 +00:00
Chandler Carruth 49da93396e Sink the collection of return instructions until after *all*
simplification has been performed. This is a bit less efficient
(requires another ilist walk of the basic blocks) but shouldn't matter
in practice. More importantly, it's just too much work to keep track of
all the various ways the return instructions can be mutated while
simplifying them. This fixes yet another crasher, reported by Daniel
Dunbar.

llvm-svn: 154179
2012-04-06 17:21:31 +00:00
Duncan Sands d12b18f820 Make GVN's propagateEquality non-recursive. No intended functionality change.
The modifications are a lot more trivial than they appear to be in the diff!

llvm-svn: 154174
2012-04-06 15:31:09 +00:00
Chandler Carruth e41f6f4189 Sink the return instruction collection until after we're done deleting
dead code, including dead return instructions in some cases. Otherwise,
we end up having a bogus pointer to a return instruction that blows up
much further down the road.

It turns out that this pattern is simpler to code, easier to update
in the face of enhancements to the inliner cleanup, and likely cheaper
given that it won't add dead instructions to the list.

Thanks to John Regehr's numerous test cases for teasing this out.

llvm-svn: 154157
2012-04-06 01:11:52 +00:00
Dan Gohman cc64bbca81 Fix accidentally inverted logic from r152803, and make the
testcase slightly less trivial. This fixes rdar://11171718.

llvm-svn: 154118
2012-04-05 20:27:21 +00:00
Hongbin Zheng 31d33b8318 BBVectorize: Add the const modifier to the VectorizeConfig because we won't
modify it.

llvm-svn: 154098
2012-04-05 16:07:49 +00:00
Hongbin Zheng d6825173d3 Introduce the VectorizeConfig class, with which we can control the behavior
of the BBVectorizePass without using command line options. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorization with architecture-specific
information.

llvm-svn: 154096
2012-04-05 15:46:55 +00:00
Hongbin Zheng 6edbc39bd7 Add the function "vectorizeBasicBlock", which allows users to vectorize a
BasicBlock from other passes; e.g. we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.

llvm-svn: 154089
2012-04-05 08:05:16 +00:00
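
A hedged usage sketch of the interface from the last two commits: build a VectorizeConfig and hand a basic block to vectorizeBasicBlock from another pass. The exact config field name is an assumption:

  #include "llvm/Transforms/Vectorize.h"

  bool cleanUpAfterUnroll(llvm::Pass *P, llvm::BasicBlock &BB) {
    llvm::VectorizeConfig C;   // defaults mirror the command-line options
    C.VectorizeInts = true;    // field name assumed, for illustration only
    return llvm::vectorizeBasicBlock(P, BB, C);
  }
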
Jakob Stoklund Olesen f2390e8303 Pass the right sign to TLI->isLegalICmpImmediate.
LSR can fold three addressing modes into its ICmpZero node:

  ICmpZero BaseReg + Offset      => ICmp BaseReg, -Offset
  ICmpZero -1*ScaleReg + Offset  => ICmp ScaleReg, Offset
  ICmpZero BaseReg + -1*ScaleReg => ICmp BaseReg, ScaleReg

The first two cases are only used if TLI->isLegalICmpImmediate() likes
the offset.

Make sure the right Offset sign is passed to this method in the second
case. The ARM version is not symmetric.

<rdar://problem/11184260>

llvm-svn: 154079
2012-04-05 03:10:56 +00:00
Rafael Espindola ba0a6cabb8 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Hongbin Zheng b21b865fe8 LoopUnrollPass: Use the variable "Threshold" instead of "CurrentThreshold" when
reducing the unroll count; otherwise the reduced unroll count does not take
the "OptimizeForSize" attribute into account.

llvm-svn: 154007
2012-04-04 11:44:08 +00:00
Bill Wendling 932b992888 Add an option to turn off the expensive GVN load PRE part of GVN.
llvm-svn: 153902
2012-04-02 22:16:50 +00:00
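
The new option corresponds to a flag on the pass constructor; a minimal, hedged sketch (the parameter name is my recollection and should be treated as an assumption):

  #include "llvm/PassManager.h"
  #include "llvm/Transforms/Scalar.h"

  static void addGVNWithoutLoadPRE(llvm::PassManager &PM) {
    // Passing true skips the expensive load PRE portion of GVN.
    PM.add(llvm::createGVNPass(/*NoLoadPRE=*/true));
  }
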
Stepan Dyatkovskiy f62ffeca88 Fast fix for PR12343:
http://llvm.org/bugs/show_bug.cgi?id=12343

We have no trivial way to split edges that come from an indirect branch. We can do it with some tricks, but that needs further discussion, and it is still dangerous because indirect branches are hard to control.

The fix forbids this case for unswitching.

llvm-svn: 153879
2012-04-02 17:16:45 +00:00
Chandler Carruth 45ae88f5fc Belatedly address some code review from Chris.
As a side note, I really dislike array_pod_sort... Do we really still
care about any STL implementations that get this so wrong? Does libc++?

llvm-svn: 153834
2012-04-01 10:41:24 +00:00
Chandler Carruth c5bfb3c0f5 Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

llvm-svn: 153832
2012-04-01 10:21:05 +00:00
Chandler Carruth a88a0faaa3 Give the always-inliner its own custom filter. It shouldn't have to pay
the very high overhead of the complex inline cost analysis when all it
wants to do is detect three patterns which must not be inlined. Comment
the code, clean it up, and leave some hints about possible performance
improvements if this ever shows up on a profile.

Moving this off of the (now more expensive) inline cost analysis is
particularly important because we have to run this inliner even at -O0.

llvm-svn: 153814
2012-03-31 13:17:18 +00:00
Chandler Carruth edd2826f3e Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth 0539c071ea Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any possible simplification of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
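
A self-contained schematic (plain C++, invented types, not the actual CallAnalyzer code) of the bounded breadth-first walk outlined above: blocks are visited front-of-worklist first, costs accumulate, and the analysis stops as soon as the threshold is crossed:

  #include <deque>
  #include <map>
  #include <set>
  #include <vector>

  struct Block { int Cost; std::vector<int> Succs; };

  // Breadth-first walk over reachable blocks, aborting once Cost > Threshold.
  bool underThreshold(const std::map<int, Block> &CFG, int Entry, int Threshold) {
    int Cost = 0;
    std::deque<int> Worklist(1, Entry);
    std::set<int> Visited;
    while (!Worklist.empty()) {
      int BB = Worklist.front();   // pop from the *front*: breadth-first order
      Worklist.pop_front();
      if (!Visited.insert(BB).second)
        continue;
      std::map<int, Block>::const_iterator It = CFG.find(BB);
      if (It == CFG.end())
        continue;
      Cost += It->second.Cost;     // per-instruction costs collapsed to one number
      if (Cost > Threshold)
        return false;              // bound the analysis as soon as we exceed it
      for (size_t i = 0; i != It->second.Succs.size(); ++i)
        Worklist.push_back(It->second.Succs[i]); // only not-proven-dead successors in reality
    }
    return true;
  }
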
Benjamin Kramer 53dc873342 Internalize: Remove reference of @llvm.noinline, it was replaced with the noinline attribute a long time ago.
llvm-svn: 153806
2012-03-31 11:03:47 +00:00
Hal Finkel 5cad8742cc Correctly vectorize powi.
The powi intrinsic requires special handling because it always takes a single
integer power regardless of the result type. As a result, we can vectorize
only if the powers are equal. Fixes PR12364.

llvm-svn: 153797
2012-03-31 03:38:40 +00:00
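
A hedged sketch of the extra check this implies when deciding whether two llvm.powi calls can be paired: their scalar power operands must be identical. The function name is illustrative:

  #include "llvm/Intrinsics.h"
  #include "llvm/IntrinsicInst.h"

  // powi keeps a scalar i32 power even after vectorization, so two calls can
  // only be fused if they use the same power.
  static bool powiPairable(llvm::IntrinsicInst *A, llvm::IntrinsicInst *B) {
    if (A->getIntrinsicID() != llvm::Intrinsic::powi ||
        B->getIntrinsicID() != llvm::Intrinsic::powi)
      return false;
    return A->getArgOperand(1) == B->getArgOperand(1);
  }
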
Jakob Stoklund Olesen 4e55044ff5 Don't PRE compares.
CodeGenPrepare sinks compare instructions down to their uses to prevent
live flags and predicate registers across basic blocks.

PRE of a compare instruction prevents that, forcing the i1 compare
result into a general purpose register.  That is usually more expensive
than the redundant compare PRE was trying to eliminate in the first
place.

llvm-svn: 153657
2012-03-29 17:22:39 +00:00
Benjamin Kramer aa9e4a5e59 GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.
llvm-svn: 153576
2012-03-28 14:50:09 +00:00
Chandler Carruth 772c88b887 Switch to WeakVHs in the value mapper, and aggressively prune dead basic
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.

llvm-svn: 153572
2012-03-28 08:38:27 +00:00
Chad Rosier bb2a6da440 Fix 80-column violation.
llvm-svn: 153556
2012-03-28 00:35:33 +00:00
Chandler Carruth b9e35fbc1e Make a seemingly tiny change to the inliner and fix the generated code
size bloat. Unfortunately, I expect this to disable the majority of the
benefit from r152737. I'm hopeful at least that it will fix PR12345. To
explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for
linkonce-odr functions. This change makes it do the right thing. The
benefits we saw were simple luck, not any actual strategy. Benchmark
numbers after a mini-blog-post so that I've written down my thoughts on
why all of this works and doesn't work...

To understand what's going on here, you have to understand how the
"bottom-up" inliner actually works. There are two fundamental modes to
the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually
   think about. It walks from the bottom of the CFG up to the top,
   looking at callsites, taking information about the callsite and the
   called function and computing the expected cost of inlining into that
   callsite. If the cost is under a fixed threshold, it inlines. It's
   a touch more complicated than that due to all the bonuses, weights,
   etc. Inlining the last callsite to an internal function gets higher
   weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the
   interesting mode for this patch and r152737. Initially, this works
   just like mode #1, but once we have the cost of inlining into the
   callsite, we don't just compare it with a fixed threshold. First, we
   check something else. Let's give some names to the entities at this
   point, or we'll end up hopelessly confused. We're considering
   inlining a function 'A' into its callsite within a function 'B'. We
   want to check whether 'B' has any callers, and whether it might be
   inlined into those callers. If so, we also check whether inlining 'A'
   into 'B' would block any of the opportunities for inlining 'B' into
   its callers. We take the sum of the costs of inlining 'B' into its
   callers where that inlining would be blocked by inlining 'A' into
   'B', and if that cost is less than the cost of inlining 'A' into 'B',
   then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that
we will actually have the opportunity to inline 'B' into its callers
when cheaper, *and* that we'll be able to revisit the decision and
inline 'A' into 'B' if that ever becomes the correct tradeoff. This
often isn't true for external functions -- we can see very few of their
callers, and we won't be able to re-consider inlining 'A' into 'B' if
'B' is external when we finally see more callers of 'B'. There are two
cases where we believe this to be true for C/C++ code: functions local
to a translation unit, and functions with an inline definition in every
translation unit which uses them. These are represented as internal
linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for
linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There
was an implicit assumption that the last caller of the function within
the TU was the last caller of the function in the program. We want to
bonus the last caller of the function in the program by a huge amount
for inlining because inlining that callsite has very little cost.
Unfortunately, the last caller in the TU of a linkonce-odr function is
*not* the last caller in the program, and so we don't want to apply this
bonus. If we do, we can apply it to one callsite *per-TU*. Because of
the way deferred inlining works, when it sees this bonus applied to one
callsite in the TU for 'B', it decides that inlining 'B' is of the
*utmost* importance just so we can get that final bonus. It then
proceeds to essentially force deferred inlining regardless of the actual
cost tradeoff.

The result? PR12345: code bloat, code bloat, code bloat. Another result
is getting *damn* lucky on a few benchmarks, and the over-inlining
exposing critically important optimizations. I would very much like
a list of benchmarks that regress after this change goes in, with
bitcode before and after. This will help me greatly understand what
opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited
the worst of PR12345 went from growing to shrinking compared to Clang
with r152737 reverted.

- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4%
  faster with this change.

Please let me know about any other performance impact you see. Thanks to
Nico for reporting and urging me to actually fix, Richard Smith, Duncan
Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues
today.

llvm-svn: 153506
2012-03-27 10:48:28 +00:00
Nadav Rotem a8f3562e8f r153465 was incorrect. In this code we wanted to check that the pointer operand is of pointer type (and not of vector type).
llvm-svn: 153468
2012-03-26 21:00:53 +00:00
Nadav Rotem e63e59cc44 PR12357: The pointer was used before it was checked.
llvm-svn: 153465
2012-03-26 20:39:18 +00:00
Andrew Trick 14779cc49e LSR ivchain bug fix: corner case with ConstantExpr.
Fixes PR11950.

llvm-svn: 153463
2012-03-26 20:28:37 +00:00
Andrew Trick 356a896394 comment typo
llvm-svn: 153462
2012-03-26 20:28:35 +00:00
Chris Lattner b1e2e1e091 eliminate an unneeded branch, part of PR12357
llvm-svn: 153458
2012-03-26 19:13:57 +00:00
Eric Christopher 2b40fdf3ae Tidy.
llvm-svn: 153456
2012-03-26 19:09:40 +00:00
Eric Christopher f16bee8682 Tidy.
llvm-svn: 153455
2012-03-26 19:09:38 +00:00
Andrew Trick e51feea79c LSR cleanup: potential bug caught by PVS-Studio.
Thanks Andrey.

llvm-svn: 153451
2012-03-26 18:03:16 +00:00
Kostya Serebryany 6f8a776041 [tsan] treat vtable pointer updates in a special way (requires tbaa); fix a bug (forgot to return true after instrumenting); make sure the tsan tests are run
llvm-svn: 153448
2012-03-26 17:35:03 +00:00
Craig Topper 6e80c28017 Prune some includes and forward declarations.
llvm-svn: 153429
2012-03-26 06:58:25 +00:00
Chandler Carruth ef82cf5b1e Teach the function cloner (and thus the inliner) to simplify PHINodes
aggressively. There are lots of dire warnings about this being expensive
that seem to predate switching to the TrackingVH-based value remapper
that is automatically updated on RAUW. This makes it easy to not just
prune single-entry PHIs, but to fully simplify PHIs, and to recursively
simplify the newly inlined code to propagate PHINode simplifications.

This introduces a bit of a thorny problem though. We may end up
simplifying a branch condition to a constant when we fold PHINodes, and
we would like to nuke any dead blocks resulting from this so that time
isn't wasted continually analyzing them, but this isn't easy. Deleting
basic blocks *after* they are fully cloned and mapped into the new
function currently requires manually updating the value map. The last
piece of the simplification-during-inlining puzzle will require either
switching to WeakVH mappings or some other piece of refactoring. I've
left a FIXME in the testcase about this.

llvm-svn: 153410
2012-03-25 10:34:54 +00:00
Chandler Carruth 2121199241 Move the instruction simplification of callsite arguments in the inliner
to instead rely on much more generic and powerful instruction
simplification in the function cloner (and thus inliner).

This teaches the pruning function cloner to use instsimplify rather than
just the constant folder to fold values during cloning. This can
simplify a large number of things that constant folding alone cannot
begin to touch. For example, it will realize that 'or' and 'and'
instructions with certain constant operands actually become constants
regardless of what their other operand is. It also can thread back
through the caller to perform simplifications that are only possible by
looking up a few levels. In particular, GEPs and pointer testing tend to
fold much more heavily with this change.

This should (in some cases) have a positive impact on compile times with
optimizations on because the inliner itself will simply avoid cloning
a great deal of code. It already attempted to prune proven-dead code,
but now it will use the stronger simplifications to prove more code
dead.

llvm-svn: 153403
2012-03-25 04:03:40 +00:00
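
A hedged sketch of the key substitution inside the cloning loop: try the full instruction simplifier first and fall back to plain constant folding. The helper is mine, and the way the real code threads its value map is simplified away here:

  #include "llvm/Instruction.h"
  #include "llvm/Analysis/InstructionSimplify.h"
  #include "llvm/Analysis/ConstantFolding.h"

  // InstSimplify catches cases constant folding cannot, e.g. 'and X, 0' or a
  // GEP/pointer test that folds regardless of the non-constant operand.
  static llvm::Value *simplifyCloned(llvm::Instruction *I,
                                     const llvm::TargetData *TD) {
    if (llvm::Value *V = llvm::SimplifyInstruction(I, TD))
      return V;
    return llvm::ConstantFoldInstruction(I, TD); // previous behaviour
  }
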
Chandler Carruth 0c72e3f469 Add an asserting ValueHandle to the block simplification code which will
fire if anything ever invalidates the assumption of a terminator
instruction being unchanged throughout the routine.

I've convinced myself that the current definition of simplification
precludes such a transformation, so I think getting some asserts
coverage that we don't violate this agreement is sufficient to make this
code safe for the foreseeable future.

Comments to the contrary or other suggestions are of course welcome. =]
The bots are now happy with this code though, so it appears the bug here
has indeed been fixed.

llvm-svn: 153401
2012-03-25 03:29:25 +00:00
Chandler Carruth 17fc6ef234 Don't form a WeakVH around the sentinel node in the instructions BB
list. This is a bad idea. ;] I'm hopeful this is the bug that's showing
up with the MSVC bots, but we'll see.

It is definitely unnecessary. InstSimplify won't do anything to
a terminator instruction, we don't need to even include it in the
iteration range. We can also skip the now dead terminator check,
although I've made it an assert to help document that this is an
important invariant.

I'm still a bit queasy about this because there is an implicit
assumption that the terminator instruction cannot be RAUW'ed by the
simplification code. While that appears to be true at the moment, I see
no guarantee that would ensure it remains true in the future. I'm
looking at the cleanest way to solve that...

llvm-svn: 153399
2012-03-24 23:03:27 +00:00
Chandler Carruth cf1b585f60 Refactor the interface to recursively simplifying instructions to be tad
bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

llvm-svn: 153393
2012-03-24 21:11:24 +00:00
Francois Pichet 4b9ab74690 Fix the MSVC build.
llvm-svn: 153366
2012-03-24 01:36:37 +00:00
Andrew Trick 25553ab5fe More IndVarSimplify cleanup.
llvm-svn: 153362
2012-03-24 00:51:17 +00:00
Kostya Serebryany e505a5abe9 add EP_OptimizerLast extension point
llvm-svn: 153353
2012-03-23 23:22:59 +00:00
Dan Gohman e3ed2b0699 Don't convert objc_retainAutoreleasedReturnValue to objc_retain if it
is retaining the return value of an invoke that it immediately follows.

llvm-svn: 153344
2012-03-23 18:09:00 +00:00
Dan Gohman 5c70fadc17 It's not possible to insert code immediately after an invoke in the
same basic block, and it's not safe to insert code in the successor
blocks if the edges are critical edges. Splitting those edges is
possible, but undesirable, especially on the unwind side. Instead,
make the bottom-up code motion to consider invokes to be part of
their successor blocks, rather than part of their parent blocks, so
that it doesn't push code past them and onto the edges. This fixes
PR12307.

llvm-svn: 153343
2012-03-23 17:47:54 +00:00
Duncan Sands a11ef6e4ea When propagating equalities, eg replacing A with B in every basic block
dominated by Root, check that B is available throughout the scope.  This
is obviously true (famous last words?) given the current logic, but the
check may be helpful if more complicated reasoning is added one day.

llvm-svn: 153323
2012-03-23 08:45:52 +00:00
Duncan Sands 8f897dc88b Indentation.
llvm-svn: 153322
2012-03-23 08:29:04 +00:00
Andrew Trick e3502cb204 Remove -enable-lsr-retry in time for 3.1.
llvm-svn: 153287
2012-03-22 22:42:51 +00:00
Andrew Trick d97b83e320 Remove -enable-lsr-nested in time for 3.1.
Tests cases have been removed but attached to open PR12330.

llvm-svn: 153286
2012-03-22 22:42:45 +00:00
Dan Gohman 817a7c6fdf Refactor the code for visiting instructions out into helper functions.
llvm-svn: 153267
2012-03-22 18:24:56 +00:00
Andrew Trick 0654989062 Remove unused simplifyIVUsers
llvm-svn: 153262
2012-03-22 17:47:30 +00:00
Andrew Trick f47d0af551 Remove -enable-iv-rewrite, which has been unsupported since 3.0.
llvm-svn: 153260
2012-03-22 17:10:11 +00:00
Chris Lattner 7d7dba3c92 don't use "signed", just something I noticed in patches flying by.
llvm-svn: 153237
2012-03-22 03:46:58 +00:00
Kostya Serebryany 84a7f2e8e9 [asan] fix one more bug related to long double
llvm-svn: 153189
2012-03-21 15:28:50 +00:00
Eric Christopher 7d522f161d Zap some dead code pointed out by Chandler.
llvm-svn: 153150
2012-03-20 23:28:58 +00:00
Andrew Trick f7711010e1 LoopSimplify bug fix. Handle indirect loop back edges.
Do not call SplitBlockPredecessors on a loop preheader when one of the
predecessors is an indirectbr. Otherwise, you will hit this assert:
!isa<IndirectBrInst>(Preds[i]->getTerminator()) && "Cannot split an edge from an IndirectBrInst"

llvm-svn: 153134
2012-03-20 21:24:52 +00:00
Andrew Trick bb01cbb312 whitespace
llvm-svn: 153133
2012-03-20 21:24:47 +00:00
Kostya Serebryany c58dc9fcd2 [asan] don't emit __asan_mapping_offset/__asan_mapping_scale by default -- they are currently used only for experiments
llvm-svn: 153040
2012-03-19 16:40:35 +00:00
Bill Wendling 55b6b2b6a9 Revert r152907.
llvm-svn: 152935
2012-03-16 18:20:54 +00:00
Bill Wendling a2a26b546c The pointer operand of the store instruction may carry its own
alignment. If that's the case, then we want to make sure that we don't increase
the alignment of the store instruction, because if we increase it to be "more
aligned" than the pointer, code-gen may use instructions which require a greater
alignment than the pointer guarantees.
<rdar://problem/11043589>

llvm-svn: 152907
2012-03-16 07:40:08 +00:00
Chandler Carruth b37fc13a36 Rip out support for 'llvm.noinline'. This thing has a strange history...
It was added in 2007 as the first cut at supporting no-inline
attributes, but we didn't have function attributes of any form at the
time. However, it was added without any mention in the LangRef or other
documentation.

Later on, in 2008, Devang added function notes for 'inline=never' and
then turned them into proper function attributes. From that point
onward, as far as I can tell, the world moved on, and no one has touched
'llvm.noinline' in any meaningful way since.

Its time has now come. We have had better mechanisms for doing this for
a long time, all the frontends I'm aware of use them, and this is just
holding back progress. Given that it was never a documented feature of
the IR, I've provided no auto-upgrade support. If people know of real,
in-the-wild bitcode that relies on this, yell at me and I'll add it, but
I *seriously* doubt anyone cares.

llvm-svn: 152904
2012-03-16 06:10:15 +00:00
Chandler Carruth d7a5f2adb0 Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

llvm-svn: 152903
2012-03-16 06:10:13 +00:00
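
A hedged sketch of the direct check that replaces the set, using the 3.1-era attribute API as I recall it (treat the names as assumptions):

  #include "llvm/Attributes.h"
  #include "llvm/Function.h"
  #include "llvm/Support/CallSite.h"

  // Always-inliner filter: only direct calls to definitions that carry the
  // alwaysinline attribute are candidates; everything else is ignored.
  static bool isAlwaysInlineCandidate(llvm::CallSite CS) {
    llvm::Function *Callee = CS.getCalledFunction();
    return Callee && !Callee->isDeclaration() &&
           Callee->hasFnAttr(llvm::Attribute::AlwaysInline);
  }
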
Andrew Trick 070e540a3e LSR fix: Add isSimplifiedLoopNest to IVUsers analysis.
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.

I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.

Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

llvm-svn: 152892
2012-03-16 03:16:56 +00:00
Eli Friedman e06535b2f6 In InstCombiner::visitOr, make sure we reverse the operand swap used for checking for or-of-xor operations after those checks; a later check expects that any constant will be in Op1. PR12234.
llvm-svn: 152884
2012-03-16 00:52:42 +00:00
Rafael Espindola f58927855b Short term fix for pr12270 before we change dominates to handle unreachable
code.
While here, reduce indentation.

llvm-svn: 152803
2012-03-15 15:52:59 +00:00
Bill Wendling 7fa1be77cc Use an iterator instead of calling .size() on the worklist every time, which is wasteful.
llvm-svn: 152794
2012-03-15 11:19:41 +00:00
Chandler Carruth be2ccf01b7 Remove the basic inliner. This was added in 2007, and hasn't really
changed since. No one was using it. It is yet another consumer of the
InlineCost interface that I'd like to change.

llvm-svn: 152769
2012-03-15 01:37:56 +00:00
Chandler Carruth 3904590ba8 This pass didn't want the inline cost per-se, it just wants generic code
metrics.

llvm-svn: 152760
2012-03-15 00:29:10 +00:00
Aaron Ballman a733297fa6 Fixed a transform crash when setting a negative size value for memset. Fixes PR12202.
llvm-svn: 152756
2012-03-15 00:05:31 +00:00
Kostya Serebryany abad002d55 [tsan] use FunctionBlackList
llvm-svn: 152755
2012-03-14 23:33:24 +00:00
Kostya Serebryany 01401cec00 [asan] rename class BlackList to FunctionBlackList and move it into a separate file -- we will need the same functionality in ThreadSanitizer
llvm-svn: 152753
2012-03-14 23:22:10 +00:00
Dan Gohman 532fb8131b When an invoke is marked with metadata indicating its unwind edge
should be ignored by ARC optimization, don't insert new ARC runtime
calls in the unwind destination.

llvm-svn: 152748
2012-03-14 23:05:06 +00:00
Chandler Carruth 30b8416d2c Change where we enable the heuristic that delays inlining into functions
which are small enough to themselves be inlined. Delaying in this manner
can be harmful if the function is ineligible for inlining in some (or
many) contexts as it pessimizes the code of the function itself in the
event that inlining does not eventually happen.

Previously the check was written to only do this delaying of inlining
for static functions in the hope that they could be entirely deleted and
in the knowledge that all callers of static functions will have the
opportunity to inline if it is in fact profitable. However, with C++ we
get two other important sources of functions where the definition is
always available for inlining: inline functions and templated functions.
This patch generalizes the inliner to allow linkonce-ODR (the linkage
such C++ routines receive) to also qualify for this delay-based
inlining.

Benchmarking across a range of large real-world applications shows
roughly 2% size increase across the board, but an average speedup of
about 0.5%. Some benchmarks improved by over 2%, and the 'clang' binary
itself (when bootstrapped with this feature) shows a 1% -O0 performance
improvement when run over all Sema, Lex, and Parse source code smashed
into a single file. A clean re-build of Clang+LLVM with a bootstrapped
Clang shows approximately 2% improvement, but that measurement is often
noisy.

llvm-svn: 152737
2012-03-14 20:16:41 +00:00
Pete Cooper 615fd897e0 Target override to allow CodeGenPrepare to sink address operands to intrinsics in the same way it currently does for loads and stores
llvm-svn: 152666
2012-03-13 20:59:56 +00:00
Chris Lattner 87fa77bd8a enhance jump threading to preserve TBAA information when PRE'ing loads,
fixing rdar://11039258, an issue that came up when inspecting clang's 
bootstrapped codegen.

llvm-svn: 152635
2012-03-13 18:07:41 +00:00
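
A hedged sketch of what preserving TBAA across load PRE amounts to: copy the !tbaa tag from the original load onto the duplicated one. The helper name is mine:

  #include "llvm/Instructions.h"
  #include "llvm/LLVMContext.h"
  #include "llvm/Metadata.h"

  // Without this, the duplicated load has no TBAA tag and later alias
  // analysis becomes more conservative.
  static void copyTBAATag(llvm::LoadInst *NewLI, const llvm::LoadInst *OrigLI) {
    if (llvm::MDNode *Tag = OrigLI->getMetadata(llvm::LLVMContext::MD_tbaa))
      NewLI->setMetadata(llvm::LLVMContext::MD_tbaa, Tag);
  }
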
Dan Gohman eab06fa3c9 Teach globalopt how to evaluate an invoke with a non-void return type.
llvm-svn: 152634
2012-03-13 18:01:37 +00:00
Chandler Carruth 595fda8466 When inlining a function and adding its inner call sites to the
candidate set for subsequent inlining, try to simplify the arguments to
the inner call site now that inlining has been performed.

The goal here is to propagate and fold constants through deeply nested
call chains. Without doing this, we lose the inliner bonus that should
be applied because the arguments don't match the exact pattern the cost
estimator uses.

Reviewed on IRC by Benjamin Kramer.

llvm-svn: 152556
2012-03-12 11:19:33 +00:00
Stepan Dyatkovskiy 97b02fc1b3 llvm::SwitchInst
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes about case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
Duncan Sands 14eb175836 Add statistics on removed switch cases, and fix the phi statistic
to count the number of phis changed, not the number visited.

llvm-svn: 152425
2012-03-09 19:21:15 +00:00
Dan Gohman 500b598c5c When identifying exit nodes for the reverse-CFG reverse-post-order
traversal, consider nodes for which the only successors are backedges
which the traversal is ignoring to be exit nodes. This fixes a problem
where the bottom-up traversal was failing to visit split blocks along
split loop backedges. This fixes rdar://10989035.

llvm-svn: 152421
2012-03-09 18:50:52 +00:00
Duncan Sands cca89124a2 Eliminate switch cases that can never match, for example removes all
negative switch cases if the branch condition is known to be positive.
Inspired by a recent improvement to GCC's VRP.

llvm-svn: 152405
2012-03-09 13:45:18 +00:00
Stepan Dyatkovskiy 5b648afb4d Taken into account Duncan's comments for r149481 dated by 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator, and it solves almost all of the described issues: we don't need to mix operand/case/successor indexing anymore. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and case value.

Using the iterator lets us remove the resolveXXXX methods entirely. All indexing conversions are done automatically inside the iterator's getters.

The main way to use the iterator looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in llvm-clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Sebastian Pop 5ce71b18cb fix typos
llvm-svn: 152035
2012-03-05 17:39:47 +00:00
Sebastian Pop 8844e224b8 remove spaces on empty lines
llvm-svn: 152034
2012-03-05 17:39:45 +00:00
Duncan Sands 3eb328574e This is not a common case, in fact it never happens!
llvm-svn: 152027
2012-03-05 12:23:00 +00:00
Chandler Carruth d95357a18e Switch mem2reg to use the new hashing infrastructure.
llvm-svn: 152026
2012-03-05 11:29:56 +00:00
Chandler Carruth e134d1a336 Replace the ad-hoc hashing in GVN with the new hashing infrastructure.
This implicitly fixes a nasty bug in the GVN hashing (that thankfully
could only manifest as a performance bug): actually include the opcode
in the hash. The old code started the hash off with the opcode, but then
overwrote it with the type pointer.

Since this is likely to be pretty hot (GVN being already pretty
expensive) I've included a micro-optimization to just not bother with
the varargs hashing if they aren't present. I can't measure any change
in GVN performance due to this, even with a big test case like Duncan's
sqlite one. Everything I see is in the noise floor. That said, this
closes a loophole for a potential scaling problem due to collisions if
the opcode were the differentiating aspect of the expression.

llvm-svn: 152025
2012-03-05 11:29:54 +00:00
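
A hedged sketch of what "include the opcode in the hash" looks like with the new hashing infrastructure (llvm/ADT/Hashing.h); the Expression fields shown are simplified stand-ins for GVN's actual struct:

  #include "llvm/ADT/Hashing.h"
  #include "llvm/ADT/SmallVector.h"

  struct Expression {
    unsigned Opcode;
    void *Ty; // stand-in for the Type* pointer
    llvm::SmallVector<unsigned, 4> VarArgs;

    llvm::hash_code hash() const {
      // Opcode, type, and operands all feed the hash; the old code seeded the
      // hash with the opcode and then overwrote it with the type pointer.
      return llvm::hash_combine(
          Opcode, Ty,
          llvm::hash_combine_range(VarArgs.begin(), VarArgs.end()));
    }
  };
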
Duncan Sands 4d928e7dff Nick pointed out on IRC that GVN's propagateEquality wasn't propagating
equalities into phi node operands for which the equality is known to
hold in the incoming basic block.  That's because replaceAllDominatedUsesWith
wasn't handling phi nodes correctly in general (that this didn't give wrong
results was just luck: the specific way GVN uses replaceAllDominatedUsesWith
precluded wrong changes to phi nodes).

llvm-svn: 152006
2012-03-04 13:25:19 +00:00
Bill Wendling 97b9359623 Do trivial CSE of dead BBs during codegen preparation.
Some BBs can become dead after codegen preparation. If we delete them here, it
could help enable tail-call optimizations later on.
<rdar://problem/10256573>

llvm-svn: 152002
2012-03-04 10:46:01 +00:00
Evgeniy Stepanov d33e3d8c6e ASan: use getTypeAllocSize instead of getTypeStoreSize.
This change replaces getTypeStoreSize with getTypeAllocSize in AddressSanitizer
instrumentation for stack allocations.

One case where old behaviour produced undesired results is an optimization in
InstCombine pass (PromoteCastOfAllocation), which can replace alloca(T) with
alloca(S), where S has the same AllocSize, but a smaller StoreSize. Another
case is memcpy(long double => long double), where ASan will poison bytes 10-15
of a stack-allocated long double (StoreSize 10, AllocSize 16,
sizeof(long double) = 16).

See http://llvm.org/bugs/show_bug.cgi?id=12047 for more context.

llvm-svn: 151887
2012-03-02 10:41:08 +00:00
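
A small hedged illustration of the difference for x86 long double, using TargetData directly; the data-layout string is trimmed down and only meant to set the f80 alignment:

  #include "llvm/LLVMContext.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/Target/TargetData.h"

  void longDoubleSizes() {
    llvm::LLVMContext Ctx;
    llvm::TargetData TD("e-f80:128:128"); // assumed minimal layout string
    llvm::Type *LD = llvm::Type::getX86_FP80Ty(Ctx);
    uint64_t StoreSize = TD.getTypeStoreSize(LD); // 10 bytes actually written
    uint64_t AllocSize = TD.getTypeAllocSize(LD); // 16 bytes in a stack slot
    (void)StoreSize; (void)AllocSize;
  }
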
Dan Gohman 362eb69f24 Fix an iterator invalidation problem. operator[] on a DenseMap
can insert a new element, invalidating iterators. Use find
instead, and handle the case where the key is not found explicitly.

llvm-svn: 151871
2012-03-02 01:26:46 +00:00
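
The class of bug fixed above, as a small hedged example with DenseMap: operator[] may insert (and therefore grow and rehash), invalidating live iterators, while find() never does:

  #include "llvm/ADT/DenseMap.h"

  int lookup(llvm::DenseMap<int, int> &M, int Key) {
    // Buggy pattern: M[Key] can insert Key, growing the table and
    // invalidating any iterators held elsewhere.
    //   return M[Key];
    // Safe pattern: find() never inserts; handle the missing key explicitly.
    llvm::DenseMap<int, int>::iterator It = M.find(Key);
    return It == M.end() ? 0 : It->second;
  }
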
Dan Gohman 55b067427b Misc micro-optimizations.
llvm-svn: 151869
2012-03-02 01:13:53 +00:00
Duncan Sands bb2fe65542 Have GVN also do condition propagation when the right-hand side is not
a constant.  This fixes PR1768.

llvm-svn: 151713
2012-02-29 11:12:03 +00:00
Bill Wendling f2c78f344e Restrict this transformation to equality conditions.
This transformation is not correct for not-equal conditions:

(trunc x) != C1 & (and x, CA) != C2 -> (and x, CA|CMAX) != C1|C2

Let
  C1 == 0
  C2 == 0
  CA == 0xFF0000
  CMAX == 0xFF
and truncating to i8.

The original truth table:

    x   | A: trunc x != 0 | B: x & 0xFF0000 != 0 | A & B != 0
--------------------------------------------------------------
0x00000 |        0        |          0           |     0
0x00001 |        1        |          0           |     0
0x10000 |        0        |          1           |     0
0x10001 |        1        |          1           |     1

The truth table of the replacement:

    x   | x & 0xFF00FF != 0
----------------------------
0x00000 |        0
0x00001 |        1
0x10000 |        1
0x10001 |        1

So they are different.

llvm-svn: 151691
2012-02-29 01:46:50 +00:00
Pete Cooper 39b5255df4 Reverted r151620 - DSE: Shorten memset when a later store overwrites the start of it. There were all sorts of buildbot issues.
llvm-svn: 151621
2012-02-28 05:06:24 +00:00
Pete Cooper f3862f91de DSE: Shorten memset when a later store overwrites the start of it
llvm-svn: 151620
2012-02-28 04:27:10 +00:00
Benjamin Kramer 93887631d9 Plug a memleak in GlobalOpt.
Found by valgrind.

llvm-svn: 151525
2012-02-27 12:48:24 +00:00
Duncan Sands 9edea84420 Micro-optimization, no functionality change.
llvm-svn: 151524
2012-02-27 12:11:41 +00:00
Duncan Sands 1be25a78f7 The value numbering function is recursive, so it is possible for multiple new
value numbers to be assigned when calculating any particular value number.
Enhance the logic that detects new value numbers to take this into account,
for a tiny compile time speedup.  Fix a comment typo while there.

llvm-svn: 151522
2012-02-27 09:54:35 +00:00
Duncan Sands 27f459519d When performing a conditional branch depending on the value of a comparison
%cmp (eg: A==B) we already replace %cmp with "true" under the true edge, and
with "false" under the false edge.  This change enhances this to replace the
negated compare (A!=B) with "false" under the true edge and "true" under the
false edge.  Reported to improve perlbench results by 1%.

llvm-svn: 151517
2012-02-27 08:14:30 +00:00
Chad Rosier 50e0b81ea9 Add comment.
llvm-svn: 151431
2012-02-25 03:07:57 +00:00
Chad Rosier 07d37bc1ed Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594

llvm-svn: 151429
2012-02-25 02:56:01 +00:00
Chad Rosier e48e5d2945 Fix indentation.
llvm-svn: 151420
2012-02-25 01:10:59 +00:00
Duncan Sands 926d101640 Teach GVN that x+y is the same as y+x and that x<y is the same as y>x.
llvm-svn: 151365
2012-02-24 15:16:31 +00:00
Benjamin Kramer 077e55252a Reflow code, no functionality change.
llvm-svn: 151262
2012-02-23 17:42:19 +00:00
Duncan Sands 4730cb9c7c GCC fails to understand that NextBB is always initialized if EvaluateBlock
returns 'true' and emits a warning.  Help it out.

llvm-svn: 151242
2012-02-23 08:23:06 +00:00
Nick Lewycky 9d0da18597 Use the target-aware constant folder on expressions to improve the chance
they'll be simple enough to simulate, and to reduce the chance we'll encounter
equal but different simple pointer constants.

This removes the symptoms from PR11352 but is not a full fix. A proper fix would
either require a guarantee that two constant objects we simulate are folded
when equal, or a different way of handling equal pointers (ie., trying a
constantexpr icmp on them to see whether we know they're equal or non-equal or
unsure).

llvm-svn: 151093
2012-02-21 22:08:06 +00:00
Benjamin Kramer c7a22fe76b Fix unsigned off-by-one in comment.
llvm-svn: 151056
2012-02-21 13:40:06 +00:00
Benjamin Kramer 6ee8690aa5 InstCombine: Don't transform a signed icmp of two GEPs into a signed compare of the indices.
This transformation is not safe in some pathological cases (signed icmp of pointers should be an
extremely rare thing, but it's valid IR!). Add an explanatory comment.

Kudos to Duncan for pointing out this edge case (and not giving up explaining it until I finally got it).

llvm-svn: 151055
2012-02-21 13:31:09 +00:00
Nick Lewycky 519561f418 Check for the correct size in the invariant marker.
llvm-svn: 151003
2012-02-20 23:32:26 +00:00
Chad Rosier 47eeddde24 Fix 80-column violation.
llvm-svn: 150998
2012-02-20 23:13:17 +00:00
Benjamin Kramer ac8ecc4e7e InstCombine: Removing the base from the address calculation is only safe when the GEPs are inbounds.
llvm-svn: 150978
2012-02-20 18:45:10 +00:00
Benjamin Kramer 7adb189538 InstCombine: When comparing two GEPs that were derived from the same base pointer but use different types, expand the offset calculation and do the compare on the offset if profitable.
This came up in SmallVector code.

llvm-svn: 150962
2012-02-20 15:07:47 +00:00
Benjamin Kramer 7746eb62fb InstCombine: Make OptimizePointerDifference more aggressive.
- Ignore pointer casts.
- Also expand GEPs that aren't constantexprs when they have one use or only constant indices.

- We now compile "&foo[i] - &foo[j]" into "i - j".

llvm-svn: 150961
2012-02-20 14:34:57 +00:00
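
A tiny C++ example of the source pattern the commit above refers to; after this change instcombine reduces the pointer difference to a plain integer subtraction:

  #include <cstddef>

  int foo[64];

  // Optimized IR computes simply (i - j); no address arithmetic remains.
  std::ptrdiff_t distance(std::size_t i, std::size_t j) {
    return &foo[i] - &foo[j];
  }
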
Nick Lewycky 60829a587a Rename class Evaluate to Evaluator and put it in an anonymous namespace.
llvm-svn: 150947
2012-02-20 03:25:59 +00:00
Nick Lewycky 73be5e31a6 Move EvaluateFunction and EvaluateBlock into a class, and make the class store
the information that they pass around between them. No functionality change!

llvm-svn: 150939
2012-02-19 23:26:27 +00:00
Ahmed Charles 636a3d618c Remove dead code. Improve llvm_unreachable text. Simplify some control flow.
llvm-svn: 150918
2012-02-19 11:37:01 +00:00
Dan Gohman 0155f30a9c Calls and invokes with the new clang.arc.no_objc_arc_exceptions
metadata may still unwind, but only in ways that the ARC
optimizer doesn't need to consider. This permits more
aggressive optimization.

llvm-svn: 150829
2012-02-17 18:59:53 +00:00
Nick Lewycky 68f9f9d9c8 Add support for invariant.start inside the static constructor evaluator. This is
useful to represent a variable that is const in the source but can't be constant
in the IR because of a non-trivial constructor. If globalopt evaluates the
constructor, and there was an invariant.start with no matching invariant.end
possible, it will mark the global constant afterwards.

llvm-svn: 150794
2012-02-17 06:59:21 +00:00
Bill Wendling aa9a3eae79 Remove redundant comment. Use a more efficient datatype.
llvm-svn: 150780
2012-02-17 02:12:54 +00:00
Bill Wendling 0a8fec2762 Fix some grammar-os and formatting.
llvm-svn: 150779
2012-02-17 02:09:28 +00:00
Eli Friedman c458885c58 loop-rotate shouldn't hoist alloca instructions out of a loop. Patch by Patrik Hägglund, with slightly modified test. Issue reported by Patrik Hägglund on llvmdev.
llvm-svn: 150642
2012-02-16 00:41:10 +00:00
Kostya Serebryany a8531eeb64 [tsan] fix compiler warnings
llvm-svn: 150449
2012-02-14 00:52:07 +00:00
Andrew Trick 10cc45336d Add simplifyLoopLatch to LoopRotate pass.
This folds a simple loop tail into a loop latch. It covers the common (in Fortran) case of postincrement loops. It's a "free" way to expose this type of loop to downstream loop optimizations that bail out on non-canonical loops (getLoopLatch is a heavily used check).

llvm-svn: 150439
2012-02-14 00:00:23 +00:00
Andrew Trick a20f198747 whitespace
llvm-svn: 150438
2012-02-14 00:00:19 +00:00
Devang Patel 698452bc7e Check against umin while converting fcmp into an icmp.
llvm-svn: 150425
2012-02-13 23:05:18 +00:00
Dan Gohman eb6e01533a Just like in regular escape analysis, loads and stores through
(but not of) a block pointer do not cause the block pointer to
escape. This fixes rdar://10803830.

llvm-svn: 150424
2012-02-13 22:57:02 +00:00
Kostya Serebryany e2a0e4163a ThreadSanitizer, a race detector. First LLVM commit.
Clang patch (flags) will follow shortly.
The run-time library will also follow, but not immediately.

llvm-svn: 150423
2012-02-13 22:50:51 +00:00
Ahmed Charles 32e983e4fc Fix various issues (or do cleanups) found by enabling certain MSVC warnings.
- Use unsigned literals when the desired result is unsigned. This mostly allows unsigned/signed mismatch warnings to be less noisy even if they aren't on by default.
- Remove misplaced llvm_unreachable.
- Add static to a declaration of a function on MSVC x86 only.
- Change some instances of calling a static function through a variable to simply calling that function while removing the unused variable.

llvm-svn: 150364
2012-02-13 06:30:56 +00:00
Nick Lewycky c1572e4c90 Handle InvokeInst in EvaluateBlock. Don't try to support exceptions, it's just
that no optz'ns have run yet to convert invokes to calls.

llvm-svn: 150326
2012-02-12 05:09:35 +00:00
Nick Lewycky f285256f72 false is totally null!
llvm-svn: 150324
2012-02-12 02:17:18 +00:00
Nick Lewycky 4b273cb7ea Remove redundant getAnalysis<> calls in GlobalOpt. Add a few Itanium ABI calls
to TargetLibraryInfo and use one of them in GlobalOpt.

llvm-svn: 150323
2012-02-12 02:15:20 +00:00