Commit Graph

4588 Commits

Author SHA1 Message Date
Nadav Rotem a6b91ac307 Add a cost model analysis that allows us to estimate the cost of IR-level instructions.
llvm-svn: 167324
2012-11-02 21:48:17 +00:00
Chandler Carruth 5da3f0512e Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code, which carries high risk and low test coverage and is not likely to
be finished before the 3.2 release. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

llvm-svn: 167222
2012-11-01 09:14:31 +00:00
Chandler Carruth 7ec5085e01 Revert the series of commits starting with r166578 which introduced the
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.

These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.

Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)

After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.

Summary of reverted revisions:

r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
         Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
         since I cancelled the command. Bleh, sorry about this!"
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
         on the address space.
llvm-svn: 167221
2012-11-01 08:07:29 +00:00
Benjamin Kramer c914ab6e3c Fix a couple of comment typos.
llvm-svn: 167113
2012-10-31 11:25:32 +00:00
Benjamin Kramer 24c643b6de DependenceAnalysis: Don't crash if there is no constant operand.
This makes the code match the comments. Resolves a crash in loop idiom (PR14219).

llvm-svn: 167110
2012-10-31 09:20:38 +00:00
Bob Wilson 09d16aa87e Remove code to saturate profile counts.
We may need to change the way profile counter values are stored, but
saturation is the wrong thing to do.  Just remove it for now.

Patch by Alastair Murray!

llvm-svn: 166938
2012-10-29 17:27:39 +00:00
Benjamin Kramer 5bc077aa88 SCEV validator: Ignore CouldNotCompute/undef on both sides. This is mostly noise and blocks finding more severe bugs.
llvm-svn: 166873
2012-10-27 11:36:07 +00:00
Benjamin Kramer 24d270db57 SCEV validator: Add workarounds for some common false positives due to the way it handles strings.
llvm-svn: 166872
2012-10-27 10:45:01 +00:00
Benjamin Kramer 6dc1e2f287 Remove LoopDependenceAnalysis.
It was unmaintained and not much more than a stub. The new DependenceAnalysis
pass is both more general and complete.

llvm-svn: 166810
2012-10-26 20:25:01 +00:00
Benjamin Kramer 214935ee70 Add a basic verifier for SCEV's backedge taken counts.
Enabled with -verify-scev. This could be extended significantly but hopefully
catches the common cases now. Note that it's not enabled by default in any
configuration because the way it tries to distinguish SCEVs is still fragile and
may produce false positives. Also the test-suite isn't clean yet, one example
is that it fails if a pass drops an NSW bit but it's still present in SCEV's
cache. Cleaning up all those cases will take some time.

llvm-svn: 166786
2012-10-26 17:31:32 +00:00
Nadav Rotem 15198e94d2 Fix a crash in SimplifyDemandedBits of vectors of pointers.
PR14183.

llvm-svn: 166785
2012-10-26 17:17:05 +00:00
Nick Lewycky c86037ff01 Hoist out some work done inside a loop doing a linear scan over all
instructions in a block. GetUnderlyingObject is more expensive than it looks as
it can, for instance, call SimplifyInstruction.

This might have some behavioural changes in odd corner cases, but only because
of some strange artefacts of the original implementation. If you were relying
on those, we can fix that by replacing this with a smarter algorithm. The change
passes the existing tests.

llvm-svn: 166754
2012-10-26 04:43:47 +00:00
Nadav Rotem 8255ceb2cf Revert 166726 because it may have broken a number of SPEC tests. PR14183.
llvm-svn: 166739
2012-10-25 23:51:48 +00:00
Nadav Rotem bb4cfb5ee1 Fix a crash in ValueTracking. Add support for vectors of pointers.
llvm-svn: 166726
2012-10-25 21:52:52 +00:00
Benjamin Kramer 71a3512d60 DependenceAnalysis: Push #includes down into the implementation.
llvm-svn: 166688
2012-10-25 16:15:22 +00:00
Hal Finkel 30bd9346a0 getSmallConstantTripMultiple should never return zero.
When the trip count is -1, getSmallConstantTripMultiple could return zero,
and this would cause runtime loop unrolling to assert. Instead of returning
zero, one is now returned (consistent with the existing overflow cases).
Fixes PR14167.

llvm-svn: 166612
2012-10-24 19:46:44 +00:00
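
A minimal illustrative sketch of the rule just described; the helper and its
name are hypothetical, only the behaviour (map a computed multiple of 0 to 1)
comes from the message above.

// Hypothetical helper: a trip multiple of 0 (the trip count of -1 case)
// would let runtime loop unrolling divide by zero, so report 1 instead,
// consistent with the existing overflow cases.
static unsigned clampTripMultiple(unsigned Multiple) {
  return Multiple == 0 ? 1 : Multiple;
}
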
Micah Villmow bf3eeb2dfc Add some cleanup to the DataLayout changes requested by Chandler.
llvm-svn: 166607
2012-10-24 18:36:13 +00:00
Micah Villmow 12d9127833 Add in support for getIntPtrType to get the pointer type based on the address space.
This checkin also adds in some tests that utilize these paths and updates some of the
clients.

llvm-svn: 166578
2012-10-24 15:52:52 +00:00
Bill Wendling 5858b56ce3 Ignore unreachable blocks when doing memory dependence analysis on non-local
loads. It's not really profitable and may result in GVN going into an infinite
loop when it hits constructs like this:

     %x = gep %some.type %x, ...

Found via an LTO build of LLVM.

llvm-svn: 166490
2012-10-23 18:37:11 +00:00
Nadav Rotem 4dc976fbcb revert r166264 because the LTO build is still failing
llvm-svn: 166340
2012-10-19 21:28:43 +00:00
Benjamin Kramer a225ed8d2b SCEVExpander: Don't crash when trying to merge two constant phis.
Just constant fold them so they can't cause any trouble. Fixes PR12627.

llvm-svn: 166286
2012-10-19 16:37:30 +00:00
Nadav Rotem 4985ddc5e0 recommit the patch that makes LSR and LowerInvoke use the TargetTransform interface.
llvm-svn: 166264
2012-10-19 04:27:49 +00:00
Bob Wilson d6d9ccca38 Temporarily revert the TargetTransform changes.
The TargetTransform changes are breaking LTO bootstraps of clang.  I am
working with Nadav to figure out the problem, but I am reverting it for now
to get our buildbots working.

This reverts svn commits: 165665 165669 165670 165786 165787 165997
and I have also reverted clang svn 165741

llvm-svn: 166168
2012-10-18 05:43:52 +00:00
Micah Villmow 4bb926d91d Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Sebastian Pop e9623261ad fix warning
DependenceAnalysis.cpp:1164:32: warning: implicit truncation from 'int' to bitfield changes value from -5 to 3
      [-Wconstant-conversion]
    Result.DV[Level].Direction &= ~Dependence::DVEntry::GT;
                               ^  ~~~~~~~~~~~~~~~~~~~~~~~~

Patch from Preston Briggs <preston.briggs@gmail.com>.

llvm-svn: 165784
2012-10-12 02:04:32 +00:00
Micah Villmow 0c61134d8d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow 083189730e Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
Sebastian Pop 59b61b9e2c dependence analysis
Patch from Preston Briggs <preston.briggs@gmail.com>.

This is an updated version of the dependence-analysis patch, including an MIV
test based on Banerjee's inequalities.

It's a fairly complete implementation of the paper

    Practical Dependence Testing
    Gina Goff, Ken Kennedy, and Chau-Wen Tseng
    PLDI 1991

It cannot yet propagate constraints between coupled RDIV subscripts (discussed
in Section 5.3.2 of the paper).

It's organized as a FunctionPass with a single entry point that supports testing
for dependence between two instructions in a function. If there's no dependence,
it returns null. If there's a dependence, it returns a pointer to a Dependence
which can be queried about details (what kind of dependence, is it loop
independent, direction and distance vector entries, etc). I haven't included
every imaginable feature, but there's a good selection that should be adequate
for supporting many loop transformations. Of course, it can be extended as
necessary.

Included in the patch file are many test cases, commented with C code showing
the loops and array references.

llvm-svn: 165708
2012-10-11 07:32:34 +00:00
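
A rough sketch of how a client pass might consume the interface described
above. The member names (depends, isLoopIndependent, getLevels, getDirection),
the header path, and the ownership convention are assumptions drawn from this
description, not verified against the tree at this revision.

#include "llvm/Analysis/DependenceAnalysis.h"   // header name assumed
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

// Hypothetical client: query the pass for a pair of instructions and,
// if a dependence exists, print a few of its queryable details.
static void reportDependence(DependenceAnalysis &DA,
                             Instruction *Src, Instruction *Dst) {
  // A null result means the pass proved there is no dependence.
  Dependence *D = DA.depends(Src, Dst, /*PossiblyLoopIndependent=*/true);
  if (!D)
    return;
  if (D->isLoopIndependent())
    errs() << "loop-independent dependence\n";
  for (unsigned Level = 1; Level <= D->getLevels(); ++Level)
    errs() << "  direction at level " << Level << ": "
           << D->getDirection(Level) << "\n";
  delete D;   // assumed: the caller owns the returned Dependence
}
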
Nadav Rotem e10328737d Add a new interface to allow IR-level passes to access codegen-specific information.
llvm-svn: 165665
2012-10-10 22:04:55 +00:00
Bill Wendling ff758fbd45 Use the attribute enums to query if a function has an attribute.
llvm-svn: 165551
2012-10-09 21:49:51 +00:00
Bill Wendling 8ccd6ca199 Use the attribute enums to query if a parameter has an attribute.
llvm-svn: 165550
2012-10-09 21:38:14 +00:00
Bill Wendling c9b22d735a Create enums for the different attributes.
We use the enums to query whether an Attributes object has that attribute. The
opaque layer is responsible for knowing where that specific attribute is stored.

llvm-svn: 165488
2012-10-09 07:45:08 +00:00
Bill Wendling 375eb1f980 Remove more uses of the attribute enums by supplying appropriate query methods for them.
No functionality change intended.

llvm-svn: 165466
2012-10-09 00:28:54 +00:00
Nick Lewycky 7c3b5d9444 Give CaptureTracker::shouldExplore a base implementation. Most users want to do
the same thing. No functionality change.

llvm-svn: 165435
2012-10-08 22:12:48 +00:00
Micah Villmow cdfe20b97f Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Bob Wilson e0b1dea267 Make sure always-inline functions get inlined. <rdar://problem/12423986>
Without this change, when the estimated cost for inlining a function with
an "alwaysinline" attribute was lower than the inlining threshold, the
getInlineCost function was returning that estimated cost rather than the
special InlineCost::AlwaysInlineCost value. That is fine in the normal
inlining case, but it can fail when the inliner considers the opportunity
cost of inlining into an internal or linkonce-odr function. It may decide
not to inline the always-inline function in that case. The fix here is just
to make getInlineCost always return the special value for always-inline
functions. I ran into this building clang with libc++. Tablegen failed to
link because of an always-inline function that was not inlined. I have been
unable to reduce the testcase down to a reasonable size.

llvm-svn: 165367
2012-10-07 01:11:19 +00:00
Duncan Sands 271ea6cdc5 The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.

llvm-svn: 165226
2012-10-04 13:36:31 +00:00
Bill Wendling 5d637b7d5b Use method to query for NoAlias attribute.
llvm-svn: 165211
2012-10-04 07:17:46 +00:00
Duncan Sands 5e561bbd5d Ignore apparent buffer overruns on external or weak globals. This is a major
source of false positives due to globals being declared in a header with some
kind of incomplete (small) type, but the actual definition being bigger.

llvm-svn: 164912
2012-09-30 07:30:10 +00:00
Sylvestre Ledru 91ce36c986 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru 721cffd53a Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Bill Wendling 863bab689a Remove the `hasFnAttr' method from Function.
The hasFnAttr method has been replaced by querying the Attributes explicitly. No
intended functionality change.

llvm-svn: 164725
2012-09-26 21:48:26 +00:00
Duncan Sands 8598a0ec80 Now that invoke of an intrinsic is possible (for the llvm.do.nothing intrinsic)
teach the callgraph logic to not create callgraph edges to intrinsics for invoke
instructions; it already skips this for call instructions.  Fixes PR13903.

llvm-svn: 164707
2012-09-26 17:16:01 +00:00
Duncan Sands a221eea7db Teach the 'lint' sanity checking pass to detect simple buffer overflows.
llvm-svn: 164671
2012-09-26 07:45:36 +00:00
Duncan Sands 3f4d0b1724 Change the way the lint sanity checking pass detects misaligned memory accesses.
Previously it was only able to detect problems if the pointer was a numerical
value (eg inttoptr i32 1 to i32*), but not if it was an alloca or global.  The
reason was the use of ComputeMaskedBits: imagine you have "alloca i8, align 2",
and ask ComputeMaskedBits what it knows about the bits of the alloca pointer.
It can tell you that the bottom bit is known zero (due to align 2) but it can't
tell you that bit 1 is known one.  That's because the address could be an even
multiple of 2 rather than an odd multiple, eg it might be a multiple of 4.  Thus
trying to use KnownOne is ineffective in the case of an alloca as it will never
have any bits set.  Instead look explicitly for constant offsets from allocas
and globals.

llvm-svn: 164595
2012-09-25 10:00:49 +00:00
Duncan Sands aef83e5f03 GCC doesn't understand that OrigAliasResult having a value is correlated with
ArePhisAssumedNoAlias, and warns that OrigAliasResult may be used uninitialized.
Pacify GCC.

llvm-svn: 164229
2012-09-19 15:43:44 +00:00
Nadav Rotem 4eb3d4b2cf Prevent inlining of callees which allocate lots of memory into a recursive caller.
Example:

void foo() {
 ... foo();   // I'm recursive!

  bar();
}

void bar() { int a[1000]; /* large stack size */ }

rdar://10853263

llvm-svn: 164207
2012-09-19 08:08:04 +00:00
Manman Ren 49d684e1e2 Release build: guard dump functions with
"#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)"

No functional change. Update r163344.

llvm-svn: 163679
2012-09-12 05:06:18 +00:00
Manman Ren c3366ccecb Release build: guard dump functions with "ifndef NDEBUG"
No functional change.

llvm-svn: 163344
2012-09-06 19:55:56 +00:00
Roman Divacky 4717a8d654 Don't cast away const needlessly. Found by gcc48 -Wcast-qual.
llvm-svn: 163324
2012-09-06 15:42:13 +00:00
Arnold Schwaighofer 8dc34cfb99 BasicAA: Recognize cyclic NoAlias phis
Enhances basic alias analysis to recognize phis whose first incoming values are
NoAlias and whose other incoming values are just the phi node itself through
some amount of recursion.

Example: With this change basicaa reports that ptr_phi and ptr2_phi do not alias
each other.

bb:
 ptr = ptr2 + 1

loop:
  ptr_phi = phi [bb, ptr], [loop, ptr_plus_one]
  ptr2_phi = phi [bb, ptr2], [loop, ptr2_plus_one]
  ...
  ptr_plus_one = gep ptr_phi, 1
  ptr2_plus_one = gep ptr2_phi, 1

This enables the elimination of one load in code like the following:

extern int foo;

int test_noalias(int *ptr, int num, int* coeff) {
  int *ptr2 = ptr;
  int result = (*ptr++) * (*coeff--);
  while (num--) {
    *ptr2++ = *ptr;
    result +=  (*coeff--) * (*ptr++);
  }
  *ptr = foo;
  return result;
}

Part 2/2 of fix for PR13564.

llvm-svn: 163319
2012-09-06 14:41:53 +00:00
Arnold Schwaighofer 76dca58c66 BasicAA: GEPs of NoAlias'ing base ptr with equivalent indices are NoAlias
If we can show that the base pointers of two GEPs don't alias each other using
precise analysis and the indices and base offset are equal then the two GEPs
also don't alias each other.
This is primarily needed for the follow up patch that analyses NoAlias'ing PHI
nodes.

Part 1/2 of fix for PR13564.

llvm-svn: 163317
2012-09-06 14:31:51 +00:00
Manman Ren f3fedb6935 JumpThreading: when default destination is the destination of some cases in a
switch, make sure we include the value for the cases when calculating edge
value from switch to the default destination.

rdar://12241132

llvm-svn: 163270
2012-09-05 23:45:58 +00:00
Roman Divacky ad06cee239 Stop casting away const qualifier needlessly.
llvm-svn: 163258
2012-09-05 22:26:57 +00:00
Benjamin Kramer 6c2649ca4e Switch BasicAliasAnalysis' cache to SmallDenseMap.
It relies on clear() being fast and the cache rarely has more than 1 or 2
elements, so give it an inline capacity and always shrink it back down in case
it grows. DenseMap will grow to 64 buckets which makes clear() a lot slower.

llvm-svn: 163215
2012-09-05 16:49:37 +00:00
Bob Wilson 01cfbfe9d0 Be conservative about allocations that may alias the accessed pointer.
If an allocation has a must-alias relation to the access pointer, we treat it
as a Def.  Otherwise, without this check, the code here was just skipping over
the allocation call and ignoring it.  I noticed this by inspection and don't
have a specific testcase that it breaks, but it seems like we need to treat
a may-alias allocation as a Clobber.

llvm-svn: 163127
2012-09-04 03:30:13 +00:00
Bob Wilson dcc54decd5 Fix more fallout from r158919, similar to PR13547.
This code used to only handle malloc-like calls, which do not read memory.
r158919 changed it to check isNoAliasFn(), which includes strdup-like and
realloc-like calls, but it was not checking for dependencies on the memory
read by those calls.

llvm-svn: 163106
2012-09-03 05:15:15 +00:00
Benjamin Kramer e7e5235726 Clean up ProfileDataLoader a bit.
- Overloading operator<< for raw_ostream and pointers is dangerous, it alters
  the behavior of code that includes the header.
- Remove unused ID.
- Use LLVM's byte swapping helpers instead of a hand-coded one.
- Make ReadProfilingData work directly on a pointer.

No functionality change.

llvm-svn: 162992
2012-08-31 12:43:07 +00:00
Bill Wendling 5aed004cf1 Cleanups due to feedback. No functionality change. Patch by Alastair.
llvm-svn: 162979
2012-08-31 05:18:31 +00:00
Benjamin Kramer 8bcc971174 Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadFunctions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.

Fixes PR13694 and probably others.

llvm-svn: 162841
2012-08-29 15:32:21 +00:00
Manman Ren abbb01abea Profile: set branch weight metadata with data generated from profiling.
This patch implements ProfileDataLoader which loads profile data generated by
-insert-edge-profiling and updates branch weight metadata accordingly.

Patch by Alastair Murray.

llvm-svn: 162799
2012-08-28 22:21:25 +00:00
Hongbin Zheng 14c05c409a Remove the block_node_iterator of Region, replace it by the block_iterator.
llvm-svn: 162672
2012-08-27 13:49:24 +00:00
Richard Smith 228e6d4cf3 Fix integer undefined behavior due to signed left shift overflow in LLVM.
Reviewed offline by chandlerc.

llvm-svn: 162623
2012-08-24 23:29:28 +00:00
Manman Ren cf10446ffa BranchProb: modify the definition of an edge in BranchProbabilityInfo to handle
the case of multiple edges from one block to another.

A simple example is a switch statement with multiple values to the same
destination. The definition of an edge is modified from a pair of blocks to
a pair of PredBlock and an index into the successors.

Also set the weight correctly when building SelectionDAG from LLVM IR,
especially when converting a Switch.
IntegersSubsetMapping is updated to calculate the weight for each cluster.

llvm-svn: 162572
2012-08-24 18:14:27 +00:00
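
A minimal sketch of the redefined edge described above; the typedef is an
assumption based on this description, not a quote of the header.

#include <utility>
namespace llvm { class BasicBlock; }

// An edge is now the source block plus an index into its successor list,
// so two switch cases that branch to the same block remain distinct edges
// and can carry distinct weights.
typedef std::pair<const llvm::BasicBlock *, unsigned> Edge; // (pred, succ index)
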
Richard Smith c621af1f60 Fix floating-point divide by zero, in a case where the value was not going to be used anyway.
llvm-svn: 162518
2012-08-24 00:31:45 +00:00
Benjamin Kramer f29db275b2 Reduce duplicated hash map lookups.
llvm-svn: 162362
2012-08-22 15:37:57 +00:00
Benjamin Kramer 34764fe2e4 MemoryBuiltins: Properly guard ObjectSizeOffsetVisitor against cycles in the IR.
The previous fix only checked for simple cycles, use a set to catch longer
cycles too.

Drop the broken check from the ObjectSizeOffsetEvaluator. The BoundsChecking
pass doesn't have to deal with invalid IR like InstCombine does.

llvm-svn: 162120
2012-08-17 19:26:41 +00:00
Benjamin Kramer 4901f0d2a2 Guard MemoryBuiltins against self-looping GEPs, which can occur in unreachable code due to constant propagation.
Fixes PR13621.

llvm-svn: 162098
2012-08-17 14:16:37 +00:00
Bill Wendling e1c54262f4 Set the branch probability of branching to the 'normal' destination of an invoke
instruction to something absurdly high, while setting the probability of
branching to the 'unwind' destination to the bare minimum. This should cause
the normal destination's invoke blocks to be moved closer to the invoke.

PR13612

llvm-svn: 161944
2012-08-15 12:22:35 +00:00
Nadav Rotem 5d4e205874 MemoryDependenceAnalysis attempts to find the first memory dependency for function calls.
Currently, if GetLocation reports that it did not find a valid pointer (this is the case for volatile load/stores),
we ignore the result. This patch adds code to handle the cases where we did not obtain a valid pointer.

rdar://11872864  PR12899

llvm-svn: 161802
2012-08-13 23:03:43 +00:00
Benjamin Kramer c99d0e9186 PR13095: Give an inline cost bonus to functions using byval arguments.
We give a bonus for every argument because the argument setup is not needed
anymore when the function is inlined. With this patch we interpret byval
arguments as a compact representation of many arguments. The byval argument
setup is implemented in the backend as an inline memcpy, so to model the
cost as accurately as possible we take the number of pointer-sized elements
in the byval argument and give a bonus of 2 instructions for every one of
those. The bonus is capped at 8 elements, which is the number of stores
at which the x86 backend switches from an expanded inline memcpy to a real
memcpy. It would be better to use the real memcpy threshold from the backend,
but it's not available via TargetData.

This change brings the performance of c-ray in line with gcc 4.7. The included
test case tries to reproduce the c-ray problem to catch regressions for this
benchmark early, its performance is dominated by the inline decision of a
specific call.

This only has a small impact on most code, more on x86 and arm than on x86_64
due to the way the ABI works. When building LLVM for x86 it gives a small
inline cost boost to virtually any function using StringRef or STL allocators,
but only a 0.01% increase in overall binary size. The size of gcc compiled by
clang actually shrunk by a couple bytes with this patch applied, but not
significantly.

llvm-svn: 161413
2012-08-07 11:13:19 +00:00
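
A small sketch of the bonus computation described in the message; the function
and parameter names are illustrative, not the actual InlineCost code.

#include <algorithm>
#include <cstdint>

// Illustrative only: bonus, in "instructions", for one byval argument --
// 2 per pointer-sized element, capped at 8 elements, the point where the
// x86 backend switches from an expanded inline memcpy to a real memcpy.
unsigned byvalArgumentBonus(uint64_t ByvalSizeInBytes, uint64_t PtrSizeInBytes) {
  uint64_t NumElements = ByvalSizeInBytes / PtrSizeInBytes;
  return 2 * static_cast<unsigned>(std::min<uint64_t>(NumElements, 8));
}
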
Chandler Carruth 2f6cf4884c Fix PR13412, a nasty miscompile due to the interleaved
instsimplify+inline strategy.

The crux of the problem is that instsimplify was reasonably relying on
an invariant that is true within any single function, but is no longer
true mid-inline the way we use it. This invariant is that an argument
pointer != a local (alloca) pointer.

The fix is really lightweight though, and allows instsimplify to be
resilient to these situations: when checking the relationships to
function arguments, ensure that the arguments come from the same
function. If they come from different functions, then none of these
assumptions hold. All credit to Benjamin Kramer for coming up with this
clever solution to the problem.

llvm-svn: 161410
2012-08-07 10:59:59 +00:00
Hongbin Zheng bb1d209210 Implement the block_iterator of Region based on df_iterator.
llvm-svn: 161177
2012-08-02 14:20:02 +00:00
Nick Lewycky fb78083b1c Stay rational; don't assert trying to take the square root of a negative value.
If it's negative, the loop is already proven to be infinite. Fixes PR13489!

llvm-svn: 161107
2012-08-01 09:14:36 +00:00
Nadav Rotem 77f1b9c477 When constant folding GEP expressions, keep the address space information of pointers.
Together with Ran Chachick <ran.chachick@intel.com>

llvm-svn: 160954
2012-07-30 07:25:20 +00:00
Nuno Lopes 85591f899d fix PR13390: do not loop forever with self-referencing instructions
llvm-svn: 160876
2012-07-27 18:21:15 +00:00
Nuno Lopes f0626f2205 revert r160742: it's breaking CMake build
original commit msg:
MemoryBuiltins: add support to determine the size of strdup'ed non-constant strings

llvm-svn: 160751
2012-07-25 18:49:28 +00:00
Nuno Lopes f0441e04bd MemoryBuiltins: add support to determine the size of strdup'ed non-constant strings
llvm-svn: 160742
2012-07-25 17:29:22 +00:00
Duncan Sands 0b875a0c29 When folding a load from a global constant, if the load started in the middle
of an array element (rather than at the beginning of the element) and extended
into the next element, then the load from the second element was being handled
wrong due to incorrect updating of the notion of which byte to load next.  This
fixes PR13442.  Thanks to Chris Smowton for reporting the problem, analyzing it
and providing a fix.

llvm-svn: 160711
2012-07-25 09:14:54 +00:00
Nuno Lopes 2a4b09c9de teach objectsize about strdup() and strndup()
llvm-svn: 160676
2012-07-24 16:28:13 +00:00
Sylvestre Ledru 35521e2310 Fix a typo (the the => the)
llvm-svn: 160621
2012-07-23 08:51:15 +00:00
Nuno Lopes 705141d4df baby steps toward fixing some problems with inbounds GEPs that overflow, as discussed 2 months ago or so.
Make sure we do not emit index computations with NSW flags so that we don't get an undef value if the GEP overflows.

llvm-svn: 160589
2012-07-20 23:07:40 +00:00
Benjamin Kramer 5be8f60126 Remove unused private member variables uncovered by the recent changes to clang's -Wunused-private-field.
llvm-svn: 160583
2012-07-20 22:05:57 +00:00
Chandler Carruth 36e2ecf528 Move llvm/Support/TypeBuilder.h -> llvm/TypeBuilder.h. This completes
the move of *Builder classes into the Core library.

No uses of this builder in Clang or DragonEgg I could find.

If there is a desire to have an IR-building-support library that
contains all of these builders, that can be easily added, but currently
it seems likely that these add no real overhead to VMCore.

llvm-svn: 160243
2012-07-15 23:45:24 +00:00
Andrew Trick 653513b8dd LSR Fix: check SCEV expression safety before expansion.
All SCEV expressions used by LSR formulae must be safe to
expand. i.e. they may not contain UDiv unless we can prove nonzero
denominator.

Fixes PR11356: LSR hoists UDiv.

llvm-svn: 160205
2012-07-13 23:33:10 +00:00
Andrew Trick ee76065b7a IVUsers should only generate SCEV's for values that are safe to speculate.
This allows SCEVExpander to run on the IV expressions.

This codifies an assumption made by LSR to complete the fix for
PR11356, but I haven't been able to generate a separate unit test for
this part. I'm adding it as an extra safety check.

llvm-svn: 160204
2012-07-13 23:33:05 +00:00
Andrew Trick 365e31c36c Factor SCEV traversal code so I can use it elsewhere. No functionality.
llvm-svn: 160203
2012-07-13 23:33:03 +00:00
Dan Gohman 3d1512384f Delete code for folding undefs in ScalarEvolution. It's invalid in
obscure ways, and it isn't actually important in the real world.

llvm-svn: 159969
2012-07-09 23:51:20 +00:00
Nuno Lopes 0d44a50426 PHINode::hasConstantValue(): return undef if the PHI is fully recursive.
Thanks Duncan for the idea

llvm-svn: 159687
2012-07-03 21:15:40 +00:00
Nuno Lopes 9291ff4078 fold PHI nodes in SizeOffsetEvaluator whenever possible.
Unfortunately this change requires the cache map to hold WeakVHs instead

llvm-svn: 159667
2012-07-03 17:13:25 +00:00
Benjamin Kramer e2ef47c145 Reduce use list thrashing by using DenseMap's find_as for maps with ValueHandle keys.
No functionality change.

llvm-svn: 159497
2012-06-30 22:37:15 +00:00
Nuno Lopes 674acc12d0 RefreshCallGraph: ignore invokes of intrinsics. IntrinsicInst does not recognize invoke, and shouldn't at this point, since the rest of the LLVM codebase doesn't expect invokes of intrinsics.
llvm-svn: 159441
2012-06-29 17:49:32 +00:00
Bill Wendling 098d906dbb Update the CMake files.
llvm-svn: 159417
2012-06-29 09:01:47 +00:00
Bill Wendling f799efdedc The DIBuilder class is just a wrapper around debug info creation
(a.k.a. MDNodes). The module doesn't belong in Analysis. Move it to the VMCore
instead.

llvm-svn: 159414
2012-06-29 08:32:07 +00:00
Nick Lewycky 474112d82c If the step value is a constant zero, the loop isn't going to terminate. Fixes
the assert reported in PR13228!

llvm-svn: 159393
2012-06-28 23:44:57 +00:00
Nuno Lopes 181d67ecb1 MemoryBuiltins:
- recognize C++ new(std::nothrow) friends
- ignore ExtractElement and ExtractValue instructions in size/offset analysis (all easy cases are probably folded away before we get here)
- also recognize realloc as noalias

llvm-svn: 159356
2012-06-28 16:34:03 +00:00
Nuno Lopes 8650fb8e0e make LazyValueInfo analyze the default case of switch statements (we know that in the default branch the value cannot be any of the switch cases)
llvm-svn: 159353
2012-06-28 16:13:37 +00:00
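
An illustrative (hypothetical) example of the added reasoning: on the edge into
the default block, the switched value is known to differ from every case value.

// Not taken from the commit or its tests; purely illustrative.
int classify(int x) {
  switch (x) {
  case 0:  return -1;
  case 1:  return -2;
  default:
    // After this change LazyValueInfo knows x != 0 and x != 1 here, so a
    // later comparison such as 'x == 0' can fold to false.
    return x;
  }
}
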
Nuno Lopes e6e049020b make LVI::getEdgeValue() always intersect the constraints of the edge with the range of the block. Previously it was only performing the intersection for a few cases, thus losing precision
llvm-svn: 159320
2012-06-28 01:16:18 +00:00
Bill Wendling 3b2ab9eaaa Fix cmake failure from moving files around.
llvm-svn: 159314
2012-06-28 00:18:12 +00:00
Bill Wendling e38859dc8e Move lib/Analysis/DebugInfo.cpp to lib/VMCore/DebugInfo.cpp and
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.

The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.

llvm-svn: 159312
2012-06-28 00:05:13 +00:00
Bill Wendling 3b70d784a2 Reduce indentation in function. Rearrange some methods. No functionality change.
llvm-svn: 159239
2012-06-26 23:22:18 +00:00
Bill Wendling e02a1f8cf2 Revamp how debugging information is emitted for debug info objects.
It's not necessary for each DI class to have its own copy of `print' and
`dump'. Instead, just give DIDescriptor those methods and have it call the
appropriate debugging printing routine based on the type of the debug
information.

llvm-svn: 159237
2012-06-26 22:57:33 +00:00
Andrew Trick fb2ba3e1cb Enable the new LoopInfo algorithm by default.
The primary advantage is that loop optimizations will be applied in a
stable order. This helps debugging and unit test creation. It is also
a better overall implementation without pathologically bad performance
on deep functions.

On large functions (llvm-stress --size=200000 | opt -loops)
Before: 0.1263s
After:  0.0225s

On deep functions (after tweaking llvm-stress, thanks Nadav):
Before: 0.2281s
After:  0.0227s

See r158790 for more comments.

The loop tree is now consistently generated in forward order, but loop
passes are applied in reverse order over the program. If we have a
loop optimization that prefers forward order, that can easily be
achieved by adding a different type of LoopPassManager.

llvm-svn: 159183
2012-06-26 04:11:38 +00:00
Andrew Trick fecf937938 Remove unnecessary FIXME
llvm-svn: 159182
2012-06-26 04:11:34 +00:00
Nuno Lopes 9ecc8761bc check for the NoAlias attribute through CallSite
llvm-svn: 159145
2012-06-25 16:17:54 +00:00
NAKAMURA Takumi 704de074b8 llvm/lib: [CMake] Add explicit dependency to intrinsics_gen.
llvm-svn: 159112
2012-06-24 13:32:01 +00:00
Nuno Lopes 15dbcb4537 simplify code from previous commits (Thanks Duncan)
llvm-svn: 158999
2012-06-22 15:50:53 +00:00
Nuno Lopes 9792d68381 remove extractMallocCallFromBitCast, since it was tailor made for its sole user. Update GlobalOpt accordingly.
llvm-svn: 158952
2012-06-22 00:25:01 +00:00
Nuno Lopes dc6085e52d Add support for invoke to the MemoryBuiltins analysis.
Update comments accordingly.

Make instcombine remove useless invokes to C++'s 'new' allocation function (test attached).

llvm-svn: 158937
2012-06-21 21:25:05 +00:00
Nuno Lopes f06b731fed fix build in C++11 mode.
Thanks to Chandler for pointing out the problem.

llvm-svn: 158928
2012-06-21 18:38:26 +00:00
Nuno Lopes a6aa3d3b5f hopefully fix the buildbots: some tests have wrong definitions of malloc and were crashing this code on 64-bit machines
llvm-svn: 158923
2012-06-21 16:47:58 +00:00
Nuno Lopes 55fff83422 refactor the MemoryBuiltins analysis:
- provide a more extensive set of functions to detect library allocation functions (e.g., malloc, calloc, strdup, etc)
- provide an API to compute the size and offset of an object pointed to by a pointer

Move a few clients (GVN, AA, instcombine, ...) to the new API.
This implementation is a lot more aggressive than each of the custom implementations being replaced.

Patch reviewed by Nick Lewycky and Chandler Carruth, thanks.

llvm-svn: 158919
2012-06-21 15:45:28 +00:00
Andrew Trick ff2ed7b687 A new algorithm for computing LoopInfo. Temporarily disabled.
-stable-loops enables a new algorithm for generating the Loop
forest. It differs from the original algorithm in a few respects:

- Not determined by use-list order.
- Initially guarantees RPO order of block and subloops.
- Linear in the number of CFG edges.
- Nonrecursive.

I didn't want to change the LoopInfo API yet, so the block lists are
still inclusive. This seems strange to me, and it means that building
LoopInfo is not strictly linear, but it may not be a problem in
practice. At least the block lists start out in RPO order now. In the
future we may add an attribute or wrapper analysis that allows other
passes to assume RPO order.

The primary motivation of this work was not to optimize LoopInfo, but
to allow reproducing performance issues by decomposing the compilation
stages. I'm often unable to do this with the current LoopInfo, because
the loop tree order determines Loop pass order. Serializing the IR
tends to invert the order, which reverses the optimization order. This
makes it nearly impossible to debug interdependent loop optimizations
such as LSR.

I also believe this will provide more stable performance results across time.

llvm-svn: 158790
2012-06-20 05:23:33 +00:00
Andrew Trick cda51d430d Move the implementation of LoopInfo into LoopInfoImpl.h.
The implementation only needs inclusion from LoopInfo.cpp and
MachineLoopInfo.cpp. Clients of the interface should only include the
interface. This makes the interface readable and speeds up rebuilds
after modifying the implementation.

llvm-svn: 158787
2012-06-20 03:42:09 +00:00
Benjamin Kramer 009b1c1cf1 Round 2 of dead private variable removal.
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.

llvm-svn: 158096
2012-06-06 19:47:08 +00:00
Benjamin Kramer bde9176663 Fix typos found by http://github.com/lyda/misspell-check
llvm-svn: 157885
2012-06-02 10:20:22 +00:00
Eric Christopher 1cf3338bb4 Add support for enum forward declarations.
Part of rdar://11570854

llvm-svn: 157786
2012-06-01 00:22:32 +00:00
Benjamin Kramer 406a2db1f6 Make sure that we're dealing with a binary SCEVExpr when simplifying.
llvm-svn: 157704
2012-05-30 18:42:43 +00:00
Benjamin Kramer 50b26ebb2b Teach SCEV's icmp simplification logic that a-b == 0 is equivalent to a == b.
This also required making recursive simplifications until
nothing changes or a hard limit (currently 3) is hit.

With the simplification in place indvars can canonicalize
loops of the form
for (unsigned i = 0; i < a-b; ++i)
into
for (unsigned i = 0; i != a-b; ++i)
which used to fail because SCEV created a weird umax expr
for the backedge taken count.

llvm-svn: 157701
2012-05-30 18:32:23 +00:00
Andrew Trick a3f9043196 SCEV: Handle a corner case reducing AddRecExpr * AddRecExpr
If integer overflow causes one of the terms to reach zero, that can
force the entire expression to zero.

Fixes PR12929: cast<Ty>() argument of incompatible type

llvm-svn: 157673
2012-05-30 03:35:20 +00:00
Andrew Trick 946f76bf33 Reformat the loop that does AddRecExpr * AddRecExpr reduction.
No functionality.

llvm-svn: 157672
2012-05-30 03:35:17 +00:00
Craig Topper 9520719b9b Mark some static arrays as const.
llvm-svn: 157377
2012-05-24 06:35:32 +00:00
Eric Christopher c49643586b Add support for C++11 enum classes in llvm.
Part of rdar://11496790

llvm-svn: 157303
2012-05-23 00:09:20 +00:00
Andrew Trick a7a3de1bcf LSR fix: add a missing phi check during IV hoisting.
Fixes PR12898: SCEVExpander crash.

llvm-svn: 157263
2012-05-22 17:39:59 +00:00
Eric Christopher b5cf66cda2 Actually support DW_TAG_rvalue_reference_type that we were trying
to generate out of the front end.

rdar://11479676

llvm-svn: 157094
2012-05-19 01:36:37 +00:00
Andrew Trick 7fa4e0fea6 SCEV: Add MarkPendingLoopPredicates to avoid recursive isImpliedCond.
getUDivExpr attempts to simplify by checking for overflow.
isLoopEntryGuardedByCond then evaluates the loop predicate which
may lead to the same getUDivExpr causing endless recursion.

Fixes PR12868: clang 3.2 segmentation fault.

llvm-svn: 157092
2012-05-19 00:48:25 +00:00
Nuno Lopes ac59380dfd allow LazyValueInfo::getEdgeValue() to reason about multiple edges from the same switch instruction by doing union of ranges (which may still be conservative, but it's more aggressive than before)
llvm-svn: 157071
2012-05-18 21:02:10 +00:00
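
A hypothetical example of the multiple-edge case described above: two case
values branching to the same successor, whose per-case ranges are now unioned
instead of discarded.

// Illustrative only: both cases reach the same block, so the edge from the
// switch to that block now carries the union {1, 3} for x rather than no
// information at all.
int pick(int x) {
  switch (x) {
  case 1:
  case 3:
    return x * 2;   // here x is known to be either 1 or 3
  default:
    return 0;
  }
}
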
Eric Christopher 5d5338fb81 Clarify comment.
llvm-svn: 157033
2012-05-18 00:16:22 +00:00
Nuno Lopes 097e37da0e minor simplification in the call to ConstantRange constructor
llvm-svn: 157024
2012-05-17 23:04:08 +00:00
Bill Wendling e065dc8d8d Remove extraneous ';'.
llvm-svn: 157011
2012-05-17 20:27:58 +00:00
Nuno Lopes c2a170e26e reuse the result of some expensive computations in getSignExtendExpr() and getZeroExtendExpr()
this gives a speedup of > 80 in a debug build in the test case of PR12825 (php_sha512_crypt_r)

llvm-svn: 156849
2012-05-15 20:20:14 +00:00
Nuno Lopes ab5c924006 minor simplification to code: Ty is already a SCEV type; don't need to run getEffectiveSCEVType() twice
llvm-svn: 156823
2012-05-15 15:44:38 +00:00
Chad Rosier a968caf8e0 Move the capture analysis from MemoryDependencyAnalysis to a more general place
so that it can be reused in MemCpyOptimizer.  This analysis is needed to remove
an unnecessary memcpy when returning a struct into a local variable.
rdar://11341081
PR12686

llvm-svn: 156776
2012-05-14 20:35:04 +00:00
Chad Rosier 10702d5f22 Hoist simpler checks above llvm::PointerMayBeCaptured. No functional change intended.
llvm-svn: 156687
2012-05-12 00:43:40 +00:00
Chad Rosier 8244b1dc7e Fix indentation.
llvm-svn: 156589
2012-05-10 23:38:07 +00:00
Dan Gohman ed7c24e2d9 Teach DeadStoreElimination to eliminate exit-block stores with phi addresses.
llvm-svn: 156558
2012-05-10 18:57:38 +00:00
Dan Gohman 0291246ce7 Rewrite ScalarEvolution::hasOperand to use an explicit worklist instead
of recursion, to avoid excessive stack usage on deep expressions.

llvm-svn: 156554
2012-05-10 17:21:30 +00:00
Chandler Carruth 8880325a92 Rename the Region::block_iterator to Region::block_node_iterator, and
add a new Region::block_iterator which actually iterates over the basic
blocks of the region.

The old iterator, now called 'block_node_iterator', iterates over
RegionNodes which contain a single basic block. This works well with the
GraphTraits-based iterator design; however, most users actually want an
iterator over the BasicBlocks inside these RegionNodes. Now the
'block_iterator' is a wrapper which exposes exactly this interface.
Internally it uses the block_node_iterator to walk all nodes which are
single basic blocks, but transparently unwraps the basic block to make
user code simpler.

While this patch is a bit of a wash, most of the updates are to internal
users, not external users of the RegionInfo. I have an accompanying
patch to Polly that is a strict simplification of every user of this
interface, and I'm working on a pass that also wants the same simplified
interface.

This patch alone should have no functional impact.

llvm-svn: 156202
2012-05-04 20:55:23 +00:00
Chandler Carruth da7513a834 A pile of long-overdue refactorings here. There are some very, *very*
minor behavior changes with this, but nothing I have seen evidence of in
the wild or expect to be meaningful. The real goal is unifying our logic
and simplifying the interfaces. A summary of the changes follows:

- Make 'callIsSmall' actually accept a callsite so it can handle
  intrinsics, and simplify callers appropriately.
- Nuke a completely bogus declaration of 'callIsSmall' that was still
  lurking in InlineCost.h... No idea how this got missed.
- Teach the 'isInstructionFree' about the various more intelligent
  'free' heuristics that got added to the inline cost analysis during
  review and testing. This mostly surrounds int->ptr and ptr->int casts.
- Switch most of the interesting parts of the inline cost analysis that
  were essentially computing 'is this instruction free?' to use the code
  metrics routine instead. This way we won't keep duplicating logic.

All of this is motivated by the desire to allow other passes to compute
a roughly equivalent 'cost' metric for a particular basic block as the
inline cost analysis. Sadly, re-using the same analysis for both is
really messy because only the actual inline cost analysis is ever going
to go to the contortions required for simplification, SROA analysis,
etc.

llvm-svn: 156140
2012-05-04 00:58:03 +00:00
Nuno Lopes d4cf35d775 remove calls to calloc if the allocated memory is not used (it was already being done for malloc)
fix a few typos found by Chad in my previous commit

llvm-svn: 156110
2012-05-03 22:08:19 +00:00
Nuno Lopes d2b71e7fa9 add support for calloc to objectsize lowering
llvm-svn: 156102
2012-05-03 21:19:58 +00:00
Duncan Sands 34c4869cf6 Just mark the sign bit as known zero, rather than any other irrelevant bits
known zero in the LHS.  Fixes PR12541.

llvm-svn: 155818
2012-04-30 11:56:58 +00:00
Dan Gohman 1ccecdb2fd Reapply r155682, making constant folding more consistent, with a fix to work
properly with how the code handles all-undef PHI nodes.

llvm-svn: 155721
2012-04-27 17:50:22 +00:00
NAKAMURA Takumi 6008dfdb70 Revert r155682, "Use ConstantExpr::getExtractElement when constant-folding vectors"
It broke stage2 build. stage1/clang sometimes crashed.

llvm-svn: 155699
2012-04-27 07:59:20 +00:00
Dan Gohman 90f3798f26 Use ConstantExpr::getExtractElement when constant-folding vectors
instead of getAggregateElement. This has the advantage of being
more consistent and allowing higher-level constant folding to
proceed even if an inner extract element cannot be folded.

Make ConstantFoldInstruction call ConstantFoldConstantExpression
on the instruction's operands, making it more consistent with 
ConstantFoldConstantExpression itself. This makes sure that
ConstantExprs get TargetData-aware folding before being handed
off as operands for further folding.

This causes more expressions to be folded, but due to a known
shortcoming in constant folding, this currently has the side effect
of stripping a few more nuw and inbounds flags in the non-targetdata
side of constant-fold-gep.ll. This is mostly harmless.

This fixes rdar://11324230.

llvm-svn: 155682
2012-04-27 00:54:36 +00:00
Chandler Carruth aacb8a5809 Fix a crash on valid (if UB) bitcode that is produced for some global
constants in C++11 mode. I have no idea why it required such particular
circumstances to get here, the code seems clearly to rely upon unchecked
assumptions.

Specifically, when we decide to form an index into a struct type, we may
have gone through (at least one) zero-length array indexing round, which
would have left the offset un-adjusted, and thus not necessarily valid
for use when indexing the struct type.

This is just a canonicalization step, so the correct thing is to refuse
to canonicalize nonsensical GEPs of this form. Implemented, and test
case added.

Fixes PR12642. Pair debugged and coded with Richard Smith. =] I credit
him with most of the debugging, and preventing me from writing the wrong
code.

llvm-svn: 155466
2012-04-24 18:42:47 +00:00
Eric Christopher 27deb265f9 Allow forward declarations to take a context. This helps the debugger
find forward declarations in the context that the actual definition
will occur.

rdar://11291658

llvm-svn: 155380
2012-04-23 19:00:11 +00:00
Benjamin Kramer e364d195e9 Revert "SCEV: When expanding a GEP the final addition to the base pointer has NUW but not NSW."
This isn't right either, reverting for now.

llvm-svn: 154910
2012-04-17 06:33:57 +00:00
Chandler Carruth 7ae90d4d2d Add two statistics to help track how we are computing the inline cost.
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome.

llvm-svn: 154492
2012-04-11 10:15:10 +00:00
Andrew Trick 4442bfe559 Fix 12513: Loop unrolling breaks with indirect branches.
Take this opportunity to generalize the indirectbr bailout logic for
loop transformations. CFG transformations will never get indirectbr
right, and there's no point trying.

llvm-svn: 154386
2012-04-10 05:14:42 +00:00
Chandler Carruth 28192c9398 Fix ValueTracking to conclude that debug intrinsics are safe to
speculate. Without this, loop rotate (among many other places) would
suddenly stop working in the presence of debug info. I found this
looking at loop rotate, and have augmented its tests with a reduction
out of a very hot loop in yacr2 where failing to do this rotation costs
sometimes more than 10% in runtime performance, perturbing numerous
downstream optimizations.

This should have no impact on performance without debug info, but the
change in performance when debug info is enabled can be extreme. As
a consequence (and this how I got to this yak) any profiling of
performance problems should be treated with deep suspicion -- they may
have been wildly inaccurate if debug info was enabled for profiling. =/
Just a heads up.

llvm-svn: 154263
2012-04-07 19:22:18 +00:00
Benjamin Kramer e1f4ca1b0f SCEV: When expanding a GEP the final addition to the base pointer has NUW but not NSW.
Found by inspection.

llvm-svn: 154262
2012-04-07 17:19:26 +00:00
David Chisnall c1c9cdab23 Reintroduce InlineCostAnalyzer::getInlineCost() variant with explicit callee
parameter until we have a more sensible API for doing the same thing.

Reviewed by Chandler.

llvm-svn: 154180
2012-04-06 17:27:41 +00:00
Rafael Espindola ba0a6cabb8 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Eric Christopher 34164196af Add a line number for the scope of the function (starting at the first
brace) so that we get more accurate line number information about the
declaration of a given function and the line where the function
first starts.

Part of rdar://11026482

llvm-svn: 153916
2012-04-03 00:43:49 +00:00
Rafael Espindola 80c540e656 Teach CodeGen's version of computeMaskedBits to understand the range metadata.
This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.

llvm-svn: 153817
2012-03-31 18:14:00 +00:00
Chandler Carruth 1a4cc6cc9f Fix a typo reported in IRC by someone reviewing this code.
llvm-svn: 153815
2012-03-31 13:18:09 +00:00
Chandler Carruth edd2826f3e Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth 0539c071ea Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
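
A highly simplified sketch of the breadth-first, per-callsite walk outlined
above; the types and bookkeeping are stand-ins meant to mirror the prose, not
the actual InlineCost implementation.

#include <deque>
#include <set>
#include <vector>

// Toy stand-ins for IR constructs, purely to show the shape of the walk.
struct Instruction { int Cost; bool SimplifiesAway; };
struct BasicBlock {
  std::vector<Instruction> Insts;
  std::vector<BasicBlock *> LiveSuccs; // successors not proven dead
};

// Pop blocks from the *front* of the worklist (breadth-first), pay for
// instructions that do not simplify away under the callsite's argument
// mapping, and stop as soon as the threshold is exceeded.
bool shouldInline(BasicBlock *Entry, int Threshold) {
  int Cost = 0;
  std::deque<BasicBlock *> Worklist = {Entry};
  std::set<BasicBlock *> Visited = {Entry};
  while (!Worklist.empty()) {
    BasicBlock *BB = Worklist.front();
    Worklist.pop_front();
    for (const Instruction &I : BB->Insts) {
      if (!I.SimplifiesAway)
        Cost += I.Cost;
      if (Cost > Threshold)
        return false;               // bound the analysis at the threshold
    }
    for (BasicBlock *Succ : BB->LiveSuccs)
      if (Visited.insert(Succ).second)
        Worklist.push_back(Succ);   // breadth-first: append to the back
  }
  return true;                      // never exceeded the threshold
}
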
Rafael Espindola 53190539db Add computeMaskedBitsLoad back, as it was the change to instsimplify that
caused the slowdown last time.

llvm-svn: 153747
2012-03-30 15:52:11 +00:00
Eric Christopher c13fd6d1e1 Lowercase the tag name to match the rest of dwarf.
llvm-svn: 153691
2012-03-29 21:35:05 +00:00
Eric Christopher 70e1bd8872 Add support for objc property decls according to the page at:
http://llvm.org/docs/SourceLevelDebugging.html#objcproperty

including type and DECL. Expand the metadata needed accordingly.

rdar://11144023

llvm-svn: 153639
2012-03-29 08:42:56 +00:00
Rafael Espindola 5054ee82cc Handle intrinsics in GlobalsModRef. Fixes pr12351.
llvm-svn: 153604
2012-03-28 21:31:24 +00:00
Chad Rosier e27081d348 Revert r153521 as it's causing large regressions on the nightly testers.
Original commit message for r153521 (aka r153423):
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loding a boolean value.

llvm-svn: 153587
2012-03-28 18:42:50 +00:00
Chad Rosier 8e6dbccd03 Reapply r153423; the original commit was fine. The failing test, distray, had
undefined behavior, which Rafael was kind enough to fix.

Original commit message for r153423:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153521
2012-03-27 17:44:52 +00:00
Andrew Trick 7004e4b95e SCEV fix: Handle loop invariant loads.
Fixes PR11882: NULL dereference in ComputeLoadConstantCompareExitLimit.

llvm-svn: 153480
2012-03-26 22:33:59 +00:00
Chad Rosier 08e57e5ccf Revert r153423 as this is causing failures on our internal nightly testers.
Original commit message:
Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153452
2012-03-26 18:07:14 +00:00
Rafael Espindola df9b4adb82 Use the new range metadata in computeMaskedBits and add a new optimization to
instruction simplify that lets us remove an and when loading a boolean value.

llvm-svn: 153423
2012-03-26 01:44:11 +00:00
Chandler Carruth 8059c84af1 Teach instsimplify how to simplify comparisons of pointers which are
constant-offsets of a common base using the generic GEP-walking logic
I added for computing pointer differences in the same situation.

llvm-svn: 153419
2012-03-25 21:28:14 +00:00
Chandler Carruth 2741aae80b Switch the pointer-difference simplification logic to only work with
inbounds GEPs. This isn't really necessary for simplifying pointer
differences, but I'm planning to re-use the same code to simplify
pointer comparisons where it is necessary. Since real code almost
exclusively uses inbounds GEPs, it doesn't seem worth it to support the
extra complexity of turning it on and off. If anyone would like that
back, feel free to shout. Note that instcombine will still catch any of
these patterns.

llvm-svn: 153418
2012-03-25 20:43:07 +00:00
Chandler Carruth 77e8bfbb5e Try to harden the recursive simplification still further. This is again
spotted by inspection, and I've crafted no test case that triggers it on
my machine, but some of the windows builders are hitting what looks like
memory corruption, so *something* is amiss here.

This patch takes a more generalized approach to eliminating
double-visits. Imagine code such as:

  %x = ...
  %y = add %x, 1
  %z = add %x, %y

You can imagine that if we simplify %x, we would add %y and %z to the
list. If the use-chain order happens to cause us to add them in reverse
order, we could pull %y off first, and simplify it, adding %z to the
list. We now have %z on the list twice, and will reference it after it
is deleted.

Currently, all my test cases happen to not trigger this, likely due to
the use-chain ordering, but there seems no guarantee that such
a situation could not occur, so we should handle it correctly.

Again, if anyone knows how to craft a testcase that actually triggers
this, please let me know.

llvm-svn: 153397
2012-03-24 22:34:26 +00:00
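A minimal C++ sketch of one way to rule out double-visits: de-duplicate the worklist itself, so a value such as %z above can only be queued once (the container choices are illustrative only; LLVM's own SetVector serves the same purpose):

  #include <set>
  #include <vector>

  struct Value;  // stand-in for llvm::Value

  class UniqueWorklist {
    std::vector<Value *> Order;
    std::set<Value *> InList;

  public:
    void push(Value *V) {
      if (InList.insert(V).second)  // only queue a value the first time
        Order.push_back(V);
    }
    Value *pop() {
      Value *V = Order.back();
      Order.pop_back();
      InList.erase(V);
      return V;
    }
    bool empty() const { return Order.empty(); }
  };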
Chandler Carruth e41fc73f08 Don't add the instruction about to be RAUW'ed and erased to the
worklist. This can happen in theory when an instruction uses itself,
such as a PHI node. This was spotted by inspection, and unfortunately
I've not been able to come up with a test case that would trigger it. If
anyone has ideas, let me know...

llvm-svn: 153396
2012-03-24 22:34:23 +00:00
Chandler Carruth cf1b585f60 Refactor the interface for recursively simplifying instructions to be a tad
bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

llvm-svn: 153393
2012-03-24 21:11:24 +00:00
Eric Christopher 3c0d51661f Take out the debug info probe stuff. It's making some changes to
the PassManager annoying and should be reimplemented as a decorator
on top of existing passes (as should the timing data).

llvm-svn: 153305
2012-03-23 03:54:05 +00:00
Andrew Trick 6d1bbb8755 Cleanup IVUsers::addUsersIfInteresting.
Keep the public interface clean, even though LLVM proper does not
currently use it.

llvm-svn: 153263
2012-03-22 17:47:33 +00:00
Chandler Carruth 3ffccb3802 Teach instsimplify to gracefully degrade in the presence of instructions
not attached to a basic block or function. There are conservatively
correct answers in these cases, and this makes the analysis more useful
in contexts where we have a partially formed bit of IR.

I don't have any way to test this directly... suggestions welcome here,
but I'm not seeing anything sadly. I only found this using a subsequent
patch to the inliner which runs instsimplify on partially inlined
instructions, and even then only on a quite large program. I never got
a reasonable testcase out of it, and anything I do get is likely to be
quite fragile due to requiring an interaction of two different passes,
and the only result being a segfault if it goes wrong.

llvm-svn: 153176
2012-03-21 10:58:47 +00:00
Andrew Trick 9c45706baf LSR: teach isSimplifiedLoopNest to handle PHI IVUsers.
llvm-svn: 153132
2012-03-20 21:24:44 +00:00
Andrew Trick 3660735e18 LSR: fix IVUsers isSimplifiedLoopNest to perform a full domtree walk
instead of skipping the current loop.

My prior fix was incomplete because of an overzealous compile-time optimization:
Better fix for: <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

llvm-svn: 153131
2012-03-20 21:24:40 +00:00
Nick Lewycky fa30607eca Factor out the multiply analysis code in ComputeMaskedBits and apply it to the
overflow checking multiply intrinsic as well.

Add a test for this, updating the test from grep to FileCheck.

llvm-svn: 153028
2012-03-18 23:28:48 +00:00
Chandler Carruth d7a5f2adb0 Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern; we were just needlessly paying the uniquing cost on every
insert.

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

llvm-svn: 152903
2012-03-16 06:10:13 +00:00
Chandler Carruth 3c256fbf2d Pull the implementation of the code metrics out of the inline cost
analysis implementation. The header was already separated. Also clean up
all the comments in the header to follow a nice modern doxygen form.

There is still plenty of cruft here, but some of that will fall out in
subsequent refactorings and this was an easy step in the right
direction. No functionality changed here.

llvm-svn: 152898
2012-03-16 05:51:52 +00:00
Andrew Trick 070e540a3e LSR fix: Add isSimplifiedLoopNest to IVUsers analysis.
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.

I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.

Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce

llvm-svn: 152892
2012-03-16 03:16:56 +00:00
Eric Christopher a4a0cf8394 Do the right thing on NULL uint64 fields.
Patch by Clemens Hammacher!

Fixes PR12243

llvm-svn: 152880
2012-03-16 00:21:54 +00:00
Duncan Sands bd415dec4e Type sizes and field offsets inside structs are unsigned. This is a highly
theoretical fix since it only matters for types with >= 2^63 bits (!) and also
only matters if pointers have more than 64 bits, which is not supported anyway.

llvm-svn: 152831
2012-03-15 20:14:42 +00:00
Chandler Carruth 6d64bd4639 Make the swap code here a bit more obvious about what it's doing... We're
essentially sorting the pair's arguments. I'd love to actually call sort
here, but I'm just not that crazy. ;]

llvm-svn: 152764
2012-03-15 00:55:51 +00:00
Chandler Carruth 899e439aea Don't assume that the arguments are processed in some particular order.
This appears to not be the case with dragonegg at least in some
contexts. Hopefully this will fix the bootstrap assert failure there.

llvm-svn: 152763
2012-03-15 00:50:21 +00:00
Chandler Carruth 5b6ca5ca37 Remove all remnants of partial specialization in the cost computation
side of things. This is all dead code.

llvm-svn: 152759
2012-03-15 00:29:08 +00:00
Chandler Carruth 4d1d34fbfc Extend the inline cost calculation to account for bonuses due to
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.

In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.

This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.

Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.

llvm-svn: 152752
2012-03-14 23:19:53 +00:00
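A small C++ illustration of the two idioms being put on an equal footing; the functions are made up, but the second form is the begin/end pattern the new bonus recognizes once the end pointer is a constant offset from the begin pointer:

  #include <cstddef>

  // C-style idiom: pointer plus size. Called as sum(buf, 4), the inline cost
  // analysis already sees the constant trip count.
  int sum(const int *data, size_t n) {
    int total = 0;
    for (size_t i = 0; i != n; ++i)
      total += data[i];
    return total;
  }

  // STL-style idiom: begin/end pointer pair. Called as sum_range(buf, buf + 4),
  // the end pointer is a constant offset from the begin pointer, which is the
  // correlation the callsite analysis now looks for.
  int sum_range(const int *begin, const int *end) {
    int total = 0;
    for (const int *p = begin; p != end; ++p)
      total += *p;
    return total;
  }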
Chandler Carruth a308955993 Refactor the inline cost bonus calculation for constants to use
a worklist rather than a recursive call.

No functionality changed.

llvm-svn: 152706
2012-03-14 07:32:53 +00:00
Chris Lattner 87fa77bd8a enhance jump threading to preserve TBAA information when PRE'ing loads,
fixing rdar://11039258, an issue that came up when inspecting clang's 
bootstrapped codegen.

llvm-svn: 152635
2012-03-13 18:07:41 +00:00
Duncan Sands 395ac42dd2 Generalize the "trunc(ptrtoint(x)) - trunc(ptrtoint(y)) ->
trunc(ptrtoint(x-y))" optimization introduced by Chandler.

llvm-svn: 152626
2012-03-13 14:07:05 +00:00
Duncan Sands b8cee00841 Uniformize the InstructionSimplify interface by ensuring that all routines
take a TargetLibraryInfo parameter.  Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.

llvm-svn: 152623
2012-03-13 11:42:19 +00:00
Eli Friedman c8cbd06947 Fix regression from r151466: we can't replace uses of an instruction reachable from the entry block with uses of an instruction not reachable from the entry block. PR12231.
llvm-svn: 152595
2012-03-13 01:06:07 +00:00
Chandler Carruth e45781e673 Address some review comments from Duncan. This moves the iterative
offset accumulation to use a boring APInt instead of ConstantExprs.
I didn't go all the way to an 'int64_t' because I wanted APInt to handle
any magic required to properly wrap the arithmetic when the pointer
width is <64 bits. If there is a significant penalty from using APInt
here, first off WTF, and secondly let me know and I'll do the math by
hand.

I've left one layer still operating w/ ConstantExpr because it makes the
interface quite a bit simpler, and that one isn't iterative so has much
lower cost.

I suppose this may potentially speed up some strange compilation
situations, but I don't really expect much. It should have no functional
impact either way.

llvm-svn: 152590
2012-03-13 00:06:15 +00:00
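A rough C++ sketch of the accumulation described here, assuming an APInt sized to the pointer width so that wrapping at widths below 64 bits falls out of the arithmetic (the helper and its callers are hypothetical, not the actual code):

  #include "llvm/ADT/APInt.h"
  #include <cstdint>
  #include <vector>

  // Sum byte offsets in an integer exactly as wide as the pointer type, so a
  // 32-bit pointer width wraps the total the same way the target's pointer
  // arithmetic would.
  llvm::APInt accumulateOffsets(unsigned PointerBits,
                                const std::vector<int64_t> &ByteOffsets) {
    llvm::APInt Total(PointerBits, 0);
    for (size_t i = 0, e = ByteOffsets.size(); i != e; ++i) {
      llvm::APInt Step(PointerBits, static_cast<uint64_t>(ByteOffsets[i]),
                       /*isSigned=*/true);
      Total += Step;  // addition wraps modulo 2^PointerBits automatically
    }
    return Total;
  }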
Chandler Carruth a0796555e2 Teach instsimplify how to constant fold pointer differences.
Typically instcombine has handled this, but pointer differences show up
in several contexts where we would like to get constant folding, and
cannot afford to run instcombine. Specifically, I'm working on improving
the constant folding of arguments used in inline cost analysis with
instsimplify.

Doing this in instsimplify implies some algorithm changes. We have to
handle multiple layers of all-constant GEPs because instsimplify cannot
fold them into a single GEP the way instcombine can. Also, we're only
interested in all-constant GEPs. The result is that this doesn't really
replace the instcombine logic; it's just complementary and focused on
constant folding.

Reviewed on IRC by Benjamin Kramer.

llvm-svn: 152555
2012-03-12 11:19:31 +00:00
Stepan Dyatkovskiy 97b02fc1b3 llvm::SwitchInst
Renamed the methods caseBegin, caseEnd and caseDefault to case_begin, case_end, and case_default.
Added some notes about case iterators.

llvm-svn: 152532
2012-03-11 06:09:17 +00:00
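For reference, the iteration example from r152297 further down this log, rewritten with the renamed spellings; a minimal sketch assuming SI is an already-constructed SwitchInst:

  for (SwitchInst::CaseIt i = SI->case_begin(), e = SI->case_end();
       i != e; ++i) {
    BasicBlock *BB = i.getCaseSuccessor();
    ConstantInt *V = i.getCaseValue();
    // Do something with the case's successor and value.
  }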
Benjamin Kramer 71ff880ff9 Make helper static, so it can be inlined into its sole caller.
llvm-svn: 152515
2012-03-10 22:41:06 +00:00
Bill Wendling 2bbb7945e7 As Duncan pointed out, pointers tend not to be in floating point format...for now.
llvm-svn: 152499
2012-03-10 18:20:55 +00:00
Bill Wendling 0624d2a1ec Make this transformation slightly less aggressive and more correct.
The 'CmpInst::isFalseWhenEqual' function returns 'false' for values other than
simply equality. For instance, it returns 'false' for <= or >=. This isn't the
correct behavior for this transformation, which is checking for strict equality
and non-equality. It was causing the gcc.c-torture/execute/frame-address.c test
to fail because it would completely (and incorrectly) optimize a whole function
into a 'ret i32 0'.

llvm-svn: 152497
2012-03-10 17:56:03 +00:00
Chandler Carruth 97f6f03c42 Refactor some methods to look through bitcasts and GEPs on pointers into
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.

Reviewed by Duncan Sands on IRC, but further comments here welcome.

llvm-svn: 152490
2012-03-10 08:39:09 +00:00
Nick Lewycky fea3e00e09 Factor out the analysis of addition and subtraction in ComputeMaskedBits. Reuse
it to analyze extractvalue(llvm.[us](add|sub).with.overflow.*) intrinsics!

llvm-svn: 152398
2012-03-09 09:23:50 +00:00
Chandler Carruth 783b7198b7 Undo a previous restriction on the inline cost calculation which Nick
introduced. Specifically, there are cost reductions for all
constant-operand icmp instructions against an alloca, regardless of
whether the alloca will in fact be eligible for SROA. That means we
don't want to abort the icmp reduction computation when we abort the
SROA reduction computation. That in turn frees us from the need to keep
a separate worklist and defer the ICmp calculations.

Use this new-found freedom and some judicious function boundaries to
factor the innards of computing the cost factor of any given instruction
out of the loop over the instructions and into static helper functions.
This greatly simplifies the code, and hopefully makes it more clear what
is happening here.

Reviewed by Eric Christopher. There is some concern that we'd like to
ensure this doesn't get out of hand, and I plan to benchmark the effects
of this change over the next few days along with some further fixes to
the inline cost.

llvm-svn: 152368
2012-03-09 02:49:36 +00:00
Stepan Dyatkovskiy 5b648afb4d Took into account Duncan's comments on r149481, dated 2nd Feb 2012:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html

Implemented CaseIterator, which solves almost all of the described issues: we no longer need to mix operand/case/successor indexing. The base iterator class is implemented as a template since it may be initialized either from a "const SwitchInst*" or from a "SwitchInst*".

ConstCaseIt is just a read-only iterator.
CaseIt is a read-write iterator; it allows changing the case successor and case value.

Using the iterators lets us remove the resolveXXXX methods entirely. All indexing conversions are done automatically inside the iterator's getters.

The main way to use the iterator looks like this:
SwitchInst *SI = ... // initialize it somehow

for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
  BasicBlock *BB = i.getCaseSuccessor();
  ConstantInt *V = i.getCaseValue();
  // Do something.
}

If you want to convert a case number to a TerminatorInst successor index, use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.

There are also related changes in LLVM clients: klee and clang.

llvm-svn: 152297
2012-03-08 07:06:20 +00:00
Chandler Carruth dd1637c393 Rotate two of the functions used to count bonuses for the inline cost
analysis to be methods on the cost analysis's function info object
instead of the code metrics object. These really are just users of the
code metrics; they're building the information for the function's
analysis.

This is the first step of growing the amount of information we collect
about a function in order to cope with pair-wise simplifications due to
allocas.

llvm-svn: 152283
2012-03-08 02:04:19 +00:00
Nick Lewycky 1d57ee341a No functionality change. Type::isSized() can be expensive, so avoid calling it
until after other inexpensive tests.

llvm-svn: 152195
2012-03-07 02:27:53 +00:00
Eli Friedman af3c6fe51e A few more cases of missing masking in ComputeMaskedBits; found by inspection.
llvm-svn: 152070
2012-03-05 23:22:40 +00:00
Eli Friedman a8b75ac798 Make sure we don't return bits outside the mask in ComputeMaskedBits. PR12189.
llvm-svn: 152066
2012-03-05 23:09:40 +00:00
Benjamin Kramer d9d80b1dde LVI: Recognize the form instcombine canonicalizes range checks into when forming constant ranges.
This could probably be made a lot smarter, but this is a common case and doesn't require LVI to scan a lot
of code. With this change CVP can optimize away the "shift == 0" case in Hashing.h that only gets hit when
"shift" is in a range not containing 0.

llvm-svn: 151919
2012-03-02 15:34:43 +00:00
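A simplified C++ sketch of the kind of guard this helps remove (modeled on, but not copied from, Hashing.h): once LVI proves that the range of 'shift' excludes zero, CVP can fold the 'shift == 0' branch away:

  #include <cstdint>

  // Rotate right with a guard, since shifting a 64-bit value by 64 would be
  // undefined. If every caller passes shift in a range such as [1, 63], the
  // guard is dead and correlated value propagation can delete it after inlining.
  inline uint64_t rotate(uint64_t value, unsigned shift) {
    if (shift == 0)
      return value;
    return (value >> shift) | (value << (64 - shift));
  }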
Eli Friedman 0774902a00 Duncan pointed out that if the alignment isn't explicitly specified, it defaults to the ABI alignment. Given that, make this code a bit more aggressive in such cases.
llvm-svn: 151584
2012-02-27 23:16:46 +00:00
Eli Friedman 8bc169c3c5 Teach BasicAA about the LLVM IR rules that allow reading past the end of an object given sufficient alignment. Fixes PR12098.
llvm-svn: 151553
2012-02-27 20:46:07 +00:00
Rafael Espindola 09a4201d3c Fix this assert. IP can point to an instruction with strange dominance
properties (invoke). Just assert that the instruction we return dominates
the insertion point.

llvm-svn: 151511
2012-02-27 02:13:03 +00:00
Rafael Espindola b660977c67 Don't call dominates on unreachable instructions. Should fix the dragonegg
build. Testcase is still reducing.

llvm-svn: 151474
2012-02-26 05:30:08 +00:00
Rafael Espindola ae725715ef And update the comment...
llvm-svn: 151472
2012-02-26 02:36:56 +00:00
Rafael Espindola fa75542078 Enable the assert that got all this dominator work started.
llvm-svn: 151471
2012-02-26 02:29:18 +00:00
Rafael Espindola 94df267db3 Change the implementation of dominates(inst, inst) to one based on what the
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.

llvm-svn: 151469
2012-02-26 02:19:19 +00:00
Nick Lewycky 3db143ea8c Reinstate the optimization from r151449 with a fix to not turn 'gep %x' into
'gep null' when the icmp predicate is unsigned (or is signed without inbounds).

llvm-svn: 151467
2012-02-26 02:09:49 +00:00
Rafael Espindola c8c2b06a90 Don't call dominates on unreachable instructions.
llvm-svn: 151466
2012-02-26 01:50:14 +00:00
Nick Lewycky 7bbd72da46 Roll these back to r151448 until I figure out how they're breaking
MultiSource/Applications/lua.

llvm-svn: 151463
2012-02-25 23:01:19 +00:00
Nick Lewycky eeeffbb497 An argument and a local identified object (e.g. a noalias call) could turn out
equal if both are null. In the test, scope type %t and global @y by adding a
'gep' prefix to them.

llvm-svn: 151452
2012-02-25 20:19:07 +00:00
Nick Lewycky 7b99bada0b Fix five-letter typo in comment.
llvm-svn: 151450
2012-02-25 19:12:58 +00:00
Nick Lewycky 51f2be8bff Teach instsimplify to be more aggressive when analyzing comparisons of pointers
by using llvm::isIdentifiedObject. Also teach it to handle GEPs that have
the same base pointer and constant operands. Fixes PR11238!

llvm-svn: 151449
2012-02-25 19:07:42 +00:00
Nick Lewycky 3f885b65a2 Move isKnownNonNull from private implementation detail of BasicAA to a public
function that others can use, next to llvm::isIdentifiedObject.

llvm-svn: 151446
2012-02-25 10:56:28 +00:00
Chris Lattner 01990f0e1c fix PR12075, a regression in a recent transform I added. In unreachable code, gep chains can be infinite. Just like "stripPointerCasts", use a set to keep track of visited instructions so we don't recurse infinitely.
llvm-svn: 151383
2012-02-24 19:01:58 +00:00
Rafael Espindola f35c789031 Fix typo.
llvm-svn: 151238
2012-02-23 05:38:51 +00:00
Chad Rosier 5dfe6dab25 Remove extra semi-colons.
llvm-svn: 151169
2012-02-22 17:25:00 +00:00
Rafael Espindola 337cfaf757 Improve comment. Thanks to Andrew for the suggestion.
llvm-svn: 151127
2012-02-22 03:44:46 +00:00
Rafael Espindola cd06b482d2 Semantically revert 151015. Add a comment on why we should be able to assert
the dominance once the dominates method is fixed and why we can use the builder's
insertion point.
Fixes pr12048.

llvm-svn: 151125
2012-02-22 03:21:39 +00:00
Rafael Espindola b41b407f3d s/the the/the/
llvm-svn: 151079
2012-02-21 19:27:16 +00:00
Rafael Espindola 729e3aae92 Use more idiomatic assert.
llvm-svn: 151026
2012-02-21 03:51:14 +00:00
Rafael Espindola b2defca267 Avoid warning on non assert builds.
llvm-svn: 151025
2012-02-21 03:48:30 +00:00
Rafael Espindola 7d445e92c3 It turns out that with the current SCEV organization ReuseOrCreateCast cannot
know where users will be added. Because of this, it cannot use
Builder.GetInsertPoint at all.

This patch
* removes the FIXME about adding the assert.
* adds a comment explaining why we don't have one.
* removes broken logic that only works for some callers and is not needed
  since r150884.
* adds an assert to caller that would have caught the bug fixed by r150884.

llvm-svn: 151015
2012-02-21 01:19:51 +00:00
Eric Christopher 4826c8fbe8 Make this a bit prettier and more obvious when a derived type isn't
derived from anything.

llvm-svn: 150975
2012-02-20 18:04:39 +00:00
Eric Christopher 300871076e If a derived type is also a composite type, print that information
too.

llvm-svn: 150974
2012-02-20 18:04:35 +00:00
Eric Christopher 8979712685 Add support for runtime languages on our forward declarations.
llvm-svn: 150973
2012-02-20 18:04:14 +00:00
Chris Lattner 445d8c6b50 fold comparisons of gep'd alloca pointers with null to false,
implementing PR12013.  We now compile the testcase to:

__Z4testv:                              ## @_Z4testv
## BB#0:                                ## %_ZN4llvm15SmallVectorImplIiE9push_backERKi.exit
	pushq	%rbx
	subq	$64, %rsp
	leaq	32(%rsp), %rbx
	movq	%rbx, (%rsp)
	leaq	64(%rsp), %rax
	movq	%rax, 16(%rsp)
	movl	$1, 32(%rsp)
	leaq	36(%rsp), %rax
	movq	%rax, 8(%rsp)
	leaq	(%rsp), %rdi
	callq	__Z1gRN4llvm11SmallVectorIiLj8EEE
	movq	(%rsp), %rdi
	cmpq	%rbx, %rdi
	je	LBB0_2
## BB#1:
	callq	_free
LBB0_2:                                 ## %_ZN4llvm11SmallVectorIiLj8EED1Ev.exit
	addq	$64, %rsp
	popq	%rbx
	ret

instead of:

__Z4testv:                              ## @_Z4testv
## BB#0:
	pushq	%rbx
	subq	$64, %rsp
	xorl	%eax, %eax
	leaq	(%rsp), %rbx
	addq	$32, %rbx
	movq	%rbx, (%rsp)
	movq	%rbx, 8(%rsp)
	leaq	64(%rsp), %rcx
	movq	%rcx, 16(%rsp)
	je	LBB0_2
## BB#1:
	movl	$1, 32(%rsp)
	movq	%rbx, %rax
LBB0_2:                                 ## %_ZN4llvm15SmallVectorImplIiE9push_backERKi.exit
	addq	$4, %rax
	movq	%rax, 8(%rsp)
	leaq	(%rsp), %rdi
	callq	__Z1gRN4llvm11SmallVectorIiLj8EEE
	movq	(%rsp), %rdi
	cmpq	%rbx, %rdi
	je	LBB0_4
## BB#3:
	callq	_free
LBB0_4:                                 ## %_ZN4llvm11SmallVectorIiLj8EED1Ev.exit
	addq	$64, %rsp
	popq	%rbx
	ret

This doesn't shrink clang noticeably though.

llvm-svn: 150944
2012-02-20 00:42:49 +00:00
Rafael Espindola 991356e89b Temporarily disable this assert. Looks like it found a similar issue when
building bullet.

llvm-svn: 150885
2012-02-18 17:51:43 +00:00
Rafael Espindola 82d957593e Don't skip debug instructions when looking for the insertion point of
the cast. If we do, we can end up with

   inst1
   ---------------  < Insertion point
   dbg inst
   new inst

instead of the desired

   inst1
   new inst
   ---------------  < Insertion point
   dbg inst

Another option would be for InsertNoopCastOfTo (or its callers) to move the
insertion point and we would end up with

   inst1
   dbg inst
   new inst
   ---------------  < Insertion point

but that complicates the callers. This fixes PR12018 (and firefox's build).

llvm-svn: 150884
2012-02-18 17:22:58 +00:00
Eli Friedman 952d1f9f40 Fix a rather nasty regression from r150690: LHS != RHS does not imply LHS->stripPointerCasts() != RHS->stripPointerCasts().
llvm-svn: 150863
2012-02-18 03:29:25 +00:00
Dan Gohman 9017b846d4 Remove a comment about an alternative approach that wouldn't
actually work, at least as described. LLVM Metadata is not
intended to suppress LLVM IR rules, as it can be stripped at
any time.

llvm-svn: 150821
2012-02-17 18:33:38 +00:00
Eric Christopher b23b32e43b Typo in variable name.
llvm-svn: 150796
2012-02-17 07:08:46 +00:00
Benjamin Kramer 08f18b1b74 Revert "InstSimplify: Strip pointer casts early."
Turns out this isn't safe, because the code below depends on LHS and RHS having
the same type.

llvm-svn: 150695
2012-02-16 15:19:59 +00:00
Benjamin Kramer 3d27f71f2d InstSimplify: Strip pointer casts early.
llvm-svn: 150694
2012-02-16 15:03:04 +00:00
Benjamin Kramer ea51f62e4b InstSimplify: Ignore pointer casts when constant folding compares between pointers.
llvm-svn: 150690
2012-02-16 13:49:39 +00:00
Hal Finkel 56f6b0f219 Have AliasSet::aliasesUnknownInst use pointer TBAA info when available
llvm-svn: 150249
2012-02-10 15:52:39 +00:00
Duncan Sands 26641d7c02 Fix PR11948: the result type of an icmp may be a vector of booleans -
don't assume it is a boolean.

llvm-svn: 150247
2012-02-10 14:31:24 +00:00
Eric Christopher ae56eecf5f Add support for a temporary forward decl type. We want this so we
can RAUW forward declarations if we decide to emit the full type.

Part of rdar://10809898

llvm-svn: 150024
2012-02-08 00:22:26 +00:00
Devang Patel a93cc25b79 Remove tabs.
llvm-svn: 150022
2012-02-08 00:17:07 +00:00
Craig Topper a2886c21d9 Convert assert(0) to llvm_unreachable
llvm-svn: 149967
2012-02-07 05:05:23 +00:00
Kostya Serebryany 9e0d377400 The patch resolves the conflict between AddressSanitizer and load widening (GVN).
The problem was initially reported by Mozilla folks (http://code.google.com/p/address-sanitizer/issues/detail?id=20),
but it also prevents us from enabling LLVM bootstrap with AddressSanitizer.

llvm-svn: 149925
2012-02-06 22:48:56 +00:00
Chris Lattner 8213c8af29 Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.

llvm-svn: 149912
2012-02-06 21:56:39 +00:00