When ExactBECount is a constant, use it for MaxBECount.
When MaxBECount cannot be computed, replace it with ExactBECount.
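For illustration (a hypothetical example, not taken from the change itself), consider a loop whose exact backedge-taken count is a known constant:

  // SCEV computes an exact backedge-taken count of 7 for this loop,
  // so that same constant can also serve as the maximum backedge-taken count.
  void f(int *a) {
    for (int i = 0; i != 8; ++i)
      a[i] = 0;
  }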
Fixes PR9424.
llvm-svn: 127342
unsigned overflow (e.g. "gep P, -1"), and while they can have
signed wrap in theoretical situations, modelling an AddRec as
not having signed wrap is good enough for any case we can
think of today. In the future if this isn't enough, we can
revisit this. Modeling them as having NUW isn't causing any
known problems either FWIW.
llvm-svn: 125410
by indvars through the scev expander.
trunc(add x, y) --> add(trunc x, y). Currently SCEV largely folds the other way
which is probably wrong, but preserved to minimize churn. Instcombine doesn't
do this fold either, demonstrating a missed optz'n opportunity on code doing
add+trunc+add.
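In C terms the missed pattern looks roughly like this (illustrative sketch, not code from the commit):

  // Truncation distributes over addition modulo 2^32, so the cast can be
  // pushed through the inner add, exposing a single 32-bit add chain:
  unsigned f(unsigned long long x, unsigned long long y, unsigned z) {
    return (unsigned)(x + y) + z;   // == (unsigned)x + (unsigned)y + z
  }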
llvm-svn: 123838
are pointing to the same object, one pointer is accessing the entire
object, and the other access has a non-zero size. This prevents
TBAA from kicking in and saying NoAlias in such cases.
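A small example of the kind of case this guards against (hypothetical, for illustration):

  #include <string.h>
  struct S { int a; float b; } s;
  // An access that covers the whole object necessarily overlaps any non-empty
  // access within it, regardless of what TBAA says about the member types,
  // so NoAlias would be the wrong answer here.
  void f(void) {
    memset(&s, 0, sizeof s);   // writes the entire object
    s.b = 1.0f;                // non-zero-size access into the same object
  }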
llvm-svn: 123775
does normal initialization and normal chaining. Change the default
AliasAnalysis implementation to NoAlias.
Update StandardCompileOpts.h and friends to explicitly request
BasicAliasAnalysis.
Update tests to explicitly request -basicaa.
llvm-svn: 116720
response from getModRefInfo is not useful here. Instead, check for identical
calls only in the NoModRef case.
Reapply r110270, and strengthen it to compensate for the memdep changes.
When both calls are readonly, there is no dependence between them.
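For example (illustrative C, not from the patch), two readonly calls cannot depend on each other:

  extern int table_lookup(int) __attribute__((pure));  // readonly: reads memory, never writes
  int f(int i, int j) {
    int a = table_lookup(i);
    int b = table_lookup(j);  // neither call writes memory, so there is no
    return a + b;             // dependence between them; memdep/GVN may reuse
  }                           // a result when the arguments are identical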
llvm-svn: 110382
to return Ref if the left callsite only reads memory read or written
by the right callsite; fix BasicAliasAnalysis to implement this.
Add AliasAnalysisEvaluator support for testing the two-callsite
form of getModRefInfo.
llvm-svn: 110270
The RegionInfo pass detects single entry single exit regions in a function,
where a region is defined as any subgraph that is connected to the remaining
graph at only two spots.
Furthermore, a hierarchical region tree is built.
Use it by calling "opt -regions analyze" or "opt -view-regions".
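As a rough illustration (hypothetical example, not from the commit), an if/else diamond forms one such region:

  int f(int c, int x) {
    int r;
    if (c)            // the if/else diamond is entered through a single edge...
      r = x + 1;
    else
      r = x - 1;
    return r;         // ...and left through a single edge, so it is a region;
  }                   // the whole function body forms an enclosing region above it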
llvm-svn: 109089
interface needs implementations to be consistent, so any code which
wants to support different semantics must use a different interface.
It's not currently worthwhile to add a new interface for this new
concept.
Document that AliasAnalysis doesn't support cross-function queries.
llvm-svn: 107776
assuming that loops are in canonical form, as ScalarEvolution doesn't
depend on LoopSimplify itself. Also, with indirectbr not all loops can
be simplified. This fixes PR7416.
llvm-svn: 106389
scrounging through SCEVUnknown contents and SCEVNAryExpr operands;
instead just do a simple deterministic comparison of the precomputed
hash data.
Also, since this is more precise, it eliminates the need for the slow
N^2 duplicate detection code.
llvm-svn: 105540
Also, generalize ScalarEvolution's min and max recognition to handle
some new forms of min and max that this change makes more common.
llvm-svn: 102234
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
llvm-svn: 99928
true or false as its exit condition. These are usually eliminated by
SimplifyCFG, but they may be left around during a pass which wishes
to preserve the CFG.
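A minimal example of such a loop (illustrative only):

  void f(int *p) {
    do {
      *p = 0;
    } while (0);   // constant-false exit condition: the backedge is never
  }                // taken, so the backedge-taken count is simply 0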
llvm-svn: 96683
have trouble with an intermediate add overflowing. Also, be more conservative
about the case where the induction variable in an SLT loop exit can step past
the RHS of the SLT and overflow in a single step.
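For instance (hypothetical example):

  void f(int *a, int n) {
    // With a stride of 4, i can step past n rather than hitting it exactly;
    // if n is within 3 of INT_MAX, the final i += 4 can also overflow in a
    // single step, so the trip count must be computed conservatively.
    for (int i = 0; i < n; i += 4)
      a[i] = 0;
  }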
Make getSignedRange more aggressive, to recover for some common cases which
the above fixes pessimized.
This addresses rdar://7561161.
llvm-svn: 94512
Nodes that had children outside of the post dominator tree (infinite loops)
were removed from the post dominator tree. This seems to be wrong. Leave them
in the tree.
llvm-svn: 93633
first expression as P+4+4*i which we considered to possibly alias
P+4*j. Now we correctly analyze the former one as P+1+4*i.
@test10 is a sanity test that verifies that we know that P+4+4*i != P+4*i.
llvm-svn: 89960
Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimizations use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86311
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimizations use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86077
cannot alias the GEP. GEP pointer alias rule states this clearly:
A pointer value formed from a getelementptr instruction is associated with the
addresses associated with the first operand of the getelementptr.
llvm-svn: 84079
where the induction variable has a non-unit stride, such as {0,+,2}, and
there are expressions such as {1,+,2} inside the loop formed with
'or' or 'add nsw' operators.
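The typical source pattern looks something like this (illustrative only):

  void f(int *a, int n) {
    for (int i = 0; i < n; i += 2) {   // i is {0,+,2}
      a[i] = 0;
      a[i + 1] = 1;   // i is always even, so instcombine may turn i + 1 into
    }                 // i | 1; either way the index is the recurrence {1,+,2}
  }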
llvm-svn: 82151
input filename so that opt doesn't print the input filename in the
output, so that grep lines in the tests don't unintentionally match
strings in the input filename.
llvm-svn: 81537
D test/Analysis/Profiling
--- Reverse-merging r80907 into '.':
U lib/Analysis/ProfileInfoLoaderPass.cpp
Attempt to remove failure in the self-hosting build bot.
llvm-svn: 80966
edge-profiling, this is more useful since the loading of the
optimal-edge-profiling is more complicated.
The edge-profiling is tested in edge-profiling.ll where only the
instrumentation is tested.
llvm-svn: 80791
This is a simple AliasAnalysis implementation which works by making
ScalarEvolution queries. ScalarEvolution has a more complete understanding
of arithmetic than BasicAA's collection of ad-hoc checks, so it handles
some cases that BasicAA misses, for example p[i] and p[i+1] within the
same iteration of a loop.
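Concretely (illustrative example):

  void f(int *p, int n) {
    for (int i = 0; i < n; ++i)
      p[i] = p[i + 1];   // within one iteration the two accesses are exactly
  }                      // one int apart, so ScalarEvolution can prove NoAlias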
This is currently experimental. It may be that the main use for this pass
will be to help find cases where BasicAA can be profitably extended, or
to help in the development of the overall AliasAnalysis infrastructure;
however, it's also possible that it could grow up to become a directly
useful pass.
llvm-svn: 80098
(x pred y) with more thorough code that does more complete canonicalization
before resorting to range checks. This helps it find more cases where
the canonicalized expressions match.
llvm-svn: 76671
For now this only computes the allocated size of the memory pointed to by a
pointer, and the offset of a pointer from the allocated pointer.
The actual checkLimits part will come later, after another round of review.
llvm-svn: 75657
blocks, and also exit blocks with multiple conditions (combined
with (bitwise) ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
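For example (hypothetical):

  void f(char *s, int n) {
    // The exact trip count depends on the data in s, but the body can execute
    // at most n times, which gives a useful upper bound on the trip count.
    for (int i = 0; i < n && s[i] != 0; ++i)
      s[i] = ' ';
  }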
llvm-svn: 73866
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
llvm-svn: 72897
beyond their associated static array type.
I believe that this fixes a legitimate bug, because BasicAliasAnalysis
already has code to check for this condition that works for non-constant
indices; however, it was missing the case of constant indices. With this
change, it checks for both.
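The situation in question looks roughly like this (illustrative example, not the PR testcase):

  struct S { int a[4]; int b; } s;
  // The constant index 4 steps one element past the static array type of s.a,
  // so &s.a[4] has the same address as &s.b; the analysis must not conclude
  // that the two pointers cannot alias.
  int *p = &s.a[4];
  int *q = &s.b;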
This fixes PR4267, and miscompiles of SPEC 188.ammp and 464.h264ref.
llvm-svn: 72451
artificial "ptrtoint", as it tends to clutter up complicated
expressions. The cast operators now print both source and
destination types, which is usually sufficient.
llvm-svn: 70554
compute an upper-bound value for the trip count, in addition to
the actual trip count. Use this to allow getZeroExtendExpr and
getSignExtendExpr to fold casts in more cases.
This may eventually morph into a more general value-range
analysis capability; there are certainly plenty of places where
more complete value-range information would allow more folding.
llvm-svn: 70509
(sext i8 {-128,+,1} to i64) to i64 {-128,+,1}, where the iteration
crosses from negative to positive, but is still safe if the trip
count is within range.
llvm-svn: 70421
type to truncate to should be the number of bits of the value that are
preserved, not the number that are clobbered with sign-extension.
This fixes regressions in ldecod.
llvm-svn: 69704
to more accurately describe what it does. Expand its doxygen comment
to describe what the backedge-taken count is and how it differs
from the actual iteration count of the loop. Adjust names and
comments in associated code accordingly.
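The distinction, on a small example (illustrative only):

  void f(int *a) {
    for (int i = 0; i < 3; ++i)   // the body runs 3 times (iteration count 3),
      a[i] = 0;                   // but the backedge is taken only 2 times
  }                               // (backedge-taken count 2)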
llvm-svn: 65382
couldn't ever be the return of a call instruction. However, it's quite possible
that said local allocation is itself the return of a function call. That's
what malloc and calloc are for, actually.
llvm-svn: 64442
The problematic part of this patch is that we were out of attribute bits,
requiring some fancy bit hacking to make it fit (by shrinking alignment)
without breaking existing users or the file format.
This change will require users to rebuild llvm-gcc to match llvm.
llvm-svn: 61239
parallel, allowing it to decide that P/Q must alias if A/B
must alias in things like:
P = gep A, 0, i, 1
Q = gep B, 0, i, 1
This allows GVN to delete 62 more instructions out of 403.gcc.
llvm-svn: 60820
indicate functions that allocate, such as operator new, or list::insert. The
actual definition is slightly less strict (for now).
No changes to the bitcode reader/writer, asm printer or verifier were needed.
llvm-svn: 59934
Use it to safely handle less-than-or-equal-to exit conditions in loops. These
also occur when the loop exit branch exits on true, because SCEV inverts the
icmp predicate.
Use it again to handle non-zero strides, but only with an unsigned comparison
in the exit condition.
llvm-svn: 59528
If this patch causes a performance regression for anyone, please let me know,
and it can be fixed in a different way with much more effort.
llvm-svn: 59384
Unfortunately this means removing one regression test
of GlobalsModRef because I couldn't work out how to
perform it without MarkModRef.
llvm-svn: 56342
can get the readnone/readonly attributes, and gives them those attributes.
The plan is to remove markmodref (which did the same thing
by querying GlobalsModRef) and delete the analogous
functionality from GlobalsModRef.
llvm-svn: 56341
its callers to emit a space character before calling it when a
space is needed.
This fixes several spurious whitespace issues in
ScalarEvolution's debug dumps. See the test changes for
examples.
This also fixes odd space-after-tab indentation in the output
for switch statements, and changes calls from being printed like
this:
call void @foo( i32 %x )
to this:
call void @foo(i32 %x)
llvm-svn: 56196
(1) code left over from the days of ConstantPointerRef:
if a use of a function is a GlobalValue then that is
not considered a reason to add an edge from the external
node, even though the use may be as an initializer for
an externally visible global! There might be some point
to this behaviour when the use is by an alias (though the
code predated aliases by some centuries), but I think
PR2782 is a better way of handling that. (2) If function
F calls function G, and also G is a parameter to the
call, then an F->G edge is not added to the callgraph.
While this doesn't seem to matter much, adding such an
edge makes the callgraph more regular.
In addition, the new code should be faster as well as
simpler.
llvm-svn: 55987
callgraph, when one member of a SCC calls another
then the analysis would drop to mod-ref because
there is (usually) no function info for the callee
yet; fix this. Teach the analysis about function
attributes, in particular the readonly attribute
(which requires being careful about globals).
llvm-svn: 55696
continue past the first conditional branch when looking for a
relevant test. This helps it avoid using MAX expressions in
loop trip counts in more cases.
llvm-svn: 54697
version uses a new algorithm for evaluating the binomial coefficients
which is significantly more efficient for AddRecs of more than 2 terms
(see the comments in the code for details on how the algorithm works).
It also fixes some bugs: it removes the arbitrary length restriction for
AddRecs, it fixes the silent generation of incorrect code for AddRecs
which require a wide calculation width, and it fixes an issue where we
were incorrectly truncating the iteration count too far when evaluating
an AddRec expression narrower than the induction variable.
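For background, the value of an AddRec {A0,+,A1,+,...,+,Ak} after n backedges
is A0*C(n,0) + A1*C(n,1) + ... + Ak*C(n,k). A naive sketch of that evaluation
(not the LLVM code, and glossing over the calculation-width issue mentioned
above) looks like:

  #include <stdint.h>
  // Evaluate {A[0],+,A[1],+,...,+,A[k]} at iteration n, modulo 2^64.
  uint64_t eval_addrec(const uint64_t *A, unsigned k, uint64_t n) {
    uint64_t result = 0, coeff = 1;           // coeff == C(n, i)
    for (unsigned i = 0; i <= k; ++i) {
      result += A[i] * coeff;
      // Step to C(n, i+1) = C(n, i) * (n - i) / (i + 1); the division is exact,
      // but the intermediate product needs a wider calculation width than the
      // AddRec itself to avoid losing bits.
      coeff = coeff * (n - i) / (i + 1);
    }
    return result;
  }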
There are still a few related issues I know of: I think there's
still an issue with the SCEVExpander expansion of AddRec in terms of
the width of the induction variable used. The hack to avoid generating
too-wide integers shouldn't be necessary; instead, the callers should be
considering the cost of the expansion before expanding it (in addition
to not expanding too-wide integers, we might not want to expand
expressions that are really expensive, especially when optimizing for
size; calculating a length-17 32-bit AddRec currently generates about 250
instructions of straight-line code on X86). Also, for long 32-bit
AddRecs on X86, CodeGen really sucks at scheduling the code. I'm planning on
filing follow-up PRs for these issues.
llvm-svn: 54332
time applying to the implicit comparison in smin expressions. The
correct way to transform an inequality into the opposite
inequality, either signed or unsigned, is with a not expression.
I looked through the SCEV code, and I don't think there are any more
occurrences of this issue.
llvm-svn: 54194
SGT exit condition. Essentially, the correct way to flip an inequality
in 2's complement is the not operator, not the negation operator.
That said, the difference only affects cases involving INT_MIN.
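A quick check of why (illustrative, not from the patch): in 2's complement
~v == -1 - v never wraps, so x > y holds exactly when ~x < ~y, whereas
negation misbehaves at INT_MIN:

  #include <assert.h>
  #include <limits.h>
  void flip_demo(void) {
    int x = INT_MIN, y = 0;
    assert((x > y) == (~x < ~y));   // both sides are false: the not-based flip is exact
    // A negation-based flip would compare -x < -y here, but -INT_MIN wraps back
    // to INT_MIN, making the flipped comparison true while x > y is false.
  }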
Also, enhance the pre-test search logic to be a bit smarter about
inequalities flipped with a not operator, so it can eliminate the smax
from the iteration count for simple loops.
llvm-svn: 54184
force evaluation (ComputeIterationCountExhaustively) should be turned off.
It doesn't apply to trip-count2.ll because this file tests the brute force
evaluation.
The test for PR2364 (2008-05-25-NegativeStepToZero.ll) currently fails
showing that the patch for this bug doesn't work. I'll fix it in a few hours
with a patch for PR2088.
llvm-svn: 53792
with code that was expecting different bit widths for different values.
Make getTruncateOrZeroExtend a method on ScalarEvolution, and use it.
llvm-svn: 52248