version uses a new algorithm for evaluating the binomial coefficients
which is significantly more efficient for AddRecs of more than 2 terms
(see the comments in the code for details on how the algorithm works).
It also fixes some bugs: it removes the arbitrary length restriction for
AddRecs, it fixes the silent generation of incorrect code for AddRecs
which require a wide calculation width, and it fixes an issue where the
iteration count was being truncated too aggressively when evaluating an
AddRec expression narrower than the induction variable.
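For reference, here is a minimal sketch of the closed form involved (my
illustration of the math, not the code from this patch): an AddRec
{A[0],+,A[1],+,...,+,A[k]} evaluated at iteration n is the sum over i of
A[i] * C(n, i), and each binomial coefficient can be computed
incrementally from the previous one:

  // Sketch only: evaluates {A[0],+,A[1],+,...,+,A[k]} at iteration N.
  // Uses plain 64-bit math; the real code must do the binomial product
  // in a wider type to avoid losing bits to overflow (the "wide
  // calculation width" issue mentioned above).
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  uint64_t evaluateAddRecAt(const std::vector<uint64_t> &A, uint64_t N) {
    uint64_t Result = 0, Coeff = 1; // Coeff tracks C(N, i); C(N, 0) == 1.
    for (std::size_t i = 0; i < A.size(); ++i) {
      Result += A[i] * Coeff;
      Coeff = Coeff * (N - i) / (i + 1); // C(N, i+1) = C(N, i)*(N-i)/(i+1)
    }
    return Result;
  }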
There are still a few related issues I know of: I think there's
still an issue with the SCEVExpander expansion of AddRec in terms of
the width of the induction variable used. The hack to avoid generating
too-wide integers shouldn't be necessary; instead, the callers should be
considering the cost of the expansion before expanding it (in addition
to not expanding too-wide integers, we might not want to expand
expressions that are really expensive, especially when optimizing for
size; calculating a length-17 32-bit AddRec currently generates about 250
instructions of straight-line code on X86). Also, for long 32-bit
AddRecs on X86, CodeGen really sucks at scheduling the code. I'm planning on
filing follow-up PRs for these issues.
llvm-svn: 54332
time applying to the implicit comparison in smin expressions. The
correct way to transform an inequality into the opposite
inequality, either signed or unsigned, is with a not expression.
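To see why (my illustration, not code from the patch): in 2's complement,
~x == -x - 1 is defined for every value, so x < y holds exactly when
~y < ~x, for both signed and unsigned comparisons, and consequently
smin(x, y) == ~smax(~x, ~y):

  // Sketch: exhaustively checks the 'not' identities on 8-bit values,
  // including INT8_MIN, where a negation-based flip would go wrong.
  #include <cassert>
  #include <cstdint>

  int main() {
    for (int a = -128; a <= 127; ++a)
      for (int b = -128; b <= 127; ++b) {
        int8_t X = (int8_t)a, Y = (int8_t)b;
        // x < y  iff  ~y < ~x  ('not' is an order-reversing bijection).
        assert((X < Y) == ((int8_t)~Y < (int8_t)~X));
        // smin(x, y) == ~smax(~x, ~y), the identity that lets smin be
        // expressed in terms of smax.
        int8_t Smin = X < Y ? X : Y;
        int8_t NX = (int8_t)~X, NY = (int8_t)~Y;
        int8_t Smax = NX > NY ? NX : NY;
        assert(Smin == (int8_t)~Smax);
      }
    return 0;
  }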
I looked through the SCEV code, and I don't think there are any more
occurrences of this issue.
llvm-svn: 54194
SGT exit condition. Essentially, the correct way to flip an inequality
in 2's complement is the not operator, not the negation operator.
That said, the difference only affects cases involving INT_MIN.
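A concrete 8-bit example of the INT_MIN corner (mine, not from the
patch): to flip x > y, negation would compare -x < -y, but -x wraps back
to INT_MIN when x == INT_MIN, while ~x is always well-defined:

  #include <cstdint>
  #include <cstdio>

  int main() {
    int8_t X = INT8_MIN, Y = 0;   // X > Y is false.
    int8_t NegX = (int8_t)-X;     // -(-128) wraps back to -128.
    int8_t NegY = (int8_t)-Y;
    int8_t NotX = (int8_t)~X;     // ~(-128) == 127, no wrap.
    int8_t NotY = (int8_t)~Y;     // ~0 == -1.
    std::printf("X > Y: %d  -X < -Y: %d  ~X < ~Y: %d\n",
                X > Y, NegX < NegY, NotX < NotY);
    // Prints "X > Y: 0  -X < -Y: 1  ~X < ~Y: 0": only the 'not' flip
    // agrees with the original comparison.
    return 0;
  }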
Also, enhance the pre-test search logic to be a bit smarter about
inequalities flipped with a not operator, so it can eliminate the smax
from the iteration count for simple loops.
llvm-svn: 54184
circumstances we could end up remapping a dependee to the same instruction
that we're trying to remove. Handle this properly by just falling back to
a conservative solution.
llvm-svn: 54132
bail after 256 bits to avoid producing code that the backends can't handle.
Previously, we capped it at 64 bits, preferring to miscompile in those cases.
This change also reverts much of r52248 because the invariants the code was
expecting are now being met.
llvm-svn: 53812
Move GetConstantStringInfo to lib/Analysis. Remove
string output routine from Constant. Update all
callers. Change the debug intrinsic API slightly to
accommodate the move of the routine; these now return
values instead of strings.
This unbreaks llvm-gcc bootstrap.
llvm-svn: 52884
string output routine from Constant. Update all
callers. Change the debug intrinsic API slightly to
accommodate the move of the routine; these now return
values instead of strings.
llvm-svn: 52748
inserting extractvalues. In particular, this prevents the insertion of
extractvalues that can't be folded away later. Also add an example of when this
stuff is needed.
llvm-svn: 52328
I'm at it, rename it to FindInsertedValue.
The only functional change is that newly created instructions are no longer
added to instcombine's worklist, but that is not really necessary anyway (and
I'll commit some improvements next that will completely remove the need).
llvm-svn: 52315
This fixes several minor bugs (such as returning noalias
for comparisons between external weak functions and null) but
is mostly a cleanup.
llvm-svn: 52299
take into account the instruction pointed to by InsertPt. Thanks to
this, returning the new value of InsertPt to the InsertBinop() caller
can be avoided. The bug was actually in the visitAddRecExpr() method,
which wasn't correctly handling changes of InsertPt. There shouldn't be
any performance regression, as the -gvn pass (run after -indvars)
removes any
redundant binops.
llvm-svn: 52291
Add a safety measure. It isn't safe to assume in ScalarEvolutionExpander that
all loops are in canonical form (but it should be safe for loops that have
AddRecs).
llvm-svn: 52275
with code that was expecting different bit widths for different values.
Make getTruncateOrZeroExtend a method on ScalarEvolution, and use it.
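In rough terms, the method picks the conversion from the relative widths
(a simplified sketch, LLVM context assumed, not the verbatim
implementation):

  // Truncate when the source is wider, zero-extend when it is narrower,
  // and do nothing when the widths already match.
  const SCEV *ScalarEvolution::getTruncateOrZeroExtend(const SCEV *V,
                                                       const Type *Ty) {
    unsigned SrcBits = getTypeSizeInBits(V->getType());
    unsigned DstBits = getTypeSizeInBits(Ty);
    if (SrcBits == DstBits)
      return V;
    if (SrcBits > DstBits)
      return getTruncateExpr(V, Ty);
    return getZeroExtendExpr(V, Ty);
  }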
llvm-svn: 52248
is longer than the second one) should stop after finding one. The added
break statement guarantees this. It also changes the difference between
the offsets to the absolute value of that difference in the condition.
llvm-svn: 51875
out of instcombine into a new file in libanalysis. This also teaches
ComputeNumSignBits about the number of sign bits in a ConstantInt.
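For a constant, the sign-bit count can be read directly off the value:
it's the number of leading bits that match the sign bit, including the
sign bit itself. A standalone illustration (mine, on plain 8-bit
integers rather than ConstantInt):

  #include <cstdint>

  // e.g. for i8: 0 -> 8, -1 -> 8, 1 -> 7, -2 -> 7, 127 -> 1, -128 -> 1.
  unsigned numSignBits8(int8_t V) {
    uint8_t U = (uint8_t)V;
    if (V < 0)
      U = (uint8_t)~U;  // Normalize so we just count leading zeros.
    unsigned N = 0;
    for (int Bit = 7; Bit >= 0 && !((U >> Bit) & 1); --Bit)
      ++N;
    return N;  // Leading bits equal to the sign bit, sign bit included.
  }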
llvm-svn: 51863
Analysis/ConstantFolding to fold ConstantExpr's, then make instcombine
use it, so that targetdata can be used when folding constant expressions
on void instructions.
Also extend the icmp(inttoptr, inttoptr) folding to handle the case where
int size != ptr size.
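The mismatched-width case works because inttoptr itself truncates or
zero-extends its operand to the pointer width. A hypothetical standalone
helper (names mine, not the ConstantFolding code) showing the idea for
equality:

  #include <cstdint>

  // icmp eq (inttoptr X), (inttoptr Y): both operands are first brought
  // to the pointer width, so only the low PtrBits bits matter. Assumes
  // X and Y already fit in XBits/YBits respectively.
  bool foldIntToPtrEq(uint64_t X, unsigned XBits,
                      uint64_t Y, unsigned YBits, unsigned PtrBits) {
    auto ToPtrWidth = [&](uint64_t V, unsigned Bits) -> uint64_t {
      if (Bits > PtrBits && PtrBits < 64)
        V &= (1ULL << PtrBits) - 1;  // truncate to the pointer width
      return V;  // narrower or equal: zero-extension changes nothing
    };
    return ToPtrWidth(X, XBits) == ToPtrWidth(Y, YBits);
  }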
llvm-svn: 51559
SCCP-like sparse lattice analysis with relative ease. Just pick your
lattice function and implement the transfer function and you're good.
Just make sure you don't break monotonicity ;-)
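A hypothetical skeleton of what a client supplies (names are mine, not
the actual interface):

  #include <algorithm>

  // The lattice: values may only move up (Undefined -> Constant ->
  // Overdefined); that monotonicity is what guarantees termination.
  enum class LatticeVal { Undefined, Constant, Overdefined };

  // Join of two lattice values; the enum order encodes the lattice
  // order here (a real lattice would also track *which* constant).
  inline LatticeVal join(LatticeVal A, LatticeVal B) {
    return std::max(A, B);
  }

  // Example transfer function for a binary operator.
  inline LatticeVal transferBinOp(LatticeVal L, LatticeVal R) {
    if (L == LatticeVal::Overdefined || R == LatticeVal::Overdefined)
      return LatticeVal::Overdefined;
    if (L == LatticeVal::Undefined || R == LatticeVal::Undefined)
      return LatticeVal::Undefined;  // stay optimistic until inputs resolve
    return LatticeVal::Constant;     // both constant: fold the operation
  }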
llvm-svn: 50961
by an instance of LibCallInfo to provide mod/ref info of
standard library functions. This is powerful enough to
say that 'sqrt' is readonly except that it modifies errno,
or that "printf doesn't store to memory unless the %n
constraint is present" etc.
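A hypothetical sketch of the kind of table this makes possible (field
names are mine, not the actual LibCallInfo interface):

  // Each entry describes what a library call may read or write.
  struct LibCallEntry {
    const char *Name;        // e.g. "sqrt"
    bool ReadsMemory;        // may the call read memory?
    bool WritesMemory;       // may the call write memory?
    const char *OnlyWrites;  // if non-null, the only location written
  };

  static const LibCallEntry Table[] = {
    // sqrt is readonly except that it may set errno.
    {"sqrt",   false, true, "errno"},
    // printf reads its format string and arguments, and only stores to
    // memory when a %n conversion is present.
    {"printf", true,  true, "%n arguments"},
  };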
llvm-svn: 50827
Currently it is sufficient to describe mod/ref behavior but will hopefully
eventually be extended for other purposes.
This isn't used by anything yet.
llvm-svn: 50820