Commit Graph

313 Commits

Author SHA1 Message Date
Dan Gohman 724f825f96 Fix a typo in a comment that Frits van Bommel noticed.
llvm-svn: 73796
2009-06-19 23:41:37 +00:00
Dan Gohman cc31110b95 Re-apply r73718, now that the fix in r73787 is in, and add a
hand-crafted testcase which demonstrates the bug that was exposed
in 254.gap.

llvm-svn: 73793
2009-06-19 23:23:27 +00:00
Dan Gohman 55e3dd9174 Fix LSR's OptimizeSMax to ignore max operators with more than 2 operands,
which it isn't prepared to handle.

llvm-svn: 73787
2009-06-19 23:03:46 +00:00
Evan Cheng 86076c9e30 Revert 73718. It's breaking 254.gap.
llvm-svn: 73783
2009-06-19 21:15:06 +00:00
Dan Gohman 8c9ac59455 Generalize LSR's OptimizeSMax to handle unsigned max tests as well
as signed max tests. Along with r73717, this helps CodeGen avoid
emitting code for a maximum operation for this class of loop.

llvm-svn: 73718
2009-06-18 20:23:18 +00:00
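
A hypothetical source-level loop of the kind referred to here (not
taken from the commit itself): once the loop is rotated into do-while
form, its trip count expression contains a max, and without this
optimization that max is materialized before the loop.

    // Hypothetical example. The body runs at least once, so the trip
    // count is smax(n, 1) (or umax for an unsigned comparison);
    // OptimizeSMax rewrites the exit compare so no explicit max has
    // to be emitted before the loop.
    void scale(float *a, int n) {
      int i = 0;
      do {
        a[i] *= 2.0f;
        ++i;
      } while (i < n);
    }
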
Dan Gohman a0348809b6 Remove the code from IVUsers that attempted to handle
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.

llvm-svn: 73706
2009-06-18 16:54:06 +00:00
Dan Gohman d8329e8378 Update comments to use doxygen syntax.
llvm-svn: 73621
2009-06-17 17:51:33 +00:00
Dan Gohman 7ccc52f131 Support vector casts in more places, fixing a variety of assertion
failures.

To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.

Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.

llvm-svn: 73431
2009-06-15 22:12:54 +00:00
Dan Gohman 0652fd59ff Convert several parts of the ScalarEvolution framework to use
SmallVector instead of std::vector.

llvm-svn: 73357
2009-06-14 22:47:23 +00:00
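
For context, a minimal sketch of the SmallVector usage pattern this
conversion adopts (illustrative element type and inline size; not the
actual members changed in ScalarEvolution):

    #include "llvm/ADT/SmallVector.h"

    // SmallVector keeps its first N elements in inline storage, so the
    // short operand lists that dominate SCEV code avoid a heap allocation.
    void demo() {
      llvm::SmallVector<int, 8> Ops;   // illustrative element type
      for (int i = 0; i != 4; ++i)
        Ops.push_back(i);              // fits in the 8 inline slots
    }
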
Devang Patel 50fc5a3cd7 Simplify.
llvm-svn: 72965
2009-06-05 22:39:21 +00:00
Dan Gohman a5b9645c4b Split the Add, Sub, and Mul instruction opcodes into separate
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.

For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.

This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt

llvm-svn: 72897
2009-06-04 22:49:04 +00:00
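
A rough sketch of what the opcode split means for code that constructs
IR, written against a present-day IRBuilder interface for illustration
(the 2009 headers and signatures differed in detail):

    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    // Integer and floating-point addition are now distinct opcodes rather
    // than one Add opcode overloaded by operand type; that separation is
    // what later lets integer adds carry overflow information.
    void emitAdds(IRBuilder<> &B, Value *IA, Value *IB, Value *FA, Value *FB) {
      Value *ISum = B.CreateAdd(IA, IB, "isum");    // integer Add
      Value *FSum = B.CreateFAdd(FA, FB, "fsum");   // floating-point FAdd
      (void)ISum;
      (void)FSum;
    }
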
Dan Gohman 4d1823680d Revert 72493 and replace it with a more conservative fix, for now: don't
rewrite the comparison if there is any implicit extension or truncation
on the induction variable. I'm planning for IVUsers to eventually take
over some of the work of this code, and for it to be generalized.

llvm-svn: 72496
2009-05-27 21:10:47 +00:00
Dan Gohman f4d85325c0 In ChangeCompareStride, when the stride to be reused is truncated to
a smaller type, promote its offset back up to the type of the new
comparison. This fixes PR4222.

llvm-svn: 72493
2009-05-27 20:00:18 +00:00
Dan Gohman 7248923a5d Suppress the IV reversal transformation in the case that the RHS
of the comparison is defined inside the loop. This fixes a
use-before-def problem, because the transformation puts a use
of the RHS outside the loop.

llvm-svn: 72149
2009-05-20 00:34:08 +00:00
Dan Gohman 97f70add3c Add some more comments to the top of this file.
llvm-svn: 72131
2009-05-19 20:37:36 +00:00
Dan Gohman adc70d6806 Trim unneeded #includes.
llvm-svn: 72130
2009-05-19 20:35:26 +00:00
Dan Gohman 2649491f9c Teach SCEVExpander to expand arithmetic involving pointers into GEP
instructions. It attempts to create high-level multi-operand GEPs,
though in cases where this isn't possible it falls back to casting
the pointer to i8* and emitting a GEP with that. Using GEP instructions
instead of ptrtoint+arithmetic+inttoptr helps pointer analyses that
don't use ScalarEvolution, such as BasicAliasAnalysis.

Also, make the AddrModeMatcher more aggressive in handling GEPs.
Previously it assumed that operand 0 of a GEP would require a register
in almost all cases. It now does extra checking and can do more
matching if operand 0 of the GEP is foldable. This fixes a problem
that was exposed by SCEVExpander using GEPs.

llvm-svn: 72093
2009-05-19 02:15:55 +00:00
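
The IR-level difference roughly corresponds to the following
source-level contrast (a hypothetical example): keeping address
computation in pointer form, as a GEP does, preserves information that
pointer-based analyses such as BasicAliasAnalysis can use, while
round-tripping through integers hides it.

    #include <cstdint>

    // Address arithmetic kept in pointer form, analogous to emitting a GEP.
    char *viaPointer(char *p, long i) {
      return p + i;
    }

    // The same address via an integer round trip, analogous to
    // ptrtoint + add + inttoptr; pointer-based analyses get far less
    // information out of this form.
    char *viaInteger(char *p, long i) {
      return reinterpret_cast<char *>(reinterpret_cast<std::uintptr_t>(p) + i);
    }
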
Dan Gohman 14d1339579 Rename UseTy to AccessTy, for consistency with getAccessType, and to
avoid ambiguity with the word "use" in IVStrideUse.

llvm-svn: 72012
2009-05-18 16:45:28 +00:00
Dale Johannesen 536de01bcf Add an int64_t variant of abs, for host environments
without one.  Use it where we were using abs on
int64_t objects.
(I strongly suspect the casts to unsigned in the
fragments in LoopStrengthReduce are not doing whatever
the original intent was, but the obvious change to
uint64_t doesn't work.  Maybe later.)

llvm-svn: 71612
2009-05-13 00:24:22 +00:00
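
A minimal version of such a 64-bit absolute-value helper (the name and
placement here are illustrative, not necessarily what the commit added):

    #include <cstdint>

    // abs() taking and returning int64_t, for hosts whose C library lacks
    // a 64-bit variant. As with the standard abs functions, the result is
    // undefined for INT64_MIN.
    inline std::int64_t abs64(std::int64_t x) {
      return x < 0 ? -x : x;
    }
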
Dan Gohman d76d71a291 Factor the code for collecting IV users out of LSR into an IVUsers class,
and generalize it so that it can be used by IndVarSimplify. Implement the
base IndVarSimplify transformation code using IVUsers. This removes
TestOrigIVForWrap and associated code, as ScalarEvolution now has enough
builtin overflow detection and folding logic to handle all the same cases,
and more. Run "opt -iv-users -analyze -disable-output" on your favorite
loop for an example of what IVUsers does.

This lets IndVarSimplify eliminate IV casts and compute trip counts in
more cases. Also, this happens to finally fix the remaining testcases
in PR1301.

Now that IndVarSimplify is being more aggressive, it occasionally runs
into the problem where ScalarEvolutionExpander's code for avoiding
duplicate expansions makes it difficult to ensure that all expanded
instructions dominate all the instructions that will use them. As a
temporary measure, IndVarSimplify now uses a FixUsesBeforeDefs function
to fix up instructions inserted by SCEVExpander. Fortunately, this code
is contained, and can be easily removed once a more comprehensive
solution is available.

llvm-svn: 71535
2009-05-12 02:17:14 +00:00
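
For a concrete picture of what IVUsers collects, consider a
hypothetical loop with two uses of the induction variable, one of them
through a cast; the quoted "opt -iv-users -analyze -disable-output"
command prints the stride/offset expressions recorded for such uses.

    // Hypothetical input: i is used both as an array index and widened to
    // a larger type, so IVUsers records two users of the same IV, one of
    // which is reached through a cast.
    long sumWithIndex(const int *a, int n) {
      long sum = 0;
      for (int i = 0; i < n; ++i) {
        sum += a[i];          // use 1: i as an index
        sum += (long)i;       // use 2: i widened to long
      }
      return sum;
    }
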
Evan Cheng 78a4eb844b Teach LSR to optimize more loop exit compares, i.e. change them to
use the postinc iv value. Previously LSR would only optimize those
which are in the loop latch block. However, if LSR can prove it is
safe (and profitable), it's now possible to change those not in the
latch blocks to use postinc values.
Also, if the compare is the only use, LSR would place the iv
increment instruction before the compare instead of in the latch.

llvm-svn: 71485
2009-05-11 22:33:01 +00:00
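
In source terms, the post-increment (postinc) value is the IV after the
increment; an exit compare like the one below already tests that value,
which is the form LSR now tries to produce for more compares
(hypothetical example):

    // The exit test uses the already-incremented value of i, so the
    // increment and the compare/branch can sit together at the bottom of
    // the loop instead of keeping the pre-increment value live.
    void fill(int *a, int n) {
      int i = 0;
      do {
        a[i] = 0;
      } while (++i != n);   // compare against the postinc value
    }
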
Dale Johannesen 02cb2bf2e3 Reverse a loop that is counting up to a maximum to
count down to 0 instead, under very restricted
circumstances.  Adjust 4 testcases in which this
optimization fires.

llvm-svn: 71439
2009-05-11 17:15:42 +00:00
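
In source terms the transformation looks like the following
(hypothetical example; the pass's actual legality conditions are
stricter than shown): a count-up loop whose IV has no other uses
becomes a count-down-to-zero loop, so the exit test compares against
zero rather than a loop-invariant bound.

    // Before: counts up to n; the exit test needs the bound n in a register.
    void before(volatile int *p, int n) {
      for (int i = 0; i < n; ++i)
        *p += 1;
    }

    // After: counts down to 0. Legal here only because i has no other
    // uses; both versions execute the body max(n, 0) times.
    void after(volatile int *p, int n) {
      for (int i = n; i > 0; --i)
        *p += 1;
    }
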
Evan Cheng b9dcc2c0c9 Factor out code that optimize loop terminating condition.
llvm-svn: 71305
2009-05-09 01:08:24 +00:00
Evan Cheng 342053cd27 Unbreak the build.
llvm-svn: 71091
2009-05-06 18:00:56 +00:00
David Greene 0dec5b9a75 Make sure to use signed arithmetic in APInt to fix a regression.
llvm-svn: 71090
2009-05-06 17:39:26 +00:00
Dan Gohman e58fc20f8d Fix a copy+pasto in a comment.
llvm-svn: 71035
2009-05-05 23:02:38 +00:00
Dan Gohman 96b18ccdd3 Delete a FIXME which is no longer relevant, and add a FIXME that is.
llvm-svn: 71033
2009-05-05 22:59:55 +00:00
Bill Wendling 5e2ac0cd9c Temporarily reverting r71008. It was causing this failure:
Running /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/
CodeGen/X86/dg.exp ...
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/
CodeGen/X86/change-compare-stride-1.ll
Failed with exit(1) at line 2
while running: grep {cmpq       $-478,} change-compare-stride-1.ll.tmp
child process exited abnormally

llvm-svn: 71013
2009-05-05 20:49:46 +00:00
David Greene 246a3dfb10 Handle overflow of 64-bit loop conditions.
llvm-svn: 71008
2009-05-05 20:22:36 +00:00
Dan Gohman 48f8222293 Re-apply 70645, converting ScalarEvolution to use
CallbackVH, with fixes. allUsesReplacedWith needs to
walk the def-use chains and invalidate all users of a
value that is replaced. SCEVs of users need to be
recalculated even if the new value is equivalent. Also,
make forgetLoopPHIs walk def-use chains, since any
SCEV that depends on a PHI should be recalculated when
more information about that PHI becomes available.

llvm-svn: 70927
2009-05-04 22:30:44 +00:00
Dan Gohman a30370bc33 Constify a bunch of SCEV-using code.
llvm-svn: 70919
2009-05-04 22:02:23 +00:00
Dan Gohman 5036695c32 Revert r70645 for now; it's causing a variety of regressions.
llvm-svn: 70661
2009-05-03 05:46:20 +00:00
Dan Gohman e9a38d16fe Convert ScalarEvolution to use CallbackVH for its internal map. This
makes ScalarEvolution::deleteValueFromRecords, and the code that
subtly needed to call it before ReplaceAllUsesWith, unnecessary.

It also makes ValueDeletionListener unnecessary.

llvm-svn: 70645
2009-05-02 21:19:20 +00:00
Dan Gohman ff08995589 Previously, RecursivelyDeleteDeadInstructions provided an option
of returning a list of pointers to Values that are deleted. This was
unsafe, because the pointers in the list are, by nature of what
RecursivelyDeleteDeadInstructions does, always dangling. Replace this
with a simple callback mechanism. This may eventually be removed if
all clients can reasonably be expected to use CallbackVH.

Use this to factor out the dead-phi-cycle-elimination code from LSR
into a utility function, and generalize it to use the
RecursivelyDeleteTriviallyDeadInstructions utility function.

This makes LSR more aggressive about eliminating dead PHI cycles;
adjust tests to either be less trivial or to simply expect fewer
instructions.

llvm-svn: 70636
2009-05-02 18:29:22 +00:00
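
The general shape of the new interface, sketched with made-up names
rather than the actual LLVM signatures: the deletion routine reports
each Value just before it is freed, so clients never see a dangling
pointer.

    #include <functional>
    #include <vector>

    struct Value;   // stand-in for llvm::Value

    // Hypothetical sketch: the callback fires while V is still valid, so a
    // client (e.g. one maintaining a map keyed by Value*) can drop its own
    // references, unlike an interface that returns already-freed pointers
    // after the fact.
    void recursivelyDeleteDead(Value *Root,
                               const std::function<void(Value *)> &AboutToDelete) {
      std::vector<Value *> Worklist{Root};
      while (!Worklist.empty()) {
        Value *V = Worklist.back();
        Worklist.pop_back();
        // ...push trivially dead operands of V onto Worklist...
        AboutToDelete(V);   // notify before the instruction is erased
        // ...erase V from its parent and free it...
      }
    }
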
Dan Gohman 6409e7d4e9 Don't split critical edges during the AddUsersIfInteresting phase
of LSR. This makes the AddUsersIfInteresting phase of LSR a pure
analysis instead of a phase that potentially does CFG modifications.

The conditions where this code would actually perform a split are
rare, and in the cases where it actually would do a split the split
is usually undone by CodeGenPrepare, and in cases where splits
actually survive into codegen, they appear to hurt more often than
they help.

llvm-svn: 70625
2009-05-02 05:36:01 +00:00
Dan Gohman 65dbe7874f Make RequiresTypeConversion canonicalize the types before calling the
target hooks canLosslesslyBitCastTo and isTruncateFree. This allows
targets to avoid worrying about handling all combinations of integer
and pointer types.

llvm-svn: 70555
2009-05-01 17:07:43 +00:00
Dan Gohman d3aa4215ef Minor whitespace fix.
llvm-svn: 70551
2009-05-01 16:56:32 +00:00
Dan Gohman 6be8530158 Fix some code to work if TargetLowering is not available.
llvm-svn: 70546
2009-05-01 16:29:14 +00:00
Dale Johannesen f4031bd01e Print correct instruction in dump.
llvm-svn: 70427
2009-04-29 22:57:20 +00:00
Dan Gohman e99f98262c Permit ChangeCompareStride to rewrite a comparison when the factor
between the comparison's iv stride and the candidate stride is
exactly -1.

llvm-svn: 70244
2009-04-27 20:35:32 +00:00
Dan Gohman 4860db61be Factor out a common base class from SCEVTruncateExpr, SCEVZeroExtendExpr,
and SCEVSignExtendExpr.

llvm-svn: 69649
2009-04-21 01:25:57 +00:00
Dan Gohman b397e1a7a2 Introduce encapsulation for ScalarEvolution's TargetData object, and refactor
the code to minimize dependencies on TargetData.

llvm-svn: 69644
2009-04-21 01:07:12 +00:00
Dan Gohman 056857aa21 Use more const qualifiers with SCEV interfaces.
llvm-svn: 69450
2009-04-18 17:56:28 +00:00
Dan Gohman d2d6fd806c Don't create ConstantInts with pointer type. This fixes a
regression in 403.gcc in PIC_CODEGEN=1 and DISABLE_LTO=1
mode.

llvm-svn: 69344
2009-04-17 02:02:52 +00:00
Dan Gohman fec1d086e0 Use TargetData::getTypeSizeInBits instead of getPrimitiveSizeInBits()
to get the correct answer for pointer types.

llvm-svn: 69321
2009-04-16 22:35:57 +00:00
Dan Gohman 8b6ebb1112 Minor code simplifications. Don't attempt LSR on theoretical
targets with pointers larger than 64 bits, due to the code not
yet being APInt clean.

llvm-svn: 69296
2009-04-16 16:49:48 +00:00
Dan Gohman e2ead2c328 LSR is no longer a GEP optimizer. It is now an IV expression
optimizer, which just happens to frequently involve optimizing GEPs.

llvm-svn: 69295
2009-04-16 16:46:01 +00:00
Dan Gohman a8be04b2db Use ConstantExpr::getIntToPtr instead of SCEVExpander::InsertCastOfTo,
since the operand is always a constant.

llvm-svn: 69291
2009-04-16 15:48:38 +00:00
Dan Gohman 71bccd3e0e Use a SCEV expression cast instead of immediately inserting a
new instruction with SCEVExpander::InsertCastOfTo.

llvm-svn: 69290
2009-04-16 15:47:35 +00:00
Dan Gohman 0a40ad93a9 Expand GEPs in ScalarEvolution expressions. SCEV expressions can now
have pointer types, though in contrast to C pointer types, SCEV
addition is never implicitly scaled. This not only eliminates the
need for special code like IndVars' EliminatePointerRecurrence
and LSR's own GEP expansion code, it also does a better job because
it lets the normal optimizations handle pointer expressions just
like integer expressions.

Also, since LLVM IR GEPs can't directly index into multi-dimensional
VLAs, moving the GEP analysis out of client code and into the SCEV
framework makes it easier for clients to handle multi-dimensional
VLAs the same way as other arrays.

Some existing regression tests show improved optimization.
test/CodeGen/ARM/2007-03-13-InstrSched.ll in particular improved to
the point where if-conversion started kicking in; I turned it off
for this test to preserve the intent of the test.

llvm-svn: 69258
2009-04-16 03:18:22 +00:00
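
The "never implicitly scaled" point contrasts with C pointer
arithmetic (hypothetical example): adding i to an int* in C advances
the address by i * sizeof(int) bytes, whereas a SCEV addition
involving a pointer is a plain byte-level add, with any element
scaling written explicitly into the expression.

    // In C/C++, "p + i" is implicitly scaled by the element size: the
    // address moves by i * sizeof(int) bytes.
    int loadElement(const int *p, long i) {
      return p[i];
    }
    // The corresponding SCEV spells the scaling out, roughly
    // "((4 * i) + p)" on a target where sizeof(int) == 4; adding a pointer
    // into a SCEV expression never scales the other operands implicitly.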