Commit Graph

708 Commits

Author SHA1 Message Date
Chris Lattner ee59d4bf04 Fix a bug in my checkin from last night that caused miscompilations of
186.crafty, fhourstones and 132.ijpeg.

Bugpoint makes really nasty miscompilations embarrassingly easy to find.  It
narrowed it down to the instcombiner and this testcase (from fhourstones):

bool %l7153_l4706_htstat_loopentry_2E_4_no_exit_2E_4(int* %i, [32 x int]* %works, int* %tmp.98.out) {
newFuncRoot:
        %tmp.96 = load int* %i          ; <int> [#uses=1]
        %tmp.97 = getelementptr [32 x int]* %works, long 0, int %tmp.96         ; <int*> [#uses=1]
        %tmp.98 = load int* %tmp.97             ; <int> [#uses=2]
        %tmp.99 = load int* %i          ; <int> [#uses=1]
        %tmp.100 = and int %tmp.99, 7           ; <int> [#uses=1]
        %tmp.101 = seteq int %tmp.100, 7                ; <bool> [#uses=2]
        %tmp.102 = cast bool %tmp.101 to int            ; <int> [#uses=0]
        br bool %tmp.101, label %codeRepl4.exitStub, label %codeRepl3.exitStub

codeRepl4.exitStub:             ; preds = %newFuncRoot
        store int %tmp.98, int* %tmp.98.out
        ret bool true

codeRepl3.exitStub:             ; preds = %newFuncRoot
        store int %tmp.98, int* %tmp.98.out
        ret bool false
}

... which only has one combination performed on it:

$ llvm-as < t.ll | opt -instcombine -debug | llvm-dis
IC: Old =       %tmp.101 = seteq int %tmp.100, 7                ; <bool> [#uses=1]
    New =       setne int %tmp.100, 0           ; <bool>:<badref> [#uses=0]
IC: MOD =       br bool %tmp.101, label %codeRepl3.exitStub, label %codeRepl4.exitStub
IC: MOD =       %tmp.97 = getelementptr [32 x int]* %works, uint 0, int %tmp.96         ; <int*> [#uses=1]

It doesn't get much better than this.  :)
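
Reading the debug trace above, the compare against 7 appears to have effectively
become a compare against 0 (the new setne combined with the swapped branch arms).
A minimal C check of why those two predicates differ (illustrative only, not part
of the commit):

#include <stdio.h>

int main(void) {
    for (int x = 0; x < 16; ++x) {
        int original  = (x & 7) == 7;   /* what the source computes           */
        int rewritten = (x & 7) == 0;   /* what the bad combine would compute */
        if (original != rewritten)
            printf("mismatch at x = %d: ==7 gives %d, ==0 gives %d\n",
                   x, original, rewritten);
    }
    return 0;
}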

llvm-svn: 14109
2004-06-10 02:33:20 +00:00
Chris Lattner c8e7e298c1 More minor cleanups
llvm-svn: 14108
2004-06-10 02:12:35 +00:00
Chris Lattner df20a4d589 Eliminate many occurrences of Instruction::
llvm-svn: 14107
2004-06-10 02:07:29 +00:00
Chris Lattner 35167c3087 Implement InstCombine/select.ll:test15*
llvm-svn: 14095
2004-06-09 07:59:58 +00:00
Chris Lattner 396dbfe327 Be more careful about the order we put stuff onto the worklist. This allows us to
collapse this:
bool %le(int %A, int %B) {
        %c1 = setgt int %A, %B
        %tmp = select bool %c1, int 1, int 0
        %c2 = setlt int %A, %B
        %result = select bool %c2, int -1, int %tmp
        %c3 = setle int %result, 0
        ret bool %c3
}

into:

bool %le(int %A, int %B) {
        %c3 = setle int %A, %B          ; <bool> [#uses=1]
        ret bool %c3
}

which is handy, because the Java front end generates these sequences all over the place.

This is tested as: test/Regression/Transforms/InstCombine/JavaCompare.ll
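
A small C check (illustrative only, not part of the commit) that the collapsed
form really is equivalent to the original select/compare sequence:

#include <stdio.h>

static int le_long(int A, int B) {      /* the original sequence */
    int c1 = A > B;
    int tmp = c1 ? 1 : 0;
    int c2 = A < B;
    int result = c2 ? -1 : tmp;
    return result <= 0;
}

static int le_short(int A, int B) {     /* the collapsed form */
    return A <= B;
}

int main(void) {
    for (int a = -3; a <= 3; ++a)
        for (int b = -3; b <= 3; ++b)
            if (le_long(a, b) != le_short(a, b))
                printf("mismatch at A=%d B=%d\n", a, b);
    return 0;
}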

llvm-svn: 14086
2004-06-09 05:08:07 +00:00
Chris Lattner 2dd017402b Implement select.ll:test14*
llvm-svn: 14083
2004-06-09 04:24:29 +00:00
Chris Lattner 523d3e6674 Fix one of the major things that is causing the C Backend to infinite loop
llvm-svn: 13872
2004-05-28 05:02:13 +00:00
Chris Lattner ed79d8af53 Fix InstCombine/load.ll & PR347.
This code hadn't been updated after the changes to the GEP instruction for
structs with more than 256 elements.  It also was not handling the
ConstantAggregateZero class.

Now it does!

llvm-svn: 13834
2004-05-27 17:30:27 +00:00
Reid Spencer 297d7fe7e6 Remove unused header file.
llvm-svn: 13750
2004-05-25 08:51:36 +00:00
Reid Spencer 1cc31f264f Make this pass simply invoke SymbolTable::strip().
llvm-svn: 13749
2004-05-25 08:51:25 +00:00
Chris Lattner e1e10e1883 Implement InstCombine:shift.ll:test16, which turns (X >> C1) & C2 != C3
into (X & (C2 << C1)) != (C3 << C1), where the shift may be either left or
right and the compare may be any one.

This triggers 1546 times in 176.gcc alone, as it is a common pattern that
occurs for bitfield accesses.
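
A quick C spot-check of the rewrite (illustrative only; the constants below are
made up, and the check assumes the left shifts of C2 and C3 drop no bits):

#include <stdio.h>

#define C1 3u
#define C2 0x1Fu
#define C3 0x0Au

int main(void) {
    for (unsigned x = 0; x < (1u << 16); ++x) {
        int before = ((x >> C1) & C2) != C3;
        int after  = (x & (C2 << C1)) != (C3 << C1);
        if (before != after)
            printf("mismatch at x = %u\n", x);
    }
    return 0;
}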

llvm-svn: 13740
2004-05-25 06:32:08 +00:00
Chris Lattner 03841659a4 Implement instcombine/cast.ll:test16:
Canonicalize cast X to bool into a setne instruction
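
In C terms (illustrative only, not part of the commit), converting a value to
bool is the same test as comparing it against zero, which is the canonical form
described here:

#include <stdio.h>

int main(void) {
    for (int x = -2; x <= 2; ++x)
        if ((_Bool)x != (x != 0))   /* cast-to-bool vs. the setne form */
            printf("mismatch at x = %d\n", x);
    return 0;
}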

llvm-svn: 13736
2004-05-25 04:29:21 +00:00
Chris Lattner 99173879ad Spelling people's names right is kinda important
llvm-svn: 13702
2004-05-23 21:27:29 +00:00
Chris Lattner 289ba2ac4d Adjust to the changes in the AliasSetTracker interface
llvm-svn: 13690
2004-05-23 21:20:19 +00:00
Chris Lattner e67dbc2ae2 Add support for replacement of formal arguments with simpler expressions.
llvm-svn: 13689
2004-05-23 21:19:55 +00:00
Chris Lattner 099c8cfe90 Implement the -lowergc pass which is used by code generators (like the CBE)
that do not have builtin support for garbage collection.

llvm-svn: 13688
2004-05-23 21:19:22 +00:00
Chris Lattner 0026512bac This was not meant to be committed
llvm-svn: 13565
2004-05-13 20:56:34 +00:00
Chris Lattner c12c945cc4 Fix a nasty bug that caused us to unroll EXTREMELY large loops due to overflow
in the size calculation.

This is not something you want to see:
Loop Unroll: F[main] Loop %no_exit Loop Size = 2 Trip Count = 2147483648 - UNROLLING!

The problem was that 2*2147483648 == 0.

Now we get:
Loop Unroll: F[main] Loop %no_exit Loop Size = 2 Trip Count = 2147483648 - TOO LARGE: 4294967296>100

Thanks to some anonymous person playing with the demo page who repeatedly
caused zion to go into swapping land.  That's one way to ensure you'll get
a quick bugfix.  :)

Testcase here: Transforms/LoopUnroll/2004-05-13-DontUnrollTooMuch.ll
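
A minimal C sketch of the wrap-around (illustrative only; widening the product
to 64 bits is an assumption about the fix, suggested by the "4294967296>100"
output above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t loop_size = 2, trip_count = 2147483648u;
    uint32_t wrapped = loop_size * trip_count;            /* wraps to 0          */
    uint64_t widened = (uint64_t)loop_size * trip_count;  /* 4294967296, too big */
    printf("32-bit product: %u\n", wrapped);
    printf("64-bit product: %llu\n", (unsigned long long)widened);
    return 0;
}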

llvm-svn: 13564
2004-05-13 20:43:31 +00:00
Chris Lattner 8ec5f88c79 Fix stupid bug in my checkin yesterday
llvm-svn: 13429
2004-05-08 22:41:42 +00:00
Chris Lattner 5f667a6f58 Implement folding of GEP's like:
%tmp.0 = getelementptr [50 x sbyte]* %ar, uint 0, int 5         ; <sbyte*> [#uses=2]
        %tmp.7 = getelementptr sbyte* %tmp.0, int 8             ; <sbyte*> [#uses=1]

together.  This patch actually allows us to simplify and generalize the code.
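
In C pointer terms (illustrative only, not part of the commit), the two GEPs
above compute &ar[5] and then step 8 elements further, which folds to &ar[13]:

#include <stdio.h>

int main(void) {
    char ar[50];
    char *step_twice = &ar[5] + 8;   /* GEP followed by GEP */
    char *step_once  = &ar[13];      /* the folded GEP      */
    printf("%s\n", step_twice == step_once ? "equal" : "different");
    return 0;
}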

llvm-svn: 13415
2004-05-07 22:09:22 +00:00
Chris Lattner d9e5813821 Fix PR336: The instcombine pass asserts when visiting load instruction
llvm-svn: 13400
2004-05-07 15:35:56 +00:00
Chris Lattner 9490849028 Do not mark instructions in unreachable sections of the function as live.
This fixes PR332 and ADCE/2004-05-04-UnreachableBlock.llx

llvm-svn: 13349
2004-05-04 17:00:46 +00:00
Chris Lattner dd1a86d858 Minor efficiency tweak, suggested by Patrick Meredith
llvm-svn: 13341
2004-05-04 15:19:33 +00:00
Chris Lattner 63d75af920 Make sure to reprocess instructions used by deleted instructions to avoid
missing opportunities for combination.

llvm-svn: 13309
2004-05-01 23:27:23 +00:00
Chris Lattner b643a9e675 Make sure the instruction combiner doesn't lose track of instructions
when replacing them, missing the opportunity to do simplifications

llvm-svn: 13308
2004-05-01 23:19:52 +00:00
Chris Lattner 652064e3b8 Fix a major pessimization in the instcombiner. If an allocation instruction
is only used by a cast, and the casted type is the same size as the original
allocation, it would eliminate the cast by folding it into the allocation.

Unfortunately, it was placing the new allocation instruction right before
the cast, which could pull (for example) alloca instructions into the body
of a function.  This turns statically allocatable allocas into expensive
dynamically allocated allocas, which is bad bad bad.

This fixes the problem by placing the new allocation instruction at the same
place the old one was, duh. :)

llvm-svn: 13289
2004-04-30 04:37:52 +00:00
Chris Lattner 2d3a7a6ff0 Changes to fix up the inst_iterator to pass to boost iterator checks. This
patch was graciously contributed by Vladimir Prus.

llvm-svn: 13185
2004-04-27 15:13:33 +00:00
Chris Lattner e20c334e65 Instcombine X/-1 --> 0-X
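
A small C check of the identity (illustrative only; INT_MIN is skipped, since
dividing it by -1 overflows either way):

#include <stdio.h>

int main(void) {
    for (int x = -100; x <= 100; ++x)
        if (x / -1 != 0 - x)
            printf("mismatch at x = %d\n", x);
    return 0;
}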
llvm-svn: 13172
2004-04-26 14:01:59 +00:00
Chris Lattner 83cd87efcd Move the scev expansion code into this pass, where it belongs. There is
still room for cleanup, but at least the code modification is out of the
analysis now.

llvm-svn: 13135
2004-04-23 21:29:48 +00:00
Chris Lattner c27302c79f Disable a previous patch that was causing indvars to loop infinitely :(
llvm-svn: 13108
2004-04-22 15:12:36 +00:00
Chris Lattner c1a682dda0 Fix an extremely serious thinko I made in revision 1.60 of this file.
llvm-svn: 13106
2004-04-22 14:59:40 +00:00
Chris Lattner af532f27e7 Implement a todo, rewriting all possible scev expressions inside of the
loop.  This eliminates the extra add from the previous case, but it's
not clear that this will be a performance win overall.  Tomorrow's test
results will tell. :)

llvm-svn: 13103
2004-04-21 23:36:08 +00:00
Chris Lattner fb9a299f68 This code really wants to iterate over the OPERANDS of an instruction, not
over its USES.  If it's dead it doesn't have any uses!  :)

Thanks to the fabulous and mysterious Bill Wendling for pointing this out.  :)

llvm-svn: 13102
2004-04-21 22:29:37 +00:00
Chris Lattner dc7cc35088 Implement a fixme. This helps loops that have induction variables of different
types in them.  Instead of creating an induction variable for all types, it
creates a single induction variable and casts to the other sizes.  This generates
this code:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=4]
***     %j.0.0 = cast uint %indvar to short             ; <short> [#uses=1]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %tmp.7 = getelementptr short* %P, uint %indvar          ; <short*> [#uses=1]
        store short %j.0.0, short* %tmp.7
        %inc.0 = add int %indvar, 1             ; <int> [#uses=2]
        %tmp.2 = setlt int %inc.0, %N           ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %no_exit, label %loopexit

instead of:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi ushort [ %indvar.next, %no_exit ], [ 0, %entry ]          ; <ushort> [#uses=2]
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=3]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %indvar = cast ushort %indvar to short          ; <short> [#uses=1]
        %tmp.7 = getelementptr short* %P, uint %indvar          ; <short*> [#uses=1]
        store short %indvar, short* %tmp.7
        %inc.0 = add int %indvar, 1             ; <int> [#uses=2]
        %tmp.2 = setlt int %inc.0, %N           ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1
***     %indvar.next = add ushort %indvar, 1
        br bool %tmp.2, label %no_exit, label %loopexit

This is an improvement in register pressure, but probably doesn't happen that
often.

The more important fix will be to get rid of the redundant add.

llvm-svn: 13101
2004-04-21 22:22:01 +00:00
Chris Lattner c1aa21f5a7 Fix PR325
llvm-svn: 13081
2004-04-20 20:26:03 +00:00
Chris Lattner f48f777d4c Initial checkin of a simple loop unswitching pass. It still needs work,
but it's a start, and seems to do its basic job.

llvm-svn: 13068
2004-04-19 18:07:02 +00:00
Chris Lattner bc02177fdc Add #include
llvm-svn: 13057
2004-04-19 03:01:23 +00:00
Chris Lattner fc44a25bcb Move isLoopInvariant to the Loop class
llvm-svn: 13051
2004-04-18 22:46:08 +00:00
Chris Lattner 827826320d Correct rewriting of exit blocks after my last patch
llvm-svn: 13048
2004-04-18 22:27:10 +00:00
Chris Lattner 35eaa55cfc Loop exit sets are no longer explicitly held, they are dynamically computed on demand.
llvm-svn: 13046
2004-04-18 22:15:13 +00:00
Chris Lattner d72c3eb54e Change the ExitBlocks list from being explicitly contained in the Loop
structure to being dynamically computed on demand.  This makes updating
loop information MUCH easier.

llvm-svn: 13045
2004-04-18 22:14:10 +00:00
Chris Lattner d15250240c Reduce the unrolling limit
llvm-svn: 13040
2004-04-18 18:06:14 +00:00
Chris Lattner 30ae18155d If the preheader of the loop was the entry block of the function, make sure
that the exit block of the loop becomes the new entry block of the function.

This was causing a verifier assertion on 252.eon.

llvm-svn: 13039
2004-04-18 17:38:42 +00:00
Chris Lattner 230bcb6b35 Be much more careful about how we update instructions outside of the loop
using instructions inside of the loop.  This should fix the MishaTest failure
from last night.

llvm-svn: 13038
2004-04-18 17:32:39 +00:00
Chris Lattner 4d52e1e401 After unrolling our single basic block loop, fold it into the preheader and exit
block.  The primary motivation for doing this is that we can now unroll nested loops.

This makes a pretty big difference in some cases.  For example, in 183.equake,
we are now beating the native compiler with the CBE, and we are a lot closer
with LLC.

I'm now going to play around a bit with the unroll factor and see what effect
it really has.

llvm-svn: 13034
2004-04-18 06:27:43 +00:00
Chris Lattner f2cc841619 Fix a bug: this does not preserve the CFG!
While we're at it, add support for updating loop information correctly.

llvm-svn: 13033
2004-04-18 05:38:37 +00:00
Chris Lattner 946b255977 Initial checkin of a simple loop unroller. This pass is extremely basic and
limited.  Even in its extremely simple state (it can only *fully* unroll single
basic block loops that execute a constant number of times), it already helps improve
performance a LOT on some benchmarks, particularly with the native code generators.

llvm-svn: 13028
2004-04-18 05:20:17 +00:00
Chris Lattner c14da9600b Make the tail duplication threshold accessible from the command line instead of hardcoded
llvm-svn: 13025
2004-04-18 00:52:43 +00:00
Chris Lattner a814080025 If the loop executes a constant number of times, try a bit harder to replace
exit values.

llvm-svn: 13018
2004-04-17 18:44:09 +00:00
Chris Lattner 1e9ac1a45e Fix a HUGE pessimization on X86. The indvars pass was taking this
(familiar) function:

int _strlen(const char *str) {
    int len = 0;
    while (*str++) len++;
    return len;
}

And transforming it to use a ulong induction variable, because the type of
the pointer index was left as a constant long.  This is obviously very bad.

The fix is to shrink long constants in getelementptr instructions to intptr_t,
making the indvars pass insert a uint induction variable, which is much more
efficient.

Here's the before code for this function:

int %_strlen(sbyte* %str) {
entry:
        %tmp.13 = load sbyte* %str              ; <sbyte> [#uses=1]
        %tmp.24 = seteq sbyte %tmp.13, 0                ; <bool> [#uses=1]
        br bool %tmp.24, label %loopexit, label %no_exit

no_exit:                ; preds = %entry, %no_exit
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=2]
***     %indvar = phi ulong [ %indvar.next, %no_exit ], [ 0, %entry ]           ; <ulong> [#uses=2]
        %indvar1 = cast ulong %indvar to uint           ; <uint> [#uses=1]
        %inc.02.sum = add uint %indvar1, 1              ; <uint> [#uses=1]
        %inc.0.0 = getelementptr sbyte* %str, uint %inc.02.sum          ; <sbyte*> [#uses=1]
        %tmp.1 = load sbyte* %inc.0.0           ; <sbyte> [#uses=1]
        %tmp.2 = seteq sbyte %tmp.1, 0          ; <bool> [#uses=1]
        %indvar.next = add ulong %indvar, 1             ; <ulong> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %loopexit.loopexit, label %no_exit

loopexit.loopexit:              ; preds = %no_exit
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %inc.1 = add int %indvar, 1             ; <int> [#uses=1]
        ret int %inc.1

loopexit:               ; preds = %entry
        ret int 0
}


Here's the after code:

int %_strlen(sbyte* %str) {
entry:
        %inc.02 = getelementptr sbyte* %str, uint 1             ; <sbyte*> [#uses=1]
        %tmp.13 = load sbyte* %str              ; <sbyte> [#uses=1]
        %tmp.24 = seteq sbyte %tmp.13, 0                ; <bool> [#uses=1]
        br bool %tmp.24, label %loopexit, label %no_exit

no_exit:                ; preds = %entry, %no_exit
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=3]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %inc.0.0 = getelementptr sbyte* %inc.02, uint %indvar           ; <sbyte*> [#uses=1]
        %inc.1 = add int %indvar, 1             ; <int> [#uses=1]
        %tmp.1 = load sbyte* %inc.0.0           ; <sbyte> [#uses=1]
        %tmp.2 = seteq sbyte %tmp.1, 0          ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %loopexit, label %no_exit

loopexit:               ; preds = %entry, %no_exit
        %len.0.1 = phi int [ 0, %entry ], [ %inc.1, %no_exit ]          ; <int> [#uses=1]
        ret int %len.0.1
}

llvm-svn: 13016
2004-04-17 18:16:10 +00:00
Chris Lattner 885a6eb74d Even if there are not any induction variables in the loop, if we can compute
the trip count for the loop, insert one so that we can canonicalize the exit
condition.

llvm-svn: 13015
2004-04-17 18:08:33 +00:00
Chris Lattner 284d3b0311 Fix some really nasty dominance bugs that were exposed by my patch to
make the verifier more strict.  This fixes building zlib

llvm-svn: 13002
2004-04-16 18:08:07 +00:00
Chris Lattner 9e9b2b7474 Fix some of the strange CBE-only failures that happened last night.
llvm-svn: 12980
2004-04-16 06:03:17 +00:00
Chris Lattner d7a559e353 Fix a bug in the previous checkin: if the exit block is not the same as
the back-edge block, we must check the preincremented value.

llvm-svn: 12968
2004-04-15 20:26:22 +00:00
Chris Lattner 0cec5cb92c Change the canonical induction variable that we insert.
Instead of producing code like this:

Loop:
  X = phi 0, X2
  ...

  X2 = X + 1
  if (X != N-1) goto Loop

We now generate code that looks like this:

Loop:
  X = phi 0, X2
  ...

  X2 = X + 1
  if (X2 != N) goto Loop

This has two big advantages:
  1. The trip count of the loop is now explicit in the code, allowing
     the direct implementation of Loop::getTripCount()
  2. This reduces register pressure in the loop, and allows X and X2 to be
     put into the same register.

As a consequence of the second point, the code we generate for loops went
from:

.LBB2:  # no_exit.1
	...
        mov %EDI, %ESI
        inc %EDI
        cmp %ESI, 2
        mov %ESI, %EDI
        jne .LBB2 # PC rel: no_exit.1

To:

.LBB2:  # no_exit.1
	...
        inc %ESI
        cmp %ESI, 3
        jne .LBB2 # PC rel: no_exit.1

... which has two fewer moves, and uses one less register.

llvm-svn: 12961
2004-04-15 15:21:43 +00:00
Chris Lattner 6679e46b59 Add a trivial instcombine: load null -> null
llvm-svn: 12940
2004-04-14 03:28:36 +00:00
Chris Lattner ff9362a8da Add SCCP support for constant folding calls, implementing:
test/Regression/Transforms/SCCP/calltest.ll

llvm-svn: 12921
2004-04-13 19:43:54 +00:00
Chris Lattner d0dc6d5295 Constant propagation should remove the dead instructions
llvm-svn: 12917
2004-04-13 19:28:20 +00:00
Chris Lattner 89e959bb1f Fix LoopSimplify/2004-04-13-LoopSimplifyUpdateDomFrontier.ll
LoopSimplify was not updating dominator frontiers correctly in some cases.

llvm-svn: 12890
2004-04-13 16:23:25 +00:00
Chris Lattner a6e22814ab Refactor code a bit to make it simpler and eliminate the goto
llvm-svn: 12888
2004-04-13 15:21:18 +00:00
Chris Lattner 8417052938 This patch addresses PR35: Loop simplify should reconstruct nested loops.
This is fairly straightforward, but was a real nightmare to get just
perfect.  aarg.  :)

llvm-svn: 12884
2004-04-13 05:05:33 +00:00
Chris Lattner 494a685449 Add support for removing invoke instructions
llvm-svn: 12858
2004-04-12 05:15:13 +00:00
Chris Lattner 24cf0200c7 Fix a bug in my select transformation
llvm-svn: 12826
2004-04-11 01:39:19 +00:00
Chris Lattner f16fe7206c Update the value numbering interface.
llvm-svn: 12824
2004-04-10 22:33:34 +00:00
Chris Lattner 623fba1107 Implement InstCombine/select.ll:test13*
llvm-svn: 12821
2004-04-10 22:21:27 +00:00
Chris Lattner cf4a996cba Implement InstCombine/add.ll:test20
Canonicalize add of sign bit constant into a xor
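
A quick C check (illustrative only, not part of the commit): on 32-bit values,
adding the sign-bit constant just flips the top bit because the carry falls off,
which is exactly what the xor does.  Unsigned arithmetic is used so the
wrap-around is well defined in C:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t samples[] = { 0u, 1u, 0x7FFFFFFFu, 0x80000000u, 0xFFFFFFFFu };
    for (int i = 0; i < 5; ++i) {
        uint32_t x = samples[i];
        if (x + 0x80000000u != (x ^ 0x80000000u))
            printf("mismatch at x = 0x%08x\n", (unsigned)x);
    }
    return 0;
}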

llvm-svn: 12819
2004-04-10 22:01:55 +00:00
Chris Lattner 69c4900512 Rewrite the GCSE pass to be *substantially* simpler, a bit more efficient,
and a bit more powerful

llvm-svn: 12817
2004-04-10 21:11:11 +00:00
Chris Lattner f9d9665138 Fix spurious warning in release mode
llvm-svn: 12816
2004-04-10 19:15:56 +00:00
Chris Lattner d95ef7eff0 Simplify code a bit, and fix a bug that was breaking perlbmk
llvm-svn: 12814
2004-04-10 18:06:21 +00:00
Chris Lattner 7ebfe61dc1 Fix a bug in my checkin last night that was breaking programs using invoke.
llvm-svn: 12813
2004-04-10 16:53:29 +00:00
Chris Lattner 5093213c40 Fix previous patch
llvm-svn: 12811
2004-04-10 07:27:48 +00:00
Chris Lattner 6149ac8991 Correctly update counters
llvm-svn: 12810
2004-04-10 07:02:02 +00:00
Chris Lattner cfa1adcdb8 Simplify code a bit, and use alias analysis to allow us to delete unused
call and invoke instructions that are known to not write to memory.

llvm-svn: 12807
2004-04-10 06:53:09 +00:00
Chris Lattner 56e4d3d8ad Implement select.ll:test12*
This transforms code like this:

   %C = or %A, %B
   %D = select %cond, %C, %A
into:
   %C = select %cond, %B, 0
   %D = or %A, %C

Since B is often a constant, the select can often be eliminated.  In any case,
this reduces the usage count of A, allowing subsequent optimizations to happen.

This xform applies when the operator is any of:
  add, sub, mul, or, xor, and, shl, shr
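
A brute-force C check of the "or" case shown above (illustrative only, not part
of the commit; for operators like mul or and, the folded constant would
presumably be that operator's identity value rather than 0):

#include <stdio.h>

int main(void) {
    for (int cond = 0; cond <= 1; ++cond)
        for (int a = 0; a < 8; ++a)
            for (int b = 0; b < 8; ++b) {
                int before = cond ? (a | b) : a;   /* D = select(cond, A|B, A)   */
                int after  = a | (cond ? b : 0);   /* D = A | select(cond, B, 0) */
                if (before != after)
                    printf("mismatch: cond=%d a=%d b=%d\n", cond, a, b);
            }
    return 0;
}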

llvm-svn: 12800
2004-04-09 23:46:01 +00:00
Chris Lattner 183b336a54 Fold binary operators with a constant operand into select instructions
that have a constant operand.  This implements
add.ll:test19, shift.ll:test15*, and others that are not tested
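
A tiny C illustration of the fold (the constants are made up for the example,
not taken from the tests):

#include <stdio.h>

int main(void) {
    for (int cond = 0; cond <= 1; ++cond) {
        int before = (cond ? 7 : 3) + 5;   /* binop applied to the select     */
        int after  = cond ? 12 : 8;        /* constants folded into both arms */
        if (before != after)
            printf("mismatch at cond=%d\n", cond);
    }
    return 0;
}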

llvm-svn: 12794
2004-04-09 19:05:30 +00:00
Chris Lattner cf7baf3519 Implement select.ll:test11
llvm-svn: 12793
2004-04-09 18:19:44 +00:00
Chris Lattner e228ee5870 Implement InstCombine/cast-propagate.ll
llvm-svn: 12784
2004-04-08 20:39:49 +00:00
Chris Lattner 1c631e813d Implement InstCombine/select.ll:test[7-10]
llvm-svn: 12769
2004-04-08 04:43:23 +00:00
Chris Lattner 2b2412d0c8 Implement test/Regression/Transforms/InstCombine/getelementptr_index.ll
llvm-svn: 12762
2004-04-07 18:38:20 +00:00
Chris Lattner 4d1fcf1dcd Fix a bug in yesterday's checkins which broke siod. siod is a great testcase! :)
llvm-svn: 12659
2004-04-05 16:02:41 +00:00
Chris Lattner 8953b90aaa Fix InstCombine/2004-04-04-InstCombineReplaceAllUsesWith.ll
llvm-svn: 12658
2004-04-05 02:10:19 +00:00
Chris Lattner 69193f93b6 Support getelementptr instructions which use uint's to index into structure
types and can have arbitrary 32- and 64-bit integer types indexing into
sequential types.

llvm-svn: 12653
2004-04-05 01:30:19 +00:00
Chris Lattner e61b67d7d5 Rewrite the indvars pass to use the ScalarEvolution analysis.
This also implements some new features for the indvars pass, including
linear function test replacement, exit value substitution, and it works with
a much more general class of induction variables and loops.

llvm-svn: 12620
2004-04-02 20:24:31 +00:00
Chris Lattner 59fdf74968 Remove some assertions that are now bogus with the last patch I put in
llvm-svn: 12595
2004-04-01 19:21:46 +00:00
Chris Lattner 146d0df5e4 Fix PR306: Loop simplify incorrectly updates dominator information
Testcase: LoopSimplify/2004-04-01-IncorrectDomUpdate.ll

llvm-svn: 12592
2004-04-01 19:06:07 +00:00
Chris Lattner 61fab1409d Add warning
llvm-svn: 12573
2004-03-31 22:00:30 +00:00
Chris Lattner 533bc49775 Implement select.ll:test[3-6]
llvm-svn: 12544
2004-03-30 19:37:13 +00:00
Chris Lattner 059f390257 Add a simple select instruction lowering pass
llvm-svn: 12540
2004-03-30 18:41:10 +00:00
Chris Lattner 56b5051428 X % -1 == X % 1 == 0
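
A small C check (illustrative only; INT_MIN % -1 is skipped because that
division overflows in C):

#include <stdio.h>

int main(void) {
    for (int x = -100; x <= 100; ++x)
        if (x % 1 != 0 || x % -1 != 0)
            printf("nonzero remainder at x = %d\n", x);
    return 0;
}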
llvm-svn: 12520
2004-03-26 16:11:24 +00:00
Chris Lattner 57c67b06e9 Two changes:
#1 is to unconditionally strip constantpointerrefs out of
instruction operands where they are absolutely pointless and inhibit
optimization.  GRRR!

#2 is to implement InstCombine/getelementptr_const.ll

llvm-svn: 12519
2004-03-25 22:59:29 +00:00
Chris Lattner abb77c9959 Teach the optimizer to delete zero sized alloca's (but not mallocs!)
llvm-svn: 12507
2004-03-19 06:08:10 +00:00
Chris Lattner 684fa5ac64 Be more accurate
llvm-svn: 12464
2004-03-17 01:59:27 +00:00
Chris Lattner a3783a577e Fix bug in previous checkin
llvm-svn: 12458
2004-03-16 23:36:49 +00:00
Chris Lattner 95057f6ad1 Okay, so there is no reasonable way for tail duplication to update SSA form,
as it is making effectively arbitrary modifications to the CFG and we don't
have domset/domfrontier implementations that can handle the dynamic updates.
Instead of having a bunch of code that doesn't actually work in practice,
just demote any potentially tricky values to the stack (causing the problem
to go away entirely).  Later invocations of mem2reg will rebuild SSA for us.

This fixes all of the major performance regressions with tail duplication
from LLVM 1.1.  For example, this loop:

---
int popcount(int x) {
  int result = 0;
  while (x != 0) {
    result = result + (x & 0x1);
    x = x >> 1;
  }
  return result;
}
---
Used to be compiled into:

int %popcount(int %X) {
entry:
	br label %loopentry

loopentry:		; preds = %entry, %no_exit
	%x.0 = phi int [ %X, %entry ], [ %tmp.9, %no_exit ]		; <int> [#uses=3]
	%result.1.0 = phi int [ 0, %entry ], [ %tmp.6, %no_exit ]		; <int> [#uses=2]
	%tmp.1 = seteq int %x.0, 0		; <bool> [#uses=1]
	br bool %tmp.1, label %loopexit, label %no_exit

no_exit:		; preds = %loopentry
	%tmp.4 = and int %x.0, 1		; <int> [#uses=1]
	%tmp.6 = add int %tmp.4, %result.1.0		; <int> [#uses=1]
	%tmp.9 = shr int %x.0, ubyte 1		; <int> [#uses=1]
	br label %loopentry

loopexit:		; preds = %loopentry
	ret int %result.1.0
}

And is now compiled into:

int %popcount(int %X) {
entry:
        br label %no_exit

no_exit:                ; preds = %entry, %no_exit
        %x.0.0 = phi int [ %X, %entry ], [ %tmp.9, %no_exit ]          ; <int> [#uses=2]
        %result.1.0.0 = phi int [ 0, %entry ], [ %tmp.6, %no_exit ]             ; <int> [#uses=1]
        %tmp.4 = and int %x.0.0, 1              ; <int> [#uses=1]
        %tmp.6 = add int %tmp.4, %result.1.0.0          ; <int> [#uses=2]
        %tmp.9 = shr int %x.0.0, ubyte 1                ; <int> [#uses=2]
        %tmp.1 = seteq int %tmp.9, 0            ; <bool> [#uses=1]
        br bool %tmp.1, label %loopexit, label %no_exit

loopexit:               ; preds = %no_exit
        ret int %tmp.6
}

llvm-svn: 12457
2004-03-16 23:29:09 +00:00
Chris Lattner 7a7b114871 Do not try to optimize PHI nodes with incredibly high degree. This reduces SCCP
time from 615s to 1.49s on a large testcase that has a gigantic switch statement
that all of the blocks in the function branch to (an interpreter).

llvm-svn: 12442
2004-03-16 19:49:59 +00:00
Chris Lattner a64923ad26 Do not copy gigantic switch instructions
llvm-svn: 12441
2004-03-16 19:45:22 +00:00
Chris Lattner db5b8f4d6b Fix a regression from this patch:
http://mail.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20040308/013095.html

Basically, this patch only updated the immediate dominatees of the header node
to tell them that the preheader also dominated them.  In practice, ALL
dominatees of the header node are also dominated by the preheader.

This fixes: LoopSimplify/2004-03-15-IncorrectDomUpdate.
and PR293

llvm-svn: 12434
2004-03-16 06:00:15 +00:00
Chris Lattner cd83282df1 Add counters for the number of calls eliminated
llvm-svn: 12420
2004-03-15 05:46:59 +00:00
Chris Lattner 20cda2645e Implement LICM of calls in simple cases. This is sufficient to move around
sin/cos/strlen calls and stuff.  This implements:
  LICM/call_sink_pure_function.ll
  LICM/call_sink_const_function.ll

llvm-svn: 12415
2004-03-15 04:11:30 +00:00
Chris Lattner b68659552a Do not create empty basic blocks when the lowerswitch pass expects blocks to
be non-empty!  This fixes LowerSwitch/2004-03-13-SwitchIsDefaultCrash.ll

llvm-svn: 12384
2004-03-14 04:14:31 +00:00