Commit Graph

652 Commits

Author SHA1 Message Date
Chris Lattner 523d3e6674 Fix one of the major things that are causing the C Backend to loop infinitely
llvm-svn: 13872
2004-05-28 05:02:13 +00:00
Chris Lattner ed79d8af53 Fix InstCombine/load.ll & PR347.
This code hadn't been updated after the "structs with more than 256 elements"
related changes to the GEP instruction.  Also it was not handling the
ConstantAggregateZero class.

Now it does!

llvm-svn: 13834
2004-05-27 17:30:27 +00:00
Reid Spencer 297d7fe7e6 Remove unused header file.
llvm-svn: 13750
2004-05-25 08:51:36 +00:00
Reid Spencer 1cc31f264f Make this pass simply invoke SymbolTable::strip().
llvm-svn: 13749
2004-05-25 08:51:25 +00:00
Chris Lattner e1e10e1883 Implement InstCombine:shift.ll:test16, which turns (X >> C1) & C2 != C3
into (X & (C2 << C1)) != (C3 << C1), where the shift may be either left or
right and the compare may be of any kind.

This triggers 1546 times in 176.gcc alone, as it is a common pattern that
occurs for bitfield accesses.

llvm-svn: 13740
2004-05-25 06:32:08 +00:00
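
A small brute-force check of the identity behind this transform, written as standalone C++ rather than LLVM code, with arbitrarily chosen constants C1 = 3, C2 = 0x5, C3 = 0x4 and a logical right shift:

#include <cassert>

int main() {
  const unsigned C1 = 3;          // shift amount
  const unsigned C2 = 0x5, C3 = 0x4;
  for (unsigned X = 0; X < 256; ++X) {
    bool before = ((X >> C1) & C2) != C3;          // (X >> C1) & C2 != C3
    bool after  = (X & (C2 << C1)) != (C3 << C1);  // folded form
    assert(before == after);                       // identical for every X
  }
  return 0;
}
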
Chris Lattner 03841659a4 Implement instcombine/cast.ll:test16:
Canonicalize cast X to bool into a setne instruction

llvm-svn: 13736
2004-05-25 04:29:21 +00:00
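
A tiny standalone C++ check (not LLVM code) that the canonicalization above preserves meaning: converting an integer to bool gives the same answer as testing it against zero, which is exactly what the setne form spells out.

#include <cassert>

int main() {
  for (int x = -5; x <= 5; ++x)
    assert(static_cast<bool>(x) == (x != 0));  // cast-to-bool is "x != 0"
  return 0;
}
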
Chris Lattner 99173879ad Spelling people's names right is kinda important
llvm-svn: 13702
2004-05-23 21:27:29 +00:00
Chris Lattner 289ba2ac4d Adjust to the changes in the AliasSetTracker interface
llvm-svn: 13690
2004-05-23 21:20:19 +00:00
Chris Lattner e67dbc2ae2 Add support for replacement of formal arguments with simpler expressions.
llvm-svn: 13689
2004-05-23 21:19:55 +00:00
Chris Lattner 099c8cfe90 Implement the -lowergc pass which is used by code generators (like the CBE)
that do not have builtin support for garbage collection.

llvm-svn: 13688
2004-05-23 21:19:22 +00:00
Chris Lattner 0026512bac This was not meant to be committed
llvm-svn: 13565
2004-05-13 20:56:34 +00:00
Chris Lattner c12c945cc4 Fix a nasty bug that caused us to unroll EXTREMELY large loops due to overflow
in the size calculation.

This is not something you want to see:
Loop Unroll: F[main] Loop %no_exit Loop Size = 2 Trip Count = 2147483648 - UNROLLING!

The problem was that 2*2147483648 == 0.

Now we get:
Loop Unroll: F[main] Loop %no_exit Loop Size = 2 Trip Count = 2147483648 - TOO LARGE: 4294967296>100

Thanks to the anonymous person playing with the demo page who repeatedly
caused zion to go into swapping land.  That's one way to ensure you'll get
a quick bugfix.  :)

Testcase here: Transforms/LoopUnroll/2004-05-13-DontUnrollTooMuch.ll

llvm-svn: 13564
2004-05-13 20:43:31 +00:00
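
A minimal standalone C++ illustration of the overflow described above, not the unroller's actual code: in 32-bit unsigned arithmetic 2 * 2147483648 wraps to 0, so the estimated unrolled size looked tiny; widening the multiply to 64 bits keeps the true value and lets the size check reject the loop.

#include <cstdint>
#include <iostream>

int main() {
  uint32_t LoopSize = 2, TripCount = 2147483648u;

  uint32_t wrapped = LoopSize * TripCount;           // wraps to 0
  uint64_t widened = (uint64_t)LoopSize * TripCount; // 4294967296

  std::cout << "32-bit size estimate: " << wrapped << "\n";
  std::cout << "64-bit size estimate: " << widened
            << (widened > 100 ? "  -> TOO LARGE, don't unroll\n"
                              : "  -> UNROLLING\n");
  return 0;
}
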
Chris Lattner 8ec5f88c79 Fix a stupid bug in my checkin from yesterday
llvm-svn: 13429
2004-05-08 22:41:42 +00:00
Chris Lattner 5f667a6f58 Implement folding of GEPs like:
%tmp.0 = getelementptr [50 x sbyte]* %ar, uint 0, int 5         ; <sbyte*> [#uses=2]
        %tmp.7 = getelementptr sbyte* %tmp.0, int 8             ; <sbyte*> [#uses=1]

together.  This patch actually allows us to simplify and generalize the code.

llvm-svn: 13415
2004-05-07 22:09:22 +00:00
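
A small standalone C++ analogy for the folding above (ordinary pointer arithmetic rather than LLVM IR): indexing 5 bytes into the array and then stepping 8 more bytes is the same as a single index of 13, so the two address computations can be merged into one.

#include <cassert>

int main() {
  char ar[50] = {};
  char *tmp0 = &ar[0] + 5;  // like the first getelementptr (index 5)
  char *tmp7 = tmp0 + 8;    // like the second getelementptr (offset 8)
  assert(tmp7 == &ar[13]);  // the folded form: a single index of 5 + 8 = 13
  return 0;
}
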
Chris Lattner d9e5813821 Fix PR336: The instcombine pass asserts when visiting a load instruction
llvm-svn: 13400
2004-05-07 15:35:56 +00:00
Chris Lattner 9490849028 Do not mark instructions in unreachable sections of the function as live.
This fixes PR332 and ADCE/2004-05-04-UnreachableBlock.llx

llvm-svn: 13349
2004-05-04 17:00:46 +00:00
Chris Lattner dd1a86d858 Minor efficiency tweak, suggested by Patrick Meredith
llvm-svn: 13341
2004-05-04 15:19:33 +00:00
Chris Lattner 63d75af920 Make sure to reprocess instructions used by deleted instructions to avoid
missing opportunities for combination.

llvm-svn: 13309
2004-05-01 23:27:23 +00:00
Chris Lattner b643a9e675 Make sure the instruction combiner doesn't lose track of instructions
when replacing them and miss the opportunity to do simplifications

llvm-svn: 13308
2004-05-01 23:19:52 +00:00
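
The two commits above are about the same worklist discipline. A minimal sketch of that idea in standalone C++, with hypothetical Inst and Worklist types rather than LLVM's real classes: before an instruction is erased its operands go back on the worklist (they may have just lost their last use), and when a value is replaced its users go back on the worklist (they now see a simpler operand).

#include <set>
#include <vector>

struct Inst {
  std::vector<Inst *> Operands; // instructions this one reads
  std::vector<Inst *> Users;    // instructions that read this one
};

struct Worklist {
  std::vector<Inst *> Stack;
  std::set<Inst *> InList; // avoid pushing the same instruction twice

  void push(Inst *I) {
    if (InList.insert(I).second)
      Stack.push_back(I);
  }

  // Call just before erasing I: its operands may now be combinable or dead.
  void aboutToErase(Inst *I) {
    for (Inst *Op : I->Operands)
      push(Op);
  }

  // Call after replacing Old everywhere: its former users should be revisited.
  void replacedAllUsesOf(Inst *Old) {
    for (Inst *U : Old->Users)
      push(U);
  }
};
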
Chris Lattner 652064e3b8 Fix a major pessimization in the instcombiner. If an allocation instruction
is only used by a cast, and the casted type is the same size as the original
allocation, it would eliminate the cast by folding it into the allocation.

Unfortunately, it was placing the new allocation instruction right before
the cast, which could pull (for example) alloca instructions into the body
of a function.  This turns statically allocatable allocas into expensive
dynamically allocated allocas, which is bad bad bad.

This fixes the problem by placing the new allocation instruction at the same
place the old one was, duh. :)

llvm-svn: 13289
2004-04-30 04:37:52 +00:00
Chris Lattner 2d3a7a6ff0 Changes to fix up the inst_iterator to pass the Boost iterator checks. This
patch was graciously contributed by Vladimir Prus.

llvm-svn: 13185
2004-04-27 15:13:33 +00:00
Chris Lattner e20c334e65 Instcombine X/-1 --> 0-X
llvm-svn: 13172
2004-04-26 14:01:59 +00:00
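
A quick standalone C++ check of the new instcombine (again, not the pass's own code): for signed integers, dividing by -1 equals negating. INT_MIN is left out because both INT_MIN / -1 and its negation overflow, so that already-problematic case is unchanged by the transform.

#include <cassert>
#include <climits>

int main() {
  for (int x = -1000; x <= 1000; ++x)
    assert(x / -1 == 0 - x);           // X/-1 --> 0-X
  assert(INT_MAX / -1 == 0 - INT_MAX);
  return 0;
}
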
Chris Lattner 83cd87efcd Move the scev expansion code into this pass, where it belongs. There is
still room for cleanup, but at least the code modification is out of the
analysis now.

llvm-svn: 13135
2004-04-23 21:29:48 +00:00
Chris Lattner c27302c79f Disable a previous patch that was causing indvars to loop infinitely :(
llvm-svn: 13108
2004-04-22 15:12:36 +00:00
Chris Lattner c1a682dda0 Fix an extremely serious thinko I made in revision 1.60 of this file.
llvm-svn: 13106
2004-04-22 14:59:40 +00:00
Chris Lattner af532f27e7 Implement a todo, rewriting all possible scev expressions inside of the
loop.  This eliminates the extra add from the previous case, but it's
not clear that this will be a performance win overall.  Tomorrow's test
results will tell. :)

llvm-svn: 13103
2004-04-21 23:36:08 +00:00
Chris Lattner fb9a299f68 This code really wants to iterate over the OPERANDS of an instruction, not
over its USES.  If it's dead it doesn't have any uses!  :)

Thanks to the fabulous and mysterious Bill Wendling for pointing this out.  :)

llvm-svn: 13102
2004-04-21 22:29:37 +00:00
Chris Lattner dc7cc35088 Implement a fixme.  This helps loops that have induction variables of different
types in them.  Instead of creating an induction variable for all types, it
creates a single induction variable and casts it to the other sizes.  This generates
this code:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=4]
***     %j.0.0 = cast uint %indvar to short             ; <short> [#uses=1]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %tmp.7 = getelementptr short* %P, uint %indvar          ; <short*> [#uses=1]
        store short %j.0.0, short* %tmp.7
        %inc.0 = add int %indvar, 1             ; <int> [#uses=2]
        %tmp.2 = setlt int %inc.0, %N           ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %no_exit, label %loopexit

instead of:

no_exit:                ; preds = %entry, %no_exit
        %indvar = phi ushort [ %indvar.next, %no_exit ], [ 0, %entry ]          ; <ushort> [#uses=2]
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=3]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %indvar = cast ushort %indvar to short          ; <short> [#uses=1]
        %tmp.7 = getelementptr short* %P, uint %indvar          ; <short*> [#uses=1]
        store short %indvar, short* %tmp.7
        %inc.0 = add int %indvar, 1             ; <int> [#uses=2]
        %tmp.2 = setlt int %inc.0, %N           ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1
***     %indvar.next = add ushort %indvar, 1
        br bool %tmp.2, label %no_exit, label %loopexit

This is an improvement in register pressure, but probably doesn't happen that
often.

The more important fix will be to get rid of the redundant add.

llvm-svn: 13101
2004-04-21 22:22:01 +00:00
Chris Lattner c1aa21f5a7 Fix PR325
llvm-svn: 13081
2004-04-20 20:26:03 +00:00
Chris Lattner f48f777d4c Initial checkin of a simple loop unswitching pass. It still needs work,
but it's a start, and seems to do its basic job.

llvm-svn: 13068
2004-04-19 18:07:02 +00:00
Chris Lattner bc02177fdc Add #include
llvm-svn: 13057
2004-04-19 03:01:23 +00:00
Chris Lattner fc44a25bcb Move isLoopInvariant to the Loop class
llvm-svn: 13051
2004-04-18 22:46:08 +00:00
Chris Lattner 827826320d Correct rewriting of exit blocks after my last patch
llvm-svn: 13048
2004-04-18 22:27:10 +00:00
Chris Lattner 35eaa55cfc Loop exit sets are no longer explicitly held; they are dynamically computed on demand.
llvm-svn: 13046
2004-04-18 22:15:13 +00:00
Chris Lattner d72c3eb54e Change the ExitBlocks list from being explicitly contained in the Loop
structure to being dynamically computed on demand.  This makes updating
loop information MUCH easier.

llvm-svn: 13045
2004-04-18 22:14:10 +00:00
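
A minimal sketch of computing exit blocks on demand, in standalone C++ over hypothetical block IDs rather than LLVM's Loop class: an exit block is simply any successor of a block inside the loop that is itself outside the loop, so nothing needs to be stored or kept up to date as the loop is transformed.

#include <map>
#include <set>
#include <vector>

using Block = int;

// Assumes every block in LoopBlocks has an entry in Succs.
std::vector<Block> getExitBlocks(
    const std::set<Block> &LoopBlocks,
    const std::map<Block, std::vector<Block>> &Succs) {
  std::vector<Block> Exits;
  for (Block B : LoopBlocks)
    for (Block S : Succs.at(B))
      if (!LoopBlocks.count(S)) // successor leaves the loop
        Exits.push_back(S);
  return Exits;
}
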
Chris Lattner d15250240c Reduce the unrolling limit
llvm-svn: 13040
2004-04-18 18:06:14 +00:00
Chris Lattner 30ae18155d If the preheader of the loop was the entry block of the function, make sure
that the exit block of the loop becomes the new entry block of the function.

This was causing a verifier assertion on 252.eon.

llvm-svn: 13039
2004-04-18 17:38:42 +00:00
Chris Lattner 230bcb6b35 Be much more careful about how we update instructions outside the loop that
use instructions inside the loop.  This should fix the MishaTest failure
from last night.

llvm-svn: 13038
2004-04-18 17:32:39 +00:00
Chris Lattner 4d52e1e401 After unrolling our single basic block loop, fold it into the preheader and exit
block.  The primary motivation for doing this is that we can now unroll nested loops.

This makes a pretty big difference in some cases.  For example, in 183.equake,
we are now beating the native compiler with the CBE, and we are a lot closer
with LLC.

I'm now going to play around a bit with the unroll factor and see what effect
it really has.

llvm-svn: 13034
2004-04-18 06:27:43 +00:00
Chris Lattner f2cc841619 Fix a bug: this does not preserve the CFG!
While we're at it, add support for updating loop information correctly.

llvm-svn: 13033
2004-04-18 05:38:37 +00:00
Chris Lattner 946b255977 Initial checkin of a simple loop unroller. This pass is extremely basic and
limited.  Even in its extremely simple state (it can only *fully* unroll single
basic block loops that execute a constant number of times), it already helps improve
performance a LOT on some benchmarks, particularly with the native code generators.

llvm-svn: 13028
2004-04-18 05:20:17 +00:00
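
A hand-written standalone C++ example of the one case this first unroller handles, shown at the source level rather than as LLVM IR: a single-block loop with a known constant trip count is replaced by that many copies of its body, eliminating the induction variable and the branch.

#include <cassert>

int sum4_loop(const int *a) {
  int s = 0;
  for (int i = 0; i < 4; ++i) // constant trip count of 4
    s += a[i];
  return s;
}

int sum4_unrolled(const int *a) {
  // fully unrolled: four copies of the body, no loop left
  return a[0] + a[1] + a[2] + a[3];
}

int main() {
  int a[4] = {1, 2, 3, 4};
  assert(sum4_loop(a) == sum4_unrolled(a));
  return 0;
}
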
Chris Lattner c14da9600b Make the tail duplication threshold accessible from the command line instead of being hardcoded
llvm-svn: 13025
2004-04-18 00:52:43 +00:00
Chris Lattner a814080025 If the loop executes a constant number of times, try a bit harder to replace
exit values.

llvm-svn: 13018
2004-04-17 18:44:09 +00:00
Chris Lattner 1e9ac1a45e Fix a HUGE pessimization on X86. The indvars pass was taking this
(familiar) function:

int _strlen(const char *str) {
    int len = 0;
    while (*str++) len++;
    return len;
}

And transforming it to use a ulong induction variable, because the type of
the pointer index was left as a constant long.  This is obviously very bad.

The fix is to shrink long constants in getelementptr instructions to intptr_t,
making the indvars pass insert a uint induction variable, which is much more
efficient.

Here's the before code for this function:

int %_strlen(sbyte* %str) {
entry:
        %tmp.13 = load sbyte* %str              ; <sbyte> [#uses=1]
        %tmp.24 = seteq sbyte %tmp.13, 0                ; <bool> [#uses=1]
        br bool %tmp.24, label %loopexit, label %no_exit

no_exit:                ; preds = %entry, %no_exit
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=2]
***     %indvar = phi ulong [ %indvar.next, %no_exit ], [ 0, %entry ]           ; <ulong> [#uses=2]
        %indvar1 = cast ulong %indvar to uint           ; <uint> [#uses=1]
        %inc.02.sum = add uint %indvar1, 1              ; <uint> [#uses=1]
        %inc.0.0 = getelementptr sbyte* %str, uint %inc.02.sum          ; <sbyte*> [#uses=1]
        %tmp.1 = load sbyte* %inc.0.0           ; <sbyte> [#uses=1]
        %tmp.2 = seteq sbyte %tmp.1, 0          ; <bool> [#uses=1]
        %indvar.next = add ulong %indvar, 1             ; <ulong> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %loopexit.loopexit, label %no_exit

loopexit.loopexit:              ; preds = %no_exit
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %inc.1 = add int %indvar, 1             ; <int> [#uses=1]
        ret int %inc.1

loopexit:               ; preds = %entry
        ret int 0
}


Here's the after code:

int %_strlen(sbyte* %str) {
entry:
        %inc.02 = getelementptr sbyte* %str, uint 1             ; <sbyte*> [#uses=1]
        %tmp.13 = load sbyte* %str              ; <sbyte> [#uses=1]
        %tmp.24 = seteq sbyte %tmp.13, 0                ; <bool> [#uses=1]
        br bool %tmp.24, label %loopexit, label %no_exit

no_exit:                ; preds = %entry, %no_exit
***     %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=3]
        %indvar = cast uint %indvar to int              ; <int> [#uses=1]
        %inc.0.0 = getelementptr sbyte* %inc.02, uint %indvar           ; <sbyte*> [#uses=1]
        %inc.1 = add int %indvar, 1             ; <int> [#uses=1]
        %tmp.1 = load sbyte* %inc.0.0           ; <sbyte> [#uses=1]
        %tmp.2 = seteq sbyte %tmp.1, 0          ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.2, label %loopexit, label %no_exit

loopexit:               ; preds = %entry, %no_exit
        %len.0.1 = phi int [ 0, %entry ], [ %inc.1, %no_exit ]          ; <int> [#uses=1]
        ret int %len.0.1
}

llvm-svn: 13016
2004-04-17 18:16:10 +00:00
Chris Lattner 885a6eb74d Even if there are no induction variables in the loop, insert one when we
can compute the loop's trip count, so that we can canonicalize the exit
condition.

llvm-svn: 13015
2004-04-17 18:08:33 +00:00
Chris Lattner 284d3b0311 Fix some really nasty dominance bugs that were exposed by my patch to
make the verifier more strict.  This fixes building zlib

llvm-svn: 13002
2004-04-16 18:08:07 +00:00
Chris Lattner 9e9b2b7474 Fix some of the strange CBE-only failures that happened last night.
llvm-svn: 12980
2004-04-16 06:03:17 +00:00
Chris Lattner d7a559e353 Fix a bug in the previous checkin: if the exit block is not the same as
the back-edge block, we must check the preincremented value.

llvm-svn: 12968
2004-04-15 20:26:22 +00:00
Chris Lattner 0cec5cb92c Change the canonical induction variable that we insert.
Instead of producing code like this:

Loop:
  X = phi 0, X2
  ...

  X2 = X + 1
  if (X != N-1) goto Loop

We now generate code that looks like this:

Loop:
  X = phi 0, X2
  ...

  X2 = X + 1
  if (X2 != N) goto Loop

This has two big advantages:
  1. The trip count of the loop is now explicit in the code, allowing
     the direct implementation of Loop::getTripCount()
  2. This reduces register pressure in the loop, and allows X and X2 to be
     put into the same register.

As a consequence of the second point, the code we generate for loops went
from:

.LBB2:  # no_exit.1
	...
        mov %EDI, %ESI
        inc %EDI
        cmp %ESI, 2
        mov %ESI, %EDI
        jne .LBB2 # PC rel: no_exit.1

To:

.LBB2:  # no_exit.1
	...
        inc %ESI
        cmp %ESI, 3
        jne .LBB2 # PC rel: no_exit.1

... which has two fewer moves, and uses one less register.

llvm-svn: 12961
2004-04-15 15:21:43 +00:00
Chris Lattner 6679e46b59 Add a trivial instcombine: load null -> null
llvm-svn: 12940
2004-04-14 03:28:36 +00:00