Commit Graph

8135 Commits

Author SHA1 Message Date
Reid Spencer c9e91ee004 Add the Linker library
llvm-svn: 17763
2004-11-14 21:54:41 +00:00
Misha Brukman 704576301f GhostLinkage not allowed in LLVM AsmWriter, either
llvm-svn: 17751
2004-11-14 21:04:34 +00:00
Misha Brukman 7f245d47c5 GhostLinkage should not reach asm printing stage
llvm-svn: 17750
2004-11-14 21:03:49 +00:00
Misha Brukman e225fa12ab Handle GhostLinkage (should not ever reach the assembly printing stage!)
llvm-svn: 17749
2004-11-14 21:03:30 +00:00
Misha Brukman b2e062c9d5 Mark an unmaterialized function as having GhostLinkage
llvm-svn: 17748
2004-11-14 21:02:55 +00:00
Chris Lattner 28eeb73f2f If a global is just loaded and restored, realize that it is not changing
value.  This allows us to turn more globals into constants and eliminate them.
This patch implements GlobalOpt/load-store-global.llx.

Note that this patch speeds up 255.vortex from:

Output/255.vortex.out-cbe.time:program 7.640000
Output/255.vortex.out-llc.time:program 9.810000

to:

Output/255.vortex.out-cbe.time:program 7.250000
Output/255.vortex.out-llc.time:program 9.490000

Which isn't bad at all!

llvm-svn: 17746
2004-11-14 20:50:30 +00:00
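For illustration only (not code from the patch), a minimal C++ sketch of the load-then-store-back pattern this catches; the global and its value are hypothetical:

    // G is internal to the module and the only store to it writes back the
    // value that was just loaded, so G never changes and can be treated as
    // the constant 42.
    static int G = 42;

    int touch() {
      int t = G;   // load
      G = t;       // store the same value right back
      return t;
    }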
Misha Brukman 8b8ba9fcf7 Fix build on Linux/PowerPC64 using SuSE GCC (#undef PPC)
llvm-svn: 17744
2004-11-14 20:34:01 +00:00
Reid Spencer b9e561e90c Moved to lib/Bytecode/Archive in preparation for re-write.
llvm-svn: 17742
2004-11-14 19:59:40 +00:00
Chris Lattner 46dd5a6304 This optimization makes MANY phi nodes that all have the same incoming value.
If this happens, detect it early instead of relying on instcombine to notice
it later.  This can be a big speedup, because PHI nodes can have many
incoming values.

llvm-svn: 17741
2004-11-14 19:29:34 +00:00
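A standalone sketch of the early check described above (hypothetical types, not the LLVM implementation): if every incoming value of a merge is identical, the whole PHI can be replaced by that one value.

    #include <vector>

    // Stand-in for a PHI node: one incoming value per predecessor block.
    struct Phi { std::vector<int> incoming; };

    // True when the PHI is redundant, i.e. replaceable by incoming[0].
    bool hasSingleIncomingValue(const Phi &phi) {
      for (int v : phi.incoming)
        if (v != phi.incoming[0])
          return false;
      return !phi.incoming.empty();
    }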
Chris Lattner 7515cabe2a Implement instcombine/phi.ll:test6 - pulling operations through PHI nodes.
This exposes subsequent optimization possibilities and reduces code size.
This triggers 1423 times in spec.

llvm-svn: 17740
2004-11-14 19:13:23 +00:00
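Roughly, the transform hoists an operation that is common to all of a PHI's incoming values out past the merge point. A hypothetical source-level analogue in C++ (phi.ll:test6 itself is not reproduced here):

    // Before: the "+ 1" is computed on both sides of the merge.
    int before(bool c, int a, int b) { return c ? (a + 1) : (b + 1); }

    // After pulling the operation through the merge: one addition, smaller code.
    int after(bool c, int a, int b) { return (c ? a : b) + 1; }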
Chris Lattner 15ff1e1885 Transform this:
  %X = alloca ...
  %Y = alloca ...
  X == Y

into false.  This allows us to simplify some stuff in eon (and probably
many other C++ programs) where operator= was checking for self assignment.
Folding this allows us to SROA several additional structs.

llvm-svn: 17735
2004-11-14 07:33:16 +00:00
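A hedged C++ sketch of the self-assignment pattern being folded; the type and caller are hypothetical:

    struct Vec3 {
      double x, y, z;
      Vec3 &operator=(const Vec3 &rhs) {
        if (this == &rhs)        // self-assignment guard
          return *this;
        x = rhs.x; y = rhs.y; z = rhs.z;
        return *this;
      }
    };

    // Once operator= is inlined here, "this == &rhs" compares two distinct
    // stack allocations; folding that to false removes the guard and lets
    // SROA break both structs into scalars.
    void copyLocals() {
      Vec3 a{1, 2, 3}, b{};
      b = a;
    }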
Chris Lattner 5a8b003a09 Remove note to self
llvm-svn: 17734
2004-11-14 06:57:47 +00:00
Brian Gaeke e13c960415 Fix problem with insertion point for ADJCALLSTACKDOWN.
llvm-svn: 17733
2004-11-14 06:32:08 +00:00
Brian Gaeke a281ebc490 Update lists of failing unit tests.
Exclude bigfib, so that we effectively exclude all C++ benchmarks.
Update to-do list: mention va_start.

llvm-svn: 17732
2004-11-14 06:32:07 +00:00
Chris Lattner af555adc15 If a function always returns a constant, replace all calls sites with that
constant value.  This makes the return value dead and allows for
simplification in the caller.

This implements IPConstantProp/return-constant.ll

This triggers several dozen times throughout SPEC.

llvm-svn: 17730
2004-11-14 06:10:11 +00:00
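A minimal sketch of what this enables (names hypothetical, not from return-constant.ll):

    static int answer() { return 42; }   // provably always returns 42

    int caller(int x) {
      // With the call's result replaced by the constant 42, this folds to
      // x + 42; the callee's return value is then dead and can be dropped.
      return x + answer();
    }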
Brian Gaeke 347a000be6 Fix NotTest - round up extraStack to the nearest doubleword, if it is
not zero.

llvm-svn: 17728
2004-11-14 05:19:00 +00:00
Chris Lattner fe3f4e6ebd Teach SROA how to promote an array index that is variable, if the dimension
of the array is just two.  This occurs 8 times in gcc, 6 times in crafty, and
12 times in 099.go.

This implements ScalarRepl/sroa_two.ll

llvm-svn: 17727
2004-11-14 05:00:19 +00:00
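A hypothetical sketch of the pattern (not sroa_two.ll itself): a two-element local array indexed by a variable can be rewritten as a select between the two elements, after which the array allocation can be scalarized away.

    int pick(int i) {              // i is expected to be 0 or 1
      int a[2];
      a[0] = 10;
      a[1] = 20;
      return a[i];                 // conceptually becomes: i == 0 ? 10 : 20
    }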
Brian Gaeke e90176e171 Update failing Benchmarks; point out that I'm skipping Shootout-C++.
llvm-svn: 17725
2004-11-14 04:43:12 +00:00
Chris Lattner 8881912d71 Rearrange some code, no functionality changes.
llvm-svn: 17724
2004-11-14 04:24:28 +00:00
Brian Gaeke 18b6015b11 Update expected UnitTests failures.
llvm-svn: 17723
2004-11-14 03:22:08 +00:00
Brian Gaeke e6b47514a3 Rewrite outgoing arg handling to handle more weird corner cases.
llvm-svn: 17722
2004-11-14 03:22:07 +00:00
Brian Gaeke 07097e12d5 Support UndefValue emission.
llvm-svn: 17721
2004-11-14 03:22:05 +00:00
Chris Lattner 9fa7f0ae0a Remove debugging code
llvm-svn: 17719
2004-11-13 23:32:53 +00:00
Chris Lattner 244031d306 Argument promotion transforms functions to unconditionally load their
argument pointers.  This is only valid to do if the function already
unconditionally loaded an argument or if the pointer passed in is known
to be valid.  Make sure to do the required checks.

This fixed ArgumentPromotion/control-flow.ll and the Burg program.

llvm-svn: 17718
2004-11-13 23:31:34 +00:00
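To see why the check matters, a hypothetical callee whose load of the pointer argument is conditional:

    // Promoting 'p' to a by-value argument would make every caller load *p up
    // front, even on the path where the original code never dereferences it
    // (p == nullptr), so promotion is only safe when the callee loads the
    // pointer unconditionally or the pointer is otherwise known to be valid.
    int readOrZero(const int *p) {
      if (!p)
        return 0;
      return *p;   // conditional load
    }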
Chris Lattner 56c4c99cca Don't print unneeded labels
llvm-svn: 17714
2004-11-13 23:27:11 +00:00
Chris Lattner 073f6ca344 Hack around stupidity in GCC, fixing Burg with the CBE and
CBackend/2004-11-13-FunctionPointerCast.llx

llvm-svn: 17710
2004-11-13 22:21:56 +00:00
Chris Lattner 049d33a717 shld is a very high latency operation. Instead of emitting it for shifts of
two or three, open code the equivalent operation which is faster on athlon
and P4 (by a substantial margin).

For example, instead of compiling this:

long long X2(long long Y) { return Y << 2; }

to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $2, %eax, %edx
        shll $2, %eax
        ret

Compile it to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $30, %edx
        leal (%edx,%ecx,4), %edx
        shll $2, %eax
        ret

Likewise, for << 3, compile to:

X3:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $29, %edx
        leal (%edx,%ecx,8), %edx
        shll $3, %eax
        ret

This matches icc, except that icc open codes the shifts as adds on the P4.

llvm-svn: 17707
2004-11-13 20:48:57 +00:00
Chris Lattner ef6bd92a8c Add missing check
llvm-svn: 17706
2004-11-13 20:04:38 +00:00
Chris Lattner 8d521bb16e Compile:
long long X3_2(long long Y) { return Y+Y; }
int X(int Y) { return Y+Y; }

into:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        addl %eax, %eax
        adcl %edx, %edx
        ret
X:
        movl 4(%esp), %eax
        addl %eax, %eax
        ret

instead of:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $1, %eax, %edx
        shll $1, %eax
        ret

X:
        movl 4(%esp), %eax
        shll $1, %eax
        ret

llvm-svn: 17705
2004-11-13 20:03:48 +00:00
Chris Lattner 8c3e7b92af Simplify handling of shifts to be the same as we do for adds. Add support
for (X * C1) + (X * C2) (where * can be mul or shl), allowing us to fold:

   Y+Y+Y+Y+Y+Y+Y+Y

into
         %tmp.8 = shl long %Y, ubyte 3           ; <long> [#uses=1]

instead of

        %tmp.4 = shl long %Y, ubyte 2           ; <long> [#uses=1]
        %tmp.12 = shl long %Y, ubyte 2          ; <long> [#uses=1]
        %tmp.8 = add long %tmp.4, %tmp.12               ; <long> [#uses=1]

This implements add.ll:test25

Also add support for (X*C1)-(X*C2) -> X*(C1-C2), implementing sub.ll:test18

llvm-svn: 17704
2004-11-13 19:50:12 +00:00
Chris Lattner 4efe20a103 Fold:
   (X + (X << C2)) --> X * ((1 << C2) + 1)
   ((X << C2) + X) --> X * ((1 << C2) + 1)

This means that we now canonicalize "Y+Y+Y" into:

        %tmp.2 = mul long %Y, 3         ; <long> [#uses=1]

instead of:

        %tmp.10 = shl long %Y, ubyte 1          ; <long> [#uses=1]
        %tmp.6 = add long %Y, %tmp.10               ; <long> [#uses=1]

llvm-svn: 17701
2004-11-13 19:31:40 +00:00
Chris Lattner 2858e17538 Lazily create the abort message, so only translation units that use unwind
will actually get it.

llvm-svn: 17700
2004-11-13 19:07:32 +00:00
Chris Lattner 9b0291b18d Fix: CodeExtractor/2004-11-12-InvokeExtract.ll
llvm-svn: 17699
2004-11-13 00:06:45 +00:00
Chris Lattner 5bcca6058a Fix a bug where the code extractor would get a bit confused handling invoke
instructions, setting DefBlock to a block it did not have dom info for.

llvm-svn: 17697
2004-11-12 23:50:44 +00:00
Chris Lattner 5c1d84c769 Simplify handling of constant initializers
llvm-svn: 17696
2004-11-12 22:42:57 +00:00
Reid Spencer a81f8197eb Makefile for lib/Linker
llvm-svn: 17695
2004-11-12 20:38:45 +00:00
Reid Spencer 361e513db0 This file originated in lib/VMCore/Linker.cpp but now lives in
lib/Linker/LinkModules.cpp

llvm-svn: 17694
2004-11-12 20:37:43 +00:00
Reid Spencer 1cfa8d60f8 This file originated in tools/gccld/Linker.cpp but now lives in
lib/Linker/LinkArchives.cpp

llvm-svn: 17693
2004-11-12 20:34:32 +00:00
Chris Lattner 9621dfab3f Actually, leave the check in. This prevents us from counting dead arguments
as IPCP opportunities.

llvm-svn: 17680
2004-11-11 07:47:54 +00:00
Chris Lattner 5fa696f8e4 Fix bug: IPConstantProp/deadarg.ll
llvm-svn: 17679
2004-11-11 07:46:29 +00:00
Chris Lattner c1d24cd859 Make IP Constant prop more aggressive about handling self recursive calls.
This implements IPConstantProp/recursion.ll

llvm-svn: 17666
2004-11-10 19:43:59 +00:00
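A hedged sketch of the shape of code this handles (recursion.ll is not reproduced here): the self-recursive call passes the argument straight back, so the constant supplied by outside callers still propagates.

    static int sum(int n, int c) {
      if (n == 0)
        return c;
      return c + sum(n - 1, c);    // recursive call re-passes the same 'c'
    }

    // Every external call passes c == 7; the recursive call does not change
    // that, so interprocedural constant prop can treat 'c' as 7 inside sum().
    int driver() { return sum(4, 7); }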
John Criswell 04570265a5 Correct the name of stosd for the AT&T syntax:
It's stosl (l for long == 32 bit).

llvm-svn: 17658
2004-11-10 04:48:15 +00:00
Chris Lattner 0d3773d8b1 Do not let dead constant expressions hanging off of functions prevent IPCP.
This allows elimination of a bunch of global pool descriptor args from
programs being pool allocated (and is also generally useful!)

llvm-svn: 17657
2004-11-09 20:47:30 +00:00
Reid Spencer 202eaeb272 Fix isBytecodeFile to correctly recognize compressed bytecode too.
llvm-svn: 17655
2004-11-09 20:27:23 +00:00
Reid Spencer fb1f7357c2 * Implement getStatusInfo for getting stat(2)-like information
* Implement createTemporaryFile for mkstemp(3) functionality
* Fix isBytecodeFile to accept llvc magic # (compressed) as bytecode.

llvm-svn: 17654
2004-11-09 20:26:31 +00:00
John Criswell 623dc9c5c0 Recognize compressed LLVM bytecode files.
This should fix the problem of not being able to link compressed LLVM
bytecode files from LLVM libraries.

llvm-svn: 17648
2004-11-09 19:37:07 +00:00
Reid Spencer 6a1a10aa54 Tune compression:
bzip2: block size 9 -> 5, reduces memory by 400Kbytes, doesn't affect speed
       or compression ratio on all but the largest bytecode files (>1MB)
zip:   level 9 -> 6, this speeds up compression time by ~30% but only
       degrades the compressed size by a few bytes per megabyte. Those few
       bytes aren't worth the effort.

llvm-svn: 17647
2004-11-09 17:58:09 +00:00
Chris Lattner 436285e75d Change this back so that I get stable numbers to reflect the change from the
nightly testers

llvm-svn: 17646
2004-11-09 08:05:23 +00:00
Chris Lattner 1f0a97c6cb Fix bug: 2004-11-08-FreeUseCrash.ll
llvm-svn: 17642
2004-11-09 05:10:56 +00:00
Misha Brukman 3e5dd6d34c * Convert tabs to spaces
* Order #includes according to style guide
* Remove extraneous blank lines

llvm-svn: 17639
2004-11-09 04:27:19 +00:00