Commit Graph

9095 Commits

Author SHA1 Message Date
Chris Lattner 54f4e39956 optimize comparisons against cttz/ctlz/ctpop, patch by Alastair Lynn!
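(Illustrative sketch, not the committed testcase; value names are invented. ctpop of an i32 always yields a value in [0, 32], so an out-of-range comparison folds to a constant:)
  %p = call i32 @llvm.ctpop.i32(i32 %x)
  %c = icmp ugt i32 %p, 32        ; can never be true, folds to false
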
llvm-svn: 92745
2010-01-05 18:09:56 +00:00
Dan Gohman fb4193625a Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Devang Patel d146e2e3df If a scope has only one instruction, then the first instruction is also the last instruction.
llvm-svn: 92736
2010-01-05 16:59:17 +00:00
Chris Lattner 9da1cb243b optimize cttz and ctlz when we can prove something about the
leading/trailing bits.  Patch by Alastair Lynn!
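(A rough sketch of the idea, not the patch's own test; names invented. If the low bit of the operand is known set, cttz must return 0:)
  %o = or i32 %x, 1
  %t = call i32 @llvm.cttz.i32(i32 %o)    ; trailing-zero count is known to be 0
  %c = icmp eq i32 %t, 0                  ; folds to true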

llvm-svn: 92706
2010-01-05 07:23:56 +00:00
Chris Lattner f741d72b84 fix an infinite loop in reassociate building emacs.
llvm-svn: 92679
2010-01-05 04:55:35 +00:00
Devang Patel be94f23992 Remove dead debug info intrinsics.
  Intrinsic::dbg_stoppoint
  Intrinsic::dbg_region_start
  Intrinsic::dbg_region_end
  Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Devang Patel e6433faba6 Fix debug_inlined section entries for routines whose names are changed through __asm() extension.
llvm-svn: 92533
2010-01-04 23:04:36 +00:00
Dan Gohman 8c63ee7e28 Make this test more portable.
llvm-svn: 92514
2010-01-04 21:23:34 +00:00
Devang Patel 63cdd6fcf3 Remove oversimplified test case.
llvm-svn: 92510
2010-01-04 20:54:06 +00:00
Dan Gohman 52183c3cc9 Add some tests and update an existing test to reflect recent
x86 isel peeps.

llvm-svn: 92509
2010-01-04 20:53:54 +00:00
Devang Patel a7c8e58d95 The test, derived from optimized IR, does not mention "bar" in debug info anywhere, so the dwarf writer is not expected to emit any debug info for function "bar".
llvm-svn: 92499
2010-01-04 19:41:13 +00:00
Chris Lattner a751d09c08 Truncate GEP indexes larger than the pointer size down to pointer size
when doing this transform if the GEP is not inbounds.  No testcase because
it is very difficult to trigger this: instcombine already canonicalizes
GEP indices to pointer size, so it relies on specific permutations of the
instcombine worklist.

Thanks to Duncan for pointing this possible problem out.

llvm-svn: 92495
2010-01-04 18:57:15 +00:00
Anton Korobeynikov d91a14dba5 Fix invalid chain folding for memory variant of sdiv / udiv
llvm-svn: 92472
2010-01-04 10:31:54 +00:00
Chris Lattner 2d91231d82 implement an instcombine xform needed by clang's codegen
on the example in PR4216.  This doesn't trigger in the testsuite,
so I'd really appreciate someone scrutinizing the logic for
correctness.

llvm-svn: 92458
2010-01-04 06:03:59 +00:00
Chris Lattner 1dae8766b1 fix PR5930, allowing the asmprinter to emit difference between
two labels as a truncate.

llvm-svn: 92455
2010-01-03 18:33:18 +00:00
Chris Lattner f6a585fc2f add PR#
llvm-svn: 92451
2010-01-03 18:10:58 +00:00
Chris Lattner a7cfc43af8 differences between two blockaddresses don't cause a
global variable initializer to require relocations.

llvm-svn: 92450
2010-01-03 18:09:40 +00:00
Chris Lattner fca0c8f93a generalize the previous transformation to handle indexing into
arrays of structs and other arrays, so long as all the subsequent
indexes are constants.  This triggers frequently for stuff like:

@divisions = internal constant [29 x [2 x i32]] [[2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 2], [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2]], align 32 ; <[29 x [2 x i32]]*> [#uses=50]

	  %623 = getelementptr inbounds [29 x [2 x i32]]* @divisions, i64 0, i64 %619, i64 0 ; <i32*> [#uses=1]
	   %684 = icmp eq i32 %683, 999 

also for the "my_defs" table in 'gs', etc.

llvm-svn: 92444
2010-01-03 03:03:27 +00:00
Chris Lattner 98ad2b56cc teach instcombine to optimize idioms like A[i]&42 == 0. This
occurs in 403.gcc in mode_mask_array, in safe-ctype.c (which
is copied in multiple apps) in _sch_istable, etc.

llvm-svn: 92427
2010-01-02 22:08:28 +00:00
Chris Lattner b56bef45f8 Teach the table lookup optimization to generate range compares
when a consecutive sequence of elements all satisfies the
predicate.  Like the double compare case, this generates better
code than the magic constant case and generalizes to more than
32/64 element array lookups.

Here are some examples where it triggers.  From 403.gcc, most
accesses to the rtx_class array are handled, e.g.:

@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=547]
   %142 = icmp eq i8 %141, 105
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=543]
	   %165 = icmp eq i8 %164, 60      

Also, most of the 59-element arrays (mode_class/rid_to_yy, etc) 
optimized before are actually range compares.  This lets 32-bit
machines optimize them.

400.perlbmk has stuff like this:

400.perlbmk: PL_regkind, even for 32-bit:
@PL_regkind = constant [62 x i8] c"\00\00\02\02\02\06\06\06\06\09\09\0B\0B\0D\0E\0E\0E\11\12\12\14\14\16\16\18\18\1A\1A\1C\1C\1E\1F !!!$$&'((((,-.///88886789:;8$", align 32 ; <[62 x i8]*> [#uses=4]
	   %811 = icmp ne i8 %810, 33 

@PL_utf8skip = constant [256 x i8] c"\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\04\04\04\04\04\04\04\04\05\05\05\05\06\06\07\0D", align 32 ; <[256 x i8]*> [#uses=94]
	   %12 = icmp ult i8 %10, 2
           
etc.

llvm-svn: 92426
2010-01-02 21:50:18 +00:00
Nick Lewycky a67519be12 Fix logic error in previous commit. The != case needs to become an or, not an
and.

llvm-svn: 92419
2010-01-02 16:14:56 +00:00
Nick Lewycky 357d41b3c1 Optimize pointer comparison into the typesafe form, now that the backends will
handle them efficiently. This is the opposite direction of the transformation
we used to have here.

llvm-svn: 92418
2010-01-02 15:25:44 +00:00
Chris Lattner cfda435c73 Generalize the previous xform to handle cases where exactly
two elements match or don't match with two comparisons.  For
example, the testcase compiles into:

define i1 @test5(i32 %X) {
  %1 = icmp eq i32 %X, 2                          ; <i1> [#uses=1]
  %2 = icmp eq i32 %X, 7                          ; <i1> [#uses=1]
  %R = or i1 %1, %2                               ; <i1> [#uses=1]
  ret i1 %R
}

This generalizes the previous xforms when the array is larger than
64 elements (and this case matches) and generates better code for
cases where it overlaps with the magic bitshift case.

This generalizes more cases than you might expect.  For example,
400.perlbmk has:

@PL_utf8skip = constant [256 x i8] c"\01\01\01\...
%15 = icmp ult i8 %7, 7

403.gcc has:
@rid_to_yy = internal constant [114 x i16] [i16 259, i16 260, ...
%18 = icmp eq i16 %16, 295 

and xalancbmk has a bunch of examples, such as 
_ZN11xercesc_2_5L15gCombiningCharsE and _ZN11xercesc_2_5L10gBaseCharsE.

llvm-svn: 92417
2010-01-02 09:35:17 +00:00
Chris Lattner 935a4a606a enhance the compare/load/index optimization to work on *any* load
from a global with 32/64 elements or less (depending on whether
i64 is native on the target), generating a bitshift idiom to 
determine the result.  For example, on test4 we produce:

define i1 @test4(i32 %X) {
  %1 = lshr i32 933, %X                           ; <i32> [#uses=1]
  %2 = and i32 %1, 1                              ; <i32> [#uses=1]
  %R = icmp ne i32 %2, 0                          ; <i1> [#uses=1]
  ret i1 %R
}

This triggers in a number of interesting cases, for example, here's an
fp case:
@A.3255 = internal constant [4 x double] [double 4.100000e+00, double -3.900000e+00, double -1.000000e+00, double 1.000000e+00], align 32 ; <[4 x double]*> [#uses=7]
...
	   %7 = fcmp olt double %3, 0.000000e+00

In this case we make the slen2_tab global dead, which is nice:
@slen2_tab = internal constant [16 x i32] [i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 2, i32 3], align 32 ; <[16 x i32]*> [#uses=1]
...
	   %204 = icmp eq i32 %46, 0     

Perl has a bunch of these, also on the 'Perl_regkind' array:
@Perl_yygindex = internal constant [51 x i16] [i16 0, i16 0, i16 0, i16 0, i16 374, i16 351, i16 0, i16 -12, i16 0, i16 946, i16 413, i16 -83, i16 0, i16 0, i16 0, i16 -311, i16 -13, i16 4007, i16 2893, i16 0, i16 0, i16 0, i16 0, i16 0, i16 372, i16 -8, i16 0, i16 0, i16 246, i16 -131, i16 43, i16 86, i16 208, i16 -45, i16 -169, i16 987, i16 0, i16 0, i16 0, i16 0, i16 308, i16 0, i16 -271, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0], align 32 ; <[51 x i16]*> [#uses=1]
...
  %1364 = icmp eq i16 %1361, 0

186.crafty really likes this on 64-bit machines, because it triggers on a bunch of globals like this:
@white_outpost = internal constant [64 x i8] c"\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\02\02\00\00\00\00\00\04\05\05\04\00\00\00\00\03\06\06\03\00\00\00\00\00\01\01\00\00\00\00\00\00\00\00\00\00\00", align 32 ; <[64 x i8]*> [#uses=2]

However the big winner is 403.gcc, which triggers hundreds of times, eliminating all the accesses to the 57-element arrays 'mode_class', mode_unit_size, mode_bitsize, regclass_map, etc.

go 64-bit machines :)

llvm-svn: 92415
2010-01-02 08:56:52 +00:00
Chris Lattner b1567bd584 enhance the previous optimization to work with fcmp in addition
to icmp.

llvm-svn: 92412
2010-01-02 08:20:51 +00:00
Chris Lattner a061859ccc Teach instcombine to fold compares of loads from constant
arrays with variable indices into a comparison of the index
with a constant.  The most common occurrence of this that
I see by far is stuff like:

if ("foobar"[i] == '\0') ...

which we compile into: if (i == 6), saving a load and 
materialization of the global address.  This also exposes 
loop trip count information to later passes in many cases.

This triggers hundreds of times in xalancbmk, which is where I first
noticed it, but it also triggers in many other apps.  Here are a few 
interesting ones from various apps:

@must_be_connected_without = internal constant [8 x i8*] [i8* getelementptr inbounds ([3 x i8]* @.str64320, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str27283, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str71327, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str72328, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str18274, i64 0, i64 0), i8* getelementptr inbounds ([6 x i8]* @.str11267, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str32288, i64 0, i64 0), i8* null], align 32 ; <[8 x i8*]*> [#uses=2]
  %scevgep.i = getelementptr [8 x i8*]* @must_be_connected_without, i64 0, i64 %indvar.i ; <i8**> [#uses=1]
  %17 = load ...
  %18 = icmp eq i8* %17, null                     ; <i1> [#uses=1]
-> icmp eq i64 %indvar.i, 7 


@yytable1095 = internal constant [84 x i8] c"\12\01(\05\06\07\08\09\0A\0B\0C\0D\0E1\0F\10\11266\1D: \10\11,-,0\03'\10\11B6\04\17&\18\1945\05\06\07\08\09\0A\0B\0C\0D\0E\1E\0F\10\11*\1A\1B\1C$3+>#%;<IJ=ADFEGH9KL\00\00\00C", align 32 ; <[84 x i8]*> [#uses=2]
  %57 = getelementptr inbounds [84 x i8]* @yytable1095, i64 0, i64 %56 ; <i8*> [#uses=1]
   %mode.0.in = getelementptr inbounds [9 x i32]* @mb_mode_table, i64 0, i64 %.pn ; <i32*> [#uses=1]
load ...
   %64 = icmp eq i8 %58, 4                         ; <i1> [#uses=1]
-> icmp eq i64 %.pn, 35             ; <i1> [#uses=0]


@gsm_DLB = internal constant [4 x i16] [i16 6554, i16 16384, i16 26214, i16 32767]
%scevgep.i = getelementptr [4 x i16]* @gsm_DLB, i64 0, i64 %indvar.i ; <i16*> [#uses=1]
%425 = load %scevgep.i
%426 = icmp eq i16 %425, -32768                 ; <i1> [#uses=0]
-> false

llvm-svn: 92411
2010-01-02 08:12:04 +00:00
Chris Lattner 2e4be2c340 remove the instcombine transformations that are inserting nasty
pointer to int casts that confuse later optimizations.  See PR3351
for details.

This improves but doesn't completely fix 483.xalancbmk because llvm-gcc
does this xform in GCC's "fold" routine as well.  Clang++ will do
better I guess.

llvm-svn: 92408
2010-01-02 00:31:05 +00:00
Chris Lattner 909c71c96a allow this to work on linux hosts.
llvm-svn: 92407
2010-01-02 00:22:15 +00:00
Chris Lattner 1eea3b0ada Teach codegen to handle:
(X != null) | (Y != null) --> (X|Y) != 0
 (X == null) & (Y == null) --> (X|Y) == 0

so that instcombine can stop doing this for pointers.  This is part of PR3351,
which is a case where instcombine doing this for pointers (inserting ptrtoint)
is pessimizing code.
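
(Illustrative IR-level sketch of the first pattern, with invented value names; the actual change is in the DAG combiner:)
  %cx = icmp ne i32* %X, null
  %cy = icmp ne i32* %Y, null
  %c  = or i1 %cx, %cy
which codegen can now select as one 'or' of the two pointer-sized values plus a single compare against zero, without instcombine having to introduce ptrtoint.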

llvm-svn: 92406
2010-01-02 00:00:03 +00:00
Chris Lattner 6eef072eb6 rename file.
llvm-svn: 92405
2010-01-01 23:55:04 +00:00
Chris Lattner faf1337acb add a simple instcombine xform, simplify another one to use hasAllZeroIndices()
instead of hand rolling a loop.

llvm-svn: 92403
2010-01-01 23:09:08 +00:00
Chris Lattner 30c0a2833d generalize the pointer difference optimization to handle
a constantexpr gep on the 'base' side of the expression.
This completes comment #4 in PR3351, which comes from
483.xalancbmk.

llvm-svn: 92402
2010-01-01 22:42:29 +00:00
Chris Lattner 4394f71752 teach instcombine to optimize pointer difference idioms involving constant
expressions.  This is a step towards comment #4 in PR3351.

llvm-svn: 92401
2010-01-01 22:29:12 +00:00
Chris Lattner 25c87e9cf9 implement the transform requested in PR5284
llvm-svn: 92398
2010-01-01 18:34:40 +00:00
Chris Lattner 39f18e545e Teach codegen to lower llvm.powi to an efficient (but not optimal)
multiply sequence when the power is a constant integer.  Before, our
codegen for std::pow(.., int) always turned into a libcall, which was
really inefficient.

This should also make many gfortran programs happier I'd imagine.
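
(Rough IR-level sketch of the idea, not the actual legalization code; names invented. A constant power is computed by repeated squaring:)
  %r = call double @llvm.powi.f64(double %x, i32 4)
; becomes, in effect:
  %x2 = fmul double %x, %x
  %r2 = fmul double %x2, %x2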

llvm-svn: 92388
2010-01-01 03:32:16 +00:00
Chris Lattner 5967840a5f Make this more likely to generate a libcall.
llvm-svn: 92387
2010-01-01 03:26:51 +00:00
Chris Lattner 8330daf733 add a few trivial instcombines for llvm.powi.
llvm-svn: 92383
2010-01-01 01:52:15 +00:00
Chris Lattner 0c59ac3f41 When factoring multiply expressions across adds, factor both
positive and negative forms of constants together.  This 
allows us to compile:

int foo(int x, int y) {
    return (x-y) + (x-y) + (x-y);
}

into:

_foo:                                                       ## @foo
	subl	%esi, %edi
	leal	(%rdi,%rdi,2), %eax
	ret

instead of (where the 3 and -3 were not factored):

_foo:
        imull   $-3, 8(%esp), %ecx
        imull   $3, 4(%esp), %eax
        addl    %ecx, %eax
        ret

this started out as:
    movl    12(%ebp), %ecx
    imull   $3, 8(%ebp), %eax
    subl    %ecx, %eax
    subl    %ecx, %eax
    subl    %ecx, %eax
    ret

This comes from PR5359.

llvm-svn: 92381
2010-01-01 01:13:15 +00:00
Chris Lattner 2f03e64094 test case we already get right.
llvm-svn: 92380
2010-01-01 00:50:00 +00:00
Chris Lattner fed3397654 reuse negates where possible instead of always creating them from scratch.
This allows us to optimize test12 into:

define i32 @test12(i32 %X) {
  %factor = mul i32 %X, -3                        ; <i32> [#uses=1]
  %Z = add i32 %factor, 6                         ; <i32> [#uses=1]
  ret i32 %Z
}

instead of:

define i32 @test12(i32 %X) {
  %Y = sub i32 6, %X                              ; <i32> [#uses=1]
  %C = sub i32 %Y, %X                             ; <i32> [#uses=1]
  %Z = sub i32 %C, %X                             ; <i32> [#uses=1]
  ret i32 %Z
}

llvm-svn: 92373
2009-12-31 20:34:32 +00:00
Chris Lattner 60b71b5c4d teach reassociate to factor x+x+x -> x*3. While I'm at it,
fix RemoveDeadBinaryOp to actually do something.
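
(Minimal sketch, not the commit's testcase; names invented:)
  %a = add i32 %x, %x
  %b = add i32 %a, %x        ; reassociate now turns %b into: mul i32 %x, 3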

llvm-svn: 92368
2009-12-31 19:24:52 +00:00
Chris Lattner 4e3a5678af simple fix for an incorrect factoring which causes a
miscompilation, PR5458.

llvm-svn: 92354
2009-12-31 08:33:49 +00:00
Chris Lattner 2d3b53a68c merge some more tests in.
llvm-svn: 92353
2009-12-31 08:32:22 +00:00
Chris Lattner 19a4baa201 filecheckize
llvm-svn: 92352
2009-12-31 08:29:56 +00:00
Chris Lattner cac432c846 add some basic named MD tests.
llvm-svn: 92336
2009-12-31 03:00:49 +00:00
Chris Lattner c5c08899e4 fix two bogus tests that the asmparser now rejects.
llvm-svn: 92303
2009-12-30 05:54:51 +00:00
Chris Lattner 28f1eebe3e reimplement insertvalue/extractvalue metadata handling to not blindly
accept invalid input.  Actually add a testcase.

llvm-svn: 92297
2009-12-30 05:14:00 +00:00
Chris Lattner 0f3bb7b25e fix parsing of mdstring values.
llvm-svn: 92290
2009-12-30 04:13:37 +00:00
Chris Lattner 596760d9bb Each instruction is allowed to have *multiple* different
metadata objects on it.  Though the entire compiler supports this,
the asmparser didn't.

llvm-svn: 92270
2009-12-29 21:25:40 +00:00
Chris Lattner 93163c401e Do not crash when .ll printing metadata that smells like debug info, but isn't.
llvm-svn: 92268
2009-12-29 21:17:33 +00:00
Sanjiv Gupta 015215ca86 Extern declaration for unordered.f32 libcall was not being emitted. Fixed that.
llvm-svn: 92242
2009-12-29 03:24:34 +00:00
Sanjiv Gupta 1ecffe13b2 Fixed llc crash for zext (i1 -> i8) loads.
llvm-svn: 92201
2009-12-28 04:53:24 +00:00
Dale Johannesen 14ee5ead2d Testcase for llvm-gcc checkin 92108.
llvm-svn: 92110
2009-12-24 01:10:43 +00:00
Chris Lattner f5e3ed64d5 handle equality memcmp of 8 bytes on x86-64 with two unaligned loads and a
compare.  On other targets we end up with a call to memcmp because we don't
want 16 individual byte loads.  We should be able to use movups as well, but
we're failing to select the generated icmp.
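
(A sketch of the idea with invented names, not the code the compiler emits:)
  %pa = bitcast i8* %a to i64*
  %pb = bitcast i8* %b to i64*
  %va = load i64* %pa, align 1             ; unaligned 8-byte load
  %vb = load i64* %pb, align 1
  %eq = icmp eq i64 %va, %vb               ; replaces the equality-only memcmp(a, b, 8)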

llvm-svn: 92107
2009-12-24 01:07:17 +00:00
Chris Lattner 1a32ede6fd move an optimization for memcmp out of simplifylibcalls and into
SDISel.  This optimization was causing simplifylibcalls to 
introduce type-unsafe nastiness.  This is the first step, I'll be 
expanding the memcmp optimizations shortly, covering things that
we really really wouldn't want simplifylibcalls to do.

llvm-svn: 92098
2009-12-24 00:37:38 +00:00
Daniel Dunbar ccf79584f9 Remove an XFAIL.
llvm-svn: 92036
2009-12-23 20:13:44 +00:00
Mikhail Glushenkov f48a0c43b6 Allow (set_option SwitchOption, true).
llvm-svn: 91997
2009-12-23 12:49:30 +00:00
Sanjiv Gupta cd419eebce Reapply 91904.
llvm-svn: 91996
2009-12-23 11:19:09 +00:00
Sanjiv Gupta 6920c17f1f deleting empty file.
llvm-svn: 91994
2009-12-23 10:35:24 +00:00
Sanjiv Gupta f7b4f89588 Reverting back 91904.
llvm-svn: 91993
2009-12-23 09:46:01 +00:00
Dale Johannesen a864a67185 Use more sensible type for flags in asms. PR 5570.
Patch by Sylve`re Teissier (sorry, ASCII only).

llvm-svn: 91988
2009-12-23 07:32:51 +00:00
Eric Christopher fdb33458fc Update objectsize intrinsic and associated dependencies. Fix
lowering code and update testcases.

llvm-svn: 91979
2009-12-23 02:51:48 +00:00
Anton Korobeynikov ef3fdc1cbd Add testcase for PR5703
llvm-svn: 91931
2009-12-22 22:37:23 +00:00
Evan Cheng 71d7eaa87e Remove target attribute break-sse-dep. Instead, do not fold load into sse partial update instructions unless optimizing for size.
llvm-svn: 91910
2009-12-22 17:47:23 +00:00
Sanjiv Gupta 8c5f05fcee While converting one of the operands to a memory operand, we need to check that it is Legal and does not result in a cyclic dep.
llvm-svn: 91904
2009-12-22 14:25:37 +00:00
Chris Lattner f6d4129c76 specify a triple to use, fixing the test on non-x86-64 hosts.
llvm-svn: 91900
2009-12-22 07:01:12 +00:00
Bob Wilson 62a84ea8e3 Generalize SROA to allow the first index of a GEP to be non-zero. Add a
missing check that an array reference doesn't go past the end of the array,
and remove some redundant checks for in-bound array and vector references
that are no longer needed.

llvm-svn: 91897
2009-12-22 06:57:14 +00:00
Chris Lattner dd0c01b5de various cleanups, make the disassembler reject lines with too much
data on them, for example:

	addb	%al, (%rax)
simple-tests.txt:11:5: error: excess data detected in input
0 0 0 0 0 
    ^

llvm-svn: 91896
2009-12-22 06:56:51 +00:00
Chris Lattner dc9845b79a rewrite the file parser for the disassembler, implementing support for
comments.  Also, check in a simple testcase for the disassembler,
including a test for r91864

llvm-svn: 91894
2009-12-22 06:37:58 +00:00
Chris Lattner f21a220bcd Implement PR5795 by merging duplicated return blocks. This could go further
by merging all returns in a function into a single one, but simplifycfg 
currently likes to duplicate the return (an unfortunate choice!)

llvm-svn: 91890
2009-12-22 06:07:30 +00:00
Chris Lattner 2b297ed9ee convert to filecheck
llvm-svn: 91889
2009-12-22 06:04:26 +00:00
David Greene dbf7074296 Fix a bug in !subst where TableGen would go and resubstitute text it had
just substituted.  This could cause infinite looping in certain
pathological cases.

llvm-svn: 91843
2009-12-21 21:21:34 +00:00
Daniel Dunbar 0fc1f6c595 XFAIL these tests on powerpc, under the assumption that no one cares. If you care, feel free to fix.
llvm-svn: 91826
2009-12-21 17:31:59 +00:00
Chris Lattner 8fb07c5a21 fix PR5837 by having SSAUpdate reuse phi nodes for the
'GetValueInMiddleOfBlock' case, instead of inserting 
duplicates.

A similar fix is almost certainly needed by the machine-level
SSAUpdate implementation.

llvm-svn: 91820
2009-12-21 07:16:11 +00:00
Chris Lattner 7bc85a931e add check lines for min/max tests.
llvm-svn: 91816
2009-12-21 06:08:50 +00:00
Chris Lattner 33269813df really convert this to filecheck.
llvm-svn: 91815
2009-12-21 06:06:10 +00:00
Chris Lattner d4fb4296df give instcombine some helper functions for matching MIN and MAX, and
implement some optimizations for MIN(MIN()) and MAX(MAX()) and 
MIN(MAX()) etc.  This substantially improves the code in PR5822 but
doesn't kick in much elsewhere.  2 max's were optimized in 
pairlocalalign and one in smg2000.

llvm-svn: 91814
2009-12-21 06:03:05 +00:00
Chris Lattner 7a72d50de7 filecheckize
llvm-svn: 91813
2009-12-21 05:53:13 +00:00
Chris Lattner ffbd02829c enhance x-(-A) -> x+A to preserve NUW/NSW.
Use the presence of NSW/NUW to fold "icmp (x+cst), x" to a constant in
cases where it would otherwise be undefined behavior.

Surprisingly (to me at least), this triggers hundreds of times in
a few benchmarks: lencode, ldecode, and 466.h264ref seem to *really*
like this.
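
(One illustrative instance, not from the patch; names invented. With nuw the add cannot wrap, so the compare has a known result:)
  %a = add nuw i32 %x, 1
  %c = icmp ugt i32 %a, %x     ; with nuw this folds to true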

llvm-svn: 91812
2009-12-21 04:04:05 +00:00
Chris Lattner 900ce231f9 Optimize all cases of "icmp (X+Cst), X" to something simpler. This triggers
a bunch in lencode, ldecod, spass, 176.gcc, 252.eon, among others.  It is 
also the first part of PR5822

llvm-svn: 91811
2009-12-21 03:19:28 +00:00
Chris Lattner d18b455086 convert to filecheck
llvm-svn: 91810
2009-12-21 03:11:05 +00:00
Chris Lattner 25bf6f8946 fix an overly conservative caching issue that caused memdep to
cache a pointer as being unavailable due to phi trans in the
wrong place.  This would cause later queries to fail even when
they didn't involve phi trans.

llvm-svn: 91787
2009-12-19 21:29:22 +00:00
Chris Lattner 95b431dd32 fix inconsistent use of tabs
llvm-svn: 91783
2009-12-19 20:44:43 +00:00
Sanjiv Gupta 8ac077df57 Emit the direction operand in binary insns that store to memory.
llvm-svn: 91777
2009-12-19 13:52:01 +00:00
Sanjiv Gupta bda8002e7f Test cases for changes done in 91768.
llvm-svn: 91773
2009-12-19 11:38:14 +00:00
Chris Lattner 4ad5eba568 fix PR5827 by disabling the phi slicing transformation in a case
where instcombine would have to split a critical edge due to a
phi node of an invoke.  Since instcombine can't change the CFG,
it has to bail out from doing the transformation.

llvm-svn: 91763
2009-12-19 07:01:15 +00:00
Evan Cheng b175de6356 Increase opportunities to optimize (brcond (srl (and c1), c2)).
llvm-svn: 91717
2009-12-18 21:31:31 +00:00
Bob Wilson 532cd232fb Reapply 91459 with a simple fix for the problem that broke the x86_64-darwin
bootstrap.  This also replaces the WeakVH references that Chris objected to
with normal Value references.

llvm-svn: 91711
2009-12-18 20:14:40 +00:00
Mikhail Glushenkov d7015906d0 Make 'set_option' work with list options.
This works now: (set_option "list_opt", ["val_1", "val_2", "val_3"])

llvm-svn: 91679
2009-12-18 11:27:26 +00:00
Eli Friedman 86b9d75dc8 Optimize icmp of null and select of two constants even if the select has
multiple uses.  (The construct in question was found in gcc.)

llvm-svn: 91675
2009-12-18 08:22:35 +00:00
Evan Cheng 4cf30b72bf On recent Intel u-arch's, folding loads into some unary SSE instructions can
be non-optimal. To be precise, we should avoid folding loads if the instructions
only update part of the destination register, and the non-updated part is not
needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
the partial register dependency and it can improve performance. e.g.

movss (%rdi), %xmm0
cvtss2sd %xmm0, %xmm0

instead of
cvtss2sd (%rdi), %xmm0

An alternative method to break dependency is to clear the register first. e.g.
xorps %xmm0, %xmm0
cvtss2sd (%rdi), %xmm0

llvm-svn: 91672
2009-12-18 07:40:29 +00:00
Dan Gohman 51fbfb726f Tidy up this testcase and add test for tailcall optimization
with unreachable.

llvm-svn: 91650
2009-12-18 01:05:06 +00:00
Bob Wilson 3152b0471b Handle ARM inline asm "w" constraints with 64-bit ("d") registers.
The change in SelectionDAGBuilder is needed to allow using bitcasts to convert
between f64 (the default type for ARM "d" registers) and 64-bit Neon vector
types.  Radar 7457110.

llvm-svn: 91649
2009-12-18 01:03:29 +00:00
Dan Gohman 7f4326f8b6 Remove "tail" keywords. These calls are not intended to be tail calls.
This protects this test from depending on codegen not performing the
tail call optimization by default.

llvm-svn: 91648
2009-12-18 01:02:18 +00:00
Jakob Stoklund Olesen 78c5919b14 Add test case for the phi reuse patch.
llvm-svn: 91642
2009-12-18 00:11:44 +00:00
Sean Callanan 04d8cb74f3 Instruction fixes, added instructions, and AsmString changes in the
X86 instruction tables.

Also (while I was at it) cleaned up the X86 tables, removing tabs and
80-line violations.

This patch was reviewed by Chris Lattner, but please let me know if
there are any problems.

* X86*.td
	Removed tabs and fixed 80-line violations

* X86Instr64bit.td
	(IRET, POPCNT, BT_, LSL, SWPGS, PUSH_S, POP_S, L_S, SMSW)
		Added
	(CALL, CMOV) Added qualifiers
	(JMP) Added PC-relative jump instruction
	(POPFQ/PUSHFQ) Added qualifiers; renamed PUSHFQ to indicate
		that it is 64-bit only (ambiguous since it has no
		REX prefix)
	(MOV) Added rr form going the other way, which is encoded
		differently
	(MOV) Changed immediates to offsets, which is more correct;
		also fixed MOV64o64a to have to a 64-bit offset
	(MOV) Fixed qualifiers
	(MOV) Added debug-register and condition-register moves
	(MOVZX) Added more forms
	(ADC, SUB, SBB, AND, OR, XOR) Added reverse forms, which
		(as with MOV) are encoded differently
	(ROL) Made REX.W required
	(BT) Uncommented mr form for disassembly only
	(CVT__2__) Added several missing non-intrinsic forms
	(LXADD, XCHG) Reordered operands to make more sense for
		MRMSrcMem
	(XCHG) Added register-to-register forms
	(XADD, CMPXCHG, XCHG) Added non-locked forms
* X86InstrSSE.td
	(CVTSS2SI, COMISS, CVTTPS2DQ, CVTPS2PD, CVTPD2PS, MOVQ)
		Added
* X86InstrFPStack.td
	(COM_FST0, COMP_FST0, COM_FI, COM_FIP, FFREE, FNCLEX, FNOP,
	 FXAM, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, F2XM1, FYL2X,
	 FPTAN, FPATAN, FXTRACT, FPREM1, FDECSTP, FINCSTP, FPREM,
	 FYL2XP1, FSINCOS, FRNDINT, FSCALE, FCOMPP, FXSAVE,
	 FXRSTOR)
		Added
	(FCOM, FCOMP) Added qualifiers
	(FSTENV, FSAVE, FSTSW) Fixed opcode names
	(FNSTSW) Added implicit register operand
* X86InstrInfo.td
	(opaque512mem) Added for FXSAVE/FXRSTOR
	(offset8, offset16, offset32, offset64) Added for MOV
	(NOOPW, IRET, POPCNT, IN, BTC, BTR, BTS, LSL, INVLPG, STR,
	 LTR, PUSHFS, PUSHGS, POPFS, POPGS, LDS, LSS, LES, LFS,
	 LGS, VERR, VERW, SGDT, SIDT, SLDT, LGDT, LIDT, LLDT,
	 LODSD, OUTSB, OUTSW, OUTSD, HLT, RSM, FNINIT, CLC, STC,
	 CLI, STI, CLD, STD, CMC, CLTS, XLAT, WRMSR, RDMSR, RDPMC,
	 SMSW, LMSW, CPUID, INVD, WBINVD, INVEPT, INVVPID, VMCALL,
	 VMCLEAR, VMLAUNCH, VMRESUME, VMPTRLD, VMPTRST, VMREAD,
	 VMWRITE, VMXOFF, VMXON) Added
	(NOOPL, POPF, POPFD, PUSHF, PUSHFD) Added qualifier
	(JO, JNO, JB, JAE, JE, JNE, JBE, JA, JS, JNS, JP, JNP, JL,
	 JGE, JLE, JG, JCXZ) Added 32-bit forms
	(MOV) Changed some immediate forms to offset forms
	(MOV) Added reversed reg-reg forms, which are encoded
		differently
	(MOV) Added debug-register and condition-register moves
	(CMOV) Added qualifiers
	(AND, OR, XOR, ADC, SUB, SBB) Added reverse forms, like MOV
	(BT) Uncommented memory-register forms for disassembler
	(MOVSX, MOVZX) Added forms
	(XCHG, LXADD) Made operand order make sense for MRMSrcMem
	(XCHG) Added register-register forms
	(XADD, CMPXCHG) Added unlocked forms
* X86InstrMMX.td
	(MMX_MOVD, MMV_MOVQ) Added forms
* X86InstrInfo.cpp: Changed PUSHFQ to PUSHFQ64 to reflect table
	change

* X86RegisterInfo.td: Added debug and condition register sets
* x86-64-pic-3.ll: Fixed testcase to reflect call qualifier
* peep-test-3.ll: Fixed testcase to reflect test qualifier
* cmov.ll: Fixed testcase to reflect cmov qualifier
* loop-blocks.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-11.ll: Fixed testcase to reflect call qualifier
* 2009-11-04-SubregCoalescingBug.ll: Fixed testcase to reflect call
  qualifier
* x86-64-pic-2.ll: Fixed testcase to reflect call qualifier
* live-out-reg-info.ll: Fixed testcase to reflect test qualifier
* tail-opts.ll: Fixed testcase to reflect call qualifiers
* x86-64-pic-10.ll: Fixed testcase to reflect call qualifier
* bss-pagealigned.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-1.ll: Fixed testcase to reflect call qualifier
* widen_load-1.ll: Fixed testcase to reflect call qualifier

llvm-svn: 91638
2009-12-18 00:01:26 +00:00
Eli Friedman 250b119d98 Allow instcombine to combine "sext(a) >u const" to "a >u trunc(const)".
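(Illustrative sketch, not the committed testcase; names invented, and the patch checks the legality conditions, e.g. that the constant survives the truncation:)
  %s = sext i8 %a to i32
  %c = icmp ugt i32 %s, 100    ; can become: icmp ugt i8 %a, 100
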
llvm-svn: 91631
2009-12-17 22:42:29 +00:00
Eli Friedman 7cc86b4cc6 Make the ptrtoint comparison simplification work if one side is a global.
llvm-svn: 91624
2009-12-17 21:27:47 +00:00
Eli Friedman 5842c9968a Slightly generalize transformation of memmove(a,a,n) so that it also applies
to memcpy. (Such a memcpy is technically illegal, but in practice is safe
and is generated by struct self-assignment in C code.)
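
(Sketch of the pattern, using the 2.6-era intrinsic signature; not the committed test:)
  call void @llvm.memcpy.i64(i8* %p, i8* %p, i64 %n, i32 1)   ; source == dest, so the call is a no-op and can be removed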

llvm-svn: 91621
2009-12-17 21:07:31 +00:00
Bob Wilson f3927b7994 Re-revert 91459. It's breaking the x86_64 darwin bootstrap.
llvm-svn: 91607
2009-12-17 18:34:24 +00:00
Mikhail Glushenkov 1fe2678a06 Add a 'set_option' action for use in OptionPreprocessor.
llvm-svn: 91594
2009-12-17 07:49:16 +00:00
Eli Friedman e67cae33e1 Aggressively flip compare constant expressions where appropriate; constant
folding in particular expects null to be on the RHS.

llvm-svn: 91587
2009-12-17 06:07:04 +00:00
Evan Cheng aadf060b92 Revert this dag combine change:
Fold (zext (and x, cst)) -> (and (zext x), cst)

DAG combiner likes to optimize the expression in the other direction, so this would end up causing an infinite loop.

llvm-svn: 91574
2009-12-17 00:40:05 +00:00
Daniel Dunbar ab42d42390 Reapply r91459, it was only unmasking the bug, and since TOT is still broken having it reverted does no good.
llvm-svn: 91559
2009-12-16 20:09:53 +00:00
Daniel Dunbar 133efc317e Revert "Reapply 91184 with fixes and an addition to the testcase to cover the
problem", this broke llvm-gcc bootstrap for release builds on
x86_64-apple-darwin10.

This reverts commit db22309800b224a9f5f51baf76071d7a93ce59c9.

llvm-svn: 91534
2009-12-16 10:56:17 +00:00
Chris Lattner f278addbdc reapply my strstr optimization. I have reproduced the x86-64 bootstrap
miscompile (i386.o miscompares) but it happens both with and without
this patch.

llvm-svn: 91532
2009-12-16 09:32:05 +00:00
Nick Lewycky 23fbd54cfe Make this test pass on Linux.
llvm-svn: 91521
2009-12-16 07:35:25 +00:00
Devang Patel c69d9607f8 XFAIL on ppc-darwin.
llvm-svn: 91495
2009-12-16 02:11:38 +00:00
Evan Cheng 1be6286028 Re-enable 91381 with fixes.
llvm-svn: 91489
2009-12-16 00:53:11 +00:00
Chris Lattner 177be32334 revert my strstr optimization, I'm told it breaks x86-64 bootstrap.
Will reapply with a fix when I get a chance.

llvm-svn: 91486
2009-12-16 00:46:02 +00:00
Dale Johannesen 56f041406d Do better with physical reg operands (typically, from inline asm)
in local register allocator.  If a reg-reg copy has a phys reg
input and a virt reg output, and this is the last use of the phys
reg, assign the phys reg to the virt reg.  If a reg-reg copy has
a phys reg output and we need to reload its spilled input, reload
it directly into the phys reg rather than passing it through another reg.

Following 76208, there is sometimes no dependency between the def of
a phys reg and its use; this creates a window where that phys reg
can be used for spilling (this is true in linear scan also).  This
is bad and needs to be fixed a better way, although 76208 works too
well in practice to be reverted.  However, there should normally be
no spilling within inline asm blocks.  The patch here goes a long way
towards making this actually be true.

llvm-svn: 91485
2009-12-16 00:29:41 +00:00
Bob Wilson e44756d7c2 Reapply 91184 with fixes and an addition to the testcase to cover the problem
found last time.  Instead of trying to modify the IR while iterating over it,
I've change it to keep a list of WeakVH references to dead instructions, and
then delete those instructions later.  I also added some special case code to
detect and handle the situation when both operands of a memcpy intrinsic are
referencing the same alloca.

llvm-svn: 91459
2009-12-15 22:00:51 +00:00
Chris Lattner 26ab363361 optimize strstr, PR5783
llvm-svn: 91438
2009-12-15 19:14:40 +00:00
Mikhail Glushenkov 66a664870b Convert llvmc tests to FileCheck.
llvm-svn: 91420
2009-12-15 07:21:14 +00:00
Mikhail Glushenkov 62b65ecf18 Support hook invocation from 'append_cmd'.
llvm-svn: 91419
2009-12-15 07:20:50 +00:00
Kenneth Uildriks 792f0913ee For fastcc on x86, let ECX be used as a return register after EAX and EDX
llvm-svn: 91410
2009-12-15 03:27:52 +00:00
Evan Cheng fcb5453dc7 Disable 91381 for now. It's miscompiling ARMISelDAG2DAG.cpp.
llvm-svn: 91405
2009-12-15 03:07:11 +00:00
Mikhail Glushenkov 096fc103fb Validate the generated C++ code in llvmc tests.
Checks that the code generated by 'tblgen --emit-llvmc' can be actually
compiled. Also fixes two bugs found in this way:

- forward_transformed_value didn't work with non-list arguments
- cl::ZeroOrOne is now called cl::Optional

llvm-svn: 91404
2009-12-15 03:04:52 +00:00
Mikhail Glushenkov cb0721e363 Pipe 'grep' output to 'count'.
llvm-svn: 91403
2009-12-15 03:04:14 +00:00
Mikhail Glushenkov 2cb65bd5ab Allow $CALL(Hook, '$INFILE') for non-join tools.
llvm-svn: 91402
2009-12-15 03:04:02 +00:00
Evan Cheng 852c486946 Make 91378 more conservative.
1. Only perform (zext (shl (zext x), y)) -> (shl (zext x), y) when y is a constant. This makes sure it removes at least one zext.
2. If the shift is a left shift, make sure the original shift cannot shift out bits.

llvm-svn: 91399
2009-12-15 03:00:32 +00:00
Evan Cheng 0e8b9e32d1 Use sbb x, x to materialize the carry bit in a GPR. The result is all ones or all zeros.
llvm-svn: 91381
2009-12-15 00:53:42 +00:00
Evan Cheng d1521ef40c Fold (zext (and x, cst)) -> (and (zext x), cst).
llvm-svn: 91380
2009-12-15 00:52:11 +00:00
Evan Cheng ca7c690d3b Propagate zext through logical shift.
llvm-svn: 91378
2009-12-15 00:41:36 +00:00
Dan Gohman cecad35728 Fix integer cast code to handle vector types.
llvm-svn: 91362
2009-12-14 23:40:38 +00:00
Eric Christopher 1dba6ea72f Add radar fixed in comment.
llvm-svn: 91312
2009-12-14 19:07:25 +00:00
Shantonu Sen 0c20054cc4 Remove empty file completely
llvm-svn: 91277
2009-12-14 14:15:15 +00:00
Chris Lattner aaa6ac10a6 revert r91184, because it causes a crash on a .bc file I just
sent to Bob.

llvm-svn: 91268
2009-12-14 05:11:02 +00:00
Mikhail Glushenkov 897889ef6b Add a test for the 'init' option property.
llvm-svn: 91259
2009-12-14 04:06:38 +00:00
Evan Cheng 26fdd7265b Disable r91104 for x86. It causes partial register stalls which pessimize code in 32-bit mode.
llvm-svn: 91223
2009-12-12 20:03:14 +00:00
Benjamin Kramer 401e6093c9 Fix some CHECK lines which were ignored by accident.
llvm-svn: 91214
2009-12-12 09:25:50 +00:00
Bob Wilson 895f364ae6 Revise scalar replacement to be more flexible about handling bitcasts and GEPs.
While scanning through the uses of an alloca, keep track of the current offset
relative to the start of the alloca, and check memory references to see if
the offset & size correspond to a component within the alloca.  This has the
nice benefit of unifying much of the code from isSafeUseOfAllocation,
isSafeElementUse, and isSafeUseOfBitCastedAllocation.  The code to rewrite
the uses of a promoted alloca, after it is determined to be safe, is
reorganized in the same way.

Also, when rewriting GEP instructions, mark them as "in-bounds" since all the
indices are known to be safe.

llvm-svn: 91184
2009-12-11 23:47:40 +00:00
Anton Korobeynikov e27e028cdd Lower setcc branchless, if this is profitable.
Based on the patch by Brian Lucas!

llvm-svn: 91175
2009-12-11 23:01:29 +00:00
Dan Gohman 1d459e4937 Implement vector widening, splitting, and scalarizing for SIGN_EXTEND_INREG.
llvm-svn: 91158
2009-12-11 21:31:27 +00:00
Dan Gohman bffa061e02 Change this to the correct PR number.
llvm-svn: 91148
2009-12-11 20:09:21 +00:00
Dan Gohman 84ba039cf2 Make getUniqueExitBlocks's precondition assert more precise, to
avoid spurious failures. This fixes PR5758.

llvm-svn: 91147
2009-12-11 20:05:23 +00:00
Dan Gohman 6d306bb32b Fix the result type of SELECT nodes lowered from Select instructions with
aggregate return values. This fixes PR5754.

llvm-svn: 91145
2009-12-11 19:50:50 +00:00
Anton Korobeynikov fc51282cbe Honour setHasCalls() set from isel.
This is used in some weird cases like general dynamic TLS model.
This fixes PR5723

llvm-svn: 91144
2009-12-11 19:39:55 +00:00
Evan Cheng ff2ac71b25 Tests for 91103 and 91104.
llvm-svn: 91105
2009-12-11 06:02:21 +00:00
Eric Christopher 4b91e0194b Add a test for the fix in revision 91009.
llvm-svn: 91062
2009-12-10 21:11:40 +00:00
Evan Cheng 4986588ddb It's not safe to coalesce a move where src and dst registers have different subregister indices. e.g.:
%reg16404:1<def> = MOV8rr %reg16412:2<kill>

llvm-svn: 91061
2009-12-10 20:59:45 +00:00
Chris Lattner 9ccc879006 Fix PR5744, a case where we were getting the pointer size instead of the
value size.  This only manifested when memdep imprecisely returns clobber,
which is due to a caching issue in the PR5744 testcase.  We can 'efficiently
emulate' this by using '-no-aa'

llvm-svn: 91004
2009-12-10 00:11:45 +00:00
Evan Cheng 2262909b20 Fix test.
llvm-svn: 90988
2009-12-09 22:24:42 +00:00
Evan Cheng 493b882f80 Optimize splat of a scalar load into a shuffle of a vector load when it's legal. e.g.
vector_shuffle (scalar_to_vector (i32 load (ptr + 4))), undef, <0, 0, 0, 0>
=>
vector_shuffle (v4i32 load ptr), undef, <1, 1, 1, 1>

iff ptr is 16-byte aligned (or can be made into 16-byte aligned).

llvm-svn: 90984
2009-12-09 21:00:30 +00:00
Chris Lattner ca5f9cb18b fix the last remaining known (by me) phi translation bug. When we reanalyze
clobbers to forward pieces of large stores to small loads, we need to consider
the properly phi translated pointer in the store block.

llvm-svn: 90978
2009-12-09 18:21:46 +00:00
Chris Lattner 9f9010ef47 Add a minor optimization: if we haven't changed the operands of an
add, there is no need to scan the world to find the same add again.
This invalidates the previous testcase, which wasn't wonderful anyway,
because it needed a run of instcombine to permute the use-lists in 
just the right way to before GVN was run (so it was really fragile).
Not a big loss.

llvm-svn: 90973
2009-12-09 17:27:45 +00:00
Chris Lattner fa2e536831 fix PR5733, a case where we'd replace an add with a lexically identical
binary operator that wasn't an add.  In this case, a xor.  Whoops.

llvm-svn: 90971
2009-12-09 17:18:49 +00:00
Chris Lattner 8f77035568 merge crash-2.ll into crash.ll
llvm-svn: 90969
2009-12-09 17:17:26 +00:00
Chris Lattner 10398e74ae the code in GVN that tries to forward large loads to small
stores is not phi translating, thus it miscompiles really
crazy testcases.  This is from inspection, I haven't seen
this in the wild.

llvm-svn: 90930
2009-12-09 02:43:05 +00:00
Chris Lattner 972e6d8d00 Switch GVN and memdep to use PHITransAddr, which correctly handles
phi translation of complex expressions like &A[i+1].  This has the
following benefits:

1. The phi translation logic is all contained in its own class with
   a strong interface and verification that it is self consistent.

2. The logic is more correct than before.  Previously, if intermediate
   expressions got PHI translated, we'd miss the update and scan for
   the wrong pointers in predecessor blocks.  @phi_trans2 is a testcase
   for this.

3. We have a lot less code in memdep.

We can handle phi translation across blocks of things like @phi_trans3,
which is pretty insane :).

This patch should fix the miscompiles of 255.vortex, and I tested it 
with a bootstrap of llvm-gcc, llvm-test and dejagnu of course.

llvm-svn: 90926
2009-12-09 01:59:31 +00:00
Evan Cheng d938faff4b Teach InferPtrAlignment to infer GV+cst alignment and use it to simplify x86 isel lowering code.
llvm-svn: 90925
2009-12-09 01:53:58 +00:00
Devang Patel e52b1fa128 Remove tests that are not suitable anymore. Plus they are not testing the original bugfixes anymore. These tests were inserted to check bug fixes in code that handled debug info intrinsics. These intrinsics are no longer used and now llvm parser simply ignores old .dbg intrinsics from these dead tests.
llvm-svn: 90923
2009-12-09 01:46:00 +00:00
Devang Patel 512001ac7d Revert 90858 90875 and 90805 for now.
llvm-svn: 90898
2009-12-08 23:21:45 +00:00
Evan Cheng 0c2544fd6b - Support inline asm 'w' constraint for 128-bit vector types.
- Also support the 'q' NEON registers asm code.

llvm-svn: 90894
2009-12-08 23:06:22 +00:00
Daniel Dunbar 0f620b81c1 CMake/lit: Add llvm_{unit_,}site_config parameters, and always pass them when running tests from the project files.
llvm-svn: 90869
2009-12-08 19:47:36 +00:00
Devang Patel 7d723ec70d Do not try to push dead variable's debug info into namespace info.
llvm-svn: 90857
2009-12-08 15:01:35 +00:00
Duncan Sands 6a3df7b0c7 Teach GlobalOpt to delete aliases with internal linkage (after
forwarding any uses).  GlobalDCE can also do this, but is only
run at -O3.

llvm-svn: 90850
2009-12-08 10:10:20 +00:00
Anton Korobeynikov dd2b2f8cba Reduce (cmp 0, and_su (foo, bar)) into (bit foo, bar). This saves extra instruction. Patch inspired by Brian Lucas!
llvm-svn: 90819
2009-12-08 01:03:04 +00:00
Evan Cheng 8d61ec3002 Test case for 90787.
llvm-svn: 90791
2009-12-07 19:42:22 +00:00
David Greene 76a7edc36d Use FileCheck and set nounwind on calls.
llvm-svn: 90790
2009-12-07 19:40:26 +00:00
Dan Gohman 9528ccdd77 Don't enable the post-RA scheduler on x86 except at -O3. In its
current form, it is too expensive in compile time.

llvm-svn: 90781
2009-12-07 19:04:31 +00:00
Mikhail Glushenkov 6b6be99632 Implement 'forward_value' and 'forward_transformed_value'.
llvm-svn: 90770
2009-12-07 17:03:05 +00:00
Anton Korobeynikov 75dfed4fa5 Dynamic stack realignment's use of the sp register as source/dest register
in "bic sp, sp, #15" leads to unpredictable behaviour in Thumb2 mode.
Emit the following code instead:
mov r4, sp
bic r4, r4, #15
mov sp, r4

llvm-svn: 90724
2009-12-06 22:39:50 +00:00
Chris Lattner 6d6f10fe91 fix PR5698
llvm-svn: 90708
2009-12-06 17:17:23 +00:00
Chris Lattner 778cb92235 constant fold loads from memcpy's from global constants. This is important
because clang lowers nontrivial automatic struct/array inits to memcpy from
a global array.
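
(Compact sketch of the situation with invented names, using the 2.6-era memcpy intrinsic; not the committed test. The local initializer is a memcpy from a constant global, and the later load can now be folded:)
  @init = internal constant [2 x i32] [i32 1, i32 2]
  declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind

  define i32 @f() {
    %s = alloca [2 x i32]
    %dst = bitcast [2 x i32]* %s to i8*
    call void @llvm.memcpy.i64(i8* %dst, i8* bitcast ([2 x i32]* @init to i8*), i64 8, i32 4)
    %p = getelementptr [2 x i32]* %s, i32 0, i32 1
    %v = load i32* %p          ; can now be constant folded to 2
    ret i32 %v
  }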

llvm-svn: 90698
2009-12-06 05:29:56 +00:00
Chris Lattner 93236ba327 add support for forwarding mem intrinsic values to non-local loads.
llvm-svn: 90697
2009-12-06 04:54:31 +00:00
Chris Lattner 850a3cd905 gvn is optimizing this better now.
llvm-svn: 90696
2009-12-06 04:16:05 +00:00
Chris Lattner 42376066eb Handle forwarding local memsets to loads. For example, we optimize this:
short x(short *A) {
  memset(A, 1, sizeof(*A)*100);
  return A[42];
}

to 'return 257' instead of doing the load.  

llvm-svn: 90695
2009-12-06 01:57:02 +00:00
Chris Lattner eb5bb1bf78 merge two tests.
llvm-svn: 90691
2009-12-06 01:47:24 +00:00
Bill Wendling f89986235d Temporarily revert r90502. It was causing the llvm-gcc bootstrap on PPC to fail.
llvm-svn: 90653
2009-12-05 07:30:23 +00:00
Nick Lewycky a0e9d700dc Generalize this optimization to work on equality comparisons between any two
integers that are constant except for a single bit (the same n-th bit in each).

llvm-svn: 90646
2009-12-05 05:00:00 +00:00
Dan Gohman abc77742c8 Fix this code to use DIScope instead of DICompileUnit, as in r90181.
Don't print "SrcLine"; just print the filename and line number, which
is obvious enough and more informative.

llvm-svn: 90631
2009-12-05 00:23:29 +00:00
Dan Gohman 6aea8dccf1 Remove now-redundant llvm-as invocations.
llvm-svn: 90626
2009-12-05 00:02:37 +00:00
Bill Wendling f85dc3f0f1 Add testcase for PR4262.
llvm-svn: 90623
2009-12-04 23:29:57 +00:00
Bill Wendling 74356efae9 Temporarily revert r72620 because r72619 was reverted.
llvm-svn: 90619
2009-12-04 23:16:56 +00:00
Chris Lattner 1ddfd9f96c Fix PR5551 by not ignoring the top level constantexpr when
folding a load from a constant.

llvm-svn: 90545
2009-12-04 06:29:29 +00:00
Chris Lattner 1c21aaca06 Small and carefully crafted testcase showing a miscompilation by GVN
that I'm working on.  This is manifesting as a miscompile of 255.vortex
on some targets.  No check lines yet because it fails.

llvm-svn: 90520
2009-12-04 02:12:12 +00:00
Jakob Stoklund Olesen ca9cf65455 Also attempt trivial coalescing for live intervals that end in a copy.
The coalescer is supposed to clean these up, but when setting up parameters
for a function call, there may be copies to physregs. If the defining
instruction has been LICM'ed far away, the coalescer won't touch it.

The register allocation hint does not always work - when the register
allocator is backtracking, it clears the hints.

This patch takes care of a few more cases that r90163 missed.

llvm-svn: 90502
2009-12-04 00:16:04 +00:00
Nate Begeman 9655f84662 Don't pull vector sext through both operands of a logical operation, since doing so prevents the fusion of vector sext and setcc into vsetcc.
Add a testcase for the above transformation.
Fix a bogus use of APInt noticed while tracking this down.

llvm-svn: 90423
2009-12-03 07:11:29 +00:00
Bob Wilson 0bbd3077ce Recognize canonical forms of vector shuffles where the same vector is used for
both source operands.  In the canonical form, the 2nd operand is changed to an
undef and the shuffle mask is adjusted to only reference elements from the 1st
operand.  Radar 7434842.
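
(Small illustrative sketch, not the Radar testcase; names invented:)
  %s = shufflevector <4 x i32> %v, <4 x i32> %v, <4 x i32> <i32 0, i32 5, i32 2, i32 7>
; canonical form: the second operand becomes undef and the mask is remapped into operand 0
  %t = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>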

llvm-svn: 90417
2009-12-03 06:40:55 +00:00
Owen Anderson 0b6e260066 Fix this crasher, and add a FIXME for a missed optimization.
llvm-svn: 90408
2009-12-03 03:43:29 +00:00
Chris Lattner 65812b58f2 add a failing testcase.
llvm-svn: 90380
2009-12-03 01:46:18 +00:00
Chris Lattner 77c36d68f3 fix PR5673 by being more careful about pointers to functions.
llvm-svn: 90369
2009-12-03 01:05:45 +00:00
Bill Wendling 76bf386af0 Remove unnecessary check.
llvm-svn: 90352
2009-12-02 22:02:20 +00:00
Owen Anderson b9878ee6b6 Cleanup/remove some parts of the lifetime region handling code in memdep and GVN,
per Chris' comments.  Adjust testcases to match.

llvm-svn: 90304
2009-12-02 07:35:19 +00:00
Chris Lattner 4ca1981e82 merge sext-2 into sext.ll
llvm-svn: 90293
2009-12-02 05:34:35 +00:00
Chris Lattner 0a12a8f9fe rename test
llvm-svn: 90292
2009-12-02 05:32:33 +00:00
Chris Lattner fe206d2a13 filecheckize
llvm-svn: 90291
2009-12-02 05:32:16 +00:00
Mon P Wang bb3eac9e7a Fixed an assertion failure for tracking sext of a vector of integers
llvm-svn: 90290
2009-12-02 04:59:58 +00:00
Evan Cheng 732351f732 Fix PR5391: support early clobber physical register def tied with a use (ewwww)
- A valno should have HasRedefByEC set if there is an early clobber def in the middle of its live ranges. It should not be set if the valno's def is itself an early clobber.
- If a physical register def is tied to an use and it's an early clobber, it just means the HasRedefByEC is set since it's still one continuous live range.
- Add a couple of missing checks for HasRedefByEC in the coalescer. In general, it should not coalesce a vr with a physical register if the physical register has an early clobber def somewhere. This is overly conservative but that's the price for using such a nasty inline asm "feature".

llvm-svn: 90269
2009-12-01 22:25:00 +00:00
Jim Grosbach 8a8ba87ac8 test case for IV-Users simplification loop improvement
llvm-svn: 90260
2009-12-01 21:53:51 +00:00
Devang Patel 0a2c0bcb14 Clear function-specific containers while processing the end of a function, even if a DW_TAG_subprogram for the current function is not found.
llvm-svn: 90247
2009-12-01 18:13:48 +00:00
Chris Lattner 367b5eafb7 minimize this a bit more.
llvm-svn: 90216
2009-12-01 07:30:01 +00:00
Chris Lattner fd75b90d81 merge 2009-11-29-ReverseMap.ll into crash.ll
llvm-svn: 90212
2009-12-01 06:22:10 +00:00
Chris Lattner 3c9aca9079 fix PR5640 by tracking whether a block is the header of a loop more
precisely, which prevents us from infinitely peeling the loop.

llvm-svn: 90211
2009-12-01 06:04:43 +00:00
Jakob Stoklund Olesen 26667abbd3 Use CFG connectedness as a secondary sort key when deciding the order of copy coalescing.
This means that well connected blocks are copy coalesced before the less connected blocks. Connected blocks are more difficult to
coalesce because intervals are more complicated, so handling them first gives a greater chance of success.

llvm-svn: 90194
2009-12-01 03:03:00 +00:00
Dan Gohman 03f90ab0a9 Add a comment about A[i+(j+1)].
llvm-svn: 90185
2009-12-01 01:38:10 +00:00
Evan Cheng 1d31fc9123 Fix PR5614: parts of a physical register def may be killed by the rest.
llvm-svn: 90180
2009-12-01 00:44:45 +00:00
Devang Patel 3daa96b079 Test case for r90175.
llvm-svn: 90176
2009-12-01 00:13:06 +00:00
Jakob Stoklund Olesen 020d8d4c63 New virtual registers created for spill intervals should inherit allocation hints from the original register.
This helps us avoid silly copies when rematting values that are copied to a physical register:

leaq	_.str44(%rip), %rcx
movq	%rcx, %rsi
call	_strcmp

becomes:

leaq	_.str44(%rip), %rsi
call	_strcmp

The coalescer will not touch the movq because that would tie down the physical register.

llvm-svn: 90163
2009-11-30 22:55:54 +00:00
Bill Wendling 120037fec7 Debug info is disabled on PPC Darwin.
llvm-svn: 90160
2009-11-30 22:23:29 +00:00
Nick Lewycky 8a29dd4c7f Add a testcase for the current llvm-gcc build failure.
llvm-svn: 90112
2009-11-30 07:02:18 +00:00
Mon P Wang 031cb00246 Add test case for r90108
llvm-svn: 90109
2009-11-30 02:42:27 +00:00
Nick Lewycky fef0c67d01 Fix this test on 64-bit systems which seem to use i64 for gep indices sometimes
while 32-bit gcc uses i32.

llvm-svn: 90106
2009-11-30 02:23:57 +00:00
Nick Lewycky 95ef6c9560 Commit r90099 made LLVM simplify one of these constant expressions a little
more. Update the syntax we're checking for and filecheckize it too.

This will fix the selfhost buildbots but will 'break' the others (sigh) because
they're still linked against older LLVM which is emitting less optimized IR.

llvm-svn: 90104
2009-11-30 00:38:56 +00:00
Nick Lewycky e35e6f097d Teach ConstantFolding to do a better job when folding gep(bitcast).
This permits the devirtualization of llvm.org/PR3100#c9 when compiled by clang.

llvm-svn: 90099
2009-11-29 21:40:55 +00:00
Chris Lattner 1cc4cca193 add testcases for the foo_with_overflow op xforms added recently and
fix bugs exposed by the tests.  Testcases from Alastair Lynn!

llvm-svn: 90056
2009-11-29 02:57:29 +00:00
Chris Lattner 0d39613f65 add PR#
llvm-svn: 90049
2009-11-29 01:28:58 +00:00
Chris Lattner 73d45454be Add a testcase for:
void test(int N, double* G) {
  long j;
  for (j = 1; j < N - 1; j++)
      G[j] = G[j] + G[j+1] + G[j-1];
}

which we now compile to one load in the loop:

LBB1_2:                                                     ## %bb
	movsd	16(%rsi,%rax,8), %xmm2
	incq	%rdx
	addsd	%xmm2, %xmm1
	addsd	%xmm1, %xmm0
	movapd	%xmm2, %xmm1
	movsd	%xmm0, 8(%rsi,%rax,8)
	incq	%rax
	cmpq	%rcx, %rax
	jne	LBB1_2

instead of:

LBB1_2:                                                     ## %bb
	movsd	8(%rsi,%rax,8), %xmm0
	addsd	16(%rsi,%rax,8), %xmm0
	addsd	(%rsi,%rax,8), %xmm0
	movsd	%xmm0, 8(%rsi,%rax,8)
	incq	%rax
	cmpq	%rcx, %rax
	jne	LBB1_2

llvm-svn: 90048
2009-11-29 01:15:43 +00:00
Chris Lattner a73adac52e add a testcase for
void test9(int N, double* G) {
  long j;
  for (j = 1; j < N - 1; j++)
      G[j+1] = G[j] + G[j+1];
}

llvm-svn: 90047
2009-11-29 01:04:40 +00:00
Chris Lattner cd261c9c26 Implement PR5634.
llvm-svn: 90046
2009-11-29 00:51:17 +00:00
Nick Lewycky 218a3393f4 Teach memdep to look for memory use intrinsics during dependency queries. Fixes
PR5574.

llvm-svn: 90045
2009-11-28 21:27:49 +00:00
Chris Lattner 32140312ca reenable load address insertion in load pre. This allows us to
handle cases like this:
void test(int N, double* G) {
  long j;
  for (j = 1; j < N - 1; j++)
      G[j+1] = G[j] + G[j+1];
}

where G[1] isn't live into the loop.

llvm-svn: 90041
2009-11-28 16:08:18 +00:00
Chris Lattner c7bc66dfc6 implement a FIXME: limit the depth that DecomposeGEPExpression goes to, the same
way that getUnderlyingObject does.

This fixes the 'DecomposeGEPExpression and getUnderlyingObject disagree!' 
assertion on sqlite3.

llvm-svn: 90038
2009-11-28 15:12:41 +00:00
Chris Lattner cf0b198827 disable value insertion for now, I need to figure out how
to inform GVN about the newly inserted values.  This fixes 
PR5631.

llvm-svn: 90022
2009-11-27 22:50:07 +00:00
Chris Lattner d141f885a1 I accidentally implemented this :)
llvm-svn: 90014
2009-11-27 19:56:00 +00:00
Chris Lattner 2f0354ecf0 add support for recursive phi translation and phi
translation of add with immediate.  This allows us
to optimize this function:

void test(int N, double* G) {
  long j;
  G[1] = 1;
    for (j = 1; j < N - 1; j++)
        G[j+1] = G[j] + G[j+1];
}

to only do one load every iteration of the loop.

llvm-svn: 90013
2009-11-27 19:11:31 +00:00
Chris Lattner e66f84e012 add two simple test cases we now optimize (to one load in the loop each) and one we don't (corresponding to the fixme I added yesterday).
llvm-svn: 90012
2009-11-27 18:08:30 +00:00
Chris Lattner 2226db66ab fix PR5436 by making the 'simple' case of SRoA not promote out of range
array indexes.  The "complex" case of SRoA still handles them, and correctly.

This fixes a weirdness where we'd correctly avoid transforming A[0][42] if
the 42 was too large, but we'd only do it if it was one gep, not two separate
ones.

llvm-svn: 90007
2009-11-27 16:37:41 +00:00
Chris Lattner 92ba18e9e4 filecheckize
llvm-svn: 90006
2009-11-27 16:31:59 +00:00
Duncan Sands b56334b4f2 While this test is testing a problem in the generic part of codegen,
the problem only shows for msp430 and pic16 which is why it specifies
them using -march.  But it is wrong to put such tests in CodeGen/Generic,
since not everyone builds these targets.  Put a copy of the test in each
of the target test directories.

llvm-svn: 90005
2009-11-27 16:04:14 +00:00
Chris Lattner 25be93dfed teach GVN's load PRE to insert computations of the address in predecessors
where it is not available.  It's unclear how to get this inserted 
computation into GVN's scalar availability sets, Owen, help? :)

llvm-svn: 89997
2009-11-27 08:25:10 +00:00
Chris Lattner 41a5bba4e0 add some tests for memdep phi translation + PRE.
llvm-svn: 89996
2009-11-27 06:42:42 +00:00
Chris Lattner fa76d23c1d this test is failing, and is expected to.
llvm-svn: 89995
2009-11-27 06:36:28 +00:00
Chris Lattner 4f1552bde7 filecheckize
llvm-svn: 89994
2009-11-27 06:33:09 +00:00
Chris Lattner 66426c70e6 rename test.
llvm-svn: 89993
2009-11-27 06:31:55 +00:00
Chris Lattner a9a76ccf56 Fix phi translation in load PRE to agree with the phi
translation done by memdep, and reenable gep translation 
again.

llvm-svn: 89992
2009-11-27 06:31:14 +00:00
Chris Lattner b018bda665 redisable this, my bootstrap worked because it wasn't an optimized build, whoops.
llvm-svn: 89991
2009-11-27 05:53:01 +00:00
Chris Lattner fb8a718fc3 try again.
llvm-svn: 89990
2009-11-27 05:19:56 +00:00
Chris Lattner 14444f5c1a this is causing buildbot failures, disable for now.
llvm-svn: 89985
2009-11-27 01:52:22 +00:00
Chris Lattner 5030c6ab21 teach phi translation of GEPs to simplify geps like 'gep x, 0'.
This allows us to compile the example from PR5313 into:

LBB1_2:                                                     ## %bb
	incl	%ecx
	movb	%al, (%rsi)
	movslq	%ecx, %rax
	movb	(%rdi,%rax), %al
	testb	%al, %al
	jne	LBB1_2

instead of:

LBB1_2:                                                     ## %bb
	movslq	%eax, %rcx
	incl	%eax
	movb	(%rdi,%rcx), %cl
	movb	%cl, (%rsi)
	movslq	%eax, %rcx
	cmpb	$0, (%rdi,%rcx)
	jne	LBB1_2

llvm-svn: 89981
2009-11-27 00:34:38 +00:00
Chris Lattner 4c88e814b8 teach memdep to do trivial PHI translation of GEPs. More to
come.

llvm-svn: 89979
2009-11-27 00:07:37 +00:00
Chris Lattner 9bd2136ca3 Teach memdep to phi translate bitcasts. This allows us to compile
the example in GCC PR16799 to:

LBB1_2:                                                     ## %bb1
	movl	%eax, %eax
	subq	%rax, %rdi
	movq	%rdi, (%rcx)
	movl	(%rdi), %eax
	testl	%eax, %eax
	je	LBB1_2

instead of:

LBB1_2:                                                     ## %bb1
	movl	(%rdi), %ecx
	subq	%rcx, %rdi
	movq	%rdi, (%rax)
	cmpl	$0, (%rdi)
	je	LBB1_2

llvm-svn: 89978
2009-11-26 23:41:07 +00:00
Chris Lattner dfaa592de1 convert to filecheck
llvm-svn: 89977
2009-11-26 23:32:59 +00:00
Chris Lattner a73ecf0b00 Fix PR5471 by removing an instcombine xform. Some pieces of the code
generates store to undef and some generates store to null as the idiom
for undefined behavior.  Since simplifycfg zaps both, don't remove the
undefined behavior in instcombine.

llvm-svn: 89971
2009-11-26 22:04:42 +00:00
Chris Lattner 5fe97e7aca @test9 is a testcase for r89958. Before 89958, we misanalyzed the
first expression as P+4+4*i which we considered to possibly alias
P+4*j.  Now we correctly analyze the former one as P+1+4*i.

@test10 is a sanity test that verifies that we know that P+4+4*i != P+4*i.

llvm-svn: 89960
2009-11-26 19:25:46 +00:00
Chris Lattner 1bf7ff704a Implement PR1143 (at -m64) by making basicaa look through extensions. We
previously already handled it at -m32 because there were no i32->i64 
extensions for addressing.

llvm-svn: 89959
2009-11-26 18:53:33 +00:00
Chris Lattner 631c5b2cb9 teach GetLinearExpression to be a bit more aggressive.
llvm-svn: 89955
2009-11-26 17:00:01 +00:00
Chris Lattner ba0014a44c update status of this. basicaa is much improved now,
only missing the one form (in this testcase).  Dan, do you
consider this example to be important?

llvm-svn: 89953
2009-11-26 16:42:00 +00:00
Chris Lattner 29bc8a91d3 Teach basicaa that x|c == x+c when the c bits of x are clear. This
allows us to compile the example in readme.txt into:

LBB1_1:                                                     ## %bb
	movl	4(%rdx,%rax), %ecx
	movl	%ecx, %esi
	imull	(%rdx,%rax), %esi
	imull	%esi, %ecx
	movl	%esi, 8(%rdx,%rax)
	imull	%ecx, %esi
	movl	%ecx, 12(%rdx,%rax)
	movl	%esi, 16(%rdx,%rax)
	imull	%ecx, %esi
	movl	%esi, 20(%rdx,%rax)
	addq	$16, %rax
	cmpq	$4000, %rax
	jne	LBB1_1

instead of:

LBB1_1: 
	movl	(%rdx,%rax), %ecx
	imull	4(%rdx,%rax), %ecx
	movl	%ecx, 8(%rdx,%rax)
	imull	4(%rdx,%rax), %ecx
	movl	%ecx, 12(%rdx,%rax)
	imull	8(%rdx,%rax), %ecx
	movl	%ecx, 16(%rdx,%rax)
	imull	12(%rdx,%rax), %ecx
	movl	%ecx, 20(%rdx,%rax)
	addq	$16, %rax
	cmpq	$4000, %rax
	jne	LBB1_1

GCC (4.2) doesn't seem to be able to eliminate the loads in this 
testcase either, it generates:

L2:
	movl	(%rdx), %eax
	imull	4(%rdx), %eax
	movl	%eax, 8(%rdx)
	imull	4(%rdx), %eax
	movl	%eax, 12(%rdx)
	imull	8(%rdx), %eax
	movl	%eax, 16(%rdx)
	imull	12(%rdx), %eax
	movl	%eax, 20(%rdx)
	addl	$4, %ecx
	addq	$16, %rdx
	cmpl	$1002, %ecx
	jne	L2

llvm-svn: 89952
2009-11-26 16:26:43 +00:00
Chris Lattner 12dacdd359 teach basicaa that A[i] != A[i+1].
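(Minimal sketch of the kind of query this answers, with invented names:)
  %p = getelementptr i32* %A, i64 %i
  %j = add i64 %i, 1
  %q = getelementptr i32* %A, i64 %j
; basicaa can now report NoAlias for %p and %q: the two indices always differ
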
llvm-svn: 89951
2009-11-26 16:18:10 +00:00
Chris Lattner 453751031a rename test
llvm-svn: 89950
2009-11-26 16:08:41 +00:00
Chris Lattner 7a5b56aca9 Change the other half of aliasGEP (which handles GEP differencing) to use
DecomposeGEPExpression. This dramatically simplifies and shrinks the code by eliminating the
horrible CheckGEPInstructions method, fixes a miscompilation (@test3) and makes the code more
aggressive. In particular, we now handle the @test4 case, which is reduced from the SmallPtrSet
constructor. Missing this caused us to emit a variable length memset instead of a fixed size one.
llvm-svn: 89922
2009-11-26 02:17:34 +00:00
Chris Lattner 0d23076adf add a new random feature test
llvm-svn: 89921
2009-11-26 02:16:28 +00:00
Evan Cheng a4c986cbdd Test for 89905.
llvm-svn: 89906
2009-11-26 00:35:01 +00:00
Dale Johannesen 979ac9fce4 Test for llvm-gcc checkin 89898.
llvm-svn: 89899
2009-11-25 23:50:09 +00:00
Evan Cheng 44df27e964 ProcessImplicitDefs should watch out for invalidated iterators and extra implicit operands on copies.
llvm-svn: 89880
2009-11-25 21:13:39 +00:00
Bruno Cardoso Lopes 2db07581b7 Support PIC loading of constant pool entries
llvm-svn: 89863
2009-11-25 12:17:58 +00:00
Edward O'Callaghan 2b8fed15e0 Reverting patch in revision 89758, initial attempt at fixing PR5373 has proven to be bogus.
llvm-svn: 89844
2009-11-25 05:38:41 +00:00
Dale Johannesen 5ece8f0a20 Do not store R31 into the caller's link area on PPC.
This violates the ABI (that area is "reserved"), and
while it is safe if all code is generated with current
compilers, there is some very old code around that uses
that slot for something else, and breaks if it is stored
into.  Adjust testcases looking for current behavior.
I've verified that the stack frame size is right in all
testcases, whether it changed or not.  7311323.

llvm-svn: 89811
2009-11-24 22:59:02 +00:00