Commit Graph

827 Commits

Reid Spencer c635f47d9a For PR950:
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int   -> Int32
4. [U]Long  -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
   and other methods related to signedness. In a few places this warranted
   identifying the signedness information from other sources.

llvm-svn: 32785
2006-12-31 05:48:39 +00:00
Reid Spencer 193df25eb9 For PR1066:
Fix this by ensuring that a bitcast is inserted to do sign switching. This
is only temporarily needed as the merging of signed and unsigned is next
on the SignlessTypes plate.

llvm-svn: 32757
2006-12-24 00:40:59 +00:00
Reid Spencer 910f23f7d7 Shut up some compilers that can't accurately analyze variable usage
and emit "may be used uninitialized" warnings.

llvm-svn: 32756
2006-12-23 19:17:57 +00:00
Reid Spencer 43c77d53ff For PR1065:
Don't allow CmpInst instances to be processed in FoldSelectOpOp because
you can't easily swap their operands.

llvm-svn: 32753
2006-12-23 18:58:04 +00:00
Reid Spencer 266e42b312 For PR950:
This patch removes the SetCC instructions and replaces them with the ICmp
and FCmp instructions. The SetCondInst instruction has been removed and
replaced with ICmpInst and FCmpInst.

llvm-svn: 32751
2006-12-23 06:05:41 +00:00
Chris Lattner 79a42ac941 Switch over Transforms/Scalar to use the STATISTIC macro. For each statistic
converted, we lose a static initializer.  This also allows GCC to emit warnings
about unused statistics.

llvm-svn: 32690
2006-12-19 21:40:18 +00:00
Reid Spencer 668d90f289 Convert the last uses of CastInst::createInferredCast to a normal cast
creation. These changes are still temporary but at least this pushes
knowledge of signedness out closer to where it can be determined properly
and allows signedness to be removed from VMCore.

llvm-svn: 32654
2006-12-18 08:47:13 +00:00
Reid Spencer 74a528b427 Fix a bug in EvaluateInDifferentType. The type of operand should not be
used to determine whether a ZExt or SExt cast is performed. Instead, pass
an "isSigned" bool to the function and determine its value from the opcode
of the cast involved.
Also, clean up some cruft from previous patches.

llvm-svn: 32548
2006-12-13 18:21:21 +00:00
Reid Spencer 2a499b0b6c Implement review feedback. Most of this has to do with removing unnecessary
cast instructions. A few are bug fixes.

llvm-svn: 32544
2006-12-13 17:19:09 +00:00
Reid Spencer 612683b0d7 For mul transforms, when checking for a cast from bool as either operand,
make sure to also check that it is a zext from bool, not any other cast
operation type.

llvm-svn: 32539
2006-12-13 08:33:33 +00:00
Reid Spencer 799b5bfc71 Fix and/or/xor (cast A), (cast B) --> cast (and/or/xor A, B)
The cast patch introduced the possibility that the wrong cast opcode
could be used and that this transform could trigger on different kinds
of cast operations. This patch rectifies that.

llvm-svn: 32538
2006-12-13 08:27:15 +00:00
Reid Spencer bb65ebf9a1 Replace inferred getCast(V,Ty) calls with more strict variants.
Rename getZeroExtend and getSignExtend to getZExt and getSExt to match
the casting mnemonics in the rest of LLVM.

llvm-svn: 32514
2006-12-12 23:36:14 +00:00
Chris Lattner 2dc148e89d this can be trunc or bitcast, per line 3092.
llvm-svn: 32487
2006-12-12 19:11:20 +00:00
Chris Lattner ade1f6894d Fix regression on 400.perlbench last night.
llvm-svn: 32486
2006-12-12 18:41:03 +00:00
Reid Spencer 13bc5d7b57 Fix numerous inferred casts.
llvm-svn: 32479
2006-12-12 09:18:51 +00:00
Bill Wendling f3baad3ee1 Changed llvm_ostream et al. to OStream. llvm_cerr, llvm_cout, and llvm_null are
now cerr, cout, and NullStream respectively.

llvm-svn: 32298
2006-12-07 01:30:32 +00:00
Reid Spencer 4ae56f3086 Update ConstantIntegral Max/Min tests for new interface.
llvm-svn: 32288
2006-12-06 20:39:57 +00:00
Chris Lattner 700b873130 Detemplatize the Statistic class. The only type it is instantiated with
is 'unsigned'.

llvm-svn: 32279
2006-12-06 17:46:33 +00:00
Chris Lattner c209b584eb add an instcombine xform. This speeds up 462.libquantum from 9.78s to
7.48s.  This regression is due to unforeseen consequences of the cast patch.

llvm-svn: 32209
2006-12-05 01:26:29 +00:00
Reid Spencer 14fbdd5523 Update call to CastInst::getCastOpcode for its new signature.
llvm-svn: 32166
2006-12-04 02:48:01 +00:00
Chris Lattner 7a002fec1f disable transformations that are invalid for fp vectors. This fixes
Transforms/InstCombine/2006-12-01-BadFPVectorXform.ll

llvm-svn: 32112
2006-12-02 00:13:08 +00:00
Reid Spencer ad05ee9f39 Remove 4 FIXMEs to hack around cast-to-bool problems which no longer exist.
llvm-svn: 32051
2006-11-30 23:13:36 +00:00
Chris Lattner 960acb008b implement cast.ll:test35. With this, we recognize:
unsigned short swp(unsigned short a) {
       return ((a & 0xff00) >> 8 | (a & 0x00ff) << 8);
}

as an idiom for bswap.

llvm-svn: 32011
2006-11-29 07:18:39 +00:00
Chris Lattner d747f015ff Teach instcombine to turn trunc(srl x, c) -> srl (trunc(x), c) when safe.
This implements InstCombine/cast.ll:test34.  It fires hundreds of times on
176.gcc.

llvm-svn: 32009
2006-11-29 07:04:07 +00:00
Chris Lattner a7942b7bbd Implement Regression/Transforms/InstCombine/bswap-fold.ll,
folding   seteq (bswap(x)), c -> seteq(x,bswap(c))

llvm-svn: 32006
2006-11-29 05:02:16 +00:00
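A minimal C sketch of the fold above (hypothetical helper and values, not from the commit): because byte swapping is its own inverse, comparing bswap(x) against a constant is the same as comparing x against the byte-swapped constant, so the bswap moves onto the constant and folds away at compile time.

#include <assert.h>

/* Illustrative 32-bit byte swap; stands in for the bswap intrinsic. */
static unsigned bswap32(unsigned x) {
    return (x >> 24) | ((x >> 8) & 0xff00u) |
           ((x << 8) & 0xff0000u) | (x << 24);
}

int main(void) {
    unsigned x = 0x12345678u, c = 0x78563412u;
    /* seteq (bswap x), c  is equivalent to  seteq x, (bswap c) */
    assert((bswap32(x) == c) == (x == bswap32(c)));
    return 0;
}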
Reid Spencer a736fdf216 Join a split line.
llvm-svn: 31996
2006-11-29 01:11:01 +00:00
Reid Spencer 116ad83aa0 Undo the last patch until 253.perlbmk passes with these changes.
llvm-svn: 31977
2006-11-28 20:23:51 +00:00
Reid Spencer 59fe2d89ae Remove 4 FIXME's from the CAST patch now that the back end is correctly
producing code for "trunc to bool". This passes all tests on Linux.

llvm-svn: 31963
2006-11-28 07:23:01 +00:00
Chris Lattner 8e9a7b73d9 Fix PR1014 and InstCombine/2006-11-27-XorBug.ll.
llvm-svn: 31941
2006-11-27 19:55:07 +00:00
Reid Spencer 6c38f0bb07 For PR950:
The long awaited CAST patch. This introduces 12 new instructions into LLVM
to replace the cast instruction. Corresponding changes throughout LLVM are
provided. This passes llvm-test, llvm/test, and SPEC CPUINT2000 with the
exception of 175.vpr which fails only on a slight floating point output
difference.

llvm-svn: 31931
2006-11-27 01:05:10 +00:00
Bill Wendling 5dbf43c983 Removed #include <iostream> and replaced with llvm_* streams.
llvm-svn: 31923
2006-11-26 09:46:52 +00:00
Chris Lattner ec45a4c88c This xform is handled by FoldOpIntoPhi in visitCastInst in a more elegant way.
llvm-svn: 31889
2006-11-21 17:05:13 +00:00
Chris Lattner e3a63d136d Fix a gcc 4.2 warning.
llvm-svn: 31751
2006-11-15 04:53:24 +00:00
Chris Lattner f05d69ae72 implement InstCombine/shift-simplify.ll by transforming:
(X >> Z) op (Y >> Z)  -> (X op Y) >> Z

for all shifts and all ops={and/or/xor}.

llvm-svn: 31729
2006-11-14 07:46:50 +00:00
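A small C check of the identity above, assuming unsigned operands so the right shift is logical (hypothetical values): shifting both inputs by the same amount commutes with and/or/xor, so the two shifts merge into one.

#include <assert.h>

int main(void) {
    unsigned x = 0xDEADBEEFu, y = 0x0F0F0F0Fu, z = 5u;
    /* (X >> Z) op (Y >> Z)  ==  (X op Y) >> Z  for op in {and, or, xor} */
    assert(((x >> z) & (y >> z)) == ((x & y) >> z));
    assert(((x >> z) | (y >> z)) == ((x | y) >> z));
    assert(((x >> z) ^ (y >> z)) == ((x ^ y) >> z));
    return 0;
}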
Chris Lattner d12a4bf799 implement InstCombine/and-compare.ll:test1. This compiles:
typedef struct { unsigned prefix : 4; unsigned code : 4; unsigned unsigned_p : 4; } tree_common;
int foo(tree_common *a, tree_common *b) { return a->code == b->code; }

into:

_foo:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl (%eax), %eax
        xorl (%ecx), %eax
        # TRUNCATE movb %al, %al
        shrb $4, %al
        testb %al, %al
        sete %al
        movzbl %al, %eax
        ret

instead of:

_foo:
        movl 8(%esp), %eax
        movb (%eax), %al
        shrb $4, %al
        movl 4(%esp), %ecx
        movb (%ecx), %cl
        shrb $4, %cl
        cmpb %al, %cl
        sete %al
        movzbl %al, %eax
        ret

saving one cycle by eliminating a shift.

llvm-svn: 31727
2006-11-14 06:06:06 +00:00
Chris Lattner d4dee405cb Fix InstCombine/2006-11-10-ashr-miscompile.ll, a miscompilation introduced
by the shr -> [al]shr patch.  This was reduced from 176.gcc.

llvm-svn: 31653
2006-11-10 23:38:52 +00:00
Chris Lattner 6e2c15c158 Teach ShrinkDemandedConstant how to handle X+C. This implements:
add.ll:test33, add.ll:test34, shift-sra.ll:test2

llvm-svn: 31586
2006-11-09 05:12:27 +00:00
Chris Lattner 4f218d56f5 reenable factoring of GEP expressions, being more precise about the
cases where it is bad to do so.

llvm-svn: 31563
2006-11-08 19:42:28 +00:00
Chris Lattner cd62f11227 make this code more efficient by not creating a phi node we are just going to
delete in the first place.  This also makes it simpler.

llvm-svn: 31562
2006-11-08 19:29:23 +00:00
Chris Lattner a3acfca920 disable this factoring optimization for GEPs for now; it severely pessimizes some
loops.

llvm-svn: 31560
2006-11-08 18:49:31 +00:00
Reid Spencer fdff938a7e For PR950:
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The Shr instructions are no longer
dependent on the sign of their operands.

llvm-svn: 31542
2006-11-08 06:47:33 +00:00
Andrew Lenharth 0ebb0b03e6 The wrong parameter was being tested to determine i32 vs i64
llvm-svn: 31431
2006-11-03 22:45:50 +00:00
Reid Spencer de46e48420 For PR786:
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.

llvm-svn: 31380
2006-11-02 20:25:50 +00:00
Reid Spencer 7eb55b395f For PR950:
Replace the REM instruction with UREM, SREM and FREM.

llvm-svn: 31369
2006-11-02 01:53:59 +00:00
Chris Lattner eebea43b48 Factor gep instructions through phi nodes.
llvm-svn: 31346
2006-11-01 07:43:41 +00:00
Chris Lattner 14f82c7dcd Turn a phi of many loads into a phi of the address and a single load of the
result.  This can significantly shrink code and expose identities more
aggressively.

llvm-svn: 31344
2006-11-01 07:13:54 +00:00
Chris Lattner dc826fc068 Fix a bug in the previous patch
llvm-svn: 31342
2006-11-01 04:55:47 +00:00
Chris Lattner cadac0c5c3 Fold things like "phi [add (a,b), add(c,d)]" into two phi's and one add.
This triggers thousands of times on multisource.

llvm-svn: 31341
2006-11-01 04:51:18 +00:00
Reid Spencer 00c482b7a2 Simplify code a bit by changing instances of:
InsertNewInstBefore(new CastInst(Val, ValTy, Val->GetName()), I)
into:
   InsertCastBefore(Val, ValTy, I)

llvm-svn: 31204
2006-10-26 19:19:06 +00:00
Reid Spencer 7e80b0b31e For PR950:
Make necessary changes to support DIV -> [SUF]Div. This changes llvm to
have three division instructions: signed, unsigned, floating point. The
bytecode and assembler are backwards compatible, however.

llvm-svn: 31195
2006-10-26 06:15:43 +00:00
Chris Lattner 5dee3b2526 Fix miscompilation of MallocBench/espresso which code review pointed out
but apparently didn't make it into the final patch.

llvm-svn: 31070
2006-10-20 18:20:21 +00:00
Reid Spencer e0fc4dfc22 For PR950:
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.

llvm-svn: 31063
2006-10-20 07:07:24 +00:00
Devang Patel 5d417e35bc While creating mask, use 1ULL instead of 1.
llvm-svn: 31062
2006-10-20 01:16:56 +00:00
Devang Patel 5d6df959e3 It is OK to remove the extra cast if the operation is EQ/NE, even though the
source and destination signs may not match, provided the other conditions are met.

llvm-svn: 31056
2006-10-19 20:59:13 +00:00
Devang Patel 88afd00d1d Typo.
llvm-svn: 31055
2006-10-19 19:21:36 +00:00
Devang Patel 472530d9fc Typo.
llvm-svn: 31054
2006-10-19 19:05:38 +00:00
Devang Patel b42aef4925 Fix a bug in the PR454 resolution and add a new test case.
This fixes a miscompilation of llvmAsmParser.cpp by llvm on PowerPC Darwin.

llvm-svn: 31053
2006-10-19 18:54:08 +00:00
Reid Spencer 3c514959dd Undo Chris' last patch, it caused a regression.
llvm-svn: 30991
2006-10-16 23:08:08 +00:00
Chris Lattner 9a1c7dd27a fix a buggy check that accidentally disabled this xform
llvm-svn: 30967
2006-10-15 22:42:15 +00:00
Chris Lattner 2deeaeaca7 add a new SimplifyDemandedVectorElts method, which works similarly to
SimplifyDemandedBits.  The idea is that some operations can be simplified if
not all of the computed elements are needed.  Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elts through unmodified.  If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.

This turns (for example):

ushort %Convert_sse(float %f) {
        %tmp = insertelement <4 x float> undef, float %f, uint 0                ; <<4 x float>> [#uses=1]
        %tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1             ; <<4 x float>> [#uses=1]
        %tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2           ; <<4 x float>> [#uses=1]
        %tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3           ; <<4 x float>> [#uses=1]
        %tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > )               ; <<4 x float>> [#uses=1]
        %tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer )          ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 )              ; <int> [#uses=1]
        %tmp69 = cast int %tmp to ushort                ; <ushort> [#uses=1]
        ret ushort %tmp69
}

into:

ushort %Convert_sse(float %f) {
entry:
        %tmp28 = sub float %f, 1.000000e+00             ; <float> [#uses=1]
        %tmp37 = mul float %tmp28, 5.000000e-01         ; <float> [#uses=1]
        %tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0         ; <<4 x float>> [#uses=1]
        %tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > )           ; <<4 x float>> [#uses=1]
        %tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > )            ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 )              ; <int> [#uses=1]
        %tmp69 = cast int %tmp to ushort                ; <ushort> [#uses=1]
        ret ushort %tmp69
}

which improves codegen from:

_Convert_sse:
        movss LCPI1_0, %xmm0
        movss 4(%esp), %xmm1
        subss %xmm0, %xmm1
        movss LCPI1_1, %xmm0
        mulss %xmm0, %xmm1
        movss LCPI1_2, %xmm0
        minss %xmm0, %xmm1
        xorps %xmm0, %xmm0
        maxss %xmm0, %xmm1
        cvttss2si %xmm1, %eax
        andl $65535, %eax
        ret

to:

_Convert_sse:
        movss 4(%esp), %xmm0
        subss LCPI1_0, %xmm0
        mulss LCPI1_1, %xmm0
        movss LCPI1_2, %xmm1
        minss %xmm1, %xmm0
        xorps %xmm1, %xmm1
        maxss %xmm1, %xmm0
        cvttss2si %xmm0, %eax
        andl $65535, %eax
        ret


This is just a first step; it can be extended in many ways.  Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll

llvm-svn: 30752
2006-10-05 06:55:50 +00:00
Chris Lattner 7d19067c42 Fix a bug from r1.391 of this file, where we checked the size instead of
the alignment when promoting allocations.  This implements
InstCombine/cast.ll:test32

llvm-svn: 30682
2006-10-01 19:40:58 +00:00
Chris Lattner 6ab03f6a08 Eliminate ConstantBool::True and ConstantBool::False. Instead, provide
ConstantBool::getTrue() and ConstantBool::getFalse().

llvm-svn: 30665
2006-09-28 23:35:22 +00:00
Andrew Lenharth 44cb67af5c simplify
llvm-svn: 30535
2006-09-20 15:37:57 +00:00
Chris Lattner 380c7e9a59 We went through all that trouble to compute whether it was safe to transform
this comparison, but never checked it.  Whoops, no wonder we miscompiled
177.mesa!

llvm-svn: 30511
2006-09-20 04:44:59 +00:00
Evan Cheng cd3f6ff0e5 Back out Chris' last set of changes. This breaks 177.mesa and povray somehow.
llvm-svn: 30505
2006-09-20 01:39:40 +00:00
Evan Cheng 453280b94d 80 col.
llvm-svn: 30504
2006-09-20 01:10:02 +00:00
Andrew Lenharth 4f339bebb0 If we have an add, do it in the pointer realm, not the int realm. This is critical in the Linux kernel for pointer analysis correctness.
llvm-svn: 30496
2006-09-19 18:24:51 +00:00
Chris Lattner 12f52faf93 implement select.ll:test19-22
llvm-svn: 30482
2006-09-19 06:18:21 +00:00
Chris Lattner de07792595 Fix an infinite loop building the CFE
llvm-svn: 30465
2006-09-18 18:27:05 +00:00
Chris Lattner 4922a0e53f Implement InstCombine/cast.ll:test31. This speeds up 462.libquantum by 26%.
llvm-svn: 30456
2006-09-18 05:27:43 +00:00
Chris Lattner 420c4bcc8d Implement Transforms/InstCombine/shift-sra.ll:test0
llvm-svn: 30450
2006-09-18 04:31:40 +00:00
Chris Lattner b3f24c91b0 Rewrite shift/and/compare sequences to promote better licm of the RHS.
Use isLogicalShift/isArithmeticShift to simplify code.

llvm-svn: 30448
2006-09-18 04:22:48 +00:00
Chris Lattner 850465d53f Fix Transforms/InstCombine/2006-09-15-CastToBool.ll and PR913
llvm-svn: 30405
2006-09-16 03:14:10 +00:00
Chris Lattner d28627009a Fix PR905 and InstCombine/2006-09-11-EmptyStructCrash.ll
llvm-svn: 30266
2006-09-11 21:43:16 +00:00
Chris Lattner 0468987592 Implement Transforms/InstCombine/hoist_instr.ll
llvm-svn: 30234
2006-09-09 22:02:56 +00:00
Chris Lattner d79dc79831 Turn div X, (Cond ? Y : 0) -> div X, Y
This implements select.ll::test18.

llvm-svn: 30230
2006-09-09 20:26:32 +00:00
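A sketch in C of the reasoning behind the transform above (hypothetical functions, not from the commit): division by zero is undefined, so the 0 arm of the select can never be chosen in a well-defined execution, and the divisor must be Y.

/* Before: the divisor is a select whose false arm is 0. */
unsigned before(unsigned x, unsigned y, int cond) {
    return x / (cond ? y : 0);   /* defined only when cond is nonzero */
}

/* After: since the 0 arm would be undefined, the select folds to y. */
unsigned after(unsigned x, unsigned y) {
    return x / y;
}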
Chris Lattner c2d3d3112e eliminate RegisterOpt. It does the same thing as RegisterPass.
llvm-svn: 29925
2006-08-27 22:42:52 +00:00
Chris Lattner 3d27be1333 s|llvm/Support/Visibility.h|llvm/Support/Compiler.h|
llvm-svn: 29911
2006-08-27 12:54:02 +00:00
Chris Lattner 091b6ea847 Silence a warning produced in assertions-disabled mode
llvm-svn: 29108
2006-07-11 18:31:26 +00:00
Owen Anderson bbf8990ef7 Add a comment, and fix a typo that broke the build.
llvm-svn: 29094
2006-07-10 22:15:25 +00:00
Owen Anderson ae8aa646f1 Don't indent the entire function.
llvm-svn: 29093
2006-07-10 22:03:18 +00:00
Chris Lattner b7845d69db Recognize 16-bit bswaps by relaxing overconstrained pattern.
This implements Transforms/InstCombine/bswap.ll:test[34].

llvm-svn: 29087
2006-07-10 20:25:24 +00:00
Owen Anderson a6968f83b2 Make instcombine not remove Phi nodes when LCSSA is live.
llvm-svn: 29083
2006-07-10 19:03:49 +00:00
Chris Lattner 4a4c7fe7fa Shrink libllvmgcc.dylib by another 23K
llvm-svn: 28972
2006-06-28 22:08:15 +00:00
Chris Lattner 3fda386965 Fix Transforms/InstCombine/2006-06-28-infloop.ll
llvm-svn: 28961
2006-06-28 17:34:50 +00:00
Andrew Lenharth ebfa24ee9a Catch more function pointer casting problems
Remove the Function pointer cast in these calls, converting it to
a cast of the argument.
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( int 10 )
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( uint %tmp51 )

llvm-svn: 28953
2006-06-28 01:01:52 +00:00
Chris Lattner c482a9e31a Implement Transforms/InstCombine/bswap.ll, turning common shift/and/or bswap
idioms into bswap intrinsics.

llvm-svn: 28803
2006-06-15 19:07:26 +00:00
Chris Lattner 95cebb082f Fix a bug in a recent patch. This fixes UnitTests/Vector/Altivec/casts.c on
PPC/altivec

llvm-svn: 28698
2006-06-06 22:26:02 +00:00
Chris Lattner 1df0e98ac2 Swap the order of operands created here. For +&|^, the order doesn't matter,
but for sub, it really does!  This fixes a miscompilation of fibheap_cut in
llvmgcc4.

llvm-svn: 28600
2006-05-31 21:14:00 +00:00
Chris Lattner dab43b2b0e Implement Transforms/InstCombine/store.ll:test2.
llvm-svn: 28503
2006-05-26 19:19:20 +00:00
Chris Lattner 0e47716e69 Transform things like (splat(splat)) -> splat
llvm-svn: 28490
2006-05-26 00:29:06 +00:00
Chris Lattner 12249be286 Introduce a helper function that simplifies interpretation of shuffle masks.
No functionality change.

llvm-svn: 28489
2006-05-25 23:48:38 +00:00
Chris Lattner 99155be33f Turn (cast (shuffle (cast))) -> shuffle (cast) if it reduces the # of casts in
the program.  This exposes more opportunities for the instcombiner, and implements
vec_shuffle.ll:test6

llvm-svn: 28487
2006-05-25 23:24:33 +00:00
Chris Lattner 83f6578b0c An extractelement from a shuffle vector can be trivially turned into an
extractelement from the SV's source.  This implements vec_shuffle.ll:test[45]

llvm-svn: 28485
2006-05-25 22:53:38 +00:00
Chris Lattner d0622b6894 Silence a bogus gcc warning
llvm-svn: 28422
2006-05-20 23:14:03 +00:00
Evan Cheng 18d0438148 Backing out last check-in for now. It's causing an infinite loop in gccas on lencode.
llvm-svn: 28284
2006-05-14 06:46:03 +00:00
Chris Lattner 3987a8532d Add/Sub/Mul are safe to promote here as well. Incrementing a single-bit
bitfield now gives this code:

_plus:
        lwz r2, 0(r3)
        rlwimi r2, r2, 0, 1, 31
        xoris r2, r2, 32768
        stw r2, 0(r3)
        blr

instead of this:

_plus:
        lwz r2, 0(r3)
        srwi r4, r2, 31
        slwi r4, r4, 31
        addis r4, r4, -32768
        rlwimi r2, r4, 0, 0, 0
        stw r2, 0(r3)
        blr

this can obviously still be improved.

llvm-svn: 28275
2006-05-13 02:16:08 +00:00
Chris Lattner 1ebbe6a22e Implement simple promotion for cast elimination in instcombine. This is
currently very limited, but can be extended in the future.  For example,
we now compile:

uint %test30(uint %c1) {
        %c2 = cast uint %c1 to ubyte
        %c3 = xor ubyte %c2, 1
        %c4 = cast ubyte %c3 to uint
        ret uint %c4
}

to:

_xor:
        movzbl 4(%esp), %eax
        xorl $1, %eax
        ret

instead of:

_xor:
        movb $1, %al
        xorb 4(%esp), %al
        movzbl %al, %eax
        ret

More impressively, we now compile:

struct B { unsigned bit : 1; };
void xor(struct B *b) { b->bit = b->bit ^ 1; }

To (X86/PPC):

_xor:
        movl 4(%esp), %eax
        xorl $-2147483648, (%eax)
        ret
_xor:
        lwz r2, 0(r3)
        xoris r2, r2, 32768
        stw r2, 0(r3)
        blr

instead of (X86/PPC):

_xor:
        movl 4(%esp), %eax
        movl (%eax), %ecx
        movl %ecx, %edx
        shrl $31, %edx
        # TRUNCATE movb %dl, %dl
        xorb $1, %dl
        movzbl %dl, %edx
        andl $2147483647, %ecx
        shll $31, %edx
        orl %ecx, %edx
        movl %edx, (%eax)
        ret

_xor:
        lwz r2, 0(r3)
        srwi r4, r2, 31
        xori r4, r4, 1
        rlwimi r2, r4, 31, 0, 0
        stw r2, 0(r3)
        blr

This implements InstCombine/cast.ll:test30.

llvm-svn: 28273
2006-05-13 02:06:03 +00:00
Chris Lattner 1443bc52be Refactor some code, making it simpler.
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.

This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.

llvm-svn: 28224
2006-05-11 17:11:52 +00:00
Chris Lattner a36ee4ea34 Two changes:
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
   blocks (due to constants in conditional branches/switches) to the worklist.
   This causes them to be deleted before instcombine starts up, leading to
   better optimization.

2. In the prepass over instructions, do trivial constprop/dce as we go.  This
   has the effect of improving the effectiveness of #1.  In addition, it
   *significantly* speeds up instcombine on test cases with large amounts of
   constant folding code (for example, that produced by code specialization
   or partial evaluation).  In one example, it speeds up instcombine from
   0.0589s to 0.0224s with a release build (a 2.6x speedup).

llvm-svn: 28215
2006-05-10 19:00:36 +00:00
Chris Lattner 1d441adfbf Move some code around.
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated.  If one or
both don't, then this xform doesn't remove a cast.

This fixes Transforms/InstCombine/2006-05-06-Infloop.ll

llvm-svn: 28141
2006-05-06 09:00:16 +00:00
Chris Lattner e745c7de0e Fix an infinite loop compiling oggenc last night.
llvm-svn: 28128
2006-05-05 20:51:30 +00:00
Chris Lattner 3af1053488 Implement InstCombine/cast.ll:test29
llvm-svn: 28126
2006-05-05 06:39:07 +00:00
Chris Lattner fb29692055 Fix Transforms/InstCombine/2006-05-04-DemandedBitCrash.ll
llvm-svn: 28101
2006-05-04 17:33:35 +00:00
Chris Lattner 655d08fda8 Fix InstCombine/2006-04-28-ShiftShiftLongLong.ll
llvm-svn: 28019
2006-04-28 22:21:41 +00:00
Chris Lattner b6cb64b7e6 Add support for inserting undef into a vector. This implements
Transforms/InstCombine/vec_insert_to_shuffle.ll

llvm-svn: 27997
2006-04-27 21:14:21 +00:00
Andrew Lenharth f89e630b2f Make code match cvs commit message :)
llvm-svn: 27881
2006-04-20 15:41:37 +00:00
Andrew Lenharth 61eae29ad6 If we can convert the return pointer type into an integer that IntPtrType
can be converted to losslessly, we can continue the conversion to a direct call.

llvm-svn: 27880
2006-04-20 14:56:47 +00:00
Chris Lattner 36dd7c98d1 Turn x86 unaligned load/store intrinsics into aligned load/store instructions
if the pointer is known aligned.

llvm-svn: 27781
2006-04-17 22:26:56 +00:00
Chris Lattner 9095186deb Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform
Make the insert/extract elt -> shuffle code more aggressive.

This fixes CodeGen/PowerPC/vec_shuffle.ll

llvm-svn: 27728
2006-04-16 00:51:47 +00:00
Chris Lattner 34cebe785d Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask').
llvm-svn: 27727
2006-04-16 00:03:56 +00:00
Chris Lattner 39fac448d6 significant cleanups to code that uses insert/extractelt heavily. This builds
maximal shuffles out of them where possible.

llvm-svn: 27717
2006-04-15 01:39:45 +00:00
Chris Lattner b19a5c661b Turn casts into getelementptr's when possible. This enables SROA to be more
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.

This implements InstCombine/cast.ll:test27/28.

llvm-svn: 27620
2006-04-12 18:09:35 +00:00
Chris Lattner 2d37f920ad Implement vec_shuffle.ll:test3
llvm-svn: 27573
2006-04-10 23:06:36 +00:00
Chris Lattner fbb77a408b Implement InstCombine/vec_shuffle.ll:test[12]
llvm-svn: 27571
2006-04-10 22:45:52 +00:00
Chris Lattner e79d249c29 Lower vperm(x,y, mask) -> shuffle(x,y,mask) if mask is constant. This allows
us to compile oh-so-realistic stuff like this:

 vec_vperm(A, B, (vector unsigned char){14});

to:
        vspltb v0, v0, 14

instead of:

        vspltisb v0, 14
        vperm v0, v2, v1, v0

llvm-svn: 27452
2006-04-06 19:19:17 +00:00
Chris Lattner caba72b6ff vector casts of casts are eliminable. Transform this:
%tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]
        %tmp = cast <4 x int> %tmp to <4 x float>               ; <<4 x float>> [#uses=1]

into:

        %tmp = cast <4 x uint> %tmp to <4 x float>              ; <<4 x float>> [#uses=1]

llvm-svn: 27355
2006-04-02 05:43:13 +00:00
Chris Lattner ebca476b27 Allow transforming this:
%tmp = cast <4 x uint>* %testData to <4 x int>*         ; <<4 x int>*> [#uses=1]
        %tmp = load <4 x int>* %tmp             ; <<4 x int>> [#uses=1]

to this:

        %tmp = load <4 x uint>* %testData               ; <<4 x uint>> [#uses=1]
        %tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]

llvm-svn: 27353
2006-04-02 05:37:12 +00:00
Chris Lattner f42d0aeda1 Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
elimination of one load from this:

int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}

Now generating:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        addi r6, r1, -16
        lvx v0, r5, r4
        stvx v0, 0, r6
        lvx v1, 0, r3
        vcmpgefp. v0, v0, v1
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner 6cf4914fd4 Fix InstCombine/2006-04-01-InfLoop.ll
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner dcd0792622 Fold A^(B&A) -> (B&A)^A
Fold (B&A)^A == ~B & A

This implements InstCombine/xor.ll:test2[56]

llvm-svn: 27328
2006-04-01 08:03:55 +00:00
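A small C check of the second identity above, with hypothetical values: where a bit of A is 0 both sides are 0, and where it is 1 both sides reduce to the complement of B's bit.

#include <assert.h>

int main(void) {
    unsigned a = 0xCAFEBABEu, b = 0x12345678u;
    /* (B & A) ^ A  ==  ~B & A */
    assert(((b & a) ^ a) == (~b & a));
    return 0;
}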
Chris Lattner 8d1d8d364c If we can look through vector operations to find the scalar version of an
extract_element'd value, do so.

llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner 92346c315e extractelement(undef,x) -> undef
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner 612fa8e6f3 Fix Transforms/InstCombine/2006-03-30-ExtractElement.ll
llvm-svn: 27261
2006-03-30 22:02:40 +00:00
Chris Lattner d70d9f5b24 Don't crash on packed logical ops
llvm-svn: 27125
2006-03-25 21:58:26 +00:00
Jim Laskey 83f99115db Can't combine anymore - we don't have a chain through llvm.dbg intrinsics.
llvm-svn: 26992
2006-03-23 18:10:42 +00:00
Chris Lattner 53ef5a032c Teach the alignment handling code to look through constant expr casts and GEPs
llvm-svn: 26580
2006-03-07 01:28:57 +00:00
Chris Lattner 82f2ef20b6 Teach instcombine to increase the alignment of memset/memcpy/memmove when
the pointer is known to come from either a global variable, alloca or
malloc.  This allows us to compile this:

  P = malloc(28);
  memset(P, 0, 28);

into explicit stores on PPC instead of a memset call.

llvm-svn: 26577
2006-03-06 20:18:44 +00:00
Chris Lattner 6bc98653c2 Make vector narrowing more effective, implementing
Transforms/InstCombine/vec_narrow.ll.  This adds support for narrowing
extract_element(insertelement) also.

llvm-svn: 26538
2006-03-05 00:22:33 +00:00
Chris Lattner 32c01df299 Canonicalize (X+C1)*C2 -> X*C2+C1*C2
This implements Transforms/InstCombine/add.ll:test31

llvm-svn: 26519
2006-03-04 06:04:02 +00:00
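The canonicalization above is just the distributive law; a C sketch with hypothetical unsigned values (so the arithmetic is exact modulo 2^32, and C1*C2 folds to a fresh constant):

#include <assert.h>

int main(void) {
    unsigned x = 41u, c1 = 7u, c2 = 13u;
    /* (X + C1) * C2  ==  X*C2 + C1*C2, and 7*13 folds to the constant 91 */
    assert((x + c1) * c2 == x * c2 + c1 * c2);
    return 0;
}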
Chris Lattner 681ef2f083 Change this to work with renamed intrinsics.
llvm-svn: 26484
2006-03-03 01:34:17 +00:00
Chris Lattner 85dda9a2bd Generalize the REM folding code to handle another case Nick Lewycky
pointed out: realize the AND can provide factors and look through Casts.

llvm-svn: 26469
2006-03-02 06:50:58 +00:00
Chris Lattner c5b6c9a12a Fix a regression in a patch from a couple of days ago. This fixes
Transforms/InstCombine/2006-02-28-Crash.ll

llvm-svn: 26427
2006-02-28 19:47:20 +00:00
Chris Lattner b70f141893 Implement rem.ll:test[7-9] and PR712
llvm-svn: 26415
2006-02-28 05:49:21 +00:00
Chris Lattner 2a7c7b8bab Simplify some code now that the RHS of a rem can't be 0
llvm-svn: 26413
2006-02-28 05:40:55 +00:00
Chris Lattner 0de4a8d7b7 Rearrange some code, fold "rem X, 0", implementing rem.ll:test6
llvm-svn: 26411
2006-02-28 05:30:45 +00:00
Chris Lattner c7bfed0f7b Merge two almost-identical pieces of code.
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand.  This lets us fold this:

int %test23(int %a) {
        %tmp.1 = and int %a, 1
        %tmp.2 = seteq int %tmp.1, 0
        %tmp.3 = cast bool %tmp.2 to int  ;; xor tmp1, 1
        ret int %tmp.3
}

into: xor (and a, 1), 1
llvm-svn: 26396
2006-02-27 02:38:23 +00:00
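A C restatement of the fold above (hypothetical loop, not from the commit): once only the low bit of a is known to matter, the compare-and-cast sequence collapses to a single xor of that bit.

#include <assert.h>

int main(void) {
    for (unsigned a = 0; a < 8u; ++a)
        /* cast bool ((a & 1) == 0) to int  ==  (a & 1) ^ 1 */
        assert((unsigned)((a & 1u) == 0u) == ((a & 1u) ^ 1u));
    return 0;
}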
Chris Lattner f5c8a0b83f Fold (A^B) == A -> B == 0
and  (A-B) == A  ->  B == 0

llvm-svn: 26394
2006-02-27 01:44:11 +00:00
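Both folds above hold because xoring with A and subtracting from A are invertible; a C check with hypothetical unsigned values (subtraction wraps, so the second identity is exact):

#include <assert.h>

int main(void) {
    unsigned a = 0x1234u;
    for (unsigned b = 0; b < 4u; ++b) {
        assert(((a ^ b) == a) == (b == 0u));   /* (A^B) == A  ->  B == 0 */
        assert(((a - b) == a) == (b == 0u));   /* (A-B) == A  ->  B == 0 */
    }
    return 0;
}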
Chris Lattner f78df7c14d Fold (X|C1)^C2 -> X^(C1|C2) when possible. This implements
InstCombine/or.ll:test23.

llvm-svn: 26385
2006-02-26 19:57:54 +00:00
Chris Lattner b580d26e7d Fix a problem that Nate noticed that boils down to an overly conservative check
in the code that does "select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y)))".
We now compile this loop:

LBB1_1: ; no_exit
        add r6, r2, r3
        subf r3, r2, r3
        cmpwi cr0, r2, 0
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        blt cr0, LBB1_4 ; no_exit
LBB1_3: ; no_exit
        mr r3, r6
LBB1_4: ; no_exit
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit

into this instead:

LBB1_1: ; no_exit
        srawi r6, r2, 31
        add r2, r2, r6
        xor r6, r2, r6
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        add r3, r3, r6
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit

llvm-svn: 26356
2006-02-24 18:05:58 +00:00
Jeff Cohen 0add83e969 Fix bugs identified by VC++.
llvm-svn: 26287
2006-02-18 03:20:33 +00:00
Nate Begeman 8a77efe4f7 Rework the SelectionDAG-based implementations of SimplifyDemandedBits
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.

llvm-svn: 26238
2006-02-16 21:11:51 +00:00
Chris Lattner 8b10ab3002 Implement Instcombine/and.ll:test34
llvm-svn: 26155
2006-02-13 23:07:23 +00:00
Chris Lattner 7d8522884b If any of the sign extended bits are demanded, the input sign bit is demanded
for a sign extension.

This fixes InstCombine/2006-02-13-DemandedMiscompile.ll and Ptrdist/bc.

llvm-svn: 26152
2006-02-13 22:41:07 +00:00
Chris Lattner 68e7475777 Be careful not to request or look at bits shifted in from outside the size
of the input.  This fixes the mediabench/gsm/toast failure last night.

llvm-svn: 26138
2006-02-13 06:09:08 +00:00
Chris Lattner f5b4ef7f58 remove some more dead special case code
llvm-svn: 26135
2006-02-12 08:07:37 +00:00
Chris Lattner 5b2edb1fca Eliminate special case hacks that are superseded by general purpose hacks
llvm-svn: 26134
2006-02-12 08:02:11 +00:00
Chris Lattner ee0f280743 Three changes:
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
   Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
   between the shift and and.  More aggressive bitfolding for other reasons
   was turning signed shr's into unsigned shr's, leaving the noop cast in
   the way.

llvm-svn: 26131
2006-02-12 02:07:56 +00:00
Chris Lattner 0157e7f55b Port the recent innovations in ComputeMaskedBits to SimplifyDemandedBits.
This allows us to simplify on conditions where bits are not known, but they
are not demanded either!  This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.

In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.

llvm-svn: 26122
2006-02-11 09:31:47 +00:00
Chris Lattner 24cd2fa269 Fix 80-column violations
llvm-svn: 26088
2006-02-09 07:41:14 +00:00
Chris Lattner 4534dd59a3 Enhance MVIZ in three ways:
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.

This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.

llvm-svn: 26087
2006-02-09 07:38:58 +00:00
Chris Lattner ab2dc4d70d Simplify some code, reducing calls to MaskedValueIsZero. Implement a minor
optimization where we reduce the number of bits in AND masks when possible.

llvm-svn: 26056
2006-02-08 07:34:50 +00:00
Chris Lattner 5997cf9381 Use EraseInstFromFunction in a few cases to put the uses of the removed
instruction onto the worklist (in case they are now dead).

Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:

struct S {
    unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
    S();
};

S::S() : a(0), b(0), c(1), d(0), e(6) {}

to this:

void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}

much earlier (in gccas instead of only in gccld after DSE runs).

llvm-svn: 26050
2006-02-08 03:25:32 +00:00
Chris Lattner ddba3289b5 Fix a problem in my patch yesterday, causing a miscompilation of 176.gcc
llvm-svn: 26045
2006-02-08 01:20:23 +00:00
Chris Lattner 44314827d6 Fix Transforms/InstCombine/2006-02-07-SextZextCrash.ll
llvm-svn: 26040
2006-02-07 19:07:40 +00:00
Chris Lattner 92a6865321 Generalize MaskedValueIsZero into a ComputeMaskedNonZeroBits function, which
is just as efficient as MVIZ and is also more general.

Fix a few minor bugs introduced in recent patches

llvm-svn: 26036
2006-02-07 08:05:22 +00:00
Chris Lattner c3ebf40031 Make MaskedValueIsZero take a uint64_t instead of a ConstantIntegral as a
mask.  This allows the code to be simpler and more efficient.

Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.

llvm-svn: 26035
2006-02-07 07:27:52 +00:00
Chris Lattner 77defbae0a Use Type::getIntegralTypeMask() to simplify some code
llvm-svn: 26034
2006-02-07 07:00:41 +00:00
Chris Lattner 2590e511d8 Implement the beginnings of a facility for simplifying expressions based on
'demanded bits', inspired by Nate's work in the dag combiner.  This isn't
complete, but needs unrelated instcombiner changes to continue.

llvm-svn: 26033
2006-02-07 06:56:34 +00:00
Chris Lattner 2e90b732fa Turn A % (C << N), where C is 2^k, into A & ((C << N)-1) [urem only].
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].

Tested with: rem.ll:test5, div.ll:test10

llvm-svn: 26003
2006-02-05 07:54:04 +00:00
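A C sketch of the two unsigned strength reductions above with hypothetical values: a remainder by a power of two becomes a mask, and a division by (1<<C2)<<N becomes a right shift by N+C2.

#include <assert.h>

int main(void) {
    unsigned a = 0xDEADBEEFu, c2 = 3u, n = 2u;   /* divisor = (1<<3)<<2 = 32 */
    assert(a % ((1u << c2) << n) == (a & (((1u << c2) << n) - 1u)));
    assert(a / ((1u << c2) << n) == (a >> (n + c2)));
    return 0;
}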
Chris Lattner c597b8a55e Make iostream #inclusion explicit
llvm-svn: 25514
2006-01-22 23:32:06 +00:00
Chris Lattner e154abf9b3 Implement casts.ll:test26: a cast from float -> double -> integer doesn't
need the float->double part.

llvm-svn: 25452
2006-01-19 07:40:22 +00:00
Chris Lattner 307b7ea15f fix a crash due to missing parens
llvm-svn: 25363
2006-01-16 19:47:21 +00:00
Robert Bocchino a83529678e Added instcombine support for extractelement.
llvm-svn: 25299
2006-01-13 22:48:06 +00:00
Chris Lattner 503221f5c5 Do a simple instcombine xform to delete llvm.stackrestore cases.
llvm-svn: 25294
2006-01-13 21:28:09 +00:00
Chris Lattner c66b223b28 Simplify this a tiny bit by using the new IntrinsicInst functionality.
llvm-svn: 25292
2006-01-13 20:11:04 +00:00
Chris Lattner 9cbfbc21bb fix some 176.gcc miscompilation from my previous patch.
llvm-svn: 25137
2006-01-07 01:32:28 +00:00
Chris Lattner 330628a6d8 silence some bogus gcc warnings on fenris
llvm-svn: 25130
2006-01-06 17:59:59 +00:00
Chris Lattner eb372a0276 Enhance the shift-shift folding code to allow a no-op cast to occur in between
the shifts.

This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's scheme compiler):

int %x(uint %anf-temporary776) {
        %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
        %anf-temporary800 = cast uint %anf-temporary777 to int
        %anf-temporary804 = shl int %anf-temporary800, ubyte 1
        %anf-temporary805 = add int %anf-temporary804, -2
        %anf-temporary806 = or int %anf-temporary805, 1
        ret int %anf-temporary806
}

into this:

int %x(uint %anf-temporary776) {
        %anf-temporary776 = cast uint %anf-temporary776 to int
        %anf-temporary776.mask1 = add int %anf-temporary776, -2
        %anf-temporary805 = or int %anf-temporary776.mask1, 1
        ret int %anf-temporary805
}

note that instcombine already knew how to eliminate the AND that the two
shifts fold into.  This is tested by InstCombine/shift.ll:test26

-Chris

llvm-svn: 25128
2006-01-06 07:52:12 +00:00
Chris Lattner b330939d90 Simplify the code a bit more
llvm-svn: 25126
2006-01-06 07:22:22 +00:00
Chris Lattner 145539343f Extract a bunch of code out of visitShiftInst into FoldShiftByConstant. No
functionality changes.

llvm-svn: 25125
2006-01-06 07:12:35 +00:00
Nate Begeman 848622f87f Add support for alignment of allocation instructions.
Add support for specifying alignment and size of setjmp jmpbufs.

No targets currently do anything with this information, nor is it preserved
in the bytecode representation.  That's coming up next.

llvm-svn: 24196
2005-11-05 09:21:28 +00:00
Chris Lattner dd0c174082 Turn sdiv into udiv if both operands have a clear sign bit. This occurs
a few times in crafty:

OLD:    %tmp.36 = div int %tmp.35, 8            ; <int> [#uses=1]
NEW:    %tmp.36 = div uint %tmp.35, 8           ; <uint> [#uses=0]
OLD:    %tmp.19 = div int %tmp.18, 8            ; <int> [#uses=1]
NEW:    %tmp.19 = div uint %tmp.18, 8           ; <uint> [#uses=0]
OLD:    %tmp.117 = div int %tmp.116, 8          ; <int> [#uses=1]
NEW:    %tmp.117 = div uint %tmp.116, 8         ; <uint> [#uses=0]
OLD:    %tmp.92 = div int %tmp.91, 8            ; <int> [#uses=1]
NEW:    %tmp.92 = div uint %tmp.91, 8           ; <uint> [#uses=0]

Which all turn into shrs.

llvm-svn: 24190
2005-11-05 07:40:31 +00:00
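The reasoning in C, under the assumption the commit states (both operands known non-negative): signed and unsigned division then agree, and the unsigned divide by 8 later strength-reduces to a shift.

#include <assert.h>

int main(void) {
    int tmp = 12345;                     /* sign bit known clear */
    assert(tmp / 8 == (int)((unsigned)tmp / 8u));
    assert((unsigned)tmp / 8u == (unsigned)tmp >> 3);
    return 0;
}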
Chris Lattner e9ff0eaf5b Turn srem -> urem when neither input has its sign bit set. This triggers
8 times in vortex, allowing the srems to be turned into shrs:

OLD:    %tmp.104 = rem int %tmp.5.i37, 16               ; <int> [#uses=1]
NEW:    %tmp.104 = rem uint %tmp.5.i37, 16              ; <uint> [#uses=0]
OLD:    %tmp.98 = rem int %tmp.5.i24, 16                ; <int> [#uses=1]
NEW:    %tmp.98 = rem uint %tmp.5.i24, 16               ; <uint> [#uses=0]
OLD:    %tmp.91 = rem int %tmp.5.i19, 8         ; <int> [#uses=1]
NEW:    %tmp.91 = rem uint %tmp.5.i19, 8                ; <uint> [#uses=0]
OLD:    %tmp.88 = rem int %tmp.5.i14, 8         ; <int> [#uses=1]
NEW:    %tmp.88 = rem uint %tmp.5.i14, 8                ; <uint> [#uses=0]
OLD:    %tmp.85 = rem int %tmp.5.i9, 1024               ; <int> [#uses=2]
NEW:    %tmp.85 = rem uint %tmp.5.i9, 1024              ; <uint> [#uses=0]
OLD:    %tmp.82 = rem int %tmp.5.i, 512         ; <int> [#uses=2]
NEW:    %tmp.82 = rem uint %tmp.5.i1, 512               ; <uint> [#uses=0]
OLD:    %tmp.48.i = rem int %tmp.5.i.i161, 4            ; <int> [#uses=1]
NEW:    %tmp.48.i = rem uint %tmp.5.i.i161, 4           ; <uint> [#uses=0]
OLD:    %tmp.20.i2 = rem int %tmp.5.i.i, 4              ; <int> [#uses=1]
NEW:    %tmp.20.i2 = rem uint %tmp.5.i.i, 4             ; <uint> [#uses=0]

it also occurs 9 times in gcc, but with odd constant divisors (1009 and 61)
so the payoff isn't as great.

llvm-svn: 24189
2005-11-05 07:28:37 +00:00
Andrew Lenharth 662295587d make this 64-bit clean; fixes test30 of /Regression/Transforms/InstCombine/add.ll
llvm-svn: 24158
2005-11-02 18:35:40 +00:00
Chris Lattner 09efd4e5b6 Limit the search depth of MaskedValueIsZero to 6 instructions, to avoid
bad cases.  This fixes Markus's second testcase in PR639, and should
seal it for good.

llvm-svn: 24123
2005-10-31 18:35:52 +00:00
Chris Lattner 8f663e8bbc Pull some code out into a function, give it the ability to see through +.
This allows us to turn code like malloc(4*x+4) -> malloc int, (x+1)

llvm-svn: 24081
2005-10-29 04:36:15 +00:00
Chris Lattner 8270c33606 Remove a special case, allowing the general case to handle it. No functionality
change.

llvm-svn: 24076
2005-10-29 03:19:53 +00:00
Chris Lattner b9d3ca5c3c Fix a bit of backwards logic that broke exptree and smg2000
llvm-svn: 24056
2005-10-28 16:27:35 +00:00
Chris Lattner c4f67e67d2 Do not sink any instruction with side effects, including vaarg. This fixes
PR640

llvm-svn: 24046
2005-10-27 17:13:11 +00:00
Chris Lattner c6372cca78 Fix typo
llvm-svn: 24033
2005-10-27 06:26:26 +00:00
Chris Lattner 0fe7551bc0 Teach instcombine to promote stuff like (cast (malloc sbyte, 8*X) to int*)
into: malloc int, (2*X)

llvm-svn: 24032
2005-10-27 06:24:46 +00:00
Chris Lattner b3ecf96900 Promote cases like cast (malloc sbyte, 100) to int* into
(malloc [25 x int]) directly without having to convert to
(malloc [100 x sbyte]) first.

llvm-svn: 24031
2005-10-27 06:12:00 +00:00
Chris Lattner bb17180a23 Minor change to this file to support obscure cases with constant array amounts
llvm-svn: 24030
2005-10-27 05:53:56 +00:00
Chris Lattner 38a1b00a0f fold nested and's early to avoid inefficiencies in MaskedValueIsZero. This
fixes a very slow compile in PR639.

llvm-svn: 24011
2005-10-26 17:18:16 +00:00
Chris Lattner 46705b2f2d Handle allocations that, even after removing dead uses, still have more than
one use (but one is a cast).  This handles the very common case of:

 X = alloc [n x byte]
 Y = cast X to somethingbetter
 seteq X, null

In order to avoid infinite looping when there are multiple casts, we only
allow this if the xform is strictly increasing the alignment of the
allocation.

llvm-svn: 23961
2005-10-24 06:35:18 +00:00
Chris Lattner 355ecc09f8 Fix a bug where we would 'promote' an allocation from one type to another
where the second requires less alignment.  If we had explicit alignment
support in the IR, we could handle this case, but we can't until we do.

llvm-svn: 23960
2005-10-24 06:26:18 +00:00
Chris Lattner ac87beb03a Before promoting a malloc type, remove dead uses. This makes instcombine
more effective at promoting these allocations, catching them earlier in the
compile process.

llvm-svn: 23959
2005-10-24 06:22:12 +00:00
Chris Lattner 216be91817 Pull some code out into a function, no functionality change
llvm-svn: 23958
2005-10-24 06:03:58 +00:00
Chris Lattner da1b152c43 Make this work for FP constantexprs
llvm-svn: 23773
2005-10-17 20:18:38 +00:00
Chris Lattner 7fde91e365 Oops, X+0.0 isn't foldable, but X+-0.0 is.
llvm-svn: 23772
2005-10-17 17:56:38 +00:00
Chris Lattner 32979336a7 relax this a bit, as we only support the default rounding mode
llvm-svn: 23771
2005-10-17 17:49:32 +00:00
Chris Lattner 03b9eb506c Make MaskedValueIsZero a bit more aggressive
llvm-svn: 23677
2005-10-09 22:08:50 +00:00
Chris Lattner 62010c450f Fix funky xcode indentation
llvm-svn: 23674
2005-10-09 06:36:35 +00:00
Jeff Cohen 572910c9a2 Remove useless variable.
llvm-svn: 23656
2005-10-07 05:28:29 +00:00
Chris Lattner 0b011ec8e2 Factor the GetGEPGlobalInitializer out of this pass and into Transforms/Utils
as ConstantFoldLoadThroughGEPConstantExpr.

llvm-svn: 23445
2005-09-26 05:28:06 +00:00
Chris Lattner 0b3557f54a Move MaskedValueIsZero up.
Match a bunch of idioms for sign extensions, implementing InstCombine/signext.ll

llvm-svn: 23428
2005-09-24 23:43:33 +00:00
Chris Lattner b4b2530a1a Refactor this code a bit and make it more general. This now compiles:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

To:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        slwi r3, r3, 6
        add r3, r4, r3
        rlwimi r3, r4, 0, 26, 14
        stw r3, 0(r2)
        blr


instead of:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 26, 21, 31
        add r3, r5, r3
        rlwimi r4, r3, 6, 15, 25
        stw r4, 0(r2)
        blr

by eliminating an 'and'.

I'm pretty sure this is as small as we can go :)

llvm-svn: 23386
2005-09-18 07:22:02 +00:00
Chris Lattner 797dee7705 Compile
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) {
  b.j += x;
}

to:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        and %ECX, 131008
        mov %EDX, DWORD PTR [%ESP + 4]
        shl %EDX, 6
        add %EDX, %ECX
        and %EDX, 131008
        and %EAX, -131009
        or %EDX, %EAX
        mov DWORD PTR [b], %EDX
        ret

instead of:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        shr %ECX, 6
        and %ECX, 2047
        add %ECX, DWORD PTR [%ESP + 4]
        shl %ECX, 6
        and %ECX, 131008
        and %EAX, -131009
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23385
2005-09-18 06:30:59 +00:00
Chris Lattner 01f56c68e9 Generalize this transform, using MaskedValueIsZero, allowing us to compile:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

To:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        add DWORD PTR [b], %EAX
        ret

instead of:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        mov %ECX, DWORD PTR [b]
        add %EAX, %ECX
        and %EAX, -131072
        and %ECX, 131071
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23384
2005-09-18 06:02:59 +00:00
Chris Lattner 4ebc8ab4e0 fix typo
llvm-svn: 23383
2005-09-18 05:25:20 +00:00
Chris Lattner e5b23a6d67 Remove unintentionally committed code
llvm-svn: 23382
2005-09-18 05:12:51 +00:00
Chris Lattner 27cb9dbd35 implement shift.ll:test25. This compiles:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) {
  b.k += x;
}

to:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r3, 0(r2)
        rlwinm r4, r3, 0, 0, 14
        add r4, r4, r3
        rlwimi r4, r3, 0, 15, 31
        stw r4, 0(r2)
        blr

instead of:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        srwi r5, r4, 17
        add r3, r5, r3
        slwi r3, r3, 17
        rlwimi r3, r4, 0, 15, 31
        stw r3, 0(r2)
        blr

llvm-svn: 23381
2005-09-18 05:12:10 +00:00
Chris Lattner af517574ce Implement add.ll:test29. Codegening:
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus1 (unsigned int x) {
  b.i += x;
}

as:
_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        add r3, r4, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

instead of:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 0, 26, 31
        add r3, r5, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

llvm-svn: 23379
2005-09-18 04:24:45 +00:00
Chris Lattner 027eaf01cf remove debug output
llvm-svn: 23377
2005-09-18 03:50:25 +00:00
Chris Lattner 1521298993 Implement or.ll:test21. This teaches instcombine to be able to turn this:
struct {
   unsigned int bit0:1;
   unsigned int ubyte:31;
} sdata;

void foo() {
  sdata.ubyte++;
}

into this:

foo:
        add DWORD PTR [sdata], 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [sdata]
        mov %ECX, %EAX
        add %ECX, 2
        and %ECX, -2
        and %EAX, 1
        or %EAX, %ECX
        mov DWORD PTR [sdata], %EAX
        ret

llvm-svn: 23376
2005-09-18 03:42:07 +00:00
Chris Lattner a393e4d4b3 Fix the regression last night compiling povray
llvm-svn: 23348
2005-09-14 17:32:56 +00:00
Chris Lattner 2a8932960d Add a simple xform to simplify array accesses with casts in the way.
This is useful for 178.galgel where resolution of dope vectors (by the
optimizer) causes the scales to become apparent.

llvm-svn: 23328
2005-09-13 18:36:04 +00:00
Chris Lattner 567b81f0d2 Add a helper function, allowing us to simplify some code a bit and change
indentation; no functionality change

llvm-svn: 23325
2005-09-13 00:40:14 +00:00
Chris Lattner 219175c84d Implement a simple xform to turn code like this:
if () { store A -> P; } else { store B -> P; }

into a PHI node with one store, in the most trivial case.  This implements
load.ll:test10.

llvm-svn: 23324
2005-09-12 23:23:25 +00:00
Chris Lattner e0bfdf1485 Another load-peephole optimization: do gcse when two loads are next to
each other.  This implements InstCombine/load.ll:test9

llvm-svn: 23322
2005-09-12 22:21:03 +00:00
Chris Lattner b990f7d8ed Implement a trivial form of store->load forwarding where the store and the
load are exactly consecutive.  This is picked up by other passes, but this
triggers thousands of times in fortran programs that use static locals
(and is thus a compile-time speedup).

llvm-svn: 23320
2005-09-12 22:00:15 +00:00
Chris Lattner 9f269e40c9 Use the new 'moveBefore' method to simplify some code. Really, which is
easier to understand?  :)

llvm-svn: 22706
2005-08-08 19:11:57 +00:00
Chris Lattner 2c14cf7b74 Add some simple folds that occur in bitfield cases. Fix a minor bug in
isHighOnes, where it would consider 0 to have high ones.

llvm-svn: 22693
2005-08-07 07:03:10 +00:00
Chris Lattner 9f9c260b8c now that hasConstantValue defaults to only returning values that dominate
the PHI node, this ugly code can vanish.

llvm-svn: 22672
2005-08-05 01:04:30 +00:00
Nate Begeman b392321cae Fix a fixme in CondPropagate.cpp by moving a PhiNode optimization into
BasicBlock's removePredecessor routine.  This requires shuffling around
the definition and implementation of hasConstantValue from Utils.h,cpp into
Instructions.h,cpp

llvm-svn: 22664
2005-08-04 23:24:19 +00:00
Chris Lattner 22d00a8e90 Update to use the new MathExtras.h support for log2 computation.
Patch contributed by Jim Laskey!

llvm-svn: 22592
2005-08-02 19:16:58 +00:00
Jeff Cohen 5f4ef3c5a8 Eliminate all remaining tabs and trailing spaces.
llvm-svn: 22523
2005-07-27 06:12:32 +00:00
Chris Lattner 18aa4d8196 Do not let MaskedValueIsZero consider undef to be zero, for reasons
explained in the comment.

This fixes UnitTests/2003-09-18-BitFieldTest on darwin

llvm-svn: 22483
2005-07-20 18:49:28 +00:00
Chris Lattner 247aef884c When transforming &A[i] < &A[j] -> i < j, make sure to perform the comparison
as a signed compare.  This patch may fix PR597, but is correct in any case.

llvm-svn: 22465
2005-07-18 23:07:33 +00:00
Chris Lattner 4ed40f7c6f Fix a problem that instcombine would hit when dealing with unreachable code.
Because instcombine has to scan the entire function when it starts up
anyway, we might as well do it in DFO so we can nuke unreachable code.

This fixes: Transforms/InstCombine/2005-07-07-DeadPHILoop.ll

llvm-svn: 22348
2005-07-07 20:40:38 +00:00
Reid Spencer 4fdd96c4e0 Clean up some uninitialized variables and missing return statements that
the GCC 4.0.0 compiler (sometimes incorrectly) warns about in release builds.

llvm-svn: 22249
2005-06-18 17:37:34 +00:00
Chris Lattner 2ceb6ee576 This is not true: (X != 13 | X < 15) -> X < 15
It is actually always true.  This fixes PR586 and
Transforms/InstCombine/2005-06-16-SetCCOrSetCCMiscompile.ll

llvm-svn: 22236
2005-06-17 03:59:17 +00:00
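Why the dropped fold was wrong, as a C check over a hypothetical range: for any X, either X != 13 holds, or X == 13 and then X < 15 holds, so the disjunction is a tautology rather than being equivalent to X < 15 alone (X = 20 satisfies it, for example).

#include <assert.h>

int main(void) {
    for (int x = -100; x <= 100; ++x)
        assert((x != 13) || (x < 15));   /* always true */
    return 0;
}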
Chris Lattner 73bcba5f61 Don't crash when dealing with INTMIN. This fixes PR585 and
Transforms/InstCombine/2005-06-16-RangeCrash.ll

llvm-svn: 22234
2005-06-17 02:05:55 +00:00
Chris Lattner c53cb9d3ff avoid constructing out of range shift amounts.
llvm-svn: 22230
2005-06-17 01:29:28 +00:00
Chris Lattner 89dc4f16f5 Fix PR583 and testcase Transforms/InstCombine/2005-06-15-DivSelectCrash.ll
llvm-svn: 22227
2005-06-16 04:55:52 +00:00
Chris Lattner 252a845e30 Fix PR571, removing code that does just the WRONG thing :)
llvm-svn: 22225
2005-06-16 03:00:08 +00:00
Chris Lattner 104002bee3 Fix a bug in my previous patch. Do not get the shift amount type (which
is always ubyte); get the type being shifted.  This unbreaks espresso

llvm-svn: 22224
2005-06-16 01:52:07 +00:00
Chris Lattner 19b57f55aa Fix PR577 and testcase InstCombine/2005-06-15-ShiftSetCCCrash.ll.
Do not perform undefined out of range shifts.

llvm-svn: 22217
2005-06-15 20:53:31 +00:00
Reid Spencer a299d6f701 Put the hack back in that removes features, causes regressions to fail, but
allows test programs to succeed. Actual fix for this is forthcoming.

llvm-svn: 22213
2005-06-15 18:25:30 +00:00
Reid Spencer 6d231e55fa Unbreak several InstCombine regression checks that were broken by a hack to
fix the bzip2 test. A better hack is needed.

llvm-svn: 22209
2005-06-13 06:41:26 +00:00
Andrew Lenharth ffe65458e7 hack to fix bzip2 (bug 571)
llvm-svn: 22192
2005-06-04 12:43:56 +00:00
Chris Lattner 05c703ea85 preserve calling conventions when hacking on code
llvm-svn: 22024
2005-05-14 12:25:32 +00:00
Chris Lattner 61d9d81770 calling a function with the wrong CC is undefined, so turn it into an unreachable
instruction.  This is useful for catching optimizers that don't preserve
calling conventions

llvm-svn: 21928
2005-05-13 07:09:09 +00:00
Chris Lattner b62f5082c5 implement and.ll:test33
llvm-svn: 21809
2005-05-09 04:58:36 +00:00
Chris Lattner b18dbbfff5 Strength reduce SAR into SHR if there is no way sign bits could be shifted
in.  This tends to get cases like this:

  X = cast ubyte to int
  Y = shr int X, ...

Tested by: shift.ll:test24

llvm-svn: 21775
2005-05-08 17:34:56 +00:00
Chris Lattner 4294cec0f1 Fix a miscompilation of crafty caused by clobbering the "A" variable.
llvm-svn: 21770
2005-05-07 23:49:08 +00:00
Chris Lattner 6aacb0f9da Preserve tail marker
llvm-svn: 21737
2005-05-06 06:48:21 +00:00
Chris Lattner ef298a3b8a Teach instcombine to propagate zeroness (known-zero bits) through shl
instructions, implementing and.ll:test31
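A hypothetical instance of the kind of fold this enables (illustrative values,
not necessarily the exact and.ll:test31 case):

        %Y = shl ubyte %X, ubyte 4              ; low four bits of %Y are known zero
        %Z = and ubyte %Y, 15                   ; mask covers only known-zero bits

so %Z can simply be replaced by the constant 0.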

llvm-svn: 21717
2005-05-06 04:53:20 +00:00
Chris Lattner 873804168e Implement shift.ll:test23. If we are shifting right then immediately truncating
the result, turn signed shift rights into unsigned shift rights if possible.

This leads to later simplification and happens *often* in 176.gcc.  For example,
this testcase:

struct xxx { unsigned int code : 8; };
enum codes { A, B, C, D, E, F };
int foo(struct xxx *P) {
  if ((enum codes)P->code == A)
     bar();
}

used to be compiled to:

int %foo(%struct.xxx* %P) {
        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.3 = cast uint %tmp.2 to int                ; <int> [#uses=1]
        %tmp.4 = shl int %tmp.3, ubyte 24               ; <int> [#uses=1]
        %tmp.5 = shr int %tmp.4, ubyte 24               ; <int> [#uses=1]
        %tmp.6 = cast int %tmp.5 to sbyte               ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.6, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

Now it is compiled to:

        %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0           ; <uint*> [#uses=1]
        %tmp.2 = load uint* %tmp.1              ; <uint> [#uses=1]
        %tmp.2 = cast uint %tmp.2 to sbyte              ; <sbyte> [#uses=1]
        %tmp.8 = seteq sbyte %tmp.2, 0          ; <bool> [#uses=1]
        br bool %tmp.8, label %then, label %UnifiedReturnBlock

which is the difference between this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        shll $24, %eax
        sarl $24, %eax
        testb %al, %al
        jne .LBBfoo_2

and this:

foo:
        subl $4, %esp
        movl 8(%esp), %eax
        movl (%eax), %eax
        testb %al, %al
        jne .LBBfoo_2

This occurs 3243 times total in the External tests, 215x in povray,
6x in each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl,
25x in gap, 3x in m88ksim, 25x in ijpeg.

Maybe this will cause a little jump on gcc tomorrow :)

llvm-svn: 21715
2005-05-06 04:18:52 +00:00
Chris Lattner 7208616ec0 Implement xor.ll:test22
llvm-svn: 21713
2005-05-06 02:07:39 +00:00
Chris Lattner 4c2d3781aa implement and.ll:test30 and set.ll:test21
llvm-svn: 21712
2005-05-06 01:53:19 +00:00
Chris Lattner dd1e562ec3 implement or.ll:test20
llvm-svn: 21709
2005-05-06 00:58:50 +00:00
Chris Lattner 809dfac421 Instcombine: cast (X != 0) to int, cast (X == 1) to int -> X iff X has only the low bit set.
This implements set.ll:test20.

This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
6x on perlbmk and 11x on m88ksim.

It allows us to compile these two functions into the same code:

struct s { unsigned int bit : 1; };
unsigned foo(struct s *p) {
  if (p->bit)
    return 1;
  else
    return 0;
}
unsigned bar(struct s *p) { return p->bit; }

llvm-svn: 21690
2005-05-04 19:10:26 +00:00
Chris Lattner a816eee427 Implement getelementptr.ll:test11
llvm-svn: 21647
2005-05-01 04:42:15 +00:00
Chris Lattner a9d84e3388 Check for volatile loads only once.
Implement load.ll:test7

llvm-svn: 21645
2005-05-01 04:24:53 +00:00
Chris Lattner bd43b9db9d Fix the compile failures from last night.
llvm-svn: 21565
2005-04-26 14:40:41 +00:00
Chris Lattner a21bf8d1be implement getelementptr.ll:test10
llvm-svn: 21541
2005-04-25 20:17:30 +00:00
Chris Lattner 2f1457fd83 Eliminate cases where we could << by 64, which is undefined in C.
llvm-svn: 21500
2005-04-24 17:46:05 +00:00
Chris Lattner d6f636a340 Implement xor.ll:test21: select (not C), A, B -> select C, B, A
llvm-svn: 21495
2005-04-24 07:30:14 +00:00
Chris Lattner d1f46d3bf9 Use getPrimitiveSizeInBits() instead of getPrimitiveSize()*8
Completely rework the 'setcc (cast x to larger), y' code.  This code has
the advantage of implementing setcc.ll:test19 (being more general than
the previous code) and being correct in all cases.

This allows us to unxfail 2004-11-27-SetCCForCastLargerAndConstant.ll,
and close PR454.

llvm-svn: 21491
2005-04-24 06:59:08 +00:00
Jeff Cohen 82639853c0 Eliminate tabs and trailing spaces
llvm-svn: 21480
2005-04-23 21:38:35 +00:00
Chris Lattner 77c32c34d7 Generalize the setcc -> PHI and Select folding optimizations to work with
any constant RHS, not just a constant integer RHS.  This implements
select.ll:test17

llvm-svn: 21470
2005-04-23 15:31:55 +00:00
Misha Brukman b1c9317bb4 Remove trailing whitespace
llvm-svn: 21427
2005-04-21 23:48:37 +00:00
Chris Lattner 374e659466 Instcombine this:
%shortcirc_val = select bool %tmp.1, bool true, bool %tmp.4             ; <bool> [#uses=1]
        %tmp.6 = cast bool %shortcirc_val to int                ; <int> [#uses=1]

into this:

        %shortcirc_val = or bool %tmp.1, %tmp.4         ; <bool> [#uses=1]
        %tmp.6 = cast bool %shortcirc_val to int                ; <int> [#uses=1]

not this:

        %tmp.4.cast = cast bool %tmp.4 to int           ; <int> [#uses=1]
        %tmp.6 = select bool %tmp.1, int 1, int %tmp.4.cast             ; <int> [#uses=1]

llvm-svn: 21389
2005-04-21 05:43:13 +00:00
Chris Lattner 5c219469a0 Eliminate a broken transformation, fixing PR548
llvm-svn: 21354
2005-04-19 06:04:18 +00:00
Chris Lattner 4236261930 Fix bug: InstCombine/2005-05-07-UDivSelectCrash.ll
llvm-svn: 21152
2005-04-08 04:03:26 +00:00
Chris Lattner 4706046e68 Implement the following xforms:
(X-Y)-X --> -Y
A + (B - A) --> B
(B - A) + A --> B
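
For example, the first xform in IR (illustrative names):

        %t = sub int %X, %Y             ; X-Y
        %r = sub int %t, %X             ; (X-Y)-X

becomes

        %r = sub int 0, %Y              ; -Y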

llvm-svn: 21138
2005-04-07 17:14:51 +00:00
Chris Lattner c7f3c1a00e Implement InstCombine/add.ll:test28, transforming C1-(X+C2) --> (C1-C2)-X.
This occurs several dozen times in specint2k, particularly in crafty and gcc
apparently.
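
With concrete constants, say C1=10 and C2=3, the rewrite looks like:

        %t = add int %X, 3
        %r = sub int 10, %t

becomes

        %r = sub int 7, %X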

llvm-svn: 21136
2005-04-07 16:28:01 +00:00
Chris Lattner a9be4490d8 Transform X-(X+Y) == -Y and X-(Y+X) == -Y
llvm-svn: 21134
2005-04-07 16:15:25 +00:00
Chris Lattner ecfa9b5810 disable this transformation in the one obscure case that really pessimizes
pointer analysis.

llvm-svn: 20916
2005-03-29 06:37:47 +00:00
Chris Lattner cfe2822cdf Do not compute 1ULL << 64, which is undefined. This fixes Ptrdist/ks on the
sparc, and testcase Regression/Transforms/InstCombine/2005-03-04-ShiftOverflow.ll

llvm-svn: 20445
2005-03-04 23:21:33 +00:00
Chris Lattner 72684fecf8 Implement InstCombine/cast.ll:test25, a case that occurs many times
in spec

llvm-svn: 19953
2005-01-31 05:51:45 +00:00
Chris Lattner 31f486c775 Implement the trivial cases in InstCombine/store.ll
llvm-svn: 19950
2005-01-31 05:36:43 +00:00
Chris Lattner fe1b0b8b24 Implement Transforms/InstCombine/cast-load-gep.ll, which allows us to devirtualize
11 indirect calls in perlbmk.

llvm-svn: 19947
2005-01-31 04:50:46 +00:00
Chris Lattner d8e20188c6 Adjust to changes in instruction interfaces.
llvm-svn: 19900
2005-01-29 00:39:08 +00:00
Chris Lattner cd517ff0c7 * add some DEBUG statements
* Properly compile this:

struct a {};
int test() {
  struct a b[2];
  if (&b[0] != &b[1])
    abort ();
  return 0;
}

to 'return 0', not abort().

llvm-svn: 19875
2005-01-28 19:32:01 +00:00
Chris Lattner 9e2c7facb2 Get rid of several dozen more 'and' instructions in specint.
llvm-svn: 19786
2005-01-23 20:26:55 +00:00
Chris Lattner fc4429e7c1 Handle comparisons of gep instructions that have different typed indices
as long as they are the same size.

llvm-svn: 19734
2005-01-21 23:06:49 +00:00
Chris Lattner 411336fe04 Add two optimizations. The first folds (X+Y)-X -> Y
The second folds operations into selects, e.g. (select C, (X+Y), (Y+Z))
-> (Y+(select C, X, Z))
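
A sketch of the second fold in IR (hypothetical names):

        %a = add int %X, %Y
        %b = add int %Y, %Z
        %r = select bool %C, int %a, int %b

becomes

        %s = select bool %C, int %X, int %Z
        %r = add int %Y, %s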

This occurs a few times across spec, e.g.

         select    add/sub
mesa:    83        0
povray:  5         2
gcc:     4         2
parser:  0         22
perlbmk: 13        30
twolf:   0         3

llvm-svn: 19706
2005-01-19 21:50:18 +00:00
Chris Lattner 715364364b Delete PHI nodes that are not dead but are locked in a cycle in which each
node's only use is another node in the cycle.
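
A minimal sketch of such a cycle (illustrative): two PHI nodes whose only uses
are each other, so neither looks dead on its own, yet both can be deleted.

Loop:
        %A = phi int [ 0, %Entry ], [ %B, %Loop ]               ; only user of %B
        %B = phi int [ 0, %Entry ], [ %A, %Loop ]               ; only user of %A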

llvm-svn: 19629
2005-01-17 05:10:15 +00:00
Chris Lattner 03f06f11aa Move code out one level of indentation to make it easier to read.
Disable the xform for < > cases.  It turns out that the following is being
miscompiled:

bool %test(sbyte %S) {
        %T = cast sbyte %S to uint
        %V = setgt uint %T, 255
        ret bool %V
}

llvm-svn: 19628
2005-01-17 03:20:02 +00:00
Chris Lattner 51726c47fe Fix some bugs in an xform added yesterday. This fixes Prolangs-C/allroots.
llvm-svn: 19553
2005-01-14 17:35:12 +00:00
Chris Lattner 7aa41cfa88 Fix a compile crash on spiff
llvm-svn: 19552
2005-01-14 17:17:59 +00:00
Chris Lattner 4fa89827e2 if two gep comparisons only differ by one index, compare that index directly.
This allows us to better optimize begin() -> end() comparisons in common cases.
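For instance (hypothetical array and indices), two geps off the same base that
differ only in the final index:

        %p = getelementptr [100 x int]* %A, int 0, int %i               ; <int*>
        %q = getelementptr [100 x int]* %A, int 0, int %j               ; <int*>
        %c = seteq int* %p, %q          ; <bool>

folds to

        %c = seteq int %i, %j           ; <bool>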

llvm-svn: 19542
2005-01-14 00:20:05 +00:00
Chris Lattner d35d210ea0 Do not overrun iterators. This fixes a 176.gcc crash
llvm-svn: 19541
2005-01-13 23:26:48 +00:00
Chris Lattner a04c904c4c Turn select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y))). This occurs in
the 'sim' program and probably elsewhere.  In sim, it comes up for cases
like this:

#define round(x) ((x)>0.0 ? (x)+0.5 : (x)-0.5)
double G;
void T(double X) { G = round(X); }

(it uses the round macro a lot).  This changes the LLVM code from:

        %tmp.1 = setgt double %X, 0.000000e+00          ; <bool> [#uses=1]
        %tmp.4 = add double %X, 5.000000e-01            ; <double> [#uses=1]
        %tmp.6 = sub double %X, 5.000000e-01            ; <double> [#uses=1]
        %mem_tmp.0 = select bool %tmp.1, double %tmp.4, double %tmp.6
        store double %mem_tmp.0, double* %G

to:

        %tmp.1 = setgt double %X, 0.000000e+00          ; <bool> [#uses=1]
        %mem_tmp.0.p = select bool %tmp.1, double 5.000000e-01, double -5.000000e-01
        %mem_tmp.0 = add double %mem_tmp.0.p, %X
        store double %mem_tmp.0, double* %G
        ret void

llvm-svn: 19537
2005-01-13 22:52:24 +00:00
Chris Lattner 81e8417614 Implement an optimization for == and != comparisons like this:
_Bool test2(int X, int Y) {
  return &arr[X][Y] == arr;
}

instead of generating this:

bool %test2(int %X, int %Y) {
        %tmp.3.idx = mul int %X, 160            ; <int> [#uses=1]
        %tmp.3.idx1 = shl int %Y, ubyte 2               ; <int> [#uses=1]
        %tmp.3.offs2 = sub int 0, %tmp.3.idx            ; <int> [#uses=1]
        %tmp.7 = seteq int %tmp.3.idx1, %tmp.3.offs2            ; <bool> [#uses=1]
        ret bool %tmp.7
}


we now generate this:

bool %test2(int %X, int %Y) {
        seteq int %X, 0         ; <bool>:0 [#uses=1]
        seteq int %Y, 0         ; <bool>:1 [#uses=1]
        %tmp.7 = and bool %0, %1                ; <bool> [#uses=1]
        ret bool %tmp.7
}

This idiom occurs in C++ programs when iterating from begin() to end(),
in a vector or array.  For example, we now compile this:

void test(int X, int Y) {
  for (int *i = arr; i != arr+100; ++i)
    foo(*i);
}

to this:

no_exit:                ; preds = %entry, %no_exit
	...
        %exitcond = seteq uint %indvar.next, 100                ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit



instead of this:

no_exit:                ; preds = %entry, %no_exit
	...
        %inc5 = getelementptr [100 x [40 x int]]* %arr, int 0, int 0, int %inc.rec              ; <int*> [#uses=1]
        %tmp.8 = seteq int* %inc5, getelementptr ([100 x [40 x int]]* %arr, int 0, int 100, int 0)              ; <bool> [#uses=1]
        %indvar.next = add uint %indvar, 1              ; <uint> [#uses=1]
        br bool %tmp.8, label %return, label %no_exit

llvm-svn: 19536
2005-01-13 22:25:21 +00:00
Chris Lattner 4cb9fa373b Fix some bugs in code I didn't mean to check in.
llvm-svn: 19534
2005-01-13 20:40:58 +00:00
Chris Lattner 0798af33a5 Fix a crash compiling 129.compress
llvm-svn: 19533
2005-01-13 20:14:25 +00:00
Chris Lattner fdfe3e49fe Fix uint64_t -> unsigned VS warnings.
llvm-svn: 19381
2005-01-08 19:42:22 +00:00
Chris Lattner 86102b8ad5 This is a bulk commit that implements the following primary improvements:
  * We can now fold cast instructions into select instructions that
    have at least one constant operand.
  * We now optimize expressions more aggressively based on bits that are
    known to be zero.  These optimizations occur a lot in code that uses
    bitfields even in simple ways.
  * We now turn more cast-cast sequences into AND instructions.  Before we
    would only do this if all types were unsigned.  Now only the
    middle type needs to be unsigned, guaranteeing a zero extend (see the
    sketch after this list).
  * We transform sign extensions into zero extensions in several cases.
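
For the cast-cast case, a sketch of the kind of sequence that now becomes an
AND (illustrative, not one of the testcases listed below):

        %t = cast int %X to ubyte               ; truncate to the unsigned middle type
        %r = cast ubyte %t to int               ; ubyte is unsigned, so this zero extends

becomes

        %r = and int %X, 255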

This corresponds to these test/Regression/Transforms/InstCombine testcases:
  2004-11-22-Missed-and-fold.ll
  and.ll: test28-29
  cast.ll: test21-24
  and-or-and.ll
  cast-cast-to-and.ll
  zeroext-and-reduce.ll

llvm-svn: 19220
2005-01-01 16:22:27 +00:00
Chris Lattner 9ad0d55025 Constant exprs are not efficiently negatable in practice. This disables
turning X - (constantexpr) into X + (-constantexpr) among other things.

llvm-svn: 18935
2004-12-14 20:08:06 +00:00
Chris Lattner bf5b7cf638 Optimize div/rem + select combinations more.
In particular, implement div.ll:test10 and rem.ll:test4.

llvm-svn: 18838
2004-12-12 21:48:58 +00:00
Chris Lattner 36d39cecb4 note to self: Do not check in debugging code!
llvm-svn: 18693
2004-12-09 07:15:52 +00:00
Chris Lattner f17a2fb849 Implement trivial sinking for load instructions. This causes us to sink 567 loads in spec
llvm-svn: 18692
2004-12-09 07:14:34 +00:00
Chris Lattner 39c98bb31c Do extremely simple sinking of instructions when they are only used in a
successor block.  This turns cases like this:

x = a op b
if (c) {
  use x
}

into:

if (c) {
  x = a op b
  use x
}

This triggers 3965 times in spec, and is tested by
Regression/Transforms/InstCombine/sink_instruction.ll

This appears to expose a bug in the X86 backend for 177.mesa, which I'm
looking in to.

llvm-svn: 18677
2004-12-08 23:43:58 +00:00
Alkis Evlogimenos a1291a0679 Fix this regression and remove the XFAIL from this test.
llvm-svn: 18674
2004-12-08 23:10:30 +00:00
Chris Lattner 8f30caf549 Fix Transforms/InstCombine/2004-12-08-RemInfiniteLoop.ll
llvm-svn: 18670
2004-12-08 22:20:34 +00:00
Reid Spencer 279fa256a2 Fix for PR454:
* Make sure we handle signed to unsigned conversion correctly
* Move this visitSetCondInst case to its own method.

llvm-svn: 18312
2004-11-28 21:31:15 +00:00
Chris Lattner 14f3cdc227 Implement Regression/Transforms/InstCombine/getelementptr_cast.ll, which
occurs many times in crafty

llvm-svn: 18273
2004-11-27 17:55:46 +00:00
Chris Lattner 953075442d Delete stoppoints that occur for the same source line.
llvm-svn: 17970
2004-11-18 21:41:39 +00:00
Chris Lattner 97013636cd Quiet warnings on the persephone tester
llvm-svn: 17821
2004-11-15 05:54:07 +00:00
Chris Lattner 46dd5a6304 This optimization makes MANY phi nodes that all have the same incoming value.
If this happens, detect it early instead of relying on instcombine to notice
it later.  This can be a big speedup, because PHI nodes can have many
incoming values.

llvm-svn: 17741
2004-11-14 19:29:34 +00:00
Chris Lattner 7515cabe2a Implement instcombine/phi.ll:test6 - pulling operations through PHI nodes.
This exposes subsequent optimization possibilities and reduces code size.
This triggers 1423 times in spec.
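
A sketch of the idea (hypothetical blocks and values): when every incoming
value of a PHI is the same operation applied to different operands, the
operation can be pulled through the PHI.

        %A1 = add int %V1, 1            ; in %BB1
        %A2 = add int %V2, 1            ; in %BB2
        ...
        %R = phi int [ %A1, %BB1 ], [ %A2, %BB2 ]

becomes

        %P = phi int [ %V1, %BB1 ], [ %V2, %BB2 ]
        %R = add int %P, 1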

llvm-svn: 17740
2004-11-14 19:13:23 +00:00
Chris Lattner 15ff1e1885 Transform this:
%X = alloca ...
  %Y = alloca ...
    X == Y

into false.  This allows us to simplify some stuff in eon (and probably
many other C++ programs) where operator= was checking for self assignment.
Folding this allows us to SROA several additional structs.

llvm-svn: 17735
2004-11-14 07:33:16 +00:00
Chris Lattner 8c3e7b92af Simplify handling of shifts to be the same as we do for adds. Add support
for (X * C1) + (X * C2) (where * can be mul or shl), allowing us to fold:

   Y+Y+Y+Y+Y+Y+Y+Y

into
         %tmp.8 = shl long %Y, ubyte 3           ; <long> [#uses=1]

instead of

        %tmp.4 = shl long %Y, ubyte 2           ; <long> [#uses=1]
        %tmp.12 = shl long %Y, ubyte 2          ; <long> [#uses=1]
        %tmp.8 = add long %tmp.4, %tmp.12               ; <long> [#uses=1]

This implements add.ll:test25

Also add support for (X*C1)-(X*C2) -> X*(C1-C2), implementing sub.ll:test18

llvm-svn: 17704
2004-11-13 19:50:12 +00:00
Chris Lattner 4efe20a103 Fold:
(X + (X << C2)) --> X * ((1 << C2) + 1)
   ((X << C2) + X) --> X * ((1 << C2) + 1)

This means that we now canonicalize "Y+Y+Y" into:

        %tmp.2 = mul long %Y, 3         ; <long> [#uses=1]

instead of:

        %tmp.10 = shl long %Y, ubyte 1          ; <long> [#uses=1]
        %tmp.6 = add long %Y, %tmp.10               ; <long> [#uses=1]

llvm-svn: 17701
2004-11-13 19:31:40 +00:00
Chris Lattner 33eb909939 Fix some warnings on VC++
llvm-svn: 17481
2004-11-05 04:45:43 +00:00
Chris Lattner 96f6616479 * Rearrange code slightly
* Disable broken transforms for simplifying (setcc (cast X to larger), CI)
  where CC is not != or ==

llvm-svn: 17422
2004-11-02 03:50:32 +00:00