* Re-enable the APInt version of MaskedValueIsZero.
* APIntify the Compute{Un}SignedMinMaxValuesFromKnownBits functions.
* APIntify visitICmpInst.
llvm-svn: 35270
* Fix some indentation and comments in InsertRangeTest
* Add an "IsSigned" parameter to AddWithOverflow and make it handle signed
additions. Also, APIntify this function so it works with any bitwidth.
* For the icmp pred ([us]div %X, C1), C2 transforms, exit early if the
div instruction's RHS is zero.
* Finally, for icmp pred (sdiv %X, C1), -C2, fix an off-by-one error. The
HiBound needs to be incremented in order to get the range test correct.
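For the AddWithOverflow change above, the signed case boils down to a sign
comparison: the operands agree in sign but the result does not. A minimal
standalone sketch (illustrative name, fixed 64-bit width in place of APInt):

#include <cstdint>

// Overflow detection for a fixed-width add, for both interpretations.
bool addWithOverflowSketch(uint64_t a, uint64_t b, uint64_t &result,
                           bool isSigned) {
  result = a + b;                       // wraps modulo 2^64
  if (!isSigned)
    return result < a;                  // unsigned overflow: the sum wrapped
  bool aNeg = (int64_t)a < 0;
  bool bNeg = (int64_t)b < 0;
  bool rNeg = (int64_t)result < 0;
  return aNeg == bNeg && rNeg != aNeg;  // signed overflow: the sign flipped
}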
llvm-svn: 35247
1. Replace getSignedMinValue() with getSignBit() for better code readability.
2. Replace APIntOps::shl() with operator<<= for convenience.
3. Make APInt construction more efficient.
llvm-svn: 35060
Provide an APIntified version of MaskedValueIsZero. This will (temporarily)
cause a "defined but not used" message from the compiler. It will be used
in the next patch in this series.
Patch by Sheng Zhou.
llvm-svn: 35019
Add a new ComputeMaskedBits function that is APIntified. We'll slowly
convert things over to use this version. When it's all done, we'll remove
the existing version.
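For reference, MaskedValueIsZero is essentially a query against the zero bits
that ComputeMaskedBits proves. A minimal sketch of the idea (illustrative
names, a fixed 64-bit width standing in for APInt):

#include <cstdint>

struct KnownBits64 {
  uint64_t Zero;  // bits proven to be 0
  uint64_t One;   // bits proven to be 1
};

// Example transfer function: for an AND, a bit is known zero if it is known
// zero on either side, and known one only if it is known one on both sides.
KnownBits64 knownBitsForAnd(KnownBits64 lhs, KnownBits64 rhs) {
  KnownBits64 out;
  out.Zero = lhs.Zero | rhs.Zero;
  out.One = lhs.One & rhs.One;
  return out;
}

// MaskedValueIsZero: every bit selected by the mask is proven zero.
bool maskedValueIsZeroSketch(KnownBits64 known, uint64_t mask) {
  return (known.Zero & mask) == mask;
}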
llvm-svn: 35018
the order that instcombine processed instructions in the testcase. The end
result is that instcombine finished with:
define i16 @test1(i16 %a) {
%tmp = zext i16 %a to i32 ; <i32> [#uses=2]
%tmp21 = lshr i32 %tmp, 8 ; <i32> [#uses=1]
%tmp5 = shl i32 %tmp, 8 ; <i32> [#uses=1]
%tmp.upgrd.32 = or i32 %tmp21, %tmp5 ; <i32> [#uses=1]
%tmp.upgrd.3 = trunc i32 %tmp.upgrd.32 to i16 ; <i16> [#uses=1]
ret i16 %tmp.upgrd.3
}
which can't get matched as a bswap.
This patch makes instcombine more sophisticated about removing truncating
casts, allowing it to turn this into:
define i16 @test2(i16 %a) {
%tmp211 = lshr i16 %a, 8
%tmp52 = shl i16 %a, 8
%tmp.upgrd.323 = or i16 %tmp211, %tmp52
ret i16 %tmp.upgrd.323
}
which then matches as bswap. This fixes bswap.ll and implements
InstCombine/cast2.ll:test[12]. This also implements cast elimination of
add/sub.
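The add/sub case is sound because addition done wide and then truncated gives
the same low bits as addition done at the narrow width. A quick standalone
check of that identity (plain C++, not the transform's code):

#include <cstdint>
#include <cstdio>

int main() {
  uint16_t a = 0xFFF0, b = 0x0123;
  uint16_t wide = (uint16_t)((uint32_t)a + (uint32_t)b);  // zext, add i32, trunc
  uint16_t narrow = (uint16_t)(a + b);                    // add i16 directly
  printf("%04X %04X\n", wide, narrow);                    // both print 0113
  return 0;
}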
llvm-svn: 34870
a value from the worklist required scanning the entire worklist to remove all
entries. We now use a combined map+vector to prevent duplicates and to
avoid the scan. This speeds up instcombine on a large file
from the llvm-gcc bootstrap from 189.7s to 4.84s in a debug build and from
5.04s to 1.37s in a release build.
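Roughly, the structure is the one sketched below (illustrative code, not the
actual InstCombine worklist): the vector keeps the visit order, while the map
rejects duplicates and lets removal null out a slot instead of scanning.

#include <map>
#include <vector>

struct Instruction;  // stand-in for llvm::Instruction

class WorklistSketch {
  std::vector<Instruction *> List;              // visit order; removed slots become null
  std::map<Instruction *, unsigned> Positions;  // instruction -> index in List
public:
  void add(Instruction *I) {
    if (Positions.insert(std::make_pair(I, (unsigned)List.size())).second)
      List.push_back(I);                        // duplicates are rejected by the map
  }
  void remove(Instruction *I) {
    std::map<Instruction *, unsigned>::iterator It = Positions.find(I);
    if (It == Positions.end())
      return;
    List[It->second] = 0;                       // no O(n) erase from the vector
    Positions.erase(It);
  }
  Instruction *pop() {
    while (!List.empty()) {
      Instruction *I = List.back();
      List.pop_back();
      if (I) {                                  // skip slots nulled by remove()
        Positions.erase(I);
        return I;
      }
    }
    return 0;
  }
};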
llvm-svn: 34848
Make the Module's dependent library list use a std::vector instead of a
SetVector, and adjust #includes in .cpp files because SetVector.h is no
longer included.
llvm-svn: 33855
This feature is needed in order to support shifts of more than 255 bits
on large integer types. This changes the syntax for LLVM assembly to
make the shl, ashr, and lshr instructions look like binary operators:
shl i32 %X, 1
instead of
shl i32 %X, i8 1
This should also help a few passes perform additional optimizations.
llvm-svn: 33776
pessimization where instcombine can sink a load (good for code size) that
prevents an alloca from being promoted by mem2reg (bad for everything).
llvm-svn: 33771
This occurs in C++ code like:
#include <iostream>
#include <iterator>
int a[] = { 1, 2, 3, 4, 5 };
int main() {
  using namespace std;
  copy(a, a + sizeof(a)/sizeof(a[0]), ostream_iterator<int>(cout, "\n"));
  return 0;
}
Before we would decide the loop trip count is:
sdiv (i32 sub (i32 ptrtoint (i32* getelementptr ([5 x i32]* @a, i32 0, i32 5) to i32), i32 ptrtoint ([5 x i32]* @a to i32)), i32 4)
Now we decide it is "5". Amazing.
This code will need to be refactored, but I'm doing that as a separate
commit.
llvm-svn: 33665
changes: (1) don't special-case i1 any more, (2) use the new
TargetData::getTypeSizeInBits method to ensure source and dest are the
same bit width.
llvm-svn: 33427
We only want to do this if the source and destination types have the same
bit width. This patch uses TargetData::getTypeSizeInBits() instead of
special-casing integer types, and it avoids the transform if the bit
widths don't match.
llvm-svn: 33414
This is the final patch for this PR. It implements some minor cleanup
in the use of IntegerType, to wit:
1. Type::getIntegerTypeMask -> IntegerType::getBitMask
2. Type::Int*Ty changed to IntegerType* from Type*
3. ConstantInt::getType() returns IntegerType* now, not Type*
This also fixes PR1120.
Patch by Sheng Zhou.
llvm-svn: 33370
transform. Change some variable names so it is clear which is the source
and which is the dest of the cast. Also, add an assert so that the
integer-to-integer case fails if the bit widths differ. This prevents
illegal casts from being formed and catches bitwidth bugs sooner.
llvm-svn: 33337
because TargetData::getTypeSize() returns the same size for i1 and i8. This fix
is not right for the full generality of bitwise types, but it fixes the
regression.
llvm-svn: 33237
rename Type::getIntegralTypeMask to Type::getIntegerTypeMask.
This makes the naming much more consistent. For example, there are no longer any
instances of IntegerType that are not considered isInteger! :)
llvm-svn: 33225
Implement the arbitrary bit-width integer feature. The feature allows
integers of any bitwidth (up to 64) to be defined instead of just 1, 8,
16, 32, and 64 bit integers.
This change does several things:
1. Introduces a new Derived Type, IntegerType, to represent the number of
bits in an integer. The Type class's SubclassData field is used to
store the number of bits, which allows up to 2^23 bits in an integer type.
2. Removes the five integer Type::TypeID values for the 1, 8, 16, 32 and
64-bit integers. These are replaced with just IntegerType, which is no
longer a primitive type.
3. Adjusts the rest of LLVM to account for this change.
Note that while this incremental change lays the foundation for arbitrary
bit-width integers, LLVM has not yet been converted to actually deal with
them in any significant way. Most optimization passes, for example, will
still only deal with the byte-width integer types. Future increments
will rectify this situation.
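A rough sketch of the representation described in point 1 (an illustrative
class, not the actual llvm::IntegerType): the only per-type state is the bit
count, and the bit mask falls out of it.

#include <cassert>
#include <cstdint>
#include <cstdio>

class IntegerTypeSketch {
  unsigned NumBits;  // the real class stores this in Type's SubclassData field
public:
  explicit IntegerTypeSketch(unsigned Bits) : NumBits(Bits) { assert(Bits >= 1); }
  unsigned getBitWidth() const { return NumBits; }
  // Counterpart of IntegerType::getBitMask for widths that fit in 64 bits.
  uint64_t getBitMask() const {
    assert(NumBits <= 64 && "this sketch only handles up to 64 bits");
    return NumBits == 64 ? ~0ULL : ((1ULL << NumBits) - 1);
  }
};

int main() {
  IntegerTypeSketch I17(17);  // e.g. an i17 type
  printf("i%u mask = 0x%llX\n", I17.getBitWidth(),
         (unsigned long long)I17.getBitMask());  // prints: i17 mask = 0x1FFFF
  return 0;
}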
llvm-svn: 33113
recommended that getBoolValue be replaced with getZExtValue and that
get(bool) be replaced by get(const Type*, uint64_t). This implements
those changes.
llvm-svn: 33110
This patch converts getPrimitiveSize to getPrimitiveSizeInBits where it is
appropriate to do so (comparison of integer primitive types).
llvm-svn: 33012
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int -> Int32
4. [U]Long -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion,
and other methods related to signedness. In a few places this required
recovering the signedness information from other sources.
llvm-svn: 32785
Fix this by ensuring that a bitcast is inserted to do sign switching. This
is only temporarily needed as the merging of signed and unsigned is next
on the SignlessTypes plate.
llvm-svn: 32757
This patch removes the SetCC instructions and replaces them with the ICmp
and FCmp instructions. The SetCondInst class has been removed and
replaced with ICmpInst and FCmpInst.
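With signless operand types the predicate itself has to carry the signedness:
the same bit patterns order differently under icmp slt and icmp ult. A small
standalone illustration (plain C++, not the new instruction classes):

#include <cstdint>
#include <cstdio>

int main() {
  uint32_t a = 0xFFFFFFFFu;            // -1 as a signed i32, 4294967295 unsigned
  uint32_t b = 1u;
  bool ult = a < b;                    // unsigned compare (icmp ult): false
  bool slt = (int32_t)a < (int32_t)b;  // signed compare (icmp slt): true
  printf("ult=%d slt=%d\n", ult, slt);
  return 0;
}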
llvm-svn: 32751
creation. These changes are still temporary, but at least this pushes
knowledge of signedness out closer to where it can be determined properly,
and it allows signedness to be removed from VMCore.
llvm-svn: 32654
used to determine whether a ZExt or SExt cast is performed. Instead, pass
an "isSigned" bool to the function and determine its value from the opcode
of the cast involved.
Also, clean up some cruft from previous patches.
llvm-svn: 32548
The cast patch introduced the possibility that the wrong cast opcode
could be used and that this transform could trigger on different kinds
of cast operations. This patch rectifies that.
llvm-svn: 32538
The long-awaited CAST patch. This introduces 12 new instructions into LLVM
to replace the cast instruction. Corresponding changes throughout LLVM are
provided. This passes llvm-test, llvm/test, and SPEC CPUINT2000 with the
exception of 175.vpr, which fails only on a slight floating-point output
difference.
llvm-svn: 31931
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The new shift-right instructions no
longer depend on the sign of their operands.
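The difference the two opcodes make explicit, as a standalone illustration
(here relying on the usual arithmetic behavior of >> on negative signed
values):

#include <cstdint>
#include <cstdio>

int main() {
  uint32_t bits = 0x80000004u;                     // negative when viewed as a signed i32
  uint32_t lshr = bits >> 1;                       // logical shift: zero-fills, 0x40000002
  uint32_t ashr = (uint32_t)((int32_t)bits >> 1);  // arithmetic shift: sign-fills, 0xC0000002
  printf("lshr=0x%08X ashr=0x%08X\n", lshr, ashr);
  return 0;
}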
llvm-svn: 31542
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fallout by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
llvm-svn: 31380
Make the necessary changes to support DIV -> [SUF]Div. This changes LLVM to
have three division instructions: signed, unsigned, and floating point. The
bytecode and assembler remain backwards compatible, however.
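The split matters because, on signless types, the same bit pattern divides
differently under a signed and an unsigned interpretation. A quick standalone
illustration:

#include <cstdint>
#include <cstdio>

int main() {
  uint32_t bits = 0xFFFFFFF8u;       // the bit pattern of -8 as an i32
  int32_t sdiv = (int32_t)bits / 2;  // signed division:   -4
  uint32_t udiv = bits / 2u;         // unsigned division: 0x7FFFFFFC
  printf("sdiv=%d udiv=0x%08X\n", sdiv, udiv);
  return 0;
}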
llvm-svn: 31195
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.
llvm-svn: 31063
SimplifyDemandedBits. The idea is that some operations can be simplified if
not all of the computed elements are needed. Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elements through unmodified. If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.
This turns (for example):
ushort %Convert_sse(float %f) {
%tmp = insertelement <4 x float> undef, float %f, uint 0 ; <<4 x float>> [#uses=1]
%tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1 ; <<4 x float>> [#uses=1]
%tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2 ; <<4 x float>> [#uses=1]
%tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3 ; <<4 x float>> [#uses=1]
%tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
into:
ushort %Convert_sse(float %f) {
entry:
%tmp28 = sub float %f, 1.000000e+00 ; <float> [#uses=1]
%tmp37 = mul float %tmp28, 5.000000e-01 ; <float> [#uses=1]
%tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0 ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
which improves codegen from:
_Convert_sse:
movss LCPI1_0, %xmm0
movss 4(%esp), %xmm1
subss %xmm0, %xmm1
movss LCPI1_1, %xmm0
mulss %xmm0, %xmm1
movss LCPI1_2, %xmm0
minss %xmm0, %xmm1
xorps %xmm0, %xmm0
maxss %xmm0, %xmm1
cvttss2si %xmm1, %eax
andl $65535, %eax
ret
to:
_Convert_sse:
movss 4(%esp), %xmm0
subss LCPI1_0, %xmm0
mulss LCPI1_1, %xmm0
movss LCPI1_2, %xmm1
minss %xmm1, %xmm0
xorps %xmm1, %xmm1
maxss %xmm1, %xmm0
cvttss2si %xmm0, %eax
andl $65535, %eax
ret
This is just a first step; it can be extended in many ways. Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll
llvm-svn: 30752
Remove the Function pointer cast in these calls, converting it to
a cast of the argument.
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( int 10 )
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( uint %tmp51 )
llvm-svn: 28953
When doing the initial pass of constant folding, if we get a constantexpr,
simplify it the same way we would if the constant were folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
llvm-svn: 28224
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
improves the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
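A toy sketch of point #2 (illustrative structures only, not InstCombine's
code): while seeding the worklist, instructions that fold to constants or are
trivially dead are handled on the spot and never queued.

#include <cstdio>
#include <vector>

struct Inst {
  char op;          // '+' for add, 'c' for constant
  int value;        // payload for constants / folded results
  Inst *lhs, *rhs;
  bool used;        // stand-in for "has uses / has side effects"
};

int main() {
  Inst c2 = {'c', 2, 0, 0, true};
  Inst c3 = {'c', 3, 0, 0, true};
  Inst add = {'+', 0, &c2, &c3, true};    // foldable: both operands are constant
  Inst dead = {'+', 0, &c2, &c3, false};  // trivially dead: no uses
  Inst *insts[] = {&add, &dead};

  std::vector<Inst *> worklist;
  for (int i = 0; i != 2; ++i) {
    Inst *I = insts[i];
    if (!I->used)
      continue;                                  // trivial DCE: never queued
    if (I->op == '+' && I->lhs->op == 'c' && I->rhs->op == 'c') {
      I->op = 'c';
      I->value = I->lhs->value + I->rhs->value;  // trivial constprop
      continue;                                  // folded: no need to queue it
    }
    worklist.push_back(I);                       // only interesting work remains
  }
  printf("folded add -> %d, worklist size = %u\n", add.value,
         (unsigned)worklist.size());             // folded add -> 5, worklist size = 0
  return 0;
}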
llvm-svn: 28215