transformation much more careful. Truncating binary '01' to '1' sounds
safe until you realize that the value switches from positive to negative
under a signed interpretation, and whether that matters depends on the
icmp predicate.
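A worked illustration of the hazard, in ordinary C++ rather than anything
from the patch:

  #include <cstdio>

  int main() {
    // The 2-bit pattern 01 is +1 whether read signed or unsigned.
    unsigned wide = 0b01;
    // Truncating to 1 bit keeps only the low bit: the pattern 1.
    unsigned narrow = wide & 0b1;
    printf("unsigned: %u\n", narrow);        // still +1
    // Read as a signed 1-bit value, a set sign bit means -1, so a
    // signed predicate like icmp sgt now sees a different number.
    printf("signed: %d\n", narrow ? -1 : 0); // -1
    return 0;
  }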
Also a few miscellaneous cleanups.
llvm-svn: 97721
operators.
The test difference is just due to the multiplication operands
being commuted (and thus requiring a more elaborate match). In
optimized code, that expression would be folded.
llvm-svn: 96816
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
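A minimal sketch of the idioms involved, using the ConstantExpr routines
mentioned above (include paths are from a modern tree; at the time of this
commit they lived in llvm/Constants.h and llvm/DerivedTypes.h):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  using namespace llvm;

  Constant *sizeAndOffset(StructType *STy) {
    // getSizeOf builds the target-independent idiom
    //   ptrtoint (%T* getelementptr (%T* null, i32 1) to i64)
    // which has no numeric value without TargetData but can still be
    // reassociated and folded symbolically.
    Constant *Size = ConstantExpr::getSizeOf(STy);
    // getOffsetOf builds the analogous gep-from-null to a field
    // (field 1 assumed to exist here).
    Constant *Off = ConstantExpr::getOffsetOf(STy, 1u);
    (void)Off;
    return Size;
  }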
llvm-svn: 94987
getelementptr (i8* inttoptr (i64 1 to i8*), i32 -1)
to
inttoptr (i64 0 to i8*)
from the VMCore constant folder. It didn't handle sign extension properly
in the case where the source integer is narrower than the pointer size,
and it relied on an assumption about sizeof(i8).
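A worked illustration of the sign-extension point, in plain C++ and
assuming 64-bit pointers:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t base = 1;  // inttoptr (i64 1 to i8*)
    int32_t idx = -1;   // the i32 GEP index
    // Correct: sign-extend the index to pointer width; 1 + (-1) == 0.
    uint64_t good = base + (uint64_t)(int64_t)idx;
    // Wrong: zero-extending turns -1 into 0xFFFFFFFF, so the result
    // becomes 0x100000000 instead of 0.
    uint64_t bad = base + (uint64_t)(uint32_t)idx;
    printf("%#llx %#llx\n", (unsigned long long)good,
           (unsigned long long)bad);
    return 0;
  }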
The Analysis constant folder still folds these kinds of things; it has
access to TargetData, so it can do them right.
Add a testcase which tests that the VMCore constant folder doesn't
miscompile this, and that the Analysis folder does fold it.
llvm-svn: 94750
use plain SCEVUnknowns with ConstantExpr::getSizeOf and
ConstantExpr::getOffsetOf constants. This eliminates a bunch of
special-case code.
Also add code for pattern-matching these expressions, for clients that
want to recognize them.
Move ScalarEvolution's logic for expanding array and vector sizeof
expressions into an element count times the element size into the
regular constant folder, to expose the multiplication to subsequent
folding.
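As a sketch (modern include paths assumed):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  Constant *arraySizeOf(LLVMContext &Ctx) {
    ArrayType *ATy = ArrayType::get(Type::getInt32Ty(Ctx), 10);
    // After this change the folder expands sizeof([10 x i32]) into
    // element count times element size, conceptually
    //   mul (i64 10, ptrtoint (i32* getelementptr (i32* null, i32 1) to i64))
    // exposing the multiplication to later folding.
    return ConstantExpr::getSizeOf(ATy);
  }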
llvm-svn: 94737
if one of the vectors had no elements (such as undef). Fixes PR6096.
Fix an issue in the constant folder where fcmp (<2 x %ty>, <2 x %ty>) would
have <2 x i1> type if constant folding was successful and i1 type if it wasn't.
This exposed a related issue in the bitcode reader.
llvm-svn: 94069
In the new world order, BlockAddress can have a BasicBlock operand.
This doesn't perturb much, because if you have a ConstantExpr (or
anything more specific than Constant) we still know the operand has
to be a Constant.
llvm-svn: 85375
allowing it to simplify the crazy constantexprs in the testcases
down to something sensible. This allows -std-compile-opts to
completely "devirtualize" the pointers to member functions in
the testcase from PR5176.
llvm-svn: 84368
the new predicates I added) instead of going through a context and doing a
pointer comparison. Besides being cheaper, this allows a smart compiler
to turn the if sequence into a switch.
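The shape of the change, sketched with predicates that exist in today's
tree:

  #include "llvm/IR/Type.h"
  using namespace llvm;

  bool isSmallFP(const Type *Ty) {
    // Before: fetch singleton types from a context and compare pointers:
    //   Ty == Type::getFloatTy(Ctx) || Ty == Type::getDoubleTy(Ctx)
    // After: cheap predicates test the type ID directly, and a chain of
    // them can be compiled into a switch on the ID.
    return Ty->isFloatTy() || Ty->isDoubleTy();
  }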
llvm-svn: 83297
how to fold notionally-out-of-bounds array getelementptr indices instead
of just doing these in lib/Analysis/ConstantFolding.cpp, because it can
be done in a fairly general way without TargetData, and because not all
constants are visited by lib/Analysis/ConstantFolding.cpp. This enables
more constant folding.
Also, set the "inbounds" flag when the getelementptr indices are
one-past-the-end.
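The index arithmetic itself, as a stand-alone sketch (my names, and
non-negative indices assumed):

  #include <cstdint>
  #include <utility>

  // Carry a notionally out-of-bounds inner index into the outer index,
  // as the folder does for
  //   getelementptr ([N x [Len x i32]]* @a, i64 0, i64 Outer, i64 Inner)
  std::pair<int64_t, int64_t> normalize(int64_t Outer, int64_t Inner,
                                        int64_t Len) {
    Outer += Inner / Len; // whole inner arrays stepped over
    Inner %= Len;         // remaining offset within one inner array
    return {Outer, Inner};
  }
  // normalize(0, 5, 2) == {2, 1}: element 5 of a [2 x i32] row is
  // element 1 of the row two further along.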
llvm-svn: 81483
within the notional bounds of the static type of the getelementptr (which
is not the same as "inbounds") from GlobalOpt into a utility routine,
and use it in ConstantFold.cpp to check whether there are any misbehaved
indices.
llvm-svn: 81478
and exact flags. Because ConstantExprs are uniqued, creating an
expression with this flag causes all expressions with the same operands
to have the same flag, which may not be safe. Add, sub, mul, and sdiv
ConstantExprs are usually folded anyway, so the main interesting flag
here is inbounds, and the constant folder already knows how to set the
inbounds flag automatically in most cases, so there isn't an urgent need
for the API support.
This can be reconsidered in the future, but for now just removing these
API bits eliminates a source of potential trouble with little downside.
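The uniquing hazard, concretely (getAdd as the API stood at the time;
A and B are whatever i32 constants the caller has):

  #include "llvm/IR/Constants.h"
  #include <cassert>
  using namespace llvm;

  void demo(Constant *A, Constant *B) {
    // ConstantExprs are uniqued on opcode and operands, so both calls
    // return the very same node.
    Constant *E1 = ConstantExpr::getAdd(A, B);
    Constant *E2 = ConstantExpr::getAdd(A, B);
    assert(E1 == E2);
    // An API letting one caller mark its copy 'nsw' would therefore put
    // the flag on every other user's identical expression too.
  }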
llvm-svn: 80959
This adds location info for all llvm_unreachable calls (which is a macro now) in
!NDEBUG builds.
In NDEBUG builds, the location info and the message are omitted (it only
prints "UNREACHABLE executed").
llvm-svn: 75640
This involves temporarily hard-wiring some parts to use the global
context. This isn't ideal, but it's the only way I could figure out to
make this process vaguely incremental.
llvm-svn: 75445
Make llvm_unreachable take an optional string, thus moving the cerr<< out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
llvm-svn: 75379
create separate recursive mutexes for each value map. The recursiveness
fixes the double-acquisition issue, while having one mutex per ValueMap
lets us continue to maintain some concurrency.
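The shape of the fix as a stand-alone sketch, with std::recursive_mutex
standing in for the tree's own mutex type:

  #include <map>
  #include <mutex>

  template <typename K, typename V> class UniquingMap {
    std::recursive_mutex Lock; // one per map, and recursive
    std::map<K, V> Impl;
  public:
    V &getOrInsert(const K &Key) {
      std::lock_guard<std::recursive_mutex> G(Lock);
      // Re-entering this map on the same thread re-acquires Lock safely,
      // while other maps (with their own mutexes) stay usable in parallel.
      return Impl[Key];
    }
  };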
llvm-svn: 73801
gets involved, and we end up trying to recursively acquire a writer lock. The fix for this is slightly horrible,
and involves passing a boolean "locked" parameter around in Constants.cpp, but it's better than having locked and
unlocked versions of most of the code.
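A schematic of that "locked" parameter threading (names hypothetical;
the real code is in Constants.cpp):

  #include <mutex>

  static std::mutex ConstantsLock; // stand-in for the real writer lock

  static void destroyConstantImpl(bool Locked) {
    // The caller says whether it already holds the lock, so one body
    // serves both the locked and the unlocked entry points without
    // ever re-acquiring.
    if (!Locked)
      ConstantsLock.lock();
    // ... update the uniquing tables ...
    if (!Locked)
      ConstantsLock.unlock();
  }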
llvm-svn: 73790
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
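For example (modern include paths; FixedVectorType is today's spelling of
a fixed-width vector type):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  using namespace llvm;

  Constant *splatOne(LLVMContext &Ctx) {
    // With vector support in ConstantInt::get, passing a vector type
    // yields a splat <4 x i32> of 1 instead of asserting.
    Type *VecTy = FixedVectorType::get(Type::getInt32Ty(Ctx), 4);
    return ConstantInt::get(VecTy, 1);
  }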
llvm-svn: 73431
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
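At the IRBuilder level, the new opcodes look like this (a sketch using
today's IRBuilder):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  Value *fmadd(IRBuilder<> &B, Value *X, Value *Y, Value *Z) {
    // Floating-point arithmetic now has its own opcodes; these create
    // Instruction::FMul / Instruction::FAdd rather than Mul / Add.
    Value *P = B.CreateFMul(X, Y);
    return B.CreateFAdd(P, Z);
  }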
llvm-svn: 72897
be folded. Instead, fail to fold the entire vector.
We could also return a vector with some elements folded and some not. If anyone
thinks that's a better approach, please speak up!
llvm-svn: 55689
1) evaluate [v]fcmp true/false with undefs to true or false instead
of undef.
2) fix vector comparisons with undef to return a vector result instead
of i1
3) fix vector comparisons with evaluatable results to return vector
true/false instead of i1 true/false (PR2529)
llvm-svn: 53220