Getelementptrs that are defined to wrap are virtually useless to
optimization, and getelementptrs that are undefined on any kind
of overflow are too restrictive -- it's difficult to ensure that
all intermediate addresses are within bounds. I'm going to take
a different approach.
Remove a few optimizations that depended on this flag.
llvm-svn: 76437
GEPOperator's hasNoPointerOverflow(), and make a few places in instcombine
that create GEPs that may overflow clear the NoOverflow value. Among
other things, this partially addresses PR2831.
llvm-svn: 76252
This adds location info for all llvm_unreachable calls (llvm_unreachable is a
macro now) in !NDEBUG builds.
In NDEBUG builds, the location info and the message are off (it only prints
"UNREACHABLE executed").
llvm-svn: 75640
which are effectively smart pointers to Value*'s. They are both very
lightweight and simple, and react to values being destroyed or RAUW'd.
WeakVH makes a best effort to follow a value around, including through RAUW
operations, and gets nulled out if the value is destroyed. This is useful
for the eventual "metadata that references a value" work, because it is a
reference to a value that does not show up on its use_* list.
AssertingVH is a pointer that compiles down to a dumb raw pointer when
assertions are disabled. When enabled, it emits an assertion if the
pointed-to value is destroyed while it is still being referenced. This
is very useful for Maps and other things, and should have caught the recent
bugs in CallGraph and Reassociate, for example.
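For illustration, a hedged usage sketch based on the description above (the
header path is from a current tree and the exact semantics may differ in newer
trees):

  // Hedged usage sketch; header path and surrounding setup are assumptions.
  #include "llvm/IR/ValueHandle.h"
  using namespace llvm;

  void track(Value *V, Value *Replacement) {
    WeakVH Weak(V);               // best-effort reference, not on V's use list
    AssertingVH<Value> Strict(V); // plain raw pointer when assertions are off

    V->replaceAllUsesWith(Replacement); // per the text, Weak follows the RAUW

    // Deleting V while Strict still refers to it would fire an assertion in
    // +Asserts builds; Weak would simply become null.
  }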
llvm-svn: 68149
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
  is not necessary: the Uses corresponding to the successors sit at a fixed
  offset from "this" (see the sketch after this list).
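For illustration, a hedged sketch of the kind of accessor this layout enables
(the operand ordering and header path are assumptions, not necessarily what
this commit implements):

  // Illustrative only: with the successor Uses placed at the end of the
  // operand list, a successor is reachable at a fixed negative offset from
  // op_end(), without going through OperandList.
  #include "llvm/IR/Instructions.h"   // header path from a current tree
  using namespace llvm;

  static BasicBlock *successorSketch(BranchInst &BI, unsigned Idx) {
    // Assumed layout: successor 0 is the last operand, successor 1 the
    // operand before it.
    return cast<BasicBlock>(BI.op_end()[-1 - (int)Idx].get());
  }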
The price we pay is the slightly more complicated logic in User::operator
delete, as it has to determine whether it is freeing the memory of an
originally unconditional BranchInst or of a BranchInst that was originally
conditional but has been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
llvm-svn: 66815
pointer bitcasts and GEP's", and centralize the
logic in Value::getUnderlyingObject. The
difference with stripPointerCasts is that
stripPointerCasts only strips GEPs if all
indices are zero, while getUnderlyingObject
strips GEPs no matter what the indices are.
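For illustration, a hedged and much-simplified sketch of the distinction
(header path and exact handling are assumptions; the real routine covers more
cases):

  // Simplified illustration, not the actual implementation: strip bitcasts
  // and *any* GEP, regardless of its indices, whereas stripPointerCasts
  // only looks through GEPs whose indices are all zero.
  #include "llvm/IR/Operator.h"
  using namespace llvm;

  static Value *underlyingObjectSketch(Value *V) {
    for (;;) {
      if (auto *GEP = dyn_cast<GEPOperator>(V))
        V = GEP->getPointerOperand();           // indices may be non-zero
      else if (Operator::getOpcode(V) == Instruction::BitCast)
        V = cast<Operator>(V)->getOperand(0);
      else
        return V;
    }
  }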
llvm-svn: 56922
_sabre_: it has a major problem: by the time ~Value is run, all of the "parts" of the derived classes have been destroyed
_sabre_: the vtable lives to fight another day
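For illustration, a small standalone C++ example of the destructor-ordering
behavior the quote alludes to (not code from this commit):

  // By the time the base-class destructor runs, the derived parts are gone
  // and the vptr has been reset, so virtual calls resolve to the base class.
  #include <iostream>

  struct Base {
    virtual const char *name() const { return "Base"; }
    virtual ~Base() { std::cout << name() << "\n"; }  // prints "Base"
  };

  struct Derived : Base {
    const char *name() const override { return "Derived"; }
  };

  int main() {
    Base *B = new Derived;
    delete B;  // Derived parts are destroyed first, then ~Base runs
    return 0;
  }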
llvm-svn: 44760