expression evaluation.
OrcMCJITReplacement is a reimplementation of MCJIT using ORC components, and
provides an easy upgrade path to ORC for existing MCJIT clients. There should be
no functional changes resulting from this switch.
llvm-svn: 279327
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which will become NodeRef later.
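For illustration, a minimal sketch of the convention after this change,
using a hypothetical graph type (MyGraph, MyNode, and the member names
are invented for the example, not part of this patch):

  #include "llvm/ADT/GraphTraits.h"
  #include <vector>

  struct MyNode { std::vector<MyNode *> Succs; };
  struct MyGraph { std::vector<MyNode *> Nodes; };

  namespace llvm {
  template <> struct GraphTraits<MyGraph *> {
    typedef MyNode NodeType;
    // nodes_iterator now dereferences to NodeType* uniformly, never to
    // NodeType&:
    typedef std::vector<MyNode *>::const_iterator nodes_iterator;
    static nodes_iterator nodes_begin(MyGraph *G) { return G->Nodes.begin(); }
    static nodes_iterator nodes_end(MyGraph *G) { return G->Nodes.end(); }
  };
  } // end namespace llvm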
Differential Revision: https://reviews.llvm.org/D23704
Differential Revision: https://reviews.llvm.org/D23705
llvm-svn: 279326
This reverts commit r279296. Including LLDBDependencies breaks the
netbsd lldb bot because it exposes LLDB_USED_LIBS, which causes
lldb_link_common_libs to run to completion in unintended sites, which
results in a malformed call to target_link_libraries.
http://lab.llvm.org:8011/builders/lldb-amd64-ninja-netbsd7/builds/5989
Thanks to Chris Bieneman for figuring this out!
llvm-svn: 279322
Summary:
This switches us to use a different, more powerful algorithm for address
space inference. I've tested this locally and it seems to work great.
Once we're more confident in it, we can remove the old pass altogether.
Reviewers: jingyue
Subscribers: llvm-commits, tra, jholewinski
Differential Revision: https://reviews.llvm.org/D23694
llvm-svn: 279317
This reverts commit r279053, reapplying r278974 after fixing PR29035
with r279104.
Note that r279312 has been committed in the meantime, and this has been
rebased on top of that. Otherwise it's identical to r278974.
Note for maintainers of out-of-tree code (that I missed in the original
message): if the new isKnownSentinel() assertion is firing from
ilist_iterator<>::operator*(), this patch has identified a bug in your
code. There are a few common patterns (sketched in code after this
list):
- Some IR-related APIs take an IRUnit* that might be nullptr, and pass
in an incremented iterator as an insertion point. Some old code was
using "&*++I", which in the case of end() only worked by fluke. If
the IRUnit in question inherits from ilist_node_with_parent<>, you can
use "I->getNextNode()". Otherwise, use "List.getNextNode(*I)".
- In most other cases, code that crashes on &*I just needs to check for I==end()
before dereferencing.
- There's also occasional code that sends iterators into a function, and
then starts calling I->getOperand() (or other API). Either check for
end() before entering the function, or exit early.
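The patterns above, condensed into code (the surrounding function is
illustrative only; Instruction stands in for any IRUnit):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  // Illustrative only: replacements for patterns the new
  // isKnownSentinel() assertion flags.
  void fixedPatterns(BasicBlock &BB, BasicBlock::iterator I) {
    // Pattern 2: check for end() before dereferencing &*I at all.
    if (I == BB.end())
      return;

    // Pattern 1: was "Instruction *InsertPt = &*++I;", which is UB when
    // ++I == end(). Instruction inherits from ilist_node_with_parent<>,
    // so use getNextNode(), which yields nullptr at the end of the list.
    // (For a type without ilist_node_with_parent<>, the equivalent is
    // List.getNextNode(*I).)
    Instruction *InsertPt = I->getNextNode();
    (void)InsertPt;
    I->print(errs());

    // Pattern 3: for helpers that immediately call I->getOperand() (or
    // other API), check for end() at the call site, or exit early inside
    // the helper, exactly as the early return above does.
  }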
Note, if the static_assert with HasObsoleteCustomization is firing
for you:
- r278513 has examples of how to stop using custom sentinel traits.
- r278532 removed ilist_nextprev_traits since no one was using it. See
lld's r278469 for the only migration I needed to do.
Original commit message follows.
----
This removes the undefined behaviour (UB) in ilist/ilist_node/etc.,
mainly by removing (gutting) the ilist_sentinel_traits customization
point and canonicalizing on a single, efficient memory layout. This
fixes PR26753.
The new ilist is a doubly-linked circular list.
- ilist_node_base has two ilist_node_base*: Next and Prev. Size-of: two
pointers.
- ilist_node<T> (size-of: two pointers) is a type-safe wrapper around
ilist_node_base.
- ilist_iterator<T> (size-of: two pointers) operates on an
ilist_node<T>*, and downcasts to T* on dereference.
- ilist_sentinel<T> (size-of: two pointers) is a wrapper around
ilist_node<T> that has some extra API for list management.
- ilist<T> (size-of: two pointers) has an ilist_sentinel<T>, whose
address is returned for end().
The new memory layout matches ilist_half_embedded_sentinel_traits<T>
exactly. The Head pointer that previously lived in ilist<T> is
effectively glued to the ilist_half_node<T> that lived in
ilist_half_embedded_sentinel_traits<T>, becoming the Next and Prev in
the ilist_sentinel<T>, respectively. sizeof(ilist<T>) is now the
size of two pointers, and there is never any additional storage for a
sentinel.
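In outline (a simplified sketch, not the exact headers; the real code
lives in llvm/include/llvm/ADT/ilist_node.h and friends):

  struct ilist_node_base {
    ilist_node_base *Prev = nullptr;  // size-of: two pointers
    ilist_node_base *Next = nullptr;
  };

  template <class T> struct ilist_node : ilist_node_base {};

  template <class T> struct ilist_sentinel : ilist_node<T> {
    // extra list-management API lives here
  };

  template <class T> class ilist {
    ilist_sentinel<T> Sentinel;  // sizeof(ilist<T>) == two pointers
    // Iterators hold an ilist_node<T>* and downcast to T* only on
    // dereference; end() returns an iterator at &Sentinel.
  };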
This is a much simpler design for a doubly-linked list, removing most of
the corner cases of list manipulation (add, remove, etc.). In follow-up
commits, I intend to move as many algorithms as possible into a
non-templated base class (ilist_base) to reduce code size.
Moreover, this fixes the UB in ilist_iterator/getNext/getPrev
operations. Previously, ilist_iterator<T> operated on a T*, even when
the sentinel was not of type T (i.e., ilist_embedded_sentinel_traits and
ilist_half_embedded_sentinel_traits). This added UB to all operations
involving end(). Now, ilist_iterator<T> operates on an ilist_node<T>*,
and only downcasts when the full type is guaranteed to be T*.
What did we lose? There used to be a crash (in some configurations) on
++end(). Curiously (via UB), ++end() would return begin() for users of
ilist_half_embedded_sentinel_traits<T>, but otherwise ++end() would
cause a nice dependable nullptr dereference, crashing instead of a
possible infinite loop. Options:
1. Lose that behaviour.
2. Keep it, by stealing a bit from Prev in asserts builds.
3. Crash on dereference instead, using the same technique.
Hans convinced me (because of the number of problems this and r278532
exposed on Windows) that we really need some assertion here, at least in
the short term. I've opted for #3 since I think it catches more bugs.
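A sketch of how #3 can work, assuming asserts builds steal the low bit
of Prev (simplified, with invented names; not the exact implementation):

  #include "llvm/ADT/PointerIntPair.h"
  #include <cassert>

  class node_base {
    // The low bit of Prev flags the sentinel in asserts builds.
    llvm::PointerIntPair<node_base *, 1> PrevAndSentinel;
    node_base *Next = nullptr;

  public:
    bool isKnownSentinel() const { return PrevAndSentinel.getInt(); }
  };

  template <class T> struct node_iterator {
    node_base *NodePtr;

    T &operator*() const {
      // Crash on dereference of end() instead of risking an infinite
      // loop or silent corruption:
      assert(!NodePtr->isKnownSentinel() && "dereferencing end()?");
      return *static_cast<T *>(NodePtr);
    }
  };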
I added only a couple of unit tests to root out specific bugs I hit
during bring-up, but otherwise this is tested implicitly via the
extensive usage throughout LLVM.
Planned follow-ups:
- Remove ilist_*sentinel_traits<T>. Here I've just gutted them to
prevent build failures in sub-projects. Once I stop referring to them
in sub-projects, I'll come back and delete them.
- Add ilist_base and move algorithms there.
- Check and fix move construction and assignment.
Eventually, there are other interesting directions:
- Rewrite reverse iterators, so that rbegin().getNodePtr()==&*rbegin().
This allows much simpler logic when erasing elements during a reverse
traversal.
- Remove ilist_traits::createNode, by deleting the remaining API that
creates nodes. Intrusive lists shouldn't be creating nodes
themselves.
- Remove ilist_traits::deleteNode, by (1) asserting that lists are empty
on destruction and (2) changing API that calls it to take a Deleter
functor (intrusive lists shouldn't be in the memory management
business).
- Reconfigure the remaining callback traits (addNodeToList, etc.) to be
higher-level, pulling out a simple_ilist<T> that is much easier to
read and understand.
- Allow tags (e.g., ilist_node<T,tag1> and ilist_node<T,tag2>) so that T
can be a member of multiple intrusive lists (sketched below).
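A hypothetical sketch of that tagging idea (nothing here exists yet;
the syntax follows the notation in the bullet above):

  struct tag1 {};
  struct tag2 {};

  // T would join two lists by inheriting one ilist_node base per tag;
  // each list disambiguates which base it links through via the tag:
  //   struct T : ilist_node<T, tag1>, ilist_node<T, tag2> {};
  //   ilist<T, tag1> ListA;
  //   ilist<T, tag2> ListB;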
llvm-svn: 279314
This spiritually reapplies r279012 (reverted in r279052) without the
r278974 parts. The differences:
- Only the HasGetNext trait exists here, so I've only cleaned up (and
tested) it. I still added HasObsoleteCustomization since I know
it will expand when r278974 is reapplied.
- I changed the unit tests to use static_assert to catch problems
earlier in the build.
- I added negative tests for the type traits.
Original commit message follows.
----
Change the ilist traits to use decltype instead of sizeof, and add
HasObsoleteCustomization so that additions to this list don't
need to be added in two places.
I suspect this will now work with MSVC, since the trait tested in
r278991 seems to work. If for some reason it continues to fail on
Windows I'll follow up by adding back the #ifndef _MSC_VER.
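A minimal, self-contained sketch of the decltype-based detection idiom
(the names and exact shape are illustrative; the real traits live in
llvm/include/llvm/ADT/ilist.h):

  #include <utility>

  template <class TraitsT, class NodeT> struct HasGetNext {
    typedef char Yes[1];
    typedef char No[2];

    // Viable only if U has a getNext(NodeT*) member; otherwise SFINAE
    // removes this overload and the variadic fallback is chosen.
    template <class U>
    static Yes &test(decltype(std::declval<U>().getNext(
                         std::declval<NodeT *>())) *);
    template <class> static No &test(...);

    static const bool value = sizeof(test<TraitsT>(nullptr)) == sizeof(Yes);
  };

  // static_assert catches problems at compile time, and supports
  // negative tests:
  struct WithGetNext { int *getNext(int *); };
  struct Without {};
  static_assert(HasGetNext<WithGetNext, int>::value, "positive test");
  static_assert(!HasGetNext<Without, int>::value, "negative test");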
llvm-svn: 279312
This adds a G_INSERT instruction, which technically makes G_SEQUENCE redundant
(it's equivalent to a G_INSERT into an IMPLICIT_DEF). We'll leave G_SEQUENCE
for now though: it's likely to be far more common as it's a fundamental part of
legalization, so avoiding the mess and bloat of the extra IMPLICIT_DEFs is
probably worthwhile.
llvm-svn: 279306
Summary: This way they can be re-used by target-specific schedulers.
Reviewers: atrick, MatzeB, kparzysz
Subscribers: kparzysz, llvm-commits, MatzeB
Differential Revision: https://reviews.llvm.org/D23678
llvm-svn: 279305
Specifically, this is done near the end of "SimplifyICmpInst" using
computeKnownBits() as the broader solution. There are even vector
tests (yay!) for this in test/Transforms/InstSimplify/compare.ll.
I considered putting an assert here instead of just deleting, but
then we could assert every possible InstSimplify fold inside
InstCombine, so... less is more?
llvm-svn: 279300
They can be deleted or replicated, so the cache may become outdated.
They only need to be visited once during frame lowering, so just scan
the function instead.
llvm-svn: 279297
It's pulling in all kinds of things it doesn't need (e.g., clang-tidy!).
Eliminating this dependency removes 1056 dependencies from the
'CommandObjectFrame.cpp.o' target and 454 dependencies from the 'lldb'
target. On my machine, this shaves 7 minutes off of a clean build of
lldb.
Thanks to Zachary Turner for pointing out some issues with an earlier
version of this patch!
Differential Revision: https://reviews.llvm.org/D22987
llvm-svn: 279296
was done to hopefully appease MSVC.
As an upside, this also implements the suggestion Sanjoy made in code
review, so two for one! =]
I'll be watching the bots to see if there are still issues.
llvm-svn: 279295
First, make sure all types involved are represented, rather than being implicit
from the register width.
Second, canonicalize all types to scalar. These operations just act on
bits and don't care about vectors.
Also standardize spelling of Indices in the MachineIRBuilder (NFC here).
llvm-svn: 279294
Unsigned addition and subtraction can reuse the instructions created to
legalize large width operations (i.e. both produce and consume a carry flag).
Signed operations and multiplies get a dedicated op-with-overflow instruction.
Once this is produced, the two values are combined into a struct register (which
will almost always be merged with a corresponding G_EXTRACT as part of
legalization).
llvm-svn: 279278
Repeated inserts into AliasSetTracker have quadratic behavior - inserting a
pointer into AST is linear, since it requires walking over all "may" alias
sets and running an alias check vs. every pointer in the set.
We can avoid this by tracking the total number of pointers in "may" sets,
and when that number exceeds a threshold, declare the tracker "saturated".
This lumps all pointers into a single "may" set that aliases every other
pointer.
(This is a stop-gap solution until we migrate to MemorySSA)
This fixes PR28832.
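A minimal sketch of the saturation strategy (the class, names, and
threshold here are invented for illustration; the real logic lives in
AliasSetTracker):

  #include <cstddef>

  struct SaturationCounter {
    static const std::size_t Threshold = 250; // hypothetical cutoff
    std::size_t TotalMayAliasPointers = 0;
    bool Saturated = false;

    // Called on each pointer added to a "may" alias set. Once the total
    // crosses the threshold, stop the per-pointer alias checks and lump
    // everything into a single conservative "may" set.
    void notePointerAdded() {
      if (Saturated)
        return;
      if (++TotalMayAliasPointers > Threshold)
        Saturated = true;
    }
  };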
Differential Revision: https://reviews.llvm.org/D23432
llvm-svn: 279274
This doesn't change test codegen, as we already combined to blend+zero,
which is what we lower VZEXT_MOVL to on SSE41+ targets, but it does put
us in a better position when we improve shuffling for optsize.
llvm-svn: 279273