Summary:
It can be useful for tools to be able to retrieve the values of variables
declared via STATISTIC() directly without having to emit them and parse
them back. Use cases include:
* Needing to report specific statistics to a test harness
* Wanting to post-process statistics. For example, to produce a percentage of
functions that were fully selected by GlobalISel
Make this possible by adding llvm::GetStatistics(), which returns an
iterator_range that can be used to inspect the statistics that have been
touched during execution. When statistics are disabled (NDEBUG and not
LLVM_ENABLE_STATISTICS), this function returns an empty range.
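A hedged sketch of the intended use follows; the message only says the
function returns an iterator_range over the touched statistics, so the
element type and the accessors used below are assumptions:
#include "llvm/ADT/Statistic.h"
#include <cstdio>
void reportStatistics() {
  // Assumed: the range yields pointers to the underlying Statistic objects.
  for (const auto *S : llvm::GetStatistics())
    std::printf("%s = %llu\n", S->getName(),
                (unsigned long long)S->getValue());
}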
This patch doesn't address the effect of multiple compilations within the same
process. In such situations, the statistics will be cumulative for all
compilations up to the GetStatistics() call.
Reviewers: qcolombet, rtereshin, aditya_nandakumar, bogner
Reviewed By: rtereshin, bogner
Subscribers: llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D43901
This re-commit fixes a missing include of <vector>, which clang apparently
didn't mind but G++ and MSVC objected to. Clang was happy with std::vector
being only forward-declared at the point of use, since it was fully defined
later on, but G++ and MSVC both rejected it at the point of use.
llvm-svn: 326738
Despite building cleanly on my machine in three separate configs, it's failing on pretty much all bots due to missing includes among other things. Investigating.
llvm-svn: 326726
Summary:
It can be useful for tools to be able to retrieve the values of variables
declared via STATISTIC() directly without having to emit them and parse
them back. Use cases include:
* Needing to report specific statistics to a test harness
* Wanting to post-process statistics. For example, to produce a percentage of
functions that were fully selected by GlobalISel
Make this possible by adding llvm::GetStatistics(), which returns an
iterator_range that can be used to inspect the statistics that have been
touched during execution. When statistics are disabled (NDEBUG and not
LLVM_ENABLE_STATISTICS), this function returns an empty range.
This patch doesn't address the effect of multiple compilations within the same
process. In such situations, the statistics will be cumulative for all
compilations up to the GetStatistics() call.
Reviewers: qcolombet, rtereshin, aditya_nandakumar, bogner
Reviewed By: rtereshin, bogner
Subscribers: llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D43901
llvm-svn: 326723
Summary:
This eliminates the boilerplate implementation of the iterator interface in
mapped_iterator.
This patch also adds unit tests that verify that the mapped function is applied
by operator* and operator->, and that references returned by the map function
are returned via operator*.
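A small usage sketch (assuming the map_range helper in STLExtras.h, which
builds mapped_iterators over an underlying range):
#include "llvm/ADT/STLExtras.h"
#include <string>
#include <vector>
void example() {
  std::vector<std::string> Words = {"a", "bb", "ccc"};
  // Iterate over the lengths without materializing a second container.
  for (size_t Len : llvm::map_range(
           Words, [](const std::string &S) { return S.size(); }))
    (void)Len; // visits 1, 2, 3
}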
Reviewers: dblaikie, chandlerc
Subscribers: llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D39855
llvm-svn: 317902
Summary:
Given a flag (-mllvm -reverse-iterate), this patch enables iteration of SmallPtrSet in reverse order.
The idea is to compile the same source with and without this flag and expect the generated code not to change.
Any difference in codegen would mean that the codegen is sensitive to the iteration order of SmallPtrSet.
This is enabled only with LLVM_ENABLE_ABI_BREAKING_CHECKS.
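An illustrative sketch of the kind of order dependence the flag is meant to
flush out (not part of the patch):
#include "llvm/ADT/SmallPtrSet.h"
#include <cstdio>
void example(int *A, int *B, int *C) {
  llvm::SmallPtrSet<int *, 4> Set;
  Set.insert(A);
  Set.insert(B);
  Set.insert(C);
  // Iteration order depends on pointer values; with -mllvm -reverse-iterate
  // (under LLVM_ENABLE_ABI_BREAKING_CHECKS) it is reversed, so any code that
  // lets this order leak into its output is what the flag exposes.
  for (int *P : Set)
    std::printf("%p\n", (void *)P);
}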
Reviewers: chandlerc, dexonsmith, mehdi_amini
Subscribers: mgorny, emaste, llvm-commits
Differential Revision: https://reviews.llvm.org/D26718
llvm-svn: 289619
llvm::join_items is similar to llvm::join, which produces a string
by concatenating a sequence of values separated by a given
separator. It differs in that the arguments to llvm::join() are
same-type members of a container, whereas the arguments to
llvm::join_items are arbitrary types passed into a variadic
template. The only requirement on parameters to llvm::join_items
(including the separator itself) is that they be implicitly
convertible to std::string or have an overload of
std::string::operator+.
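A brief usage sketch contrasting the two helpers (illustrative, not the
actual unit tests):
#include "llvm/ADT/StringExtras.h"
#include <string>
#include <vector>
void example() {
  std::vector<std::string> Words = {"foo", "bar", "baz"};
  std::string A = llvm::join(Words.begin(), Words.end(), ", ");
  // A == "foo, bar, baz": same-type elements of a container.
  std::string B = llvm::join_items('/', "usr", std::string("local"), "bin");
  // B == "usr/local/bin": arbitrary string-like arguments; the separator
  // here is a plain char.
}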
Differential Revision: https://reviews.llvm.org/D24880
llvm-svn: 282502
- Add AllocatorList, a non-intrusive list that owns an LLVM-style
allocator and provides a std::list-like interface (trivially built on
top of simple_ilist),
- add a typedef (and unit tests) for BumpPtrList, and
- use BumpPtrList for the list of llvm::yaml::Token (i.e., TokenQueueT).
TokenQueueT has no need for the complexity of an intrusive list. The
only reason to inherit from ilist was to customize the allocator.
TokenQueueT was the only example in-tree of using ilist<> in a truly
non-intrusive way.
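A minimal usage sketch of BumpPtrList (exact member availability is assumed
from the std::list-like interface described above):
#include "llvm/ADT/AllocatorList.h"
void example() {
  llvm::BumpPtrList<int> List; // owns a BumpPtrAllocator internally
  List.push_back(1);
  List.push_back(2);
  List.push_front(0);
  for (int I : List)
    (void)I;                   // visits 0, 1, 2
  List.clear();                // nodes go away; memory stays in the owned
                               // allocator until the list itself dies
}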
Moreover, this removes the final use of the non-intrusive
ilist_traits<>::createNode (after r280573, r281177, and r281181). I
have a WIP patch that removes this customization point (and the API that
relies on it) that I plan to commit soon.
Note: AllocatorList owns the allocator, which limits the viable API
(e.g., splicing must be on the same list). For now I've left out
any problematic API. It wouldn't be hard to split AllocatorList into
two layers: an Impl class that calls DerivedT::getAlloc (via CRTP), and
derived classes that handle Allocator ownership/reference/etc semantics;
and then implement splice with appropriate assertions; but TBH we should
probably just customize the std::list allocators at that point.
llvm-svn: 281182
This adds two declarative configuration options for intrusive lists
(available for simple_ilist, iplist, and ilist). Both of these options
affect ilist_node interoperability and need to be passed both to the
node and the list. Instead of adding a new traits class, they're
specified as optional template parameters (in any order).
The two options:
1. Pass ilist_sentinel_tracking<true> or ilist_sentinel_tracking<false>
to control whether there's a bit on ilist_node "prev" pointer
indicating whether it's the sentinel. The default behaviour is to
use a bit if and only if LLVM_ENABLE_ABI_BREAKING_CHECKS.
2. Pass ilist_tag<TagA> and ilist_tag<TagB> to allow insertion of a
single node into two different lists (simultaneously).
I have an immediate use-case for (1) ilist_sentinel_tracking: fixing the
validation semantics of MachineBasicBlock::reverse_iterator to match
ilist::reverse_iterator (ala r280032: see the comments at the end of the
commit message there). I'm adding (2) ilist_tag in the same commit to
validate that the options framework supports expansion. Justin Bogner
mentioned this might enable a possible cleanup in SelectionDAG, but I'll
leave this to others to explore. In the meantime, the unit tests and
the comments for simple_ilist and ilist_node have usage examples.
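A usage sketch for (2), one node type participating in two lists at once via
tags (in the spirit of the examples referred to above):
#include "llvm/ADT/ilist_node.h"
#include "llvm/ADT/simple_ilist.h"
struct TagA {};
struct TagB {};
struct DoubleNode
    : llvm::ilist_node<DoubleNode, llvm::ilist_tag<TagA>>,
      llvm::ilist_node<DoubleNode, llvm::ilist_tag<TagB>> {};
void example() {
  DoubleNode N;
  llvm::simple_ilist<DoubleNode, llvm::ilist_tag<TagA>> ListA;
  llvm::simple_ilist<DoubleNode, llvm::ilist_tag<TagB>> ListB;
  ListA.push_back(N); // N is a member of both lists simultaneously,
  ListB.push_back(N); // each list using its own embedded prev/next links
}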
Note that there's a layer of indirection to support optional,
out-of-order template parameters. Internal classes are templated on an
instantiation of the non-variadic ilist_detail::node_options.
User-facing classes use ilist_detail::compute_node_options to compute
the correct instantiation of ilist_detail::node_options.
The comments for ilist_detail::is_valid_option describe how to add new
options (e.g., ilist_packed_int<int NumBits>).
llvm-svn: 281167
Split out a new, low-level intrusive list type with clear semantics.
Unlike iplist (and ilist), all operations on simple_ilist are intrusive,
and simple_ilist never takes ownership of its nodes. This enables an
intuitive API that has the right defaults for intrusive lists.
- insert() takes references (not pointers!) to nodes (in iplist/ilist,
passing a reference will cause the node to be copied).
- erase() takes only iterators (like std::list), and does not destroy
the nodes.
- remove() takes only references and has the same behaviour as erase().
- clear() does not destroy the nodes.
- The destructor does not destroy the nodes.
- New API {erase,remove,clear}AndDispose() take an extra Disposer
functor for callsites that want to call some disposal routine (e.g.,
std::default_delete).
This list is not currently configurable, and has no callbacks.
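A minimal usage sketch of the API above (simple_ilist never owns its nodes,
so the caller controls their lifetime):
#include "llvm/ADT/simple_ilist.h"
#include <memory>
struct Node : llvm::ilist_node<Node> {
  int Value;
  explicit Node(int V) : Value(V) {}
};
void example() {
  llvm::simple_ilist<Node> List;
  Node A(1), B(2);   // caller-owned stack nodes
  List.push_back(A); // insertion takes references; nodes are never copied
  List.push_back(B);
  List.remove(A);    // unlinks A; A is not destroyed
  List.clear();      // unlinks B; B is not destroyed either
  // For heap-allocated nodes, pair removal with an explicit disposer.
  List.push_back(*new Node(3));
  List.eraseAndDispose(List.begin(), std::default_delete<Node>());
}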
The initial motivation was to fix iplist<>::sort to work correctly (even
with callbacks in ilist_traits<>). iplist<> uses simple_ilist<>::sort
directly. The new test in unittests/IR/ModuleTest.cpp crashes without
this commit.
Fixing sort() via a low-level layer provided a good opportunity to:
- Unit test the low-level functionality thoroughly.
- Modernize the API, largely inspired by other intrusive list
implementations.
Here's a sketch of a longer-term plan:
- Create BumpPtrList<>, a non-intrusive list implemented using
simple_ilist<>, and use it for the Token list in
lib/Support/YAMLParser.cpp. This will factor out the only real use of
createNode().
- Evolve the iplist<> and ilist<> APIs in the direction of
simple_ilist<>, making allocation/deallocation explicit at call sites
(similar to simple_ilist<>::eraseAndDispose()).
- Factor out remaining calls to createNode() and deleteNode() and remove
the customization from ilist_traits<>.
- Transition uses of iplist<>/ilist<> that don't need callbacks over to
simple_ilist<>.
llvm-svn: 280107
And rename the tests inside from ilistTest to IListTest. This makes the
file sort properly in the CMakeLists.txt (previously, sorting would
throw it down to the end of the list) and is consistent with the tests
I've added more recently.
Why use IListNodeBaseTest.cpp (and a test name of IListNodeBaseTest)?
- ilist_node_base_test is the obvious thing, since this is testing
ilist_node_base. However, gtest disallows underscores in test names.
- ilist_node_baseTest fails for the same reason.
- ilistNodeBaseTest is weird, because it isn't in our usual
TitleCaseTest form that we use for tests, and it also doesn't have the
name of the tested class in it.
- IlistNodeBaseTest matches TitleCaseTest, but "Ilist" is hard to read,
and really "ilist" is an abbreviation for "IntrusiveList" so the
lowercase "list" is strange.
- That left IListNodeBaseTest.
Note: I made this move in two stages, with a temporary filename of
ilistTestTemp in between in r279524. This was in the hopes of avoiding
problems on Git and SVN clients on case-insensitive filesystems,
particularly on buildbots with incremental checkouts.
llvm-svn: 280033
Reverse iterators to doubly-linked lists can be simpler (and cheaper)
than std::reverse_iterator. Make it so.
In particular, change ilist<T>::reverse_iterator so that it is *never*
invalidated unless the node it references is deleted. This matches the
guarantees of ilist<T>::iterator.
(Note: MachineBasicBlock::iterator is *not* an ilist iterator, but a
MachineInstrBundleIterator<MachineInstr>. This commit does not change
MachineBasicBlock::reverse_iterator, but it does update
MachineBasicBlock::reverse_instr_iterator. See note at end of commit
message for details on bundle iterators.)
Given the list (with the Sentinel showing twice for simplicity):
[Sentinel] <-> A <-> B <-> [Sentinel]
the following is now true:
1. begin() represents A.
2. begin() holds the pointer for A.
3. end() represents [Sentinel].
4. end() holds the pointer for [Sentinel].
5. rbegin() represents B.
6. rbegin() holds the pointer for B.
7. rend() represents [Sentinel].
8. rend() holds the pointer for [Sentinel].
The changes are #6 and #8. Here are some properties from the old
scheme (which used std::reverse_iterator):
- rbegin() held the pointer for [Sentinel] and rend() held the pointer
for A;
- operator*() cost two dereferences instead of one;
- converting from a valid iterator to its valid reverse_iterator
involved a confusing increment; and
- "RI++->erase()" left RI invalid. The unintuitive replacement was
"RI->erase(), RE = end()".
With vector-like data structures these properties are hard to avoid
(since past-the-beginning is not a valid pointer), and don't impose a
real cost (since there's still only one dereference, and all iterators
are invalidated on erase). But with lists, this was a poor design.
Specifically, the following code (which obviously works with normal
iterators) now works with ilist::reverse_iterator as well:
for (auto RI = L.rbegin(), RE = L.rend(); RI != RE;)
  fooThatMightRemoveArgFromList(*RI++);
Converting between iterator and reverse_iterator for the same node uses
the getReverse() function.
reverse_iterator iterator::getReverse();
iterator reverse_iterator::getReverse();
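A small usage sketch (the node type name MyNode is illustrative):
#include "llvm/ADT/ilist.h"
#include "llvm/ADT/ilist_node.h"
#include <cassert>
struct MyNode : llvm::ilist_node<MyNode> {};
void example() {
  llvm::ilist<MyNode> List;
  List.push_back(new MyNode()); // ilist owns nodes passed by pointer
  llvm::ilist<MyNode>::iterator I = List.begin();
  llvm::ilist<MyNode>::reverse_iterator RI = I.getReverse();
  assert(&*I == &*RI && "both iterators reference the same node");
  assert(RI.getReverse() == I && "the round trip gives back the original");
}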
Why doesn't iterator <=> reverse_iterator conversion use constructors?
In order to catch and update old code, reverse_iterator does not even
have an explicit conversion from iterator. It wouldn't be safe because
there would be no reasonable way to catch all the bugs from the changed
semantic (see the changes at call sites that are part of this patch).
Old code used this API:
std::reverse_iterator::reverse_iterator(iterator);
iterator std::reverse_iterator::base();
Here's how to update from old code to new (that incorporates the
semantic change), assuming I is an ilist<>::iterator and RI is an
ilist<>::reverse_iterator:
[Old]                      ==>  [New]
reverse_iterator(I)             (--I).getReverse()
reverse_iterator(I)             ++I.getReverse()
--reverse_iterator(I)           I.getReverse()
reverse_iterator(++I)           I.getReverse()
RI.base()                       (--RI).getReverse()
RI.base()                       ++RI.getReverse()
--RI.base()                     RI.getReverse()
(++RI).base()                   RI.getReverse()
delete &*RI, RE = end()         delete &*RI++
RI->erase(), RE = end()         RI++->erase()
=======================================
Note: bundle iterators are out of scope
=======================================
MachineBasicBlock::iterator, also known as
MachineInstrBundleIterator<MachineInstr>, is a wrapper to represent
MachineInstr bundles. The idea is that each operator++ takes you to the
beginning of the next bundle. Implementing a sane reverse iterator for
this is harder than ilist. Here are the options:
- Use std::reverse_iterator<MBB::i>. Store a handle to the beginning of
the next bundle. A call to operator*() runs a loop (usually
operator--() will be called 1 time, for unbundled instructions).
Increment/decrement just works. This is the status quo.
- Store a handle to the final node in the bundle. A call to operator*()
still runs a loop, but it iterates one time fewer (usually
operator--() will be called 0 times, for unbundled instructions).
Increment/decrement just works.
- Make the ilist_sentinel<MachineInstr> *always* store that it's the
sentinel (instead of just in asserts mode). Then the bundle iterator
can sniff the sentinel bit in operator++().
I initially tried implementing the end() option as part of this commit,
but updating iterator/reverse_iterator conversion call sites was
error-prone. I have a WIP series of patches that implements the final
option.
llvm-svn: 280032
I'll rename this to IListTest.cpp after a waiting period (tonight?
tomorrow?), with a full explanation in that commit.
First, I'm moving it aside because Git doesn't play well with case-only
filename changes on case-insensitive file systems (and I suspect the
same is true of SVN). This two-stage change should help to avoid
spurious failures on bots that don't do clean checkouts.
llvm-svn: 279524
Separate algorithms in iplist<T> that don't depend on T into ilist_base,
and unit test them.
While I was adding unit tests for these algorithms anyway, I also added
unit tests for ilist_node_base and ilist_sentinel<T>.
To make the algorithms and unit tests easier to write, I also did the
following minor changes as a drive-by:
- encapsulate Prev/Next in ilist_node_base to so that algorithms are
easier to read, and
- update ilist_node_access API to take nodes by reference.
There should be no real functionality change here.
llvm-svn: 279484
Summary: Before the change, *Opt never actually gets updated by the end
of toNext(), so on each subsequent call the loop has to start over from
child_begin(). This bug doesn't affect correctness, since Visited prevents
it from re-entering the same node again; but it's slow.
Reviewers: dberris, dblaikie, dannyb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23649
llvm-svn: 279482
This is a little class template that just builds an inheritance chain of
empty classes. Despite how simple this is, it can be used to really
nicely create ranked overload sets. I've added a unittest as much to
document this as test it. You can pass an object of this type as an
argument to a function overload set and it will call the first viable and
enabled candidate at or below the rank of the object.
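A self-contained sketch of the ranked-overload technique (the template is
redefined here for illustration rather than using LLVM's own class):
#include <cstdio>
template <int N> struct Rank : Rank<N - 1> {};
template <> struct Rank<0> {};
// Preferred overload: only viable when T has a print() member.
template <typename T>
auto printImpl(T &X, Rank<1>) -> decltype(X.print(), void()) {
  X.print();
}
// Fallback overload: always viable.
template <typename T> void printImpl(T &, Rank<0>) {
  std::printf("<opaque>\n");
}
template <typename T> void print(T &X) {
  // Passing Rank<1>() selects the highest-ranked viable candidate.
  printImpl(X, Rank<1>());
}
struct HasPrint { void print() { std::printf("HasPrint\n"); } };
struct NoPrint {};
void example() {
  HasPrint A;
  NoPrint B;
  print(A); // hits the Rank<1> overload and calls A.print()
  print(B); // SFINAE drops the Rank<1> overload; falls back to Rank<0>
}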
I'm planning to use this in a subsequent commit to more clearly rank
overload candidates used for SFINAE. All credit for this technique and
both lines of code here to Richard Smith who was helping me rewrite the
SFINAE check in question to much more effectively capture the intended
set of checks.
llvm-svn: 279197
Summary: make_scope_exit(), described in C++ proposal P0052R2, uses RAII to run cleanup work at scope exit.
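A minimal usage sketch (assuming the helper lives in llvm/ADT/ScopeExit.h):
#include "llvm/ADT/ScopeExit.h"
#include <cstdio>
void example() {
  std::FILE *F = std::fopen("data.txt", "r");
  if (!F)
    return;
  // The returned RAII object runs the lambda on every exit path from here.
  auto CloseFile = llvm::make_scope_exit([&] { std::fclose(F); });
  // ... use F; no explicit fclose is needed on early returns ...
}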
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22796
llvm-svn: 278251
Summary: Normally when you do a bitwise operation on an enum value, you
get back an instance of the underlying type (e.g. int). But using this
macro, bitwise ops on your enum will return you back instances of the
enum. This is particularly useful for enums which represent a
combination of flags.
Suppose you have a function which takes an int and a set of flags. One
way to do this would be to take two numeric params:
enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
void Fn(int Num, int Flags);
void foo() {
  Fn(42, F2 | F3);
}
But now if you get the order of arguments wrong, you won't get an error.
You might try to fix this by changing the signature of Fn so it accepts
a SomeFlags arg:
enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
void Fn(int Num, SomeFlags Flags);
void foo() {
  Fn(42, static_cast<SomeFlags>(F2 | F3));
}
But now we need a static cast after doing "F2 | F3" because the result
of that computation is the enum's underlying type.
This patch adds a mechanism which gives us the safety of the second
approach with the brevity of the first.
enum SomeFlags {
  F1 = 1, F2 = 2, F3 = 4, ..., F_MAX = 128,
  LLVM_MARK_AS_BITMASK_ENUM(F_MAX)
};
void Fn(int Num, SomeFlags Flags);
void foo() {
  Fn(42, F2 | F3); // No static_cast.
}
The LLVM_MARK_AS_BITMASK_ENUM macro enables overloads for bitwise
operators on SomeFlags. Critically, these operators return the enum
type, not its underlying type, so you don't need any static_casts.
An advantage of this solution over the previously-proposed BitMask class
[0, 1] is that we don't need any wrapper classes -- we can operate
directly on the enum itself.
The approach here is somewhat similar to OpenOffice's typed_flags_set
[2]. But we skirt the need for a wrapper class (and a good deal of
complexity) by judicious use of enable_if. We SFINAE on the presence of
a particular enumerator (added by the LLVM_MARK_AS_BITMASK_ENUM macro)
instead of using a traits class so that it's impossible to use the enum
before the overloads are present. The solution here also seamlessly
works across multiple namespaces.
[0] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20150622/283369.html
[1] http://lists.llvm.org/pipermail/llvm-commits/attachments/20150623/073434b6/attachment.obj
[2] https://cgit.freedesktop.org/libreoffice/core/tree/include/o3tl/typed_flags_set.hxx
Reviewers: chandlerc, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22279
llvm-svn: 275292
re-insertion of entries into the worklist moves them to the end.
This is fairly similar to a SetVector, but helps in the case where, in
addition to not inserting duplicates, you want to adjust the sequence of
a pop-off-the-back worklist.
I'm not at all attached to the name of this data structure if others
have better suggestions, but this is one that David Majnemer brought up
in IRC discussions that seems plausible.
I've trimmed the interface down somewhat from SetVector's interface
because several things make less sense here IMO: iteration primarily.
I'd prefer to add these back as we have users that need them. My use
case doesn't even need all of what is provided here. =]
I've also included a basic unittest to make sure this functions
reasonably.
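A rough usage sketch of the behaviour described above; the type name
llvm::PriorityWorklist and the member names used here are assumptions about
where this landed:
#include "llvm/ADT/PriorityWorklist.h"
void example() {
  llvm::PriorityWorklist<int> Worklist;
  Worklist.insert(1);
  Worklist.insert(2);
  Worklist.insert(3);
  Worklist.insert(1); // re-insertion moves 1 to the end (the "back")
  while (!Worklist.empty())
    (void)Worklist.pop_back_val(); // pops 1, then 3, then 2
}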
Differential Revision: http://reviews.llvm.org/D21866
llvm-svn: 274198
a sequence of values.
It increments through the values in the half-open range: [Begin, End),
producing those values when indirecting the iterator. It should support
integers, iterators, and any other type providing these basic arithmetic
operations.
This came up in the C++ standards committee meeting, and it seemed like
a useful construct that LLVM might want as well, and I wanted to
understand how easily we could solve it. I suspect this can be used to
write simpler counting loops even in LLVM along the lines of:
for (int i : seq(0, v.size())) {
  ...
}
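A concrete usage sketch of llvm::seq from llvm/ADT/Sequence.h:
#include "llvm/ADT/Sequence.h"
#include <cstdio>
void example() {
  // Prints 0 1 2 3 4: the range is half-open, [0, 5).
  for (int I : llvm::seq(0, 5))
    std::printf("%d ", I);
}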
As part of this, I had to fix the lack of a proxy object returned from
the operator[] in our iterator facade.
Differential Revision: http://reviews.llvm.org/D17870
llvm-svn: 269390
This is a recommit of r264414 after fixing the buildbot failure caused by
incompatible use of std::vector::erase().
The original message:
Add erase() which returns an iterator pointing to the next element after the
erased one. This makes it possible to erase selected elements while iterating
over the SetVector:
while (I != E)
  if (test(*I))
    I = SetVector.erase(I);
  else
    ++I;
Reviewers: qcolombet, mcrosier, MatzeB, dblaikie
Subscribers: dberlin, dblaikie, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18281
llvm-svn: 264450
Summary:
Add erase() which returns an iterator pointing to the next element after the
erased one. This makes it possible to erase selected elements while iterating
over the SetVector:
while (I != E)
  if (test(*I))
    I = SetVector.erase(I);
  else
    ++I;
Reviewers: qcolombet, mcrosier, MatzeB, dblaikie
Subscribers: dberlin, dblaikie, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18281
llvm-svn: 264414
MSVC as usual:
C:\Buildbot\Slave\llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast\llvm.src\include\llvm/ADT/STLExtras.h(120):
error C2100: illegal indirection
C:\Buildbot\Slave\llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast\llvm.src\include\llvm/IR/Instructions.h(3966):
note: see reference to class template instantiation
'llvm::mapped_iterator<llvm::User::op_iterator,llvm::CatchSwitchInst::DerefFnTy>'
being compiled
This reverts commit e091dd63f1f34e043748e28ad160d3bc17731168.
llvm-svn: 263760
type.
This makes it easy and safe to use a set of flags as one elmenet of
a tagged union with pointers. There is quite a bit of code that has
historically done this by casting arbitrary integers to "pointers" and
assuming that this was safe and reliable. It is neither, and has started
to rear its head by triggering safety asserts in various abstractions
like PointerLikeTypeTraits when the integers chosen are invariably poor
choices for *some* platform and *some* situation. Not to mention the
(hopefully unlikely) prospect of one of these integers actually getting
allocated!
With this, it will be straightforward to build type safe abstractions
like this without being error prone. The abstraction itself is also
remarkably simple thanks to the implicit conversion.
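A hedged sketch of the idea; the class name llvm::PointerEmbeddedInt and its
header are assumptions, since the message does not name them:
#include "llvm/ADT/PointerEmbeddedInt.h"
// Models a small integer with PointerLikeTypeTraits, so it can stand in
// for a pointer member of a tagged union.
using SmallInt = llvm::PointerEmbeddedInt<int, 16>; // 16 bits of payload
void example() {
  SmallInt V(42); // wraps the integer in a pointer-sized representation
  int Plain = V;  // implicit conversion back to the integer type
  (void)Plain;
}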
This use case and pattern was also independently created by the folks
working on Swift, and they're going to incrementally add any missing
functionality they find.
Differential Revision: http://reviews.llvm.org/D15844
llvm-svn: 257284
This is a much more general and powerful form of PointerUnion. It
provides a reasonably complete sum type (from type theory) for
pointer-like types. It has several significant advantages over the
existing PointerUnion infrastructure:
1) It allows more than two pointer types to participate without awkward
nesting structures.
2) It directly exposes the tag so that it is convenient to write
switches over the possible members.
3) It can re-use the same type for multiple tag values, something that
has been worked around by either abusing PointerIntPair or defining
nonce types and doing unsafe pointer casting.
4) It supports customization of the PointerLikeTypeTraits used for
specific member types. This means it could (in theory) be used even
with types that are over-aligned on allocation to expose larger
numbers of bits to the tag.
All in all, I think it is at least complementary to the existing
infrastructure, and a strict improvement for some use cases.
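A hedged sketch of the sum type described above; the message does not name
the class, so llvm::PointerSumType / PointerSumTypeMember and the
create/getTag/get members used here are assumptions about the final API:
#include "llvm/ADT/PointerSumType.h"
enum MemberKind { MK_Int, MK_Float };
using IntOrFloatPtr = llvm::PointerSumType<
    MemberKind, llvm::PointerSumTypeMember<MK_Int, int *>,
    llvm::PointerSumTypeMember<MK_Float, float *>>;
float consume(IntOrFloatPtr V) {
  // Advantage (2): the tag is exposed directly, so a switch is natural.
  switch (V.getTag()) {
  case MK_Int:
    return static_cast<float>(*V.get<MK_Int>());
  case MK_Float:
    return *V.get<MK_Float>();
  }
  return 0.0f;
}
void example() {
  float F = 1.0f;
  consume(IntOrFloatPtr::create<MK_Float>(&F));
}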
Differential Revision: http://reviews.llvm.org/D15843
llvm-svn: 257282
This reverts commit r243567, which ultimately reapplies r243563.
The fix here was to use std::enable_if for overload resolution. Thanks to David
Blaikie for lots of help on this, and for the extra tests!
Original commit message follows:
For cases where we needed a foreach loop in reverse over a container,
we had to do something like
for (const GlobalValue *GV : make_range(TypeInfos.rbegin(),
                                        TypeInfos.rend())) {
This provides a convenience method which shortens this to
for (const GlobalValue *GV : reverse(TypeInfos)) {
There are two versions of this, with a preference for the rbegin() version.
The first uses rbegin() and rend() to construct an iterator_range.
The second constructs an iterator_range from the begin() and end() methods
wrapped in std::reverse_iterator's.
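A small usage sketch of the new helper:
#include "llvm/ADT/STLExtras.h"
#include <cstdio>
#include <vector>
void example() {
  std::vector<int> V = {1, 2, 3};
  // Prints 3 2 1; llvm::reverse prefers rbegin()/rend() when available and
  // otherwise wraps begin()/end() in std::reverse_iterator.
  for (int I : llvm::reverse(V))
    std::printf("%d ", I);
}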
Reviewed by David Blaikie.
llvm-svn: 243581
This reverts commit r243563.
The GCC buildbots were extremely unhappy about this. Reverting while
we discuss a better way of doing overload resolution.
llvm-svn: 243567
For cases where we needed a foreach loop in reverse over a container,
we had to do something like
for (const GlobalValue *GV : make_range(TypeInfos.rbegin(),
                                        TypeInfos.rend())) {
This provides a convenience method which shortens this to
for (const GlobalValue *GV : reverse(TypeInfos)) {
There are two versions of this, with a preference for the rbegin() version.
The first uses rbegin() and rend() to construct an iterator_range.
The second constructs an iterator_range from the begin() and end() methods
wrapped in std::reverse_iterator's.
Reviewed by David Blaikie.
llvm-svn: 243563
`LLVM_ENABLE_MODULES` builds sometimes fail because `Intrinsics.td`
needs to regenerate `Intrinsics.h` before anyone can include anything
from the LLVM_IR module. Represent the dependency explicitly to prevent
that.
llvm-svn: 239796
If the template specializations for externally managed sets in
PostOrderIterator fall too far out of sync with each other, this unit
test will fail to build. This is especially useful for developers who
may not build Clang (the only in-tree user) every time.
llvm-svn: 222447
A subtle bug was found where attempting to copy a non-const function_ref
lvalue would actually invoke the generic forwarding constructor (as it
was a closer match - being T& rather than the const T& of the implicit
copy constructor). In the particular case this led to a dangling
function_ref member (since it had referenced the function_ref passed by
value to its ctor, rather than the outer function_ref that was still
alive).
SFINAE the converting constructor to not be considered if the copy
constructor is available and demonstrate that this causes the copy to
refer to the original functor, not to the function_ref it was copied
from. (Without the code change, the test would fail as Y would be
referencing X and Y() would see the result of the mutation to X, i.e., 2.)
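A rough sketch of the scenario in that test (variable names follow the
message; this is illustrative, not the actual unit test):
#include "llvm/ADT/STLExtras.h"
#include <cassert>
void example() {
  auto A = [] { return 1; };
  auto B = [] { return 2; };
  llvm::function_ref<int()> X = A;
  llvm::function_ref<int()> Y = X; // copy must bind to A, not to X itself
  X = B;                           // mutate X afterwards
  assert(Y() == 1 && "Y still calls the original functor A");
}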
llvm-svn: 221753
Previously, the assertions in PointerIntPair would try to calculate the value
(1 << NumLowBitsAvailable); the inferred type here is 'int', so if there were
more than 31 bits available we'd get a shift overflow.
Also, add a rudimentary unit test file for PointerIntPair.
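An illustration of the overflow (not the actual LLVM code): shifting a plain
int by 32 or more bits is undefined behaviour, so the value must be widened
before shifting:
#include <cstdint>
void example() {
  unsigned NumLowBitsAvailable = 40;
  // uintptr_t Bad = 1 << NumLowBitsAvailable;          // undefined on 64-bit
  uintptr_t Good = uintptr_t(1) << NumLowBitsAvailable; // well-defined
  (void)Good;
}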
llvm-svn: 203273
The interaction between defaulted operators and move elision isn't
totally obvious, add a unit test so it doesn't break unintentionally.
llvm-svn: 202662
it interoperate (minimally) with std::unique_ptr<T>. This is part of my
plan to migrate LLVM to use std::unique_ptr with a minimal impact on
out-of-tree code.
Patch by Ahmed Charles with some minor cleanups (and bool casts) by me.
llvm-svn: 202608
unique ownership smart pointer which is *deep* copyable by assuming it
can call a T::clone() method to allocate a copy of the owned data.
This is mostly useful with containers or other collections of uniquely
owned data in C++98 where they *might* copy. With C++11 we can likely
remove this in favor of move-only types and containers wrapped around
those types.
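A conceptual sketch of the clone()-based deep copy described above; the
actual LLVM class is not named in this message, so this illustrative smart
pointer is written from scratch:
#include <cassert>
struct Widget {
  int Value = 0;
  Widget *clone() const { return new Widget(*this); }
};
template <typename T> class DeepCopyPtr {
  T *Ptr = nullptr;
public:
  DeepCopyPtr() = default;
  explicit DeepCopyPtr(T *P) : Ptr(P) {}
  DeepCopyPtr(const DeepCopyPtr &RHS)
      : Ptr(RHS.Ptr ? RHS.Ptr->clone() : nullptr) {}
  DeepCopyPtr &operator=(const DeepCopyPtr &) = delete; // kept minimal
  ~DeepCopyPtr() { delete Ptr; }
  T *operator->() const { return Ptr; }
};
void example() {
  DeepCopyPtr<Widget> A(new Widget());
  DeepCopyPtr<Widget> B = A; // deep copy via Widget::clone()
  B->Value = 1;
  assert(A->Value == 0 && "the copies own independent Widgets");
}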
llvm-svn: 194315
This generalizes Optional to require less from the T type by using aligned
storage as backing and placement-new/deleting the T into it when necessary.
Also includes unit tests.
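A minimal sketch of what the generalization buys (T no longer needs to be
default-constructible, since the backing storage is raw aligned memory and T
is placement-constructed only when a value is actually present):
#include "llvm/ADT/Optional.h"
struct NoDefault {
  explicit NoDefault(int V) : Value(V) {}
  int Value;
};
void example() {
  llvm::Optional<NoDefault> Empty;              // no NoDefault constructed
  llvm::Optional<NoDefault> Some(NoDefault(7)); // T is constructed in place
  (void)Empty;
  (void)Some;
}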
llvm-svn: 175580