Happened pretty commonly during `LLVMContext` teardown when `clang -g`
hit an error. This fixes the use-after-free. Next I'll clean up
teardown so that it's not RAUW'ing when metadata-tracked values are
deleted (this only really causes a problem if the graph is mid-construction
when teardown starts, but it's still unnecessary work).
llvm-svn: 226029
utils/sort_includes.py.
I clearly haven't done this in a while, so more changed than usual. This
even uncovered a missing include from the InstrProf library that I've
added. No functionality changed here, just mechanical cleanup of the
include order.
llvm-svn: 225974
Add a new subclass of `UniquableMDNode`, `MDLocation`. This will be the
IR version of `DebugLoc` and `DILocation`. The goal is to rename this
to `DILocation` once the IR classes supersede the `DI`-prefixed
wrappers.
This isn't used anywhere yet. Part of PR21433.
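For orientation, a minimal sketch of how such a node gets created through today's API, where the class has since been renamed `DILocation`; the scope argument is assumed to come from elsewhere:

  #include "llvm/IR/DebugInfoMetadata.h"
  using namespace llvm;

  // Build a location node for line 7, column 3 in the given scope, with no
  // inlinedAt location. MDLocation was later renamed DILocation, so current
  // trees spell the factory DILocation::get().
  DILocation *makeLoc(LLVMContext &Ctx, DILocalScope *Scope) {
    return DILocation::get(Ctx, /*Line=*/7, /*Column=*/3, Scope);
  }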
llvm-svn: 225824
This adds back the testcase from r225738, and adds to it. Looks like we
need both sides for now: the assertion was incorrect both ways, and
although it seemed reasonable when written correctly, it wasn't
particularly important.
llvm-svn: 225745
This reverts commit r225738. Maybe the assertion is just plain wrong,
but this version fails on WAY more bots. I'll make sure both ways work
in a follow-up but I want to get bots green in the meantime.
llvm-svn: 225742
Allow distinct `MDNode`s to be explicitly created. There's no way (yet)
of representing their distinctness in assembly/bitcode, however, so this
still isn't first-class.
Part of PR22111.
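A minimal sketch of the C++ side against today's tree (the operands here are made up):

  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  #include <cassert>
  using namespace llvm;

  void distinctVsUniqued(LLVMContext &Ctx) {
    Metadata *Ops[] = {MDString::get(Ctx, "foo")};
    MDNode *Uniqued = MDNode::get(Ctx, Ops);          // stored in the uniquing tables
    MDNode *Distinct = MDNode::getDistinct(Ctx, Ops); // never uniqued
    assert(MDNode::get(Ctx, Ops) == Uniqued);         // get() hands back the same node
    assert(Distinct != Uniqued && Distinct->isDistinct());
  }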
llvm-svn: 225406
Add API to indicate whether an `MDNode` is distinct. A distinct node is
not stored in the MDNode uniquing tables, and will never be returned by
`MDNode::get()`.
Although distinct nodes are only currently created by uniquing
collisions (when operands change), PR22111 will allow these nodes to be
explicitly created.
llvm-svn: 225401
Now that `LLVMContextImpl` can call `MDNode::dropAllReferences()` to
prevent teardown madness, stop dropping uniquing just because an operand
drops to null.
Part of PR21532.
llvm-svn: 225223
manager.
This starts to allow us to test analyses more easily, but it's really
only the beginning. Some of the code here is still untestable without
manual changes to create analysis passes, but I wanted to factor it into
as small a set of chunks as possible.
Next up in order to be able to test things are, in no particular order:
- No-op analyses passes so we don't have to use real ones to exercise
the pass manager itself.
- Automatic way of generating dummy passes that require an analysis be
run, including a variant that calls a 'print' method on a pass to make
it even easier to print out the results of an analysis.
- Dummy passes that invalidate all analyses for their IR unit so we can
test invalidation and re-runs.
- Automatic way to print each analysis pass as it is re-run.
- Automatic but optional verification of analysis passes everywhere
possible.
I'm not claiming I'll get to all of these immediately, but that's what
is in the pipeline at some stage. I'm fleshing out exactly what I need
and what to prioritize by working on converting analyses and then trying
to test the conversion. =]
llvm-svn: 225162
units.
This was debated back and forth a bunch, but using references is now
clearly cleaner. Of all the code written using pointers thus far, in
only one place did it really make more sense to have a pointer. In most
cases, this just removes immediate dereferencing from the code. I think
it is much better to get errors on null IR units earlier, potentially
at compile time, than to delay it.
Most notably, the legacy pass manager uses references for its routines
and so as more and more code works with both, the use of pointers was
likely to become really annoying. I noticed this when I ported the
domtree analysis over and wrote the entire thing with references only to
have it fail to compile. =/ It seemed better to switch now than to
delay. We can, of course, revisit this if we learn that references are
really problematic in the API.
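For illustration, a hypothetical no-op pass written against the current shape of the API (the analysis-manager parameter is shown in its present reference-taking form, which landed separately); the point is simply that the IR unit arrives as `Function &` rather than `Function *`:

  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;

  struct NoOpFunctionPass : PassInfoMixin<NoOpFunctionPass> {
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
      // No dereference and no null check needed; a null IR unit can't get here.
      (void)F.getName();
      return PreservedAnalyses::all();
    }
  };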
llvm-svn: 225145
It's horrible to inspect `MDNode`s in a debugger. All of their operands
that are `MDNode`s get dumped as `<badref>`, since we can't assign
metadata slots in the context of a `Metadata::dump()`. (Why not? Why
not assign numbers lazily? Because then each time you called `dump()`,
a given `MDNode` could have a different lazily assigned number.)
Fortunately, the C memory model gives us perfectly good identifiers for
`MDNode`. Add pointer addresses to the dumps, transforming this:
(lldb) e N->dump()
!{i32 662302, i32 26, <badref>, null}
(lldb) e ((MDNode*)N->getOperand(2))->dump()
!{i32 4, !"foo"}
into:
(lldb) e N->dump()
!{i32 662302, i32 26, <0x100706ee0>, null}
(lldb) e ((MDNode*)0x100706ee0)->dump()
!{i32 4, !"foo"}
and this:
(lldb) e N->dump()
0x101200248 = !{<badref>, <badref>, <badref>, <badref>, <badref>}
(lldb) e N->getOperand(0)
(const llvm::MDOperand) $0 = {
MD = 0x00000001012004e0
}
(lldb) e N->getOperand(1)
(const llvm::MDOperand) $1 = {
MD = 0x00000001012004e0
}
(lldb) e N->getOperand(2)
(const llvm::MDOperand) $2 = {
MD = 0x0000000101200058
}
(lldb) e N->getOperand(3)
(const llvm::MDOperand) $3 = {
MD = 0x00000001012004e0
}
(lldb) e N->getOperand(4)
(const llvm::MDOperand) $4 = {
MD = 0x0000000101200058
}
(lldb) e ((MDNode*)0x00000001012004e0)->dump()
!{}
(lldb) e ((MDNode*)0x0000000101200058)->dump()
!{null}
into:
(lldb) e N->dump()
!{<0x1012004e0>, <0x1012004e0>, <0x101200058>, <0x1012004e0>, <0x101200058>}
(lldb) e ((MDNode*)0x1012004e0)->dump()
!{}
(lldb) e ((MDNode*)0x101200058)->dump()
!{null}
llvm-svn: 224325
The RAUW support in `Metadata` supports going to `nullptr` specifically
to handle values being deleted, causing `ValueAsMetadata` to be deleted.
Fix the case where the reference is from a `TrackingMDRef` (as opposed
to an `MDOperand` or a `MetadataAsValue`).
This is surprisingly rare -- metadata tracked by `TrackingMDRef` going
to null -- but it came up in an openSUSE bootstrap during inlining. The
tracking ref was held by the `ValueMap` because it was referencing a
local, the basic block containing the local became dead after it had
been merged in, and when the local was deleted, the tracking ref
asserted in an `isa`.
llvm-svn: 224146
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532. Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.
I have a follow-up patch prepared for `clang`. If this breaks other
sub-projects, I apologize in advance :(. Help me compile it on Darwin and
I'll try to fix it. FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.
This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.
Here's a quick guide for updating your code:
- `Metadata` is the root of a class hierarchy with three main classes:
`MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from
the `Value` class hierarchy. It is typeless -- i.e., instances do
*not* have a `Type`.
- `MDNode`'s operands are all `Metadata *` (instead of `Value *`).
- `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.
If you're referring solely to resolved `MDNode`s -- post graph
construction -- just use `MDNode*`.
- `MDNode` (and the rest of `Metadata`) have only limited support for
`replaceAllUsesWith()`.
As long as an `MDNode` is pointing at a forward declaration -- the
result of `MDNode::getTemporary()` -- it maintains a side map of its
uses and can RAUW itself. Once the forward declarations are fully
resolved, RAUW support is dropped on the ground. This means that
uniquing collisions on changing operands cause nodes to become
"distinct". (This already happened fairly commonly, whenever an
operand went to null.)
If you're constructing complex (non self-reference) `MDNode` cycles,
you need to call `MDNode::resolveCycles()` on each node (or on a
top-level node that somehow references all of the nodes). Also,
don't do that. Metadata cycles (and the RAUW machinery needed to
construct them) are expensive.
- An `MDNode` can only refer to a `Constant` through a bridge called
`ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).
As a side effect, accessing an operand of an `MDNode` that is known
to be, e.g., `ConstantInt`, takes three steps: first, cast from
`Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
third, cast down to `ConstantInt`.
The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
metadata schema owners transition away from using `Constant`s when
the type isn't important (and they don't care about referring to
`GlobalValue`s).
In the meantime, I've added transitional API to the `mdconst`
namespace that matches semantics with the old code, in order to
avoid adding the error-prone three-step equivalent to every call
site. If your old code was:
MDNode *N = foo();
bar(isa <ConstantInt>(N->getOperand(0)));
baz(cast <ConstantInt>(N->getOperand(1)));
bak(cast_or_null <ConstantInt>(N->getOperand(2)));
bat(dyn_cast <ConstantInt>(N->getOperand(3)));
bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));
you can trivially match its semantics with:
MDNode *N = foo();
bar(mdconst::hasa <ConstantInt>(N->getOperand(0)));
baz(mdconst::extract <ConstantInt>(N->getOperand(1)));
bak(mdconst::extract_or_null <ConstantInt>(N->getOperand(2)));
bat(mdconst::dyn_extract <ConstantInt>(N->getOperand(3)));
bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));
and when you transition your metadata schema to `MDInt`:
MDNode *N = foo();
bar(isa <MDInt>(N->getOperand(0)));
baz(cast <MDInt>(N->getOperand(1)));
bak(cast_or_null <MDInt>(N->getOperand(2)));
bat(dyn_cast <MDInt>(N->getOperand(3)));
bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));
- A `CallInst` -- specifically, intrinsic instructions -- can refer to
metadata through a bridge called `MetadataAsValue`. This is a
subclass of `Value` where `getType()->isMetadataTy()`.
`MetadataAsValue` is the *only* class that can legally refer to a
`LocalAsMetadata`, which is a bridged form of non-`Constant` values
like `Argument` and `Instruction`. It can also refer to any other
`Metadata` subclass.
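A minimal sketch of the bridges described in the last two bullets, written against today's API; the node `N`, the instruction `I`, and the assumption that operand 0 wraps a `ConstantInt` are all invented for illustration:

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  void bridges(LLVMContext &Ctx, MDNode *N, Instruction *I) {
    // Constant -> Metadata: the explicit three-step read of operand 0.
    Metadata *MD = N->getOperand(0);
    auto *CAM = cast<ConstantAsMetadata>(MD); // step 1: Metadata to ConstantAsMetadata
    Constant *C = CAM->getValue();            // step 2: extract the Constant
    auto *CI = cast<ConstantInt>(C);          // step 3: cast down to ConstantInt
    (void)CI;

    // Metadata -> Value: wrap a node so an intrinsic call can take it as an argument.
    Value *Wrapped = MetadataAsValue::get(Ctx, N);
    (void)Wrapped;

    // Value -> Metadata: non-constants are bridged through ValueAsMetadata
    // (LocalAsMetadata underneath for locals like this instruction).
    Metadata *AsMD = ValueAsMetadata::get(I);
    (void)AsMD;
  }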
(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)
llvm-svn: 223802
It doesn't make sense to unique self-referencing nodes. Drop uniquing
for them.
Note that `MDNode::intersect()` occasionally returns self-referencing
nodes. Previously these would be returned by `MDNode::get()`. I'm not
convinced this was intended behaviour -- to me it seems it should return
a node whose only operand is the self-reference -- but I don't know much
about alias scopes so I'm preserving it for now.
This is part of PR21532.
llvm-svn: 223618
Apparently `MDNode` uniquing used to be optional. I suppose the
configure flag must have disappeared at some point. Change the test so
it actually tests uniquing, and remove the check for
`ENABLE_MDNODE_UNIQUING`.
llvm-svn: 223617
This reverts commit r218918, effectively reapplying r218914 after fixing
an Ocaml bindings test and an Asan crash. The root cause of the latter
was a tightened-up check in `DILexicalBlock::Verify()`, so I'll file a
PR to investigate who requires the loose check (and why).
Original commit message follows.
--
This patch addresses the first stage of PR17891 by folding constant
arguments together into a single MDString. Integers are stringified and
a `\0` character is used as a separator.
Part of PR17891.
Note: I've attached my testcases upgrade scripts to the PR. If I've
just broken your out-of-tree testcases, they might help.
llvm-svn: 219010
This patch addresses the first stage of PR17891 by folding constant
arguments together into a single MDString. Integers are stringified and
a `\0` character is used as a separator.
Part of PR17891.
Note: I've attached my testcases upgrade scripts to the PR. If I've
just broken your out-of-tree testcases, they might help.
llvm-svn: 218914
With this a DataLayoutPass can be reused for multiple modules.
Once we have doInitialization/doFinalization, it doesn't seem necessary to pass
a Module to the constructor.
Overall this change seems in line with the idea of making DataLayout a required
part of Module. With it the only way of having a DataLayout used is to add it
to the Module.
llvm-svn: 217548
"Setting" does not equal "copying". This bug has sat dormant for 2 reasons:
1. The unit test was not adequate.
2. Every current user of the "copyFastMathFlags" API is operating on a new
instruction (i.e., all existing fast-math flags are off). If you copy flags to
an existing instruction that has some flags on already, you will not
necessarily turn them off as expected.
I uncovered this bug while trying to implement a fix for PR20802.
llvm-svn: 216939
In r216015 I missed propagating `OnlyIfReduced` through the inline
versions of `getGetElementPtr()` (I was relying on compile failures on
mismatches between the header and source signatures to get them all).
llvm-svn: 216023
Change `ConstantExpr` to follow the model the other constants are using:
only malloc a replacement if it's going to be used. This fixes a subtle
bug where if an API user had used `ConstantExpr::get()` already to
create the replacement but hadn't given it any users, we'd delete the
replacement.
This relies on r216015 to thread `OnlyIfReduced` through
`ConstantExpr::getWithOperands()`.
llvm-svn: 216016
* Use StringRef instead of std::string&
* Return a std::unique_ptr<Module> instead of taking an optional module to write
to (was not really used).
* Use current comment style.
* Use current naming convention.
llvm-svn: 215989
This reverts commit r215981, which reverted the above commits because
MSVC std::equal asserts on nullptr iterators, and these commits
introduced calls to `ArrayRef::equals()` on empty ArrayRefs.
ArrayRef was changed not to use std::equal in r215986.
llvm-svn: 215987
Previously, `ConstantArray::replaceUsesOfWithOnConstant()` neglected to
check whether it becomes a `ConstantDataArray`. Call
`ConstantArray::getImpl()` to check for that.
llvm-svn: 215965
Add `Value::sortUseList()`, templated on the comparison function to use.
The sort is an iterative merge sort that uses a binomial vector of
already-merged lists to limit the size overhead to `O(1)`.
This is part of PR5680.
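As a hedged example of the shape of the API (the comparator here is invented), sorting a value's uses by the operand slot each use occupies:

  #include "llvm/IR/Use.h"
  #include "llvm/IR/Value.h"
  using namespace llvm;

  void sortUsesByOperandNo(Value &V) {
    // The predicate is a strict weak ordering over Use objects.
    V.sortUseList([](const Use &L, const Use &R) {
      return L.getOperandNo() < R.getOperandNo();
    });
  }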
llvm-svn: 213824
This reverts commit 1f502bd9d7d2c1f98ad93a09ffe435e11a95aedd, due to
GCC / MinGW's lack of support for C++11 threading.
It's possible this will go back in after we come up with a
reasonable solution.
llvm-svn: 211401
This change has a bit of a trickle down effect due to the fact that
there are a number of derived implementations of ExecutionEngine,
and that the mutex is not tightly encapsulated, so it is used by other
classes directly.
Reviewed by: rnk
Differential Revision: http://reviews.llvm.org/D4196
llvm-svn: 211214
This enables static polymorphism of the mutex type, which is
necessary in order to replace the standard mutex implementation
with a different type.
llvm-svn: 211080
Aliases with unnamed_addr were in a strange state. The attribute is stored in
GlobalValue and the language reference talks about "unnamed_addr aliases", but
the verifier was rejecting them.
It seems natural to allow unnamed_addr in aliases:
* It is a property of how it is accessed, not of the data itself.
* It is perfectly possible to write code that depends on the address
of an alias.
This patch then makes unnamed_addr legal for aliases. One side effect is that
the syntax changes for a corner case: In globals, unnamed_addr is now printed
before the address space.
llvm-svn: 210302
This patch changes GlobalAlias to point to an arbitrary ConstantExpr and it is
up to MC (or the system assembler) to decide if that expression is valid or not.
This reduces our ability to diagnose invalid uses and how early we can spot
them, but it also lets us do things like
@test5 = alias inttoptr(i32 sub (i32 ptrtoint (i32* @test2 to i32),
i32 ptrtoint (i32* @bar to i32)) to i32*)
An important implication of this patch is that the notion of aliased global
doesn't exist any more. The alias has to encode the information needed to
access it in its metadata (linkage, visibility, type, etc).
Another consequence to notice is that getSection has to return a "const char *".
It could return a NullTerminatedStringRef if there was such a thing, but when
that was proposed the decision was to just use "const char*" for that.
llvm-svn: 210062
This patch changes the design of GlobalAlias so that it doesn't take a
ConstantExpr anymore. It now points directly to a GlobalObject, but its type is
independent of the aliasee type.
To avoid changing all alias related tests in this patch, I kept the common
syntax
@foo = alias i32* @bar
to mean the same as now. The cases that used to use cast now use the more
general syntax
@foo = alias i16, i32* @bar.
Note that GlobalAlias now behaves a bit more like GlobalVariable. We
know that its type is always a pointer, so we omit the '*'.
For the bitcode, a nice surprise is that we were writing both identical types
already, so the format change is minimal. Auto upgrade is handled by looking
through the casts and no new fields are needed for now. New bitcode will
simply have different types for Alias and Aliasee.
One last interesting point in the patch is that replaceAllUsesWith becomes
smart enough to avoid putting a ConstantExpr in the aliasee. This seems better
than checking and updating every caller.
A followup patch will delete getAliasedGlobal now that it is redundant. Another
patch will add support for an explicit offset.
llvm-svn: 209007
We already had an assert for foo->RAUW(foo), but not for something like
foo->RAUW(GEP(foo)), which would go into an infinite loop trying to apply
the replacement.
llvm-svn: 208663
this code ages ago and lost track of it. Seems worth doing though --
this thing can get called from places that would benefit from knowing
that std::distance is O(1). Also add a very fledgling unittest for
Users and make sure various aspects of this seem to work reasonably.
llvm-svn: 206453
In CallInst, op_end() points at the callee, which we don't want to iterate over
when just iterating over arguments. Now take this into account when returning
an iterator_range from arg_operands. Similar reasoning for InvokeInst.
Also adds a unit test to verify this actually works as expected.
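For example, a small sketch against the API of that era (newer trees also expose the same range as `args()`); the helper and its use are invented:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  unsigned countPointerArgs(CallInst *CI) {
    unsigned N = 0;
    for (Value *Arg : CI->arg_operands()) // arguments only; the callee is excluded
      if (Arg->getType()->isPointerTy())
        ++N;
    return N;
  }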
llvm-svn: 204851
order to use the single assignment. That's probably worth doing for
a lot of these types anyways as they may have non-trivial moves and so
getting copy elision in more places seems worthwhile.
I've tried to add some tests that actually catch this mistake, and one
of the types is now well tested but the others' tests still fail to
catch this. I'll keep working on tests, but this gets the core pattern
right.
llvm-svn: 203780
it is available. Also make the move semantics sufficiently correct to
tolerate move-only passes, as the PassManagers *are* move-only passes.
llvm-svn: 203391
This compiles with no changes to clang/lld/lldb with MSVC and includes
overloads to various functions which are used by those projects and llvm
which have OwningPtr's as parameters. This should allow out of tree
projects some time to move. There are also no changes to libs/Target,
which should help out of tree targets have time to move, if necessary.
llvm-svn: 203083
source file had already been moved. Also move the unittest into the IR
unittest library.
This may seem an odd thing to put in the IR library but we only really
use this with instructions and it needs the LLVM context to work, so it
is intrinsically tied to the IR library.
llvm-svn: 202842
a bit surprising, as the class is almost entirely abstracted away from
any particular IR; however, it encodes the comparison predicates which
mutate ranges as ICmp predicate codes. This is reasonable as they're
used for both instructions and constants. Thus, it belongs in the IR
library with instructions and constants.
llvm-svn: 202838
directly care about the Value class (it is templated so that the key can
be any arbitrary Value subclass), it is in fact concretely tied to the
Value class through the ValueHandle's CallbackVH interface which relies
on the key type being some Value subclass to establish the value handle
chain.
Ironically, the unittest is already in the right library.
llvm-svn: 202824
Move the test for this class into the IR unittests as well.
This uncovers that ValueMap too is in the IR library. Ironically, the
unittest for ValueMap is useless in the Support library (honestly, so
was the ValueHandle test) and so it already lives in the IR unittests.
Mmmm, tasty layering.
llvm-svn: 202821
No tool does this currently, but as with everything else in a module we should be
able to change its DataLayout.
Most of the fix is in DataLayout to make sure it can be reset properly.
The test uses Module::setDataLayout since the fact that we mutate a DataLayout
is an implementation detail. The module could hold an OwningPtr<DataLayout> and
the DataLayout itself could be immutable.
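A minimal sketch of what a tool could now do (the layout string below is just the usual x86-64 example, not a recommendation):

  #include "llvm/IR/Module.h"
  using namespace llvm;

  void resetLayout(Module &M) {
    // Setting the layout again simply replaces whatever was there before.
    M.setDataLayout("e-m:e-i64:64-f80:128-n8:16:32:64-S128");
  }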
Thanks to Philip Reames for pushing me in the right direction.
llvm-svn: 202198
I think this was just over-eagerness on my part. The analysis results
need to often be non-const because they need to (in some cases at least)
be updated by the transformation pass in order to remain correct. It
also makes lazy analyses (a common case) needlessly annoying to write in
order to make their entire state mutable.
llvm-svn: 200881
different number of elements.
Bitcasts between vectors of pointers with different numbers of elements
were being accepted, because the element-count check compared
SrcTy->getVectorNumElements() == SrcTy->getVectorNumElements(), which
isn't helpful. The addrspacecast was also wrong, but that case at least
is caught by the verifier. Refactor bitcast and addrspacecast handling
in castIsValid to be more readable and fix this problem.
llvm-svn: 199821
This makes the 'verifyFunction' and 'verifyModule' functions totally
independent operations on the LLVM IR. It also cleans up their API a bit
by lifting the abort behavior into their clients and just using an
optional raw_ostream parameter to control printing.
The implementation of the verifier is now just an InstVisitor with no
multiple inheritance. It also is significantly more const-correct, and
hides the const violations internally. The two layers that force us to
break const correctness are building a DomTree and dispatching through
the InstVisitor.
A new VerifierPass is used to implement the legacy pass manager
interface in terms of the other pieces.
The error messages produced may be slightly different now, and we may
have slightly different short circuiting behavior with different usage
models of the verifier, but generally everything works equivalently and
this unblocks wiring the verifier up to the new pass manager.
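For example, a minimal sketch of the freestanding form, with the abort decision left to the caller as described above (the fatal-error policy here is just one choice):

  #include "llvm/IR/Module.h"
  #include "llvm/IR/Verifier.h"
  #include "llvm/Support/ErrorHandling.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void checkModule(const Module &M) {
    // Returns true if the module is broken; diagnostics go to the optional stream.
    if (verifyModule(M, &errs()))
      report_fatal_error("broken module");
  }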
llvm-svn: 199569
can be used by both the new pass manager and the old.
This removes it from any of the virtual mess of the pass interfaces and
lets it derive cleanly from the DominatorTreeBase<> template. In turn,
tons of boilerplate interface can be nuked and it turns into a very
straightforward extension of the base DominatorTree interface.
The old analysis pass is now a simple wrapper. The names and style of
this split should match the split between CallGraph and
CallGraphWrapperPass. All of the users of DominatorTree have been
updated to match using many of the same tricks as with CallGraph. The
goal is that the common type remains the resulting DominatorTree rather
than the pass. This will make subsequent work toward the new pass
manager significantly easier.
Also in numerous places things became cleaner because I switched from
re-running the pass (!!! midway through some other pass's run !!!) to
directly recomputing the domtree.
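A small sketch of the pass-free usage this enables (the query itself is arbitrary):

  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  bool blockDominates(Function &F, const BasicBlock *A, const BasicBlock *B) {
    DominatorTree DT;
    DT.recalculate(F); // compute directly, no pass machinery involved
    return DT.dominates(A, B);
  }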
llvm-svn: 199104
directory. These passes are already defined in the IR library, and it
doesn't make any sense to have the headers in Analysis.
Long term, I think there is going to be a much better way to divide
these matters. The dominators code should be fully separated into the
abstract graph algorithm and have that put in Support where it becomes
obvious that even Clang's CFGBlocks can use it. Then the verifier can
manually construct dominance information from the Support-driven
interface while the Analysis library can provide a pass which both
caches, reconstructs, and supports a nice update API.
But those are very long term, and so I don't want to leave the really
confusing structure until that day arrives.
llvm-svn: 199082
name to match the source file which I got earlier. Update the include
sites. Also modernize the comments in the header to use the more
recommended doxygen style.
llvm-svn: 199041
mode that can be used to debug the execution of everything.
No support for analyses here, that will come later. This already helps
show parts of the opt commandline integration that aren't working. Tests
of that will start using it as the bugs are fixed.
llvm-svn: 199004
are part of the core IR library in order to support dumping and other
basic functionality.
Rename the 'Assembly' include directory to 'AsmParser' to match the
library name and the only functionality left there -- printing has been
in the core IR library for quite some time.
Update all of the #includes to match.
All of this started because I wanted to have the layering in good shape
before I started adding support for printing LLVM IR using the new pass
infrastructure, and commandline support for the new pass infrastructure.
llvm-svn: 198688
instructions. I needed this for a quick experiment I was making, and
while I've no idea if that will ever get committed, I didn't want to
throw away the pattern match code and for anyone else to have to write
it again. I've added unittests to make sure this works correctly.
In fun news, this also uncovered the IRBuilder bug. Doh!
llvm-svn: 198541
failed to correctly propagate the NUW and NSW flags to the constant
folder for two instructions. I've added a unittest to cover flag
propagation for the rest of the instructions and constant expressions.
llvm-svn: 198538
basic block to hold instructions, and managing all of their lifetimes in
a fixture. This makes it easy to sink the expectations into the test
cases themselves which also makes things a bit more explicit and clearer
IMO.
llvm-svn: 198532
We were previously not adding fast-math flags through CreateBinOp()
when it happened to be making a floating point binary operator. This
patch updates it to do so similarly to directly calling CreateF*().
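A hedged sketch of the behavior, using today's spelling of the flag setters (the builder and operands are assumed to exist):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  Value *fastFAdd(IRBuilder<> &B, Value *L, Value *R) {
    FastMathFlags FMF;
    FMF.setFast(); // turn on all fast-math flags
    B.setFastMathFlags(FMF);
    // CreateBinOp now attaches the builder's flags when the opcode is an FP op.
    return B.CreateBinOp(Instruction::FAdd, L, R, "sum");
  }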
llvm-svn: 196438
When a block is unreachable, asking its dom tree descendants should
return the empty set. However, the computation of the descendants
was causing a segmentation fault because the dom tree node we get
from the basic block is initially NULL.
Fixed by adding a test for a valid dom tree node before we iterate.
The patch also adds some unit tests to the existing dom tree tests.
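The guarded query looks roughly like the sketch below; the guard shown at the call site mirrors the check the fix adds internally, and the names are illustrative:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  SmallVector<BasicBlock *, 8> descendantsOf(DominatorTree &DT, BasicBlock *BB) {
    SmallVector<BasicBlock *, 8> Result;
    if (DT.getNode(BB)) // null for unreachable blocks
      DT.getDescendants(BB, Result);
    return Result;
  }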
llvm-svn: 196099
CallGraph.
This makes the CallGraph a totally generic analysis object that is the
container for the graph data structure and the primary interface for
querying and manipulating it. The pass logic is separated into its own
class. For compatibility reasons, the pass provides wrapper methods for
most of the methods on CallGraph -- they all just forward.
This will allow the new pass manager infrastructure to provide its own
analysis pass that constructs the same CallGraph object and makes it
available. The idea is that in the new pass manager, the analysis pass's
'run' method returns a concrete analysis 'result'. Here, that result is
a 'CallGraph'. The 'run' method will typically do only minimal work,
deferring much of the work into the implementation of the result object
in order to be lazy about computing things, but when (like DomTree)
there is *some* up-front computation, the analysis does it prior to
handing the result back to the querying pass.
I know some of this is fairly ugly. I'm happy to change it around if
folks can suggest a cleaner interim state, but there is going to be some
amount of unavoidable ugliness during the transition period. The good
thing is that this is very limited and will naturally go away when the
old pass infrastructure goes away. It won't hang around to bother us
later.
Next up is the initial new-PM-style call graph analysis. =]
llvm-svn: 195722
proxy. This lets a function pass query a module analysis manager.
However, the interface is const to indicate that only cached results can
be safely queried.
With this, I think the new pass manager is largely functionally complete
for modules and analyses. Still lots to test, and need to generalize to
SCCs and Loops, and need to build an adaptor layer to support the use of
existing Pass objects in the new managers.
llvm-svn: 195538
results.
This is the last piece of infrastructure needed to effectively support
querying *up* the analysis layers. The next step will be to introduce
a proxy which provides access to those layers with appropriate use of
const to direct queries to the safe interface.
llvm-svn: 195525
one function's analyses are invalidated at a time. Also switch the
preservation of the proxy to *fully* preserve the lower (function)
analyses.
Combined, this gets both upward and downward analysis invalidation to
a point I'm happy with:
- A function pass invalidates its function analyses, and its parent's
module analyses.
- A module pass invalidates all of its functions' analyses including the
set of which functions are in the module.
- A function pass can preserve a module analysis pass.
- If all function passes preserve a module analysis pass, that
preservation persists. If any doesn't, the module analysis is
invalidated.
- A module pass can opt into managing *all* function analysis
invalidation itself or *none*.
- The conservative default is none, and the proxy takes the maximally
conservative approach that works even if the set of functions has
changed.
- If a module pass opts into managing function analysis invalidation it
has to propagate the invalidation itself, the proxy just does nothing.
The only thing really missing is a way to query for a cached analysis or
nothing at all. With this, function passes can more safely request
a cached module analysis pass without fear of it accidentally running
part way through.
llvm-svn: 195519
run methods of the analysis passes.
Also generalizes and re-uses the SFINAE for transformation passes so
that users can write an analysis pass and only accept an analysis
manager if that is useful to their pass.
This completes the plumbing to make an analysis manager available
through every pass's run method if desired so that passes no longer need
to be constructed around them.
llvm-svn: 195451
Since the analysis managers were split into explicit function and module
analysis managers, it is now completely trivial to specify this when
building up the concept and model types explicitly, and it is impossible
to end up with a type error at run time. We instantiate a template when
registering a pass that will enforce the requirement at a type-system
level, and we produce a dynamic error on all the other query paths to
the analysis manager if the pass in question isn't registered.
llvm-svn: 195447
This is supposed to be the whole type of the IR unit, and so we
shouldn't pass a pointer to it but rather the value itself. In turn, we
need to provide a 'Module *' as that type argument (for example). This
will become more relevant with SCCs or other units which may not be
passed as a pointer type, but also brings consistency with the
transformation pass templates.
llvm-svn: 195445
rather than the constructors of passes.
This simplifies the APIs of passes significantly and removes an error
prone pattern where the *same* manager had to be given to every
different layer. With the new API the analysis managers themselves will
have to be cross connected with proxy analyses that allow a pass at one
layer to query for the analysis manager of another layer. The proxy will
both expose a handle to the other layer's manager and it will provide
the invalidation hooks to ensure things remain consistent across layers.
Finally, the outer-most analysis manager has to be passed to the run
method of the outer-most pass manager. The rest of the propagation is
automatic.
I've used SFINAE again to allow passes to completely disregard the
analysis manager if they don't need or want to care. This helps keep
simple things simple for users of the new pass manager.
Also, the system specifically supports passing a null pointer into the
outer-most run method if your pass pipeline neither needs nor wants to
deal with analyses. I find this of dubious utility as while some
*passes* don't care about analysis, I'm not sure there are any
real-world users of the pass manager itself that need to avoid even
creating an analysis manager. But it is easy to support, so there we go.
Finally I renamed the module proxy for the function analysis manager to
the more verbose but less confusing name of
FunctionAnalysisManagerModuleProxy. I hate this name, but I have no idea
what else to name these things. I'm expecting in the fullness of time to
potentially have the complete cross product of types at the proxy layer:
{Module,SCC,Function,Loop,Region}AnalysisManager{Module,SCC,Function,Loop,Region}Proxy
(except for XAnalysisManagerXProxy which doesn't make any sense)
This should make it somewhat easier to do the next phases which is to
build the upward proxy and get its invalidation correct, as well as to
make the invalidation within the Module -> Function mapping pass be more
fine grained so as to invalidate fewer function analyses.
After all of the proxy analyses are done and the invalidation working,
I'll finally be able to start working on the next two fun fronts: how to
adapt an existing pass to work in both the legacy pass world and the new
one, and building the SCC, Loop, and Region counterparts. Fun times!
llvm-svn: 195400
it is completely optional, and sink the logic for handling the preserved
analysis set into it.
This allows us to implement the delegation logic desired in the proxy
module analysis for the function analysis manager where if the proxy
itself is preserved we assume the set of functions hasn't changed and we
do a fine grained invalidation by walking the functions in the module
and running the invalidate for them all at the manager level and letting
it try to invalidate any passes.
This in turn makes it blindingly obvious why we should hoist the
invalidate trait and have two collections of results. That allows
handling invalidation for almost all analyses without indirect calls and
it allows short circuiting when the preserved set is all.
llvm-svn: 195338
type and detect whether or not it provides an 'invalidate' member the
analysis manager should use.
This lets the overwhelming common case of *not* caring about custom
behavior when an analysis is invalidated be the obvious default
behavior with no code written by the author of an analysis. Only when
they write code specifically to handle invalidation does it get used.
Both cases are actually covered by tests here. The test analysis uses
the default behavior, and the proxy module analysis actually has custom
behavior on invalidation that is firing correctly. (In fact, this is the
analysis which was the primary motivation for having custom invalidation
behavior in the first place.)
llvm-svn: 195332
This proxy will fill the role of proxying invalidation events down IR
unit layers so that when a module changes we correctly invalidate
function analyses. Currently this is a very coarse solution -- any
change blows away the entire thing -- but the next step is to make
invalidation handling more nuanced so that we can propagate specific
amounts of invalidation from one layer to the next.
The test is extended to place a module pass between two function pass
managers each of which have preserved function analyses which get
correctly invalidated by the module pass that might have changed what
functions are even in the module.
llvm-svn: 195304
This adds a new set-like type which represents a set of preserved
analysis passes. The set is managed via the opaque PassT::ID() void*s.
The expected convenience templates for interacting with specific passes
are provided. It also supports a symbolic "all" state which is
represented by an invalid pointer in the set. This state is nicely
saturating as it comes up often. Finally, it supports intersection which
is used when finding the set of preserved passes after N different
transforms.
The pass API is then changed to return the preserved set rather than
a bool. This is much more self-documenting than the previous system.
Returning "none" is a conservatively correct solution just like
returning "true" from todays passes and not marking any passes as
preserved. Passes can also be dynamically preserved or not throughout
the run of the pass, and whatever gets returned is the binding state.
Finally, preserving "all" the passes is allowed for no-op transforms
that simply can't harm such things.
Finally, the analysis managers are changed to instead of blindly
invalidating all of the analyses, invalidate those which were not
preserved. This should rig up all of the basic preservation
functionality. This also correctly combines the preservation moving up
from one IR layer to another and the preservation aggregation across
N pass runs. Still to go is incrementally correct invalidation and
preservation across IR layers during N pass runs. That
will wait until we have a device for even exposing analyses across IR
layers.
While the core of this change is obvious, I'm not happy with the current
testing, so will improve it to cover at least some of the invalidation
that I can test easily in a subsequent commit.
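As a hedged sketch of the new return convention (using today's names; the pass itself is hypothetical and deliberately conservative about what it claims to preserve):

  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;

  struct StripBlockNamesPass : PassInfoMixin<StripBlockNamesPass> {
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
      bool Changed = false;
      for (BasicBlock &BB : F)
        if (BB.hasName()) {
          BB.setName("");
          Changed = true;
        }
      if (!Changed)
        return PreservedAnalyses::all(); // nothing touched, everything survives
      // Names changed but the CFG did not; keep the dominator tree alive and
      // (conservatively) let everything else be invalidated.
      PreservedAnalyses PA;
      PA.preserve<DominatorTreeAnalysis>();
      return PA;
    }
  };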
llvm-svn: 195241
The FunctionPassManager is now itself a function pass. When run over
a function, it runs all N of its passes over that function. This is the
1:N mapping in the pass dimension only. This allows it to be used in
either a ModulePassManager or potentially some other manager that
works on IR units which are supersets of Functions.
This commit also adds the obvious adaptor to map from a module pass to
a function pass, running the function pass across every function in the
module.
The test has been updated to use this new pattern.
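With today's spellings, the nesting described above looks roughly like this (the verifier is used only as a convenient stand-in for any function pass):

  #include "llvm/IR/PassManager.h"
  #include "llvm/IR/Verifier.h"
  #include <utility>
  using namespace llvm;

  void buildPipeline(ModulePassManager &MPM) {
    FunctionPassManager FPM;
    FPM.addPass(VerifierPass());
    // The adaptor runs the nested function pipeline over every function
    // in the module.
    MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
  }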
llvm-svn: 195192
a module-specific interface. This is the first of many steps necessary
to generalize the infrastructure such that we can support both
a Module-to-Function and Module-to-SCC-to-Function pass manager
nestings.
After a *lot* of attempts that never worked and didn't even make it to
a committable state, it became clear that I had gotten the layering
design of analyses flat out wrong. Four days later, I think I have most
of the plan for how to correct this, and I'm starting to reshape the
code into it. This is just a baby step I'm afraid, but starts separating
the fundamentally distinct concepts of function analysis passes and
module analysis passes so that in subsequent steps we can effectively
layer them, and have a consistent design for the eventual SCC layer.
As part of this, I've started some interface changes to make passes more
regular. The module pass accepts the module in the run method, and some
of the constructor parameters are gone. I'm still working out exactly
where constructor parameters vs. method parameters will be used, so
I expect this to fluctuate a bit.
This actually makes the invalidation less "correct" at this phase,
because now function passes don't invalidate module analysis passes, but
that was actually somewhat of a misfeature. It will return in a better
factored form which can scale to other units of IR. The documentation
has gotten less verbose and helpful.
llvm-svn: 195189
AnalysisManager. All this method did was assert something and we have
a perfectly good way to trigger that assert from the query path.
llvm-svn: 194947
more smarts in it. This is where most of the interesting logic that used
to live in the implicit-scheduling-hackery of the old pass manager will
live.
Like the previous commits, note that this is a very early prototype!
I expect substantial changes before this is ready to use.
The core of the design is the following:
- We have an AnalysisManager which can be used across a series of
passes over a module.
- The code setting up a pass pipeline registers the analyses available
with the manager.
- Individual transform passes can check that an analysis manager
provides the analyses they require in order to fail-fast.
- There is *no* implicit registration or scheduling.
- Analysis passes are different from other passes: they produce an
analysis result that is cached and made available via the analysis
manager.
- Cached results are invalidated automatically by the pass managers.
- When a transform pass requests an analysis result, either the analysis
is run to produce the result or a cached result is provided.
There are a few aspects of this design that I *know* will change in
subsequent commits:
- Currently there is no "preservation" system, that needs to be added.
- All of the analysis management should move up to the analysis library.
- The analysis management needs to support at least SCC passes. Maybe
loop passes. Living in the analysis library will facilitate this.
- Need support for analyses which are *both* module and function passes.
- Need support for pro-actively running module analyses to have cached
results within a function pass manager.
- Need a clear design for "immutable" passes.
- Need support for requesting cached results when available and not
re-running the pass even if that would be necessary.
- Need more thorough testing of all of this infrastructure.
There are other aspects that I view as open questions I'm hoping to
resolve as I iterate a bit on the infrastructure, and especially as
I start writing actual passes against this.
- Should we have separate management layers for function, module, and
SCC analyses? I think "yes", but I'm not yet ready to switch the code.
Adding SCC support will likely resolve this definitively.
- How should the 'require' functionality work? Should *that* be the only
way to request results to ensure that passes always require things?
- How should preservation work?
- Probably some other things I'm forgetting. =]
Look forward to more patches in shorter order now that this is in place.
llvm-svn: 194538
This is still just a skeleton. I'm trying to pull together the
experimentation I've done into committable chunks, and this is the first
coherent one. Others will follow in hopefully short order that move this
more toward a useful initial implementation. I still expect the design
to continue evolving in small ways as I work through the different
requirements and features needed here though.
Keep in mind, all of this is off by default.
Currently, this mostly exercises the use of a polymorphic smart pointer
and templates to hide the polymorphism for the pass manager from the
pass implementation. The next step will be more significant, adding the
first framework of analysis support.
llvm-svn: 194325
give the files a legacy prefix in the right directory. Use forwarding
headers in the old locations to paper over the name change for most
clients during the transitional period.
No functionality changed here! This is just clearing some space to
reduce renaming churn later on with a new system.
Even when the new stuff starts to go in, it is going to be hidden behind
a flag and off-by-default as it is still WIP and under development.
This patch is specifically designed so that very little out-of-tree code
has to change. I'm going to work as hard as I can to keep that the case.
Only direct forward declarations of the PassManager class are impacted
by this change.
llvm-svn: 194324
The function verifyFunction() in lib/IR/Verifier.cpp misses some
calls. It creates a temporary FunctionPassManager that will run a
single Verifier pass. Unfortunately, FunctionPassManager is not a
PassManager and does not call doInitialization() and doFinalization()
by itself. Verifier does important tasks in doInitialization() such as
collecting type information used to check DebugInfo metadata and
doFinalization() does some additional checks. Therefore these checks
were missed and debug info couldn't be verified at all, it just
crashed if the function had some.
verifyFunction() is currently not used in llvm unless the -debug option is
enabled, except in unittests/IR/VerifierTest.cpp.
VerifierTest had to be changed to create the function in a module from
which the type debug info can be collected.
Patch by Michael Kruse.
llvm-svn: 193719
Inspired by the object from the SLPVectorizer. This found a minor bug in the
debug loc restoration in the vectorizer where the location of a following
instruction was attached instead of the location from the original instruction.
llvm-svn: 191673
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were. I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.
This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804, 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736
llvm-svn: 190328
One form would accept a vector of pointers, and the other did not.
Make both accept vectors of pointers, and add an assertion
for the number of elements.
llvm-svn: 187464
This avoids constant folding bitcast/ptrtoint/inttoptr combinations
that have illegal bitcasts between differently sized address spaces.
llvm-svn: 187455
It will now only convert the arguments / return value and call
the underlying function if the types are able to be bitcasted.
This avoids using fp<->int conversions that would occur before.
llvm-svn: 187444
Add support for matching 'ordered' and 'unordered' floating point min/max
constructs.
In LLVM we can express min/max functions as a combination of compare and select.
We have support for matching such constructs for integers but not for floating
point. In floating point math there is no total order because of the presence of
'NaN'. Therefore, we have to be careful to preserve the original fcmp semantics
when interpreting floating point compare select combinations as a minimum or
maximum function. The resulting 'ordered/unordered' floating point maximum
function has to select the same value as the select/fcmp combination it is based
on.
ordered_max(x,y) = max(x,y) iff x and y are not NaN, y otherwise
unordered_max(x,y) = max(x,y) iff x and y are not NaN, x otherwise
ordered_min(x,y) = min(x,y) iff x and y are not NaN, y otherwise
unordered_min(x,y) = min(x,y) iff x and y are not NaN, x otherwise
This matches the behavior of the underlying select(fcmp(olt/ult/.., L, R), L, R)
construct.
Any code using this predicate has to preserve this semantics.
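Usage looks roughly like the following hedged sketch (the surrounding helper is invented):

  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // True if V is the select/fcmp form of an ordered floating point maximum;
  // on success X and Y are bound to the compared operands.
  bool isOrderedFMax(Value *V, Value *&X, Value *&Y) {
    return match(V, m_OrdFMax(m_Value(X), m_Value(Y)));
  }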
A follow-up patch will use this to implement floating point min/max reductions
in the vectorizer.
radar://13723044
llvm-svn: 181143