llvm-project/llvm/lib/Passes/PassBuilder.cpp

//===- Parsing, selection, and construction of pass pipelines -------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This file provides the implementation of the PassBuilder based on our
/// static pass registry as well as related functionality. It also provides
/// helpers to aid in analyzing, debugging, and testing passes and pass
/// pipelines.
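///
/// A textual pipeline description such as
/// "module(pass_a,pass_b,function(pass_c,pass_d))" or a default pipeline
/// alias such as "default<O2>" is parsed here into the corresponding nested
/// pass manager structure.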
///
//===----------------------------------------------------------------------===//
#include "llvm/Passes/PassBuilder.h"
#include "llvm/ADT/StringSwitch.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/AliasAnalysisEvaluator.h"
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/BasicAliasAnalysis.h"
#include "llvm/Analysis/BlockFrequencyInfo.h"
#include "llvm/Analysis/BlockFrequencyInfoImpl.h"
#include "llvm/Analysis/BranchProbabilityInfo.h"
#include "llvm/Analysis/CFGPrinter.h"
#include "llvm/Analysis/CFLAndersAliasAnalysis.h"
#include "llvm/Analysis/CFLSteensAliasAnalysis.h"
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/CallGraph.h"
#include "llvm/Analysis/DemandedBits.h"
#include "llvm/Analysis/DependenceAnalysis.h"
#include "llvm/Analysis/DominanceFrontier.h"
#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/Analysis/IVUsers.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/Analysis/LazyValueInfo.h"
#include "llvm/Analysis/LoopAccessAnalysis.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/MemoryDependenceAnalysis.h"
#include "llvm/Analysis/ModuleSummaryAnalysis.h"
#include "llvm/Analysis/OptimizationDiagnosticInfo.h"
#include "llvm/Analysis/PostDominators.h"
#include "llvm/Analysis/ProfileSummaryInfo.h"
#include "llvm/Analysis/RegionInfo.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionAliasAnalysis.h"
#include "llvm/Analysis/ScopedNoAliasAA.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/TypeBasedAliasAnalysis.h"
#include "llvm/CodeGen/PreISelIntrinsicLowering.h"
#include "llvm/CodeGen/UnreachableBlockElim.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/IRPrintingPasses.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/Regex.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Transforms/GCOVProfiler.h"
#include "llvm/Transforms/IPO/AlwaysInliner.h"
#include "llvm/Transforms/IPO/ConstantMerge.h"
#include "llvm/Transforms/IPO/CrossDSOCFI.h"
#include "llvm/Transforms/IPO/DeadArgumentElimination.h"
#include "llvm/Transforms/IPO/ElimAvailExtern.h"
#include "llvm/Transforms/IPO/ForceFunctionAttrs.h"
#include "llvm/Transforms/IPO/FunctionAttrs.h"
#include "llvm/Transforms/IPO/FunctionImport.h"
#include "llvm/Transforms/IPO/GlobalDCE.h"
#include "llvm/Transforms/IPO/GlobalOpt.h"
#include "llvm/Transforms/IPO/GlobalSplit.h"
#include "llvm/Transforms/IPO/InferFunctionAttrs.h"
#include "llvm/Transforms/IPO/Internalize.h"
#include "llvm/Transforms/IPO/LowerTypeTests.h"
#include "llvm/Transforms/IPO/PartialInlining.h"
#include "llvm/Transforms/IPO/SCCP.h"
#include "llvm/Transforms/IPO/StripDeadPrototypes.h"
#include "llvm/Transforms/IPO/WholeProgramDevirt.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"
#include "llvm/Transforms/InstrProfiling.h"
#include "llvm/Transforms/PGOInstrumentation.h"
#include "llvm/Transforms/SampleProfile.h"
#include "llvm/Transforms/Scalar/ADCE.h"
#include "llvm/Transforms/Scalar/AlignmentFromAssumptions.h"
#include "llvm/Transforms/Scalar/BDCE.h"
#include "llvm/Transforms/Scalar/ConstantHoisting.h"
#include "llvm/Transforms/Scalar/CorrelatedValuePropagation.h"
#include "llvm/Transforms/Scalar/DCE.h"
#include "llvm/Transforms/Scalar/DeadStoreElimination.h"
#include "llvm/Transforms/Scalar/EarlyCSE.h"
#include "llvm/Transforms/Scalar/Float2Int.h"
#include "llvm/Transforms/Scalar/GVN.h"
#include "llvm/Transforms/Scalar/GuardWidening.h"
#include "llvm/Transforms/Scalar/IndVarSimplify.h"
#include "llvm/Transforms/Scalar/JumpThreading.h"
#include "llvm/Transforms/Scalar/LICM.h"
#include "llvm/Transforms/Scalar/LoopDataPrefetch.h"
#include "llvm/Transforms/Scalar/LoopDeletion.h"
#include "llvm/Transforms/Scalar/LoopDistribute.h"
#include "llvm/Transforms/Scalar/LoopIdiomRecognize.h"
#include "llvm/Transforms/Scalar/LoopInstSimplify.h"
#include "llvm/Transforms/Scalar/LoopRotation.h"
#include "llvm/Transforms/Scalar/LoopSimplifyCFG.h"
#include "llvm/Transforms/Scalar/LoopStrengthReduce.h"
#include "llvm/Transforms/Scalar/LoopUnrollPass.h"
#include "llvm/Transforms/Scalar/LowerAtomic.h"
#include "llvm/Transforms/Scalar/LowerExpectIntrinsic.h"
#include "llvm/Transforms/Scalar/LowerGuardIntrinsic.h"
#include "llvm/Transforms/Scalar/MemCpyOptimizer.h"
#include "llvm/Transforms/Scalar/MergedLoadStoreMotion.h"
#include "llvm/Transforms/Scalar/NaryReassociate.h"
#include "llvm/Transforms/Scalar/PartiallyInlineLibCalls.h"
#include "llvm/Transforms/Scalar/Reassociate.h"
#include "llvm/Transforms/Scalar/SCCP.h"
#include "llvm/Transforms/Scalar/SROA.h"
#include "llvm/Transforms/Scalar/SimplifyCFG.h"
#include "llvm/Transforms/Scalar/Sink.h"
#include "llvm/Transforms/Scalar/SpeculativeExecution.h"
#include "llvm/Transforms/Scalar/TailRecursionElimination.h"
#include "llvm/Transforms/Utils/AddDiscriminators.h"
#include "llvm/Transforms/Utils/BreakCriticalEdges.h"
#include "llvm/Transforms/Utils/LCSSA.h"
#include "llvm/Transforms/Utils/LibCallsShrinkWrap.h"
#include "llvm/Transforms/Utils/LoopSimplify.h"
#include "llvm/Transforms/Utils/LowerInvoke.h"
#include "llvm/Transforms/Utils/Mem2Reg.h"
#include "llvm/Transforms/Utils/MemorySSA.h"
#include "llvm/Transforms/Utils/NameAnonGlobals.h"
#include "llvm/Transforms/Utils/SimplifyInstructions.h"
#include "llvm/Transforms/Utils/SymbolRewriter.h"
#include "llvm/Transforms/Vectorize/LoopVectorize.h"
#include "llvm/Transforms/Vectorize/SLPVectorizer.h"
#include <type_traits>
using namespace llvm;
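
// Recognizes the shorthand aliases for whole default pipelines in pipeline
// text, e.g. "default<O2>", "lto-pre-link<Os>", or "lto<O3>"; the pipeline
// parser expands these into the corresponding default pipeline construction.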
static Regex DefaultAliasRegex("^(default|lto-pre-link|lto)<(O[0123sz])>$");
namespace {
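
// These no-op passes and analyses do no work; they are exposed through the
// pipeline parser (e.g. as "no-op-module") so that tests can exercise pass
// manager structure and pipeline parsing without running any real
// transformations.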
/// \brief No-op module pass which does nothing.
struct NoOpModulePass {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    return PreservedAnalyses::all();
  }
  static StringRef name() { return "NoOpModulePass"; }
};
/// \brief No-op module analysis.
class NoOpModuleAnalysis : public AnalysisInfoMixin<NoOpModuleAnalysis> {
  friend AnalysisInfoMixin<NoOpModuleAnalysis>;
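  // The address of this key uniquely identifies the analysis to the analysis
  // manager (see AnalysisInfoMixin); the mixin is befriended above so it can
  // take the key's address.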
  static AnalysisKey Key;

public:
  struct Result {};
  Result run(Module &, ModuleAnalysisManager &) { return Result(); }
  static StringRef name() { return "NoOpModuleAnalysis"; }
};
/// \brief No-op CGSCC pass which does nothing.
struct NoOpCGSCCPass {
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &,
                        LazyCallGraph &, CGSCCUpdateResult &UR) {
    return PreservedAnalyses::all();
  }
  static StringRef name() { return "NoOpCGSCCPass"; }
};
/// \brief No-op CGSCC analysis.
class NoOpCGSCCAnalysis : public AnalysisInfoMixin<NoOpCGSCCAnalysis> {
  friend AnalysisInfoMixin<NoOpCGSCCAnalysis>;
  static AnalysisKey Key;

public:
  struct Result {};
Result run(LazyCallGraph::SCC &, CGSCCAnalysisManager &, LazyCallGraph &G) {
return Result();
}
static StringRef name() { return "NoOpCGSCCAnalysis"; }
};
/// \brief No-op function pass which does nothing.
struct NoOpFunctionPass {
PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
return PreservedAnalyses::all();
}
static StringRef name() { return "NoOpFunctionPass"; }
};
/// \brief No-op function analysis.
class NoOpFunctionAnalysis : public AnalysisInfoMixin<NoOpFunctionAnalysis> {
friend AnalysisInfoMixin<NoOpFunctionAnalysis>;
static AnalysisKey Key;
public:
struct Result {};
Result run(Function &, FunctionAnalysisManager &) { return Result(); }
static StringRef name() { return "NoOpFunctionAnalysis"; }
};
/// \brief No-op loop pass which does nothing.
struct NoOpLoopPass {
PreservedAnalyses run(Loop &L, LoopAnalysisManager &) {
return PreservedAnalyses::all();
}
static StringRef name() { return "NoOpLoopPass"; }
};
/// \brief No-op loop analysis.
class NoOpLoopAnalysis : public AnalysisInfoMixin<NoOpLoopAnalysis> {
friend AnalysisInfoMixin<NoOpLoopAnalysis>;
static AnalysisKey Key;
public:
struct Result {};
Result run(Loop &, LoopAnalysisManager &) { return Result(); }
static StringRef name() { return "NoOpLoopAnalysis"; }
};
AnalysisKey NoOpModuleAnalysis::Key;
AnalysisKey NoOpCGSCCAnalysis::Key;
AnalysisKey NoOpFunctionAnalysis::Key;
AnalysisKey NoOpLoopAnalysis::Key;
} // End anonymous namespace.
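// Analysis registration below is driven by the PassRegistry.def X-macro file:
// each register function defines the corresponding *_ANALYSIS macro to call
// registerPass on its analysis manager and then includes the .def file so
// every listed analysis is registered. An entry is assumed to look roughly
// like the following (illustrative; PassRegistry.def is authoritative):
//
//   MODULE_ANALYSIS("no-op-module", NoOpModuleAnalysis())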
void PassBuilder::registerModuleAnalyses(ModuleAnalysisManager &MAM) {
#define MODULE_ANALYSIS(NAME, CREATE_PASS) \
MAM.registerPass([&] { return CREATE_PASS; });
#include "PassRegistry.def"
}
void PassBuilder::registerCGSCCAnalyses(CGSCCAnalysisManager &CGAM) {
#define CGSCC_ANALYSIS(NAME, CREATE_PASS) \
CGAM.registerPass([&] { return CREATE_PASS; });
#include "PassRegistry.def"
}
void PassBuilder::registerFunctionAnalyses(FunctionAnalysisManager &FAM) {
#define FUNCTION_ANALYSIS(NAME, CREATE_PASS) \
FAM.registerPass([&] { return CREATE_PASS; });
#include "PassRegistry.def"
}
void PassBuilder::registerLoopAnalyses(LoopAnalysisManager &LAM) {
#define LOOP_ANALYSIS(NAME, CREATE_PASS) \
LAM.registerPass([&] { return CREATE_PASS; });
#include "PassRegistry.def"
}
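// Pre-configured pipelines. These back the "default<Ox>", "lto-pre-link<Ox>",
// and "lto<Ox>" aliases handled by the pipeline parser below.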
void PassBuilder::addPerModuleDefaultPipeline(ModulePassManager &MPM,
OptimizationLevel Level,
bool DebugLogging) {
// FIXME: Finish fleshing this out to match the legacy pipelines.
FunctionPassManager EarlyFPM(DebugLogging);
EarlyFPM.addPass(SimplifyCFGPass());
EarlyFPM.addPass(SROA());
EarlyFPM.addPass(EarlyCSEPass());
EarlyFPM.addPass(LowerExpectIntrinsicPass());
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(EarlyFPM)));
}
void PassBuilder::addLTOPreLinkDefaultPipeline(ModulePassManager &MPM,
OptimizationLevel Level,
bool DebugLogging) {
// FIXME: We should use a customized pre-link pipeline!
addPerModuleDefaultPipeline(MPM, Level, DebugLogging);
}
void PassBuilder::addLTODefaultPipeline(ModulePassManager &MPM,
OptimizationLevel Level,
bool DebugLogging) {
// FIXME: Finish fleshing this out to match the legacy LTO pipelines.
FunctionPassManager LateFPM(DebugLogging);
LateFPM.addPass(InstCombinePass());
LateFPM.addPass(SimplifyCFGPass());
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(LateFPM)));
}
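// Parse a "repeat<N>" pass name into its count, e.g. "repeat<3>" yields 3.
// Returns None if the name is not of that form or N is not a positive integer.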
static Optional<int> parseRepeatPassName(StringRef Name) {
if (!Name.consume_front("repeat<") || !Name.consume_back(">"))
return None;
int Count;
if (Name.getAsInteger(0, Count) || Count <= 0)
return None;
return Count;
}
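// The is*PassName predicates below check whether a textual name is valid at a
// given pipeline nesting level: a pass manager name, a "repeat<N>" wrapper,
// a pass registered in PassRegistry.def, or a "require<...>"/"invalidate<...>"
// analysis utility.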
static bool isModulePassName(StringRef Name) {
// Manually handle aliases for pre-configured pipeline fragments.
if (Name.startswith("default") || Name.startswith("lto"))
return DefaultAliasRegex.match(Name);
// Explicitly handle pass manager names.
if (Name == "module")
return true;
if (Name == "cgscc")
return true;
if (Name == "function")
return true;
// Explicitly handle custom-parsed pass names.
if (parseRepeatPassName(Name))
return true;
#define MODULE_PASS(NAME, CREATE_PASS) \
if (Name == NAME) \
return true;
#define MODULE_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">" || Name == "invalidate<" NAME ">") \
return true;
#include "PassRegistry.def"
return false;
}
static bool isCGSCCPassName(StringRef Name) {
// Explicitly handle pass manager names.
if (Name == "cgscc")
return true;
if (Name == "function")
return true;
// Explicitly handle custom-parsed pass names.
if (parseRepeatPassName(Name))
return true;
#define CGSCC_PASS(NAME, CREATE_PASS) \
if (Name == NAME) \
return true;
#define CGSCC_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">" || Name == "invalidate<" NAME ">") \
return true;
#include "PassRegistry.def"
return false;
}
static bool isFunctionPassName(StringRef Name) {
// Explicitly handle pass manager names.
if (Name == "function")
return true;
if (Name == "loop")
return true;
// Explicitly handle custom-parsed pass names.
if (parseRepeatPassName(Name))
return true;
#define FUNCTION_PASS(NAME, CREATE_PASS) \
if (Name == NAME) \
return true;
#define FUNCTION_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">" || Name == "invalidate<" NAME ">") \
return true;
#include "PassRegistry.def"
return false;
}
static bool isLoopPassName(StringRef Name) {
// Explicitly handle pass manager names.
if (Name == "loop")
return true;
// Explicitly handle custom-parsed pass names.
if (parseRepeatPassName(Name))
return true;
#define LOOP_PASS(NAME, CREATE_PASS) \
if (Name == NAME) \
return true;
#define LOOP_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">" || Name == "invalidate<" NAME ">") \
return true;
#include "PassRegistry.def"
return false;
}
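// Parse the textual pipeline description into a tree of PipelineElements.
// Names are separated by commas, and a '(' immediately after a name opens a
// nested pipeline running until the matching ')'. For example, "a,b(c,d),e"
// yields three top-level elements, the second carrying the inner pipeline
// "c,d". Returns None on unbalanced parentheses.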
Optional<std::vector<PassBuilder::PipelineElement>>
PassBuilder::parsePipelineText(StringRef Text) {
std::vector<PipelineElement> ResultPipeline;
SmallVector<std::vector<PipelineElement> *, 4> PipelineStack = {
&ResultPipeline};
for (;;) {
std::vector<PipelineElement> &Pipeline = *PipelineStack.back();
size_t Pos = Text.find_first_of(",()");
Pipeline.push_back({Text.substr(0, Pos), {}});
// If we have a single terminating name, we're done.
if (Pos == Text.npos)
break;
char Sep = Text[Pos];
Text = Text.substr(Pos + 1);
if (Sep == ',')
// Just a name ending in a comma, continue.
continue;
if (Sep == '(') {
// Push the inner pipeline onto the stack to continue processing.
PipelineStack.push_back(&Pipeline.back().InnerPipeline);
continue;
}
assert(Sep == ')' && "Bogus separator!");
// When handling a close parenthesis, we greedily consume any additional
// close parentheses to avoid empty strings in the pipeline.
do {
// If we try to pop the outer pipeline we have unbalanced parentheses.
if (PipelineStack.size() == 1)
return None;
PipelineStack.pop_back();
} while (Text.consume_front(")"));
// Check if we've finished parsing.
if (Text.empty())
break;
// Otherwise, the end of an inner pipeline always has to be followed by
// a comma, and then we can continue.
if (!Text.consume_front(","))
return None;
}
if (PipelineStack.size() > 1)
// Unbalanced parentheses.
return None;
assert(PipelineStack.back() == &ResultPipeline &&
"Wrong pipeline at the bottom of the stack!");
return {std::move(ResultPipeline)};
}
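// Parse a single module-level pipeline element. Nested "module", "cgscc", and
// "function" pipelines are wrapped in the appropriate pass manager or adaptor,
// "repeat<N>" wraps a repeated nested module pipeline, the "default"/"lto"
// aliases expand into the pre-configured pipelines above, and any other name
// is matched against the passes and analyses from PassRegistry.def.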
bool PassBuilder::parseModulePass(ModulePassManager &MPM,
const PipelineElement &E, bool VerifyEachPass,
bool DebugLogging) {
auto &Name = E.Name;
auto &InnerPipeline = E.InnerPipeline;
// First handle complex passes like the pass managers which carry pipelines.
if (!InnerPipeline.empty()) {
if (Name == "module") {
ModulePassManager NestedMPM(DebugLogging);
if (!parseModulePassPipeline(NestedMPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
MPM.addPass(std::move(NestedMPM));
return true;
}
if (Name == "cgscc") {
CGSCCPassManager CGPM(DebugLogging);
if (!parseCGSCCPassPipeline(CGPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM),
DebugLogging));
return true;
}
if (Name == "function") {
FunctionPassManager FPM(DebugLogging);
if (!parseFunctionPassPipeline(FPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
return true;
}
if (auto Count = parseRepeatPassName(Name)) {
ModulePassManager NestedMPM(DebugLogging);
if (!parseModulePassPipeline(NestedMPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
MPM.addPass(createRepeatedPass(*Count, std::move(NestedMPM)));
return true;
}
// Normal passes can't have pipelines.
return false;
}
// Manually handle aliases for pre-configured pipeline fragments.
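// These take the form "default<O2>", "lto-pre-link<Os>", or "lto<Oz>".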
if (Name.startswith("default") || Name.startswith("lto")) {
SmallVector<StringRef, 3> Matches;
if (!DefaultAliasRegex.match(Name, &Matches))
return false;
assert(Matches.size() == 3 && "Must capture two matched strings!");
OptimizationLevel L = StringSwitch<OptimizationLevel>(Matches[2])
.Case("O0", O0)
.Case("O1", O1)
.Case("O2", O2)
.Case("O3", O3)
.Case("Os", Os)
.Case("Oz", Oz);
if (Matches[1] == "default") {
addPerModuleDefaultPipeline(MPM, L, DebugLogging);
} else if (Matches[1] == "lto-pre-link") {
addLTOPreLinkDefaultPipeline(MPM, L, DebugLogging);
} else {
assert(Matches[1] == "lto" && "Not one of the matched options!");
addLTODefaultPipeline(MPM, L, DebugLogging);
}
return true;
}
// Finally expand the basic registered passes from the .def file.
#define MODULE_PASS(NAME, CREATE_PASS) \
if (Name == NAME) { \
MPM.addPass(CREATE_PASS); \
return true; \
}
#define MODULE_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">") { \
MPM.addPass( \
RequireAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type, Module>()); \
return true; \
} \
if (Name == "invalidate<" NAME ">") { \
MPM.addPass(InvalidateAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type>()); \
return true; \
}
#include "PassRegistry.def"
return false;
}
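// Parse a single CGSCC-level pipeline element. This mirrors parseModulePass:
// nested "cgscc" and "function" pipelines get the appropriate pass manager or
// adaptor, "repeat<N>" wraps a repeated nested CGSCC pipeline, and other names
// are matched against the registered CGSCC passes and analyses.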
bool PassBuilder::parseCGSCCPass(CGSCCPassManager &CGPM,
const PipelineElement &E, bool VerifyEachPass,
bool DebugLogging) {
auto &Name = E.Name;
auto &InnerPipeline = E.InnerPipeline;
// First handle complex passes like the pass managers which carry pipelines.
if (!InnerPipeline.empty()) {
if (Name == "cgscc") {
CGSCCPassManager NestedCGPM(DebugLogging);
if (!parseCGSCCPassPipeline(NestedCGPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
// Add the nested pass manager with the appropriate adaptor.
CGPM.addPass(std::move(NestedCGPM));
return true;
}
if (Name == "function") {
FunctionPassManager FPM(DebugLogging);
if (!parseFunctionPassPipeline(FPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
// Add the nested pass manager with the appropriate adaptor.
CGPM.addPass(
createCGSCCToFunctionPassAdaptor(std::move(FPM), DebugLogging));
return true;
}
if (auto Count = parseRepeatPassName(Name)) {
CGSCCPassManager NestedCGPM(DebugLogging);
if (!parseCGSCCPassPipeline(NestedCGPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
CGPM.addPass(createRepeatedPass(*Count, std::move(NestedCGPM)));
return true;
}
// Normal passes can't have pipelines.
return false;
}
// Now expand the basic registered passes from the .def file.
#define CGSCC_PASS(NAME, CREATE_PASS) \
if (Name == NAME) { \
CGPM.addPass(CREATE_PASS); \
return true; \
}
#define CGSCC_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">") { \
CGPM.addPass(RequireAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type, \
LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &, \
CGSCCUpdateResult &>()); \
return true; \
} \
if (Name == "invalidate<" NAME ">") { \
CGPM.addPass(InvalidateAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type>()); \
return true; \
}
#include "PassRegistry.def"
return false;
}
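
// Example (illustrative only) for parseCGSCCPass above: the "require<...>"
// and "invalidate<...>" forms expanded from the CGSCC_ANALYSIS entries map
// directly onto the textual pipeline syntax, so a pipeline string such as
//   "cgscc(require<some-cgscc-analysis>,invalidate<some-cgscc-analysis>)"
// forces the named analysis to be computed and then drops its cached result.
// The analysis name is a placeholder; real names come from PassRegistry.def.
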
bool PassBuilder::parseFunctionPass(FunctionPassManager &FPM,
const PipelineElement &E,
bool VerifyEachPass, bool DebugLogging) {
auto &Name = E.Name;
auto &InnerPipeline = E.InnerPipeline;
// First handle complex passes like the pass managers which carry pipelines.
if (!InnerPipeline.empty()) {
if (Name == "function") {
FunctionPassManager NestedFPM(DebugLogging);
if (!parseFunctionPassPipeline(NestedFPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
// Add the nested pass manager with the appropriate adaptor.
FPM.addPass(std::move(NestedFPM));
return true;
}
if (Name == "loop") {
LoopPassManager LPM(DebugLogging);
if (!parseLoopPassPipeline(LPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
// Add the nested pass manager with the appropriate adaptor.
FPM.addPass(createFunctionToLoopPassAdaptor(std::move(LPM)));
return true;
}
if (auto Count = parseRepeatPassName(Name)) {
FunctionPassManager NestedFPM(DebugLogging);
if (!parseFunctionPassPipeline(NestedFPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
FPM.addPass(createRepeatedPass(*Count, std::move(NestedFPM)));
return true;
}
// Normal passes can't have pipelines.
return false;
}
  // Now expand the basic registered passes from the PassRegistry.def file.
#define FUNCTION_PASS(NAME, CREATE_PASS) \
if (Name == NAME) { \
FPM.addPass(CREATE_PASS); \
return true; \
}
#define FUNCTION_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">") { \
FPM.addPass( \
RequireAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type, Function>()); \
return true; \
} \
if (Name == "invalidate<" NAME ">") { \
FPM.addPass(InvalidateAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type>()); \
return true; \
}
#include "PassRegistry.def"
return false;
}
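
// Example (illustrative only) for parseFunctionPass above: at the function
// layer the parser accepts nested managers as well as analysis utilities, so
// an element such as
//   "function(no-op-function,loop(no-op-loop),require<no-op-function-analysis>)"
// builds a nested FunctionPassManager, a loop adaptor wrapping a
// LoopPassManager, and a RequireAnalysisPass. The pass and analysis names are
// placeholders standing in for entries registered in PassRegistry.def.
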
bool PassBuilder::parseLoopPass(LoopPassManager &LPM, const PipelineElement &E,
bool VerifyEachPass, bool DebugLogging) {
StringRef Name = E.Name;
auto &InnerPipeline = E.InnerPipeline;
// First handle complex passes like the pass managers which carry pipelines.
if (!InnerPipeline.empty()) {
if (Name == "loop") {
LoopPassManager NestedLPM(DebugLogging);
if (!parseLoopPassPipeline(NestedLPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
// Add the nested pass manager with the appropriate adaptor.
LPM.addPass(std::move(NestedLPM));
return true;
}
if (auto Count = parseRepeatPassName(Name)) {
LoopPassManager NestedLPM(DebugLogging);
if (!parseLoopPassPipeline(NestedLPM, InnerPipeline, VerifyEachPass,
DebugLogging))
return false;
LPM.addPass(createRepeatedPass(*Count, std::move(NestedLPM)));
return true;
}
// Normal passes can't have pipelines.
return false;
}
  // Now expand the basic registered passes from the PassRegistry.def file.
#define LOOP_PASS(NAME, CREATE_PASS) \
if (Name == NAME) { \
LPM.addPass(CREATE_PASS); \
return true; \
}
#define LOOP_ANALYSIS(NAME, CREATE_PASS) \
if (Name == "require<" NAME ">") { \
LPM.addPass(RequireAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type, Loop>()); \
return true; \
} \
if (Name == "invalidate<" NAME ">") { \
LPM.addPass(InvalidateAnalysisPass< \
std::remove_reference<decltype(CREATE_PASS)>::type>()); \
return true; \
}
#include "PassRegistry.def"
return false;
}
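
// Example (illustrative only) for parseLoopPass above: loop pipeline elements
// follow the same pattern, e.g.
//   "loop(no-op-loop,require<no-op-loop-analysis>)"
// nests a LoopPassManager and forces a loop-level analysis by name. The names
// are placeholders for LOOP_PASS / LOOP_ANALYSIS entries in PassRegistry.def.
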
bool PassBuilder::parseAAPassName(AAManager &AA, StringRef Name) {
#define MODULE_ALIAS_ANALYSIS(NAME, CREATE_PASS) \
if (Name == NAME) { \
AA.registerModuleAnalysis< \
std::remove_reference<decltype(CREATE_PASS)>::type>(); \
return true; \
}
#define FUNCTION_ALIAS_ANALYSIS(NAME, CREATE_PASS) \
if (Name == NAME) { \
AA.registerFunctionAnalysis< \
std::remove_reference<decltype(CREATE_PASS)>::type>(); \
return true; \
}
#include "PassRegistry.def"
return false;
}
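
// Example (illustrative only) for parseAAPassName above: alias analyses are
// registered with the AAManager rather than added as passes, so a recognized
// name such as "some-module-aa" or "some-function-aa" (placeholders for the
// MODULE_ALIAS_ANALYSIS / FUNCTION_ALIAS_ANALYSIS entries in PassRegistry.def)
// results in a registerModuleAnalysis<...>() or registerFunctionAnalysis<...>()
// call on the AAManager.
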
bool PassBuilder::parseLoopPassPipeline(LoopPassManager &LPM,
ArrayRef<PipelineElement> Pipeline,
bool VerifyEachPass,
bool DebugLogging) {
for (const auto &Element : Pipeline) {
if (!parseLoopPass(LPM, Element, VerifyEachPass, DebugLogging))
return false;
// FIXME: No verifier support for Loop passes!
}
return true;
}
bool PassBuilder::parseFunctionPassPipeline(FunctionPassManager &FPM,
ArrayRef<PipelineElement> Pipeline,
bool VerifyEachPass,
bool DebugLogging) {
for (const auto &Element : Pipeline) {
if (!parseFunctionPass(FPM, Element, VerifyEachPass, DebugLogging))
return false;
if (VerifyEachPass)
FPM.addPass(VerifierPass());
}
return true;
}
bool PassBuilder::parseCGSCCPassPipeline(CGSCCPassManager &CGPM,
ArrayRef<PipelineElement> Pipeline,
bool VerifyEachPass,
bool DebugLogging) {
for (const auto &Element : Pipeline) {
if (!parseCGSCCPass(CGPM, Element, VerifyEachPass, DebugLogging))
return false;
// FIXME: No verifier support for CGSCC passes!
}
return true;
}
void PassBuilder::crossRegisterProxies(LoopAnalysisManager &LAM,
FunctionAnalysisManager &FAM,
CGSCCAnalysisManager &CGAM,
ModuleAnalysisManager &MAM) {
MAM.registerPass([&] { return FunctionAnalysisManagerModuleProxy(FAM); });
MAM.registerPass([&] { return CGSCCAnalysisManagerModuleProxy(CGAM); });
CGAM.registerPass([&] { return ModuleAnalysisManagerCGSCCProxy(MAM); });
FAM.registerPass([&] { return CGSCCAnalysisManagerFunctionProxy(CGAM); });
FAM.registerPass([&] { return ModuleAnalysisManagerFunctionProxy(MAM); });
FAM.registerPass([&] { return LoopAnalysisManagerFunctionProxy(LAM); });
LAM.registerPass([&] { return FunctionAnalysisManagerLoopProxy(FAM); });
}
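
// Example (illustrative sketch, not part of the library) for
// crossRegisterProxies above: typical client code constructs all four
// analysis managers, registers the analyses it needs into each of them (that
// registration step is assumed here and elided), and wires up the proxies
// before running a parsed pipeline:
//
//   PassBuilder PB;
//   LoopAnalysisManager LAM;
//   FunctionAnalysisManager FAM;
//   CGSCCAnalysisManager CGAM;
//   ModuleAnalysisManager MAM;
//   PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);
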
bool PassBuilder::parseModulePassPipeline(ModulePassManager &MPM,
ArrayRef<PipelineElement> Pipeline,
bool VerifyEachPass,
bool DebugLogging) {
for (const auto &Element : Pipeline) {
if (!parseModulePass(MPM, Element, VerifyEachPass, DebugLogging))
return false;
if (VerifyEachPass)
MPM.addPass(VerifierPass());
}
return true;
}
// Primary pass pipeline description parsing routine.
// FIXME: Should this routine accept a TargetMachine or require the caller to
// pre-populate the analysis managers with target-specific stuff?
bool PassBuilder::parsePassPipeline(ModulePassManager &MPM,
StringRef PipelineText, bool VerifyEachPass,
bool DebugLogging) {
auto Pipeline = parsePipelineText(PipelineText);
if (!Pipeline || Pipeline->empty())
return false;
// If the first name isn't at the module layer, wrap the pipeline up
// automatically.
StringRef FirstName = Pipeline->front().Name;
if (!isModulePassName(FirstName)) {
if (isCGSCCPassName(FirstName))
Pipeline = {{"cgscc", std::move(*Pipeline)}};
else if (isFunctionPassName(FirstName))
Pipeline = {{"function", std::move(*Pipeline)}};
else if (isLoopPassName(FirstName))
Pipeline = {{"function", {{"loop", std::move(*Pipeline)}}}};
else
// Unknown pass name!
return false;
}
return parseModulePassPipeline(MPM, *Pipeline, VerifyEachPass, DebugLogging);
}
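
// Example (illustrative sketch) for parsePassPipeline above: because of the
// wrapping logic, a pipeline whose first element is a function-level pass
// name, e.g. "no-op-function", is parsed as if it were
// "function(no-op-function)". The pass names below are placeholders for
// whatever PassRegistry.def registers:
//
//   PassBuilder PB;
//   ModulePassManager MPM(/*DebugLogging*/ false);
//   if (!PB.parsePassPipeline(MPM, "no-op-module,function(no-op-function)",
//                             /*VerifyEachPass*/ true,
//                             /*DebugLogging*/ false))
//     ; // malformed pipeline text or unknown pass name: report to the user
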
bool PassBuilder::parseAAPipeline(AAManager &AA, StringRef PipelineText) {
while (!PipelineText.empty()) {
StringRef Name;
std::tie(Name, PipelineText) = PipelineText.split(',');
if (!parseAAPassName(AA, Name))
return false;
}
return true;
}
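
// Example (illustrative only) for parseAAPipeline above: the alias-analysis
// pipeline is a flat, comma-separated list of AA names; parsing stops and
// returns false at the first unrecognized name. The names are placeholders
// for the alias analyses registered in PassRegistry.def:
//
//   PassBuilder PB;
//   AAManager AA;
//   bool Valid = PB.parseAAPipeline(AA, "some-function-aa,some-module-aa");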