This significantly simplifies the boilerplate necessary for passes
to define nested pass pipelines.
Differential Revision: https://reviews.llvm.org/D122880
This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into the mlir namespace to reduce
the immediate API churn, and the assembly format still accepts the unqualified
`func`. These temporary measures will remain for a little while to
simplify migration before being removed.
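As a rough illustration (not taken from the patch; the walk and the `getSymName` accessor are assumptions that may differ between MLIR versions), downstream code can switch to the qualified name right away:
```c++
#include "llvm/Support/raw_ostream.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/IR/BuiltinOps.h"

// Prefer the dialect-qualified type; the `mlir::FuncOp` re-export is only a
// temporary migration aid.
void listFunctions(mlir::ModuleOp module) {
  module.walk([](mlir::func::FuncOp fn) {
    llvm::outs() << "func: " << fn.getSymName() << "\n";
  });
}
```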
Differential Revision: https://reviews.llvm.org/D121266
The current decision of when to run the verifier relies on the
assumption that nested passes can't affect the validity of the parent
operation, which isn't true. Parent operations may attach any number
of constraints on nested operations, which may not necessarily be
captured (or shouldn't be captured) at a smaller granularity.
This commit rectifies this by properly running the verifier after an
OpToOpPassAdaptor pass. To avoid an explosive increase in compile time,
we only run verification on the parent operation itself. To do this, a
flag to mlir::verify is added to avoid recursive verification if it isn't
desired.
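A minimal sketch of the resulting API, assuming the flag is a boolean parameter on `mlir::verify`:
```c++
#include "mlir/IR/Verifier.h"

// Verify only `op` itself; nested operations are assumed to have already been
// verified by their own nested pipelines.
mlir::LogicalResult verifyParentOnly(mlir::Operation *op) {
  return mlir::verify(op, /*verifyRecursively=*/false);
}
```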
Fixes #54288
Differential Revision: https://reviews.llvm.org/D121836
Early adoption of new technologies, or adjusting certain code generation/IR
optimization thresholds, is often only available through cl::opt options (which
are an unstable surface). Specifying such an option twice currently leads to an error.
```
% clang -c a.c -mllvm -disable-binop-extract-shuffle -mllvm -disable-binop-extract-shuffle
clang (LLVM option parsing): for the --disable-binop-extract-shuffle option: may only occur zero or one times!
% clang -c a.c -mllvm -hwasan-instrument-reads=0 -mllvm -hwasan-instrument-reads=0
clang (LLVM option parsing): for the --hwasan-instrument-reads option: may only occur zero or one times!
% clang -c a.c -mllvm --scalar-evolution-max-arith-depth=32 -mllvm --scalar-evolution-max-arith-depth=16
clang (LLVM option parsing): for the --scalar-evolution-max-arith-depth option: may only occur zero or one times!
```
The option ends up specified twice because there is sometimes a global setting
that a specific file or project needs to override (or simply restate).
The error is contrary to the common practice of getopt/getopt_long command line
utilities, which let the last option win, and to the `getLastArg` behavior used by
Clang driver options. I have seen such errors several times. The error mostly
inconveniences users while providing very little value in discouraging production
use of unstable surfaces (a goal that is itself controversial, because developers
might not want to commit to a stable surface too early, or there is just some
subtle codegen toggle for which a driver option is infeasible). Therefore, I
suggest we drop the diagnostic, at least until the diagnostic gets sufficiently
better support for these overriding needs.
Removing the error does degrade error checking. If this checking behavior is
desirable, it should be enabled explicitly by tools; users preferring the
stricter behavior can figure out a way to do so.
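For comparison, a hedged sketch (the option name is hypothetical) of how an individual option can already opt into the getopt-style behavior by allowing repeated occurrences, in which case the last value parsed wins:
```c++
#include "llvm/Support/CommandLine.h"

// Hypothetical cl::opt: cl::ZeroOrMore tolerates the option being passed
// multiple times, and the final occurrence provides the value.
static llvm::cl::opt<unsigned> ExampleMaxDepth(
    "example-max-arith-depth",
    llvm::cl::desc("Illustrative threshold that may be specified repeatedly"),
    llvm::cl::init(32), llvm::cl::ZeroOrMore);
```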
Reviewed By: jhenderson, rnk
Differential Revision: https://reviews.llvm.org/D120455
This commit adds a new Pass hook, `bool canScheduleOn(RegisteredOperationName)`, that
indicates whether the given pass can be scheduled on operations of the given type. This makes it
easier to define constraints on generic passes without a) adding conditional checks to
the beginning of the `runOnOperation`, or b) defining a new pass type that forwards
from `runOnOperation` (after checking the invariants) to a new hook. This new hook is
used to implement an `InterfacePass` pass class, that represents a generic pass that
runs on operations of the given interface type.
The PassManager will also verify that passes added to a pass manager can actually be
scheduled on that pass manager, meaning that we will properly error when an InterfacePass
is scheduled on an operation that doesn't actually implement that interface.
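A rough sketch of what an `InterfacePass`-based pass could look like (the pass itself is made up, and header paths may differ between MLIR versions):
```c++
#include "mlir/IR/FunctionInterfaces.h"
#include "mlir/Pass/Pass.h"

// Because this is an InterfacePass, canScheduleOn() rejects operations that
// don't implement FunctionOpInterface, so no manual checks are needed here.
struct ExampleFunctionPass
    : public mlir::PassWrapper<ExampleFunctionPass,
                               mlir::InterfacePass<mlir::FunctionOpInterface>> {
  void runOnOperation() override {
    mlir::FunctionOpInterface fn = getOperation();
    fn->emitRemark() << "function-like op with " << fn.getNumArguments()
                     << " arguments";
  }
};
```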
Differential Revision: https://reviews.llvm.org/D120791
The option parser currently does not properly handle nested options, meaning that
in some cases we can print pass pipelines that we can't actually parse back in. For example,
from #52885 we currently can't parse in inliner pipelines that have nested pipeline strings.
This commit adds handling for string values (e.g. "...") and nested options
(e.g. `foo{baz{bar=10 pizza=11}}`).
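A hedged sketch of the parse side (the pipeline text uses the in-tree inliner's `default-pipeline` option, but treat the exact pass and option names as illustrative):
```c++
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"

// With this change, a pipeline string whose pass option is itself a quoted
// nested pipeline parses correctly instead of being rejected.
mlir::LogicalResult buildFromString(mlir::PassManager &pm) {
  return mlir::parsePassPipeline(
      "inline{default-pipeline=\"canonicalize, cse\"}", pm);
}
```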
Fixes #52885
Differential Revision: https://reviews.llvm.org/D118078
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
This avoids keeping references to passes that may be freed by
the time that the pass manager has finished executing (in the
non-crash case).
Fixes PR#52069
Differential Revision: https://reviews.llvm.org/D111106
Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special case the empty namespace, or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to cleanup some of the utilities to no longer handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated though)
Differential Revision: https://reviews.llvm.org/D105149
This revision refactors the usage of multithreaded utilities in MLIR to use a common
thread pool within the MLIR context, in addition to a new utility that makes writing
multi-threaded code in MLIR less error prone. Using a unified thread pool brings about
several advantages:
* Better thread usage and more control
We currently use the static llvm threading utilities, which do not allow multiple
levels of asynchronous scheduling (even if there are open threads). This is due to
how the current TaskGroup structure works, which only allows one truly multithreaded
instance at a time. By having our own ThreadPool we gain more control and flexibility
over our job/thread scheduling, and in a followup can enable threading more parts of
the compiler.
* The static nature of TaskGroup causes issues in certain configurations
Due to the static nature of TaskGroup, there have been quite a few problems related to
destruction that have caused several downstream projects to disable threading. See
D104207 for discussion on some related fallout. By having a ThreadPool scoped to
the context, we don't have to worry about destruction and can ensure that any
additional MLIR thread usage ends when the context is destroyed.
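A hedged sketch of the kind of utility this enables, assuming the helper referred to above is the `failableParallelForEach` routine layered on the context-owned thread pool:
```c++
#include "llvm/ADT/ArrayRef.h"
#include "mlir/IR/Threading.h"

// Process a set of operations in parallel on the context-owned thread pool.
// This degrades gracefully to serial execution when multithreading is
// disabled on the context.
mlir::LogicalResult processAll(mlir::MLIRContext *ctx,
                               llvm::ArrayRef<mlir::Operation *> ops) {
  return mlir::failableParallelForEach(ctx, ops, [](mlir::Operation *op) {
    // ... per-operation work that may fail ...
    return mlir::success();
  });
}
```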
Differential Revision: https://reviews.llvm.org/D104516
This allows for better interaction with tools (such as mlir-lsp-server), as it separates the IR into separate modules for consecutive dumps.
Differential Revision: https://reviews.llvm.org/D104073
This revision allows for attaching "debug labels" to patterns, and adds support to FrozenRewritePatternSet for filtering patterns based on these labels (in addition to the debug name of the pattern). This will greatly simplify writing tests targeted towards specific patterns (in cases where many patterns may interact), and will also simplify debugging pattern application by observing how application changes when enabling/disabling specific patterns.
To enable better reuse of pattern rewrite options between passes, this revision also adds a new PassUtil.td file to the Rewrite/ library that will allow for passes to easily hook into a common interface for pattern debugging. Two options are used to seed this utility, `disable-patterns` and `enable-patterns`, which are used to enable the filtering behavior indicated above.
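A rough sketch of both halves (the pattern is a stand-in, and the label-attaching helper shown is an assumption about the exact API):
```c++
#include "mlir/IR/PatternMatch.h"
#include "mlir/Rewrite/FrozenRewritePatternSet.h"

// A trivial placeholder pattern that matches any operation and never fires.
struct ExamplePattern : public mlir::RewritePattern {
  ExamplePattern(mlir::MLIRContext *ctx)
      : mlir::RewritePattern(MatchAnyOpTypeTag(), /*benefit=*/1, ctx) {}
  mlir::LogicalResult
  matchAndRewrite(mlir::Operation *op,
                  mlir::PatternRewriter &rewriter) const override {
    return mlir::failure();
  }
};

void buildFilteredPatterns(mlir::MLIRContext *ctx) {
  mlir::RewritePatternSet patterns(ctx);
  // Assumed helper: insert the pattern with a debug label attached.
  patterns.addWithLabel<ExamplePattern>(/*debugLabels=*/{"example-label"}, ctx);

  // Filtering happens when the set is frozen: labels (or pattern debug names)
  // listed as disabled are dropped from the frozen set.
  mlir::FrozenRewritePatternSet frozen(
      std::move(patterns),
      /*disabledPatternLabels=*/{"example-label"},
      /*enabledPatternLabels=*/{});
  (void)frozen;
}
```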
Differential Revision: https://reviews.llvm.org/D102441
The current implementation has several key limitations and quirks, e.g. local reproducers don't support dynamic pass pipelines, error messages don't include the passes that failed, etc. This revision refactors the implementation to support more use cases and also be much cleaner.
The main change in this revision, aside from moving the implementation out of Pass.cpp and into its own file, is the addition of a crash recovery pass instrumentation. For local reproducers, this instrumentation handles setting up the recovery context before executing each pass. For global reproducers, the instrumentation is used to provide a more detailed error message, containing information about which passes are running and on which operations.
Example of new message:
```
error: Failures have been detected while processing an MLIR pass pipeline
note: Pipeline failed while executing [`TestCrashRecoveryPass` on 'module' operation: @foo]: reproducer generated at `crash-recovery.mlir.tmp`
```
Differential Revision: https://reviews.llvm.org/D101854
This flag will print the IR after a pass only in the case where the pass failed. This can be useful to more easily view the invalid IR, without needing to print after every pass in the pipeline.
Differential Revision: https://reviews.llvm.org/D101853
This factors out the pass timing code into a separate `TimingManager`
that can be plugged into the `PassManager` from the outside. Users are
able to provide their own implementation of this manager, and use it to
time additional code paths outside of the pass manager. Also allows for
multiple `PassManager`s to run and contribute to a single timing report.
More specifically, moves most of the existing infrastructure in
`Pass/PassTiming.cpp` into a new `Support/Timing.cpp` file and adds a
public interface in `Support/Timing.h`. The `PassTiming` instrumentation
becomes a wrapper around the new timing infrastructure which adapts the
instrumentation callbacks to the new timers.
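A hedged sketch of the external setup this enables, loosely modeled on what a driver might do and assuming the `DefaultTimingManager`/`TimingScope` names from `Support/Timing.h`:
```c++
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Support/Timing.h"

// Share one timing manager between the surrounding tool and the pass manager
// so pass timers nest under the tool's root timer.
mlir::LogicalResult runWithTiming(mlir::ModuleOp module, mlir::MLIRContext &ctx) {
  mlir::DefaultTimingManager tm;
  tm.setEnabled(true);
  mlir::TimingScope rootScope = tm.getRootScope();

  mlir::PassManager pm(&ctx);
  pm.enableTiming(rootScope);
  return pm.run(module);
}
```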
Reviewed By: rriddle, lattner
Differential Revision: https://reviews.llvm.org/D100647
In LLVM_ENABLE_STATS=0 builds, `llvm::Statistic` maps to `llvm::NoopStatistic`
but has 3 mostly unused pointers. GlobalOpt considers that the pointers can
potentially retain allocated objects, so GlobalOpt cannot optimize out the
`NoopStatistic` variables (see D69428 for more context), wasting 23KiB for stage
2 clang.
This patch makes `NoopStatistic` empty and thus reclaims the wasted space. The
clang size is even smaller than applying D69428 (slightly smaller in both .bss and
.text).
```
# This means the D69428 optimization on clang is mostly nullified by this patch.
HEAD+D69428: size(.bss) = 0x0725a8
HEAD+D101211: size(.bss) = 0x072238
# bloaty - HEAD+D69428 vs HEAD+D101211
# With D101211, we also save a lot of string table space (.rodata).
    FILE SIZE        VM SIZE
 --------------   --------------
  -0.0%     -32   -0.0%     -24    .eh_frame
  -0.0%    -336   [ = ]       0    .symtab
  -0.0%    -360   [ = ]       0    .strtab
  [ = ]       0   -0.2%    -880    .bss
  -0.0% -2.11Ki   -0.0% -2.11Ki    .rodata
  -0.0% -2.89Ki   -0.0% -2.89Ki    .text
  -0.0% -5.71Ki   -0.0% -5.88Ki    TOTAL
```
Note: LoopFuse is a disabled pass. For now this patch adds
`#if LLVM_ENABLE_STATS` so `OptimizationRemarkMissed` is skipped in
LLVM_ENABLE_STATS==0 builds. If these `OptimizationRemarkMissed` are useful in
LLVM_ENABLE_STATS==0 builds, we can replace `llvm::Statistic` with
`llvm::TrackingStatistic`, or use a different abstraction to keep track of the strings.
Similarly, skip the code in `mlir/lib/Pass/PassStatistics.cpp` which
calls `getName`/`getDesc`/`getValue`.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D101211
Add a command line option to read the configuration dumped by the MLIR crash
reproducer and add it to the other command line options parsed by mlir-opt.
This is a simple convenience that enables `mlir-opt --run-reproducer /tmp/repro.mlir`
instead of needing to copy & paste the configuration.
Differential Revision: https://reviews.llvm.org/D93924
Now that passes support running nested pipelines, the inliner can allow users to provide proper nested pipelines to use for optimization during inlining. This revision also changes the behavior of optimization during inlining to optimize before attempting to inline, which should lead to a more accurate cost model and removes the need for users to schedule additional duplicate cleanup passes before/after the inliner that would already be run during inlining.
Differential Revision: https://reviews.llvm.org/D91211
Previously, the textual form of the pass pipeline would implicitly nest;
instead we opt for the explicit form here, which is less surprising.
This also avoids asserting in the bindings when passing a pass pipeline
with incorrect nesting.
Differential Revision: https://reviews.llvm.org/D91233
This simplifies a few parts of the pass manager; in particular, we no longer add one
verifier pass per pass in the pipeline, and we can now enable/disable the
verifier after the fact on an already built PassManager.
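For illustration, a minimal sketch of toggling verification after construction:
```c++
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

// Build a pipeline as usual, then opt out of the implicit verification runs,
// e.g. for a hot production configuration.
void buildPipeline(mlir::MLIRContext &ctx) {
  mlir::PassManager pm(&ctx);
  pm.addPass(mlir::createCanonicalizerPass());
  pm.enableVerifier(/*enabled=*/false);
}
```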
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D90669
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Recommit after fixing an ASAN issue: the callback lambda needs to be
stored in a temporary so that its lifetime extends to the end of the
current block instead of just the current call expression.
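A rough sketch of the mechanism (the pass, the attribute, and the nested pipeline here are all made up for illustration):
```c++
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Transforms/Passes.h"

// A pass that builds a pipeline at runtime and only runs it on modules that
// carry a particular attribute.
struct ConditionalOptPass
    : public mlir::PassWrapper<ConditionalOptPass,
                               mlir::OperationPass<mlir::ModuleOp>> {
  void runOnOperation() override {
    mlir::ModuleOp module = getOperation();
    if (!module->hasAttr("example.optimize")) // hypothetical marker attribute
      return;
    mlir::OpPassManager dynamicPM(mlir::ModuleOp::getOperationName());
    dynamicPM.addPass(mlir::createCanonicalizerPass());
    if (mlir::failed(runPipeline(dynamicPM, module)))
      signalPassFailure();
  }
};
```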
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
This reverts commit 385c3f43fc.
Test mlir/test/Pass:dynamic-pipeline-fail-on-parent.mlir.test fails
when run with ASAN:
ERROR: AddressSanitizer: stack-use-after-scope on address ...
Reviewed By: bkramer, pifon2a
Differential Revision: https://reviews.llvm.org/D88079
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
This crash only happens when a function pass is followed by a module
pass. In this case the splitting of the pass pipeline didn't properly
handle the verifier passes and ended up with an odd number of passes in
the pipeline, breaking an assumption of the local crash reproducer
executor and hitting an assertion.
Differential Revision: https://reviews.llvm.org/D88000
This is useful for several reasons:
* In some situations the user can guarantee that thread-safety isn't necessary and doesn't want to pay the cost of synchronization, e.g., when parsing a very large module.
* For things like logging, threading is not desirable as the output is not guaranteed to be in a stable order.
This flag also subsumes the pass manager flag for multi-threading.
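The programmatic equivalent, as a minimal sketch:
```c++
#include "mlir/IR/MLIRContext.h"

// Turn threading off for a context, e.g. while parsing a very large module or
// when log output must appear in a stable order, then turn it back on.
void parseSingleThreaded(mlir::MLIRContext &ctx) {
  ctx.disableMultithreading();
  // ... parse / log without synchronization overhead ...
  ctx.enableMultithreading();
}
```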
Differential Revision: https://reviews.llvm.org/D79266
This revision adds a mode to the crash reproducer generator to attempt to generate a more local reproducer. This will attempt to generate a reproducer right before the offending pass that fails. This is useful for the majority of failures that are specific to a single pass, and situations where some passes in the pipeline are not registered with a specific tool.
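A minimal usage sketch (the output path is arbitrary, and the boolean parameter name is an assumption):
```c++
#include "mlir/Pass/PassManager.h"

// With the local mode enabled, the reproducer is regenerated right before
// each pass, so a failure points at the offending pass rather than the start
// of the whole pipeline.
void setupReproducer(mlir::PassManager &pm) {
  pm.enableCrashReproducerGeneration("/tmp/repro.mlir",
                                     /*genLocalReproducer=*/true);
}
```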
Differential Revision: https://reviews.llvm.org/D78314
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remains the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.
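A rough sketch of the resulting style in today's API (pass and option names are invented):
```c++
#include "llvm/Support/CommandLine.h"
#include "mlir/Pass/Pass.h"

// Options are declared directly as members of the pass; parsing and
// round-trip printing are derived from these declarations.
struct ExampleOptPass
    : public mlir::PassWrapper<ExampleOptPass, mlir::OperationPass<>> {
  Option<int> threshold{*this, "threshold",
                        llvm::cl::desc("Illustrative threshold"),
                        llvm::cl::init(4)};

  void runOnOperation() override {
    if (threshold <= 0)
      return;
    // ... use `threshold` directly ...
  }
};
```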
PiperOrigin-RevId: 286953824
This adds an additional filtering mode for printing after a pass that checks to see if the pass actually changed the IR before printing it. This "change" detection is implemented using a SHA1 hash of the current operation and its children.
PiperOrigin-RevId: 284291089
It would be nice if we could detect if stats were enabled or not and use 'Requires', but this isn't possible to do at configure time.
Fixes tensorflow/mlir#296
PiperOrigin-RevId: 284200271
Statistics are a way to keep track of what the compiler is doing and how effective various optimizations are. It is useful to see what optimizations are contributing to making a particular program run faster. Pass-instance specific statistics take this even further as you can see the effect of placing a particular pass at specific places within the pass pipeline, e.g. they could help answer questions like "what happens if I run CSE again here".
Statistics can be added to a pass by simply adding members of type 'Pass::Statistic'. This class takes as constructor arguments the parent pass pointer, a name, and a description. Statistics can be dumped by the pass manager in a similar manner to how pass timing information is dumped, i.e. programmatically via PassManager::enableStatistics, or via the -pass-statistics and -pass-statistics-display command line pass manager options.
Below is an example:
struct MyPass : public OperationPass<MyPass> {
  Statistic testStat{this, "testStat", "A test statistic"};

  void runOnOperation() {
    ...
    ++testStat;
    ...
  }
};
$ mlir-opt -pass-pipeline='func(my-pass,my-pass)' foo.mlir -pass-statistics

Pipeline Display:
===-------------------------------------------------------------------------===
                        ... Pass statistics report ...
===-------------------------------------------------------------------------===
'func' Pipeline
  MyPass
    (S) 15 testStat - A test statistic
  MyPass
    (S) 6 testStat - A test statistic

List Display:
===-------------------------------------------------------------------------===
                        ... Pass statistics report ...
===-------------------------------------------------------------------------===
MyPass
  (S) 21 testStat - A test statistic
PiperOrigin-RevId: 284022014
This allows for them to be used on other non-function, or even other function-like, operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.
PiperOrigin-RevId: 276573038
This cl adds support for generating a .mlir file containing a reproducer for crashes and failures that happen during pass execution. The reproducer contains a comment detailing the configuration of the pass manager (e.g. the textual description of the pass pipeline that the pass manager was executing), along with the original input module.
Example Output:
// configuration: -pass-pipeline='func(cse, canonicalize), inline'
// note: verifyPasses=false
module {
...
}
PiperOrigin-RevId: 274088134
Allow printing out pipelines in a format that is as close as possible to the
textual pass pipeline format. Individual passes can override the print function
in order to format any options that may have been used to construct that pass.
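For example, a minimal sketch of dumping a pipeline from C++:
```c++
#include "llvm/Support/raw_ostream.h"
#include "mlir/Pass/PassManager.h"

// Print the pass manager in (approximately) the same textual form that
// -pass-pipeline accepts.
void dumpPipeline(mlir::OpPassManager &pm) {
  pm.printAsTextualPipeline(llvm::errs());
  llvm::errs() << "\n";
}
```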
PiperOrigin-RevId: 273813627
This allows individual passes to define options structs and for these options to be parsed per instance of the pass while building the pass pipeline from the command line provided textual specification.
The user can specify these per-instance pipeline options like so:
```
struct MyPassOptions : public PassOptions<MyPassOptions> {
  Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
  List<int> exampleListOption{*this, "list-flag-name", llvm::cl::desc("...")};
};
static PassRegistration<MyPass, MyPassOptions> pass("my-pass", "description");
```
PiperOrigin-RevId: 273650140
This allows for users other than those on the command line to apply a textual description of a pipeline to a given pass manager.
PiperOrigin-RevId: 269017028
This allows for explicitly specifying the pipeline to add to the pass manager. This includes the nesting structure, as well as the passes/pipelines to run. A textual pipeline string is defined as a series of names, each of which may in itself recursively contain a nested pipeline description. A name is either the name of a registered pass, or pass pipeline, (e.g. "cse") or the name of an operation type (e.g. "func").
For example, the following pipeline:
$ mlir-opt foo.mlir -cse -canonicalize -lower-to-llvm
Could now be specified as:
$ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), lower-to-llvm'
This will allow for running pipelines on nested operations, like say spirv modules. This does not remove any of the current functionality, and in fact can be used in unison. The new option is available via 'pass-pipeline'.
PiperOrigin-RevId: 268954279
This allows for parallelizing across pipelines of multiple operation types. AdaptorPasses can now hold pass managers for multiple operation types and will dispatch based upon the operation being operated on.
PiperOrigin-RevId: 268017344
This is done via a new set of instrumentation hooks runBeforePipeline/runAfterPipeline, that signal the lifetime of a pass pipeline on a specific operation type. These hooks also provide the parent thread of the pipeline, allowing for accurate merging of timers running on different threads.
PiperOrigin-RevId: 267909193
This change generalizes the structure of the pass manager to allow arbitrary nesting pass managers for other operations, at any level. The only user visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:
// Pass manager for the top-level module.
PassManager pm(ctx);

// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);

// Nest another pipeline on the top-level module that operates on spirv::ModuleOp.
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();
To help accomplish this a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, that simply runs the verifier opaquely on the current operation.
/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};
PiperOrigin-RevId: 266840344
When multi-threading is enabled in the pass manager the meaning of the display
slightly changes. First, a new timing column is added, `User Time`, that
displays the total time spent across all threads. Secondly, the `Wall Time`
column displays the longest individual time spent amongst all of the threads.
This means that the `Wall Time` column will continue to give an indicator on the
perceived time, or clock time, whereas the `User Time` will display the total
cpu time.
Example:
$ mlir-opt foo.mlir -experimental-mt-pm -cse -canonicalize -convert-to-llvmir -pass-timing
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 0.0078 seconds

   ---User Time---   ---Wall Time---   --- Name ---
   0.0175 ( 88.3%)   0.0055 ( 70.4%)   Function Pipeline
   0.0018 (  9.3%)   0.0006 (  8.1%)     CSE
   0.0013 (  6.3%)   0.0004 (  5.8%)       (A) DominanceInfo
   0.0017 (  8.7%)   0.0006 (  7.1%)     FunctionVerifier
   0.0128 ( 64.6%)   0.0039 ( 50.5%)     Canonicalizer
   0.0011 (  5.7%)   0.0004 (  4.7%)     FunctionVerifier
   0.0004 (  2.1%)   0.0004 (  5.2%)   ModuleVerifier
   0.0010 (  5.3%)   0.0010 ( 13.4%)   LLVMLowering
   0.0009 (  4.3%)   0.0009 ( 11.0%)   ModuleVerifier
   0.0198 (100.0%)   0.0078 (100.0%)   Total
PiperOrigin-RevId: 240636269
* print-ir-before=(comma-separated-pass-list)
  - Print the IR before each of the passes provided within the pass list.
* print-ir-before-all
  - Print the IR before every pass in the pipeline.
* print-ir-after=(comma-separated-pass-list)
  - Print the IR after each of the passes provided within the pass list.
* print-ir-after-all
  - Print the IR after every pass in the pipeline.
* print-ir-module-scope
  - Always print the Module IR, even for non module passes.
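As a companion to the flags above, a minimal sketch of the programmatic entry point using its default configuration (the exact overloads for per-pass filtering may differ across versions):
```c++
#include "mlir/Pass/PassManager.h"

// Attach the default IR-printing instrumentation to a pass manager; by
// default this prints the IR after every pass.
void enablePrinting(mlir::PassManager &pm) {
  pm.enableIRPrinting();
}
```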
PiperOrigin-RevId: 238523649