Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special-case the empty namespace or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to clean up some of the utilities so they no longer need to handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated, though.)
Differential Revision: https://reviews.llvm.org/D105149
This revision refactors the usage of multithreaded utilities in MLIR to use a common
thread pool within the MLIR context, and adds a new utility that makes writing
multi-threaded code in MLIR less error prone. Using a unified thread pool brings
several advantages:
* Better thread usage and more control
We currently use the static llvm threading utilities, which do not allow multiple
levels of asynchronous scheduling (even if there are open threads). This is due to
how the current TaskGroup structure works, which only allows one truly multithreaded
instance at a time. By having our own ThreadPool we gain more control and flexibility
over our job/thread scheduling, and in a followup can enable threading more parts of
the compiler.
* The static nature of TaskGroup causes issues in certain configurations
Due to the static nature of TaskGroup, there have been quite a few problems related to
destruction that have caused several downstream projects to disable threading. See
D104207 for discussion on some related fallout. By having a ThreadPool scoped to
the context, we don't have to worry about destruction and can ensure that any
additional MLIR thread usage ends when the context is destroyed.
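As a rough illustration (a minimal sketch; the helper name and surrounding function are assumptions based on the utility described above, not verbatim from this change), work can be dispatched onto the context-owned pool and automatically falls back to serial execution when multithreading is disabled on the context:
```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/Threading.h"
#include "mlir/IR/Verifier.h"

// Sketch: verify a set of functions in parallel on the context-owned thread
// pool; the helper runs the work serially if multithreading is disabled.
static mlir::LogicalResult
verifyAllFunctions(mlir::MLIRContext *ctx,
                   llvm::MutableArrayRef<mlir::FuncOp> funcs) {
  return mlir::failableParallelForEach(ctx, funcs, [](mlir::FuncOp func) {
    return mlir::verify(func);
  });
}
```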
Differential Revision: https://reviews.llvm.org/D104516
This allows for better interaction with tools (such as mlir-lsp-server), as it separates the IR into separate modules for consecutive dumps.
Differential Revision: https://reviews.llvm.org/D104073
This revision allows for attaching "debug labels" to patterns, and adds support to FrozenRewritePatternSet for filtering patterns based on these labels (in addition to the debug name of the pattern). This will greatly simplify writing tests targeted towards specific patterns (in cases where many patterns may interact), and will also simplify debugging pattern application by observing how application changes when specific patterns are enabled/disabled.
To enable better reuse of pattern rewrite options between passes, this revision also adds a new PassUtil.td file to the Rewrite/ library that will allow for passes to easily hook into a common interface for pattern debugging. Two options are used to seed this utility, `disable-patterns` and `enable-patterns`, which are used to enable the filtering behavior indicated above.
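For illustration, a minimal sketch of how a pattern might carry a label and how a frozen set could filter on it (the op name, pattern name, and label below are made up, and the exact label/filtering API is assumed from the description above):
```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Rewrite/FrozenRewritePatternSet.h"
#include <string>
#include <vector>

// Hypothetical pattern carrying a debug name and a debug label.
struct ExpandFooPattern : public mlir::RewritePattern {
  ExpandFooPattern(mlir::MLIRContext *ctx)
      : RewritePattern("test.foo", /*benefit=*/1, ctx) {
    setDebugName("ExpandFooPattern");
    addDebugLabels({"FooExpansion"});
  }
  mlir::LogicalResult
  matchAndRewrite(mlir::Operation *op,
                  mlir::PatternRewriter &rewriter) const override {
    return mlir::failure(); // rewrite logic elided
  }
};

// Freeze the set, keeping only patterns that carry the "FooExpansion" label
// (or debug name); everything else is filtered out.
mlir::FrozenRewritePatternSet freezeFiltered(mlir::MLIRContext *ctx) {
  mlir::RewritePatternSet patterns(ctx);
  patterns.add<ExpandFooPattern>(ctx);
  std::vector<std::string> enabled = {"FooExpansion"};
  return mlir::FrozenRewritePatternSet(std::move(patterns),
                                       /*disabledPatternLabels=*/{}, enabled);
}
```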
Differential Revision: https://reviews.llvm.org/D102441
The current implementation has several key limitations and oddities, e.g. local reproducers don't support dynamic pass pipelines, error messages don't include the passes that failed, etc. This revision refactors the implementation to support more use cases, and also to be much cleaner.
The main change in this revision, aside from moving the implementation out of Pass.cpp and into its own file, is the addition of a crash recovery pass instrumentation. For local reproducers, this instrumentation handles setting up the recovery context before executing each pass. For global reproducers, the instrumentation is used to provide a more detailed error message, containing information about which passes are running and on which operations.
Example of new message:
```
error: Failures have been detected while processing an MLIR pass pipeline
note: Pipeline failed while executing [`TestCrashRecoveryPass` on 'module' operation: @foo]: reproducer generated at `crash-recovery.mlir.tmp`
```
Differential Revision: https://reviews.llvm.org/D101854
This flag will print the IR after a pass only in the case where the pass failed. This can be useful to more easily view the invalid IR, without needing to print after every pass in the pipeline.
Differential Revision: https://reviews.llvm.org/D101853
This factors out the pass timing code into a separate `TimingManager`
that can be plugged into the `PassManager` from the outside. Users are
able to provide their own implementation of this manager, and use it to
time additional code paths outside of the pass manager. Also allows for
multiple `PassManager`s to run and contribute to a single timing report.
More specifically, moves most of the existing infrastructure in
`Pass/PassTiming.cpp` into a new `Support/Timing.cpp` file and adds a
public interface in `Support/Timing.h`. The `PassTiming` instrumentation
becomes a wrapper around the new timing infrastructure which adapts the
instrumentation callbacks to the new timers.
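A rough usage sketch (the setup calls are assumptions derived from the description above; names like `my-frontend` are illustrative), showing a custom code path and a `PassManager` reporting into one timing report:
```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Support/Timing.h"

// Sketch: one DefaultTimingManager collects timing from both a non-pass code
// path and the pass manager, producing a single report.
void runWithTiming(mlir::MLIRContext &ctx, mlir::ModuleOp module) {
  mlir::DefaultTimingManager tm;
  tm.setEnabled(true);
  mlir::TimingScope root = tm.getRootScope();

  // Time an arbitrary code path outside of the pass manager.
  mlir::TimingScope frontendTiming = root.nest("my-frontend");
  // ... frontend work ...
  frontendTiming.stop();

  // The pass manager contributes its per-pass timers to the same report.
  mlir::PassManager pm(&ctx);
  pm.enableTiming(root);
  (void)pm.run(module);
}
```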
Reviewed By: rriddle, lattner
Differential Revision: https://reviews.llvm.org/D100647
In LLVM_ENABLE_STATS=0 builds, `llvm::Statistic` maps to `llvm::NoopStatistic`,
which still carries 3 mostly unused pointers. GlobalOpt considers that the pointers can
potentially retain allocated objects, so GlobalOpt cannot optimize out the
`NoopStatistic` variables (see D69428 for more context), wasting 23KiB for stage
2 clang.
This patch makes `NoopStatistic` empty and thus reclaims the wasted space. The
clang size is even smaller than applying D69428 (slightly smaller in both .bss and
.text).
```
# This means the D69428 optimization on clang is mostly nullified by this patch.
HEAD+D69428: size(.bss) = 0x0725a8
HEAD+D101211: size(.bss) = 0x072238
# bloaty - HEAD+D69428 vs HEAD+D101211
# With D101211, we also save a lot of string table space (.rodata).
    FILE SIZE        VM SIZE
 --------------  --------------
  -0.0%     -32  -0.0%     -24    .eh_frame
  -0.0%    -336  [ = ]       0    .symtab
  -0.0%    -360  [ = ]       0    .strtab
  [ = ]       0  -0.2%    -880    .bss
  -0.0% -2.11Ki  -0.0% -2.11Ki    .rodata
  -0.0% -2.89Ki  -0.0% -2.89Ki    .text
  -0.0% -5.71Ki  -0.0% -5.88Ki    TOTAL
```
Note: LoopFuse is a disabled pass. For now this patch adds
`#if LLVM_ENABLE_STATS` so its `OptimizationRemarkMissed` emission is skipped in
LLVM_ENABLE_STATS==0 builds. If these `OptimizationRemarkMissed` remarks turn out to
be useful in LLVM_ENABLE_STATS==0 builds, we can replace `llvm::Statistic` with
`llvm::TrackingStatistic`, or use a different abstraction to keep track of the strings.
Similarly, skip the code in `mlir/lib/Pass/PassStatistics.cpp` which
calls `getName`/`getDesc`/`getValue`.
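A purely illustrative sketch of the guard pattern described above (not the verbatim change): code that reads a statistic's name, description, or value is compiled out when statistics are disabled, since the now-empty `NoopStatistic` no longer provides anything to print.
```
#include "llvm/ADT/Statistic.h"
#include "llvm/Support/raw_ostream.h"

// Sketch only: report a statistic when stats are compiled in, otherwise do
// nothing (the noop statistic has no name/description/value worth printing).
static void printStat(const llvm::Statistic &stat, llvm::raw_ostream &os) {
#if LLVM_ENABLE_STATS
  os << stat.getName() << " : " << stat.getValue() << " - " << stat.getDesc()
     << "\n";
#else
  (void)stat;
  (void)os;
#endif
}
```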
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D101211
Add a command line option to read the configuration dumped by the MLIR crash
reproducer and add it to the other command line options parsed by mlir-opt.
Simple convenience that enables `mlir-opt --run-reproducer /tmp/repro.mlir`
instead of needing to copy&paste the configuration.
Differential Revision: https://reviews.llvm.org/D93924
Now that passes have support for running nested pipelines, the inliner can allow users to provide proper nested pipelines to use for optimization during inlining. This revision also changes the behavior of optimization during inlining to optimize before attempting to inline, which should lead to a more accurate cost model and removes the need for users to schedule additional duplicate cleanup passes before/after the inliner that would already be run during inlining.
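As a hedged sketch of what this enables (assuming a `createInlinerPass` overload that accepts a default pipeline builder; the cleanup passes chosen here are just an example):
```
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

// Sketch: schedule the inliner with a nested cleanup pipeline that is run on
// callees as part of inlining, instead of duplicating cleanup passes
// before/after the inliner.
void addInlinerWithCleanup(mlir::PassManager &pm) {
  pm.addPass(mlir::createInlinerPass(
      /*opPipelines=*/{}, [](mlir::OpPassManager &nested) {
        nested.addPass(mlir::createCanonicalizerPass());
        nested.addPass(mlir::createCSEPass());
      }));
}
```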
Differential Revision: https://reviews.llvm.org/D91211
Previously the textual form of the pass pipeline would implicitly nest;
instead we opt for the explicit form here, which is less surprising.
This also avoids asserting in the bindings when passing a pass pipeline
with incorrect nesting.
Differential Revision: https://reviews.llvm.org/D91233
This simplifies a few parts of the pass manager, but in particular we don't add as many
verifier passes as there are passes in the pipeline, and we can now enable/disable the
verifier after the fact on an already built PassManager.
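A small sketch of the new behavior (assuming the `enableVerifier` entry point implied above):
```
#include "mlir/Pass/PassManager.h"

// Sketch: the verifier can be toggled on an already constructed PassManager,
// rather than being baked in as one verifier pass per pass in the pipeline.
void configureVerification(mlir::PassManager &pm, bool verify) {
  pm.enableVerifier(verify);
}
```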
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D90669
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
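A hedged sketch of such a pass (the pass name and attribute are illustrative, and the exact construction of the dynamic `OpPassManager` is an assumption):
```
#include "mlir/Pass/Pass.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

// Sketch: build a pipeline at runtime and run it, via the runPipeline hook,
// only on operations carrying a given attribute.
struct ConditionalCleanupPass
    : public mlir::PassWrapper<ConditionalCleanupPass, mlir::OperationPass<>> {
  void runOnOperation() override {
    mlir::Operation *op = getOperation();
    if (!op->hasAttr("optimize"))
      return;
    mlir::OpPassManager dynamicPM(op->getName().getStringRef());
    dynamicPM.addPass(mlir::createCanonicalizerPass());
    if (mlir::failed(runPipeline(dynamicPM, op)))
      signalPassFailure();
  }
};
```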
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Recommit after fixing an ASAN issue: the callback lambda needs to be
allocated to a temporary to have its lifetime extended to the end of the
current block instead of just the current call expression.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
This reverts commit 385c3f43fc.
Test mlir/test/Pass:dynamic-pipeline-fail-on-parent.mlir.test fails
when run with ASAN:
ERROR: AddressSanitizer: stack-use-after-scope on address ...
Reviewed By: bkramer, pifon2a
Differential Revision: https://reviews.llvm.org/D88079
Instead of performing a transformation, such a pass yields a new pass pipeline
to run on the currently visited operation.
This feature can be used for example to implement a sub-pipeline that
would run only on an operation with specific attributes. Another example
would be to compute a cost model and dynamically schedule a pipeline based
on the result of this analysis.
Discussion: https://llvm.discourse.group/t/rfc-dynamic-pass-pipeline/1637
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D86392
This crash only happens when a function pass is followed by a module
pass. In this case the splitting of the pass pipeline didn't handle the
verifier passes properly and ended up with an odd number of passes in
the pipeline, breaking an assumption of the local crash reproducer
executor and hitting an assertion.
Differential Revision: https://reviews.llvm.org/D88000
This is useful for several reasons:
* In some situations the user can guarantee that thread-safety isn't necessary and doesn't want to pay the cost of synchronization, e.g., when parsing a very large module.
* For things like logging, threading is not desirable as the output is not guaranteed to be in a stable order.
This flag also subsumes the pass manager flag for multi-threading.
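At the C++ API level this looks roughly like the following sketch (the function and the exact call names are assumptions based on the flag described above):
```
#include "mlir/IR/MLIRContext.h"

// Sketch: turn threading off around a phase where it is unnecessary or
// undesirable, then restore it.
void parseWithoutThreads(mlir::MLIRContext &ctx) {
  ctx.disableMultithreading();
  // ... parse a very large module, or emit logging in a stable order ...
  ctx.enableMultithreading();
}
```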
Differential Revision: https://reviews.llvm.org/D79266
This revision adds a mode to the crash reproducer generator to attempt to generate a more local reproducer. This will attempt to generate a reproducer right before the offending pass that fails. This is useful for the majority of failures that are specific to a single pass, and situations where some passes in the pipeline are not registered with a specific tool.
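For illustration, enabling the new mode from C++ might look like the following sketch (the output path is illustrative; the extra boolean on `enableCrashReproducerGeneration` is assumed from the description above):
```
#include "mlir/Pass/PassManager.h"

// Sketch: generate a local reproducer right before the offending pass,
// rather than one reproducer for the whole pipeline.
void setupCrashReproducer(mlir::PassManager &pm) {
  pm.enableCrashReproducerGeneration("/tmp/repro.mlir",
                                     /*genLocalReproducer=*/true);
}
```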
Differential Revision: https://reviews.llvm.org/D78314
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remains the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.
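A short sketch of the new style (option names and flags here are made up; this mirrors the statistics example further below rather than any specific upstream pass):
```
#include "mlir/Pass/Pass.h"

using namespace mlir;

// Sketch: options are declared directly as members of the pass.
struct MyPass : public OperationPass<MyPass> {
  Option<int> exampleOption{*this, "flag-name",
                            llvm::cl::desc("An example option"),
                            llvm::cl::init(0)};
  ListOption<int> exampleListOption{*this, "list-flag-name",
                                    llvm::cl::desc("An example list option")};

  void runOnOperation() {
    // The parsed option values are directly usable as pass members.
    if (exampleOption > 0) {
      // ...
    }
  }
};
```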
PiperOrigin-RevId: 286953824
This adds an additional filtering mode for printing after a pass that checks to see if the pass actually changed the IR before printing it. This "change" detection is implemented using a SHA1 hash of the current operation and its children.
PiperOrigin-RevId: 284291089
It would be nice if we could detect if stats were enabled or not and use 'Requires', but this isn't possible to do at configure time.
Fixes tensorflow/mlir#296
PiperOrigin-RevId: 284200271
Statistics are a way to keep track of what the compiler is doing and how effective various optimizations are. It is useful to see what optimizations are contributing to making a particular program run faster. Pass-instance specific statistics take this even further as you can see the effect of placing a particular pass at specific places within the pass pipeline, e.g. they could help answer questions like "what happens if I run CSE again here".
Statistics can be added to a pass by simply adding members of type 'Pass::Statistic'. This class takes as constructor arguments: the parent pass pointer, a name, and a description. Statistics can be dumped by the pass manager in a similar manner to how pass timing information is dumped, i.e. via PassManager::enableStatistics programmatically; or -pass-statistics and -pass-statistics-display via the command line pass manager options.
Below is an example:
```
struct MyPass : public OperationPass<MyPass> {
  Statistic testStat{this, "testStat", "A test statistic"};

  void runOnOperation() {
    ...
    ++testStat;
    ...
  }
};
```
```
$ mlir-opt -pass-pipeline='func(my-pass,my-pass)' foo.mlir -pass-statistics

Pipeline Display:
===-------------------------------------------------------------------------===
                        ... Pass statistics report ...
===-------------------------------------------------------------------------===
'func' Pipeline
  MyPass
    (S) 15 testStat - A test statistic
  MyPass
    (S) 6 testStat - A test statistic

List Display:
===-------------------------------------------------------------------------===
                        ... Pass statistics report ...
===-------------------------------------------------------------------------===
MyPass
  (S) 21 testStat - A test statistic
```
PiperOrigin-RevId: 284022014
This allows for them to be used on other non-function, or even other function-like, operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.
PiperOrigin-RevId: 276573038
This cl adds support for generating a .mlir file containing a reproducer for crashes and failures that happen during pass execution. The reproducer contains a comment detailing the configuration of the pass manager (e.g. the textual description of the pass pipeline that the pass manager was executing), along with the original input module.
Example Output:
```
// configuration: -pass-pipeline='func(cse, canonicalize), inline'
// note: verifyPasses=false

module {
  ...
}
```
PiperOrigin-RevId: 274088134
Allow printing out pipelines in a format that is as close as possible to the
textual pass pipeline format. Individual passes can override the print function
in order to format any options that may have been used to construct that pass.
PiperOrigin-RevId: 273813627
This allows individual passes to define options structs and for these options to be parsed per instance of the pass while building the pass pipeline from the command line provided textual specification.
The user can specify these per-instance pipeline options like so:
```
struct MyPassOptions : public PassOptions<MyPassOptions> {
  Option<int> exampleOption{*this, "flag-name", llvm::cl::desc("...")};
  List<int> exampleListOption{*this, "list-flag-name", llvm::cl::desc("...")};
};
static PassRegistration<MyPass, MyPassOptions> pass("my-pass", "description");
```
PiperOrigin-RevId: 273650140
This allows for users other than those on the command line to apply a textual description of a pipeline to a given pass manager.
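For example (a sketch assuming the `parsePassPipeline` entry point this refers to):
```
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"
#include "llvm/Support/raw_ostream.h"

// Sketch: apply a textual pipeline description to an existing pass manager.
mlir::LogicalResult buildPipelineFromText(mlir::PassManager &pm) {
  return mlir::parsePassPipeline("func(cse, canonicalize)", pm, llvm::errs());
}
```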
PiperOrigin-RevId: 269017028
This allows for explicitly specifying the pipeline to add to the pass manager. This includes the nesting structure, as well as the passes/pipelines to run. A textual pipeline string is defined as a series of names, each of which may in itself recursively contain a nested pipeline description. A name is either the name of a registered pass, or pass pipeline, (e.g. "cse") or the name of an operation type (e.g. "func").
For example, the following pipeline:
$ mlir-opt foo.mlir -cse -canonicalize -lower-to-llvm
Could now be specified as:
$ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), lower-to-llvm'
This will allow for running pipelines on nested operations, like say spirv modules. This does not remove any of the current functionality, and in fact can be used in unison. The new option is available via 'pass-pipeline'.
PiperOrigin-RevId: 268954279
This allows for parallelizing across pipelines of multiple operation types. AdaptorPasses can now hold pass managers for multiple operation types and will dispatch based upon the operation being operated on.
PiperOrigin-RevId: 268017344
This is done via a new set of instrumentation hooks runBeforePipeline/runAfterPipeline, that signal the lifetime of a pass pipeline on a specific operation type. These hooks also provide the parent thread of the pipeline, allowing for accurate merging of timers running on different threads.
PiperOrigin-RevId: 267909193
This change generalizes the structure of the pass manager to allow arbitrarily nesting pass managers for other operations, at any level. The only user visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:
```
// Pass manager for the top-level module.
PassManager pm(ctx);

// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);

// Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();

// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();
```
To help accomplish this a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, that simply runs the verifier opaquely on the current operation.
```
/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};
```
PiperOrigin-RevId: 266840344
When multi-threading is enabled in the pass manager the meaning of the display
slightly changes. First, a new timing column is added, `User Time`, that
displays the total time spent across all threads. Secondly, the `Wall Time`
column displays the longest individual time spent amongst all of the threads.
This means that the `Wall Time` column will continue to give an indicator on the
perceived time, or clock time, whereas the `User Time` will display the total
cpu time.
Example:
```
$ mlir-opt foo.mlir -experimental-mt-pm -cse -canonicalize -convert-to-llvmir -pass-timing

===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 0.0078 seconds

   ---User Time---   ---Wall Time---  --- Name ---
   0.0175 ( 88.3%)   0.0055 ( 70.4%)  Function Pipeline
   0.0018 (  9.3%)   0.0006 (  8.1%)    CSE
   0.0013 (  6.3%)   0.0004 (  5.8%)      (A) DominanceInfo
   0.0017 (  8.7%)   0.0006 (  7.1%)    FunctionVerifier
   0.0128 ( 64.6%)   0.0039 ( 50.5%)    Canonicalizer
   0.0011 (  5.7%)   0.0004 (  4.7%)    FunctionVerifier
   0.0004 (  2.1%)   0.0004 (  5.2%)  ModuleVerifier
   0.0010 (  5.3%)   0.0010 ( 13.4%)  LLVMLowering
   0.0009 (  4.3%)   0.0009 ( 11.0%)  ModuleVerifier
   0.0198 (100.0%)   0.0078 (100.0%)  Total
```
PiperOrigin-RevId: 240636269
* print-ir-before=(comma-separated-pass-list)
  - Print the IR before each of the passes provided within the pass list.
* print-ir-before-all
  - Print the IR before every pass in the pipeline.
* print-ir-after=(comma-separated-pass-list)
  - Print the IR after each of the passes provided within the pass list.
* print-ir-after-all
  - Print the IR after every pass in the pipeline.
* print-ir-module-scope
  - Always print the Module IR, even for non-module passes.
PiperOrigin-RevId: 238523649
Below is the output for an example mlir-opt command line.
mlir-opt foo.mlir -verify-each=false -cse -canonicalize -cse -cse -pass-timing
list view (-pass-timing-display=list):
* In this mode the results are displayed in a list sorted by total time; with each pass/analysis instance aggregated into one unique result. This mode is similar to the output of 'time-passes' in llvm-opt.
```
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 0.0097 seconds (0.0096 wall clock)

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
   0.0051 ( 58.3%)   0.0001 ( 12.2%)   0.0052 ( 53.8%)   0.0052 ( 53.8%)  Canonicalizer
   0.0025 ( 29.1%)   0.0005 ( 58.2%)   0.0031 ( 31.9%)   0.0031 ( 32.0%)  CSE
   0.0011 ( 12.6%)   0.0003 ( 29.7%)   0.0014 ( 14.3%)   0.0014 ( 14.2%)  DominanceInfo
   0.0087 (100.0%)   0.0009 (100.0%)   0.0097 (100.0%)   0.0096 (100.0%)  Total
```
pipeline view (-pass-timing-display=pipeline):
* In this mode the results are displayed in a nested pipeline view that mirrors the internal pass pipeline that is being executed in the pass manager. This view is useful for understanding specifically which parts of the pipeline are taking the most time, and can also be used to identify when analyses are being invalidated and recomputed.
```
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 0.0082 seconds (0.0081 wall clock)

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
   0.0042 (100.0%)   0.0039 (100.0%)   0.0082 (100.0%)   0.0081 (100.0%)  Function Pipeline
   0.0005 ( 11.6%)   0.0008 ( 21.1%)   0.0013 ( 16.1%)   0.0013 ( 16.2%)    CSE
   0.0002 (  5.0%)   0.0004 (  9.3%)   0.0006 (  7.0%)   0.0006 (  7.0%)      (A) DominanceInfo
   0.0026 ( 61.8%)   0.0018 ( 45.6%)   0.0044 ( 54.0%)   0.0044 ( 54.1%)    Canonicalizer
   0.0005 ( 11.7%)   0.0005 ( 13.0%)   0.0010 ( 12.3%)   0.0010 ( 12.4%)    CSE
   0.0003 (  6.1%)   0.0003 (  8.3%)   0.0006 (  7.2%)   0.0006 (  7.1%)      (A) DominanceInfo
   0.0002 (  3.8%)   0.0001 (  2.8%)   0.0003 (  3.3%)   0.0003 (  3.3%)    CSE
   0.0042 (100.0%)   0.0039 (100.0%)   0.0082 (100.0%)   0.0081 (100.0%)  Total
```
PiperOrigin-RevId: 237825367