We weren't properly visiting region successors when the terminator wasn't return-like, which could create incorrect results in the analysis. This revision ensures that we properly visit region successors, to avoid optimistically assuming a value is constant when it isn't.
Differential Revision: https://reviews.llvm.org/D101783
All linalg.init_tensor operations must be fed into a linalg operation before
being consumed by a subtensor. The inserted linalg.fill guarantees the code executes correctly.
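For illustration, a minimal sketch restated in present-day op names (linalg.init_tensor has since become tensor.empty, and subtensor is now tensor.extract_slice):
```
%cst = arith.constant 0.0 : f32
%empty = tensor.empty() : tensor<4x8xf32>
// The fill guarantees the tensor is initialized before it is sliced.
%fill = linalg.fill ins(%cst : f32) outs(%empty : tensor<4x8xf32>) -> tensor<4x8xf32>
%slice = tensor.extract_slice %fill[0, 0] [2, 4] [1, 1] : tensor<4x8xf32> to tensor<2x4xf32>
```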
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D101848
TransferReadOps that are a scalar read + broadcast are handled by TransferReadToVectorLoadLowering.
Differential Revision: https://reviews.llvm.org/D101808
Lowers the equal and arithmetic_right_shift elementwise ops to the linalg dialect using linalg.generic.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D101804
Given the source and destination shapes, if they are static, or if the
expanded/collapsed dimensions are unit-extent, it is possible to
compute the reassociation maps that can be used to reshape one type
into another. Add a utility method to return the reassociation maps
when possible.
This utility function can be used to fuse a sequence of reshape ops,
given the type of the source of the producer and the final result
type. This pattern supersedes a more constrained folding pattern added
to the DropUnitDims pass.
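For illustration, a minimal sketch of a reshape whose reassociation maps are computable from the static shapes alone (shown with present-day tensor.collapse_shape syntax; the reshape ops have been renamed since this commit):
```
// Reassociation [[0, 1], [2]] groups source dims 0 and 1 into result
// dim 0 (4 * 5 = 20); it is derivable because the shapes are static.
%0 = tensor.collapse_shape %arg [[0, 1], [2]]
    : tensor<4x5x6xf32> into tensor<20x6xf32>
```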
Differential Revision: https://reviews.llvm.org/D101343
Convert subtensor and subtensor_insert operations to use their
rank-reduced versions to drop unit dimensions.
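For example, a minimal sketch using the subtensor syntax of the time (the op has since been renamed tensor.extract_slice):
```
// Full-rank form, keeping the unit dimension:
%0 = subtensor %t[0, 0, 0] [1, 4, 4] [1, 1, 1]
    : tensor<8x4x4xf32> to tensor<1x4x4xf32>
// Rank-reduced form, dropping the unit dimension:
%1 = subtensor %t[0, 0, 0] [1, 4, 4] [1, 1, 1]
    : tensor<8x4x4xf32> to tensor<4x4xf32>
```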
Differential Revision: https://reviews.llvm.org/D101495
The current implementation had a bug as it was relying on the target vector
dimension sizes to calculate where to insert broadcast. If several dimensions
have the same size we may insert the broadcast on the wrong dimension. The
correct broadcast cannot be inferred from the type of the source and
destination vector.
Instead, when we want to extend transfer ops, we calculate an "inverse" map of the
projected permutation and insert broadcasts in place of the projected dimensions.
Differential Revision: https://reviews.llvm.org/D101738
* NFC but has some fixes for CMake glitches discovered along the way (things not cleaning properly, co-mingled depends).
* Includes previously unsubmitted fix in D98681 and a TODO to fix it more appropriately in a smaller followup.
Differential Revision: https://reviews.llvm.org/D101493
Move TransposeOp lowering into its own populate function, as in some cases
it is better to keep it during ContractOp lowering so it can be
canonicalized rather than emitting scalar insert/extract.
Differential Revision: https://reviews.llvm.org/D101647
* This makes them consistent with custom types/attributes, whose constructors will do a type-checked conversion. Of course, the base classes can represent everything, so they never error.
* More importantly, this makes it possible to subclass Type and Attribute out of tree in sensible ways.
Differential Revision: https://reviews.llvm.org/D101734
Added canonicalization for vector_load and vector_store. The existing
SimplifyAffineOp pattern can be reused to compose the maps that supply
results into them. Added AffineVectorStoreOp and AffineVectorLoadOp
to the static_assert of SimplifyAffineOp to allow these operations to use it.
This fixes the bug filed: https://bugs.llvm.org/show_bug.cgi?id=50058
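A minimal sketch of the composition (the maps and sizes are hypothetical):
```
// Before: an affine.apply feeds the load index.
%idx = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
%v0 = affine.vector_load %mem[%idx] : memref<100xf32>, vector<8xf32>
// After canonicalization: the map is composed into the load itself.
%v1 = affine.vector_load %mem[%i + 1] : memref<100xf32>, vector<8xf32>
```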
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D101691
(1) migrates the encoding from TensorDialect into the new SparseTensorDialect
(2) replaces dictionary-based storage and builders with struct-like data
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D101669
Stop using the compatibility spellings of `OF_{None,Text,Append}`
left behind by 1f67a3cba9. A follow-up
will remove them.
Differential Revision: https://reviews.llvm.org/D101650
Three patterns are added to convert vector.multi_reduction into a
sequence of vector.reduction ops, as follows:
- Transpose the inputs so the innermost dimensions are always reduction dims.
- Reduce the rank of vector.multi_reduction to 2D with the innermost
reduction dim (the 2D canonical form).
- The 2D canonical form is converted into a sequence of vector.reduction ops.
There are two things that might be worth doing in a follow-up diff:
- Emit an scf.for (maybe optionally) around vector.reduction instead of unrolling it.
- Break the vector.reduction down into a sequence of vector.reduction ops
(e.g. a tree-based reduction) instead of relying on how downstream dialects
handle it.
Note: this will require passing the target vector length.
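A minimal sketch of what the 2D canonical form lowers to, in present-day vector dialect syntax:
```
// %v : vector<2x8xf32> with the innermost dimension as the reduction
// dimension; each outer row becomes one vector.reduction.
%r0 = vector.extract %v[0] : vector<8xf32> from vector<2x8xf32>
%s0 = vector.reduction <add>, %r0 : vector<8xf32> into f32
%r1 = vector.extract %v[1] : vector<8xf32> from vector<2x8xf32>
%s1 = vector.reduction <add>, %r1 : vector<8xf32> into f32
```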
Differential Revision: https://reviews.llvm.org/D101570
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.
NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.
Differential Revision: https://reviews.llvm.org/D101573
Constant-0 dim expr values should be avoided for linalg as they can prevent
fusion. This includes adding support for rank-0 reshapes.
Differential Revision: https://reviews.llvm.org/D101418
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.
NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D101488
This makes it possible to express more complex parallel loops in the affine framework,
for example, in cases of tiling by sizes not dividing loop trip counts perfectly
or inner wavefront parallelism, among others. One can't use affine.max/min
and supply values to the nested loop bounds since the results of such
affine.max/min operations aren't valid symbols. Making them valid symbols
isn't an option since they would introduce selection trees into memref
subscript arithmetic as an unintended and undesired consequence. Also
add support for converting such loops to SCF. Drop some API that isn't used in
the core repo from AffineParallelOp since its semantics becomes ambiguous in
the presence of max/min bounds. Loop normalization is currently unavailable for
such loops.
Depends On D101171
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D101172
Introduce a basic support for parallelizing affine loops with reductions
expressed using iteration arguments. Affine parallelism detector now has a flag
to assume such reductions are parallel. The transformation handles a subset of
parallel reductions that can be expressed using affine.parallel:
integer/float addition and multiplication. This requires detecting the
reduction operation since affine.parallel only supports a fixed set of
reduction operators.
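A minimal sketch of the target form, based on the affine.parallel reduction syntax:
```
// Sum 128 floats in parallel; "addf" is one of the supported
// reduction operators.
%sum = affine.parallel (%i) = (0) to (128) reduce ("addf") -> f32 {
  %v = affine.load %mem[%i] : memref<128xf32>
  affine.yield %v : f32
}
```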
Reviewed By: chelini, kumasento, bondhugula
Differential Revision: https://reviews.llvm.org/D101171
FillOp now allows complex numbers, and filling a properly sized buffer with
a default zero complex number is implemented.
Differential Revision: https://reviews.llvm.org/D99939
This will allow the bindings to be built as a library and reused in out-of-tree
projects that want to provide bindings on top of MLIR bindings.
Reviewed By: stellaraccident, mikeurbach
Differential Revision: https://reviews.llvm.org/D101075
This revision adds support for vectorizing more general linalg operations with projected permutation maps.
This is achieved by eagerly broadcasting the intermediate vector to the common size
of the iteration domain of the linalg op. This allows a much more natural expression of
generalized vectorization but may introduce additional computations until all the
proper canonicalizations are implemented.
This generalization modifies the vector.transfer_read/write permutation logic and
exposes the fact that the logic employed in vector.contract was too ad-hoc.
As a consequence, changes occur in the permutation / transposition logic for contraction. In turn this prompts supporting more cases in the lowering of contract
to matrix intrinsics, which is required to make the corresponding tests pass.
Differential Revision: https://reviews.llvm.org/D101165
The patch extends the OpDSL with support for:
- Constant values
- Capturing scalar parameters
- Accessing the iteration indices using the index operation
- Predefined floating point and integer types
Up to now the patch only supports emitting the new nodes; the C++/yaml path is not fully implemented. The fill_rng_2d operation defined in emit_structured_generic.py makes use of the new DSL constructs.
Differential Revision: https://reviews.llvm.org/D101364
This adds a method to directly invoke `mlirOperationDestroy` on the
MlirOperation wrapped by a PyOperation.
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D101422
Previously, this API would return the PyObjectRef, rather than the
underlying PyOperation.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D101416
Canonicalizations for subtensor operations defaulted to use the
rank-reduced version of the operation, but the cast inserted to get
back the original type would be illegal if the rank was actually
reduced. Instead make the canonicalization not reduce the rank of the
operation.
Differential Revision: https://reviews.llvm.org/D101258
The current canonicalization did not remap operation results correctly
and attempted to erase the TiledLoopOp, which is incorrect if not all tensor
results are folded.
Tensor inputs, if not used in the body of TiledLoopOp, can be removed.
memref::CastOp can be folded into TiledLoopOp as well.
Differential Revision: https://reviews.llvm.org/D101445
Both `shape.broadcast` and `shape.cstr_broadcastable` accept dynamic and static
extent tensors. If their operands are casted, we can use the original value
instead.
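A minimal sketch of the folding:
```
// The cast only erases static extent information, so the broadcast
// can consume the original value directly:
%c = tensor.cast %a : tensor<2xindex> to tensor<?xindex>
%b = shape.broadcast %c, %s : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
// folds to:
%b2 = shape.broadcast %a, %s : tensor<2xindex>, tensor<?xindex> -> tensor<?xindex>
```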
Differential Revision: https://reviews.llvm.org/D101376
Add a section attribute to LLVM_GlobalOp. During module translation, the attribute value is propagated to the LLVM IR global.
Reviewed By: sgrechanik, ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D100947
This adds `mlirOperationSetOperand` to the IR C API, similar to the
function to get an operand.
In the Python API, this adds `operands[index] = value` syntax, similar
to the syntax to get an operand with `operands[index]`.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D101398
Add the `getCapsule()` and `createFromCapsule()` methods to the
PyValue class, as well as the necessary interoperability.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D101090
MatMul and FullyConnected have transposed dimensions for the weights.
Also, removed an unneeded tensor reshape for the bias.
Differential Revision: https://reviews.llvm.org/D101220
Quantized negation can be performed using higher-bit operations.
The minimal number of bits is picked to perform the operation.
Differential Revision: https://reviews.llvm.org/D101225
Explicitly check for the uninitialized state to prevent crashes in edge cases where the derived analysis creates a lattice element for a value that hasn't been visited yet.
Like `print-ir-after-all` and `-before-all`, this allows inspecting IR for
debug purposes. While the former allow inspection only between passes, this
change makes it possible to follow the rewrites that happen within passes.
Differential Revision: https://reviews.llvm.org/D100940
Empty extent tensor operands were only removed when they were defined as a
constant. Additionally, we can remove them if they are known to be empty by
their type `tensor<0xindex>`.
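A minimal sketch of the new case:
```
// %e is known to be empty from its type alone, so it can be removed
// from the broadcast (modulo a cast if the result types differ).
%e = shape.const_shape [] : tensor<0xindex>
%r = shape.broadcast %e, %s : tensor<0xindex>, tensor<?xindex> -> tensor<?xindex>
```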
Differential Revision: https://reviews.llvm.org/D101351
Splat constant folding was limited to `std.constant` operations. Instead, use
the constant matcher and apply splat constant folding to any constant-like
operation that holds a splat attribute.
Differential Revision: https://reviews.llvm.org/D101301
This revision takes the forward value propagation engine in SCCP and refactors it into a more generalized forward dataflow analysis framework. This framework allows for propagating information about values across the various control flow constructs in MLIR, and removes the need for users to reinvent the traversal (often not as completely). There are a few aspects of the traversal that were conservative for SCCP and should be relaxed to support the needs of different value analyses. To keep this revision simple, these conservative behaviors will be left in (note that this won't produce an incorrect result, but may produce more conservative results than necessary in certain edge cases, e.g. region entry arguments for non-region branch interface operations). The framework also only focuses on computing lattices for values, given the SCCP origins, but this is something to relax as needed in the future.
Given that this logic is already in SCCP, a majority of this commit is NFC. The more interesting parts are the interface glue that clients interact with.
Differential Revision: https://reviews.llvm.org/D100915
The new "encoding" field in tensor types so far had no meaning. This revision introduces:
1. an encoding attribute interface in IR, for verification between tensors and encodings in general
2. an attribute in the Tensor dialect: #tensor.sparse<dict> plus a concrete sparse tensors API
Active discussion:
https://llvm.discourse.group/t/rfc-introduce-a-sparse-tensor-type-to-core-mlir/2944/
Reviewed By: silvas, penpornk, bixia
Differential Revision: https://reviews.llvm.org/D101008
Add two canonicalizations for scf.if:
1) A canonicalization that allows users of a condition within an if to assume the condition
is true if in the true region, etc.
2) A canonicalization that removes yielded statements that are equivalent to the condition
or its negation.
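A minimal sketch of the second canonicalization (using present-day arith syntax; at the time of this commit the constants were std.constant ops):
```
%0 = scf.if %cond -> (i1) {
  %t = arith.constant true
  scf.yield %t : i1
} else {
  %f = arith.constant false
  scf.yield %f : i1
}
// The yielded value is equivalent to the condition, so uses of %0 can
// be replaced with %cond directly.
```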
Differential Revision: https://reviews.llvm.org/D101012
In LLVM_ENABLE_STATS=0 builds, `llvm::Statistic` maps to `llvm::NoopStatistic`
but has 3 mostly unused pointers. GlobalOpt considers that the pointers can
potentially retain allocated objects, so GlobalOpt cannot optimize out the
`NoopStatistic` variables (see D69428 for more context), wasting 23KiB for stage
2 clang.
This patch makes `NoopStatistic` empty and thus reclaims the wasted space. The
clang size is even smaller than applying D69428 (slightly smaller in both .bss and
.text).
```
# This means the D69428 optimization on clang is mostly nullified by this patch.
HEAD+D69428: size(.bss) = 0x0725a8
HEAD+D101211: size(.bss) = 0x072238
# bloaty - HEAD+D69428 vs HEAD+D101211
# With D101211, we also save a lot of string table space (.rodata).
FILE SIZE VM SIZE
-------------- --------------
-0.0% -32 -0.0% -24 .eh_frame
-0.0% -336 [ = ] 0 .symtab
-0.0% -360 [ = ] 0 .strtab
[ = ] 0 -0.2% -880 .bss
-0.0% -2.11Ki -0.0% -2.11Ki .rodata
-0.0% -2.89Ki -0.0% -2.89Ki .text
-0.0% -5.71Ki -0.0% -5.88Ki TOTAL
```
Note: LoopFuse is a disabled pass. For now this patch adds
`#if LLVM_ENABLE_STATS` so `OptimizationRemarkMissed` is skipped in
LLVM_ENABLE_STATS==0 builds. If these `OptimizationRemarkMissed` are useful in
LLVM_ENABLE_STATS==0 builds, we can replace `llvm::Statistic` with
`llvm::TrackingStatistic`, or use a different abstraction to keep track of the strings.
Similarly, skip the code in `mlir/lib/Pass/PassStatistics.cpp` which
calls `getName`/`getDesc`/`getValue`.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D101211
This tidies up the code a bit:
* Eliminate the ctx member, which doesn't need to be stored.
* Rename verify(Operation) to make it clearer that it is
doing more than verifyOperation and that the dominance check
isn't being done multiple times.
* Rename mayNotHaveTerminator, which read as though it were
unknown whether the block had a terminator, when it is really
about whether the block is allowed to lack one.
* Some minor optimizations: don't check for RegionKindInterface
if there are no regions. Don't do two passes over the
operations in a block in OperationVerifier::verifyDominance when
one will do.
The optimizations are actually a measurable (but minor) win in some
CIRCT cases.
Differential Revision: https://reviews.llvm.org/D101267
Ensure that the correct type is preserved during folding and canonicalization.
`shape.broadcast` of a single operand can only be folded away if the argument
type is correct.
Differential Revision: https://reviews.llvm.org/D101158
Includes tests and implementation for both integer and floating point values.
Both nearest neighbor and bilinear interpolation are included.
Differential Revision: https://reviews.llvm.org/D101009
Mask vectors are handled similar to data vectors in N-D TransferWriteOp. They are copied into a temporary memory buffer, which can be indexed into with non-constant values.
Differential Revision: https://reviews.llvm.org/D101136
This commit adds support for broadcast dimensions in permutation maps of vector transfer ops.
Also fixes a bug in VectorToSCF that generated incorrect in-bounds checks for broadcast dimensions.
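A minimal sketch of a broadcast dimension in a permutation map (present-day arith syntax for the constants):
```
%c0 = arith.constant 0 : index
%pad = arith.constant 0.0 : f32
// Result dimension 0 maps to the constant expression 0 rather than a
// memref dimension, i.e. it is broadcast.
%v = vector.transfer_read %mem[%c0, %c0], %pad
    {permutation_map = affine_map<(d0, d1) -> (0, d1)>}
    : memref<4x8xf32>, vector<4x8xf32>
```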
Differential Revision: https://reviews.llvm.org/D101019
This commit adds support for dimension permutations in permutation maps of vector transfer ops.
Differential Revision: https://reviews.llvm.org/D101007
Strided 1D vector transfer ops are 1D transfers operating on a memref dimension different from the last one. Such transfer ops do not access contiguous memory blocks (vectors), but access memory in a strided fashion. In the absence of a mask, strided 1D vector transfer ops can also be lowered using matrix.column.major.* LLVM instructions (in a later commit).
Subsequent commits will extend the pass to handle the remaining missing permutation maps (broadcasts, transposes, etc.).
Differential Revision: https://reviews.llvm.org/D100946
Eliminate empty shapes from the operands, partially fold all constant shape
operands, and fix normal folding.
Differential Revision: https://reviews.llvm.org/D100634
The interchange option attached to the linalg to loop lowering affects only the loops and does not update the memory accesses generated in the body of the operation. Instead of performing the interchange during the loop lowering, use the interchange pattern.
Differential Revision: https://reviews.llvm.org/D100758
Example:
```
%0 = linalg.init_tensor : tensor<...>
%1 = linalg.generic ... outs(%0: tensor<...>)
%2 = linalg.generic ... outs(%0: tensor<...>)
```
The memref allocated as a result of `init_tensor` bufferization can be incorrectly overwritten by the second linalg.generic operation, as both would write into the same buffer.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D100921
This commits adds a basic LSP server for MLIR that supports resolving references and definitions. Several components of the setup are simplified to keep the size of this commit down, and will be built out in later commits. A followup commit will add a vscode language client that communicates with this server, paving the way for better IDE experience when interfacing with MLIR files.
The structure of this tool is similar to mlir-opt and mlir-translate, i.e. the implementation is structured as a library that users can call into to implement entry points that contain the dialects/passes that they are interested in.
Note: This commit contains several files, namely those in `mlir-lsp-server/lsp`, that have been copied from the LSP code in clangd and adapted for use in MLIR. This copying was decided as the best initial path forward (discussed offline by several stakeholders in MLIR and clangd) given the different needs of our MLIR server, and the one for clangd. If a strong desire/need for unification arises in the future, the existence of these files in mlir-lsp-server can be reconsidered.
Differential Revision: https://reviews.llvm.org/D100439
This information isn't useful for general compilation, but is useful for building tools that process .mlir files. This class will be used in a followup to start building an LSP language server for MLIR.
Differential Revision: https://reviews.llvm.org/D100438
This will prevent fusion that spans all dims and generates a
(d0, d1, ...) -> () reshape, which isn't legal.
Differential Revision: https://reviews.llvm.org/D100805
Break up the dependency between SCF ops and the substituteMin helper and make a
more generic version of AffineMinSCFCanonicalization. This reduces dependencies
between linalg and SCF and will allow the logic to be used with other kinds of
ops (like ID ops).
Differential Revision: https://reviews.llvm.org/D100321
Previously, any terminator without the ReturnLike and BranchOpInterface traits (e.g. scf.condition) was causing the pass to fail.
Differential Revision: https://reviews.llvm.org/D100832
Instead of always running the region builder, check if the generalized op has a region attached. If so, inline the existing region instead of calling the region builder. This change circumvents a problem with named operations that have a region builder taking captures and the generalization pass not knowing about these captures.
Differential Revision: https://reviews.llvm.org/D100880
The current implementation allows for TransferWriteOps with broadcasts that do not make sense. E.g., a broadcast could write a vector into a single (scalar) memory location, which is effectively the same as writing only the last element of the vector.
Differential Revision: https://reviews.llvm.org/D100842
std.xor ops on bool are lowered to spv.LogicalNotEqual. For Boolean values, xor
and not-equal are the same thing.
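A minimal sketch (pre-arith std syntax for the xor, as in this commit's era):
```
// For booleans, xor and not-equal coincide:
%0 = xor %a, %b : i1
// lowers to:
%1 = spv.LogicalNotEqual %a, %b : i1
```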
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D100817
The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1d vectors that enumerate the indexes along the iteration dimensions and broadcasts/transposes these 1d vectors to the iteration space.
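A minimal sketch of the enumeration (hypothetical sizes, present-day arith syntax):
```
// Indices along a vectorized dimension of size 4, then broadcast to
// an 8x4 iteration space.
%ids = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
%bc = vector.broadcast %ids : vector<4xindex> to vector<8x4xindex>
```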
Differential Revision: https://reviews.llvm.org/D100373
Add a new ProgressiveVectorToSCF pass that lowers vector transfer ops to SCF by gradually unpacking one dimension at a time. Unpacking stops at 1D, but can be configured to stop earlier, should the HW support (N>1)-d vectors.
The current implementation cannot handle permutation maps, masks, tensor types and unrolling yet. These will be added in subsequent commits. Once features are on par with VectorToSCF, this implementation will replace VectorToSCF.
Differential Revision: https://reviews.llvm.org/D100622
Some Math operations do not have an equivalent in LLVM. In these cases,
allow a low-priority fallback of calling the libm functions. This is meant
to provide functionality and is not a performant option.
Differential Revision: https://reviews.llvm.org/D100367
This patch extends the control-flow cost-model for detensoring by
implementing a forward-looking pass on block arguments that should be
detensored. This makes sure that if a (to-be-detensored) block argument
"escapes" its block through the terminator, then the successor arguments
are also detensored.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D100457
The patch replaces the index operations in the body of fused producers and linearizes the indices after expansion.
Differential Revision: https://reviews.llvm.org/D100479
Update the dimensions of the index operations to account for dropped dimensions and replace the index operations of dropped dimensions by zero.
Differential Revision: https://reviews.llvm.org/D100395
This patch adds the UnnamedAddr attribute for the GlobalOp in the LLVM
dialect. The attribute is also translated to and from LLVM IR.
This is meant to be used in a follow-up patch to lower OpenACC/OpenMP ops to
calls to the kmp and tgt runtimes (D100678).
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D100677
The patch enables the library call lowering for linalg operations that contain index operations.
Differential Revision: https://reviews.llvm.org/D100537
Expose the debug flag as a readable and assignable property of a
dedicated class instead of a write-only function. Actually test setting
the flag. Move the test to a dedicated file; it has zero relation to
context_managers.py where it was added.
Arguably, it should be promoted from mlir.ir to mlir module, but we are
not re-exporting the latter and this functionality is purposefully
hidden so can stay in IR for now. Drop unnecessary export code.
Refactor C API and put Debug into a separate library, fix it to actually
set the flag to the given value.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100757
Instead of interchanging loops during the loop lowering this pass performs the interchange by permuting the indexing maps. It also updates the iterator types and the index accesses in the body of the operation.
Differential Revision: https://reviews.llvm.org/D100627
Move the existing optimization for transfer ops on tensors to the folder and
canonicalization. This handles the write-after-write and read-after-write
cases, and also adds the write-after-read case.
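A minimal sketch of the write-after-read fold on tensors:
```
%c0 = arith.constant 0 : index
%pad = arith.constant 0.0 : f32
// Writing back exactly what was read from %t leaves %t unchanged,
// so the transfer_write folds to %t.
%r = vector.transfer_read %t[%c0, %c0], %pad : tensor<4x8xf32>, vector<4x8xf32>
%w = vector.transfer_write %r, %t[%c0, %c0] : vector<4x8xf32>, tensor<4x8xf32>
```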
Differential Revision: https://reviews.llvm.org/D100597
The existing implementation supports the static schedule for Fortran do loops. This
implements the dynamic variant of the same concept.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D97393
ArmSVE dialect is behind the recent changes in how the Vector dialect
interacts with backend vector dialects and the MLIR -> LLVM IR
translation module. This patch cleans up ArmSVE initialization within
Vector and removes the need for an LLVMArmSVE dialect.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D100171
When Linalg named ops support was added, captures were omitted
from the body builder. This revision adds support for captures
which allows us to write FillOp in a more idiomatic fashion using
the _linalg_ops_ext mixin support.
This raises an issue in the generation of `_linalg_ops_gen.py` where
```
@property
def result(self):
return self.operation.results[0] if len(self.operation.results) > 1 else None
```
The condition should be `== 1`.
This will be fixed in a separate commit.
Differential Revision: https://reviews.llvm.org/D100363
This offers the ability to pass numpy arrays to the corresponding
memref argument.
Reviewed By: mehdi_amini, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100077
This allows for walking all nested locations of a given location, and is generally useful when processing locations.
Differential Revision: https://reviews.llvm.org/D100437
In the long run, we want to unify the dot product codegen solutions between
all target architectures, but this intrinsic enables experimenting with AVX
specific implementations in the meantime.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100593
We were using llvm::nulls, but that isn't thread safe, so we switch to giving each thread its own null stream.
Differential Revision: https://reviews.llvm.org/D100578
This matches the current support provided to operations, and allows attaching traits, interfaces, and using the DeclareInterfaceMethods utility. This was missed when attribute/type generation was first added.
Differential Revision: https://reviews.llvm.org/D100233
Rationale:
Now that vector<?xindex> is allowed, the restriction on vectorization
of index types in the sparse compiler can be removed. This also needs a
generalization of scatter/gather index types.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D100522
Add an iterator for ReductionNode traversal and use a range to indicate the
region we would like to keep. Refactor the interaction between
Pass/Tester/ReductionNode.
Now it'll be easier to add new traversal types and OpReducers.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D99713
This reverts commit a32846b1d0.
The build is broken with -DBUILD_SHARED_LIBS=ON:
tools/mlir/lib/Reducer/CMakeFiles/obj.MLIRReduce.dir/Tester.cpp.o: In function `mlir::Tester::isInteresting(mlir::ModuleOp) const':
Tester.cpp:(.text._ZNK4mlir6Tester13isInterestingENS_8ModuleOpE+0xa8): undefined reference to `mlir::OpPrintingFlags::OpPrintingFlags()'
Tester.cpp:(.text._ZNK4mlir6Tester13isInterestingENS_8ModuleOpE+0xc6): undefined reference to `mlir::Operation::print(llvm::raw_ostream&, mlir::OpPrintingFlags)'
Add an iterator for ReductionNode traversal and use a range to indicate the region we would like to keep. Refactor the interaction between Pass/Tester/ReductionNode.
Now it'll be easier to add new traversal types and OpReducers.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D99713