The mlir-translate tool is expected to discover individual translations at link
time. These translations must register themselves and may need the utilities
that are currently defined in mlir-translate.cpp for their entry point
functions. Since mlir-translate links against individual translations, the
translations themselves cannot link against mlir-translate. Extract the
utilities into a separate "Translation" library to avoid a potential
dependency cycle. Individual translations link against that library to access
TranslateRegistration. The mlir-translate tool links to individual translations
and to the "Translation" library because it needs the utilities as well.
The main header of the new library is located in include/mlir/Translation.h to
make it easily accessible to translators. The rationale for putting it in
include/mlir rather than in one of its subdirectories is that its purpose is
similar to that of include/mlir/Pass.h, so it makes sense to put them at the
same level.
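To make the dependency structure concrete, here is a minimal sketch of the kind of self-registering translation this split enables; the registry, names, and signatures below are illustrative stand-ins, not MLIR's actual TranslateRegistration API.
```
// Illustrative sketch of link-time translation registration. The registry and
// the TranslateRegistration struct below are hypothetical stand-ins for what
// the shared "Translation" library would provide.
#include <functional>
#include <map>
#include <string>

using TranslateFunction = std::function<bool(const std::string &inputFile)>;

// Global registry living in the shared "Translation" library, so both
// mlir-translate and each individual translation can link against it.
static std::map<std::string, TranslateFunction> &getTranslationRegistry() {
  static std::map<std::string, TranslateFunction> registry;
  return registry;
}

// Each translation defines a static instance of this struct; its constructor
// runs at program start-up, which is how translations are "discovered" at
// link time without mlir-translate knowing about them explicitly.
struct TranslateRegistration {
  TranslateRegistration(const std::string &name, TranslateFunction fn) {
    getTranslationRegistry()[name] = std::move(fn);
  }
};

// In some hypothetical translation library, e.g. mlir-to-foo:
static TranslateRegistration
    registerMlirToFoo("mlir-to-foo", [](const std::string &inputFile) {
      // ... perform the actual translation ...
      return true;
    });
```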
PiperOrigin-RevId: 222398617
Also, added iterators for the VariadicResults class.
TESTED with unit tests
TODOs:
- Handle non-bool condition results (similar to the IfOp)
- Use PatternRewriter
PiperOrigin-RevId: 222340376
Existing default visitation functions for dimensions and symbols were called
"visitAffineDimExpr" and "visitAffineSymbolExpr". However, the generic CRTP-based
visit and walk methods were calling "visitDimExpr" and "visitSymbolExpr",
respectively, on derived classes. This has not been discovered before because
all existing affine expression visitors (re)define functions for dimensions and
symbols. Change the names of the default empty visitation functions to the
latter form.
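A minimal CRTP sketch, with simplified standalone types rather than the real AffineExprVisitor, of why the default hook names must match what the generic dispatch calls:
```
// Simplified CRTP visitor illustrating the bug: the generic dispatch calls
// visitDimExpr/visitSymbolExpr on the derived class, so the base class's
// default no-op hooks must use exactly those names to act as fallbacks.
struct AffineDimExpr {};
struct AffineSymbolExpr {};

template <typename SubClass>
struct ExprVisitorBase {
  // Generic dispatch used by visit()/walk().
  void dispatch(const AffineDimExpr &e) {
    static_cast<SubClass *>(this)->visitDimExpr(e);
  }
  void dispatch(const AffineSymbolExpr &e) {
    static_cast<SubClass *>(this)->visitSymbolExpr(e);
  }

  // Default no-op hooks. Had these been named visitAffineDimExpr /
  // visitAffineSymbolExpr, a derived class relying on the defaults would
  // fail to compile, because the dispatch above looks up the other names.
  void visitDimExpr(const AffineDimExpr &) {}
  void visitSymbolExpr(const AffineSymbolExpr &) {}
};

struct MyVisitor : ExprVisitorBase<MyVisitor> {
  // Only overrides the dimension hook; symbols fall back to the default.
  void visitDimExpr(const AffineDimExpr &) { /* ... */ }
};
```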
PiperOrigin-RevId: 222312114
This reverts the previous approach, which needed to create a new dialect with the
constant fold hook from TensorFlow. The new approach uses a function object in the
dialect to store the constant fold hook. Once a hook is registered to the
dialect, this function object will be assigned when the dialect is added to the
MLIRContext.
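A hedged sketch of the idea, using simplified placeholder types rather than MLIR's actual Dialect/MLIRContext classes:
```
// Hedged sketch: the constant-fold hook is a plain function object stored on
// the dialect. Hooks registered for a dialect name prefix are assigned to the
// dialect's function object when the dialect is added to the context. All
// types here are simplified placeholders, not MLIR's actual API.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Attribute {};  // placeholder for a constant operand/result value
struct Operation {};  // placeholder for an operation

using ConstantFoldHook = std::function<bool(
    const Operation &, const std::vector<Attribute> &, std::vector<Attribute> &)>;

// Hooks registered before the dialect is added, keyed by dialect name prefix.
static std::map<std::string, ConstantFoldHook> &getRegisteredHooks() {
  static std::map<std::string, ConstantFoldHook> hooks;
  return hooks;
}

struct Dialect {
  std::string namePrefix;
  ConstantFoldHook constantFoldHook;  // assigned when added to the context
};

struct MLIRContext {
  std::vector<Dialect> dialects;

  void addDialect(Dialect d) {
    auto it = getRegisteredHooks().find(d.namePrefix);
    if (it != getRegisteredHooks().end())
      d.constantFoldHook = it->second;
    dialects.push_back(std::move(d));
  }
};
```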
For operations that are not registered, a new method, getRegisteredDialects, is
added to the MLIRContext to query the dialects that match their op name
prefixes.
PiperOrigin-RevId: 222310149
This CL refactors a few things in Vectorize.cpp:
1. a clear distinction is made between:
   a. the LoadOps, which are the roots of vectorization and must be vectorized
      eagerly and propagate their value; and
   b. the StoreOps, which are the terminals of vectorization and must be
      vectorized late (i.e. they do not produce values that need to be
      propagated).
2. the StoreOps must be vectorized late because in general they can store a value
   that is not reachable from the subset of loads defined in the current pattern.
   One trivial such case is storing a constant defined at the top level of the
   MLFunction, which needs to be turned into a splat.
3. a description of the algorithm is given;
4. the implementation matches the algorithm;
5. the last example is made parametric; in practice it will fully rely on the
   implementation of vector_transfer_read/write, which will handle boundary
   conditions and padding. This will happen by lowering to a lower-level
   abstraction either:
   a. directly in MLIR (whether DMA or just loops or any async tasks in the
      future) (white-boxing);
   b. in LLO/LLVM-IR/whatever black-box library call / search + swizzle inventor
      one may want to use;
   c. a partial mix of a. and b. (grey-boxing);
6. minor cleanups are applied;
7. mistakenly disabled unit tests are re-enabled (oopsie).
With this CL, this MLIR snippet:
```
mlfunc @vector_add_2d(%M : index, %N : index) -> memref<?x?xf32> {
  %A = alloc (%M, %N) : memref<?x?xf32>
  %B = alloc (%M, %N) : memref<?x?xf32>
  %C = alloc (%M, %N) : memref<?x?xf32>
  %f1 = constant 1.0 : f32
  %f2 = constant 2.0 : f32
  for %i0 = 0 to %M {
    for %i1 = 0 to %N {
      // non-scoped %f1
      store %f1, %A[%i0, %i1] : memref<?x?xf32>
    }
  }
  for %i4 = 0 to %M {
    for %i5 = 0 to %N {
      %a5 = load %A[%i4, %i5] : memref<?x?xf32>
      %b5 = load %B[%i4, %i5] : memref<?x?xf32>
      %s5 = addf %a5, %b5 : f32
      // non-scoped %f1
      %s6 = addf %s5, %f1 : f32
      store %s6, %C[%i4, %i5] : memref<?x?xf32>
    }
  }
  return %C : memref<?x?xf32>
}
```
vectorized with these arguments:
```
-vectorize -virtual-vector-size 256 --test-fastest-varying=0
```
vectorization produces this standard innermost-loop vectorized code:
```
mlfunc @vector_add_2d(%arg0 : index, %arg1 : index) -> memref<?x?xf32> {
%0 = alloc(%arg0, %arg1) : memref<?x?xf32>
%1 = alloc(%arg0, %arg1) : memref<?x?xf32>
%2 = alloc(%arg0, %arg1) : memref<?x?xf32>
%cst = constant 1.000000e+00 : f32
%cst_0 = constant 2.000000e+00 : f32
for %i0 = 0 to %arg0 {
for %i1 = 0 to %arg1 step 256 {
%cst_1 = constant splat<vector<256xf32>, 1.000000e+00> : vector<256xf32>
"vector_transfer_write"(%cst_1, %0, %i0, %i1) : (vector<256xf32>, memref<?x?xf32>, index, index) -> ()
}
}
for %i2 = 0 to %arg0 {
for %i3 = 0 to %arg1 step 256 {
%3 = "vector_transfer_read"(%0, %i2, %i3) : (memref<?x?xf32>, index, index) -> vector<256xf32>
%4 = "vector_transfer_read"(%1, %i2, %i3) : (memref<?x?xf32>, index, index) -> vector<256xf32>
%5 = addf %3, %4 : vector<256xf32>
%cst_2 = constant splat<vector<256xf32>, 1.000000e+00> : vector<256xf32>
%6 = addf %5, %cst_2 : vector<256xf32>
"vector_transfer_write"(%6, %2, %i2, %i3) : (vector<256xf32>, memref<?x?xf32>, index, index) -> ()
}
}
return %2 : memref<?x?xf32>
}
```
Of course, much more intricate n-D imperfectly-nested patterns can be emitted too in a fully declarative fashion, but this is enough for now.
PiperOrigin-RevId: 222280209
Added the TF::Conv2D op and the TFL::Conv2D op, and converted TF::Conv2D to
TFL::Conv2D, which needs to address the operand number mismatch
and attribute conversion.
PiperOrigin-RevId: 222277554
In the general case, loop bounds can be expressed as affine maps of the outer
loop iterators and function arguments. Relax the check that loop bounds must be
known integer constants and also accept one-dimensional affine bounds in
ConvertToCFG ForStmt lowering. Emit affine_apply operations for both the upper
and the lower bound. The semantics of MLFunctions guarantees that both bounds
can be computed before the loop starts iterating. Constant bounds are merely a
short-hand notation for zero-dimensional affine maps and get supported
transparently.
Multidimensional affine bounds are not yet supported because the target IR
dialect lacks min/max operations necessary to implement the corresponding
semantics.
PiperOrigin-RevId: 222275801
The op-stats pass currently reports the number of occurrences of each operation in a Module. This is useful for verifying transformation properties (e.g., 3 ops of a specific dialect, 0 of another), but probably not useful beyond that, so it is kept local to mlir-opt. Op attributes are not considered when counting.
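For illustration, the counting amounts to a histogram over op names; the sketch below uses a plain map and a stand-in for the module walk, not the pass's actual code.
```
// Simplified sketch of what op-stats computes: a histogram of op names over a
// module, ignoring attributes. The module/op representation is a placeholder.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
  // Stand-in for walking all operations in a Module.
  std::vector<std::string> opsInModule = {"tf.Add", "tf.Add", "tfl.conv_2d"};

  std::map<std::string, int> opCount;
  for (const std::string &name : opsInModule)
    ++opCount[name];  // attributes are deliberately not part of the key

  for (const auto &entry : opCount)
    std::cout << entry.first << " : " << entry.second << "\n";
  return 0;
}
```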
PiperOrigin-RevId: 222259727
This CL adds some vector support in preparation for the upcoming vector
materialization pass. In particular this CL adds 2 functions to:
1. compute the multiplicity of a subvector shape in a supervector shape;
2. help match operations on strict super-vectors. For a given subvector shape,
   such an operation manipulates a vector type whose shape is an integral
   multiple of the subvector shape, with multiplicity at least 2 (see the sketch below).
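A sketch of the multiplicity computation from item 1, using plain integer shape vectors; the real MLIR utility may differ in signature and in how it handles leading dimensions.
```
// Sketch of computing the multiplicity of a subvector shape inside a
// supervector shape: shapes are aligned on their trailing (minor) dimensions,
// each trailing super dimension must be an integral multiple of the matching
// sub dimension, and leading super dimensions contribute their full extent.
// This mirrors the description above but is not the actual MLIR function.
#include <cstdint>
#include <optional>
#include <vector>

std::optional<int64_t> shapeMultiplicity(const std::vector<int64_t> &superShape,
                                         const std::vector<int64_t> &subShape) {
  if (subShape.empty() || subShape.size() > superShape.size())
    return std::nullopt;
  int64_t multiplicity = 1;
  // Leading (major) super dimensions have no sub counterpart.
  size_t offset = superShape.size() - subShape.size();
  for (size_t i = 0; i < offset; ++i)
    multiplicity *= superShape[i];
  // Trailing dimensions must divide evenly.
  for (size_t i = 0; i < subShape.size(); ++i) {
    int64_t superDim = superShape[offset + i], subDim = subShape[i];
    if (subDim <= 0 || superDim % subDim != 0)
      return std::nullopt;
    multiplicity *= superDim / subDim;
  }
  // An operation on a *strict* super-vector requires multiplicity >= 2.
  return multiplicity;
}
```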
This CL also adds a TestUtil pass where we can put arbitrary tests of
functions and analyses that operate at a much smaller granularity than a pass
(e.g. an analysis for which it is convenient to write a bit of artificial MLIR
and some custom checks). This is in order to keep using FileCheck for
things that essentially look and feel like C++ unit tests.
PiperOrigin-RevId: 222250910
This does create an inconsistency between the print formats (e.g., attributes are normally printed before operands) but fixes invalid parsing and keeps constant uniform with respect to itself (function and int attributes have their type in the same place). Specifying the specific type for an int/float attribute might get revised shortly.
Also add a test to verify that the printed output can be parsed again.
PiperOrigin-RevId: 221923893
and getMemRefRegion() to work with specified loop depths; add support for
outgoing DMAs and store ops.
- add support for getMemRefRegion being symbolic in outer loops, and hence
  support for DMAs symbolic in outer surrounding loops.
- add DMA generation support for outgoing DMAs (store ops to a lower memory
  space); extend getMemoryRegion to store ops. -memref-bound-check now works
  with store ops as well.
- fix dma-generate (references to the old memref in the dma_start op were also
  being replaced with the new buffer); we need replaceAllMemRefUsesWith to work
  on only a subset of the uses. Update replaceAllMemRefUsesWith to take an
  optional 'operation' argument that serves as a filter - if provided, only
  those uses that are dominated by the filter are replaced (see the sketch
  after this list).
- Add missing printing of attributes for dma_start and dma_wait ops.
- update the FlatAffineConstraints API
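A toy sketch of the filtered replacement idea, with hypothetical placeholder types rather than MLIR's actual replaceAllMemRefUsesWith signature.
```
// Toy sketch of the filtered replacement described above: replace uses of an
// old memref with a new one, but only in operations that pass an optional
// filter (standing in for "dominated by the given operation"). The types here
// are hypothetical placeholders, not MLIR's replaceAllMemRefUsesWith.
#include <functional>
#include <vector>

struct Value {};
struct Operation {
  std::vector<Value *> operands;
};

void replaceMemRefUses(Value *oldMemRef, Value *newMemRef,
                       std::vector<Operation *> &users,
                       const std::function<bool(Operation *)> &filter = nullptr) {
  for (Operation *op : users) {
    // Skip uses that the filter rejects, e.g. ops not dominated by the
    // dma_start that defines the new buffer.
    if (filter && !filter(op))
      continue;
    for (Value *&operand : op->operands)
      if (operand == oldMemRef)
        operand = newMemRef;
  }
}
```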
PiperOrigin-RevId: 221889223
This would also make the CallOp and ExtractElementOp invocations from the eliminateIfOp function always valid and remove the need for error handling.
Also, verify TensorFlowOp trait.
PiperOrigin-RevId: 221737192
We do some limited renaming here but define an alias for OperationInst so that a follow-up CL can solely perform the large-scale renaming.
PiperOrigin-RevId: 221726963
* Optionally attach the type of integer and floating point attributes to the attributes; this allows restricting an int/float to a specific width.
  - Currently this allows suffixing an int/float constant with a type [this might be revised in the future].
  - Default to i64 and f32 if not specified.
* For index types the APInt width used is 64.
* Change callers to request a specific attribute type.
* Store iN type with an APInt of width N (see the sketch below).
* This change does not handle the folding of constants of different types (e.g., doing int type promotions to support constant folding of i3 and i32), and instead restricts constant folding to operate only on matching types.
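To illustrate the "iN stored with an APInt of width N" convention, a small sketch using llvm::APInt directly; this only shows the width bookkeeping, not IntegerAttr itself.
```
// Sketch of the storage convention: an attribute of type iN carries an APInt
// whose bit width is exactly N; index-typed attributes use a 64-bit APInt.
#include "llvm/ADT/APInt.h"

void storageConventionExamples() {
  llvm::APInt i1Value(/*numBits=*/1, /*val=*/1);      // i1 attribute payload
  llvm::APInt i8Value(/*numBits=*/8, /*val=*/-5, /*isSigned=*/true);
  llvm::APInt indexValue(/*numBits=*/64, /*val=*/42); // index uses width 64

  (void)i1Value.getBitWidth();   // == 1
  (void)i8Value.getSExtValue();  // == -5
  (void)indexValue.getZExtValue();
}
```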
PiperOrigin-RevId: 221722699
Unranked tensors used to return an empty list of dimensions as their shape. This is confusing since an empty list of dimensions is also returned for 0-D tensors. In particular, the hasStaticShape() method used to check whether any of the dimensions is -1; with an empty dimension list that check trivially passed for unranked tensors even though they do not have a static shape.
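A sketch of the ambiguity and one way to make hasStaticShape() behave, with illustrative types rather than MLIR's TensorType API.
```
// Sketch of why an empty dimension list is ambiguous: a 0-D tensor and an
// unranked tensor would both report {}. Modeling the shape as optional makes
// hasStaticShape() naturally false for the unranked case.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

struct TensorShape {
  // std::nullopt => unranked; empty vector => 0-D (scalar) tensor.
  std::optional<std::vector<int64_t>> dims;

  bool hasStaticShape() const {
    if (!dims)
      return false;  // unranked tensors never have a static shape
    // Ranked: static iff no dimension is dynamic (-1).
    return std::none_of(dims->begin(), dims->end(),
                        [](int64_t d) { return d == -1; });
  }
};
```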
PiperOrigin-RevId: 221571138
Array attributes can be nested, and function attributes can appear at any nesting
level. They should be remapped to point to the generated CFGFunction after
ML-to-CFG conversion, similarly to plain function attributes. Extract the
nested attribute remapping functionality from the Parser to Utils. Extract out
the remapping function for individual Functions from the module remapping
function. Use these new functions in the ML-to-CFG conversion pass and in the
parser.
PiperOrigin-RevId: 221510997
These functions are declared in Transforms/LoopUtils.h (which belongs to the
Transforms/Utils library) but were defined in the loop unrolling pass in
Transforms/LoopUnroll.cpp. As a result, targets depending only on
TransformUtils library but not on Transforms could get link errors. Move the
definitions to Transforms/Utils/LoopUtils.cpp where they should actually live.
This does not modify any code.
PiperOrigin-RevId: 221508882
This CL adds support for, and a vectorization test to perform, scalar 2-D addf.
The support extension notably comprises:
1. extend the vectorizable test to exclude vector_transfer operations and
   expose them to LoopAnalysis where they are needed. This is a temporary
   solution until a concrete MLIR Op exists;
2. add some more functional sugar: mapKeys, apply, and ScopeGuard (which became
   relevant again);
3. fix improper shifting during coarsening;
4. rename unaligned load/store to vector_transfer_read/write and simplify the
   design by removing the unnecessary AllocOps that were introduced prematurely:
   vector_transfer_read currently has the form:
     (memref<?x?x?xf32>, index, index, index) -> vector<32x64x256xf32>
   vector_transfer_write currently has the form:
     (vector<32x64x256xf32>, memref<?x?x?xf32>, index, index, index) -> ()
5. add vectorizeOperations, which traverses the operations in a ForStmt and
   rewrites them to their vector form;
6. add support for vector splat from a constant.
The relevant tests are also updated.
PiperOrigin-RevId: 221421426
* Add skeleton br/cond_br builtin ops.
* Add a terminator trait for operations.
* Mark ReturnOp as a Terminator.
The functionality for managing/parsing/verifying successors will be added in a follow-up CL.
PiperOrigin-RevId: 221283000
This is to allow the use of comment blocks along with splits in test cases.
For example, the "Function Control Flow Lowering" comment block in
raise-control-flow.mlir.
TESTED with existing unit tests
PiperOrigin-RevId: 221214451
Similarly to other types, introduce "get" and "getChecked" static member
functions for IntegerType. The latter emits errors to the error handler
registered with the MLIR context and returns a null type for the caller to
handle errors gracefully. This deduplicates type consistency checks between
the parser and the builder. Update the parser to call IntegerType::getChecked
for error reporting instead of the builder, which would simply assert.
This CL completes the type system error emission refactoring: the parser now
only emits syntax-related errors for types while type factory systems may emit
type consistency errors.
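A simplified sketch of the get/getChecked pattern described here; the diagnostic plumbing and width check are placeholders, not the actual IntegerType implementation.
```
// Sketch of the get/getChecked split: getChecked reports invalid widths
// through a diagnostic callback and returns a "null" type for the caller to
// handle gracefully, while get simply asserts.
#include <cassert>
#include <functional>
#include <string>

struct IntegerType {
  unsigned width = 0;
  explicit operator bool() const { return width != 0; }  // null-type check

  static IntegerType get(unsigned width) {
    assert(isValidWidth(width) && "invalid integer bitwidth");
    return IntegerType{width};
  }

  static IntegerType getChecked(
      unsigned width, const std::function<void(const std::string &)> &emitError) {
    if (!isValidWidth(width)) {
      emitError("integer bitwidth is out of the allowed range");
      return IntegerType{};  // null type; caller recovers gracefully
    }
    return IntegerType{width};
  }

  // Placeholder consistency check shared by both entry points.
  static bool isValidWidth(unsigned width) { return width > 0 && width <= 4096; }
};
```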
PiperOrigin-RevId: 221165207
Branch instruction arguments were defined and used inconsistently across
different instructions, in both the spec and the implementation. In
particular, conditional and unconditional branch instructions were using
different syntax in the implementation. This led to the IR we produce not
being accepted by the parser. Update the printer to use common syntax: `(`
list-of-SSA-uses `:` list-of-types `)`. The motivation for choosing this
syntax as opposed to the one in the spec, `(` list-of-SSA-uses `)` `:`
list-of-types, is twofold. First, it is tricky to differentiate the label
of the false branch from the type while parsing conditional branches (which is
what apparently motivated the implementation to diverge from the spec in the
first place). Second, the ongoing convergence between terminator instructions
and other operations prompts for consistency between their operand list syntax.
After this change, the only remaining difference between the two is the use of
parentheses. Update the comment of the parser that did not correspond to the
code. Remove the unused isParenthesized argument from parseSSAUseAndTypeList.
Update the spec accordingly. Note that the examples in the spec were _not_
using the EBNF defined a couple of lines above them, but were using the current
syntax. Add a supplementary example of a branch to a basic block with multiple
arguments.
PiperOrigin-RevId: 221162655
Implement a pass converting a subset of MLFunctions to CFGFunctions. Currently
supports arbitrarily complex imperfect loop nests with statically constant
(i.e., not affine map) bounds filled with operations. Does NOT support
branches and non-constant loop bounds.
Conversion is performed per-function and the function names are preserved to
avoid breaking any external references to the current module. In-memory IR is
updated to point to the right functions in direct calls and constant loads.
This behavior is tested via a really hidden flag that enables function
renaming.
Inside each function, the control flow conversion is based on single-entry
single-exit regions, i.e. subgraphs of the CFG that have exactly one incoming
and exactly one outgoing edge. Since an MLFunction must have a single "return"
statement as per MLIR spec, it constitutes an SESE region. Individual
operations are appended to this region. Control flow statements are
recursively converted into such regions that are concatenated with the current
region. Bodies of compound statements also form SESE regions, which makes it
easy to nest control flow statements. Note that SESE regions are not
materialized in the code. It is sufficient to keep track of the end of the
region as the current instruction insertion point as long as all recursive
calls update the insertion point in the end.
The converter maintains a mapping between SSA values in ML functions and their
CFG counterparts. The mapping is used to find the operands for each operation
and is updated to contain the results of each operation as the conversion
continues.
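A small sketch of the value remapping described in the last paragraph, with placeholder types rather than the converter's actual classes.
```
// Sketch of the SSA value remapping: a map from ML-function values to their
// CFG-function counterparts is consulted for each operand and extended with
// each result as operations are converted.
#include <cassert>
#include <unordered_map>

struct Value {};

class ValueRemapper {
public:
  // Look up the CFG value corresponding to an ML-function operand.
  Value *lookup(Value *mlValue) const {
    auto it = mapping.find(mlValue);
    assert(it != mapping.end() && "operand converted before its definition");
    return it->second;
  }

  // Record the CFG value produced for an ML-function result.
  void map(Value *mlValue, Value *cfgValue) { mapping[mlValue] = cfgValue; }

private:
  std::unordered_map<Value *, Value *> mapping;
};
```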
PiperOrigin-RevId: 221162602
Change the storage type to APInt from int64_t for IntegerAttr (following the change to APFloat storage in FloatAttr). Effectively a direct change from int64_t to 64-bit APInt throughout (the bitwidth hardcoded). This change also adds a getInt convenience method to IntegerAttr and replaces previous getValue calls with getInt calls.
While this change updates the storage type, it does not update all constant folding calls.
PiperOrigin-RevId: 221082788
time. The "Fast and Flexible Instruction Selection With Constraints" paper
from CC2018 makes a credible argument that dynamic costs aren't actually
necessary/important, and we are not using them.
- Check in my "MLIR Generic DAG Rewriter Infrastructure" design doc to the
source tree.
PiperOrigin-RevId: 221017546