I used the codemod Python tool to do this with the following commands:
```
codemod 'tensorflow/mlir/blob/master/include' 'llvm/llvm-project/blob/master/mlir/include'
codemod 'tensorflow/mlir/blob/master' 'llvm/llvm-project/blob/master/mlir'
codemod 'tensorflow/mlir' 'llvm-project/llvm'
```
Differential Revision: https://reviews.llvm.org/D72244
This is an initial step toward refactoring the representation of OpResult, as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
LLVM IR supports linkage on global objects such as global variables and
functions. Introduce the Linkage attribute into the LLVM dialect, backed by
integer storage. Use this attribute on LLVM::GlobalOp and make it mandatory.
Implement parsing/printing of the attribute and conversion to LLVM IR.
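As a rough illustration (a hypothetical sketch using the LLVM C++ API, not code from this change), the attribute ultimately corresponds to the linkage kind carried by the LLVM IR global object produced during conversion:
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

// Hypothetical sketch: an LLVM IR global carrying internal linkage, i.e. the
// kind of object an LLVM::GlobalOp with a Linkage attribute converts into.
llvm::GlobalVariable *makeInternalGlobal(llvm::Module &module) {
  llvm::LLVMContext &ctx = module.getContext();
  llvm::Type *i32 = llvm::Type::getInt32Ty(ctx);
  return new llvm::GlobalVariable(module, i32, /*isConstant=*/false,
                                  llvm::GlobalValue::InternalLinkage,
                                  llvm::ConstantInt::get(i32, 0), "g");
}
```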
See tensorflow/mlir#277.
PiperOrigin-RevId: 283309328
Support for including a file multiple times was added in TableGen, removing the need for these extra guards, as we already insert C/C++-style header guards within each of the specific .td files.
PiperOrigin-RevId: 282076728
This change allows for attaching additional nested references to a SymbolRefAttr: if a referenced symbol also defines a SymbolTable, a nested reference can be used to refer to a symbol within that table. Nested references are printed after the main reference in the following form:
symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*
Example:

```
module @reference {
  func @nested_reference()
}

my_reference_op @reference::@nested_reference
```
Given that SymbolRefAttr is now more general, the existing functionality centered around a single reference is moved to a derived class, FlatSymbolRefAttr. Follow-up commits will add support for scoped references to lookups, RAUW, etc.
PiperOrigin-RevId: 279860501
This chapter adds a new composite type to Toy, and shows the process of adding a new type to the IR, adding and updating operations to use it, and constant folding operations that produce it.
PiperOrigin-RevId: 279107885
Upstream LLVM gained support for #ifndef with https://reviews.llvm.org/D61888
This is changed mechanically via the following command:
```
find . -name "*.td" -exec sed -i -e ':a' -e 'N' -e '$!ba' -e 's/#ifdef \([A-Z_]*\)\n#else/#ifndef \1/g' {} \;
```
PiperOrigin-RevId: 277789427
This allows them to be used on non-function operations, or even on other function-like operations. The algorithms are already generic, so this is simply changing the derived pass type. The majority of this change is just ensuring that the nesting of these passes remains the same, as the pass manager won't auto-nest them anymore.
PiperOrigin-RevId: 276573038
This change rewrites Ch-4.md to introduce interfaces in a detailed step-by-step manner, adds examples, and fixes some errors.
PiperOrigin-RevId: 275887017
This part of the tutorial is now covered by a new flow in Toy. This also removes a point of confusion, as there is now a proper Linalg dialect.
PiperOrigin-RevId: 275338933
This chapter introduces the notion of a full conversion, and adds support for lowering to the LLVM dialect, LLVM IR, and thus code generation.
PiperOrigin-RevId: 275337786
This chapter adds a partial lowering of toy operations, all but PrintOp, to a combination of the Affine and Std dialects. This chapter focuses on introducing the conversion framework, the benefits of partial lowering, and how easily dialects may co-exist in the IR.
PiperOrigin-RevId: 275150649
The GenericCallOp needed to implement the CallOpInterface in order to be picked up by the inliner. This also adds a CastOp to perform shape casts that are generated during inlining. The casts generated by the inliner will be folded away after shape inference.
PiperOrigin-RevId: 275150438
This change performs general cleanups of the implementation of Ch.4 and fixes some bugs. For example, the operations currently don't inherit from the shape inference interface.
PiperOrigin-RevId: 275089914
This chapter now introduces and makes use of the Interface concept in MLIR to demonstrate ShapeInference.
Closes tensorflow/mlir#191
PiperOrigin-RevId: 275085151
This change refactors the toyc driver to be much cleaner and easier to extend. It also cleans up a few comments in the combiner.
PiperOrigin-RevId: 274973808
This uses Table-driven Declarative Rewrite Rules (DRR); the previous version of the tutorial only showed the C++ patterns.
Closes tensorflow/mlir#187
PiperOrigin-RevId: 274852321
This effectively rewrites Ch.2 to introduce dialects, operations, and registration instead of deferring to Ch.3. This allows for introducing the best practices up front (using ODS, registering operations, etc.), and limits the opaque API to the chapter document instead of the code.
PiperOrigin-RevId: 274724289
This function-like operation allows one to define functions that have a wrapped
LLVM IR function type, in particular variadic functions. The operation was
added in parallel to the existing lowering flow; this commit only switches the
flow to use it.
Using a custom function type makes the LLVM IR dialect type system more
consistent and avoids complex conversion rules for functions that previously
had to use the built-in function type instead of a wrapped LLVM IR dialect type
and perform conversions during the analysis.
PiperOrigin-RevId: 273910855
This makes the name of the conversion pass more consistent with the naming
scheme, since it actually converts from the Loop dialect to the Standard
dialect rather than working with arbitrary control flow operations.
PiperOrigin-RevId: 272612112
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
Elem *ptr;
int64_t offset;
int64_t sizes[Rank];
int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.
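A minimal sketch of that address computation in C++ (illustrative only; the names below are not part of the CL):
```
#include <cstddef>
#include <cstdint>

// The descriptor layout from above, given a name so the example compiles.
template <typename Elem, size_t Rank>
struct StridedMemRefDescriptor {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};

// Address of the element at `index`: ptr + offset + SUM_i index_i * stride_i.
template <typename Elem, size_t Rank>
Elem *elementAddress(const StridedMemRefDescriptor<Elem, Rank> &desc,
                     const int64_t (&index)[Rank]) {
  int64_t linear = desc.offset;
  for (size_t i = 0; i < Rank; ++i)
    linear += index[i] * desc.strides[i];
  return desc.ptr + linear;
}
```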
The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.
PiperOrigin-RevId: 272033399
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem in general is very complex and most likely involves
taking a dependence on Clang that we do not want at the moment.
A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
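A hypothetical sketch of the resulting boundary as seen from C/C++ (the struct layout and function name are illustrative, not taken from the CL):
```
#include <cstdint>

// Illustrative rank-2 memref descriptor.
struct MemRef2DDescriptor {
  float *ptr;
  int64_t sizes[2];
};

// An externally compiled C/C++ function now receives a pointer to the
// descriptor at the function boundary rather than the struct by value.
extern "C" void external_kernel(MemRef2DDescriptor *desc);
```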
PiperOrigin-RevId: 271591708
The RFC for unifying Linalg and Affine compilation passes into an end-to-end flow with a predictable ABI and linkage to external function calls raised the question of why we have variable-sized descriptors for memrefs depending on whether they have static or dynamic dimensions (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
This CL standardizes the ABI on the rank of the memrefs.
The LLVM struct for a memref becomes equivalent to:
```
template <typename Elem, size_t Rank>
struct {
Elem *ptr;
int64_t sizes[Rank];
};
```
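For instance (an illustrative instantiation, not code from the CL), a rank-2 memref such as memref<?x42xf32> would map to the same fixed-size descriptor regardless of which dimensions are static:
```
#include <cstdint>

// Rank-2 instantiation: the descriptor size depends only on the rank,
// so both the dynamic dimension and the static 42 occupy a slot in sizes.
struct MemRef2D {
  float *ptr;
  int64_t sizes[2];
};
```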
PiperOrigin-RevId: 270947276