llvm-project/mlir/test/mlir-tblgen/reference-impl.td

// RUN: mlir-tblgen -gen-reference-implementations -I %S/../../include %s | FileCheck %s
#ifdef OP_BASE
#else
include "mlir/IR/OpBase.td"
#endif // OP_BASE
def X_AddOp : Op<"x.add">,
    Arguments<(ins Tensor:$A, Tensor:$B)>,
    Results<(outs Tensor: $C)> {
  // TODO: extract referenceImplementation to Op.
  code referenceImplementation = [{
    // Build a loop nest over the bounds of view_A and emit the pointwise
    // computation C(ivs) = A(ivs) + B(ivs); rank-agnostic via the ivs array.
    auto ivs = IndexHandle::makeIndexHandles(view_A.rank());
    auto pivs = IndexHandle::makeIndexHandlePointers(ivs);
    IndexedValue A(arg_A), B(arg_B), C(arg_C);
    LoopNestBuilder(pivs, view_A.getLbs(), view_A.getUbs(), view_A.getSteps())({
      C(ivs) = A(ivs) + B(ivs)
    });
  }];
}
// CHECK: printRefImplementation