forked from OSchip/llvm-project
Update code block designations
'```mlir' indicates that a code block contains MLIR code and should use MLIR syntax highlighting, while '{.mlir}' was a markdown extension that used a style file to color the background of the code block differently. That background-color extension was a custom one that we can retire now that we have syntax highlighting. Also change '```td' to '```tablegen' to match the chroma syntax-highlighting designation.

PiperOrigin-RevId: 286222976
This commit is contained in:
parent 2666b97314
commit d7e2cc9bd1
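To make the renaming concrete, here is a minimal before/after sketch (the sample block contents are illustrative, borrowed from the Toy examples in the diff below, not a literal excerpt):

````markdown
<!-- Before: custom extension; a style file tinted the block background. -->
```{.mlir}
%0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
```

<!-- After: plain info string; chroma applies MLIR syntax highlighting. -->
```mlir
%0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
```

<!-- Likewise, '```td' fences become '```tablegen'. -->
````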
@@ -105,7 +105,7 @@ array-type ::= `!spv.array<` integer-literal `x` element-type `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.array<4 x i32>
 !spv.array<16 x vector<4 x f32>>
 ```
@@ -154,7 +154,7 @@ pointer-type ::= `!spv.ptr<` element-type `,` storage-class `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.ptr<i32, Function>
 !spv.ptr<vector<4 x f32>, Uniform>
 ```
@@ -169,7 +169,7 @@ runtime-array-type ::= `!spv.rtarray<` element-type `>`
 
 For example,
 
-```{.mlir}
+```mlir
 !spv.rtarray<i32>
 !spv.rtarray<vector<4 x f32>>
 ```

@@ -374,7 +374,7 @@ Example:
 
 TODO: This operation is easy to extend to broadcast to dynamically shaped
 tensors in the same way dynamically shaped memrefs are handled.
-```mlir {.mlir}
+```mlir
 // Broadcasts %s to a 2-d dynamically shaped tensor, with %m, %n binding
 // to the sizes of the two dynamic dimensions.
 %m = "foo"() : () -> (index)

@@ -43,7 +43,7 @@ operations are generated from. To define an operation one needs to specify:
 are ignored by the main op and doc generators, but could be used in, say,
 the translation from a dialect to another representation.
 
-```td {.td}
+```tablegen
 def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
                             [NoSideEffect, SameValueType]>,
                      Results<(outs Tensor)> {
@@ -99,7 +99,7 @@ generated.
 Let us continue with LeakyRelu. To map from TensorFlow's `LeakyRelu` to
 TensorFlow Lite's `LeakyRelu`:
 
-```td {.td}
+```tablegen
 def : Pat<(TF_LeakyReluOp $arg, F32Attr:$a), (TFL_LeakyReluOp $arg, $a)>
 ```
 
@@ -119,7 +119,7 @@ as destination then one could use a general native code fallback method. This
 consists of defining a pattern as well as adding a C++ function to perform the
 replacement:
 
-```td {.td}
+```tablegen
 def createTFLLeakyRelu : NativeCodeCall<
     "createTFLLeakyRelu($_builder, $0->getDefiningOp(), $1, $2)">;
 

@@ -88,7 +88,7 @@ definition of the trait class. This can be done using the `NativeOpTrait` and
 `ParamNativeOpTrait` classes. `ParamNativeOpTrait` provides a mechanism in which
 to specify arguments to a parametric trait class with an internal `Impl`.
 
-```td
+```tablegen
 // The argument is the c++ trait class name.
 def MyTrait : NativeOpTrait<"MyTrait">;
 
@@ -100,7 +100,7 @@ class MyParametricTrait<int prop>
 
 These can then be used in the `traits` list of an op definition:
 
-```td
+```tablegen
 def OpWithInferTypeInterfaceOp : Op<...[MyTrait, MyParametricTrait<10>]> { ... }
 ```
 

@@ -36,7 +36,7 @@ def transpose_transpose(x) {
 
 Which corresponds to the following IR:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
   %1 = "toy.transpose"(%0) : (tensor<*xf64>) -> tensor<*xf64>
@@ -131,7 +131,7 @@ similar way to LLVM:
 Finally, we can run `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt` and
 observe our pattern in action:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64>
   "toy.return"(%arg0) : (tensor<*xf64>) -> ()
@@ -146,13 +146,13 @@ input. The Canonicalizer knows to clean up dead operations; however, MLIR
 conservatively assumes that operations may have side-effects. We can fix this by
 adding a new trait, `NoSideEffect`, to our `TransposeOp`:
 
-```TableGen(.td):
+```tablegen:
 def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
 ```
 
 Let's retry now `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:
 
-```MLIR(.mlir)
+```mlir
 func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
   "toy.return"(%arg0) : (tensor<*xf64>) -> ()
 }
@@ -169,7 +169,7 @@ Declarative, rule-based pattern-match and rewrite (DRR) is an operation
 DAG-based declarative rewriter that provides a table-based syntax for
 pattern-match and rewrite rules:
 
-```TableGen(.td):
+```tablegen:
 class Pattern<
     dag sourcePattern, list<dag> resultPatterns,
     list<dag> additionalConstraints = [],
@@ -179,7 +179,7 @@ class Pattern<
 A redundant reshape optimization similar to SimplifyRedundantTranspose can be
 expressed more simply using DRR as follows:
 
-```TableGen(.td):
+```tablegen:
 // Reshape(Reshape(x)) = Reshape(x)
 def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
                                    (ReshapeOp $arg)>;
@@ -193,7 +193,7 @@ transformation is conditional on some properties of the arguments and results.
 An example is a transformation that eliminates reshapes when they are redundant,
 i.e. when the input and output shapes are identical.
 
-```TableGen(.td):
+```tablegen:
 def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
 def RedundantReshapeOptPattern : Pat<
   (ReshapeOp:$res $arg), (replaceWithValue $arg),
@@ -207,7 +207,7 @@ C++. An example of such an optimization is FoldConstantReshape, where we
 optimize Reshape of a constant value by reshaping the constant in place and
 eliminating the reshape operation.
 
-```TableGen(.td):
+```tablegen:
 def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
 def FoldConstantReshapeOptPattern : Pat<
   (ReshapeOp:$res (ConstantOp $arg)),
@@ -226,7 +226,7 @@ def main() {
 }
 ```
 
-```MLIR(.mlir)
+```mlir
 module {
   func @main() {
     %0 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>}
@@ -243,7 +243,7 @@ module {
 We can try to run `toyc-ch3 test/trivialReshape.toy -emit=mlir -opt` and observe
 our pattern in action:
 
-```MLIR(.mlir)
+```mlir
 module {
   func @main() {
     %0 = "toy.constant"() {value = dense<[[1.000000e+00], [2.000000e+00]]> \

@@ -107,7 +107,7 @@ and core to a single operation. The interface that we will be adding here is the
 To add this interface we just need to include the definition into our operation
 specification file (`Ops.td`):
 
-```.td
+```tablegen
 #ifdef MLIR_CALLINTERFACES
 #else
 include "mlir/Analysis/CallInterfaces.td"
@@ -116,7 +116,7 @@ include "mlir/Analysis/CallInterfaces.td"
 
 and add it to the traits list of `GenericCallOp`:
 
-```.td
+```tablegen
 def GenericCallOp : Toy_Op<"generic_call",
     [DeclareOpInterfaceMethods<CallOpInterface>]> {
   ...
@@ -176,7 +176,7 @@ the inliner expects an explicit cast operation to be inserted. For this, we need
 to add a new operation to the Toy dialect, `ToyCastOp`(toy.cast), to represent
 casts between two different shapes.
 
-```.td
+```tablegen
 def CastOp : Toy_Op<"cast", [NoSideEffect, SameOperandsAndResultShape]> {
   let summary = "shape cast operation";
   let description = [{
@@ -263,7 +263,7 @@ to be given to the generated C++ interface class as a template argument. For our
 purposes, we will name the generated class a simpler `ShapeInference`. We also
 provide a description for the interface.
 
-```.td
+```tablegen
 def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
   let description = [{
     Interface to access a registered method to infer the return types for an
@@ -279,7 +279,7 @@ the need. See the
 [ODS documentation](../../OpDefinitions.md#operation-interfaces) for more
 information.
 
-```.td
+```tablegen
 def ShapeInferenceOpInterface : OpInterface<"ShapeInference"> {
   let description = [{
     Interface to access a registered method to infer the return types for an

@@ -237,7 +237,7 @@ def PrintOp : Toy_Op<"print"> {
 
 Looking back at our current working example:
 
-```.mlir
+```mlir
 func @main() {
   %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
   %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>

@@ -113,7 +113,7 @@ that only legal operations will remain after the conversion.
 
 Looking back at our current working example:
 
-```.mlir
+```mlir
 func @main() {
   %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
   %2 = "toy.transpose"(%0) : (tensor<2x3xf64>) -> tensor<3x2xf64>
@@ -125,7 +125,7 @@ func @main() {
 
 We can now lower down to the LLVM dialect, which produces the following code:
 
-```.mlir
+```mlir
 llvm.func @free(!llvm<"i8*">)
 llvm.func @printf(!llvm<"i8*">, ...) -> !llvm.i32
 llvm.func @malloc(!llvm.i64) -> !llvm<"i8*">

@@ -358,7 +358,7 @@ A few of our existing operations will need to be updated to handle `StructType`.
 The first step is to make the ODS framework aware of our Type so that we can use
 it in the operation definitions. A simple example is shown below:
 
-```td
+```tablegen
 // Provide a definition for the Toy StructType for use in ODS. This allows for
 // using StructType in a similar way to Tensor or MemRef.
 def Toy_StructType :
@@ -371,7 +371,7 @@ def Toy_Type : AnyTypeOf<[F64Tensor, Toy_StructType]>;
 We can then update our operations, e.g. `ReturnOp`, to also accept the
 `Toy_StructType`:
 
-```td
+```tablegen
 def ReturnOp : Toy_Op<"return", [Terminator, HasParent<"FuncOp">]> {
   ...
   let arguments = (ins Variadic<Toy_Type>:$input);

@@ -1,11 +0,0 @@
-.mlir {
-  background-color: #eef;
-}
-
-.ebnf {
-  background-color: #ffe;
-}
-
-.td {
-  background-color: #eef;
-}

@@ -67,7 +67,7 @@ std::string generateLibraryCallName(Operation *op);
 /// `A(i, k) * B(k, j) -> C(i, j)` will have the following, ordered, list of
 /// affine maps:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///   (
 ///     (i, j, k) -> (i, k),
 ///     (i, j, k) -> (k, j),

@@ -46,7 +46,7 @@ public:
 /// It is constructed by calling the linalg.range op with three values index of
 /// index type:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    func @foo(%arg0 : index, %arg1 : index, %arg2 : index) {
 ///      %0 = linalg.range %arg0:%arg1:%arg2 : !linalg.range
 ///    }

@@ -180,27 +180,27 @@ AffineMap simplifyAffineMap(AffineMap map);
 ///
 /// Example 1:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    (d0, d1, d2) -> (d1, d1, d0, d2, d1, d2, d1, d0)
 ///                      0       2   3
 /// ```
 ///
 /// returns:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
 /// ```
 ///
 /// Example 2:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    (d0, d1, d2) -> (d1, d0 + d1, d0, d2, d1, d2, d1, d0)
 ///                      0            2   3
 /// ```
 ///
 /// returns:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    (d0, d1, d2, d3, d4, d5, d6, d7) -> (d2, d0, d3)
 /// ```
 AffineMap inversePermutation(AffineMap map);
@@ -214,7 +214,7 @@ AffineMap inversePermutation(AffineMap map);
 /// Example:
 /// When applied to the following list of 3 affine maps,
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    {
 ///      (i, j, k) -> (i, k),
 ///      (i, j, k) -> (k, j),
@@ -224,7 +224,7 @@ AffineMap inversePermutation(AffineMap map);
 ///
 /// Returns the map:
 ///
-/// ```{.mlir}
+/// ```mlir
 ///    (i, j, k) -> (i, k, k, j, i, j)
 /// ```
 AffineMap concatAffineMaps(ArrayRef<AffineMap> maps);

@@ -512,7 +512,7 @@ static LogicalResult verify(YieldOp op) {
 
 // A LinalgLibraryOp prints as:
 //
-// ```{.mlir}
+// ```mlir
 //   concrete_op_name (ssa-inputs, ssa-outputs) : view-types
 // ```
 //