Existing type syntax contains the following productions:
function-type ::= type-list-parens `->` type-list
type-list ::= type | type-list-parens
type ::= <..> | function-type
Due to these rules, when the parser sees `->` followed by `(`, it cannot
tell whether `(` starts a parenthesized list of function result types or the
parenthesized operand list of another function type returned from the
current function. An unbounded amount of lookahead would be needed to find
the `->` at the right level of function nesting and to differentiate between
type lists and singular function types.
Instead, require a function result type that is itself a function type to
always be parenthesized, at the syntax level. Update the spec, and update
the parser to match the production rule names used in the spec (although it
would have worked without modifications). Fix a function type parsing bug in
the process: the parser used to accept a non-parenthesized list of argument
types, which the spec disallows.
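For illustration, a sketch of the required parenthesization (types chosen arbitrarily):
```mlir
// A function type whose result is itself a function type must
// parenthesize that result:
//   (i1) -> ((i5) -> i5)    // function from i1 to a function type
// The previously ambiguous spelling is rejected:
//   (i1) -> (i5) -> i5
```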
PiperOrigin-RevId: 232528361
The generic form may be more desirable even when a custom form is
specified, so add an option to enable emitting it. This also exposes a
current bug when round-tripping a constant with a function attribute.
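For illustration, a sketch of the two forms of the same operation (generic spelling assumed):
```mlir
// Custom (pretty) form:
%0 = addf %a, %b : f32
// Generic form of the same operation:
%1 = "addf"(%a, %b) : (f32, f32) -> f32
```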
PiperOrigin-RevId: 232350712
This adds signed/unsigned integer division and remainder operations to the
StandardOps dialect. Two versions are required because MLIR integers are
signless, but the meaning of the leading bit is important in division and
affects the results. LLVM IR made a similar choice. Define the operations in
the tablegen file and add simple constant folding hooks in the C++
implementation. Handle signed division overflow and division by zero errors in
constant folding. Canonicalization is left for future work.
These operations are necessary to lower affine_apply operations down to LLVM IR.
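A sketch of the textual forms, assuming the signed/unsigned spellings divis/diviu/remis/remiu:
```mlir
%q  = divis %a, %b : i32   // signed division
%uq = diviu %a, %b : i32   // unsigned division
%r  = remis %a, %b : i32   // signed remainder
%ur = remiu %a, %b : i32   // unsigned remainder
```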
PiperOrigin-RevId: 228077549
The entire compiler now looks at structural properties of the function (e.g.
does it have one block, does it contain an if/for stmt, etc.), so the only
thing holding up this difference is round-tripping through the parser/printer
syntax. Removing this shrinks the compiler by ~140 LOC.
This is step 31/n towards merging instructions and statements. The last step
is updating the docs, which I will do as a separate patch in order to split it
from this mostly mechanical patch.
PiperOrigin-RevId: 227540453
Print the entry block in a CFG function's argument line. Since I'm touching
all of the testcases anyway, change the argument list from printing as
"%arg : type" to "%arg: type", which is more consistent with bb arguments.
In addition to being more consistent, this is a much nicer look for cfg functions.
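A sketch of the new look (function and value names hypothetical, cfgfunc keyword as of this change assumed):
```mlir
cfgfunc @simple(%a: i32, %b: i32) -> i32 {
  %0 = addi %a, %b : i32
  return %0 : i32
}
```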
PiperOrigin-RevId: 227240069
An extensive discussion demonstrated that it is difficult to support `index`
types as elements of compound (vector, memref, tensor) types. In particular,
their size is unknown until the target-specific lowering takes place. MLIR may
need to store constants of the fixed-shape compound types (e.g.,
vector<4 x index>) internally and must know the size of the element type and
data layout constraints. The same information is necessary for target-specific
lowering and translation to reliably support compound types with `index`
elements, but MLIR does not have a dedicated target description mechanism yet.
Should use cases for compound types with `index` elements appear, they can
be handled via an `index_cast` operation that converts between `index` and
fixed-size integer types at the SSA value level instead of the type level.
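A sketch of the suggested value-level cast (op name and syntax assumed, not taken from this CL):
```mlir
%i = index_cast %v : i32 to index   // integer -> index
%j = index_cast %i : index to i64   // index -> integer
```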
PiperOrigin-RevId: 225064373
This CL implements and uses VectorTransferOps in lieu of the former custom
call op. Tests are updated accordingly.
VectorTransferOps come in two flavors: VectorTransferReadOp and
VectorTransferWriteOp.
VectorTransferOps can be thought of as a backend-independent
pseudo op/library call that needs to be legalized to MLIR (whiteboxed) before
it can be lowered to backend-dependent IR.
Note that the current implementation does not yet support a real permutation
map. Proper support will come in a followup CL.
VectorTransferReadOp
====================
VectorTransferReadOp performs a blocking read from a scalar memref
location into a super-vector of the same elemental type. The operation is
called 'read', as opposed to 'load', because the super-vector granularity
is generally not representable with a single hardware register. As a
consequence, memory transfers will generally be required when lowering a
VectorTransferReadOp. A VectorTransferReadOp is thus a mid-level abstraction
that supports super-vectorization with non-effecting padding for
full-tile-only code.
A vector transfer read has semantics similar to a vector load, with additional
support for:
1. an optional value of the elemental type of the MemRef. This value
supports non-effecting padding and is inserted in places where the
vector read exceeds the MemRef bounds. If the value is not specified,
the access is statically guaranteed to be within bounds;
2. an attribute of type AffineMap to specify a slice of the original
MemRef access and its transposition into the super-vector shape. The
permutation_map is an unbounded AffineMap that must represent a
permutation from the MemRef dim space projected onto the vector dim
space.
Example:
```mlir
%src = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
...
%val = `ssa-value` : f32
// let %i, %j, %k, %l be ssa-values of type index
%v0 = vector_transfer_read %src, %i, %j, %k, %l
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(memref<?x?x?x?xf32>, index, index, index, index) ->
vector<16x32x64xf32>
%v1 = vector_transfer_read %src, %i, %j, %k, %l, %val
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(memref<?x?x?x?xf32>, index, index, index, index, f32) ->
vector<16x32x64xf32>
```
VectorTransferWriteOp
=====================
VectorTransferWriteOp performs a blocking write from a super-vector to
a scalar memref of the same elemental type. The operation is called 'write',
as opposed to 'store', because the super-vector granularity is generally not
representable with a single hardware register. As a consequence, memory
transfers will generally be required when lowering a VectorTransferWriteOp.
A VectorTransferWriteOp is thus a mid-level abstraction that supports
super-vectorization with non-effecting padding for full-tile-only code.
A vector transfer write has semantics similar to a vector store, with
additional support for handling out-of-bounds situations.
Example:
```mlir
%src = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
%val = `ssa-value` : vector<16x32x64xf32>
// let %i, %j, %k, %l be ssa-values of type index
vector_transfer_write %val, %src, %i, %j, %k, %l
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(vector<16x32x64xf32>, memref<?x?x?x?xf32>, index, index, index, index)
```
PiperOrigin-RevId: 223873234
The semantics of 'select' is conventional: return the second operand if the
first operand is true (1 : i1) and the third operand otherwise. It applies
to vectors and tensors element-wise, similarly to the LLVM instruction. This
operation is necessary to implement the min/max needed to lower 'for' loops
with complex bounds to CFG functions and to support ternary operations in ML
functions. It is preferred to first-class min/max because of its simplicity,
e.g. it is not concerned with signedness.
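For example, a min can be sketched with cmpi + select (predicate spelling assumed):
```mlir
%cond = cmpi "slt", %a, %b : i32
%min  = select %cond, %a, %b : i32
```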
PiperOrigin-RevId: 223160860
This does create an inconsistency between the print formats (e.g., attributes normally come before operands), but it fixes invalid parsing and keeps constant uniform with respect to itself (function and integer attributes have their type in the same place). Additionally, the way a specific type is given for an int/float attribute might be revised shortly.
Also add test to verify that output printed can be parsed again.
PiperOrigin-RevId: 221923893
* Optionally attach the type of integer and floating point attributes to the attributes; this allows restricting an int/float attribute to a specific width (see the sketch after this list).
- Currently this allows suffixing an int/float constant with a type [this might be revised in the future].
- Default to i64 and f32 if not specified.
* For index types the APInt width used is 64.
* Change callers to request a specific attribute type.
* Store iN type with APInt of width N.
* This change does not handle folding of constants of different types (e.g., doing int type promotions to support constant folding of i3 and i32); instead it restricts constant folding to operate only on identical types.
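A sketch of the resulting syntax, assuming the custom form of constant:
```mlir
%a = constant 42 : i8    // attribute stored as APInt of width 8
%b = constant 42         // defaults to i64
%c = constant 3.0 : f64
%d = constant 3.0        // defaults to f32
```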
PiperOrigin-RevId: 221722699
It is unclear why vector types were not allowed to have "index" as their
element type. Index values are integers, albeit of unknown bit width, and
should behave as such. Vectors of integers are allowed, and so are tensors
of indices (for indirection purposes); it is more consistent to also allow
vectors of indices.
PiperOrigin-RevId: 220630123
Arithmetic and comparison instructions are necessary to implement, e.g.,
control flow when lowering MLFunctions to CFGFunctions. (While it is possible
to replace some of the arithmetic by affine_apply instructions for loop
bounds, arithmetic is still necessary for loop bound checking, steps,
if-conditions, non-trivial memref subscripts, etc.) Furthermore, working with
indirect accesses in, e.g., lookup tables for large embeddings, may require
operating on tensors of indices. For example, the equivalents of the C code
"LUT[Index[i]]" or "ResultIndex[i] = i + j", where i and j are loop induction
variables, require arithmetic on indices as well as the ability to operate on
tensors thereof. Allow arithmetic and comparison operations to apply to index
types by declaring them integer-like. Allow tensors whose element type is
index, for indirection purposes.
The absence of vectors with "index" element type is explicitly tested, but the
only justification for this restriction in the CL introducing the test is
"because we don't need them". Do NOT enable vectors of index types here, even
though this leaves vector and tensor types inconsistent with respect to
allowed element types.
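A sketch of what the change enables (op spellings and predicate syntax assumed):
```mlir
%sum  = addi %i, %j : index               // arithmetic on index values
%cond = cmpi "slt", %i, %j : index        // comparison on index values
%t    = addi %a, %b : tensor<128 x index> // tensors of index for indirection
```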
PiperOrigin-RevId: 220614055
This binary operation is applicable to integers, vectors and tensors thereof
similarly to binary arithmetic operations. The operand types must match
exactly, and the shape of the result type is the same as that of the operands.
The element type of the result is always i1. The kind of comparison is
defined by the "predicate" integer attribute, which requests one of:
- equal;
- not equal;
- signed less than;
- signed less than or equal;
- signed greater than;
- signed greater than or equal;
- unsigned less than;
- unsigned less than or equal;
- unsigned greater than;
- unsigned greater than or equal.
Since integer values themselves do not have a sign, the comparison operator
specifies whether to use signed or unsigned comparison logic, i.e. whether to
interpret values whose most significant bit is set as negative numbers in
two's complement form or as positive values. For non-scalar operands,
pairwise per-element comparison is performed. Comparison operators on scalars
are necessary to implement basic control flow with conditional branches.
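A sketch (predicate spellings assumed):
```mlir
%p = cmpi "sge", %a, %b : i32               // signed >=, result is i1
%q = cmpi "ult", %v, %w : vector<4 x i32>   // element-wise, result is vector<4 x i1>
```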
PiperOrigin-RevId: 220613566
- Added a mechanism for specifying pattern matching more concisely, similar to LLVM's.
- Added support for canonicalization of addi/muli over vector/tensor splat
- Added ValueType to Attribute class hierarchy
- Allowed creating constant splat
PiperOrigin-RevId: 219149621
"shape_cast" only applies to tensors, and there are other operations that
actually affect shape, for example "reshape". Rename "shape_cast" to
"tensor_cast" in both the code and the documentation.
PiperOrigin-RevId: 218528122
1) affineint (as it is named) is not a type suitable for general computation (e.g. the multiplies/adds in an integer matmul). It has undefined width and is undefined on overflow. affineint values are used as the indices of a forstmt because they are intended to be used as indexes inside the loop.
2) It can be used in both cfg and ml functions. As you mention, "symbols" are not affine, and we use affineint values for symbols.
3) Integers aren't affine; the algorithms applied to them can be. :)
4) The only suitable use for affineint in MLIR is for indexes and dimension sizes (i.e. the bounds of those indexes).
PiperOrigin-RevId: 216057974
This CL adds support for `mulf`, which is necessary to write/emit a simple
scalar matmul in MLIR. This CL does not consider automation of op generation,
but mulf is important and useful enough to be added on its own at the moment.
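A sketch of its use in a matmul inner loop (value names hypothetical):
```mlir
%prod = mulf %lhs, %rhs : f32
%acc1 = addf %acc0, %prod : f32
```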
PiperOrigin-RevId: 214496098
Implement a simple constant folding optimization pass:
- Give the ability for operations to implement a constantFold hook (a simple
one for single-result ops as well as general support for multi-result ops).
- Implement folding support for constant and addf.
- Implement support in AbstractOperation and Operation to make this usable by
clients.
- Implement a very simple constant folding pass that does top down folding on
CFG and ML functions, with a testcase that exercises all the above stuff.
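A sketch of the folding on IR (constant syntax assumed):
```mlir
// Before:
%c1 = constant 1.0 : f32
%c2 = constant 2.0 : f32
%s  = addf %c1, %c2 : f32
// After folding, uses of %s are replaced with uses of:
%c3 = constant 3.0 : f32
```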
Random cleanups:
- Improve the build APIs for ConstantOp.
- Stop passing "-o -" to mlir-opt in the testsuite, since that is the default.
PiperOrigin-RevId: 213749809
Introduce a new VectorOrTensorType class that provides a common interface
between vector and tensor, since a number of operations will be uniform
across them (including extract_element). Improve the LoadOp verifier.
The MLIR spec doc is updated as well.
PiperOrigin-RevId: 209953189
Add function attribute resolver support.
Still TODO are verifier support (to make sure you don't use an attribute for a
function in another module) and the TODO in ModuleParser::finalizeModule that I
will handle in the next patch.
PiperOrigin-RevId: 209361648
Prior to this CL, the return statement had no explicit representation in MLIR. Now it is represented as a ReturnOp standard operation and is pretty-printed according to the return statement syntax. This way, statement walkers can process ML function return operands without special-casing them.
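A sketch (ML function syntax of the time assumed):
```mlir
mlfunc @identity(%x : i32) -> i32 {
  return %x : i32   // now a ReturnOp, printed with return-statement syntax
}
```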
PiperOrigin-RevId: 208092424
Generalize the asmprinter's handling of pretty names to allow arbitrary sugar
to be dumped on various constructs. Give CFG function arguments nice "arg0"
names like MLFunctions get, and give constant integers pretty names like %c37
for the constant 37.
PiperOrigin-RevId: 206953080
This is still (intentionally) generating redundant parens for nested tightly
binding expressions, but I think that is reasonable for readability's sake.
This also prints x-y instead of x-(y*1).
PiperOrigin-RevId: 206847212
- Enhance memref type to allow omission of mappings and address
spaces, implying a default mapping (see the sketch below).
- Fix printing of function types to properly recurse with printType
so mappings are printed by name.
- Simplify parsing of AffineMaps a bit now that we have
isSymbolicOrConstant().
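A sketch of the shorthand, assuming the default is the identity mapping in memory space 0:
```mlir
// Shorthand with mapping and address space omitted:
memref<16x32xf32>
// Assumed equivalent fully spelled-out form:
memref<16x32xf32, (d0, d1) -> (d0, d1), 0>
```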
PiperOrigin-RevId: 206039755
This regresses parser error recovery in some cases (in invalid.mlir) which I'll
consider in a follow-up patch. The important thing in this patch is that the
parse methods in StandardOps.cpp are nice and simple.
PiperOrigin-RevId: 206023308