This attribute details the segment sizes for operand groups within the operation. This revision adds support for automatically populating this attribute in the declarative parser.
Differential Revision: https://reviews.llvm.org/D75315
Summary:
This adds an rsqrt op to the standard dialect, and lowers
it as 1 / sqrt to the LLVM dialect.
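A minimal sketch in MLIR assembly (operand names are illustrative):
```mlir
// rsqrt of a scalar f32, lowered to the LLVM dialect as 1.0 / sqrt(%x).
%y = rsqrt %x : f32
```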
Differential Revision: https://reviews.llvm.org/D75353
Summary:
The RFC for this op is here: https://llvm.discourse.group/t/rfc-add-std-atomic-rmw-op/489
The std.atomic_rmw op provides a way to support read-modify-write
sequences with data race freedom. It is intended to be used in the lowering
of an upcoming affine.atomic_rmw op which can be used for reductions.
A lowering to LLVM is provided with 2 paths:
- Simple patterns: llvm.atomicrmw
- Everything else: llvm.cmpxchg
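A sketch of the op as proposed (operand names and the memref shape are illustrative):
```mlir
// Atomically add %value to %mem[%i] and yield the old value. Simple kinds
// such as "addf" map to llvm.atomicrmw; the rest use an llvm.cmpxchg loop.
%old = atomic_rmw "addf" %value, %mem[%i] : (f32, memref<10xf32>) -> f32
```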
Differential Revision: https://reviews.llvm.org/D74401
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflicts with function types, making it currently impossible to round-trip function types (and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
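For example (a sketch of the syntax change):
```mlir
// Old: #map = (d0) -> (d0 + 1)
// New: #map = affine_map<(d0) -> (d0 + 1)>
// Similarly for integer sets: #set = affine_set<(d0) : (d0 - 2 >= 0)>
```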
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D72429
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.
Add also 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').
The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to be
to omit dots in standard operations for now.
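A sketch of the renamed ops and their LLVM targets (operands illustrative):
```mlir
%0 = shift_left %a, %b : i32            // lowers to llvm.shl
%1 = shift_right_signed %a, %b : i32    // lowers to llvm.ashr
%2 = shift_right_unsigned %a, %b : i32  // lowers to llvm.lshr
```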
Closes tensorflow/mlir#226
PiperOrigin-RevId: 286803388
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load, but it has no effect on
semantics, only on performance.
Provide lowering through std.prefetch and llvm.prefetch, mapping to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, read/write, and instruction/data cache.
affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#225
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
The memref_cast op supports casting from static-shape to dynamic-shape
memrefs. The same should be true for strides as well, i.e. a memref
with static strides can be cast to a memref with dynamic strides.
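A hedged sketch, assuming the strided-memref syntax of the time:
```mlir
// Cast a memref with static offset/strides to one with dynamic ones.
%1 = memref_cast %0 : memref<12x4xf32, offset: 5, strides: [4, 1]>
  to memref<12x4xf32, offset: ?, strides: [?, ?]>
```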
PiperOrigin-RevId: 282381862
The current SubViewOp specification allows either all of the offsets,
sizes, and strides to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of the offsets, sizes, and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.
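A rough sketch of a mixed case (syntax approximate; the empty bracket group
stands for an operand list whose values are statically encoded in the result
type, and #strided is a hypothetical strided layout):
```mlir
// Dynamic offsets and strides, static sizes reflected in the result type.
%1 = subview %0[%i, %j][][%s0, %s1]
  : memref<8x16xf32> to memref<4x4xf32, #strided>
```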
PiperOrigin-RevId: 281560457
This CL moves VectorOps to Tablegen and cleans up the implementation.
This is almost NFC but 2 changes occur:
1. an interface change occurs in the padding value specification in vector_transfer_read:
the value becomes non-optional. As a shortcut we currently use %f0 for all paddings.
This should become an OpInterface for vectorization in the future.
2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>`
Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect.
The op documentation is moved to the .td file.
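A sketch of the simplified result type (shapes illustrative):
```mlir
// The result is now simply a memref wrapping the vector type.
%vm = vector.type_cast %m : memref<8x8xf32> to memref<vector<8x8xf32>>
```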
PiperOrigin-RevId: 280430869
This operation is a companion operation to the std.view operation added as proposed in "Updates to the MLIR MemRefType" RFC.
PiperOrigin-RevId: 279766410
- also remove stale terminology/references in docs
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#148
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
PiperOrigin-RevId: 271618279
- introduce splat op in standard dialect (currently for int/float/index input
type, output type can be vector or statically shaped tensor)
- implement LLVM lowering (when result type is 1-d vector)
- add constant folding hook for it
- while on Ops.cpp, fix some stale names
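A sketch of the new op (types illustrative):
```mlir
// Broadcast a scalar into a vector or a statically shaped tensor.
%v = splat %f : vector<4xf32>
%t = splat %i : tensor<3x4xi32>
```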
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#141
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
PiperOrigin-RevId: 270965304
This adds sign- and zero-extension and truncation of integer types to the
standard dialects. This allows to perform integer type conversions without
having to go to the LLVM dialect and introduce custom type casts (between
standard and LLVM integer types).
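A sketch, assuming the op names introduced here are sexti, zexti, and trunci:
```mlir
%0 = sexti %a : i16 to i32   // sign-extend
%1 = zexti %a : i16 to i32   // zero-extend
%2 = trunci %b : i32 to i8   // truncate
```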
Closes tensorflow/mlir#134
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
PiperOrigin-RevId: 270479722
Verification complained when using zero-dimensional memrefs in
affine.load, affine.store, std.load and std.store. This PR extends
verification so that those memrefs can be used.
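A sketch of code that now passes verification (names illustrative):
```mlir
%m = alloc() : memref<f32>      // zero-dimensional memref
%v = load %m[] : memref<f32>    // empty subscript list
store %v, %m[] : memref<f32>
```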
Closes tensorflow/mlir#58
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/58 from dcaballe:dcaballe/zero-dim 49bcdcd45c52c48beca776431328e5ce551dfa9e
PiperOrigin-RevId: 262164916
Conversion from integers (window or input size, padding, etc.) to floating point is required to express many ML kernels, for example average pooling.
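A hedged sketch, assuming the conversion op is named sitofp as in LLVM:
```mlir
// Convert an integer window size to floating point for the averaging divide.
%f = sitofp %count : i32 to f32
```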
PiperOrigin-RevId: 259575284
These ops should not belong to the std dialect.
This CL extracts them in their own dialect and updates the corresponding conversions and tests.
PiperOrigin-RevId: 258123853
Change the AsmPrinter to number values breadth-first so that values in adjacent regions can have the same name. This allows for ModuleOp to contain operations that produce results. This also standardizes the special name of region entry arguments to "arg[0-9+]" now that Functions are also operations.
PiperOrigin-RevId: 257225069
This CL adds an "std.if" op to represent an if-then-else construct whose condition is an arbitrary value of type i1.
This is necessary to lower all the existing examples from affine and linalg to std.for + std.if.
This CL introduces the op and adds the relevant positive and negative unit test. Lowering will be done in a separate followup CL.
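A sketch of the intended form (the condition is an arbitrary i1 value; the
branch bodies are illustrative):
```mlir
if %cond {
  call @then_path() : () -> ()
} else {
  call @else_path() : () -> ()
}
```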
PiperOrigin-RevId: 256649138
This CL is the first step of a refactoring unification of the control flow abstraction used in different dialects. The `std.for` loop accepts unrestricted indices to encode min, max and step and will be used as a common abstraction on the way to lower level dialects.
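A sketch (bounds and step are unrestricted SSA values of index type; names
illustrative):
```mlir
for %i = %lb to %ub step %s {
  // loop body
}
```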
PiperOrigin-RevId: 256331795
The current syntax separates the name and value with ':', but ':' is already overloaded by several other things (e.g. trailing types). This makes the syntax difficult to parse in some situations:
Old:
"foo: 10 : i32"
New:
"foo = 10 : i32"
PiperOrigin-RevId: 255097928
This is the standard syntax for types on operations, and is also already used by IntegerAttr and FloatAttr.
Example:
dense<5> : tensor<i32>
dense<[3]> : tensor<1xi32>
PiperOrigin-RevId: 255069157
Index types are integers of platform-specific bit width. They are used to index
memrefs and as loop induction variables, however they could not be obtained
from an integer until now, making it virtually impossible to express indirect
accesses (given that memrefs of indices are not allowed) or data-dependent
loops. Introduce `std.index_cast` to transform indices into integers and vice
versa. The semantics of this cast is to sign-extend when casting to a wider
integer, and to truncate when casting to a narrower integer. It belongs to
StandardOps because both types it operates on are standard types, and because
its results are likely to be used in std.load and std.store.
Introduce llvm.sext, llvm.zext and llvm.trunc operations to the LLVM dialect.
Provide the conversion of `std.index_cast` to llvm.sext or llvm.trunc,
depending on the actual bitwidth of `index` known during the conversion.
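A sketch of the cast in both directions (types illustrative):
```mlir
%i = index_cast %idx : index to i64   // sign-extends or truncates per the target index width
%j = index_cast %int : i64 to index
```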
PiperOrigin-RevId: 253624100
* There is no longer a need to explicitly remap function attrs.
- This removes a potentially expensive call from the destructor of Function.
- This will enable some interprocedural transformations to now run intraprocedurally.
- This wasn't scalable and forced dialect-defined attributes to override
a virtual function.
* Replacing a function is now a trivial operation.
* This is a necessary first step to representing functions as operations.
--
PiperOrigin-RevId: 249510802
This closely mirrors the LLVM fcmp instruction, defining 16 different predicates.
Constant folding is unsupported for NaN and Inf because there is no way to represent those as constants at the moment.
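A sketch of the op (predicate spellings follow LLVM's fcmp):
```mlir
// Ordered floating-point equality comparison; result type is i1.
%r = cmpf "oeq", %a, %b : f32
```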
--
PiperOrigin-RevId: 246932358
This adds parsing, printing and some folding/canonicalization.
Also extends rewriting of subi %0, %0 to handle vectors and tensors.
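A sketch of the extended rewrite (shape illustrative):
```mlir
// subi %x, %x now folds to a zero constant for vectors and tensors too.
%z = subi %v, %v : vector<4xi32>   // folds to a dense zero constant
```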
--
PiperOrigin-RevId: 242448164
Add support for lowering DivF and RemF to LLVM::FDiv and LLVM::FRem
respectively. The lowering is a trivial one-to-one transformation.
The corresponding operations already existed in the LLVM IR dialect and can be
lowered to the LLVM IR proper. Add the necessary tests for scalar and vector
forms.
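A sketch of the ops and their LLVM dialect targets:
```mlir
%q = divf %a, %b : f32             // -> llvm.fdiv
%r = remf %a, %b : f32             // -> llvm.frem
%v = divf %x, %y : vector<4xf32>   // vector form maps one-to-one as well
```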
PiperOrigin-RevId: 234984608
Existing type syntax contains the following productions:
function-type ::= type-list-parens `->` type-list
type-list ::= type | type-list-parens
type ::= <..> | function-type
Due to these rules, when the parser sees `->` followed by `(`, it cannot
disambiguate if `(` starts a parenthesized list of function result types, or a
parenthesized list of operands of another function type, returned from the
current function. We would need an unknown amount of lookahead to try to find
the `->` at the right level of function nesting to differentiate between type
lists and singular function types.
Instead, require the result type of a function that is itself a function type
to be always parenthesized, at the syntax level. Update the spec and the
parser to correspond to the production rule names used in the spec (although it
would have worked without modifications). Fix the function type parsing bug in
the process, as it used to accept the non-parenthesized list of types for
arguments, disallowed by the spec.
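An illustration of the fix (a hedged sketch; @f is hypothetical):
```mlir
// Ambiguous before: (i32) -> (i32) -> i32
// A function-typed result must now be parenthesized:
func @f(%x: i32) -> ((i32) -> i32)
```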
PiperOrigin-RevId: 232528361
The generic form may be more desirable even when there is a custom form
specified, so add an option to enable emitting it. This also exposes a current
bug when round-tripping a constant with a function attribute.
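For example (sketch), the same op in custom and generic forms:
```mlir
%0 = addi %a, %b : i32                       // custom form
%0 = "std.addi"(%a, %b) : (i32, i32) -> i32  // generic form
```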
PiperOrigin-RevId: 232350712
This adds signed/unsigned integer division and remainder operations to the
StandardOps dialect. Two versions are required because MLIR integers are
signless, but the meaning of the leading bit is important in division and
affects the results. LLVM IR made a similar choice. Define the operations in
the tablegen file and add simple constant folding hooks in the C++
implementation. Handle signed division overflow and division by zero errors in
constant folding. Canonicalization is left for future work.
These operations are necessary to lower affine_apply's down to LLVM IR.
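A sketch, assuming the op names introduced here are divis/diviu/remis/remiu:
```mlir
%0 = divis %a, %b : i32   // signed division
%1 = diviu %a, %b : i32   // unsigned division
%2 = remis %a, %b : i32   // signed remainder
%3 = remiu %a, %b : i32   // unsigned remainder
```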
PiperOrigin-RevId: 228077549
The entire compiler now looks at structural properties of the function (e.g.
does it have one block, does it contain an if/for stmt, etc) so the only thing
holding up this difference is round tripping through the parser/printer syntax.
Removing this shrinks the compiler by ~140 LOC.
This is step 31/n towards merging instructions and statements. The last step
is updating the docs, which I will do as a separate patch in order to split it
from this mostly mechanical patch.
PiperOrigin-RevId: 227540453
Start printing the entry block in a CFG function's argument line. Since I'm touching
all of the testcases anyway, change the argument list from printing as
"%arg : type" to "%arg: type" which is more consistent with bb arguments.
In addition to being more consistent, this is a much nicer look for cfg functions.
PiperOrigin-RevId: 227240069
An extensive discussion demonstrated that it is difficult to support `index`
types as elements of compound (vector, memref, tensor) types. In particular,
their size is unknown until the target-specific lowering takes place. MLIR may
need to store constants of the fixed-shape compound types (e.g.,
vector<4 x index>) internally and must know the size of the element type and
data layout constraints. The same information is necessary for target-specific
lowering and translation to reliably support compound types with `index`
elements, but MLIR does not have a dedicated target description mechanism yet.
The use cases for compound types with `index` elements, should they appear,
can be handled via an `index_cast` operation that converts between `index` and
fixed-size integer types at the SSA value level instead of the type level.
PiperOrigin-RevId: 225064373
This CL implements and uses VectorTransferOps in lieu of the former custom
call op. Tests are updated accordingly.
VectorTransferOps come in 2 flavors: VectorTransferReadOp and
VectorTransferWriteOp.
VectorTransferOps can be thought of as a backend-independent
pseudo op/library call that needs to be legalized to MLIR (whiteboxed) before
it can be lowered to backend-dependent IR.
Note that the current implementation does not yet support a real permutation
map. Proper support will come in a followup CL.
VectorTransferReadOp
====================
VectorTransferReadOp performs a blocking read from a scalar memref
location into a super-vector of the same elemental type. This operation is
called 'read', as opposed to 'load', because the super-vector granularity
is generally not representable with a single hardware register. As a
consequence, memory transfers will generally be required when lowering
VectorTransferReadOp. A VectorTransferReadOp is thus a mid-level abstraction
that supports super-vectorization with non-effecting padding for full-tile
only code.
A vector transfer read has semantics similar to a vector load, with additional
support for:
1. an optional value of the elemental type of the MemRef. This value
supports non-effecting padding and is inserted in places where the
vector read exceeds the MemRef bounds. If the value is not specified,
the access is statically guaranteed to be within bounds;
2. an attribute of type AffineMap to specify a slice of the original
MemRef access and its transposition into the super-vector shape. The
permutation_map is an unbounded AffineMap that must represent a
permutation from the MemRef dim space projected onto the vector dim
space.
Example:
```mlir
%src = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
...
%val = `ssa-value` : f32
// let %i, %j, %k, %l be ssa-values of type index
%v0 = vector_transfer_read %src, %i, %j, %k, %l
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(memref<?x?x?x?xf32>, index, index, index, index) ->
vector<16x32x64xf32>
%v1 = vector_transfer_read %src, %i, %j, %k, %l, %val
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(memref<?x?x?x?xf32>, index, index, index, index, f32) ->
vector<16x32x64xf32>
```
VectorTransferWriteOp
=====================
VectorTransferWriteOp performs a blocking write from a super-vector to
a scalar memref of the same elemental type. This operation is
called 'write', as opposed to 'store', because the super-vector
granularity is generally not representable with a single hardware register. As
a consequence, memory transfers will generally be required when lowering
VectorTransferWriteOp. A VectorTransferWriteOp is thus a mid-level
abstraction that supports super-vectorization with non-effecting padding
for full-tile only code.
A vector transfer write has semantics similar to a vector store, with
additional support for handling out-of-bounds situations.
Example:
```mlir
%src = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
%val = `ssa-value` : vector<16x32x64xf32>
// let %i, %j, %k, %l be ssa-values of type index
vector_transfer_write %val, %src, %i, %j, %k, %l
{permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
(vector<16x32x64xf32>, memref<?x?x?x?xf32>, index, index, index, index)
```
PiperOrigin-RevId: 223873234
The semantics of 'select' is conventional: return the second operand if the
first operand is true (1 : i1) and the third operand otherwise. It is
applicable to vectors and tensors element-wise, similarly to the LLVM instruction.
This operation is necessary to implement min/max to lower 'for' loops with
complex bounds to CFG functions and to support ternary operations in ML
functions. It is preferred to first-class min/max because of its simplicity,
e.g. it is not concerned with signedness.
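A sketch of min implemented via cmpi + select (scalar form; element-wise for
vectors/tensors):
```mlir
%cond = cmpi "slt", %a, %b : i32
%min  = select %cond, %a, %b : i32
```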
PiperOrigin-RevId: 223160860
This does create an inconsistency between the print formats (e.g., attributes are normally before operands), but it fixes invalid parsing and keeps constants uniform with respect to themselves (function or int attributes have the type in the same place). And specifying the specific type for an int/float attribute might get revised shortly.
Also add test to verify that output printed can be parsed again.
PiperOrigin-RevId: 221923893
* Optionally attach the type of integer and floating point attributes to the attributes; this allows restricting an int/float to a specific width.
- Currently this allows suffixing an int/float constant with a type [this might be revised in future].
- Default to i64 and f32 if not specified.
* For index types the APInt width used is 64.
* Change callers to request a specific attribute type.
* Store iN type with APInt of width N.
* This change does not handle the folding of constants of different types (e.g., doing int type promotions to support constant folding i3 and i32), and instead restricts the constant folding to only operate on the same types.
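A sketch of typed constants under this change (syntax approximate for the time):
```mlir
%a = constant 10 : i8    // explicit i8 instead of the default i64
%b = constant 1.0 : f64  // explicit f64 instead of the default f32
```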
PiperOrigin-RevId: 221722699
It is unclear why vector types were not allowed to have "index" as element
type. Index values are integers, although of unknown bit width, and should
behave as such. Vectors of integers are allowed, and so are tensors of indices
(for indirection purposes), so it is more consistent to also have vectors of
indices.
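For example, a type like the following is now accepted (sketch):
```mlir
vector<4 x index>
```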
PiperOrigin-RevId: 220630123
Arithmetic and comparison instructions are necessary to implement, e.g.,
control flow when lowering MLFunctions to CFGFunctions. (While it is possible
to replace some of the arithmetic by affine_apply instructions for loop
bounds, it is still necessary for loop bound checks, steps, if-conditions,
non-trivial memref subscripts, etc.) Furthermore, working with indirect
accesses in, e.g., lookup tables for large embeddings, may require operating on
tensors of indices. For example, the equivalents of C code "LUT[Index[i]]" or
"ResultIndex[i] = i + j", where i, j are loop induction variables, require
arithmetic on indices as well as the possibility to operate on tensors
thereof. Allow arithmetic and comparison operations to apply to index types by
declaring them integer-like. Allow tensors whose element type is index for
indirection purposes.
The absence of vectors with "index" element type is explicitly tested, but the
only justification for this restriction in the CL introducing the test is
"because we don't need them". Do NOT enable vectors of index types, although
it makes vector and tensor types inconsistent with respect to allowed element
types.
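A sketch of the index arithmetic this enables (names illustrative):
```mlir
// "ResultIndex[i] = i + j"-style arithmetic on loop induction variables.
%k = addi %i, %j : index
%p = cmpi "slt", %k, %ub : index
```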
PiperOrigin-RevId: 220614055
This binary operation is applicable to integers, vectors and tensors thereof
similarly to binary arithmetic operations. The operand types must match
exactly, and the shape of the result type is the same as that of the operands.
The element type of the result is always i1. The kind of the comparison is
defined by the "predicate" integer attribute. This attribute requests one of:
- equal to;
- not equal to;
- signed less than;
- signed less than or equal;
- signed greater than;
- signed greater than or equal;
- unsigned less than;
- unsigned less than or equal;
- unsigned greater than;
- unsigned greater than or equal.
Since integer values themselves do not have a sign, the comparison operator
specifies whether to use signed or unsigned comparison logic, i.e. whether to
interpret values whose most significant bit is set as negatives in two's
complement or as positive values. For non-scalar operands, pairwise
per-element comparison is performed. Comparison operators on scalars are
necessary to implement basic control flow with conditional branches.
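A sketch of the scalar and element-wise forms:
```mlir
%r = cmpi "sge", %a, %b : i32           // scalar, result type i1
%v = cmpi "eq", %x, %y : vector<4xi32>  // element-wise, result vector<4xi1>
```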
PiperOrigin-RevId: 220613566
- Added a mechanism for specifying pattern matching more concisely like LLVM.
- Added support for canonicalization of addi/muli over vector/tensor splat
- Added ValueType to Attribute class hierarchy
- Allowed creating constant splat
PiperOrigin-RevId: 219149621
"shape_cast" only applies to tensors, and there are other operations that
actually affect shape, for example "reshape". Rename "shape_cast" to
"tensor_cast" in both the code and the documentation.
PiperOrigin-RevId: 218528122
1) affineint (as it is named) is not a type suitable for general computation (e.g. the multiplies/adds in an integer matmul). It has undefined width and is undefined on overflow. They are used as the indices for forstmt because they are intended to be used as indexes inside the loop.
2) It can be used in both cfg and ml functions. As you mention, "symbols" are not affine, and we use affineint values for symbols.
3) Integers aren't affine, the algorithms applied to them can be. :)
4) The only suitable use for affineint in MLIR is for indexes and dimension sizes (i.e. the bounds of those indexes).
PiperOrigin-RevId: 216057974
This CL adds support for `mulf` which is necessary to write/emit a simple scalar
matmul in MLIR. This CL does not consider automation of generation of ops but
mulf is important and useful enough to be added on its own atm.
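A sketch of its use in a scalar matmul inner loop (names illustrative):
```mlir
%p = mulf %a, %b : f32
%s = addf %acc, %p : f32
```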
PiperOrigin-RevId: 214496098
Add a constant folding optimization pass:
- Give the ability for operations to implement a constantFold hook (a simple
one for single-result ops as well as general support for multi-result ops).
- Implement folding support for constant and addf (see the sketch after these lists).
- Implement support in AbstractOperation and Operation to make this usable by
clients.
- Implement a very simple constant folding pass that does top down folding on
CFG and ML functions, with a testcase that exercises all the above stuff.
Random cleanups:
- Improve the build APIs for ConstantOp.
- Stop passing "-o -" to mlir-opt in the testsuite, since that is the default.
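A sketch of the folding in IR terms (syntax approximate for the time):
```mlir
// Before the pass:
%c1 = constant 1.0 : f32
%c2 = constant 2.0 : f32
%s  = addf %c1, %c2 : f32
// After top-down folding, the addf is replaced by a constant:
%s = constant 3.0 : f32
```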
PiperOrigin-RevId: 213749809
Introduce a new VectorOrTensorType class that provides a common interface between vector
and tensor since a number of operations will be uniform across them (including
extract_element). Improve the LoadOp verifier.
I updated the MLIR spec doc as well.
PiperOrigin-RevId: 209953189
Add function attribute resolver support.
Still TODO are verifier support (to make sure you don't use an attribute for a
function in another module) and the TODO in ModuleParser::finalizeModule that I
will handle in the next patch.
PiperOrigin-RevId: 209361648
Prior to this CL, the return statement had no explicit representation in MLIR. Now, it is represented as the ReturnOp standard operation and is pretty-printed according to the return statement syntax. This way, statement walkers can process ML function return operands without making a special case for them.
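A sketch (ML function syntax of the time approximated; @identity is hypothetical):
```mlir
mlfunc @identity(%x : i32) -> i32 {
  return %x : i32
}
```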
PiperOrigin-RevId: 208092424
Generalize the AsmPrinter's handling of pretty names to allow arbitrary sugar to
be dumped on various constructs. Give CFG function arguments nice "arg0" names
like MLFunctions get, and give constant integers pretty names like %c37 for a
constant 37.
PiperOrigin-RevId: 206953080
This is still (intentionally) generating redundant parens for nested tightly
binding expressions, but I think that is reasonable for readability sake.
This also prints x-y instead of x-(y*1).
PiperOrigin-RevId: 206847212
- Enhance memref type to allow omission of mappings and address
spaces (implying a default mapping); see the sketch after this list.
- Fix printing of function types to properly recurse with printType
so mappings are printed by name.
- Simplify parsing of AffineMaps a bit now that we have
isSymbolicOrConstant()
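A sketch of the shorthand:
```mlir
// Now valid, implying the default (identity) mapping and memory space 0:
memref<16x16xf32>
// Equivalent explicit form:
memref<16x16xf32, (d0, d1) -> (d0, d1), 0>
```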
PiperOrigin-RevId: 206039755
This regresses parser error recovery in some cases (in invalid.mlir) which I'll
consider in a follow-up patch. The important thing in this patch is that the
parse methods in StandardOps.cpp are nice and simple.
PiperOrigin-RevId: 206023308