This attribute details the segment sizes for operand groups within the operation. This revision adds support for automatically populating this attribute in the declarative parser.
Differential Revision: https://reviews.llvm.org/D75315
Summary:
This adds an rsqrt op to the standard dialect, and lowers
it as 1 / sqrt to the LLVM dialect.
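For illustration, a minimal usage sketch (syntax assumed to mirror the other unary std ops, not taken from the patch):
%y = rsqrt %x : f32   // computed as 1.0 / sqrt(%x) after lowering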
Differential Revision: https://reviews.llvm.org/D75353
Summary:
The RFC for this op is here: https://llvm.discourse.group/t/rfc-add-std-atomic-rmw-op/489
The std.atomic_rmw op provides a way to support read-modify-write
sequences with data race freedom. It is intended to be used in the lowering
of an upcoming affine.atomic_rmw op which can be used for reductions.
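For illustration, a hedged usage sketch (operand order and result type follow the RFC; treat as approximate):
%x = atomic_rmw "addf" %value, %A[%i] : (f32, memref<10xf32>) -> f32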
A lowering to LLVM is provided with 2 paths:
- Simple patterns: llvm.atomicrmw
- Everything else: llvm.cmpxchg
Differential Revision: https://reviews.llvm.org/D74401
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflicts with function types, making it currently impossible to round-trip function types (and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
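For illustration, the wrapped forms would look like (the maps and sets here are arbitrary examples):
affine_map<(d0, d1) -> (d0 + d1, d1)>
affine_set<(d0) : (d0 - 10 >= 0)>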
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D72429
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.
Also add 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').
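For illustration, hedged usage sketches (assuming the ops print like the other binary std ops):
%0 = shift_left %a, %b : i32
%1 = shift_right_signed %a, %b : i32
%2 = shift_right_unsigned %a, %b : i32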
The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to be to
omit dots in standard operation names for now.
Closes tensorflow/mlir#226
PiperOrigin-RevId: 286803388
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load, but it has no effect on
semantics, only on performance.
Provide lowering through std.prefetch and llvm.prefetch, mapping to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, read/write, and instruction/data cache.
affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#225
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
memref_cast supports casting from static-shape to dynamic-shape
memrefs. The same should be true for strides as well, i.e. a memref
with static strides can be cast to a memref with dynamic strides.
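For illustration, a hedged sketch of such a cast (the strided-memref shorthand used here is an assumption about the syntax in use):
%1 = memref_cast %0 : memref<16x16xf32, offset: 0, strides: [16, 1]>
                   to memref<16x16xf32, offset: ?, strides: [?, ?]>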
PiperOrigin-RevId: 282381862
The current SubViewOp specification allows for either all offsets,
shape and stride to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of offsets, shapes and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.
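Illustrative only, with approximated syntax (operand grouping and type notation are assumptions, not taken from the change): dynamic offsets and strides combined with static sizes would verify against a statically shaped result, e.g.
%1 = subview %0[%i, %j][][%s0, %s1]
  : memref<?x?xf32, offset: ?, strides: [?, 1]> to memref<4x4xf32, offset: ?, strides: [?, ?]>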
PiperOrigin-RevId: 281560457
This CL moves VectorOps to Tablegen and cleans up the implementation.
This is almost NFC but 2 changes occur:
1. an interface change occurs in the padding value specification in vector_transfer_read:
the value becomes non-optional. As a shortcut we currently use %f0 for all paddings.
This should become an OpInterface for vectorization in the future.
2. the return type of vector.type_cast is trivial and simplified to `memref<vector<...>>`
Relevant roundtrip and invalid tests that used to sit in core are moved to the vector dialect.
The op documentation is moved to the .td file.
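For illustration, a hedged sketch of the simplified vector.type_cast result type mentioned in point 2 (shapes are arbitrary):
%vA = vector.type_cast %A : memref<8x8xf32> to memref<vector<8x8xf32>>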
PiperOrigin-RevId: 280430869
This is a companion operation to the std.view operation, added as proposed in the "Updates to the MLIR MemRefType" RFC.
PiperOrigin-RevId: 279766410
- also remove stale terminology/references in docs
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#148
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
PiperOrigin-RevId: 271618279
- introduce splat op in standard dialect (currently for int/float/index input
type, output type can be vector or statically shaped tensor)
- implement LLVM lowering (when result type is 1-d vector)
- add constant folding hook for it
- while on Ops.cpp, fix some stale names
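For illustration, hedged sketches of the splat op described above (syntax assumed; only the 1-d vector form has an LLVM lowering per the notes above):
%v = splat %s : vector<8xf32>
%t = splat %s : tensor<4x4xf32>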
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#141
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/141 from bondhugula:splat 48976a6aa0a75be6d91187db6418de989e03eb51
PiperOrigin-RevId: 270965304
This adds sign- and zero-extension and truncation of integer types to the
standard dialect. This makes it possible to perform integer type conversions
without having to go to the LLVM dialect and introduce custom type casts
(between standard and LLVM integer types).
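For illustration, hedged sketches (the op names sexti/zexti/trunci are inferred from the pull request branch name, not stated in this message):
%1 = sexti %0 : i8 to i32   // sign-extension
%2 = zexti %0 : i8 to i32   // zero-extension
%3 = trunci %1 : i32 to i8  // truncation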
Closes tensorflow/mlir#134
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/134 from ombre5733:sext-zext-trunc-in-std c7657bc84c0ca66b304e53ec03797e09152e4d31
PiperOrigin-RevId: 270479722
Verification complained when using zero-dimensional memrefs in
affine.load, affine.store, std.load and std.store. This PR extends
verification so that those memrefs can be used.
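For illustration, a hedged sketch of a zero-dimensional access (empty subscript list):
%v = load %m[] : memref<f32>
store %v, %m[] : memref<f32>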
Closes tensorflow/mlir#58
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/58 from dcaballe:dcaballe/zero-dim 49bcdcd45c52c48beca776431328e5ce551dfa9e
PiperOrigin-RevId: 262164916
Conversion from integers (window or input size, padding, etc.) to floating point is required to express many ML kernels, for example average pooling.
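The op itself is not named in this message; assuming it is a signed-integer-to-float cast in the std dialect, a hedged sketch:
%f = sitofp %i : i32 to f32   // op name assumed, not confirmed by this message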
PiperOrigin-RevId: 259575284
These ops should not belong to the std dialect.
This CL extracts them into their own dialect and updates the corresponding conversions and tests.
PiperOrigin-RevId: 258123853
Change the AsmPrinter to number values breadth-first so that values in adjacent regions can have the same name. This allows for ModuleOp to contain operations that produce results. This also standardizes the special name of region entry arguments to "arg[0-9]+" now that Functions are also operations.
PiperOrigin-RevId: 257225069
This CL adds an "std.if" op to represent an if-then-else construct whose condition is an arbitrary value of type i1.
This is necessary to lower all the existing examples from affine and linalg to std.for + std.if.
This CL introduces the op and adds the relevant positive and negative unit test. Lowering will be done in a separate followup CL.
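For illustration, a hedged sketch of the construct (printed form approximated; region bodies elided):
if %cond {
  ...
} else {
  ...
}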
PiperOrigin-RevId: 256649138
This CL is the first step of a refactoring unification of the control flow abstraction used in different dialects. The `std.for` loop accepts unrestricted indices to encode min, max and step and will be used as a common abstraction on the way to lower level dialects.
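For illustration, a hedged sketch (printed form approximated; body elided):
for %iv = %lb to %ub step %step {
  ...
}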
PiperOrigin-RevId: 256331795
The current syntax separates the name and value with ':', but ':' is already overloaded by several other things (e.g. trailing types). This makes the syntax difficult to parse in some situations:
Old:
"foo: 10 : i32"
New:
"foo = 10 : i32"
PiperOrigin-RevId: 255097928
This is the standard syntax for types on operations, and is also already used by IntegerAttr and FloatAttr.
Example:
dense<5> : tensor<i32>
dense<[3]> : tensor<1xi32>
PiperOrigin-RevId: 255069157
Index types are integers of platform-specific bit width. They are used to index
memrefs and as loop induction variables; however, they could not be obtained
from an integer until now, making it virtually impossible to express indirect
accesses (given that memrefs of indices are not allowed) or data-dependent
loops. Introduce `std.index_cast` to transform indices into integers and vice
versa. The semantics of this cast is to sign-extend when casting to a wider
integer, and to truncate when casting to a narrower integer. It belongs to
StandardOps because both types it operates on are standard types, and because
its results are likely to be used in std.load and std.store.
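For illustration, hedged sketches of the cast in both directions (bit widths are arbitrary):
%i = index_cast %arg0 : i32 to index
%n = index_cast %arg1 : index to i64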
Introduce llvm.sext, llvm.zext and llvm.trunc operations to the LLVM dialect.
Provide the conversion of `std.index_cast` to llvm.sext or llvm.trunc,
depending on the actual bitwidth of `index` known during the conversion.
PiperOrigin-RevId: 253624100
* There is no longer a need to explicitly remap function attrs.
- This removes a potentially expensive call from the destructor of Function.
- This will enable some interprocedural transformations to now run intraprocedurally.
- This wasn't scalable and forced dialect-defined attributes to override
a virtual function.
* Replacing a function is now a trivial operation.
* This is a necessary first step to representing functions as operations.
--
PiperOrigin-RevId: 249510802
This closely mirrors the LLVM fcmp instruction, defining 16 different predicates.
Constant folding is unsupported for NaN and Inf because there's no way to represent those as constants at the moment.
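The op is not named in this message; assuming it is the std floating-point comparison op with quoted predicates, a hedged sketch:
%r = cmpf "oeq", %a, %b : f32   // "oeq" = ordered and equal; op name assumed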
--
PiperOrigin-RevId: 246932358
This adds parsing, printing and some folding/canonicalization.
Also extends rewriting of subi %0, %0 to handle vectors and tensors.
--
PiperOrigin-RevId: 242448164
Add support for lowering DivF and RemF to LLVM::FDiv and LLVM::FRem
respectively. The lowering is a trivial one-to-one transformation.
The corresponding operations already existed in the LLVM IR dialect and can be
lowered to the LLVM IR proper. Add the necessary tests for scalar and vector
forms.
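For illustration, hedged sketches of the std-level ops being lowered (scalar and vector forms; names follow the existing std floating-point ops):
%0 = divf %a, %b : f32
%1 = remf %v, %w : vector<4xf32>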
PiperOrigin-RevId: 234984608