With SCCP and integer range analysis ported to the new framework, this old framework is redundant. Delete it.
Depends on D128866
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D128867
Integer range inference has been swapped to the new framework. The integer value range lattice automatically updates the corresponding constant value on update.
Depends on D127173
Reviewed By: krzysz00, rriddle
Differential Revision: https://reviews.llvm.org/D128866
complex.angle corresponds to the arg function in libm. We can lower complex.angle to arg and argf.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D129341
This patch introduces an implementation of dense data-flow analysis. Dense
data-flow analysis attaches a lattice before and after the execution of every
operation. The lattice state is propagated across operations by a user-defined
transfer function. The state is joined across control-flow and callgraph edges.
The patch provides an example pass that uses both a dense and a sparse analysis
together.
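As a rough illustration of the dense propagation model, here is a minimal, self-contained sketch in plain C++ (not the MLIR API; the lattice, ops, and names below are made up for the example):

```cpp
#include <cassert>
#include <vector>

// Toy lattice: a fact "definitely holds" (Yes), "definitely does not" (No),
// or paths disagree (Maybe).
enum class Fact { No, Maybe, Yes };

// Join used where control flow (or the call graph) merges: disagreement
// collapses to Maybe.
Fact join(Fact a, Fact b) { return a == b ? a : Fact::Maybe; }

// Toy "operation": may establish or invalidate the fact.
struct Op { bool setsFact; bool clearsFact; };

// Transfer function: computes the lattice *after* an op from the lattice
// attached *before* it.
Fact transfer(const Op &op, Fact before) {
  if (op.setsFact) return Fact::Yes;
  if (op.clearsFact) return Fact::No;
  return before;
}

int main() {
  // Propagate through a straight-line block...
  std::vector<Op> block = {{true, false}, {false, false}};
  Fact state = Fact::No;
  for (const Op &op : block)
    state = transfer(op, state);
  // ...then join with the state flowing in from another predecessor.
  Fact merged = join(state, /*otherPredecessor=*/Fact::No);
  assert(merged == Fact::Maybe);
  return 0;
}
```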
Depends on D127139
Reviewed By: rriddle, phisiart
Differential Revision: https://reviews.llvm.org/D127173
This commit adds code completion results to the MLIR LSP using
a new code completion context in the MLIR parser. This commit
adds initial completion for dialect, operation, SSA value, and
block names.
Differential Revision: https://reviews.llvm.org/D129183
OpenCL's round function matches `math.round`, so we can lower directly to
the op; this includes adding the op definition to the SPIRV OCL ops.
GLSL does not guarantee rounding direction, so we include custom rounding
code to guarantee the correct rounding direction.
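For reference, one way to force round-half-away-from-zero when a target's native rounding direction is unspecified is to round the absolute value and reapply the sign; a sketch of that idea is below (illustrative only, not necessarily the exact expansion the conversion emits):

```cpp
#include <cassert>
#include <cmath>

// Round half away from zero, built only from floor/abs/copysign so the
// direction does not depend on the target's native round().
float roundHalfAwayFromZero(float x) {
  float magnitude = std::floor(std::fabs(x) + 0.5f);
  return std::copysign(magnitude, x);
}

int main() {
  assert(roundHalfAwayFromZero(2.5f) == 3.0f);
  assert(roundHalfAwayFromZero(-2.5f) == -3.0f);
  return 0;
}
```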
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D129236
Because the buffer descriptor structure (the V#) has no backwards-compatibility
guarantees, and since said guarantees have been violated in practice
(see https://github.com/llvm/llvm-project/issues/56323 ), and since
the `targetIsRDNA` attribute isn't something that higher-level clients can set
in general, make the lowering of the amdgpu dialect to rocdl take a --chipset
option.
Note that this option is a string because adding a parser for the Chipset
struct to llvm::cl wasn't working out.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D129228
This revision revisits the implementation of applyToOne and its handling
of recoverable errors as well as propagation of null handles.
The implementation is simplified to always require passing a vector<Operation*>
in which the results are returned, resulting in less template instantiation magic.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D129185
This Transform dialect op allows one to merge the lists of Payload IR
operations pointed to by several handles into a single list associated with one
handle. This is an important Transform dialect usability improvement for cases
where transformations may temporarily diverge for different groups of Payload
IR ops before converging back to the same script. Without this op, several
copies of the trailing transformations would have to be present in the
transformation script.
Depends On D129090
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129110
Introduce a new transformation on structured ops that splits the iteration
space into two parts along the specified dimension. The index at which the
splitting happens may be static or dynamic. This transformation can be seen as
a rudimentary form of index-set splitting that only supports the splitting
along hyperplanes parallel to the iteration space hyperplanes, and is therefore
decomposable into per-dimension application.
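As a plain-C++ sketch of the idea (the helper below is illustrative and is not the transform op's actual interface), splitting the iteration space at an index simply turns one loop over [0, size) into two loops over [0, splitPoint) and [splitPoint, size):

```cpp
#include <cstdio>
#include <functional>

// Index-set splitting along one dimension: the two halves keep the original
// index values (the second half is offset, not rebased to zero) and can then
// be scheduled independently.
void forEachSplit(int size, int splitPoint,
                  const std::function<void(int)> &body) {
  for (int i = 0; i < splitPoint; ++i)
    body(i); // first part of the iteration space
  for (int i = splitPoint; i < size; ++i)
    body(i); // second part
}

int main() {
  forEachSplit(/*size=*/10, /*splitPoint=*/6,
               [](int i) { std::printf("%d ", i); });
  std::printf("\n");
  return 0;
}
```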
It is a key low-level transformation that enables independent scheduling for
different parts of the iteration space of the same op, which hasn't been
possible previously. It may be used to implement, e.g., multi-sized tiling. In
future, peeling can be implemented as a combination of split-off amount
computation and splitting.
The transformation is conceptually close to tiling in its separation of the
iteration and data spaces, but cannot be currently implemented on top of
TilingInterface as the latter does not properly support `linalg.index`
offsetting.
Note that the transformation intentionally bypasses folding of
`tensor.extract_slice` operations when creating them as this folding was found
to prevent repeated splitting of the same operation due to internal
assumptions about the extract/insert_slice combination in dialect utilities.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129090
By making TypeInterfaces and AttrInterfaces Types and Attrs respectively, it then becomes possible to use them anywhere a Type or Attr may go, that is, within the arguments and results of an Op definition, in a RewritePattern, etc.
Prior to this change users had to separately define a Type or Attr, with a predicate to check whether a type or attribute implements a given interface. Such code will be redundant now.
Removing such occurrences in upstream dialects will be part of a separate patch.
As part of implementing this patch, slight refactoring had to be done. In particular, the Interface's cppClassName field was renamed to cppInterfaceName as it "clashed" with TypeConstraint's cppClassName: the Interface's cppClassName expected just the class name, without any namespaces, while TypeConstraint's cppClassName expected a fully qualified class name.
Differential Revision: https://reviews.llvm.org/D129209
This patch adds a conditional binary constant folder which allows bailing out when the constants do not meet the fold condition, and uses it for PowFOp so that it can fold dense constants.
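A generic sketch of the conditional-folding idea (plain C++ with made-up names; not the actual MLIR helper or its signature):

```cpp
#include <cmath>
#include <functional>
#include <optional>

// Try to fold a binary op over two constants, but only when `condition`
// accepts them; otherwise report "no fold" instead of producing a result.
template <typename T>
std::optional<T> constFoldBinaryIf(T lhs, T rhs,
                                   const std::function<bool(T, T)> &condition,
                                   const std::function<T(T, T)> &fold) {
  if (!condition(lhs, rhs))
    return std::nullopt; // constants do not meet the fold condition: bail out
  return fold(lhs, rhs);
}

int main() {
  // E.g. only fold a power when the exponent is a small non-negative value.
  std::optional<double> result = constFoldBinaryIf<double>(
      2.0, 10.0,
      [](double, double exp) { return exp >= 0.0 && exp < 64.0; },
      [](double base, double exp) { return std::pow(base, exp); });
  return result ? 0 : 1;
}
```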
Differential Revision: https://reviews.llvm.org/D129108
The revision makes a start on implementing expand/collapse reshaping
for sparse tensors. When either the source or the destination is sparse but
the other is dense, the "cheap" dense reshape can be used prior to converting
from or to a sparse tensor.
Note 1: sparse-to-sparse reshaping is still TBD.
Note 2: in the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D129031
This commit refactors the syntax of "ugly" attribute/type formats to not use
strings for wrapping. This means that moving forward, attribute and type formats
will always need to be in some recognizable form, i.e. if they use incompatible
characters they will need to manually wrap those in a string; the framework will
no longer do it automatically.
This has the benefit of greatly simplifying how parsing attributes/types works, given
that we currently rely on some extremely complicated nested parser logic which is
quite problematic for a myriad of reasons: unnecessary complexity (we create a nested
source manager/lexer/etc.), diagnostic locations that can be off/wrong given string
escaping, etc.
Differential Revision: https://reviews.llvm.org/D118505
With the exceptions of AttrOrTypeParameter and DerivedAttr, all of MLIR consistently uses $_ctxt as the substitute variable for the MLIRContext in TableGen C++ code.
Usually this does not matter unless one were to reuse some code in multiple fields, but it is still needlessly inconsistent and prone to error.
This patch fixes that by consistently using $_ctxt everywhere.
Differential Revision: https://reviews.llvm.org/D129153
- Applying CallOpInterface on CallOp and InvokeOp.
- Applying CallableInterface on LLVMFuncOp.
We're testing the changes using CallGraph, which uses both interfaces.
Differential Revision: https://reviews.llvm.org/D129026
The result shape of a rank-reducing subview cannot be inferred in the general case. Just the result rank is not enough. The only thing that we can infer is the layout map.
This change also improves the bufferization patterns of tensor.extract_slice and tensor.insert_slice to fully support rank-reducing operations.
Differential Revision: https://reviews.llvm.org/D129144
This patch extends the affine parser to allow affine constraints with `<=`.
This is useful for writing unit tests for the Presburger library and tests in general.
The internal storage and printing of IntegerSet is still in the original format.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D129046
ParallelInsertSlice behaves similarly to tensor::InsertSliceOp in its
rank-reducing properties.
This revision extends rank-reducing rewrite behavior and reuses most of the
existing implementation.
Differential Revision: https://reviews.llvm.org/D129091
Infers block/grid dimensions/indices or ranges of such dimensions/indices.
Reviewed By: krzysz00
Differential Revision: https://reviews.llvm.org/D129036
This patch adds translation for omp.task from OpenMPDialect to LLVM IR
Dialect and adds tests for the same.
Depends on D71989
Reviewed By: ftynse, kiranchandramohan, peixin, Meinersbur
Differential Revision: https://reviews.llvm.org/D123919
This revision updates the op semantics to also allow rank-reducing behavior and
updates the implementation to reuse code between the sequential and the parallel
versions of the op.
Depends on D128920
Differential Revision: https://reviews.llvm.org/D128985
This is mostly NFC and will allow tensor.parallel_insert_slice to gain
rank-reducing semantics by reusing the vast majority of the tensor.insert_slice impl.
Depends on D128857
Differential Revision: https://reviews.llvm.org/D128920
Currently, the parser for IntegerSet only allows constraints like:
```
affine-constraint ::= affine-expr `>=` `0`
| affine-expr `==` `0`
```
This form is sometimes unreadable and painful to use when writing unit tests
for the Presburger library and tests in general.
This patch extends the parser to allow affine constraints with affine-expr on
the RHS:
```
affine-constraint ::= affine-expr `>=` affine-expr
| affine-expr `==` affine-expr
```
The internal storage and printing of IntegerSet is still in the original format.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D128915
We can canonicalize consecutive complex.exp and complex.log ops, which are inverses of each other.
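The underlying identity, shown with std::complex rather than the dialect (a minimal sketch; the sample value and tolerance are arbitrary):

```cpp
#include <cassert>
#include <complex>

// For nonzero z, exp(log(z)) == z, so a complex.exp fed by a complex.log of
// the same value can be replaced by that value (log(exp(z)) == z likewise,
// as long as Im(z) stays within the principal branch).
int main() {
  std::complex<double> z(3.0, -4.0);
  std::complex<double> roundTrip = std::exp(std::log(z));
  assert(std::abs(roundTrip - z) < 1e-12);
  return 0;
}
```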
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D128966
The TOSA specification doesn't have a dilation attribute for transpose_conv2d,
and the padding array is of size 4 (top, bottom, left, right).
This change updates the dialect to match the specification, and updates the lit
tests to match the dialect changes.
Differential Revision: https://reviews.llvm.org/D127332
This revision avoids a crash in the 0-D case of distributing vector.transfer ops out of
vector.warp_execute_on_lane_0.
Due to the code's complexity and lack of documentation, the implementation had to be
untangled before realizing that the simple fix was to fail in the 0-D case.
The rewrite is still very useful for understanding this code better.
Differential Revision: https://reviews.llvm.org/D128793
This differential improves the handling of two error conditions by detecting them earlier and by providing better messages to help users understand what went wrong.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D128555
This patch implements the analysis state classes needed for sparse data-flow analysis and implements a dead-code analysis using those states to determine liveness of blocks, control-flow edges, region predecessors, and function callsites.
Depends on D126751
Reviewed By: rriddle, phisiart
Differential Revision: https://reviews.llvm.org/D127064
For the conversion to nvgpu `mma.sync` and `ldmatrix` pathways, the code
was missing support for the `i4` data type. While fixing this, another
bug was discovered that caused the number of ldmatrix tiles calculated for
certain operand types and configurations to be incorrect. This change
fixes both issues and adds additional tests.
Differential Revision: https://reviews.llvm.org/D128074
This revision merges the 2 split_reduction transforms and adds extra control by using attributes.
SplitReduction is known to require a concrete additional buffer to store temporary information.
Add an option to introduce a `bufferization.alloc_tensor` instead of `linalg.init_tensor`.
This behaves better with subset-based tiling and bufferization.
Differential Revision: https://reviews.llvm.org/D128722
When the iteration graph is cyclic (even after several attempts using fewer and fewer constraints), the current sparse compiler bails out, and no rewriting happens. However, this revision adds some new logic where the sparse compiler tries to find a single input sparse tensor that breaks the cycle, and then adds a proper sparse conversion operation. This way, more incoming kernels can be handled!
Note, the resulting code is not optimal (although it keeps more or less proper "sparse" complexity), and more improvements should be added (especially when the kernel directly yields without computation, such as the transpose example). However, handling is better than not handling ;-)
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D128847
Since only mutable types and attributes can go into infinite recursion
inside SubElementInterface::walkSubElement, and there are only a few of
them (mutable types and attributes), we introduce new traits for Type
and Attribute: TypeTrait::IsMutable and AttributeTrait::IsMutable,
respectively. They indicate whether a type or attribute is mutable.
Such traits are required if the ImplType defines a `mutate` function.
Then, inside SubElementInterface, we use a set to record the mutable
types and attributes that have already been visited.
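A minimal sketch of the cycle-breaking idea in plain C++ (a made-up node graph, not the actual interface code):

```cpp
#include <set>
#include <vector>

// Toy sub-element graph: only mutable nodes can be mutated after creation to
// point back into the graph, so only they can form cycles.
struct Node {
  bool isMutable = false;
  std::vector<Node *> subElements;
};

// Walk sub-elements, recording visited mutable nodes so that a self- or
// mutually-referential mutable node does not cause infinite recursion.
void walk(Node *node, std::set<Node *> &visitedMutable) {
  if (node->isMutable && !visitedMutable.insert(node).second)
    return; // already seen this mutable node: stop here
  for (Node *sub : node->subElements)
    walk(sub, visitedMutable);
}

int main() {
  Node recursive;
  recursive.isMutable = true;
  recursive.subElements.push_back(&recursive); // self-referential node
  std::set<Node *> visited;
  walk(&recursive, visited);
  return 0;
}
```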
Differential Revision: https://reviews.llvm.org/D127537