Uses the elementwise interface to generalize the canonicalization pattern and adds a new
pattern for the vector.contract case.
Differential Revision: https://reviews.llvm.org/D104343
Similarly to batch_matmul, the outermost dim is a batching dim,
and this op performs |b| matrix-vector products:
C[b, i] = sum_k(A[b, i, k] * B[b, k])
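For illustration, a hypothetical use of such an op (op spelling and shapes assumed):
```
// |b| = 4 batched matrix-vector products: (4x8x16) * (4x16) -> (4x8).
%res = linalg.batch_matvec
         ins(%A, %B : tensor<4x8x16xf32>, tensor<4x16xf32>)
         outs(%C : tensor<4x8xf32>) -> tensor<4x8xf32>
```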
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D104739
The ExecuteRegionOp is used to allow multiple blocks within SCF constructs. If the enclosing container allows multiple blocks, inline the region.
Differential Revision: https://reviews.llvm.org/D104960
Deduce circumstances where an affine store could not possibly be read by an operation (such as an affine load), and if so, eliminate the store.
Differential Revision: https://reviews.llvm.org/D105041
Update the OpDSL documentation to reflect recent changes. In particular, the updated documentation discusses:
- Attributes used to parameterize index expressions
- Shape-only tensor support
- Scalar parameters
Differential Revision: https://reviews.llvm.org/D105123
Extend the OpDSL syntax with an optional `domain` function to specify an explicit dimension order. The extension is needed to provide more control over the dimension order instead of deducing it implicitly depending on the formulation of the tensor comprehension. Additionally, the patch also ensures the symbols are ordered according to the operand definitions of the operation.
Differential Revision: https://reviews.llvm.org/D105117
This was missing, and there was also a bug in the lowering itself that went unnoticed as a result.
Differential Revision: https://reviews.llvm.org/D105122
Fix a missing check in generateCopyForMemRefRegion: in some cases, when
the region to generate copies for is itself empty, no fast buffer/copy
loops would have been allocated/generated. Add an extra assertion there
while at it.
Differential Revision: https://reviews.llvm.org/D105170
This reverts commit 652f4b5140.
Re-enable MLIR JIT tests.
The MLIR Bot was updated to export LD_LIBRARY_PATH=/usr/lib64, which
seems to fix this issue.
* Previously, we were only generating .h.inc files. We foresee the need to also generate implementations and this is a step towards that.
* Discussed in https://llvm.discourse.group/t/generating-cpp-inc-files-for-dialects/3732/2
* Deviates from the discussion above by generating a default constructor in the .cpp.inc file (and adding a tablegen bit that disables this in case it is user provided).
* Generating the destructor started as a way to flush out the missing includes (produces a link error), but it is a strict improvement on its own that is worth doing (i.e. by emitting key methods in the .cpp file, we root vtables in one translation unit, which is a non-controversial improvement).
Differential Revision: https://reviews.llvm.org/D105070
Depends On D105037
Avoid creating too many tasks when the number of workers is large.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D105126
Depends On D104999
Automatic reference counting based on liveness analysis can add a lot of reference counting overhead at runtime. If the IR is known to be constrained to a few particular "shapes", it's much more efficient to provide a custom reference counting policy that specifies where the async value reference count must be updated.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D105037
This revision adds the minimal plumbing to create a simple ComprehensiveModuleBufferizePass that can behave conservatively in the presence of CallOps.
A topological sort of caller/callee is performed and, if the call-graph is cycle-free, analysis can proceed.
Differential revision: https://reviews.llvm.org/D104859
Depends On D104998
Function calls "transfer ownership" to the callee and it puts additional constraints on the reference counting optimization pass
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D104999
Depends On D104891
Outlining the scf.parallel body as a function requires the async-parallel-for pass to be a ModuleOp pass.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D104998
Depends On D104850
Add a test that verifies that canonicalization removes all async overheads if it is statically known that the scf.parallel operation will be computed using a single block.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D104891
The case where a non-dominating read can be found is captured by slightly generalizing `AliasInfo::wouldCreateReadAfterWriteInterference`.
This simplification will make it easier to implement bufferization across function calls.
APIs are also simplified where possible.
Differential revision: https://reviews.llvm.org/D104845
This patch brings support for setting runtime preemption specifiers of
LLVM's GlobalValues. In LLVM semantics, if the `dso_local` attribute
is not explicitly requested, then it is inferred based on linkage and
visibility. We model this same behavior with a UnitAttribute: if it is
present, then we explicitly request the GlobalValue to be marked as
`dso_local`, otherwise we rely on the GlobalValue itself to make this
decision.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D104983
Without this, BufferDeallocationPass processes only CloneOps created during the pass itself and ignores all CloneOps that were already present in the IR.
For our specific usecase:
```
func @dealloc_existing_clones(%arg0: memref<?x?xf64>, %arg1: memref<?x?xf64>) -> memref<?x?xf64> {
return %arg0 : memref<?x?xf64>
}
```
Input arguments will be freed immediately after the return from the function, and we want to prolong the lifetime of the returned argument.
To achieve this we explicitly add clones of all input memrefs and expect that BufferDeallocationPass will add correct deallocs for them (unnecessary clone+dealloc pairs will be canonicalized away later).
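A sketch of the IR after clone insertion (clone op spelling assumed):
```
func @dealloc_existing_clones(%arg0: memref<?x?xf64>, %arg1: memref<?x?xf64>) -> memref<?x?xf64> {
  // An explicit clone of the returned input; BufferDeallocationPass is
  // expected to insert the matching dealloc for it.
  %0 = memref.clone %arg0 : memref<?x?xf64> to memref<?x?xf64>
  return %0 : memref<?x?xf64>
}
```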
Differential Revision: https://reviews.llvm.org/D104973
Adapt the StructuredOp verifier to ensure all operands are either in the input or the output group. The change is possible after adding support for scalar input operands (https://reviews.llvm.org/D104220).
Differential Revision: https://reviews.llvm.org/D104783
The current code does not preserve the order of the parallel
dimensions when doing multi-reductions and thus we can end
up in scenarios where the result shape does not match the
desired shape after reduction.
This patch fixes that by ensuring that the parallel indices
are in order and then concatenates them to the reduction dimensions
so that the reduction dimensions are innermost.
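For illustration, a sketch of the invariant being fixed (assembly syntax approximated):
```
// Reducing dims [1, 3]: the parallel dims 0 and 2 must keep their relative
// order, so the result is vector<2x4xf32>, not vector<4x2xf32>.
%1 = vector.multi_reduction #vector.kind<add>, %0 [1, 3]
       : vector<2x3x4x5xf32> to vector<2x4xf32>
```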
Differential Revision: https://reviews.llvm.org/D104884
This revision adds detection for changes to either the mlir-lsp-server binary or the setting, and prompts the user to restart the server. Whether the user gets prompted or not is a configurable setting in the extension, and this setting may be updated based on the user response to the prompt.
Differential Revision: https://reviews.llvm.org/D104501
This enables creating a replacement rule where the range of positional replacements
need not be spelled out, or is not known (e.g., enabling a rewrite that
forwards all operands to a call generically).
Differential Revision: https://reviews.llvm.org/D104955
Input/output types can be integers, which represent a quantized convolution.
Update verifier to expect this behavior.
Reviewed By: sjarus
Differential Revision: https://reviews.llvm.org/D104949
MemRefDataFlow performs mem2reg style operations for affine load/stores. Unfortunately, it is not presently correct in the presence of external operations such as memref.cast, or function calls. This diff extends the functionality of the pass to remain correct in the presence of such ops.
Differential Revision: https://reviews.llvm.org/D104053
A canonicalization would accidentally remove a memref allocation if it is only stored into. However, this is incorrect if the allocation itself is the value being stored, not the allocation being stored into.
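A minimal sketch of the problematic case (types illustrative):
```
// %inner is only used as a stored *value*, not as a destination,
// so the canonicalization must not remove this alloc.
%inner = memref.alloc() : memref<4xf32>
memref.store %inner, %outer[%c0] : memref<8xmemref<4xf32>>
```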
Differential Revision: https://reviews.llvm.org/D104947
Given a select that returns the logical negation of the condition, replace it with a not of the condition.
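A sketch of the rewrite (using the std ops of the time):
```
%false = constant 0 : i1
%true = constant 1 : i1
// Before: a select that returns the logical negation of its condition.
%0 = select %cond, %false, %true : i1
// After: a "not" of the condition, expressed as xor with true.
%1 = xor %cond, %true : i1
```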
Differential Revision: https://reviews.llvm.org/D104966
Reduce code duplication: Move various helper functions that are duplicated in TensorDialect, MemRefDialect, LinalgDialect, and StandardDialect into a new StaticValueUtils.cpp.
Differential Revision: https://reviews.llvm.org/D104687
Depends On D104780
Recursive work splitting instead of sequential async task submission gives a ~20-30% speedup in microbenchmarks.
Algorithm outline:
1. Collapse scf.parallel dimensions into a single dimension
2. Compute the block size for the parallel operations from the 1d problem size
3. Launch parallel tasks
4. Each parallel task reconstructs its own bounds in the original multi-dimensional iteration space
5. Each parallel task computes the original parallel operation body using scf.for loop nest
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D104850
Specify the `!async.group` size (the number of tokens that will be added to it) at construction time. The `async.await_all` operation can potentially race with `async.execute` operations that keep updating the group; for this reason it is required to know upfront how many tokens will be added to the group.
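A sketch of what this looks like in IR (operand spelling assumed):
```
%c2 = constant 2 : index
// The group size is now fixed at construction time; exactly two
// tokens may be added to the group before async.await_all completes.
%group = async.create_group %c2 : !async.group
```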
Reviewed By: ftynse, herhut
Differential Revision: https://reviews.llvm.org/D104780
Moves iteration lattice/merger code into new SparseTensor/Utils directory. A follow-up CL will add lattice/merger unit tests.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D104757
This commit moves the LLVM-to-MLIR type translator into a public header for use by external projects or other code.
Unlike a previous attempt (https://reviews.llvm.org/D104726), this patch moves the type conversion into separate files, which remedies the linker error that was only caught by CI.
Differential Revision: https://reviews.llvm.org/D104834
scf::ForOp bufferization analysis proceeds just like for any other op (including FuncOp) at its boundaries; i.e. if:
1. The tensor operand is inplaceable.
2. The matching result has no subsequent read (i.e. all reads dominate the scf::ForOp).
3. Bufferizing in place does not create a RAW interference.
then it can bufferize inplace.
Still there are a few differences:
1. bbArgs for an scf::ForOp are always considered inplaceable when seen from ops inside the body. This is because either a) the matching tensor operand is not inplaceable and an alloc will be inserted (which makes the bbArg itself inplaceable); or b) the tensor operand and bbArg are both already inplaceable.
2. Bufferization within the scf::ForOp body has implications to the outside world : the scf.yield terminator may well ping-pong values of the same type. This muddies the water for alias analysis and is not supported atm. Such cases result in a pass failure.
Differential revision: https://reviews.llvm.org/D104490
Add an index_dim annotation to specify the shape-to-loop mapping of shape-only tensors. A shape-only tensor is not accessed within the body of the operation but is required to span the iteration space of certain operations such as pooling.
Differential Revision: https://reviews.llvm.org/D104767
This test shows how convert-linalg-to-std rewrites named linalg ops as library calls.
This can be coupled with a C++ shim to connect to existing libraries such as https://gist.github.com/nicolasvasilache/691ef992404c49dc9b5d543c4aa6db38.
Everything can then be linked together with mlir-cpu-runner and MLIR can call C++ (which can itself call MLIR if needed).
This should evolve into specific rewrite patterns that can be applied on op instances independently rather than having to use a full conversion.
Differential Revision: https://reviews.llvm.org/D104842
Extend the OpDSL with index attributes. After tensors and scalars, index attributes are the third operand type. An index attribute represents a compile-time constant that is limited to index expressions. Example use cases are the strides and dilations defined by convolution and pooling operations.
The patch only updates the OpDSL. The C++ yaml codegen is updated by a followup patch.
Differential Revision: https://reviews.llvm.org/D104711
This includes a basic implementation for the OpenMP target
operation. Currently, the if, thread_limit, private, shared, device, and nowait clauses are included in this implementation.
Co-authored-by: Kiran Chandramohan <kiran.chandramohan@arm.com>
Reviewed By: ftynse, kiranchandramohan
Differential Revision: https://reviews.llvm.org/D102816
In cases where arithmetic (addi/muli) ops are performed on an scf.for loop's induction variable with a single use, we can fold those ops directly into the scf.for loop.
For example, in the following code:
```
scf.for %i = %c0 to %arg1 step %c1 {
  %0 = addi %arg2, %i : index
  %1 = muli %0, %c4 : index
  %2 = memref.load %arg0[%1] : memref<?xi32>
  %3 = muli %2, %2 : i32
  memref.store %3, %arg0[%1] : memref<?xi32>
}
```
we can lift `%0` up into the scf.for loop range, as it is the only user of %i:
```
%lb = addi %arg2, %c0 : index
%ub = addi %arg2, %arg1 : index
scf.for %i = %lb to %ub step %c1 {
  %1 = muli %i, %c4 : index
  %2 = memref.load %arg0[%1] : memref<?xi32>
  %3 = muli %2, %2 : i32
  memref.store %3, %arg0[%1] : memref<?xi32>
}
```
Reviewed By: mehdi_amini, ftynse, Anthony
Differential Revision: https://reviews.llvm.org/D104289
This commit moves the LLVM-to-MLIR type translator into a public header for use by external projects or other code.
Differential Revision: https://reviews.llvm.org/D104726
The patch changes the pretty-printed FillOp operand order from (output, value) to (value, output). The change is a follow up to https://reviews.llvm.org/D104121 that passes the fill value using a scalar input instead of the former capture semantics.
Differential Revision: https://reviews.llvm.org/D104356
During slice computation of affine loop fusion, detect one id as the mod
of another id w.r.t. a constant in a more generic way. Restrictions on
the coefficients of the ids are removed. Also, information from the
previously calculated ids is used for simplification of affine
expressions, e.g.,
If `id1` = `id2`,
`id_n - divisor * id_q - id_r + id1 - id2 = 0`, is simplified to:
`id_n - divisor * id_q - id_r = 0`.
If `c` is a non-zero integer,
`c*id_n - c*divisor * id_q - c*id_r = 0`, is simplified to:
`id_n - divisor * id_q - id_r = 0`.
Reviewed By: bondhugula, ayzhuang
Differential Revision: https://reviews.llvm.org/D104614
Correct a documentation typo, and delete a duplicate test in
`pdl-to-pdl-interp-rewriter.mlir`.
Reviewed By: pr4tgpt, bondhugula, rriddle
Differential Revision: https://reviews.llvm.org/D104688
This revision refactors the usage of multithreaded utilities in MLIR to use a common
thread pool within the MLIR context, in addition to a new utility that makes writing
multi-threaded code in MLIR less error prone. Using a unified thread pool brings about
several advantages:
* Better thread usage and more control
We currently use the static llvm threading utilities, which do not allow multiple
levels of asynchronous scheduling (even if there are open threads). This is due to
how the current TaskGroup structure works, which only allows one truly multithreaded
instance at a time. By having our own ThreadPool we gain more control and flexibility
over our job/thread scheduling, and in a followup can enable threading more parts of
the compiler.
* The static nature of TaskGroup causes issues in certain configurations
Due to the static nature of TaskGroup, there have been quite a few problems related to
destruction that have caused several downstream projects to disable threading. See
D104207 for discussion on some related fallout. By having a ThreadPool scoped to
the context, we don't have to worry about destruction and can ensure that any
additional MLIR thread usage ends when the context is destroyed.
Differential Revision: https://reviews.llvm.org/D104516
Slowly we are moving toward full support of sparse tensor *outputs*. The first
step was support for all-dense annotated "sparse" tensors. This step adds
support for truly sparse tensors, but only for operations in which the values
of a tensor change but not the nonzero structure (this was referred to as
"simply dynamic" in the [Bik96] thesis).
Some background text was posted on discourse:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/25
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D104577
This used to be important for reducing lock contention when accessing identifiers, but
the cost of the cache can be quite large if parsing in a multi-threaded context. After
D104167, the win of keeping a cache is not worth the cost.
Differential Revision: https://reviews.llvm.org/D104737
Operations currently rely on the string name of attributes during attribute lookup/removal/replacement, in build methods, and more. This unfortunately means that some of the most used APIs in MLIR require string comparisons, additional hashing(+mutex locking) to construct Identifiers, and more. This revision remedies this by caching identifiers for all of the attributes of the operation in its corresponding AbstractOperation. Just updating the autogenerated usages brings up to a 15% reduction in compile time, greatly reducing the cost of interacting with the attributes of an operation. This number can grow even higher as we use these methods in handwritten C++ code.
Methods for accessing these cached identifiers are exposed via `<attr-name>AttrName` methods on the derived operation class. Moving forward, users should generally use these methods over raw strings when an attribute name is necessary.
Differential Revision: https://reviews.llvm.org/D104167
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.
* Rename SubTensorOp -> tensor.extract_slice, SubTensorInsertOp -> tensor.insert_slice.
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).
Note: This is a fixed version of https://reviews.llvm.org/D104499, which was reverted due to a missing update to two CMakeLists.txt files.
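For illustration, the renamed ops in use (shapes chosen arbitrarily):
```
%s = tensor.extract_slice %t[0, 0] [4, 4] [1, 1]
       : tensor<8x8xf32> to tensor<4x4xf32>
%r = tensor.insert_slice %s into %t[0, 4] [4, 4] [1, 1]
       : tensor<4x4xf32> into tensor<8x8xf32>
```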
Differential Revision: https://reviews.llvm.org/D104676
Adapt the FillOp definition to use a scalar operand instead of a capture. This patch is a follow up to https://reviews.llvm.org/D104109. As the input operands are in front of the output operands the patch changes the internal operand order of the FillOp. The pretty printed version of the operation remains unchanged though. The patch also adapts the linalg to standard lowering to ensure the c signature of the FillOp remains unchanged as well.
Differential Revision: https://reviews.llvm.org/D104121
TosaMakeBroadcastable needs to include tosa.div, which was added later in the
specification.
Reviewed By: sjarus, NatashaKnk
Differential Revision: https://reviews.llvm.org/D104157
The approximation relies on a range-reduced version y in [0, pi/2]. An input x has
the property that sin(x) = sin(y), -sin(y), cos(y), or -cos(y), depending on which quadrant x
is in, where sin(y) and cos(y) are approximated with a 5th-degree polynomial (in x^2).
As a result, a single pattern can be used to compute the approximation for both sine and cosine.
Reviewed By: ezhulenev
Differential Revision: https://reviews.llvm.org/D104582
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.
* Rename ops: SubTensorOp --> ExtractTensorOp, SubTensorInsertOp --> InsertTensorOp
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).
Differential Revision: https://reviews.llvm.org/D104499
Redirect the copy ctor to the actual class instead of
overwriting it with the `TypeID`-based ctor.
This allows the final Pass classes to have extra fields and logic for their copy.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D104302
* Remove dependency: Standard --> MemRef
* Add dependencies: GPUToNVVMTransforms --> MemRef, Linalg --> MemRef, MemRef --> Tensor
* Note: The `subtensor_insert_propagate_dest_cast` test case in MemRef/canonicalize.mlir will be moved to Tensor/canonicalize.mlir in a subsequent commit, which moves over the remaining Tensor ops from the Standard dialect to the Tensor dialect.
Differential Revision: https://reviews.llvm.org/D104506
This revision adds a BufferizationAliasInfo which maintains and updates information about which tensors will alias once bufferized, which bufferized tensors are equivalent to others and how to handle clobbers.
Bufferization greedily tries to bufferize inplace by:
1. first trying to bufferize SubTensorInsertOp inplace, in reverse order (these are deemed the most expensive).
2. then trying to bufferize all non-SubTensorOp / non-SubTensorInsertOp ops, in reverse order.
3. lastly trying to bufferize all SubTensorOp in reverse order.
Reverse order is a heuristic that seems to work nicely because structured tensor codegen very often proceeds by:
1. take a subset of a tensor
2. compute on that subset
3. insert the result subset into the full tensor and yield a new tensor.
BufferizationAliasInfo + equivalence sets + clobber analysis allows bufferizing nested
subtensor/compute/subtensor_insert sequences inplace to a certain extent.
To fully realize inplace bufferization, additional container-containee analysis will be necessary and is left for a subsequent commit.
Differential revision: https://reviews.llvm.org/D104110
This revision adds support for passing a functor to SourceMgrDiagnosticHandler for filtering out FileLineColLocs when emitting a diagnostic. More specifically, this can be useful in situations where there may be large CallSiteLocs with locations that aren't necessarily important/useful for users.
For now the filtering support is limited to FileLineColLocs, but conceptually we could allow filtering for all locations types if a need arises in the future.
Differential Revision: https://reviews.llvm.org/D103649
Introduce the execute_region op that is able to hold a region which it
executes exactly once. The op encapsulates a CFG within itself while
isolating it from the surrounding control flow. Proposal discussed here:
https://llvm.discourse.group/t/introduce-std-inlined-call-op-proposal/282
execute_region enables one to inline a function without lowering out all
other higher level control flow constructs (affine.for/if, scf.for/if)
to the flat list of blocks / CFG form. It thus allows the benefit of
transforms on higher level control flow ops available in the presence of
the inlined calls. The inlined calls continue to benefit from
propagation of SSA values across their top boundary. Functions won’t
have to remain outlined until later than desired. Abstractions like
affine execute_regions, lambdas with implicit captures could be lowered
to this without first lowering out structured loops/ifs or outlining.
Two potential early use cases are: (1) an early inliner (which
can inline functions by introducing execute_region ops), (2) lowering of
an affine.execute_region, which cleanly maps to an scf.execute_region
when going from the affine dialect to the scf dialect.
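A sketch of the op with a multi-block body (using the std branch ops of the time):
```
%v = scf.execute_region -> i64 {
  cond_br %cond, ^bb1, ^bb2
^bb1:
  %c1 = constant 1 : i64
  br ^bb3(%c1 : i64)
^bb2:
  %c2 = constant 2 : i64
  br ^bb3(%c2 : i64)
^bb3(%x: i64):
  scf.yield %x : i64
}
```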
Differential Revision: https://reviews.llvm.org/D75837
This functionality is similar to delayed registration of dialect interfaces. It
allows external interface models to be registered before the dialect containing
the attribute/operation/type interface is loaded, or even before the context is
created.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104397
This is similar to attribute and type interfaces and mostly the same mechanism
(FallbackModel / ExternalModel, ODS generation). There are minor differences in
how the concept-based polymorphism is implemented for operations that are
accounted for by ODS backends, and this essentially adds a test and exposes the
API.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104294
ODS currently emits the interface trait class as a nested class inside the
interface class. As an unintended consequence, the default implementations of
interface methods have implicit access to static fields of the interface class,
e.g. those declared in `extraClassDeclaration`, including private methods (!),
or in the parent class. This may break the use of default implementations for
external models, which are not defined in the interface class, and generally
complicates the abstraction.
Emit interface traits outside of the interface class itself to avoid accidental
implicit visibility. Public static fields can still be accessed via explicit
qualification with a class name, e.g., `MyOpInterface::staticMethod()` instead
of `staticMethod`.
Update the documentation to clarify the role of `extraClassDeclaration` in
interfaces.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104384
Based on discussion in
[this](https://llvm.discourse.group/t/remove-canonicalizer-for-memref-dim-via-shapedtypeopinterface/3641)
thread the pattern to resolve the `memref.dim` of a value that is a
result of an operation that implements the
`InferShapedTypeOpInterface` is moved to a separate pass instead of
running it as a canonicalization pass. This allows shape resolution to
happen when explicitly required, instead of automatically through a
canonicalization.
Differential Revision: https://reviews.llvm.org/D104321
Many tests fail after D101969 (https://reviews.llvm.org/D101969)
on big-endian machines. This patch changes the bit order of
TrailingOperandStorage on big-endian machines. This patch
works on System Z (Triple = "s390x-ibm-linux", CPU = "z14").
Signed-off-by: Haruki Imai <imaihal@jp.ibm.com>
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104225
When the vscode extension is published, it may be unclear how to contribute improvements to the extension. This revision makes it clear that contributions should follow the traditional LLVM guidelines.
This patch changes the (not recommended) static registration API from:
```
static PassRegistration<MyPass> reg("my-pass", "My Pass Description.");
```
to:
```
static PassRegistration<MyPass> reg;
```
and the explicit registration from:
```
void registerPass("my-pass", "My Pass Description.",
                  [] { return createMyPass(); });
```
to:
```
void registerPass([] { return createMyPass(); });
```
It is expected that Pass implementations override the getArgument() method
instead. This will ensure that pipeline descriptions can be printed and parsed
back.
Differential Revision: https://reviews.llvm.org/D104421
Adds an integration test for the SPMM (sparse matrix multiplication) kernel, which multiplies a sparse matrix by a dense matrix, resulting in a dense matrix. This is just a simple modification on the existing matrix-vector multiplication kernel.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D104334
Make the store-to-load forwarding condition for -memref-dataflow-opt less
conservative. Post-dominance info is not really needed. Add an additional
check for common cases.
Differential Revision: https://reviews.llvm.org/D104174
To control the number of outer parallel loops, we need to process the
outer loops first; hence a pre-order walk fixes the issue.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D104361
This allows dialects to do different post-processing of operations with the inliner (my use case requires different attribute propagation rules depending on the call op). This hook runs before the regular processInlinedBlocks method.
Differential Revision: https://reviews.llvm.org/D104399
In a region with multiple blocks the verifier will try to check
dominance and may query the successor list of a block, even though the block
may be empty or may not end with a terminator.
Differential Revision: https://reviews.llvm.org/D104411
We have several ways of introducing a scalar invariant value into
linalg generic ops (should we limit this somewhat?). This revision
makes sure we handle all of them correctly in the sparse compiler.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D104335
Default implementations of interfaces may rely on extra class
declarations, which aren't currently generated in the external model;
these in turn may rely on functions defined in the main Attribute/Type
class, which wouldn't be available on the external model.
This is a very careful start with allowing sparse tensors on the
left-hand side of tensor index expressions (viz. sparse output).
Note that there is a subtle difference between non-annotated tensors
(dense, remain n-dim, handled by classic bufferization) and all-dense
annotated "sparse" tensors (linearized to 1-dim without overhead
storage, bufferized by sparse compiler, backed by runtime support library).
This revision gently introduces some new IR to facilitate annotated outputs,
to be generalized to truly sparse tensors in the future.
Reviewed By: gussmith23, bixia
Differential Revision: https://reviews.llvm.org/D104074
The index cast operation accepts vector types. Implement its lowering in this patch.
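For illustration, a vector variant of the cast (shapes assumed):
```
%0 = index_cast %v : vector<4xindex> to vector<4xi64>
```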
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D104280
It may be desirable to provide an interface implementation for an attribute or
a type without modifying the definition of said attribute or type. Notably,
this allows implementing interfaces for attributes and types outside of the
dialect that defines them and, in particular, providing interfaces for built-in
types. Provide the mechanism to do so.
Currently, separable registration requires the attribute or type to have been
registered with the context, i.e. for the dialect containing the attribute or
type to be loaded. This can be relaxed in the future using a mechanism similar
to delayed dialect interface registration.
See https://llvm.discourse.group/t/rfc-separable-attribute-type-interfaces/3637
Depends On D104233
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D104234
The patch replaces the existing capture functionality by scalar operands that have been introduced by https://reviews.llvm.org/D104109. Scalar operands behave as tensor operands except for the fact that they are not indexed. As a result ScalarDefs can be accessed directly as no indexing expression is needed.
The patch only updates the OpDSL. The C++ side is updated by a follow up patch.
Differential Revision: https://reviews.llvm.org/D104220
This doesn't add any canonicalizations, but executes the same
simplification on bufferSemantic linalg.generic ops by using
linalg::ReshapeOp instead of linalg::TensorReshapeOp.
Differential Revision: https://reviews.llvm.org/D103513
This is useful for "build tuple" type ops. In my case, in npcomp, I have
an op:
```
// Result type is `!torch.tuple<!torch.tensor, !torch.tensor>`.
torch.prim.TupleConstruct %0, %1 : !torch.tensor, !torch.tensor
```
and the context is required for the `Torch::TupleType::get` call (for
the case of an empty tuple).
The handling of these FmtContexts in the code is pretty ad hoc -- I didn't
attempt to rationalize it and just made a targeted fix. As someone
unfamiliar with the code I had a hard time seeing how to more broadly fix
the situation.
Differential Revision: https://reviews.llvm.org/D104274
The parser of the generic op did not recognize the output from mlir-opt when there
are multiple outputs: one would wrap the result types in braces, and one would
not. The patch makes the behavior the same.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D104256
This changes the pass manager to not rerun the verifier when a pass says it
didn't change anything or after an OpToOpPassAdaptor, since neither of those
cases need verification (and if the pass lied, then there will be much larger
semantic problems than will be caught by the verifier).
This maintains behavior in EXPENSIVE_CHECKS mode.
Differential Revision: https://reviews.llvm.org/D104243
Interface patterns are unique in that they get added to every operation that also implements that interface, given that they aren't tied to individual operations. When the same interface pattern gets added to multiple operations (such as the current behavior with Linalg), a reference to each of these patterns is added to every op (meaning that an operation will now have N references to effectively the same pattern). This revision fixes this problematic behavior in Linalg, and can bring upwards of a 25% reduction in compile time in Linalg based workloads.
Differential Revision: https://reviews.llvm.org/D104160
This changes the outer verification loop to not recurse into
IsolatedFromAbove operations - instead return them up to a place
where a parallel for loop can process them all in parallel. This
also changes Dominance checking to happen on IsolatedFromAbove
chunks of the region tree, which makes it easy to fold operation
and dominance verification into a single simple parallel regime.
This speeds up firtool in CIRCT from ~40s to 31s on a large
testcase in -verify-each mode (the default). The .fir parser and
module passes in particular benefit from this - FModule passes
(roughly analogous to function passes) were already running the
verifier in parallel as part of the pass manager. This allows
the whole-module passes to verify their enclosed functions /
FModules in parallel.
Disabling the verifier is still faster (26.3s on the same testcase),
but we do expect the verifier to take *some* time.
Differential Revision: https://reviews.llvm.org/D104207
This change adds `AutomaticAllocationScope` to the
memref.alloca_scope op. Additionally, it also clarifies
that alloca_scope is conceptually a passthrough operation.
Reviewed By: ftynse, bondhugula
Differential Revision: https://reviews.llvm.org/D104227
There's no need for `toSmallVector()` as `SmallVector.h` already provides a `to_vector` free function that takes a range.
Reviewed By: Quuxplusone
Differential Revision: https://reviews.llvm.org/D104024
Actually, no vector types are supported so far. We should add the traits once
vector types are supported (e.g. ElementwiseMappable.traits).
Instead, add the Elementwise trait to each op.
Differential Revision: https://reviews.llvm.org/D104103
Up to now all structured op operands are assumed to be shaped. The patch relaxes this assumption and allows scalar input operands. In contrast to shaped operands, scalar operands are not indexed and are directly forwarded to the body of the operation. As with all other operands, scalar operands are associated to an indexing map that in the case of a scalar or a 0D-operand has an empty range.
We will use scalar operands as a replacement for the capture mechanism. In contrast to captures, the approach ensures we can generate the function signature from the operand list and it prevents outdated capture values in case a transformation updates only the capture operand but not the hidden body of a named operation.
Removing captures and updating existing operations such as linalg.fill is left for a later patch.
The patch depends on https://reviews.llvm.org/D103891 and https://reviews.llvm.org/D103890.
Differential Revision: https://reviews.llvm.org/D104109
The padding of such ops is not generated in a vectorized way. Instead, emit a tensor::GenerateOp.
We may vectorize GenerateOps in the future.
Differential Revision: https://reviews.llvm.org/D103879
If the source operand of a linalg.pad_op operation has static shape, vectorize the copying of the source.
Differential Revision: https://reviews.llvm.org/D103747
Currently limited to constant pad values. Any combination of dynamic/static tensor sizes and padding sizes is supported.
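A sketch of the kind of pad this pattern targets (shapes and names assumed):
```
// Constant pad value %cst, static low padding, dynamic high padding.
%padded = linalg.pad_tensor %src low[0, 0] high[%h0, %h1] {
^bb0(%i: index, %j: index):
  linalg.yield %cst : f32
} : tensor<4x4xf32> to tensor<?x?xf32>
```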
Differential Revision: https://reviews.llvm.org/D103679
The generic vectorization pattern handles only those cases where
low and high padding is zero. Such cases are already handled by a
canonicalization pattern.
Also add a new canonicalization test case to ensure that tensor cast ops
are properly inserted.
A more general vectorization pattern will be added in a subsequent commit.
Differential Revision: https://reviews.llvm.org/D103590
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_write.
Differential Revision: https://reviews.llvm.org/D103137
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a subtensor_insert.
Differential Revision: https://reviews.llvm.org/D103780
Vectorize linalg.pad_tensor without generating a linalg.init_tensor when consumed by a transfer_read.
Differential Revision: https://reviews.llvm.org/D103735
* Add a helper function that returns the constant padding value (if applicable).
* Remove the existing getConstantYieldValueFromBlock function, which does almost the same thing.
* Adapted from D103243.
Differential Revision: https://reviews.llvm.org/D104004
Add `tensor.insert` op to make `tensor.extract`/`tensor.insert` work in pairs
for `scalar` domain. Like `subtensor`/`subtensor_insert` work in pairs in
`tensor` domain, and `vector.transfer_read`/`vector.transfer_write` work in
pairs in `vector` domain.
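For illustration, the two ops used as a pair (shapes assumed):
```
%e = tensor.extract %t[%i, %j] : tensor<4x4xf32>
%u = tensor.insert %e into %t[%j, %i] : tensor<4x4xf32>
```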
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D104139
There is a slight change in behavior: if the arg dictionary is empty
then we return this empty dictionary instead of a null attribute.
This is more consistent with accessing it through:
```
ArrayAttr args_attr = func_op.getAllArgAttrs();
args_attr[num].cast<DictionaryAttr>() ...
```
Differential Revision: https://reviews.llvm.org/D104189
The commit simplifies affine.if ops:
the affine.if operation is removed if the condition is universally true or false, and the then/else block is merged into the parent block.
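A sketch of the simplification (trivially-true integer set assumed):
```
// The set (1 >= 0) is universally true, so the then-block is merged
// into the parent block:
affine.if affine_set<() : (1 >= 0)>() {
  "test.foo"() : () -> ()
}
// becomes simply:
"test.foo"() : () -> ()
```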
Signed-off-by: Shashij Gupta shashij.gupta@polymagelabs.com
Reviewed By: bondhugula, pr4tgpt
Differential Revision: https://reviews.llvm.org/D104015
Add support to Python bindings for the MLIR execution engine to load a
specified list of shared libraries - e.g., to use MLIR runtime
utility libraries.
Differential Revision: https://reviews.llvm.org/D104009
## Introduction
This proposal describes the new op to be added to the `std` dialect (and later
moved to the `memref` dialect), called `alloca_scope`.
## Motivation
Alloca operations are easy to misuse, especially if one relies on them while doing
rewriting/conversion passes. For example, consider a simple example of two
independent dialects: one defines an op that wants to allocate on-stack and
another defines a construct that corresponds to some form of looping:
```
dialect1.looping_op {
  %x = dialect2.stack_allocating_op
}
```
Since the dialects might not know about each other they are going to define a
lowering to std/scf/etc independently:
```
scf.for … {
  %x_temp = std.alloca …
  … // do some domain-specific work using %x_temp buffer
  … // and store the result into %result
  %x = %result
}
```
Later on, the scf and `std.alloca` are going to be lowered to llvm using a
combination of `llvm.alloca` and unstructured control flow.
At this point the use of `%x_temp` is bound to either be optimized by
llvm (for example using mem2reg) or, in the worst case, perform an independent
stack allocation on each iteration of the loop. While the llvm optimizations are
likely to succeed, they are not guaranteed to do so, and they provide
opportunities for surprising issues with unexpected use of stack size.
## Proposal
We propose a new operation that defines a finer-grain allocation scope for the
alloca-allocated memory called `alloca_scope`:
```
alloca_scope {
  %x_temp = alloca …
  ...
}
```
Here the lifetime of `%x_temp` is going to be bound to the narrow annotated
region within `alloca_scope`. Moreover, one can also return values out of the
alloca_scope with an accompanying `alloca_scope.return` op (that behaves
similarly to `scf.yield`):
```
%result = alloca_scope {
  %x_temp = alloca …
  …
  alloca_scope.return %myvalue
}
```
Under the hood, the `alloca_scope` is going to be lowered to a combination of
`llvm.intr.stacksave` and `llvm.intr.stackrestore` that are going to be invoked
automatically as control flow enters and leaves the body of the `alloca_scope`.
The key value of the new op is to allow deterministic guaranteed stack use
through an explicit annotation in the code which is finer-grain than the
function-level scope of `AutomaticAllocationScope` interface. `alloca_scope`
can be inserted at arbitrary locations and doesn’t require non-trivial
transformations such as outlining.
## Which dialect
Before memref dialect is split, `alloca_scope` can temporarily reside in `std`
dialect, and later on be moved to `memref` together with the rest of
memory-related operations.
## Implementation
An implementation of the op is available [here](https://reviews.llvm.org/D97768).
Original commits:
* Add initial scaffolding for alloca_scope op
* Add alloca_scope.return op
* Add no region arguments and variadic results
* Add op descriptions
* Add failing test case
* Add another failing test
* Initial implementation of lowering for std.alloca_scope
* Fix backticks
* Fix getSuccessorRegions implementation
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D97768
This is the first step to convert vector ops to MMA operations in order to
target GPU tensor core ops. This currently only supports simple cases;
transpose and element-wise operations will be added later.
Differential Revision: https://reviews.llvm.org/D102962
Create a ComplexUnaryOp base class and use it for AbsOp, ReOp and ImOp.
Sort all ops in lexicographic order.
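For reference, the unary ops sharing the new base class (each returns f32):
```
%re = complex.re %z : complex<f32>
%im = complex.im %z : complex<f32>
%mag = complex.abs %z : complex<f32>
```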
Differential Revision: https://reviews.llvm.org/D104095
This allows for better interaction with tools (such as mlir-lsp-server), as it separates the IR into separate modules for consecutive dumps.
Differential Revision: https://reviews.llvm.org/D104073
These interfaces allow for a composite attribute or type to opaquely provide access to any held attributes or types. There are several intended use cases for this interface. The first of which is to allow the printer to create aliases for non-builtin dialect attributes and types. In the future, this interface will also be extended to allow for SymbolRefAttr to be placed on other entities aside from just DictionaryAttr and ArrayAttr.
To limit potential test breakages, this revision only adds the new interfaces to the builtin attributes/types that are currently hardcoded during AsmPrinter alias generation. In a followup the remaining builtin attributes/types, and non-builtin attributes/types can be extended to support it.
Differential Revision: https://reviews.llvm.org/D102945
This allows for using other type interfaces in the builtin dialect, which currently results in a compile time failure (as it generates duplicate interface declarations).
This adds Sdot2d op, which is similar to the usual Neon
intrinsic except that it takes 2d vector operands, reflecting the
structure of the arithmetic that it's performing: 4 separate
4-dimensional dot products, whence the vector<4x4xi8> shape.
This also adds a new pass, arm-neon-2d-to-intr, lowering
this new 2d op to the 1d intrinsic.
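A sketch of the new op (assembly spelling assumed):
```
// 4 separate 4-dimensional dot products accumulated into a vector<4xi32>.
%d = arm_neon.2d.sdot %acc, %a, %b
       : vector<4x4xi8>, vector<4x4xi8> to vector<4xi32>
```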
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D102504
This revision adds focused documentation on each of the individual features of the server, with images showcasing how they look in vscode.
Differential Revision: https://reviews.llvm.org/D103942
This allows for building an outline of the symbols and symbol tables within the IR. This allows for easy navigations to functions/modules and other symbol/symbol table operations within the IR.
Differential Revision: https://reviews.llvm.org/D103729
This allows creating a matrix with all elements set to a given value. This is
needed to be able to implement a simple dot op.
Differential Revision: https://reviews.llvm.org/D103870
This is a roll forward of D102679.
This patch simplifies the implementation of Sequence and makes it compatible with llvm::reverse.
It exposes the reverse iterators through rbegin/rend which prevents a dangling reference in std::reverse_iterator::operator++().
Note: Compared to D102679, this patch introduces a `asSmallVector()` member function and fixes compilation issue with GCC 5.
Differential Revision: https://reviews.llvm.org/D103948
This brings us closer to replacing the LLVM data layout string with a
first-class layout modeling in MLIR.
Depends On D103945
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D103946
This reverts commit 08664d005c, which according to
https://reviews.llvm.org/D103373 was pushed accidentally, and I believe it
causes timeouts in some internal mlir tests.
Allow gpu ops implementing the async interface to already be async when running the GpuAsyncRegionPass.
That pass threads a 'current token' through a block with ops implementing the gpu async interface.
After this change, existing async ops (returning a !gpu.async.token) set the current token.
Existing synchronous `gpu.wait` ops reset the current token.
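A minimal sketch of the token threading (other async ops elided):
```
%t = gpu.wait async   // an async op produces a !gpu.async.token and sets the current token
gpu.wait [%t]         // a synchronous wait consumes it and resets the current token
```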
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D103396
A common mistake for newcomers to MLIR is to try to store extra members
on the Op class. However, these are intended to be thin wrappers around
an Operation*; all the storage is meant to be encoded in attributes on
the underlying Operation. This can be confusing to debug, so better to
catch it at build time.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103869
tosa.matmul is a batched matmul; update the lowering for linalg
along with the tests.
Reviewed By: sjarus
Differential Revision: https://reviews.llvm.org/D103937
This allows us to remove the `spv.mlir.endmodule` op and
all the code associated with it.
Along the way, tightened the APIs for `spv.module` a bit
by removing some aliases. Now we use `getRegion` to get
the only region, and `getBody` to get the region's only
block.
Reviewed By: mravishankar, hanchung
Differential Revision: https://reviews.llvm.org/D103265
Consolidate the type conversion in a single function to make it simpler
to use. This allows reusing the type conversion for upcoming ops.
Differential Revision: https://reviews.llvm.org/D103868
The top-level verifier of data layout specifications delegates verification of
entries with identifier keys to the dialect of the identifier prefix. This flow
was missing a check whether the dialect actually implements the relevant
interface.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D103945
ArmSVE-specific memory operations are needed to generate end-to-end
code for as long as MLIR core doesn't support scalable vectors. These
instructions will eventually be unnecessary; for now they're required
for more complex testing.
Differential Revision: https://reviews.llvm.org/D103535
Move the index variable used to track variables inside of the specific
processDataOperands functions.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D103924
These `arm_sve.cmp` functions are needed to generate scalable vector
masks as long as scalable vectors are not part of the standard types.
Once in standard, these can be removed and `std.cmp` can be used
instead.
Differential Revision: https://reviews.llvm.org/D103473
As a follow-up to the discussion in https://reviews.llvm.org/D103822,
make the templated `DictionaryAttr::getAs` take the name by `&&`
reference and properly forward the argument to the underlying `get`.
Temporarily support 2D and 3D while the TOSA Matmul op is updated to support batched operations.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D103854
This reverts commit e772216e70
(and fixup 7f6c878a2c).
The build is broken with gcc5 host compiler:
In file included from
from mlir/lib/Dialect/Utils/StructuredOpsUtils.cpp:9:
tools/mlir/include/mlir/IR/BuiltinAttributes.h.inc:424:57: error: type/value mismatch at argument 1 in template parameter list for 'template<class ItTy, class FuncTy, class FuncReturnTy> class llvm::mapped_iterator'
std::function<T(ptrdiff_t)>>;
^
tools/mlir/include/mlir/IR/BuiltinAttributes.h.inc:424:57: note: expected a type, got 'decltype (seq<ptrdiff_t>(0, 0))::const_iterator'
This is both more efficient and more ergonomic than going
through an std::string, e.g. when using llvm::utostr and
in string concat cases.
Unfortunately we can't just overload ::get(). This causes an
ambiguity because both Twine and StringRef implicitly convert
from std::string.
Differential Revision: https://reviews.llvm.org/D103754
One of the key algorithms used in the "mlir::verify(op)" method is the
dominance checker, which ensures that operand values properly dominate
the operations that use them.
The MLIR dominance implementation has a number of algorithmic problems,
and is not really set up in general to answer dense queries: its constant
factors are really slow, with multiple map lookups and scans, even in the
easy cases. Furthermore, when calling mlir::verify(module) or some other
high level operation, it makes sense to parallelize the dominator
verification of all the functions within the module.
This patch has a few changes to enact this:
1) It splits dominance checking into "IsolatedFromAbove" units. Instead
of building a monolithic DominanceInfo for everything in a module,
for example, it checks dominance for the module to all the functions
within it (noop, since there are no operands at this level) then each
function gets their own DominanceInfo for each of their scope.
2) It adds the ability for mlir::DominanceInfo (and post dom) to be
constrained to an IsolatedFromAbove region. There is no reason to
recurse into IsolatedFromAbove regions since use/def relationships
can't span this region anyway. This is already checked by the time
the verifier gets here.
3) It avoids querying DominanceInfo for trivial checks (e.g. intra-Block
references) to eliminate constant factor issues.
4) It switches to lazily constructing DominanceInfo because the trivial
check case handles the vast majority of the cases and avoids
constructing DominanceInfo entirely in some cases (e.g. at the module
level or for many Regions's that contain a single Block).
5) It parallelizes analysis of collections of IsolatedFromAbove operations,
e.g. each of the functions within a Module.
All together this is more than a 10% speedup on `firtool` in circt on a
large design when run in -verify-each mode (our default) since the verifier
is invoked after each pass.
Still todo is to parallelize the main verifier pass. I decided to split
this out to its own thing since this patch is already large-ish.
Differential Revision: https://reviews.llvm.org/D103373
LLVM Dialect uses builtin-integer types. The existing LLVM_AnyInteger
type constraint is a dupe of AnyInteger. This patch removes LLVM_AnyInteger
and replaces all usage with AnyInteger.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103839
Currently canonicalizations of a store and a cast try to fold all casts into the store.
In the case where the operand being stored is itself a cast, this is illegal as the type of the value being stored
will change. This PR fixes this by not checking the value for folding with a cast.
Depends on https://reviews.llvm.org/D103828
Differential Revision: https://reviews.llvm.org/D103829
In the interests of disabling misc-no-recursion across LLVM (this seems
like a stylistic choice that is not consistent with LLVM's
style/development approach) this NFC preliminary change adjusts all the
.clang-tidy files to inherit from their parents as much as possible.
This change specifically preserves all the quirks of the current configs
in order to make it easier to review as NFC.
I validated the change is NFC as follows:
```
for X in `cat ../files.txt`; do
  mkdir -p ../tmp/$(dirname $X)
  touch $(dirname $X)/blaikie.cpp
  clang-tidy -dump-config $(dirname $X)/blaikie.cpp > ../tmp/$(dirname $X)/after
  rm $(dirname $X)/blaikie.cpp
done
```
(similarly for the "before" state, without this patch applied)
```
for X in `cat ../files.txt`; do
  echo $X
  diff \
    ../tmp/$(dirname $X)/before \
    <(cat ../tmp/$(dirname $X)/after \
      | sed -e "s/,readability-identifier-naming\(.*\),-readability-identifier-naming/\1/" \
      | sed -e "s/,-llvm-include-order\(.*\),llvm-include-order/\1/" \
      | sed -e "s/,-misc-no-recursion\(.*\),misc-no-recursion/\1/" \
      | sed -e "s/,-clang-diagnostic-\*\(.*\),clang-diagnostic-\*/\1/")
done
```
(using sed to strip some add/remove pairs to reduce the diff and make it easier to read)
The resulting report is:
```
.clang-tidy
clang/.clang-tidy
2c2
< Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,clang-diagnostic-*,llvm-*,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes,-readability-identifier-naming,-misc-no-recursion'
---
> Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,clang-diagnostic-*,llvm-*,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes,-misc-no-recursion'
compiler-rt/.clang-tidy
2c2
< Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,clang-diagnostic-*,llvm-*,-llvm-header-guard,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes'
---
> Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,clang-diagnostic-*,llvm-*,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes,-llvm-header-guard'
flang/.clang-tidy
2c2
< Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,llvm-*,-llvm-include-order,misc-*,-misc-no-recursion,-misc-unused-parameters,-misc-non-private-member-variables-in-classes'
---
> Checks: 'clang-diagnostic-*,clang-analyzer-*,-*,llvm-*,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes,-llvm-include-order,-misc-no-recursion'
flang/include/flang/Lower/.clang-tidy
flang/include/flang/Optimizer/.clang-tidy
flang/lib/Lower/.clang-tidy
flang/lib/Optimizer/.clang-tidy
lld/.clang-tidy
lldb/.clang-tidy
llvm/tools/split-file/.clang-tidy
mlir/.clang-tidy
```
The `clang/.clang-tidy` change is a no-op, disabling an option that was never enabled.
The compiler-rt and flang changes are no-op reorderings of the same flags.
(side note, the .clang-tidy file in parallel-libs is broken and crashes
clang-tidy because it uses "lowerCase" as the style instead of "lower_case" -
so I'll deal with that separately)
Differential Revision: https://reviews.llvm.org/D103842
This patch simplifies the implementation of Sequence and makes it compatible with llvm::reverse.
It exposes the reverse iterators through rbegin/rend which prevents a dangling reference in std::reverse_iterator::operator++().
Differential Revision: https://reviews.llvm.org/D102679
* Mark the following methods const:
* `ArrayAttr::getAsRange`
* `ArrayAttr::getAsValueRange`
* `DictionaryAttr::getAs`
* Make `DictionaryAttr::getAs` generic over the name class, such that
`Identifier` and `StringRef` arguments get forwarded to the underlying
call to `get`. (Made generic to avoid introducing a dependency on
`include/mlir/IR/Identifier.h` as per the diff discussion.)
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D103822
Now that memref supports arbitrary element types, add support for memref of
memref and make sure it is properly converted to the LLVM dialect. The type
support itself avoids adding the interface to the memref type itself similarly
to other built-in types. This allows the shape, and therefore byte size, of the
memref descriptor to remain a lowering aspect that is easier to customize and
evolve as opposed to sanctifying it in the data layout specification for the
memref type itself.
Factor out the code previously in a testing pass to live in a dedicated data
layout analysis and use that analysis in the conversion to compute the
allocation size for memref of memref. Other conversions will be ported
separately.
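For illustration, a memref-of-memref that now converts (sketch):
```
%c0 = constant 0 : index
%outer = memref.alloc() : memref<4xmemref<?xf32>>
%inner = memref.load %outer[%c0] : memref<4xmemref<?xf32>>
```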
Depends On D103827
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D103828
Historically, MemRef only supported a restricted list of element types that
were known to be storable in memory. This is unnecessarily restrictive given
the open nature of MLIR's type system. Allow types to opt into being used as
MemRef elements by implementing a type interface. For now, the interface is
merely a declaration with no methods. Later, methods to query, e.g., the type
size or whether a type can alias elements of another type may be added.
Harden the "standard"-to-LLVM conversion against memrefs with non-builtin
types.
See https://llvm.discourse.group/t/rfc-memref-of-custom-types/3558.
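A hypothetical sketch of a type opting in (the type name is made up, dialect registration and storage details are omitted, and the exact header may differ):
```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Types.h"

// Attaching the interface trait is all that is needed for now, since the
// interface currently declares no methods.
struct MyNumberType
    : public mlir::Type::TypeBase<MyNumberType, mlir::Type, mlir::TypeStorage,
                                  mlir::MemRefElementTypeInterface::Trait> {
  using Base::Base;
};
```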
Depends On D103826
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D103827
Some places in the alloc-like op conversion use the converted index type
whereas other places use the pointer-sized integer type, which may not be the
same. Consistently use the converted index type, similarly to other address
calculations.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D103826
These `arm_sve.cmp` functions are needed to generate scalable vector
masks as long as scalable vectors are not part of the standard types.
Once in standard, these can be removed and `std.cmp` can be used
instead.
Differential Revision: https://reviews.llvm.org/D103473
We were accidentally only using the first found reference, instead of all of them. This revision fixes this by properly tracking all references to a symbol.
Differential Revision: https://reviews.llvm.org/D103730
For now the hover simply shows the same information as hovering on the operation
name. If necessary this can be tweaked to something symbol specific later.
Differential Revision: https://reviews.llvm.org/D103728
This revision adds support for hover on region operations, by temporarily removing the regions during printing. This revision also tweaks the hover format for operations to include symbol information, now that FuncOp can be shown in the hover.
Differential Revision: https://reviews.llvm.org/D103727
For an operation in the true/false destination of a branch,
one can assume that the branch condition was true/false if
only that edge can reach the operation.
Differential Revision: https://reviews.llvm.org/D101709
This patch converts the if condition on standalone data operations such as acc.update,
acc.enter_data and acc.exit_data to an scf.if with the operation in the if region.
It removes the operation when the if condition is constant and false, and removes the
condition when it is constant and true.
Conversion to scf.if is done in order to use the translation to the LLVM IR dialect out of the box.
Not sure whether this is the best approach or whether we should perform this during the translation
from OpenACC to the LLVM IR dialect. Any thoughts welcome.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103325
This patch adds canonicalization for the standalone data operations with a constant if condition.
It is extracted from this patch D103325.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103712
Convert data operands from the acc.parallel operation using the same conversion pattern as D102170.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103337
Implements better naming for results of spv.mlir.addressof ops by making the
op inherit from OpAsmOpInterface and implementing the associated
getAsmResultName(...) hook.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D103594
* Add hasUnitStride and hasZeroOffset to OffsetSizeAndStrideOpInterface. These functions are useful for various patterns. E.g., some vectorization patterns apply only for tensor ops with zero offsets and/or unit stride.
* Add getConstantIntValue and isEqualConstantInt helper functions, which are useful for implementing the two functions above, as well as various patterns; see the sketch below.
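A hedged sketch of how a pattern might use the new interface queries (the function name is illustrative):
```
#include "mlir/Interfaces/ViewLikeInterface.h"

// Gate a pattern on ops that read a full, contiguous view; both queries are
// the ones this change adds to OffsetSizeAndStrideOpInterface.
static bool isContiguousFullView(mlir::OffsetSizeAndStrideOpInterface op) {
  return op.hasZeroOffset() && op.hasUnitStride();
}
```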
Differential Revision: https://reviews.llvm.org/D103763
Controlled by a compiler option: if 32-bit indices can be handled
with zero- and sign-extension alike (viz. the indices are known to be
non-negative), scatter/gather operations can use the more efficient
32-bit SIMD version.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D103632
* Rename PadTensorOpVectorizationPattern to GenericPadTensorOpVectorizationPattern.
* Make GenericPadTensorOpVectorizationPattern a private pattern, to be instantiated via populatePadTensorOpVectorizationPatterns.
* Factor out parts of PadTensorOpVectorizationPattern into helper functions.
This commit prepares PadTensorOpVectorizationPattern for a series of subsequent commits that add more specialized PadTensorOp vectorization patterns.
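A short sketch of the resulting client-side usage (assuming the declaration lives in Linalg's Transforms.h, as is conventional):
```
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/IR/PatternMatch.h"

// The generic pattern is private now, so clients go through the populate
// entry point instead of instantiating it directly.
static void addPadVectorizationPatterns(mlir::RewritePatternSet &patterns) {
  mlir::linalg::populatePadTensorOpVectorizationPatterns(patterns);
}
```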
Differential Revision: https://reviews.llvm.org/D103681
Convert data operands from the acc.data operation using the same conversion pattern as D102170.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D103332
These started failing on one of our buildbots. I didn't completely root cause the situation and just added the explicit includes that correct the issue.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D103657
This revision adds assembly state tracking for uses of symbols, allowing for go-to-definition and references support for SymbolRefAttrs.
Differential Revision: https://reviews.llvm.org/D103585
This is a followup to D103422. The DenseMapInfo implementations for
ArrayRef and StringRef are moved into the ArrayRef.h and StringRef.h
headers, which means that these two headers no longer need to be
included by DenseMapInfo.h.
This required adding a few additional includes, as many files were
relying on various things pulled in by ArrayRef.h.
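A minimal sketch (my example) showing that including StringRef.h alone now suffices for DenseMap keys:
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/StringRef.h" // now also provides DenseMapInfo<StringRef>

int main() {
  llvm::DenseMap<llvm::StringRef, int> counts;
  counts["key"] += 1;
  return counts.lookup("key") == 1 ? 0 : 1;
}
```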
Differential Revision: https://reviews.llvm.org/D103491
Useful for "exhaustively" testing and benchmarking annotation combinations,
both to verify correctness and to search the state space for the best-performing one.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D103566
Introduces a test pass that rewrites PadTensorOps with static shapes as a sequence of:
```
linalg.init_tensor // to create output
linalg.fill // to initialize with padding value
linalg.generic // to copy the original contents to the padded tensor
```
The pass can be triggered with:
- `--test-linalg-transform-patterns="test-transform-pad-tensor"`
Differential Revision: https://reviews.llvm.org/D102804
Replace the uses of deprecated Structured Op Interface methods in TestLinalgElementwiseFusion.cpp, TestLinalgFusionTransforms.cpp, and Transforms.cpp. The patch is based on https://reviews.llvm.org/D103394.
Differential Revision: https://reviews.llvm.org/D103528
Move the core reducer algorithm into a library so that it'll be easier
to port to different projects.
Depends On D101046
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D101607
This removes the need to define the derived Operand class before the derived
Value class. The major benefit of this refactoring is that we no longer need
the OpaqueValue class, as OpOperand can now be defined after Value. As part of
this refactoring the BlockOperand and OpOperand classes are moved out of
UseDefLists.h and to more suitable locations in BlockSupport and Value. After
this change, UseDefLists.h is almost entirely composed of generic use def utilities.
Differential Revision: https://reviews.llvm.org/D103353
This simplifies various pieces of code that interact with the pass registry, e.g. this removes the need to register passes to get accurate pass pipeline descriptions when generating crash reproducers.
Differential Revision: https://reviews.llvm.org/D101880
This revision allows for attaching "debug labels" to patterns, and adds support to FrozenRewritePatternSet for filtering patterns based on these labels (in addition to the debug name of the pattern). This will greatly simplify the ability to write tests targeted towards specific patterns (in cases where many patterns may interact), and will also simplify debugging pattern application by observing how application changes when enabling/disabling specific patterns.
To enable better reuse of pattern rewrite options between passes, this revision also adds a new PassUtil.td file to the Rewrite/ library that will allow for passes to easily hook into a common interface for pattern debugging. Two options are used to seed this utility, `disable-patterns` and `enable-patterns`, which are used to enable the filtering behavior indicated above.
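A hedged sketch of the filtering entry point this adds (the label string is illustrative):
```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Rewrite/FrozenRewritePatternSet.h"

// Freeze a pattern set so that only patterns carrying the "my-label" debug
// label (or debug name) remain enabled.
static mlir::FrozenRewritePatternSet
freezeFiltered(mlir::RewritePatternSet &&patterns) {
  return mlir::FrozenRewritePatternSet(std::move(patterns),
                                       /*disabledPatternLabels=*/{},
                                       /*enabledPatternLabels=*/{"my-label"});
}
```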
Differential Revision: https://reviews.llvm.org/D102441
Currently the diagnostic reports the file:line:col, but some LSP
frontends require a non-empty range. Report either the range of an
identifier that starts at the location, or a range of length 1. Expose the
identifier-location-to-range helper and reuse it here.
Differential Revision: https://reviews.llvm.org/D103482
When LLVM and MLIR are built as subprojects (via add_subdirectory),
the CMake configuration that indicates where the MLIR libraries are is
not necessarily in the same cmake/ directory as LLVM's configuration.
This patch removes that assumption about where MLIRConfig.cmake is
located.
(As an additional note, the %llvm_lib_dir substitution was never
defined, and so find_package(MLIR) in the build was succeeding for
other reasons.)
Reviewed By: stephenneuendorffer
Differential Revision: https://reviews.llvm.org/D103276
Support for tensor types in the unrolled version will follow in a separate commit.
Add a new pass option to activate lowering of transfer ops with tensor types (default: deactivated).
Differential Revision: https://reviews.llvm.org/D102666
* A Reducer is a kind of RewritePattern, so it's just the same as writing a
graph rewrite.
* ReductionTreePass operates on Operation rather than ModuleOp, so that we
are able to reduce a nested structure (e.g., a module in a module) by
self-nesting.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D101046
I backed this off to make the previous patch easier to wrangle, but now
this is an efficient query and it is better to not replace it in CSE.
Differential Revision: https://reviews.llvm.org/D103494
The previous impl densely scanned the entire region starting with an op
when dominators were created, creating a DominatorTree for every region.
This is extremely expensive up front -- particularly for clients like
Linalg/Transforms/Fusion.cpp that construct DominanceInfo for a single
query. It is also extremely memory wasteful for IRs that use single
block regions commonly (e.g. affine.for) because it's making a
dominator tree for a region that has trivial dominance. The
implementation also had numerous minor inefficiencies, e.g.
doing multiple walks of the region tree or tryGetBlocksInSameRegion
building a DenseMap that it didn't need.
This patch switches to an approach where [Post]DominanceInfo is free
to construct, and which lazily constructs DominatorTree's for any
multiblock regions that it needs. This avoids the up-front cost
entirely, making its runtime proportional to the complexity of the
region tree instead of # ops in a region. This also avoids the memory
and time cost of creating DominatorTree's for single block regions.
Finally, this rewrites the implementation for simplicity and to avoid
the constant-factor problems the old implementation had.
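The scheme can be pictured with a small stand-alone sketch; Region and DomTree below are hypothetical stand-ins for the MLIR classes, not the actual API:
```
#include <map>
#include <memory>

struct Region {};
struct DomTree {
  explicit DomTree(Region &) { /* run the dominator algorithm once */ }
};

class LazyDominanceInfo {
  // One tree per (multi-block) region, created only on first query.
  std::map<Region *, std::unique_ptr<DomTree>> trees;

public:
  DomTree &getDomTree(Region *region) {
    auto &slot = trees[region];
    if (!slot)
      slot = std::make_unique<DomTree>(*region);
    return *slot;
  }
};

int main() {
  Region r;
  LazyDominanceInfo info; // construction is free
  info.getDomTree(&r);    // the tree is built lazily on first use
}
```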
Differential Revision: https://reviews.llvm.org/D103384
Depthwise convolution should support kernel dilation and non-dilation should
not be a special case. Updated op definition to include a dilation attribute.
This also adds a tosa.depthwise_conv2d lowering to linalg to support the new
linalg behavior.
Differential Revision: https://reviews.llvm.org/D103219
Previously, this assumed use of ModuleOp and FuncOp. There is no need to
restrict this, and using interfaces allows these patterns to be used
during dialect conversion to LLVM.
Some assertions were removed due to inconsistent implementation of
FunctionLikeOps.
Differential Revision: https://reviews.llvm.org/D103447