When the iteration graph is cyclic (even after several attempts with fewer and fewer constraints), the current sparse compiler bails out and no rewriting happens. This revision adds new logic in which the sparse compiler tries to find a single sparse input tensor that breaks the cycle, and then inserts a proper sparse conversion operation. This way, more incoming kernels can be handled!
Note that the resulting code is not optimal (although it more or less keeps the proper "sparse" complexity), and further improvements should be added (especially when the kernel directly yields without computation, such as the transpose example). However, handling is better than not handling ;-)
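As a rough sketch (the sparse encodings #CSR and #CSC below are hypothetical, user-defined attributes), the new handling amounts to materializing one sparse input in a different format before running the kernel:

```mlir
// Break the cycle by converting one sparse input into an encoding whose
// dimension ordering fits the iteration graph; the rest of the kernel can
// then be generated as before.
%t = sparse_tensor.convert %a : tensor<3x4xf64, #CSR> to tensor<3x4xf64, #CSC>
```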
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D128847
This feature is tested by a unit test since not many places in the codebase
use SubElementTypeInterface.
Differential Revision: https://reviews.llvm.org/D127539
Since only mutable types and attributes can go into infinite recursion
inside SubElementInterface::walkSubElement, and there are only a few of
them, we introduce new traits for Type and Attribute:
TypeTrait::IsMutable and AttributeTrait::IsMutable, respectively. They
indicate whether a type or attribute is mutable, and are required if the
ImplType defines a `mutate` function.
Then, inside SubElementInterface, we use a set to record the mutable
types and attributes that have already been visited.
Differential Revision: https://reviews.llvm.org/D127537
We currently generate reproducer configurations using a comment placed at
the top of the generated .mlir file. This is rather hacky given that comments
carry no semantic meaning in the source file and can easily be dropped. This
strategy also wouldn't work if/when we have a bitcode format. This commit
switches to using an external assembly resource, which is verifiable, works
naturally with a hypothetical bitcode format, and removes the awkward
processing in mlir-opt for splicing comments and re-applying command line
options. With the removal of the command line munging, this opens up new
possibilities for executing reproducers in memory.
Differential Revision: https://reviews.llvm.org/D126447
This commit enables support for providing and processing external
resources within MLIR assembly formats. This is a mechanism with which
dialects, and external clients, may attach additional information when
printing IR without that information being encoded in the IR itself.
External resources are not uniqued within the MLIR context, are not
attached directly to any operation, and are solely intended to live and be
processed outside of the immediate IR. There are many potential uses of this
functionality; for example, MLIR's pass crash reproducer could utilize this to
attach a resource for the pass pipeline executing when a crash occurs. Other
potential uses include embedding large amounts of binary data, such as weights
in ML applications, that shouldn't be copied directly into the MLIR context but
need to be kept adjacent to the IR.
External resources are encoded using a key-value pair nested within a
dictionary anchored by name either on a dialect, or an externally registered
entity. The key is an identifier used to disambiguate the data. The value
may be stored in various limited forms, but general encodings use a string
(human readable) or blob format (binary). Within the textual format, an
example may be of the form:
```mlir
{-#
  // The `dialect_resources` section within the file-level metadata
  // dictionary is used to contain any dialect resource entries.
  dialect_resources: {
    // Here is a dictionary anchored on "foo_dialect", which is a dialect
    // namespace.
    foo_dialect: {
      // `some_dialect_resource` is a key to be interpreted by the dialect,
      // and used to initialize/configure/etc.
      some_dialect_resource: "Some important resource value"
    }
  },
  // The `external_resources` section within the file-level metadata
  // dictionary is used to contain any non-dialect resource entries.
  external_resources: {
    // Here is a dictionary anchored on "mlir_reproducer", which is an
    // external entity representing MLIR's crash reproducer functionality.
    mlir_reproducer: {
      // `pipeline` is an entry that holds a crash reproducer pipeline
      // resource.
      pipeline: "func.func(canonicalize,cse)"
    }
  }
#-}
```
Differential Revision: https://reviews.llvm.org/D126446
The original code is more readable because the goal is to check if the given
value does *not* lie in the range. It is harder to understand this by
reading the rewritten code.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D128753
Should be NFC. We can just do the base conversion manually and avoid
warnings about it. Clang before Clang 13 didn't implement P1825 and
complains:
mlir/lib/Analysis/Presburger/IntegerRelation.cpp:226:10: warning: local variable 'result' will be copied
despite being returned by name [-Wreturn-std-move]
return result;
^~~~~~
mlir/lib/Analysis/Presburger/IntegerRelation.cpp:226:10: note: call 'std::move' explicitly to avoid copying
return result;
^~~~~~
std::move(result)
Consecutive complex.neg operations are redundant, so we can canonicalize them away to the original operand.
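For illustration, with an assumed complex<f32> type, the fold looks like:

```mlir
// A double negation cancels out; after canonicalization, uses of %2 are
// replaced by %0 and both complex.neg ops become dead.
%1 = complex.neg %0 : complex<f32>
%2 = complex.neg %1 : complex<f32>
```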
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D128781
1. Remove the redundant collapse clause in the MLIR OpenMP worksharing-loop
operation.
2. Fix several typos.
3. Refactor the chunk size type conversion, since CreateSExtOrTrunc performs
both the type check and the type conversion.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D128338
`enableSplitting` simply controls whether we should split
or use the full buffer. `insertMarkerInOutput` controls whether split markers
should be inserted between processed output chunks.
These options allow for merging the duplicate code paths we have
when splitting is optional.
Differential Revision: https://reviews.llvm.org/D128764
Also add test cases, and extend support for `computeReprWithOnlyDivLocals` from `IntegerPolyhedron` to `IntegerRelation` and `PresburgerRelation`.
Depends on D128736.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D128737
Also add test cases for this. Both IntegerRelation::addLocalFloorDiv and the fixed implementation of subtraction need to compute division inequalities from the dividend and divisor, so this also adds helper util functions to avoid duplicating that logic.
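For reference, for a dividend expression $e$ and a positive divisor $d$, a local variable $q = \lfloor e / d \rfloor$ is captured by the pair of inequalities

$$ d \cdot q \;\le\; e \;\le\; d \cdot q + d - 1, $$

which is the form the shared helpers are meant to produce.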
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D128736
Currently, in the Presburger library, we use the words "variables" and
"identifiers" interchangeably. This patch changes this to use only "variables" to
refer to the variables of PresburgerSpace.
The reasoning behind this change is that the current usage of the word "identifier"
is misleading. Variables do not "identify" anything; the information attached to them is the
actual "identifier" for the variable. The word "identifier" will later be used
to refer to the information attached to each variable in the space.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D128585
This is already partially the case, but we can rely more heavily on interface libraries and how they are imported/exported in order to simplify the implementation of the MLIR Python functions in CMake.
This patch also makes a couple of other changes:
1) Adds a new CMake function which handles "pure" sources. This was previously done inline.
2) Moves the headers associated with CAPI libraries to the libraries themselves. These were previously managed in a separate source target; they can now be added directly to the CAPI libraries using DECLARED_HEADERS.
3) Cleans up some dependencies that surfaced as issues during the refactor.
This is a big CMake change that should have no impact on the build of MLIR or on the produced *build tree*. However, this change fixes an issue with the *install tree* of MLIR, which was previously unusable for projects like torch-mlir because both the "pure" and "extension" targets were pointing to either the build or source trees.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D128230
This pattern can kick in when the source of the broadcast has a shape
that is a prefix/suffix of the result of the shape_cast.
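A hypothetical instance (vector types chosen only for illustration): the broadcast source vector<4xf32> is a suffix of the shape_cast result vector<6x4xf32>, so the pair can fold into a single broadcast:

```mlir
%b = vector.broadcast %v : vector<4xf32> to vector<2x3x4xf32>
%c = vector.shape_cast %b : vector<2x3x4xf32> to vector<6x4xf32>
// folds to: %c = vector.broadcast %v : vector<4xf32> to vector<6x4xf32>
```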
Differential Revision: https://reviews.llvm.org/D128734
Enforce the assumption made on tensor buffers explicitly. When in-place,
reuse the buffer, but fill with all zeroes for the non-update case, since
the kernel assumes all elements are written to. When not in-place, zero
out the new buffer when materializing or when no updates occur. Copy the
original tensor value when updates occur. This prepares migrating to the
new bufferization strategy, where these assumptions must be made explicit.
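A minimal sketch of the zero-fill on materialization, assuming a static f64 buffer (shape and ops are illustrative only, not the exact code emitted):

```mlir
// Zero out a newly materialized buffer so the kernel's assumption that all
// elements are written to holds.
%zero = arith.constant 0.0 : f64
%buf  = memref.alloc() : memref<32x16xf64>
linalg.fill ins(%zero : f64) outs(%buf : memref<32x16xf64>)
```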
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D128691
This attribute is similar to DenseElementsAttr but does not support
splat. As such it has a much simpler API and does not need any smart
iterator: it exposes direct ArrayRef access.
A new syntax is introduced so that the generic printing/parsing looks
like:
[:i64 1, -2, 3]
This attribute begins like an ArrayAttr but has a `:` token after the
opening square bracket to introduce the element type (supported are I8,
I16, I32, I64, F32, F64) and the comma-separated list for the data.
This is particularly convenient for attributes intended to be small,
like those referring to shapes.
For example a `transpose` operation with a `dims` attribute could be
defined as such:
let arguments = (ins AnyTensor:$input, DenseI64ArrayAttr:$dims);
let assemblyFormat = "$input `dims` `=` $dims attr-dict : type($input)";
And printed this way (the element type is elided in this case):
transpose %input dims = [0, 2, 1] : tensor<2x3x4xf32>
The C++ API for dims would just directly return an ArrayRef<int64_t>.
RFC: https://discourse.llvm.org/t/rfc-introduce-a-new-dense-array-attribute/63279
Recommit with a custom DenseArrayBaseAttrStorage class to ensure
over-alignment of the storage to the largest type.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D123774
This attribute is similar to DenseElementsAttr but does not support
splat. As such it has a much simpler API and does not need any smart
iterator: it exposes direct ArrayRef access.
A new syntax is introduced so that the generic printing/parsing looks
like:
[:i64 1, -2, 3]
This attribute begins like an ArrayAttr but has a `:` token after the
opening square bracket to introduce the element type (supported are I8,
I16, I32, I64, F32, F64) and the comma-separated list for the data.
This is particularly convenient for attributes intended to be small,
like those referring to shapes.
For example a `transpose` operation with a `dims` attribute could be
defined as such:
let arguments = (ins AnyTensor:$input, DenseI64ArrayAttr:$dims);
let assemblyFormat = "$input `dims` `=` $dims attr-dict : type($input)";
And printed this way (the element type is elided in this case):
transpose %input dims = [0, 2, 1] : tensor<2x3x4xf32>
The C++ API for dims would just directly return an ArrayRef<int64_t>.
RFC: https://discourse.llvm.org/t/rfc-introduce-a-new-dense-array-attribute/63279
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D123774
This was previously implemented as part of the BufferizableOpInterface of ForEachThreadOp. Move the implementation to ParallelInsertSliceOp to be consistent with the remaining ops and to have a nice example op that can serve as a blueprint for other ops.
Differential Revision: https://reviews.llvm.org/D128666
Add basic canonicalization for consecutive complex.add and sub operations.
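For illustration, with an assumed complex<f32> type, the folds are presumably of the form sub(add(a, b), b) -> a and add(sub(a, b), b) -> a:

```mlir
// Adding %b and then subtracting it again is a no-op; %back canonicalizes
// to %a.
%sum  = complex.add %a, %b : complex<f32>
%back = complex.sub %sum, %b : complex<f32>
```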
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D128702