Commit Graph

107 Commits

Author SHA1 Message Date
gysit b7f2c108eb [mlir][linalg] Replace LinalgOps.h and LinalgTypes.h by a single header.
After the removal of the range type, Linalg no longer defines any types. This revision therefore consolidates LinalgOps.h and LinalgTypes.h into a single Linalg.h header. Additionally, LinalgTypes.cpp is renamed to LinalgDialect.cpp to follow the convention adopted by other dialects such as the tensor dialect.

Depends On D115727

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115728
2021-12-15 12:15:03 +00:00
Mehdi Amini be0a7e9f27 Adjust "end namespace" comment in MLIR to match new agreed-upon coding style
See D115115 and this mailing list discussion:
https://lists.llvm.org/pipermail/llvm-dev/2021-December/154199.html

Differential Revision: https://reviews.llvm.org/D115309
2021-12-08 06:05:26 +00:00
Aart Bik bb8632c1ef [mlir][sparse] fix broken build
The rebase and this commit crossed paths with the getFunc change.

Reviewed By: Chia-hungDuan

Differential Revision: https://reviews.llvm.org/D115270
2021-12-07 11:14:21 -08:00
Aart Bik 4f2ec7f983 [mlir][sparse] finalize sparse output in the presence of reductions
This revision implements sparse outputs (from scratch) in all cases where
the loops can be reordered with all but one parallel loop outer. If the
inner parallel loop appears inside one or more reduction loops, then an
access pattern expansion is required (a.k.a. workspaces in TACO parlance).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D115091
2021-12-07 10:54:29 -08:00
wren romano d8731bfc93 [mlir][sparse] Requiring emitCInterface parameter to be explicit
Depends On D115004

Improves code legibility by requiring the `emitCInterface` parameter to be explicit at all call sites, and by defining boolean aliases for that parameter.

Reviewed By: aartbik, rriddle

Differential Revision: https://reviews.llvm.org/D115005
2021-12-06 20:50:08 -08:00
wren romano f527fdf51e [mlir][sparse] Code cleanup for SparseTensorConversion
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115004
2021-12-06 14:13:35 -08:00
Aart Bik 0e85232fa3 [mlir][sparse] refine simply dynamic sparse tensor outputs
The proper test for sparse tensor outputs is a single condition throughout
the whole tensor index expression (not a general conjunction, since that
may include other conditions that cause cancellation).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D114810
2021-11-30 13:45:58 -08:00
Aart Bik 7d4da4e1ab [mlir][sparse] generalize sparse tensor output implementation
Moves sparse tensor output support forward by generalizing from injective
insertions only to also include reductions. This revision accepts the case with all
parallel outer and all reduction inner loops, since that case can still be handled
with an injective insertion. The next revision will allow the inner parallel loop
to move inward (but that will require "access pattern expansion", a.k.a. a "workspace").

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D114399
2021-11-29 16:15:53 -08:00
Alexander Belyaev 57470abc41 [mlir] Move memref.[tensor_load|buffer_cast|clone] to "bufferization" dialect.
https://llvm.discourse.group/t/rfc-dialect-for-bufferization-related-ops/4712

Differential Revision: https://reviews.llvm.org/D114552
2021-11-25 11:50:39 +01:00
wren romano d7d7ffe254 [mlir][sparse] Adding wrappers for constantOverheadTypeEncoding
Minor code cleanup

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D114392
2021-11-23 18:30:06 -08:00
Mogball 7c5ecc8b7e [mlir][vector] Insert/extract element can accept index
`vector::InsertElementOp` and `vector::ExtractElementOp` have had their `position`
operand changed to accept `AnySignlessIntegerOrIndex` for better interoperability with
operations that use `index`, such as affine loops.

LLVM's `extractelement` and `insertelement` can also accept `i64`, so lowering
directly to these operations without explicitly inserting casts is allowed. SPIR-V's
equivalent ops can also accept `i64`.
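
For illustration, a minimal sketch of the new form (value names and the vector shape are made up):

  %i = arith.constant 3 : index
  %e = vector.extractelement %v[%i : index] : vector<4xf32>
  %w = vector.insertelement %e, %v[%i : index] : vector<4xf32>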

Reviewed By: nicolasvasilache, jpienaar

Differential Revision: https://reviews.llvm.org/D114139
2021-11-18 22:40:29 +00:00
River Riddle 0c7890c844 [mlir] Convert NamedAttribute to be a class
NamedAttribute is currently represented as an std::pair, but this
creates an extremely clunky .first/.second API. This commit
converts it to a class, with better accessors (getName/getValue)
and also opens the door for more convenient API in the future.

Differential Revision: https://reviews.llvm.org/D113956
2021-11-18 05:39:29 +00:00
Aart Bik 1ce77b562d [mlir][sparse] refine lexicographic insertion to any tensor
The first version was vectors only. With some clever "path" insertion,
we now support any d-dimensional tensor. Up next: reductions too.

Reviewed By: bixia, wrengr

Differential Revision: https://reviews.llvm.org/D114024
2021-11-17 18:08:42 -08:00
Aart Bik f66e5769d4 [mlir][sparse] first version of "truly" dynamic sparse tensors as outputs of kernels
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D113705
2021-11-15 15:33:32 -08:00
Matthias Springer 4397a1baef [mlir][linalg][bufferize] Remove remaining linalg dependencies
* Move "linalg.inplaceable" attr name literals to BufferizableOpInterface.
* Use `memref.copy` by default. Override to `linalg.copy` in ComprehensiveBufferizePass.
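
A minimal sketch of the default copy (value names and dynamically sized 1-D memrefs are assumptions):

  memref.copy %src, %dst : memref<?xf32> to memref<?xf32>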

These are the last remaining code dependencies on Linalg in Comprehensive Bufferize. The next commit will make ComprehensiveBufferize independent of the Linalg dialect.

Differential Revision: https://reviews.llvm.org/D113457
2021-11-11 19:04:41 +09:00
Mehdi Amini f97e72aaca Use base class AsmParser/AsmPrinter in Types and Attribute print/parse method (NFC)
This decouples the printing/parsing from the "context" in which the parsing occurs.
This will allow invoking these methods directly using an OpAsmParser/OpAsmPrinter.

Differential Revision: https://reviews.llvm.org/D113637
2021-11-11 06:26:33 +00:00
Mehdi Amini f30a8a6f67 Change the contract with the type/attribute parsing to let the dispatch handle the mnemonic
This breaking change requires removing the printing of the mnemonic in the print()
method on Type/Attribute classes.
This makes it consistent with the parsing code, which already handles the
mnemonic outside of the parsing method.

This likely won't break the build for anyone, but tests will start
failing for dialects downstream. The fix is trivial and looks like
going from:

void emitc::OpaqueType::print(DialectAsmPrinter &printer) const {
  printer << "opaque<\"";

to:

void emitc::OpaqueType::print(DialectAsmPrinter &printer) const {
  printer << "<\"";

Reviewed By: rriddle, aartbik

Differential Revision: https://reviews.llvm.org/D113334
2021-11-10 00:47:15 +00:00
Kazu Hirata ca1a8be06b [Transforms] Fix a warning
This patch fixes:

  mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp:124:3:
  error: default label in switch which covers all enumeration values
  [-Werror,-Wcovered-switch-default]

by removing the default case.
2021-11-05 19:30:14 -07:00
wren romano 845561ec9d [mlir][sparse] Factoring magic numbers into a header
Addresses https://bugs.llvm.org/show_bug.cgi?id=52303

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D112962
2021-11-05 15:59:16 -07:00
Aart Bik 7373cabcda [mlir][sparse] implement full reduction "scalarization" across loop nests
The earlier reduction "scalarization" was only applied to a chain of
*innermost* *for* loops. This revision generalizes this to any
nesting of for- and while-loops. This implies that reductions can be
implemented with far fewer load and store operations. The chaining
is implemented with a forest of yield statements (though not as bad as
it would be if we also included the while-induction).

Fixes https://bugs.llvm.org/show_bug.cgi?id=52311

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D113078
2021-11-04 17:38:47 -07:00
Aart Bik 4aa9b39824 [mlir][sparse] reject sparsity annotation in "scalar" tensors
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D113152
2021-11-04 09:49:05 -07:00
wren romano 6be36fd794 [mlir][sparse] Improve handling of dynamic-sizes for sparse=>dense conversion
Allows the result to be more dynamically-sized than the source.
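
For example (value names and the #CSR encoding are assumptions):

  // The dense result may carry more dynamic sizes than the sparse source.
  %d = sparse_tensor.convert %s : tensor<8x8xf64, #CSR> to tensor<?x8xf64>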

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D112854
2021-10-29 17:44:40 -07:00
Aart Bik 185960dc8d [mlir][sparse] fix conversion bug when changing pointer/index sizes
Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D112770
2021-10-28 17:24:38 -07:00
wren romano 5389cdc8f6 [mlir][sparse] Adding dynamic-size support for sparse=>dense conversion
Depends On D110790

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D112674
2021-10-28 16:56:18 -07:00
wren romano 28882b6575 [mlir][sparse] Implementing sparse=>dense conversion.
Depends On D110882, D110883, D110884

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110790
2021-10-28 15:27:35 -07:00
Aart Bik 1e6ef0cfb0 [mlir][sparse] refine trait of sparse_tensor.convert
Rationale:
The previously used trait demanded that all types be the same,
which is not true (since the sparse part may change and the dim sizes
may be relaxed). This revision uses the correct trait and makes the
rank-match test explicit in the verify method.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D112576
2021-10-26 14:36:49 -07:00
Aart Bik c8d5dcb035 [mlir][sparse] refactor loop sequence codegen
This refactoring adds a few "event" functions (start/end loop-seq/loop) for
readability of the core function of codegen. This also prepares sparse tensor
output codegen, where these "event" functions will provide convenient
placeholders to start or stop insertion bookkeeping.

This revision also includes a few minor changes that had been
pending in my local workspace.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D112506
2021-10-26 13:42:21 -07:00
Aart Bik 1b15160ef3 [mlir][sparse] lower trivial tensor.cast on identical sparse tensors
Even though tensor.cast is not part of the sparse tensor dialect,
it may be used to cast static dimension sizes to dynamic dimension
sizes for sparse tensors without changing the actual sparse tensor
itself. Those cases should be lowered properly when replacing sparse
tensor types with their opaque pointers. Likewise, no-op sparse
conversions are handled by this revision in a similar manner.
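
A minimal sketch (the #CSR alias and the sizes are made up):

  #CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>
  // Only the static sizes are relaxed to dynamic; the sparse encoding stays the same.
  %d = tensor.cast %s : tensor<8x8xf64, #CSR> to tensor<?x?xf64, #CSR>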

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D112173
2021-10-25 10:30:19 -07:00
Jacques Pienaar cfb72fd3a0 [mlir] Switch arith, llvm, std & shape dialects to the prefixed-Both accessor form.
Following
https://llvm.discourse.group/t/psa-ods-generated-accessors-will-change-to-have-a-get-prefix-update-you-apis/4476,
this flips these dialects to the _Both prefixed form. This
changes the accessors to have a prefix. This was mostly possible without
breaking changes if the existing convenience methods were used.

(https://github.com/jpienaar/llvm-project/blob/main/clang-tools-extra/clang-tidy/misc/AddGetterCheck.cpp
was used to migrate the callers post flipping, using the output from
Operator.cpp)

Differential Revision: https://reviews.llvm.org/D112383
2021-10-24 18:36:33 -07:00
Aart Bik bd5494d127 [mlir][sparse] make index type explicit in public API of support library
The current implementation used explicit index->int64_t casts for some, but
not all, instances of passing values of type "index" to and from the sparse
support library. This revision makes the situation more consistent by
using the new "index_t" type at all such places (which reduces trivial
casting in the generated MLIR code). Note that the current revision still
assumes that "index" is 64 bits wide. If we want to support targets with
alternative "index" bit widths, we need to build the support library differently.
But the current revision is a step forward in making this requirement explicit
and more visible.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D112122
2021-10-20 12:46:31 -07:00
Aart Bik 9d1db3d4a1 [mlir][sparse] generalize sparse_tensor.convert on static/dynamic dimension sizes
This revision lifts the artificial restriction on having exact matches between
source and destination type shapes. A static size may become dynamic. We still
reject changing a dynamic size into a static size to avoid the need for a
runtime "assert" on the conversion. This revision also refactors some of the
conversion code to share same-content buffers.
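
A sketch of a now-accepted conversion (value names and the encoding are assumptions):

  // The static 32x32 sizes of the source become dynamic in the destination.
  %0 = sparse_tensor.convert %a : tensor<32x32xf64> to tensor<?x?xf64, #SparseMatrix>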

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111915
2021-10-18 13:54:03 -07:00
Aart Bik b24788abd8 [mlir][sparse] implement sparse tensor init operation
Next step towards supporting sparse tensor outputs.
Also some minor refactoring of enum constants as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general size arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D111771
2021-10-15 09:33:16 -07:00
wren romano 5167c36ab4 [mlir][sparse] Misc code cleanup
Depends On D111763

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D111766
2021-10-13 16:39:29 -07:00
wren romano 63d4fc9483 [mlir][sparse] Factoring out helper functions for generating constants
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D111763
2021-10-13 16:19:55 -07:00
Aart Bik a652e5b53a [mlir][sparse] emergency fix after constant -> arith.constant change
Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D111743
2021-10-13 10:26:17 -07:00
Aart Bik 35517a251d [mlir][sparse] add init sparse tensor operation
This is the first step towards supporting general sparse tensors as output
of operations. The init sparse tensor op is used to materialize an empty sparse
tensor of a given shape and sparsity into a subsequent computation (similar to
its dense tensor init operation counterpart).

Example:
  %c = sparse_tensor.init %d1, %d2 : tensor<?x?xf32, #SparseMatrix>
  %0 = linalg.matmul
    ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111684
2021-10-13 09:47:56 -07:00
Mogball a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.
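
A minimal before/after sketch (operand names made up):

  // Before: %sum = addi %a, %b : i32
  %sum = arith.addi %a, %b : i32
  // Before: %prod = mulf %x, %y : f32
  %prod = arith.mulf %x, %y : f32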

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
Aart Bik 849f016ce8 [mlir][sparse] accept affine subscripts in outer dimensions of dense memrefs
This relaxes vectorization of dense memrefs a bit so that affine expressions
are allowed in more outer dimensions. Vectorization of non-unit-stride
references is disabled, though, since it seems ineffective anyway.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111469
2021-10-11 11:45:14 -07:00
Aart Bik 16b8f4ddae [mlir][sparse] add a "release" operation to sparse tensor dialect
We have several ways to materialize sparse tensors (new and convert) but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.

*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that after the "release" call, no further memref-related operations are allowed on the tensor value. We expect the design to evolve over time, however, and arrive at a more satisfactory view of tensors and buffers eventually.
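
A minimal sketch (tensor value and encoding are assumptions):

  // Explicitly releases the underlying sparse storage at runtime.
  sparse_tensor.release %coo : tensor<?x?xf64, #SparseMatrix>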

Bug:
http://llvm.org/pr52046

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111099
2021-10-05 09:35:59 -07:00
wren romano af7ac1d95b [mlir][sparse] Sharing calls to adaptor.getOperands()[0]
This is preliminary work towards D110790. Depends On D110883.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110884
2021-10-01 14:20:31 -07:00
wren romano 14fffda979 [mlir][sparse] Factoring out allocaIndices()
This is preliminary work towards D110790. Depends On D110882.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110883
2021-10-01 14:18:56 -07:00
wren romano ca01034714 [mlir][sparse] Factoring out getZero() and avoiding unnecessary Type params
This is preliminary work towards D110790

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110882
2021-10-01 14:17:53 -07:00
wren romano 218954865e [mlir][sparse] Correcting a few typos
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110773
2021-09-30 11:42:46 -07:00
Chris Lattner fb093c8314 [ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser.
The former is redundant because the latter carries it as part of
its builder.  Add a getContext() helper method to DialectAsmParser
to make this more convenient, and stop passing the context around
explicitly.  This simplifies ODS generated parser hooks for attrs
and types.

This resolves PR51985

Recommit 4b32f8bac4 after fixing a dependency.

Differential Revision: https://reviews.llvm.org/D110796
2021-09-30 05:10:28 +00:00
Mehdi Amini 3310e0020c Revert "[ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser."
This reverts commit 4b32f8bac4.

Seems like the build is broken with -DBUILD_SHARED_LIBS=ON
2021-09-30 05:01:17 +00:00
Chris Lattner 4b32f8bac4 [ODS/AsmParser] Don't pass MLIRContext with DialectAsmParser.
The former is redundant because the latter carries it as part of
its builder.  Add a getContext() helper method to DialectAsmParser
to make this more convenient, and stop passing the context around
explicitly.  This simplifies ODS generated parser hooks for attrs
and types.

This resolves PR51985

Differential Revision: https://reviews.llvm.org/D110796
2021-09-29 21:36:05 -07:00
Aart Bik 7f1cb43d60 [mlir][sparse] simplify negi code generation with subi
The lack of a negi op let its details leak from the merger class into the codegen part.
Also, the special case for vector code was not needed; the type can be used directly!

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110677
2021-09-29 10:00:06 -07:00
Aart Bik ec97a205c3 [mlir][sparse] preserve zero-initialization for materializing buffers
This revision makes sure that when the output buffer materializes locally
(in contrast with passing in output tensors, either in-place or not
in-place), the zero-initialization assumption is preserved. This also adds
a bit more documentation on our sparse kernel assumptions (viz. TACO
assumptions).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D110442
2021-09-27 11:22:05 -07:00
Bixia Zheng fbd5821c6f Implement the conversion from sparse constant to sparse tensors.
The sparse constant provides a constant tensor in coordinate format. We first split the sparse constant into a constant tensor for indices and a constant tensor for values. We then generate a loop to fill a sparse tensor in coordinate format using the tensors for the indices and the values. Finally, we convert the sparse tensor in coordinate format to the destination sparse tensor format.
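
A sketch of the pattern (values, sizes, and the #CSR encoding are made up):

  // Constant in coordinate format: indices [[0, 0], [1, 6]] with values [1.0, 5.0].
  %c = arith.constant sparse<[[0, 0], [1, 6]], [1.0, 5.0]> : tensor<8x7xf32>
  %s = sparse_tensor.convert %c : tensor<8x7xf32> to tensor<8x7xf32, #CSR>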

Add tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110373
2021-09-27 09:47:29 -07:00
River Riddle b54c724be0 [mlir:OpConversionPattern] Add overloads for taking an Adaptor instead of ArrayRef
This has been a TODO for a long time, and it brings about many advantages (namely nice accessors, and less fragile code). The existing overloads that accept ArrayRef are now treated as deprecated and will be removed in a followup (after a small grace period). Most of the upstream MLIR usages have been fixed by this commit; the rest will be handled in a followup.

Differential Revision: https://reviews.llvm.org/D110293
2021-09-24 17:51:41 +00:00