Commit Graph

30 Commits

Author SHA1 Message Date
Aart Bik 319072f4e3 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Differential Revision: https://reviews.llvm.org/D101573
2021-04-29 15:52:35 -07:00
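A minimal sketch of the migration described above, assuming the op names used in the revision (the semantics are unchanged; only the dialect namespace moves):

    // Before: sparse support ops lived in linalg.
    %v = linalg.sparse_values %t : tensor<?x?xf64> to memref<?xf64>
    // After: the same op in the new sparse tensor dialect.
    %v = sparse_tensor.values %t : tensor<?x?xf64> to memref<?xf64>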
Mehdi Amini 086e0f05bf Revert "[mlir][sparse] migrate sparse operations into new sparse tensor dialect"
This reverts commit a6d92a9711.

The build with -DBUILD_SHARED_LIBS=ON is broken.
2021-04-29 20:59:41 +00:00
Aart Bik a6d92a9711 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101488
2021-04-29 12:09:10 -07:00
Matthias Springer 64f7fb5dfc [mlir] Support masked N-D vector transfer ops in ProgressiveVectorToSCF.
Mask vectors are handled similarly to data vectors in N-D TransferWriteOp. They are copied into a temporary memory buffer, which can be indexed into with non-constant values.

Differential Revision: https://reviews.llvm.org/D101136
2021-04-23 18:23:51 +09:00
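A minimal sketch of a masked N-D transfer write of the kind this commit lowers (shapes are illustrative):

    // Lanes where %mask is false are not written to memory.
    vector.transfer_write %vec, %A[%i, %j], %mask
        : vector<4x8xf32>, memref<?x?xf32>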
Matthias Springer 545f98efc7 [mlir] Support masked 1D vector transfer ops in ProgressiveVectorToSCF
Support for masked N-D vector transfer ops will be added in a subsequent commit.

Differential Revision: https://reviews.llvm.org/D101132
2021-04-23 18:08:50 +09:00
Matthias Springer a819e73393 [mlir] Support broadcast dimensions in ProgressiveVectorToSCF
This commit adds support for broadcast dimensions in permutation maps of vector transfer ops.

Also fixes a bug in VectorToSCF that generated incorrect in-bounds checks for broadcast dimensions.

Differential Revision: https://reviews.llvm.org/D101019
2021-04-23 18:01:32 +09:00
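For reference, a broadcast dimension is a permutation map result that is the constant 0; a sketch with illustrative shapes:

    // The first map result is 0, so all 4 rows of the result vector
    // read the same 8 elements of %A.
    %v = vector.transfer_read %A[%i, %j], %pad
        {permutation_map = affine_map<(d0, d1) -> (0, d1)>}
        : memref<?x?xf32>, vector<4x8xf32>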
Matthias Springer ab154176bf [mlir] Support dimension permutations in ProgressiveVectorToSCF
This commit adds support for dimension permutations in permutation maps of vector transfer ops.

Differential Revision: https://reviews.llvm.org/D101007
2021-04-23 17:46:35 +09:00
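A sketch of a permuted transfer now handled by the pass (illustrative shapes):

    // The map swaps the dimensions, so the transfer reads a transposed
    // slab of %A: dimension d0 varies along the last vector dimension.
    %v = vector.transfer_read %A[%i, %j], %pad
        {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
        : memref<?x?xf32>, vector<4x8xf32>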
Matthias Springer afaf36b69e [mlir] Handle strided 1D vector transfer ops in ProgressiveVectorToSCF
Strided 1D vector transfer ops are 1D transfers operating on a memref dimension different from the last one. Such transfer ops do not access contiguous memory blocks (vectors), but access memory in a strided fashion. In the absence of a mask, strided 1D vector transfer ops can also be lowered using matrix.column.major.* LLVM instructions (in a later commit).

Subsequent commits will extend the pass to handle the remaining missing permutation maps (broadcasts, transposes, etc.).

Differential Revision: https://reviews.llvm.org/D100946
2021-04-23 17:19:22 +09:00
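A sketch of such a strided 1D transfer (illustrative shapes): the vector runs along the first memref dimension, so consecutive lanes are one row apart in memory rather than adjacent.

    // The map targets d0 instead of the innermost d1: a strided access.
    %v = vector.transfer_read %A[%i, %j], %pad
        {permutation_map = affine_map<(d0, d1) -> (d0)>}
        : memref<?x?xf32>, vector<4xf32>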
Matthias Springer 7cc8106f67 [mlir] Progressively lower vector to SCF
Add a new ProgressiveVectorToSCF pass that lowers vector transfer ops to SCF by gradually unpacking one dimension at a time. Unpacking stops at 1D, but can be configured to stop earlier, should the HW support (N>1)-D vectors.

The current implementation cannot yet handle permutation maps, masks, tensor types, or unrolling. These will be added in subsequent commits. Once its features are on par with VectorToSCF, this implementation will replace VectorToSCF.

Differential Revision: https://reviews.llvm.org/D100622
2021-04-20 18:49:36 +09:00
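A rough sketch of the unpacking idea, heavily simplified (the actual pass stores partial results through a temporary buffer; see D100622 for details):

    // A 2-D transfer...
    %v = vector.transfer_read %A[%c0, %c0], %pad
        : memref<4x8xf32>, vector<4x8xf32>
    // ...is unpacked into a loop over 1-D transfers.
    scf.for %i = %c0 to %c4 step %c1 {
      %row = vector.transfer_read %A[%i, %c0], %pad
          : memref<4x8xf32>, vector<8xf32>
      // store %row into a temporary buffer at position %i
    }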
Aart Bik 916f3e16bd [mlir][vector][avx] add AVX dot product to X86Vector dialect with lowering
In the long run, we want to unify the dot product codegen solutions between
all target architectures, but this intrinsic enables experimenting with AVX
specific implementations in the meantime.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D100593
2021-04-15 15:01:39 -07:00
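A minimal sketch, assuming the op is spelled x86vector.avx.intr.dot as in the revision:

    // 256-bit dot product via vdpps; the two 128-bit halves are
    // reduced independently by the instruction.
    %d = x86vector.avx.intr.dot %a, %b : vector<8xf32>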
Emilio Cota cf20286bcc [mlir] Use default lli JIT in Integration tests
Now that 9b8e7a9d ("[lli] Honor the --entry-function flag in orc and
orc-lazy modes") has fixed https://llvm.org/PR49906, the explicit
MCJIT workaround is no longer needed.

Reviewed By: mehdi_amini, aartbik

Differential Revision: https://reviews.llvm.org/D100407
2021-04-14 12:55:00 -07:00
Matthias Springer 3f4c1e13bc [mlir] Fix return values of AMX tests
Differential Revision: https://reviews.llvm.org/D100422
2021-04-14 09:40:49 +09:00
Emilio Cota 0b63e3222b [mlir] X86Vector: Add AVX Rsqrt
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D99818
2021-04-13 08:43:48 -07:00
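A minimal sketch, assuming the op is spelled x86vector.avx.rsqrt:

    // Per-lane approximate reciprocal square root.
    %r = x86vector.avx.rsqrt %a : vector<8xf32>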
Eugene Zhulenev a6628e596e [mlir] Async: add automatic reference counting at async.runtime operations level
Depends On D95311

The previous automatic-ref-counting pass worked with high-level async operations (e.g. async.execute); however, async value reference counting is a runtime implementation detail.

The new pass mostly relies on the liveness analysis to place drop_ref operations, and does better verification of the CFG with different liveIn sets in block successors.

This is almost an NFC change. No new reference counting ideas, just a cleanup of the previous version.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D95390
2021-04-12 18:54:55 -07:00
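A hedged sketch of the kind of IR the pass inserts (the exact assembly format and count attribute are assumptions; see D95390 for the real details):

    // After the last use of %token on a path, release one reference.
    async.runtime.drop_ref %token {count = 1 : i32} : !async.token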
Emilio Cota 1310a19af0 [mlir] Use MCJIT to fix integration tests
Since c42c67ad ('Re-apply "[lli] Make -jit-kind=orc the default JIT
engine"'), ORC is the default JIT. Unfortunately, ORC seems to
ignore the --entry-function flag, which breaks all tests that
use the flag, namely the AMX and X86Vector integration tests.
This has been reported in PR#49906
(https://bugs.llvm.org/show_bug.cgi?id=49906).

Work around this by explicitly selecting MCJIT.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D100344
2021-04-12 18:25:33 -07:00
Emilio Cota 8508a63b88 [mlir] Rename AVX512 dialect to X86Vector
We will soon be adding non-AVX512 operations to MLIR, such as AVX's rsqrt. In https://reviews.llvm.org/D99818 several possibilities were discussed, namely to (1) add non-AVX512 ops to the AVX512 dialect, (2) add more dialects (e.g. an AVX dialect for AVX rsqrt), and (3) expand the scope of the AVX512 dialect to include these SIMD x86 ops, thereby renaming the dialect to something more accurate such as X86Vector.

Consensus was reached on option (3), which this patch implements.

Reviewed By: aartbik, ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D100119
2021-04-12 19:20:04 +02:00
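In practice the rename keeps op names and only changes the dialect prefix; for example (op names from the dialect at the time):

    avx512.mask.rndscale  ->  x86vector.avx512.mask.rndscale
    avx512.vp2intersect   ->  x86vector.avx512.vp2intersect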
Tobias Gysi b614ada0e8 [mlir] add support for index type in vectors.
The patch enables the use of the index type in vectors. It is a prerequisite for supporting vectorization of indexed Linalg operations. This refactoring became possible due to the newly introduced data layout infrastructure. The data layout of a module defines the bitwidth of the index type needed to verify bitcasts and similar vector operations.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D99948
2021-04-08 08:17:13 +00:00
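A small example of what this enables (std-dialect constant syntax of the time):

    // A constant vector of index; its bitwidth comes from the module's data layout.
    %idx = constant dense<[0, 1, 2, 3]> : vector<4xindex>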
Matthias Springer 65a3f28939 [mlir] Add "mask" operand to vector.transfer_read/write.
Also factors out out-of-bounds mask generation from vector.transfer_read/write into a new MaterializeTransferMask pattern.

Differential Revision: https://reviews.llvm.org/D100001
2021-04-07 21:33:13 +09:00
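A sketch of the new optional mask operand (illustrative shapes):

    // Lanes where %mask is false yield the padding value %pad instead
    // of reading memory.
    %v = vector.transfer_read %A[%i], %pad, %mask
        : memref<?xf32>, vector<16xf32>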
Matthias Springer 95f8135043 [mlir] Change vector.transfer_read/write "masked" attribute to "in_bounds".
This is in preparation for adding a new "mask" operand. The existing "masked" attribute was used to specify dimensions that may be out-of-bounds. Such transfers can be lowered to masked load/stores. The new "in_bounds" attribute is used to specify dimensions that are guaranteed to be within bounds. (The semantics are inverted.)

Differential Revision: https://reviews.llvm.org/D99639
2021-03-31 18:04:22 +09:00
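A before/after sketch of the attribute change on the same transfer:

    // Before: "masked" dimensions may be out-of-bounds.
    %v = vector.transfer_read %A[%i], %pad {masked = [false]}
        : memref<?xf32>, vector<16xf32>
    // After: "in_bounds" dimensions are guaranteed in-bounds (inverted).
    %v = vector.transfer_read %A[%i], %pad {in_bounds = [true]}
        : memref<?xf32>, vector<16xf32>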
Mehdi Amini cdb6eb7e83 Update syntax for amx.tile_muli to use two Unit attr to mark the zext case
This ties the annotation to the operand, and the use of a keyword makes
its meaning more explicit and readable.

Differential Revision: https://reviews.llvm.org/D99001
2021-03-20 04:12:24 +00:00
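A sketch of the updated syntax, with a zext keyword tied to each operand (tile shapes are illustrative):

    // Both i8 operands are zero-extended before the i32 tile multiply-add.
    %0 = amx.tile_muli %a zext, %b zext, %c
        : vector<16x64xi8>, vector<16x64xi8>, vector<16x16xi32>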
Aart Bik 9705cafc0f [mlir][amx] regression test for tile-muli (all zero/sign-extension combinations)
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98742
2021-03-17 10:04:04 -07:00
Aart Bik b388bbd3f9 [mlir][amx] blocked tilezero integration test
This adds a new integration test. It also adapts
existing tests to a recent memref.XXX change.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D98680
2021-03-16 08:49:31 -07:00
Aart Bik 6ad7b97e20 [mlir][amx] Add Intel AMX dialect (architectural-specific vector dialect)
The Intel Advanced Matrix Extensions (AMX) provide a tile matrix
multiply unit (TMUL), a tile control register (TILECFG), and eight
tile registers TMM0 through TMM7 (TILEDATA). This new MLIR dialect
provides a bridge between MLIR concepts like vectors and memrefs
and the lower level LLVM IR details of AMX.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98470
2021-03-15 17:59:05 -07:00
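A hedged sketch of the bridge the dialect provides between vectors/memrefs and AMX (op spellings per the revision; shapes are illustrative):

    // Load two tiles, multiply-accumulate into a zeroed tile, store back.
    %z = amx.tile_zero : vector<16x16xf32>
    %a = amx.tile_load %A[%i, %j] : memref<?x?xbf16> into vector<16x32xbf16>
    %b = amx.tile_load %B[%i, %j] : memref<?x?xbf16> into vector<16x32xbf16>
    %c = amx.tile_mulf %a, %b, %z
        : vector<16x32xbf16>, vector<16x32xbf16>, vector<16x16xf32>
    amx.tile_store %C[%i, %j], %c : memref<?x?xf32>, vector<16x16xf32>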
Alex Zinenko 7aa6f3aa0c [mlir] fix integration tests post e2310704d8
The commit in question moved some ops across dialects but did not update
some of the target-specific integration tests that use these ops,
presumably because the corresponding target hardware was not available.
Fix these tests.
2021-03-15 14:41:27 +01:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
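For existing IR the move is a mechanical prefix change; a small example:

    // Before (std dialect):
    %m = alloc() : memref<16xf32>
    dealloc %m : memref<16xf32>
    // After (memref dialect):
    %m = memref.alloc() : memref<16xf32>
    memref.dealloc %m : memref<16xf32>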
Matthias Springer 581672be04 [mlir][AVX512] Add while loop-based sparse vector-vector dot product variants.
Differential Revision: https://reviews.llvm.org/D98480
2021-03-15 16:59:10 +09:00
Matthias Springer c40e0d7609 [mlir][AVX512] Implement sparse vector dot product integration test.
This test operates on two hardware-vector-sized vectors and utilizes vp2intersect and mask.compress.

Differential Revision: https://reviews.llvm.org/D98099
2021-03-11 13:00:17 +09:00
Matthias Springer acce0ea70c [mlir][AVX512] Add mask.compress to AVX512 dialect.
Adds mask.compress to the AVX512 dialect and defines a lowering to the LLVM dialect.

Differential Revision: https://reviews.llvm.org/D97611
2021-03-06 10:02:48 +09:00
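A minimal sketch, assuming the assembly format takes the mask first and then the source vector (see D97611 for the exact format):

    // Pack the lanes of %a selected by %k into the low lanes of the result.
    %r = avx512.mask.compress %k, %a : vector<16xi1>, vector<16xf32>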
Aart Bik df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow for higher-dimensional uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define scatters and gathers on dense operands
of higher dimensions too.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
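Spelling out the full form of the new syntax with the mask and pass-through operands (shapes are illustrative):

    %g = vector.gather %base[%i, %j] [%kvector], %mask, %pass_thru
        : memref<16x16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
          into vector<16xf32>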
Mehdi Amini 99b0032ce0 Move the MLIR integration tests as a subdirectory of test (NFC)
This does not change the behavior directly: the tests only run when
`-DMLIR_INCLUDE_INTEGRATION_TESTS=ON` is configured. However, running
`ninja check-mlir` will now run all the tests within a single
lit invocation. The previous behavior would wait for all the integration
tests to complete before starting to run the first regular test. The
test results were also reported separately. This change unifies all
of this and allows concurrent execution of the integration tests with
the regular non-regression and unit tests.

Differential Revision: https://reviews.llvm.org/D97241
2021-02-23 05:55:47 +00:00