Rationale:
Previously, providing the wrong number of sparse/dense annotations was
either silently ignored or caused a crash in an unrelated place. This minor
change verifies that the number of annotations matches the tensor's rank.
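A minimal sketch of the kind of check this adds; the names below are purely
illustrative and not the actual MLIR verification code:

  #include <vector>

  // Hypothetical per-dimension annotation kinds (illustrative only).
  enum class DimKind { kDense, kSparse };

  // Reject a mismatch up front instead of silently ignoring it or
  // crashing later in an unrelated place.
  bool verifyAnnotations(const std::vector<DimKind> &annotations,
                         unsigned rank) {
    return annotations.size() == rank;
  }
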
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D97034
Rationale:
Narrower types for overhead storage yield a smaller memory footprint for
sparse tensors and thus need to be supported. Also, more value types
need to be supported to deal with all kinds of kernels. Since the
"one-size-fits-all" sparse storage scheme implementation is used
instead of actual codegen, the library needs to be able to support
all combinations of desired types. With some crafty templating and
overloading, the actual code for this is kept reasonably sized though.
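A rough sketch of what such a templated storage scheme could look like; the
class name and layout here are assumptions, not the library's actual
implementation:

  #include <cstdint>
  #include <vector>

  // Overhead storage (pointers/indices) and values parameterized on their
  // element types, so each requested width gets its own instantiation.
  template <typename P, typename I, typename V>
  struct SparseTensorStorage {
    std::vector<std::vector<P>> pointers; // per-dimension pointer arrays
    std::vector<std::vector<I>> indices;  // per-dimension index arrays
    std::vector<V> values;                // nonzero values
  };

  // For example, 32-bit pointers, 16-bit indices, and float values:
  // SparseTensorStorage<uint32_t, uint16_t, float> tensor;
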
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D96819
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class citizen SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library. Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with an annotated Linalg op on one end and
a JIT/AOT executable on the other).
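A minimal sketch of the kind of "glue" interface this implies; the function
names and signatures below are hypothetical, not the support library's actual
entry points:

  #include <cstdint>

  // The storage scheme stays behind an opaque pointer, so generated code
  // only ever passes a raw handle around and calls back into the library.
  extern "C" void *newSparseTensor(const char *filename); // build from file
  extern "C" uint64_t sparseNumEntries(void *tensor);     // query the handle
  extern "C" void deleteSparseTensor(void *tensor);       // release storage
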
Reviewed By: nicolasvasilache, bixia
Differential Revision: https://reviews.llvm.org/D95847
Rationale:
Since I made the argument that metadata helps with extra
verification checks, I better actually do that ;-)
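A minimal sketch of such a verification check, assuming the metadata of
interest is the shape recorded in the input file (names illustrative only):

  #include <cassert>
  #include <cstdint>
  #include <vector>

  // The shape recorded in the file's metadata must match the shape the
  // caller expects before any data is read in.
  void verifyShape(const std::vector<uint64_t> &fileSizes,
                   const std::vector<uint64_t> &expectedSizes) {
    assert(fileSizes.size() == expectedSizes.size() && "rank mismatch");
    for (uint64_t d = 0, rank = fileSizes.size(); d < rank; d++)
      assert(fileSizes[d] == expectedSizes[d] && "dimension size mismatch");
  }
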
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95072
Added the ability to read (an extended version of) the FROSTT
file format, so that we can now read in sparse tensors of arbitrary
rank. Generalized the API to deal with more than two dimensions.
Also added the ability to sort the indices of sparse tensors
lexicographically. This is an important step towards supporting
auto-generation of initialization code, since sparse storage formats
are easier to initialize if the indices are sorted. Since most
external formats don't enforce such properties, it is convenient
to have this ability in our runtime support library.
Lastly, the re-entrancy problem of the original implementation
is fixed by passing an opaque object around (rather than having
a single static variable, ugh!).
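A small sketch of the lexicographic sort this describes; the element
representation is an assumption for illustration, not the library's actual
data structure:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // One stored nonzero: a coordinate per dimension plus its value.
  struct Element {
    std::vector<uint64_t> indices;
    double value;
  };

  // Order elements so their coordinates increase lexicographically,
  // dimension by dimension, which eases building sparse storage formats.
  void sortLexicographically(std::vector<Element> &elements) {
    std::sort(elements.begin(), elements.end(),
              [](const Element &a, const Element &b) {
                return std::lexicographical_compare(
                    a.indices.begin(), a.indices.end(),
                    b.indices.begin(), b.indices.end());
              });
  }
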
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94852
Exposing the C versions of the sparse runtime support library's methods
through header files will enable using the same methods in an MLIR program
as well as in a C++ program, which will simplify future benchmarking comparisons
(e.g. comparing MLIR-generated code with Eigen on Matrix Market sparse matrices).
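A minimal header sketch of this pattern; the declarations are illustrative
only, not the library's actual interface:

  // Wrapping the declarations in extern "C" gives them unmangled names,
  // so the same symbols can be declared as external functions in an MLIR
  // program and called directly from a C++ benchmark.
  #ifdef __cplusplus
  extern "C" {
  #endif
  void *openSparseMatrix(const char *filename); // illustrative name
  void closeSparseMatrix(void *handle);         // illustrative name
  #ifdef __cplusplus
  }
  #endif
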
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D91316
Rationale:
More consistent with the other names, and forward-looking toward reading
in other kinds of matrices. Also fixes a lint issue with a hard-coded %llu.
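A tiny illustration of the kind of portability fix this refers to, assuming
the replacement is the standard PRIu64 format macro (an assumption, not taken
from the actual patch):

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  void printCount(uint64_t nnz) {
    // A hard-coded %llu assumes uint64_t is unsigned long long; the
    // portable format macro avoids that assumption.
    std::printf("nnz = %" PRIu64 "\n", nnz); // instead of "nnz = %llu\n"
  }
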
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D89005
Setting up input data for benchmarks and integration tests can be tedious in
pure MLIR. With more sparse tensor work planned, this convenience library
simplifies reading sparse matrices in the popular Matrix Market Exchange
Format (see https://math.nist.gov/MatrixMarket). Note that this library
is *not* part of core MLIR. It is merely intended as a convenience library
for benchmarking and integration testing.
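A minimal sketch of parsing a Matrix Market header under this format's
conventions; the function name and details are illustrative, not the
library's actual code:

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  // The banner line starts with %%MatrixMarket, comment lines start with %,
  // and the first non-comment line carries the dimensions and nonzero count.
  bool readHeader(std::FILE *file, uint64_t &rows, uint64_t &cols,
                  uint64_t &nnz) {
    char line[1025];
    if (!std::fgets(line, sizeof(line), file) ||
        std::strncmp(line, "%%MatrixMarket", 14) != 0)
      return false; // not a Matrix Market file
    while (std::fgets(line, sizeof(line), file)) {
      if (line[0] == '%') // skip comment lines
        continue;
      return std::sscanf(line, "%" SCNu64 " %" SCNu64 " %" SCNu64,
                         &rows, &cols, &nnz) == 3;
    }
    return false;
  }
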
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D88856