Commit Graph

8479 Commits

Kiran Chandramohan 711aa35759 [MLIR][OpenMP] Add support for declaring critical construct names
Add an operation omp.critical.declare to declare names/symbols of
critical sections. Named omp.critical operations should use symbols
declared by omp.critical.declare. Having a declare operation ensures
that the names of critical sections are global and unique. In the
lowering flow to LLVM IR, the OpenMP IRBuilder creates unique names
for critical sections.

Reviewed By: ftynse, jeanPerier

Differential Revision: https://reviews.llvm.org/D108713
2021-09-02 14:31:19 +00:00
Marius Brehler 2f0750dd2e [mlir] Add Cpp emitter
This upstreams the Cpp emitter, initially presented in [1], from [2]
to MLIR core. Together with the previously upstreamed EmitC dialect [3],
the target makes it possible to translate MLIR to C/C++.

[1] https://reviews.llvm.org/D76571
[2] https://github.com/iml130/mlir-emitc
[3] https://reviews.llvm.org/D103969

Co-authored-by: Jacques Pienaar <jpienaar@google.com>
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Co-authored-by: Oliver Scherf <oliver.scherf@iml.fraunhofer.de>

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D104632
2021-09-02 13:51:05 +00:00
Alex Zinenko 8647e4c3a0 [mlir] support translating OpenMP loops with reductions
Use the recently introduced OpenMPIRBuilder facility to translate OpenMP
workshare loops with reductions to LLVM IR calling the OpenMP runtime. Most of the
heavy lifting is done by the OpenMPIRBuilder. When other OpenMP dialect
constructs grow support for reductions, the translation can be updated to
operate on, e.g., an operation interface for all reduction containers instead
of workshare loops specifically. Designing such a generic translation for the
single operation that currently supports reductions is premature since we don't
know how the reduction modeling itself will be generalized.

Reviewed By: kiranchandramohan

Differential Revision: https://reviews.llvm.org/D107343
2021-09-02 15:38:20 +02:00
Alexander Belyaev f68de11c10 [mlir][linalg] Expose function to create op on buffers during bufferization.
Differential Revision: https://reviews.llvm.org/D109140
2021-09-02 11:09:05 +02:00
Aart Bik 2754604e54 [mlir][sparse] sparse runtime support library improvements
(1) Renamed SparseTensor to SparseTensorCOO; the other class remains SparseTensorStorage, to emphasize the contrast between the two.

(2) Documented the difference between the public API, which is exclusively for compiler-generated code, and methods that could be used by other runtimes (TBD) that want to interact with MLIR.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D109039
2021-09-01 16:51:14 -07:00
Jacques Pienaar f7bf8a8658 [mlir][capi] Add NameLoc
Add a method to get a NameLoc. Treat a null child location as unknown to avoid
needing to create an UnknownLoc in the C API when a child location is not needed.
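
A minimal C++ sketch of how this might be used, assuming the new C API entry point is mlirLocationNameGet(context, name, childLoc) as described; the location name "my_tensor" is purely illustrative:

    #include "mlir-c/IR.h"

    int main() {
      MlirContext ctx = mlirContextCreate();
      // A null child location is treated as UnknownLoc, per the change above.
      MlirLocation nullChild = {nullptr};
      MlirLocation loc = mlirLocationNameGet(
          ctx, mlirStringRefCreateFromCString("my_tensor"), nullChild);
      (void)loc;
      mlirContextDestroy(ctx);
      return 0;
    }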

Differential Revision: https://reviews.llvm.org/D108678
2021-09-01 16:16:35 -07:00
Weiwei Li a79d7c2c85 [mlir][SPIRV] Add Image Operands for Image Instructions
This patch adds Image Operands to the SPIR-V dialect and also lets ImageDrefGather use Image Operands.

Image Operands are used in many image instructions. Per the SPIR-V spec, "Image Operands encode what operands follow, as per Image Operands." They are usually optional for image instructions.

The format of image operands looks like:

    %0 = spv.ImageXXXX %1, ... %3 : f32 ["Bias|Lod"](%4, %5 : f32, f32) -> ...

This patch doesn’t implement all operands (see Section 3.14 of the SPIR-V spec) but provides a skeleton for them. There is a TODO in the verifyImageOperands function.

Co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D108501
2021-09-02 04:14:17 +08:00
Mehdi Amini 43a894365e Remove deprecated registration APIs (NFC)
In D104421, we changed the API for pass registration.
Before you would write:

      void registerPass("my-pass", "My Pass Description.",
                        [] { return createMyPass(); });
while now you’d only write:

      void registerPass([] { return createMyPass(); });

If you’re using TableGen to define your pass registration, there is nothing you need to do. If you’re using the C++ API directly, here are the changes.
Your project may also break even if you use TableGen and call the
generated registration API, in case your pass implementation didn’t inherit from
the MyPassBase class generated by TableGen.

If you don't use TableGen, the "my-pass" and "My Pass Description." fields must
be provided by overriding methods on the pass itself:

  llvm::StringRef getArgument() const final { return "my-pass"; }
  llvm::StringRef getDescription() const final {
    return "My Pass Description.";
  }
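
Putting the two snippets above together, a minimal sketch (class and function names are illustrative, not taken from the patch) of a hand-registered pass after this change:

    #include <memory>
    #include "mlir/IR/BuiltinOps.h"
    #include "mlir/Pass/Pass.h"

    // Hypothetical pass: the registration metadata now lives on the pass class
    // itself instead of being passed to registerPass().
    struct MyPass
        : public mlir::PassWrapper<MyPass, mlir::OperationPass<mlir::ModuleOp>> {
      llvm::StringRef getArgument() const final { return "my-pass"; }
      llvm::StringRef getDescription() const final {
        return "My Pass Description.";
      }
      void runOnOperation() override {
        // Pass logic goes here.
      }
    };

    void registerMyPass() {
      // New-style registration: only the constructor callback is provided.
      mlir::registerPass([] { return std::make_unique<MyPass>(); });
    }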

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D104429
2021-09-01 18:53:30 +00:00
natashaknk f596acc74d [mlir][tosa] Small refactor to the functionality of Depthwise_Conv2D to add the bias at the end of the convolution
Follow-up to the Conv2d and fully_connected lowering adjustments

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D108949
2021-09-01 10:01:00 -07:00
Tyler Augustine 7105512a34 Support alias.scope and noalias metadata lowering on intrinsics.
Builds on https://reviews.llvm.org/D107870 to support annotating intrinsics with alias.scope and noalias metadata.

Reviewed By: arpith-jacob, ftynse

Differential Revision: https://reviews.llvm.org/D109025
2021-09-01 16:54:20 +00:00
wren romano b04b757a8e [mlir][sparse] Rename the public SparseTensorStorage::asCOO to toCOO
Trying to reduce confusion by having the name of the public method match that of the private method for handling the recursion.  Also adding some comments to SparseTensorStorage::fromCOO to help clarify what the recursive calls are doing in the dense case.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D108954
2021-08-31 15:44:34 -07:00
Mehdi Amini c7515a49b1 Fix MLIR python binding test after changes in ASM printer
2021-08-31 18:50:22 +00:00
MaheshRavishankar b686fdbf92 [mlir][Linalg] Drop output tensor from `linalg.pad_tensor` op.
The output tensor was added for tiling purposes. With the use of
`TilingInterface` for tiling pad operations, there is no need for an
explicit operand for the shape of the result of the `linalg.pad_tensor`
op. The interface allows the tiling pattern to dynamically query the value
that can be used for the "init" needed for tiling.

Differential Revision: https://reviews.llvm.org/D108613
2021-08-31 11:12:24 -07:00
Mehdi Amini 387f95541b Add a new interface allowing to set a default dialect to be used for printing/parsing regions
Currently the builtin dialect is the default namespace used for parsing
and printing. As such, module and func don't need to be prefixed.
In the case of some dialects that define new regions for their own
purpose (like SPIR-V modules, for example), it can be beneficial to
change the default dialect in order to improve readability.

Differential Revision: https://reviews.llvm.org/D107236
2021-08-31 17:52:40 +00:00
Mehdi Amini c41b16c26b Change ASM Op printer to print the operation name in the framework instead of leaving it up to each individual operation
This aligns the printer with the parser contract: the operation name isn't part of the user-controllable part of the syntax.

Differential Revision: https://reviews.llvm.org/D108804
2021-08-31 17:52:40 +00:00
Mehdi Amini fd87963eee Change dialect `printOperation()` hook to `getOperationPrinter()`
This makes the hook return a printer if available, instead of using LogicalResult to
indicate if a printer was available (and invoked). This allows the caller to detect that
the dialect has a printer for a given operation without actually invoking the printer.
It'll be leveraged in a future revision to move printing the op name itself under control
of the ASMPrinter.

Differential Revision: https://reviews.llvm.org/D108803
2021-08-31 17:52:39 +00:00
Tres Popp 44485fcd97 [mlir] Prevent assertion failure in DropUnitDims
Don't assert fail on strided memrefs when dropping unit dims.
Instead just leave them unchanged.

Differential Revision: https://reviews.llvm.org/D108205
2021-08-31 12:15:13 +02:00
marina kolpakova a.k.a. geexie 0080d2aa55 [mlir][gpu] folds memref.dim of gpu.alloc
Implements a canonicalization which folds memref.dim(gpu.alloc(%size), %idx) -> %size.

Differential Revision: https://reviews.llvm.org/D108892
2021-08-31 12:33:10 +03:00
Stella Laurenzo f05ff4f757 [mlir][python] Apply py::module_local() to all classes.
* This allows multiple MLIR-API embedding downstreams to co-exist in the same process.
* I believe this is the last thing needed to enable isolated embedding.
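
For reference, a small standalone pybind11 sketch (not the actual bindings code) of what py::module_local() does: the binding stays private to the extension module that defines it, so two downstreams can each register their own copy without clashing.

    #include <pybind11/pybind11.h>
    namespace py = pybind11;

    // Hypothetical wrapper type; in the MLIR bindings module_local() is applied
    // to every exposed class.
    struct Thing {
      int value = 0;
    };

    PYBIND11_MODULE(_my_downstream_ext, m) {
      // module_local() keeps this class binding local to this extension module.
      py::class_<Thing>(m, "Thing", py::module_local())
          .def(py::init<>())
          .def_readwrite("value", &Thing::value);
    }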

Differential Revision: https://reviews.llvm.org/D108605
2021-08-30 22:18:43 -07:00
MaheshRavishankar 2dfb66833f Fix unused variable in release build.
Differential Revision: https://reviews.llvm.org/D108963
2021-08-30 19:34:52 -07:00
MaheshRavishankar ba72cfe734 [mlir] Add an interface to allow operations to specify how they can be tiled.
An interface to allow for tiling of operations is introduced. The
tiling of the linalg.pad_tensor operation is modified to use this
interface.

Differential Revision: https://reviews.llvm.org/D108611
2021-08-30 16:31:18 -07:00
Chris Lattner faf1c22408 [Builder] Eliminate the StringRef/StringAttr forms of getSymbolRefAttr.
The StringAttr version doesn't need a context, so we can just use the
existing `SymbolRefAttr::get` form.  The StringRef version isn't preferred
so we want to encourage people to use StringAttr.
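
A hedged sketch of the replacement pattern (the exact overload used here is an assumption based on the description above):

    #include "mlir/IR/BuiltinAttributes.h"

    // Build a symbol reference from an already-uniqued StringAttr; no separate
    // MLIRContext argument is needed since the StringAttr carries the context.
    mlir::Attribute makeCalleeRef(mlir::MLIRContext *ctx) {
      mlir::StringAttr calleeName = mlir::StringAttr::get(ctx, "callee");
      return mlir::SymbolRefAttr::get(calleeName);
    }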

There is an additional form of getSymbolRefAttr that takes a (SymbolTrait
implementing) operation.  This should also be moved, but I'll do that as
a separate patch.

Differential Revision: https://reviews.llvm.org/D108922
2021-08-30 16:05:36 -07:00
natashaknk 203d38b234 [mlir][tosa] Small refactor to the functionality of Conv2D and Fully_connected to add the bias at the end of the convolution
Made to adjust for a modification to the tiling algorithm

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D108746
2021-08-30 13:18:43 -07:00
Stella Laurenzo 8e6c55c92c [mlir][python] Extend C/Python API to be usable for CFG construction.
* It is pretty clear that no one has tried this yet since it was both incomplete and broken.
* Fixes a symbol hiding issue that kept even the generic builder from constructing an operation with successors.
* Adds ODS support for successors.
* Adds CAPI `mlirBlockGetParentRegion`, `mlirRegionEqual` + tests (and missing test for `mlirBlockGetParentOperation`).
* Adds Python property: `Block.region`.
* Adds Python methods: `Block.create_before` and `Block.create_after`.
* Adds Python property: `InsertionPoint.block`.
* Adds new blocks.py test to verify a plausible CFG construction case.

Differential Revision: https://reviews.llvm.org/D108898
2021-08-30 08:28:00 -07:00
Alex Zinenko 9db95a67d1 Fix interface trait declaration in SymbolInterfaces.td
41d4aa7de6 introduced incorrect code in
extraTraitClassDeclaration: `this` refers to the trait class and not the
operation class so `->getContext()` is not valid. Use `$_op` instead.
2021-08-30 11:15:05 +02:00
Chris Lattner 41d4aa7de6 [SymbolRefAttr] Revise SymbolRefAttr to hold a StringAttr.
SymbolRefAttr is fundamentally a base string plus a sequence
of nested references.  Instead of storing the string data as
a copied StringRef, store it as an already-uniqued StringAttr.

This makes a lot of things simpler and more efficient because:
1) references to the symbol are already stored as StringAttr's:
   there is no need to copy the string data into MLIRContext
   multiple times.
2) This allows pointer comparisons instead of string
   comparisons (or redundant uniquing) within SymbolTable.cpp.
3) This allows SymbolTable to hold a DenseMap instead of a
   StringMap (which again copies the string data and slows
   lookup).

This is a moderately invasive patch, so I kept a lot of
compatibility APIs around.  It would be nice to explore changing
getName() to return a StringAttr for example (right now you have
to use getNameAttr()), and eliminate things like the StringRef
version of getSymbol.

Differential Revision: https://reviews.llvm.org/D108899
2021-08-29 21:54:47 -07:00
Matthias Springer d18ffd61d4 [mlir][SCF] Canonicalize dim(x) where x is an iter_arg
* Add `DimOfIterArgFolder`.
* Move existing cross-dialect canonicalization patterns to `LoopCanonicalization.cpp`.
* Rename `SCFAffineOpCanonicalization` pass to `SCFForLoopCanonicalization`.
* Expand documentation of scf.for: The type of loop-carried variables may not change with iterations. (Not even the dynamic type.)

Differential Revision: https://reviews.llvm.org/D108806
2021-08-30 01:39:56 +00:00
Matthias Springer eedc997b7d [mlir][Analysis] Add batched version of FlatAffineConstraints::addId
* Add batched version of all `addId` variants, so that multiple IDs can be added at a time.
* Rename `addId` and variants to `insertId` and `appendId`. Most external users call `appendId`. Splitting `addId` into two functions also makes it possible to provide batched versions of both. (Otherwise, the overloads are ambiguous when calling `addId`.)

Differential Revision: https://reviews.llvm.org/D108532
2021-08-30 00:56:44 +00:00
Lei Zhang a5621e26db [mlir][spirv] Use type dyn_cast when scanning spv.GlobalVariable
This avoids crashes when there are spv.GlobalVariable ops without a
pointer type.
2021-08-29 12:01:19 -04:00
Aart Bik b9f87e24f2 [mlir] add missing include, fix broken build
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D108873
2021-08-28 09:36:38 -07:00
Markus Böck 0235e3c7a6 [mlir][NFC] Fully qualify default value of Attributes `getStorageType()` in files generated by mlir-tblgen
2021-08-28 15:37:56 +02:00
Uday Bondhugula 4edc9e2acf [MLIR][GPU] Drop mgpuMemHostRegisterMemRef's dependence on LLVM Support
Drop mgpuMemHostRegisterMemRef's dependence on LLVM Support. This
method is the only one in the CUDA runtime wrappers library that creates
a dependence on libLLVMSupport due to its use of SmallVector and
ArrayRef. The code can be written just as easily and compactly without those ADTs.
The dependence on LLVMSupport adds a significant amount of additional
complexity for external things that want to link this library in (either
statically or as a shared object), since libLLVMSupport includes numerous
other objects that are sensitive to C++ compiler version and ABI.
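
As a rough illustration of the kind of ADT-free code the message refers to (the descriptor layout below is a simplified assumption, not the actual wrapper code):

    #include <cstdint>

    // Simplified stand-in for a ranked memref descriptor.
    struct SimpleDescriptor {
      void *allocated;
      void *aligned;
      int64_t offset;
      const int64_t *sizes;  // `rank` entries; strides omitted for brevity
    };

    // Element count computed with a plain loop; no SmallVector/ArrayRef needed.
    int64_t numElements(const SimpleDescriptor &desc, int64_t rank) {
      int64_t count = 1;
      for (int64_t i = 0; i < rank; ++i)
        count *= desc.sizes[i];
      return count;
    }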

Differential Revision: https://reviews.llvm.org/D108684
2021-08-28 11:37:55 +05:30
Mehdi Amini 022538f276 Remove `const` from `const T &&` in debugString() helper to make it a universal reference (NFC)
It broke lvalue arguments otherwise:
2021-08-28 01:09:00 +00:00
Mehdi Amini 4387975170 Use a universal reference (&& instead of const &) for `debugString()` helper (NFC)
Some classes like mlir::Operation have a non-const print() method.
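
A sketch of the shape of such a helper and why the universal reference matters (illustrative, not the exact code in the tree): a `const T &` parameter would reject types whose print() is non-const, while `T &&` binds to lvalues and rvalues alike.

    #include <string>
    #include <utility>
    #include "llvm/Support/raw_ostream.h"

    // Illustrative helper: T && is a forwarding (universal) reference, so this
    // works for non-const lvalues such as mlir::Operation, whose print() method
    // is not const-qualified.
    template <typename T>
    std::string debugString(T &&value) {
      std::string str;
      llvm::raw_string_ostream os(str);
      std::forward<T>(value).print(os);
      return os.str();
    }
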
2021-08-28 00:41:41 +00:00
Mehdi Amini c0b70def21 Specify argument to be `const` for `debugString()` helper (NFC)
This allows using this helper with rvalues.
2021-08-28 00:10:53 +00:00
Aart Bik 0a7b8cc5dd [mlir][sparse] fully implement sparse tensor to sparse tensor conversions
with a rigorous integration test

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108721
2021-08-27 15:08:18 -07:00
Butygin 1e35a7690d [mlir][spirv] Initial support for 64 bit index type and builtins
Differential Revision: https://reviews.llvm.org/D108516
2021-08-27 01:38:53 +03:00
Rob Suderman 90478251c7 [mlir][tosa] Tosa reverse to linalg supporting dynamic shapes
Needed to switch to extract to support tosa.reverse using dynamic shapes.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108744
2021-08-26 13:23:59 -07:00
Rob Suderman 0600bb4d18 [mlir][tosa] Elementwise operation dynamic shape support
Added dynamic shape support for elementwise operations. This assumes equal
sizes (broadcasting a dynamic dimension of length 1 is problematic).

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108730
2021-08-26 11:18:58 -07:00
Aart Bik 6b26857dbf [mlir][sparse] add asCOO() functionality to sparse tensor object
This prepares for general sparse-to-sparse conversions. The code that
needs to be generated using this new feature is now simply:

(1) coo = sparse_tensor_1->asCOO();          // source format1
(2) sparse_tensor_2 = newSparseTensor(coo);  // destination format2

By using COO as an intermediate, we can do *all* conversions without
having to implement the full O(N^2) conversion matrix. Note that we
can always improve particular conversions individually if a faster
solution is required.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108681
2021-08-25 21:50:39 -07:00
Tobias Gysi 8e9808ca3a [mlir][linalg] Tune hasTensorSemantics/hasBufferSemantics methods.
Optimize performance by iterating all operands at once.

Reviewed By: benvanik

Differential Revision: https://reviews.llvm.org/D108716
2021-08-25 19:28:37 +00:00
Tobias Gysi 2b35b372fd [mlir][linalg] Tune getTiedIndexingMap method (NFC).
Optimize the performance by using the range directly.

Reviewed By: benvanik

Differential Revision: https://reviews.llvm.org/D108715
2021-08-25 18:44:01 +00:00
Aart Bik d5f7f356ce [mlir][sparse] add sparse-dense cases to storage integration test
Reviewed By: grosul1

Differential Revision: https://reviews.llvm.org/D108685
2021-08-25 11:33:20 -07:00
River Riddle c8d9e1ce43 [mlir][AttrTypeGen] Add support for specifying an "accessor" type of a parameter
This allows for using a different type when accessing a parameter than the
one used for storage. This enables returning parameters by reference, using
more optimized/convenient reference results, and more.

Differential Revision: https://reviews.llvm.org/D108593
2021-08-25 09:27:36 +00:00
River Riddle 9658b061dd [mlir] Update DialectAsmParser::parseString to use std::string instead of StringRef
This allows for parsing strings that have escape sequences, which require constructing
a string (as they can't be represented by looking at the Token contents directly).
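
A hedged sketch of a custom parser helper using the updated hook (the surrounding function signature is hypothetical):

    #include <string>
    #include "mlir/IR/BuiltinAttributes.h"
    #include "mlir/IR/DialectImplementation.h"

    // The parsed string comes back as std::string, so escape sequences such as
    // "\n" are already decoded rather than pointing into the token buffer.
    mlir::Attribute parseQuotedName(mlir::DialectAsmParser &parser,
                                    mlir::MLIRContext *ctx) {
      std::string value;
      if (parser.parseString(&value))
        return {};
      return mlir::StringAttr::get(ctx, value);
    }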

Differential Revision: https://reviews.llvm.org/D108589
2021-08-25 09:27:35 +00:00
River Riddle aea3026ea7 [mlir] Move the Operation use iteration utilities to ResultRange
This allows for iterating and interacting with the uses of a specific subset of
results as opposed to just the full range.
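
A hedged sketch of the kind of code this enables (the exact helper names on ResultRange are an assumption based on the description):

    #include "mlir/IR/Operation.h"

    // Walk the uses of only the first two results of `op` instead of all of them.
    void inspectLeadingResultUses(mlir::Operation *op) {
      if (op->getNumResults() < 2)
        return;
      for (mlir::OpOperand &use : op->getResults().take_front(2).getUses()) {
        mlir::Operation *user = use.getOwner();
        (void)user;  // e.g. inspect or rewrite the user here
      }
    }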

Differential Revision: https://reviews.llvm.org/D108586
2021-08-25 09:27:35 +00:00
Tres Popp 868bd9938d [mlir] Add assertion in NamedAttrList to prevent adding null attributes
Differential Revision: https://reviews.llvm.org/D108570
2021-08-25 11:06:53 +02:00
Rob Suderman 5541a05d6a [mlir][tosa] Quantized tosa.avg_pool2d lowering to linalg
Includes the quantized version of average pool lowering to the linalg dialect.
This includes a lit test for the transform. It is not 100% correct, as the
multiplier / shift should be done in i64; however, this is a negligible rounding
difference.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108676
2021-08-24 18:54:23 -07:00
Rob Suderman 4ef1770abd [mlir][tosa] Table did not apply offset before extract on i8 input
Lowering to table was incorrect as it did not apply a 128 offset before
extracting the value from the table. Fixed this, and corrected the tensor length
on the input table.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D108436
2021-08-24 18:52:33 -07:00
Matthias Springer a9cff97f94 [mlir][SCF] Generalize AffineMinSCFCanonicalization to min/max ops
* Add support for affine.max ops to SCF loop peeling pattern.
* Add support for affine.max ops to `AffineMinSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalizationPattern` to `AffineOpSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalization` pass to `SCFAffineOpCanonicalization`.

Differential Revision: https://reviews.llvm.org/D108009
2021-08-25 10:40:34 +09:00