* This is the native data layout for PyTorch, and npcomp was using the prior version before the cleanup.
Differential Revision: https://reviews.llvm.org/D108527
Multiple operations were still defined as TC ops even though equivalent YAML versions existed. Reducing to a single compilation path guarantees that
frontends can lower to their equivalent operations without missing the
optimized fast path.
Some operations are maintained purely for testing purposes (mainly conv{1,2,3}D,
as they are included as the sole tests in the vectorization transforms).
Differential Revision: https://reviews.llvm.org/D108169
Reduction axes should come after all parallel axes to work with vectorization.
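A minimal sketch of the claim for a matmul-like contraction, using the linalg iterator-type strings (illustrative, not tied to a specific op):

```python
# For C[i, j] = sum_k(A[i, k] * B[k, j]) the iterator order should be
# (i, j, k): all parallel axes first, the reduction axis k last.
iterator_types = ["parallel", "parallel", "reduction"]   # vectorizes well
# iterator_types = ["reduction", "parallel", "parallel"] # would block it
```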
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D108005
No source dialect lowers to these operations, and they are only used by
redundant tests. Removing these named ops, along with their associated tests,
will make the migration to YAML operations much more convenient.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D107993
Existing linalg.conv2d is not well optimized for performance. Change to a
version that is better laid out for optimization, and include the corresponding
transposes needed to use this optimized version.
This also splits the conv and depthwise conv into separate implementations
to avoid overly complex lowerings.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D107504
Avoiding absolute imports allows the code to be relocatable (which is used for out-of-tree integrations).
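A hedged sketch of the pattern (module names are illustrative):

```python
# Inside the package, prefer relative imports so the package can be mounted
# anywhere (e.g. as `mlir` upstream, or embedded as `npcomp.mlir` downstream).

# Not relocatable: hard-codes the installed top-level package name.
#   from mlir.ir import Context

# Relocatable: resolves against whatever package this module lives in.
from .ir import Context
```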
Differential Revision: https://reviews.llvm.org/D107617
* Adds source targets (not included in the full set that downstreams use by default) to bundle mlir-c/ headers into the mlir/_mlir_libs/include directory.
* Adds a minimal entry point to get include and library directories (see the sketch after this list).
* Used by npcomp to export a full CAPI (which is then used by the Torch extension to link npcomp).
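A hypothetical usage sketch of that entry point; the function names below are assumptions, not a confirmed API:

```python
# Assumed entry points for locating the bundled headers and libraries.
from mlir import _mlir_libs

include_dirs = _mlir_libs.get_include_dirs()  # where the bundled mlir-c/ headers live
lib_dirs = _mlir_libs.get_lib_dirs()          # where the native CAPI libraries live
print(include_dirs, lib_dirs)
```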
Reviewed By: mikeurbach
Differential Revision: https://reviews.llvm.org/D107090
* For Python projects that don't need JIT/ExecutionEngine, this cuts the number of files to compile roughly in half (with a similar reduction in final binary size).
Differential Revision: https://reviews.llvm.org/D106992
* Implements all of the discussed features:
- Links against common CAPI libraries that are self-contained.
- Stops using the 'python/' directory at the root for everything, opening the namespace up for multiple projects to embed the MLIR python API.
- Separates declaration of sources (py and C++) needed to build the extension from building, allowing external projects to build custom assemblies from core parts of the API.
- Makes the core python API relocatable (i.e. it could be embedded as something like 'npcomp.ir', 'npcomp.dialects', etc). Still a bit more to do to make it truly isolated but the main structural reset is done.
- When building statically, installed Python packages are completely self-contained, suitable for direct setup and upload to PyPI, et al.
- Lets external projects assemble their own CAPI common runtime library that all extensions use. No more possibilities for TypeID issues.
- Begins modularizing the API so that external projects that just include a piece pay only for what they use.
* I also rolled in a re-organization of the native libraries that matches how I was packaging these out of tree and is a better layering (i.e. all libraries go into a nested _mlir_libs package). There is some further cleanup that I resisted since it would have required source changes that I'd rather do in a followup once everything stabilizes.
* Note that I made a somewhat odd choice in choosing to recompile all extensions for each project they are included into (as opposed to compiling once and just linking). While not leveraged yet, this will let us set preprocessor definitions controlling the namespacing of the extensions so that they can be made to not conflict across projects.
* This will be a relatively substantial breaking change for downstreams. I will handle the npcomp migration and will coordinate with the circt folks before landing. We should stage this and make sure it isn't causing problems.
* Fixed a couple of absolute imports that were causing issues.
Differential Revision: https://reviews.llvm.org/D106520
Includes a version of a quantized conv2D operation with a lowering from TOSA
to linalg and a corresponding test. We keep the quantized and non-quantized variants
as separate named ops to avoid the additional operations for non-quantized
convolutions.
Differential Revision: https://reviews.llvm.org/D106407
libMLIRPublicAPI.so came into existence early, when the Python and C APIs were being co-developed, because the Python extensions need a single DSO which exports the C-API to link against. It really should never have been exported as a mondo library in the first place, which has caused no end of problems in different linking modes, etc. (e.g. the CAPI tests depended on it).
This patch does a mechanical move that:
* Makes the C-API tests link directly to their respective libraries.
* Creates a libMLIRPythonCAPI as part of the Python bindings, which assembles exactly the DSO that they need.
This has the effect that the C-API is no longer monolithic and can be subset and used piecemeal in a modular fashion, which is necessary for downstreams to only pay for what they use. There are additional, more fundamental changes planned for how the Python API is assembled which should make it more out of tree friendly, but this minimal first step is necessary to break the fragile dependency between the C-API and Python API.
Downstream actions required:
* If using the C-API and linking against MLIRPublicAPI, you must instead link against its constituent components. As a reference, the Python API dependencies are in lib/Bindings/Python/CMakeLists.txt and approximate the full set of dependencies available.
* If you have a Python API project that was previously linking against MLIRPublicAPI (i.e. to add its own C-API DSO), you will want to `s/MLIRPublicAPI/MLIRPythonCAPI/` and all should be as it was. There are larger changes coming in this area but this part is incremental.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D106369
The unstrided transposed conv can be represented as a regular convolution.
Lower to this variant to handle the basic case. This includes transitioning from
the TC-defined convolution operation to a YAML-defined one.
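A minimal 1-D NumPy sketch of the identity behind this lowering (a stride-1 transposed convolution equals a regular convolution over a zero-padded input with a reversed kernel); this is an illustration, not the exact TOSA-to-linalg rewrite:

```python
import numpy as np

def conv1d(x, k):
    # Plain valid convolution (cross-correlation, as in most ML usage).
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(len(x) - len(k) + 1)])

def conv1d_transposed(x, k):
    # Stride-1 transposed convolution: scatter-accumulate each input element.
    out = np.zeros(len(x) + len(k) - 1)
    for i, v in enumerate(x):
        out[i:i + len(k)] += v * k
    return out

x = np.array([1.0, 2.0, 3.0])
k = np.array([1.0, -1.0, 2.0])
padded = np.pad(x, len(k) - 1)  # pad by kernel_size - 1 on both sides
assert np.allclose(conv1d_transposed(x, k), conv1d(padded, k[::-1]))
```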
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D106389
Added the named op variants for quantized matmul and quantized batch matmul
with the necessary lowerings/tests from TOSA's matmul/fully_connected ops.
The current version does not use the contraction op interface, as its verifiers
are not compatible with scalar operations.
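For reference, a NumPy sketch of the quantized matmul semantics (zero points are subtracted before the multiply-accumulate); the function and parameter names are illustrative, and the real named op accumulates into its init tensor rather than returning a fresh result:

```python
import numpy as np

def quantized_matmul(A, B, a_zp, b_zp):
    # C[m, n] += (A[m, k] - a_zp) * (B[k, n] - b_zp), accumulated in i32.
    return (A.astype(np.int32) - a_zp) @ (B.astype(np.int32) - b_zp)
```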
Differential Revision: https://reviews.llvm.org/D105063
This deletes all the pooling ops in LinalgNamedStructuredOpsSpec.tc. All the
uses are replaced with the yaml pooling ops.
Reviewed By: gysit, rsuderman
Differential Revision: https://reviews.llvm.org/D106181
Annotate LinalgNamedStructuredOps.yaml with a comment stating the file is auto-generated and should not be edited manually.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105809
Introduce the exp and log functions in OpDSL. Add the soft plus operator to test the emitted IR in Python and C++.
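A hedged OpDSL sketch of the soft plus operator built from the new functions; the builder names (PrimFn.log, PrimFn.exp, const, cast) reflect the OpDSL of this era and are assumptions:

```python
@linalg_structured_op
def soft_plus_2d(
    I=TensorDef(T, S.M, S.N),
    O=TensorDef(U, S.M, S.N, output=True)):
  # softplus(x) = log(1 + exp(x)), computed elementwise.
  O[D.m, D.n] = PrimFn.log(cast(U, const(1.0)) + PrimFn.exp(cast(U, I[D.m, D.n])))
```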
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105420
Add the min operation to OpDSL and introduce a min pooling operation to test the implementation. The patch is a sibling of the max operation patch https://reviews.llvm.org/D105203, and the min operation is again lowered to a compare-and-select pair.
Differential Revision: https://reviews.llvm.org/D105345
Add the max operation to the OpDSL and introduce a max pooling operation to test the implementation. As MLIR has no builtin max operation, the max function is lowered to a compare-and-select pair.
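A minimal Python sketch of the compare-and-select pattern the lowering emits (min is analogous with the predicate flipped):

```python
def max_via_select(a, b):
    cond = a > b             # emitted as a comparison (e.g. cmpf "ogt")
    return a if cond else b  # emitted as a select on that comparison
```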
Differential Revision: https://reviews.llvm.org/D105203
Similarly to batch_matmul, the outermost dim is a batching dim,
and this op performs |b| matrix-vector products:
C[b, i] = sum_k(A[b, i, k] * B[b, k])
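Reference semantics as a short NumPy sketch (illustrative):

```python
import numpy as np

def batch_matvec(A, B):  # A: (b, i, k), B: (b, k) -> C: (b, i)
    # One matrix-vector product per batch index b.
    return np.einsum('bik,bk->bi', A, B)
```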
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D104739
Extend the OpDSL syntax with an optional `domain` function to specify an explicit dimension order. The extension is needed to provide more control over the dimension order instead of deducing it implicitly from the formulation of the tensor comprehension. Additionally, the patch ensures the symbols are ordered according to the operand definitions of the operation.
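A hedged OpDSL sketch of an explicit `domain`; the symbol and builder names follow the OpDSL of this era and may differ in detail:

```python
@linalg_structured_op
def matmul_sketch(
    A=TensorDef(T, S.M, S.K),
    B=TensorDef(T, S.K, S.N),
    C=TensorDef(T, S.M, S.N, output=True)):
  # Fix the dimension order explicitly rather than deducing it from the
  # comprehension: parallel dims first, the reduction dim last.
  domain(D.m, D.n, D.k)
  C[D.m, D.n] += A[D.m, D.k] * B[D.k, D.n]
```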
Differential Revision: https://reviews.llvm.org/D105117
Add an index_dim annotation to specify the shape-to-loop mapping of shape-only tensors. A shape-only tensor is not accessed within the body of the operation but is required to span the iteration space of certain operations such as pooling.
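A hedged sketch of a shape-only tensor in a pooling-style op; the annotation spelling (`index_dims`) and the reduction builder are assumptions about the OpDSL of this era:

```python
@linalg_structured_op
def pooling_max_sketch(
    I=TensorDef(T, S.H, S.W),
    K=TensorDef(T, S.KH, S.KW, index_dims=[D.kh, D.kw]),  # shape-only: never read
    O=TensorDef(T, S.OH, S.OW, output=True)):
  # K only contributes the window dims D.kh/D.kw to the iteration space.
  O[D.oh, D.ow] = ReduceFn.max[D.kh, D.kw](I[D.oh + D.kh, D.ow + D.kw])
```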
Differential Revision: https://reviews.llvm.org/D104767
Extend the OpDSL with index attributes. After tensors and scalars, index attributes are the third operand type. An index attribute represents a compile-time constant that is limited to index expressions. Use cases are the strides and dilations defined by convolution and pooling operations.
The patch only updates the OpDSL. The C++ YAML codegen is updated by a follow-up patch.
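A hedged sketch of index attributes on a convolution; the attribute spelling (`AttributeDef`) and the symbol names are assumptions about the OpDSL of this era:

```python
@linalg_structured_op
def conv_2d_sketch(
    I=TensorDef(T, S.N, S.IH, S.IW, S.C),
    K=TensorDef(T, S.KH, S.KW, S.C, S.F),
    O=TensorDef(T, S.N, S.OH, S.OW, S.F, output=True),
    strides=AttributeDef(S.SH, S.SW),     # compile-time index attribute
    dilations=AttributeDef(S.DH, S.DW)):  # compile-time index attribute
  # The attributes appear only inside index expressions.
  O[D.n, D.oh, D.ow, D.f] += I[D.n, D.oh * S.SH + D.kh * S.DH,
                               D.ow * S.SW + D.kw * S.DW,
                               D.c] * K[D.kh, D.kw, D.c, D.f]
```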
Differential Revision: https://reviews.llvm.org/D104711
Adapt the FillOp definition to use a scalar operand instead of a capture. This patch is a follow-up to https://reviews.llvm.org/D104109. As the input operands are in front of the output operands, the patch changes the internal operand order of the FillOp. The pretty-printed version of the operation remains unchanged, though. The patch also adapts the linalg-to-standard lowering to ensure the C signature of the FillOp remains unchanged as well.
Differential Revision: https://reviews.llvm.org/D104121
The patch replaces the existing capture functionality with the scalar operands introduced by https://reviews.llvm.org/D104109. Scalar operands behave as tensor operands except for the fact that they are not indexed. As a result, ScalarDefs can be accessed directly, as no indexing expression is needed.
The patch only updates the OpDSL. The C++ side is updated by a follow-up patch.
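A hedged sketch of a scalar operand in a fill-like op; builder names are assumptions about the OpDSL of this era:

```python
@linalg_structured_op
def fill_sketch(
    value=ScalarDef(T),
    O=TensorDef(U, S.M, S.N, output=True)):
  # The scalar is used directly: no indexing expression is needed.
  O[D.m, D.n] = cast(U, value)
```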
Differential Revision: https://reviews.llvm.org/D104220
Currently, passes are registered on a per-dialect basis, which
obviously provides the smallest footprint. But for prototyping
and experimentation, a convenience "all passes" module is provided,
which registers all known MLIR passes in one run.
Usage in Python:
import mlir.all_passes_registration
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D103130
The patch extends the YAML code generation to support the following new OpDSL constructs:
- captures
- constants
- iteration index accesses
- predefined types
These changes have been introduced by revision
https://reviews.llvm.org/D101364.
Differential Revision: https://reviews.llvm.org/D102075
First set of "boilerplate" to get sparse tensor
passes available through CAPI and Python.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D102362
All glue and clutter in the linalg ops has been replaced by proper
sparse tensor type encoding. This code is no longer needed. Thanks
to ntv@ for giving us a temporary home in linalg.
So long, and thanks for all the fish.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D102098
* NFC but has some fixes for CMake glitches discovered along the way (things not cleaning properly, co-mingled depends).
* Includes previously unsubmitted fix in D98681 and a TODO to fix it more appropriately in a smaller followup.
Differential Revision: https://reviews.llvm.org/D101493