Summary: This generates the class declarations for dialects using the existing 'Dialect' tablegen classes.
Differential Revision: https://reviews.llvm.org/D76185
Setting MLIR_TABLEGEN_EXE would prevent building the native tool, which is used when cross-compiling.
Differential Revision: https://reviews.llvm.org/D75299
Summary:
- remove stale declarations on flat affine constraints
- avoid allocating small vectors where possible
- clean up code comments, rename some variables
Differential Revision: https://reviews.llvm.org/D76117
Summary:
To enable this, two changes are needed:
1) Add an optional attribute `padding` to linalg.conv.
2) In the generated loops, compute whether an access index is out of bounds. If so, use the
padding value `0`; otherwise, use the value produced by the load.
In this patch, padding only works for the plain lowering, without other transformations
such as tiling or fusion; a rough sketch of the op form is shown below.
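As a sketch, assuming the padding is carried as an elements attribute of (low, high) pairs per spatial dimension (the attribute type, shape, and values below are assumptions):
```mlir
// Hypothetical example: `padding` holds (low, high) padding per spatial
// dimension; out-of-bounds accesses read the padding value 0 instead.
linalg.conv(%filter, %input, %output)
  {padding = dense<[[1, 1], [1, 1]]> : tensor<2x2xi64>}
  : memref<?x?x?x?xf32>, memref<?x?x?x?xf32>, memref<?x?x?x?xf32>
```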
Differential Revision: https://reviews.llvm.org/D75722
Summary:
This revision adds lowering of vector.contract to llvm.intr.matrix_multiply.
Note that there is currently a mismatch between the MLIR vector dialect, which
expects row-major layout, and the LLVM matrix intrinsics, which expect
column-major layout.
As a consequence, we currently only match a vector.contract with indexing maps
that express column-major matrix multiplication.
Other cases would require additional transposes and it is better to wait for
LLVM intrinsics to provide a per-operation attribute that would specify which
layout is expected.
A separate integration test, not submitted to MLIR core, has independently
verified that correct execution occurs on a 2x2x2 matrix multiplication.
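For reference, one way to write a vector.contract whose indexing maps express column-major matrix multiplication (with column-major storage, an MxK matrix A is backed by a KxM vector, and similarly for B and C); the shapes, and whether these exact maps are the ones the pattern matches, are assumptions:
```mlir
// A (MxK) stored column-major as vector<KxM>, B (KxN) as vector<NxK>,
// C (MxN) as vector<NxM>: C_store[n][m] += A_store[k][m] * B_store[n][k].
#column_major_matmul_accesses = [
  affine_map<(m, n, k) -> (k, m)>,
  affine_map<(m, n, k) -> (n, k)>,
  affine_map<(m, n, k) -> (n, m)>
]
#column_major_matmul_trait = {
  indexing_maps = #column_major_matmul_accesses,
  iterator_types = ["parallel", "parallel", "reduction"]
}

%res = vector.contract #column_major_matmul_trait %a, %b, %acc
  : vector<2x2xf32>, vector<2x2xf32> into vector<2x2xf32>
```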
Differential Revision: https://reviews.llvm.org/D76014
Summary:
The direct lowering of vector.broadcast into LLVM has been replaced by
progressive lowering into elementary vector ops. This also required a
small refactoring of an llvm.mlir test that used a direct vector.broadcast
operation (just to define a matmul).
Reviewers: nicolasvasilache, andydavis1, rriddle
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76143
Summary: A number of transforms import StandardOps despite not depending on it. Cleaned this up to better understand which dialects each of these transforms depends on.
Differential Revision: https://reviews.llvm.org/D76112
Previously we only considered the version/capability/extension requirements
on ops themselves. Some types in SPIR-V also require special extensions
or capabilities to be used. For example, non-32-bit integers/floats
require different capabilities and/or extensions depending on
where they are used, because they may rely on special hardware abilities.
This commit adds query methods to the SPIR-V type class hierarchy to support
querying extensions and capabilities. We don't go through ODS to
auto-generate this information, given that it is not available in the
SPIR-V machine-readable grammar and there are only a few types.
Differential Revision: https://reviews.llvm.org/D75875
Previously, extension and capability requirements were returned as
SmallVector<SmallVector>. That is an anti-pattern; this commit improves
things a bit by returning SmallVector<ArrayRef> instead. This is possible
because the inner sequence is always known statically (from the spec),
so we can use a static constant array for it and hand out an ArrayRef.
Differential Revision: https://reviews.llvm.org/D75874
This commit changes the definition of spv.module to use the #spv.vce
attribute for specifying the (version, capabilities, extensions) triple,
so that we can have a better API and custom assembly form. Now that the
triple is properly modelled, (de)serialization is wired up to use it.
With the new UpdateVCEPass, we no longer need to manually specify the
required extensions and capabilities when creating a spv.module.
One just needs to run UpdateVCEPass before serialization to get the
needed version/extensions/capabilities.
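Roughly, the new custom assembly form looks like the sketch below; the addressing/memory model and the concrete VCE values are placeholders:
```mlir
// Illustrative: the (version, capabilities, extensions) triple now lives in a
// single #spv.vce attribute on the module.
spv.module Logical GLSL450 requires #spv.vce<v1.0, [Shader], [SPV_KHR_storage_buffer_storage_class]> {
  // ... module contents ...
}
```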
Differential Revision: https://reviews.llvm.org/D75872
Creates an operation pass that deduces and attaches the minimal version/
capabilities/extensions requirements for spv.module ops.
For each spv.module op, this pass requires a `spv.target_env` attribute on
it or on an enclosing module-like op to drive the deduction. The reason is
that an op can be enabled by multiple extensions/capabilities, so we need
to know which one to pick. `spv.target_env` gives the hard limit on
what the target environment can support; this pass deduces what is
actually needed for a specific spv.module op.
Differential Revision: https://reviews.llvm.org/D75870
Previously we only looked at the directly passed-in op for a potential
spv.target_env attribute. This commit switches to searching a larger range
by recursively checking enclosing symbol tables.
Differential Revision: https://reviews.llvm.org/D75869
We also need the (version, capabilities, extensions) triple on the
spv.module op. Thus far we have been using separate 'extensions'
and 'capabilities' attributes there, and 'version' is missing. Creating
a dedicated attribute for the triple allows us to reuse the assembly
form and verification.
Differential Revision: https://reviews.llvm.org/D75868
This was a previous experiment that didn't pan out and needs to be
replaced. Given that it has no current use or tests, delete it instead
so a new version can be started fresh.
HasNoSideEffect can now be implemented using the MemoryEffectInterface, removing the need to check multiple things for the same information. This also removes an easy foot-gun for users as 'Operation::hasNoSideEffect' would ignore operations that dynamically, or recursively, have no side effects. This also leads to an immediate improvement in some of the existing users, such as DCE, now that they have access to more information.
Differential Revision: https://reviews.llvm.org/D76036
The current mechanism for identifying constant operations is a bit hacky and extremely ad hoc, i.e. we explicitly check for 1-result, 0-operands, no side effects, and always foldable, and then assume that this is a constant. Adding a trait adds structure to this, and makes checking for a constant much more efficient as we can guarantee that all of these properties have already been verified.
Differential Revision: https://reviews.llvm.org/D76020
These terminator operations don't really have any side effects, and this allows for more accurate side-effect analysis for region operations. For example, currently we can't detect that a loop.for or affine.for is dead, because the affine.terminator is considered "side effecting".
Note: Marking as NoSideEffect doesn't mean that these operations can be opaquely erased.
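As an illustration of the affine case mentioned above, the loop body below holds only the implicit affine.terminator, so with the terminator no longer treated as side effecting, side-effect analysis can tell that the loop does nothing:
```mlir
// The body contains only the implicit affine.terminator; the loop can now be
// seen as side-effect free (though erasing it is still up to the client pass).
func @loop_with_empty_body() {
  affine.for %i = 0 to 10 {
  }
  return
}
```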
Differential Revision: https://reviews.llvm.org/D75888
Summary:
This replaces the direct lowering of vector.outerproduct to LLVM with progressive lowering into elementary vector ops, to avoid having similar lowering logic in several places.
NOTE1: with the new progressive rule, the lowered LLVM IR is slightly more elaborate than with the direct lowering, but the generated assembly is just as optimized; still, if we want to stay closer to the original, we should add a "broadcast on extract" to shuffle rewrite (rather than special-casing all the lowering steps).
NOTE2: the original outerproduct lowering code should now be removed, but some linalg tests work directly on vectors and contain some dead code, so this requires another CL.
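For context, the op whose lowering changes is shown below (shapes illustrative); under the progressive path it is expanded into elementary vector ops, roughly per-row extract/broadcast/fma/insert (the exact per-row op sequence is an assumption), instead of being translated to LLVM directly:
```mlir
// Illustrative 2x3 outer product accumulated into %acc : vector<2x3xf32>.
%res = vector.outerproduct %lhs, %rhs, %acc : vector<2xf32>, vector<3xf32>
```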
Reviewers: nicolasvasilache, andydavis1
Reviewed By: nicolasvasilache, andydavis1
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75956
Summary:
The C runner utils API was still not vanilla enough for certain use
cases on embedded ARM SDKs; this enables such cases.
Adding people more widely due to historical Windows-related build issues.
Differential Revision: https://reviews.llvm.org/D76031
MLIR does not have a Sphinx configuration; this is just leading to build
failures at the moment.
The website https://mlir.llvm.org/ is using the Hugo generator to
process the markdown files.
Summary:
affineDataCopyGenerate is a monolithic function that
combines several steps for good reasons, but this makes customizing
its behavior even harder. The two major steps performed by affineDataCopyGenerate are:
a) Identify interesting memrefs and collect their uses.
b) Create new buffers to forward these uses.
Step (a) actually requires tremendous customization options. One can see
that from the recently added filterMemRef parameter.
This patch adds a function that only does (b), in the hope that (a)
can be implemented directly by the callers. In fact, (a) is quite
simple if the caller has only one buffer to consider, or even one use.
Differential Revision: https://reviews.llvm.org/D75965
Summary:
Now that, thanks to ntv, we have the ability to parse and represent an affine
map with rank-0 results, viz. (i, j) -> (), we can pay off some engineering debt
by removing the special casing in the verification of such affine maps in
dot-product-flavored vector.contract operations.
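Concretely, the dot-product form in question uses a rank-0 result map for the accumulator (shapes and names illustrative):
```mlir
// The accumulator map (i) -> () has rank-0 results.
#dotp_accesses = [
  affine_map<(i) -> (i)>,
  affine_map<(i) -> (i)>,
  affine_map<(i) -> ()>
]
#dotp_trait = {
  indexing_maps = #dotp_accesses,
  iterator_types = ["reduction"]
}

%dot = vector.contract #dotp_trait %a, %b, %acc
  : vector<8xf32>, vector<8xf32> into f32
```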
Reviewers: nicolasvasilache, andydavis1, rriddle
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76028
Summary: In some situations the name of an attribute is not representable as a bare identifier; this revision adds support for those cases by formatting the name as a string instead. This has the added benefit of removing the identifier regex from the verifier.
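For example (a sketch using a hypothetical unregistered op and attribute names):
```mlir
// Name is a bare identifier: printed as-is.
"foo.op"() {valid_name = 10 : i64} : () -> ()
// Name is not a bare identifier: now printed/parsed as a string literal.
"foo.op"() {"invalid name!" = 10 : i64} : () -> ()
```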
Differential Revision: https://reviews.llvm.org/D75973
The three libs were recently added to the `mlir-cpu-runner`'s
`CMakeLists.txt` file. This prevents the runner from compiling on other
platforms (e.g., Power in my case). Native codegen is pulled in
by the ExecutionEngine library, so this is redundant in any case.
Differential Revision: https://reviews.llvm.org/D75916
Summary:
This patch adds some builtin operations for the gpu.all_reduce op:
- for integers only: `and`, `or`, `xor`
- for floats and integers: `min`, `max`
This is useful for higher-level dialects like OpenACC or OpenMP that can lower to the GPU dialect.
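For example, in the generic form (the operand and its type are illustrative):
```mlir
// Use the builtin `min` reduction instead of supplying a reduction region.
%reduced = "gpu.all_reduce"(%value) ({}) {op = "min"} : (f32) -> (f32)
```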
Differential Revision: https://reviews.llvm.org/D75766
Summary:
1-bit integers are sometimes tricky in different dialects. E.g., there are no
arithmetic instructions on 1-bit integers in SPIR-V, i.e., `spv.IMul %0, %1 : i1`
is not valid; instead, `spv.LogicalAnd %0, %1 : i1` is valid. Creating the op
directly makes lowering easier because we don't need to match a complicated
pattern like `!(!lhs && !rhs)`. It also matches the semantics better.
Also add assertions on inputs.
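For illustration:
```mlir
// Not valid SPIR-V: arithmetic on a 1-bit integer.
// %prod = spv.IMul %lhs, %rhs : i1
// Valid: create the logical op directly.
%and = spv.LogicalAnd %lhs, %rhs : i1
```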
Differential Revision: https://reviews.llvm.org/D75764
* Adds GpuLaunchFuncToVulkanLaunchFunc conversion pass.
* Moves serialization of the `spirv::Module` from the LaunchFuncToVulkanCalls pass to the newly created pass.
* Updates the LaunchFuncToVulkanCalls instrumentation pass, adding `initVulkan` and `deinitVulkan` runtime calls.
* Adds a `bindResource` call to bind a specific resource by the given descriptor set and descriptor binding.
* Eliminates static construction and destruction of `VulkanRuntimeManager`.
Differential Revision: https://reviews.llvm.org/D75192