Previously we only considered the version/capability/extension requirements
on ops themselves. Some types in SPIR-V also require special extensions
or capabilities to be used. For example, non-32-bit integers/floats
require different capabilities and/or extensions depending on
where they are used, because they may imply special hardware abilities.
This commit adds query methods to the SPIR-V type class hierarchy to support
querying extensions and capabilities. We don't go through ODS to
auto-generate such information, given that it is not available in the
SPIR-V machine-readable grammar and there are just a few types.
Differential Revision: https://reviews.llvm.org/D75875
This commit changes the definition of spv.module to use the #spv.vce
attribute for specifying the (version, capabilities, extensions) triple,
so that we can have a better API and custom assembly form. Now that
we have proper modelling of the triple, (de)serialization is wired up
to use it.
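For example, a module in the new custom assembly form might look like
the following (a sketch; the capability and extension lists are
illustrative):
```
spv.module Logical GLSL450 requires #spv.vce<v1.0, [Shader], [SPV_KHR_storage_buffer_storage_class]> {
  spv.func @do_nothing() "None" {
    spv.Return
  }
}
```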
With the new UpdateVCEPass, we don't need to manually specify the
required extensions and capabilities anymore when creating a spv.module.
One just needs to run UpdateVCEPass before serialization to get the
needed version/extensions/capabilities.
Differential Revision: https://reviews.llvm.org/D75872
Creates an operation pass that deduces and attaches the minimal
version/capabilities/extensions requirements for spv.module ops.
For each spv.module op, this pass requires a `spv.target_env` attribute on
it or an enclosing module-like op to drive the deduction. The reason is
that an op can be enabled by multiple extensions/capabilities, so we need
to know which one to pick. `spv.target_env` gives the hard limit on what
the target environment can support; this pass deduces what is
actually needed for a specific spv.module op.
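For illustration, a sketch of how the target environment drives the
deduction (the limits dictionary values are illustrative):
```
module attributes {
  spv.target_env = #spv.target_env<
    #spv.vce<v1.3, [Shader, GroupNonUniform], [SPV_KHR_storage_buffer_storage_class]>,
    {max_compute_workgroup_invocations = 128 : i32,
     max_compute_workgroup_size = dense<[128, 128, 64]> : vector<3xi32>}>
} {
  // The pass attaches a minimal #spv.vce triple to this op, never
  // exceeding what the spv.target_env above allows.
  spv.module Logical GLSL450 {
  }
}
```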
Differential Revision: https://reviews.llvm.org/D75870
We also need the (version, capabilities, extensions) triple on the
spv.module op. Thus far we have been using separate 'extensions'
and 'capabilities' attributes there, and 'version' is missing. Creating
a separate attribute for the triple allows us to reuse the assembly
form and verification.
Differential Revision: https://reviews.llvm.org/D75868
The current mechanism for identifying constants is a bit hacky and extremely ad hoc: we explicitly check for 1 result, 0 operands, no side effects, and always-foldable, and then assume that the op is a constant. Adding a trait adds structure to this, and makes checking for a constant much more efficient, as we can guarantee that all of these things have already been verified.
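A sketch of how a dialect would opt in (dialect and op names
hypothetical):
```
def MyDialect_ConstantOp : Op<MyDialect, "constant", [ConstantLike, NoSideEffect]> {
  let arguments = (ins AnyAttr:$value);
  let results = (outs AnyType:$result);
  let hasFolder = 1;
}
```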
Differential Revision: https://reviews.llvm.org/D76020
Summary:
This replaces the direct lowering of vector.outerproduct to LLVM with progressive lowering into elementary vector ops, to avoid duplicating similar lowering logic in several places.
NOTE1: With the new progressive rules, the lowered LLVM IR is slightly more elaborate than with the direct lowering, but the generated assembly is just as optimized; still, if we want to stay closer to the original, we should add a "broadcast on extract"-to-shuffle rewrite (rather than special-casing all the lowering steps).
NOTE2: The original outerproduct lowering code should now be removed, but some Linalg tests work directly on vectors and contain some dead code, so this requires another CL.
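For reference, the op being lowered (a sketch; the progressive path
expands each row into broadcast/fma/insert-style elementary vector ops):
```
%0 = vector.outerproduct %a, %b : vector<4xf32>, vector<8xf32>
%1 = vector.outerproduct %a, %b, %acc : vector<4xf32>, vector<8xf32>
```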
Reviewers: nicolasvasilache, andydavis1
Reviewed By: nicolasvasilache, andydavis1
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75956
Summary:
This patch adds some builtin operations for the gpu.all_reduce op:
- for integers only: `and`, `or`, `xor`
- for floats and integers: `min`, `max`
This is useful for higher-level dialects like OpenACC or OpenMP that can lower to the GPU dialect.
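In the generic form, the new builtins would be used like this (a
sketch):
```
%0 = "gpu.all_reduce"(%val) ({}) {op = "max"} : (f32) -> (f32)
%1 = "gpu.all_reduce"(%bits) ({}) {op = "xor"} : (i32) -> (i32)
```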
Differential Revision: https://reviews.llvm.org/D75766
This revision takes advantage of the empty AffineMap to specify the
0-D edge case. This allows removing a bunch of annoying corner cases
that ended up impacting users of Linalg.
Differential Revision: https://reviews.llvm.org/D75831
Summary:
Paying off some technical debt in VectorOps, where I introduced a special
op for fusing an accumulator into a reduction to avoid some issues around
printing and parsing an optional accumulator. This CL merges the two
into one op again and does things the right way (it would still be nice
to have "assemblyFormat" for optional operands though....).
Reviewers: nicolasvasilache, andydavis1, ftynse, rriddle
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75699
Summary:
This revision removes all of the functionality related to successor operands on the core Operation class. This greatly simplifies a lot of handling of operands, as well as successors. For example, DialectConversion no longer needs a special "matchAndRewrite" for branching terminator operations. (Note: the existing method was also broken for operations with variadic successors!)
This also enables terminator operations to define their own relationships with successor arguments, instead of the hardcoded "pass-through" behavior that exists today.
Differential Revision: https://reviews.llvm.org/D75318
This attribute details the segment sizes for operand groups within the operation. This revision adds support for automatically populating this attribute in the declarative parser.
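A sketch with a hypothetical op: given two variadic operand groups, the
declarative parser can now compute and attach operand_segment_sizes
itself:
```
def MySelectOp : Op<MyDialect, "select_like", [AttrSizedOperandSegments]> {
  let arguments = (ins I1:$cond,
                       Variadic<AnyType>:$trueOperands,
                       Variadic<AnyType>:$falseOperands);
  let assemblyFormat = "$cond `,` `(` $trueOperands `)` `,` `(` $falseOperands `)` attr-dict `:` type($trueOperands) `,` type($falseOperands)";
}
```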
Differential Revision: https://reviews.llvm.org/D75315
This allows for simplifying OpDefGen, as well as providing specialized accessors for the different successor counts. This mirrors the existing traits for operands and results.
Differential Revision: https://reviews.llvm.org/D75313
The current setup of the GPU dialect is to model both the host- and
device-side codegen. For some users (like IREE) the host-side modeling
might not directly fit their use case, but device-side codegen is still
valuable. The first step in accessing just the device-side functionality
of the GPU dialect is to allow creating just a gpu.func operation from
a gpu.launch operation. In addition, this change "inlines"
operations into the gpu.func op at creation time instead of this
being a later step.
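The outlined form might look like the following sketch (names and body
illustrative):
```
gpu.module @kernels {
  gpu.func @launch_body(%arg0: memref<8xf32>) kernel {
    %c0 = constant 0 : index
    %0 = load %arg0[%c0] : memref<8xf32>
    store %0, %arg0[%c0] : memref<8xf32>
    gpu.return
  }
}
```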
Differential Revision: https://reviews.llvm.org/D75287
Fix lowering to loops when the output has zero rank.
While lowering to loops, no indices should be used in the load/store
operation if the buffer is zero-rank.
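After the fix, the generated accesses use empty index lists (names
illustrative):
```
%0 = load %src[] : memref<f32>
store %0, %dst[] : memref<f32>
```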
Differential Revision: https://reviews.llvm.org/D75391
This commit updates the SPIR-V dialect to support integer signedness
by relaxing various checks from requiring signless integers to allowing
normal integers. The hack for spv.Bitcast can now be removed.
Differential Revision: https://reviews.llvm.org/D75611
A previous commit added support for integer signedness in the C++
IntegerType. This change introduces ODS definitions for
integer types and integer (element) attributes w.r.t. signedness.
This commit also updates various existing definitions' descriptions
to mention signless where suitable to make them clearer.
Positive and non-negative integer attributes are removed to avoid
an explosion of subclasses. Instead, one should use more atomic
constraints together with Confined to model that. For example,
`Confined<..., [IntPositive]>`.
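For instance, a hypothetical op constraining an attribute to positive
values would now be written as:
```
def MyRepeatOp : Op<MyDialect, "repeat"> {
  let arguments = (ins
    AnyType:$input,
    // Replaces the removed positive-integer attribute subclass.
    Confined<I32Attr, [IntPositive]>:$count
  );
}
```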
Differential Revision: https://reviews.llvm.org/D75610
This matches loops with an affine.min upper bound, limiting the trip
count to a constant, and rewrites them into two loops: one with a constant
upper bound and one with a variable upper bound. The assumption is that
the constant upper bound loop will be unrolled and vectorized, which is
preferable if this is the hot path.
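An illustrative input shape for the rewrite (the pass produces a
constant-bound main loop plus a variable-bound remainder loop):
```
%c0 = constant 0 : index
%c1 = constant 1 : index
// The trip count is capped at 64 by the affine.min.
%ub = affine.min affine_map<(d0)[s0] -> (64, s0 - d0)>(%i)[%n]
loop.for %j = %c0 to %ub step %c1 {
  "foo.use"(%j) : (index) -> ()
}
```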
Differential Revision: https://reviews.llvm.org/D75240
Summary:
AffineApplyNormalizer provides common logic for folding affine maps that appear
in affine.apply into other affine operations that use the result of said
affine.apply. In the process, the affine maps of both operations are composed.
During the composition `A.compose(B)`, the symbols from map A are placed
before those of map B in a single concatenated symbol list. However,
AffineApplyNormalizer was ordering the operands of the operation being
normalized by iteratively appending the symbols into a single list according to
the operand order, regardless of whether these operands are symbols of the
current operation or of the map that is being folded into it. This could lead
to a wrong order of symbols and, when the symbols were bound to constant values,
to visibly incorrect folding of constants into affine maps, as reported in
PR45031. Make sure symbol operands to the current operation are always placed
before symbols coming from the folded maps.
Update the test that was exercising the incorrect folder behavior. For some
reason, the order of symbol operands was swapped in the test input compared to
the previous operations, making it easy to assume the correct maps were
produced whereas they were swapping the symbols back due to the problem
described above.
Closes https://bugs.llvm.org/show_bug.cgi?id=45031
Differential Revision: https://reviews.llvm.org/D75247
This commit handles folding spv.LogicalAnd/spv.LogicalOr when
one of the operands is constant true/false.
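For example (a sketch):
```
%true = spv.constant true
%0 = spv.LogicalAnd %x, %true : i1    // folds to %x
%1 = spv.LogicalOr %x, %true : i1     // folds to constant true
```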
Differential Revision: https://reviews.llvm.org/D75195
Summary:
The mapper assigns annotations to loop.parallel operations that
are compatible with the loop to gpu mapping pass. The outermost
loop uses the grid dimensions, followed by block dimensions. All
remaining loops are mapped to sequential loops.
Differential Revision: https://reviews.llvm.org/D74963
Previously, C++ test passes for SPIR-V were put under
test/Dialect/SPIRV. Move them to test/lib/Dialect/SPIRV
to create a better structure.
Also fixed one of the test passes to use the new
PassRegistration mechanism.
Differential Revision: https://reviews.llvm.org/D75066
This exploits the fact that the iterations of parallel loops are
independent so tiling becomes just an index transformation. This pass
only tiles the innermost loop of a loop nest.
The ultimate goal is to allow vectorization of the tiled loops, but I
don't think we're there yet with the current rewriting, as the tiled
loops don't have a constant trip count.
Differential Revision: https://reviews.llvm.org/D74954
This revision adds support for formatting successor variables in a similar way to operands, attributes, etc.
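A sketch with a hypothetical branch op, where the successor appears
directly in the format string:
```
def MyBranchOp : Op<MyDialect, "br", [Terminator]> {
  let successors = (successor AnySuccessor:$dest);
  let assemblyFormat = "$dest attr-dict";
}
```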
Differential Revision: https://reviews.llvm.org/D74789
This revision adds support in ODS for specifying the successors of an operation. Successors are specified via the `successors` list:
```
let successors = (successor AnySuccessor:$target, AnySuccessor:$otherTarget);
```
Differential Revision: https://reviews.llvm.org/D74783
This patch implements the RFCs proposed here:
https://llvm.discourse.group/t/rfc-modify-ifop-in-loop-dialect-to-yield-values/463
https://llvm.discourse.group/t/rfc-adding-operands-and-results-to-loop-for/459/19
It introduces the following changes:
- All loop op regions, except for ReduceOp's, terminate with a YieldOp.
- YieldOp can have variadic operands that are used to return values out of IfOp and ForOp regions.
- Change IfOp and ForOp syntax and representation to define values (see the sketch after this list).
- Add unit tests and update the .td documentation.
- YieldOp is the terminator for loop.for/if/parallel.
- YieldOp has a custom parser and printer.
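A sketch of the new syntax for an IfOp yielding a value (values
illustrative):
```
%x = loop.if %cond -> (f32) {
  %0 = constant 1.0 : f32
  loop.yield %0 : f32
} else {
  %1 = constant 2.0 : f32
  loop.yield %1 : f32
}
```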
Lowering is not supported at the moment, and will be in a follow-up PR.
Thanks.
Reviewed By: bondhugula, nicolasvasilache, rriddle
Differential Revision: https://reviews.llvm.org/D74174
Summary:
This implements the last step for lowering vector.contract progressively
to LLVM IR (except for masks). Multi-dimensional reductions that remain
after expanding all parallel dimensions are lowered into simpler
vector.contract operations until a trivial 1-D reduction remains.
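The trivial base case that remains after expansion is a 1-D reduction,
e.g. a dot product (a sketch):
```
%0 = vector.contract {
       indexing_maps = [affine_map<(d0) -> (d0)>,
                        affine_map<(d0) -> (d0)>,
                        affine_map<(d0) -> ()>],
       iterator_types = ["reduction"]
     } %a, %b, %acc : vector<16xf32>, vector<16xf32> into f32
```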
Reviewers: nicolasvasilache, andydavis1
Reviewed By: andydavis1
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74880
Summary:
Lowers all free/batch dimensions in a vector.contract progressively
into simpler vector.contract operations until a direct vector.reduction
operation is reached. Then lowers 1-D reductions into vector.reduce.
Still TBD: multi-dimensional contractions that remain after removing all the parallel dims.
Reviewers: nicolasvasilache, andydavis1, rriddle
Reviewed By: andydavis1
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74797
Summary:
This trait takes three arguments: lhs, rhs, transformer. It verifies that the type of 'rhs' matches the type of 'lhs' when the given 'transformer' is applied to 'lhs'. This allows for adding constraints like: "the type of 'a' must match the element type of 'b'". A follow-up revision will add support in the declarative parser for using these equality constraints to port more C++ parsers to the declarative form.
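For example, a hypothetical load-like op whose result type must equal
the element type of its memref operand:
```
def MyLoadOp : Op<MyDialect, "load", [
    TypesMatchWith<"result matches the element type of 'base'",
                   "base", "result",
                   "$_self.cast<MemRefType>().getElementType()">]> {
  let arguments = (ins AnyMemRef:$base);
  let results = (outs AnyType:$result);
}
```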
Differential Revision: https://reviews.llvm.org/D74647
Summary:
The .row.col variant turns out to be the popular one, contrary to what I
thought (.row.row). Since .row.col is so prevalent (judging from
cuDNN's behavior), I'm going to remove the .row.row support here, which
makes the patch a little bit simpler.
Reviewers: ftynse
Subscribers: jholewinski, bixia, sanjoy.google, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74655
Fixing a bug where using a zero-rank shaped type operand to
linalg.generic ops hit an unrelated assert. This also meant that
lowering the operation to loops was not supported. Adds roundtrip
tests and a lowering-to-loops test for zero-rank shaped type operands,
with fixes to make the tests pass.
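A zero-rank operand example, in roughly the era's linalg.generic syntax
(a sketch; the trait attribute is abbreviated):
```
#scalar_trait = {
  args_in = 1,
  args_out = 1,
  indexing_maps = [affine_map<() -> ()>, affine_map<() -> ()>],
  iterator_types = []
}
func @zero_rank(%in: memref<f32>, %out: memref<f32>) {
  linalg.generic #scalar_trait %in, %out {
  ^bb0(%a: f32, %b: f32):
    linalg.yield %a : f32
  } : memref<f32>, memref<f32>
  return
}
```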
Differential Revision: https://reviews.llvm.org/D74638
Summary:
Linalg's promotion pass only supported f32 buffers due to how the
zero value was built for the `fill` operation.
Moreover, `promoteSubViewOperands` was returning a vector with one entry
per float subview while omitting integer subviews. For a program
with only integer subviews, the returned vector would be of size 0.
However, `promoteSubViewsOperands` would try to access a non-zero
number of entries of this vector, resulting in a segfault.
Reviewers: nicolasvasilache, ftynse
Reviewed By: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74532
Summary: This revision adds support to the declarative parser for formatting enum attributes in the symbolized form. It uses this new functionality to port several of the SPIRV parsers over to the declarative form.
Differential Revision: https://reviews.llvm.org/D74525
Summary:
This sets the basic framework for lowering vector.contract progressively
into simpler vector.contract operations until a direct vector.reduction
operation is reached. More details will be filled out progressively as well.
Reviewers: nicolasvasilache
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74520
Thus far we have been using the builtin func op to model SPIR-V functions.
That was because the builtin func op used to have special treatment in
various parts of the core codebase (e.g., pass pipelines, etc.) and
it was easy to bootstrap the development of the SPIR-V dialect. But
nowadays, with general op concepts and region support, we don't have
such limitations and it's time to tighten the SPIR-V dialect for
completeness.
This commit introduces a spv.func op to properly model SPIR-V
functions. Compared to the builtin func op, it provides the following
benefits:
* We can control the full op, so we can integrate SPIR-V information
bits (e.g., function control) in a more integrated way, define
our own assembly form, and enforce better verification.
* We can have a better dialect and library boundary. At the current
moment only functions are modelled with an external op. With this
change, all ops modelling SPIR-V concepts will be spv.* ops and
registered to the SPIR-V dialect.
* We don't need to special-case the func op anymore when creating a
ConversionTarget declaring the SPIR-V dialect as legal. This is quite
important given we'll see more and more conversions in the future.
In the process, this bumps a few FuncOp methods up to the FunctionLike trait.
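A sketch of the new op's assembly form, including the function control
keyword (body illustrative):
```
spv.func @inc(%arg0: i32) -> i32 "None" {
  %one = spv.constant 1 : i32
  %0 = spv.IAdd %arg0, %one : i32
  spv.ReturnValue %0 : i32
}
```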
Differential Revision: https://reviews.llvm.org/D74226
In the previous state, we were relying on forcing the linker to include
all libraries in the final binary and on global initializers to self-register
every piece of the system. This change helps move away from this model and
allows users to compose pieces more freely. The current change only "fixes"
the dialect registration and avoids relying on "whole link" for the passes.
The translation is still relying on the global registry, and some refactoring
is needed to make this all more convenient.
Differential Revision: https://reviews.llvm.org/D74461
Summary:
After D72555 landed, `linalg.indexed_generic` also accepts ranked
tensors as input and output. Add a test for it.
Differential Revision: https://reviews.llvm.org/D74267