This patch adds support for producer-consumer fusion scenarios with
multiple producer stores to the AffineLoopFusion pass. The patch
introduces some changes to the producer-consumer algorithm, including:
* For a given consumer loop, producer-consumer fusion iterates over its
producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the
consumer loop and visited in reverse program order (though this order is not
strictly guaranteed) to maximize the number of loops fused per iteration.
In general, these changes were needed to simplify the multi-store producer
support and remove some of the workarounds that were introduced in the past
to support more fusion cases under the single-store producer limitation.
This patch also preserves the existing functionality of AffineLoopFusion with
one minor change in behavior. Previously, producer-consumer fusion did not fuse
scenarios with escaping memrefs and multiple outgoing edges (from a single
store). Multi-store producer scenarios will usually, if not always, have
multiple outgoing edges, so none of them could be fused when the memref
escapes, which would greatly limit the applicability of this new feature.
Therefore, the patch enables fusion for these scenarios. Please see the
modified tests for specific details.
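As a rough illustration (the IR below is a hand-written sketch, not taken from
the patch or its tests), a producer loop with more than one store to the same
memref, like the following, can now be considered for fusion into its consumer
loop:

  func @multi_store_producer(%in : memref<10xf32>, %out : memref<10xf32>) {
    %buf = alloc() : memref<10xf32>
    affine.for %i = 0 to 10 {
      %v = affine.load %in[%i] : memref<10xf32>
      affine.store %v, %buf[%i] : memref<10xf32>   // first producer store
      %w = addf %v, %v : f32
      affine.store %w, %buf[%i] : memref<10xf32>   // second producer store
    }
    affine.for %i = 0 to 10 {
      %v = affine.load %buf[%i] : memref<10xf32>   // consumer load
      affine.store %v, %out[%i] : memref<10xf32>
    }
    return
  }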
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D92876
This revision starts evolving the APIs to manipulate ops with offsets, sizes and strides towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult.
The objective, in the future, is to allow such manipulations all the way to the level of ODS to avoid all the genuflexions involved in distinguishing between values and attributes for generic constant foldings.
Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr renaming.
Differential Revision: https://reviews.llvm.org/D95310
This revision addresses a remaining comment that was overlooked in https://reviews.llvm.org/D95243:
the pad hoisting transformation is made to additionally bail out on side-effecting ops other than LoopLikeOps.
This transformation anchors on a padding op whose result is only used as an input
to a Linalg op and pulls it out of a given number of loops.
The result is a packing of padded tiles of ops that is amortized just before
the outermost loop from which the pad operation is hoisted.
Differential revision: https://reviews.llvm.org/D95243
This revision allows the base Linalg tiling pattern to optionally require padding to
a constant bounding shape.
When requested, a simple analysis is performed, similar to buffer promotion.
A temporary `linalg.simple_pad` op is added to model padding for the purpose of
connecting the dots. This will be replaced by a more fleshed out `linalg.pad_tensor`
op when it is available.
In the meantime, this temporary op serves the purpose of exhibiting the necessary
properties required from a more fleshed out pad op, to compose with transformations
properly.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D95149
* As discussed, fixes the ordering of (operands, results) -> (results, operands) in various `create` like methods.
* Fixes a syntax error in an ODS accessor method.
* Removes the linalg example in favor of a test case that exercises the same.
* Fixes FuncOp visibility to properly use None instead of the empty string and defaults it to None.
* Implements what was documented for requiring that trailing __init__ args `loc` and `ip` are keyword only.
* Adds a check to `InsertionPoint.insert` so that if attempting to insert past the terminator, an exception is raised telling you what to do instead. Previously, this would crash downstream (i.e. when trying to print the resultant module).
* Renames `_ods_build_default` -> `build_generic` and documents it.
* Removes `result` from the list of prohibited words and, for single-result ops, defaults to naming the result `result`, thereby matching expectations and what is already implemented on the base class.
* This was intended to be a relatively small set of changes to be inlined with the broader support for ODS generating the most specific builder, but it spidered out once actually testing various combinations, so rolling up separately.
Differential Revision: https://reviews.llvm.org/D95320
* Adds a flag to MlirOperationState to enable result type inference using the InferTypeOpInterface.
* I chose this level of implementation for a couple of reasons:
a) The creation flow is naturally where generated and custom builder code will be invoking such a thing
b) it is a bit more efficient to share the data structure and unpacking vs having a standalone entry-point
c) we can always decide to expose more of these interfaces with first-class APIs later, but that doesn't preclude continuing to use this one in this way (and less API surface area for common things is better for API stability and evolution).
* I struggled to find an appropriate way to test it since we don't link the test dialect into anything CAPI accessible at present. I opted instead for one of the simplest ops I found in a regular dialect which implements the interface.
* This does not do any trait-based type selection. That will be left to generated tablegen wrappers.
Differential Revision: https://reviews.llvm.org/D95283
Fusion of generic/indexed_generic operations with tensor_reshape by
expansion is disabled when the latter just adds/removes unit dimensions,
since such a fusion only adds unit-trip-count loops.
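For illustration (a hand-written sketch, not taken from the tests), a reshape
of the following form only inserts a unit dimension, so fusing it into its
producer by expansion would only introduce a unit-trip-count loop:

  // Only adds a unit dimension: tensor<4x5xf32> -> tensor<4x1x5xf32>.
  %0 = linalg.tensor_reshape %arg0
         [affine_map<(d0, d1, d2) -> (d0, d1)>, affine_map<(d0, d1, d2) -> (d2)>]
         : tensor<4x5xf32> into tensor<4x1x5xf32>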
Differential Revision: https://reviews.llvm.org/D94626
This revision adds support for using either operand or result types to anchor an optional group. It also removes the arbitrary restriction that type directives must refer to variables in the same group, which is overly limiting for a declarative format syntax.
Fixes PR#48784
Differential Revision: https://reviews.llvm.org/D95109
NFC: Allow LinalgDependenceGraphElem to represent a dependence from producer result to consumer.
With Linalg on tensors the dependence between operations can be from
the result of the producer to the consumer. This change just does an
NFC refactoring of the LinalgDependenceGraphElem to allow representing
both OpResult and OpOperand*.
Differential Revision: https://reviews.llvm.org/D95208
spv.Ordered/spv.Unordered are meant for OpenCL Kernel capability.
For Vulkan Shader capability, we should use spv.IsNan to check
whether a number is NaN.
Add a new pattern for converting `std.cmpf ord|uno` to spv.IsNan
and bump the pattern converting to spv.Ordered/spv.Unordered
to a higher benefit. The SPIR-V target environment will properly
select between these two patterns.
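As a rough sketch of the intended lowering (hand-written IR, not the literal
pattern output), a `cmpf "uno"` on two f32 values can be expressed with
spv.IsNan as:

  // %u = cmpf "uno", %a, %b : f32  lowers, under the Shader capability, to:
  %na = spv.IsNan %a : f32
  %nb = spv.IsNan %b : f32
  %u  = spv.LogicalOr %na, %nb : i1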
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95237
Previously we only autogenerated the availability for ops directly
instantiating `SPV_Op` and expected other subclasses of `SPV_Op` to
define aggregated availability for all ops. This is quite error-prone
and we can miss capabilities for certain ops. Also, it's debatable
whether we should have multiple levels of subclasses and try to
deduplicate too much: having the availability directly on the op can
be quite explicit and clear. A few extra lines of declarative code
are fine.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95236
This PR only has coro intrinsics needed for the Async to LLVM lowering. Will add other intrinsics as needed in the followup PRs.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95143
- Extend spirv::ConstantOp::getZero/One to handle float, vector of int, and vector of float.
- Refactor ZeroExtendI1Pattern to use getZero/One methods.
- Add one more test for lowering std.zexti which extends vector<4xi1> to vector<4xi64>.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D95120
Define OrderedOp and UnorderedOp instructions in SPIR-V and convert
cmpf operations with the `ord` and `uno` predicates to these instructions,
respectively.
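As a sketch (hand-written IR, not taken from the tests), the intended mapping
is roughly:

  %ord = cmpf "ord", %a, %b : f32   // converts to:  %ord = spv.Ordered %a, %b : f32
  %uno = cmpf "uno", %a, %b : f32   // converts to:  %uno = spv.Unordered %a, %b : f32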
Differential Revision: https://reviews.llvm.org/D95098
An `unrealized_conversion_cast` operation represents an unrealized conversion
from one set of types to another, which is used to enable the intermixing of
different type systems. This operation should not be attributed any special
representational or execution semantics, and is generally only intended to be
used to satisfy the temporary intermixing of type systems during the conversion
of one type system to another.
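For example (a sketch with hypothetical !foo and !bar types, in the spirit of
the op documentation):

  // Temporary cast from a source type system to a target type system during
  // dialect conversion; expected to fold away once conversion completes.
  %result = unrealized_conversion_cast %operand : !foo.type to !bar.lowered_type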
This operation was discussed in the following RFC (and ODM):
https://llvm.discourse.group/t/open-meeting-1-14-dialect-conversion-and-type-conversion-the-question-of-cast-operations/
Differential Revision: https://reviews.llvm.org/D94832
Like SubView, SubTensor/SubTensorInsertOp are allowed to have rank-reducing/expanding semantics. In the case of SubTensorInsertOp, the rank of offsets/sizes/strides should be the rank of the destination tensor.
Also, add a builder flavor for SubTensorOp to return a rank-reduced tensor.
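For illustration (a hand-written sketch, not taken from the tests), a
rank-reducing subtensor and a subtensor_insert whose offsets/sizes/strides
have the rank of the 2-D destination might look like:

  // Rank-reducing extraction: the unit leading dimension is dropped.
  %slice = subtensor %t[0, 0] [1, 4] [1, 1] : tensor<8x4xf32> to tensor<4xf32>
  // Insertion back into the 2-D destination; offsets/sizes/strides are rank 2.
  %r = subtensor_insert %slice into %t[0, 0] [1, 4] [1, 1] : tensor<4xf32> into tensor<8x4xf32>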
Differential Revision: https://reviews.llvm.org/D95076
Add a check for regions that do not implement the RegionBranchOpInterface, which is
not allowed in the current deallocation steps. Furthermore, handle the edge case
where a single region is attached and the parent operation has no results.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=48575
Differential Revision: https://reviews.llvm.org/D94586
The runtime-wrappers depend on LLVMSupport, pulling in static initialization code (e.g. command line arguments). Dynamically loading multiple such libraries results in ODR violations.
So far this has not been an issue, but in D94421, I would like to load both the async-runtime and the cuda-runtime-wrappers as part of a cuda-runner integration test. When doing this, code that asserts that an option category is only registered once fails (note that I've only experienced this in Google's bazel where the async-runtime depends on LLVMSupport, but a similar issue would happen in cmake if more than one runtime-wrapper starts to depend on LLVMSupport).
The underlying issue is that we have a mix of static and dynamic linking. If all dependencies were loaded as shared objects (i.e. if LLVMSupport was linked dynamically to the runtime wrappers), each dependency would only get loaded once. However, linking dependencies dynamically would require special attention to paths (one could dynamically load the dependencies first given explicit paths). The simpler approach seems to be to link all dependencies statically into a single shared object.
This change basically applies the same logic that we have in the c_runner_utils: we have a shared object target that can be loaded dynamically, and we have a static library target that can be linked to other runtime-wrapper shared object targets.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D94399
Use cases with 16- or even 8-bit pointer/index structures have been identified.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95015
* Matches how all of the other shaped types are declared.
* No super principled reason for this ordering beyond that it makes the one that was different be like the rest.
* Also matches ordering of things like ndarray, et al.
Reviewed By: ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94812
* This allows us to hoist trait level information for regions and sized-variadic to class level attributes (_ODS_REGIONS, _ODS_OPERAND_SEGMENTS, _ODS_RESULT_SEGMENTS).
* Eliminates some splicey python generated code in favor of a native helper for it.
* Makes it possible to implement custom, variadic and region based builders with one line of python, without needing to manually code access to the segment attributes.
* Needs follow-on work for region based callbacks and support for SingleBlockImplicitTerminator.
* A follow-up will actually add ODS support for generating custom Python builders that delegate to this new method.
* Also includes the start of an e2e sample for constructing linalg ops where this limitation was discovered (working progressively through this example and cleaning up as I go).
Differential Revision: https://reviews.llvm.org/D94738
In prehistorical times, AffineApplyOp was allowed to produce multiple values.
This allowed the creation of intricate SSA use-def chains.
AffineApplyNormalizer was originally introduced as a means of reusing the AffineMap::compose method to write SSA use-def chains.
Unfortunately, symbols that were produced by an AffineApplyOp needed to be promoted to dims and reordered for the mathematical composition to be valid.
Since then, single result AffineApplyOp became the law of the land but the original assumptions were not revisited.
This revision revisits these assumptions and retires AffineApplyNormalizer.
Differential Revision: https://reviews.llvm.org/D94920
The operation is an identity if the values yielded by the operation
are the arguments of the basic block of that operation's body. Add this missing check.
Differential Revision: https://reviews.llvm.org/D94819
This commit adds support to generate an additional builder for
each named op that has attributes. This gives a better experience
when creating the named ops.
Along the way adds support for i64.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D94733
This is a very minor improvement during iteration graph construction.
If the first attempt considering the dimension order of all tensors fails,
a second attempt is made using the constraints of sparse tensors only.
Dense tensors prefer dimension order (locality) but provide random access
if needed, enabling the compilation of more sparse kernels.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D94709
With the recent changes to linalg on tensors semantics, tiling works
out-of-the-box for generic operations. Add a test to verify that,
along with some minor refactoring.
Differential Revision: https://reviews.llvm.org/D93077
Add a canonicalization to replace uses of the result of a linalg
operation on tensors in a dim operation with one of the operands of
the linalg operation instead. This allows the linalg op itself to be
deleted when all its non-dim uses are removed (say through tiling, etc.).
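As a sketch (hypothetical IR, not taken from the tests), a dim of a linalg
result can be rewritten to a dim of one of its operands:

  %0 = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
  %c0 = constant 0 : index
  %d0 = dim %0, %c0 : tensor<?x?xf32>
  // canonicalizes roughly to:  %d0 = dim %a, %c0 : tensor<?x?xf32>
  // leaving %0 free to be erased once its remaining uses are gone.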
Differential Revision: https://reviews.llvm.org/D93076
linalg.generic/indexed_generic operations on tensors whose body just
yields the (non-induction variable) arguments of the operation
can be canonicalized by replacing uses of the result with the
corresponding arguments.
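For illustration (a hand-written sketch, not taken from the tests), a generic
op whose body only yields its input block argument can have uses of %0
replaced by %arg0:

  #id = affine_map<(d0) -> (d0)>
  %0 = linalg.generic {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
       ins(%arg0 : tensor<?xf32>) outs(%init : tensor<?xf32>) {
    ^bb0(%in : f32, %out : f32):
      linalg.yield %in : f32
  } -> tensor<?xf32>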
Differential Revision: https://reviews.llvm.org/D94581
The standard and gpu dialect both have `alloc` operations which use the
memory effect `MemAlloc`. In both cases, it is specified on both the
operation itself and on the result. This results in two memory effects
being created for these operations. When `MemAlloc` is defined on an
operation, it represents some background effect which the compiler
cannot reason about, and inhibits the ability of the compiler to
remove dead `std.alloc` operations. This change removes the unneeded
`MemAlloc` effect from these operations and leaves the effect on the
result, which allows dead allocs to be erased.
There is the same problem, but to a lesser extent, with MemFree, MemRead
and MemWrite. Over-specifying these traits is not currently inhibiting
any optimization.
Differential Revision: https://reviews.llvm.org/D94662