Fusion of generic/indexed_generic operations with tensor_reshape by
expansion is disabled when the reshape only adds or removes unit
dimensions, since such fusion merely introduces unit trip-count loops.
Differential Revision: https://reviews.llvm.org/D94626
This revision adds support for using either operand or result types to anchor an optional group. It also removes the arbitrary restriction that type directives must refer to variables in the same group, which is overly limiting for a declarative format syntax.
Fixes PR#48784
Differential Revision: https://reviews.llvm.org/D95109
Refactor LinalgDependenceGraphElem to allow representing a dependence
from a producer result to a consumer.
With Linalg on tensors, the dependence between operations can be from
the result of the producer to the consumer. This change is just an NFC
refactoring of LinalgDependenceGraphElem to allow representing
both OpResult and OpOperand*.
Differential Revision: https://reviews.llvm.org/D95208
spv.Ordered/spv.Unordered are meant for OpenCL Kernel capability.
For Vulkan Shader capability, we should use spv.IsNan to check
whether a number is NaN.
Add a new pattern for converting `std.cmpf ord|uno` to spv.IsNan
and bump the pattern converting to spv.Ordered/spv.Unordered
to a higher benefit. The SPIR-V target environment will properly
select between these two patterns.
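Roughly, the new pattern produces the following for the `uno` case (operand
names are illustrative; the `ord` case is the logical negation of this):

```mlir
// cmpf "uno", %lhs, %rhs : f32  lowers roughly to:
%nan_lhs = spv.IsNan %lhs : f32
%nan_rhs = spv.IsNan %rhs : f32
%uno = spv.LogicalOr %nan_lhs, %nan_rhs : i1
```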
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95237
Previously we only autogenerated the availability for ops that
directly instantiate `SPV_Op` and expected other subclasses of
`SPV_Op` to define aggregated availability for all ops. This is
quite error prone, and we can miss capabilities for certain ops.
It is also arguable whether we should have multiple levels of
subclasses and try to deduplicate too much: having the availability
directly on the op is quite explicit and clear, and a few extra
lines of declarative code are fine.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95236
This PR only adds the coro intrinsics needed for the Async to LLVM lowering. Other intrinsics will be added as needed in follow-up PRs.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95143
- Fix argument names for subview and subtensor.
- Fix a typo in a comment on subtensor's method.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95211
With this, we have complete support for finding integer sample points in FlatAffineConstraints.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D95047
- Extend spirv::ConstantOp::getZero/One to handle float, vector of int, and vector of float.
- Refactor ZeroExtendI1Pattern to use getZero/One methods.
- Add one more test for lowering std.zexti which extends vector<4xi1> to vector<4xi64>.
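As a rough illustration of the lowering this supports (op spellings and value
names are approximate, not taken verbatim from the pattern):

```mlir
// std.zexti on a boolean vector:
%ext = zexti %pred : vector<4xi1> to vector<4xi64>
// is lowered to SPIR-V as a select between splat zero/one constants,
// built via getZero/getOne:
%zero = spv.Constant dense<0> : vector<4xi64>
%one  = spv.Constant dense<1> : vector<4xi64>
%sel  = spv.Select %pred, %one, %zero : vector<4xi1>, vector<4xi64>
```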
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D95120
Add a factory to create streams for logging the reproducer. This allows for more general logging (beyond a file) and logging the configuration and module separately (logged in order, configuration before module).
Also enable querying the filename of a ToolOutputFile.
Differential Revision: https://reviews.llvm.org/D94868
This extracts the implementation of getType, setType, and getBody from
FunctionSupport.h into the mlir::impl namespace and defines them
generically in FunctionSupport.cpp. This allows them to be used
elsewhere for any FunctionLike ops that use FunctionType for their
type signature.
Using the new helpers, FuncOpSignatureConversion is generalized to
work with all such FunctionLike ops. Convenience helpers are added to
configure the pattern for a given concrete FunctionLike op type.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95021
Define OrderedOp and UnorderedOp instructions in SPIR-V and convert
cmpf operations with the `ord` and `uno` predicates to these
instructions, respectively.
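For illustration (operand names are hypothetical), the conversion amounts to:

```mlir
// cmpf "ord", %lhs, %rhs : f32  becomes:
%ord = spv.Ordered %lhs, %rhs : f32
// cmpf "uno", %lhs, %rhs : f32  becomes:
%uno = spv.Unordered %lhs, %rhs : f32
```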
Differential Revision: https://reviews.llvm.org/D95098
The SPIR-V spec uses OpSpecConstantOp. Using an inconsistent name
makes the dialect generation scripts fail. Update to use the right
operation name, and fix the auto generation scripts as well.
Differential Revision: https://reviews.llvm.org/D95097
I attempted to write a test case for this, but the situations in which the kind is used for RegionDirective and ResultsDirective have zero overlap, meaning that there isn't a situation in which sharing the kind creates a conflict.
Differential Revision: https://reviews.llvm.org/D94988
Having this function in a public scope is helpful for registering dialects that
are defined at runtime and thus need a runtime-defined TypeID.
Also, a similar function in DialectRegistry, insert(TypeID, StringRef, ...), has
public scope.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95091
An `unrealized_conversion_cast` operation represents an unrealized conversion
from one set of types to another, and is used to enable the intermixing of
different type systems. This operation should not be attributed any special
representational or execution semantics; it is generally only intended to
satisfy the temporary intermixing of type systems during the conversion
of one type system to another.
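For example (the types here are hypothetical), a value that has not yet been
converted can be bridged into the target type system with:

```mlir
// Bridges a value of a source dialect type into the target type system;
// carries no semantics of its own and is expected to fold away once the
// surrounding conversion completes.
%converted = unrealized_conversion_cast %value : !foo.my_source_type to i64
```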
This operation was discussed in the following RFC (and ODM):
https://llvm.discourse.group/t/open-meeting-1-14-dialect-conversion-and-type-conversion-the-question-of-cast-operations/
Differential Revision: https://reviews.llvm.org/D94832
A cast-like operation is one that converts from a set of input types to a set of output types. The arity of the inputs may be from 0-N, whereas the arity of the outputs may be anything from 1-N. Cast-like operations are removable in cases where they produce a "no-op", i.e. when the input types and output types match 1-1.
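As a small sketch, using the `unrealized_conversion_cast` op from the previous
entry as an example, a cast whose input and output types already match 1-1 is
such a "no-op" and can be removed:

```mlir
// Input and output types match, so this cast is removable (folds to %arg).
%same = unrealized_conversion_cast %arg : i64 to i64
```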
Differential Revision: https://reviews.llvm.org/D94831
Rationale:
Since I made the argument that metadata helps with extra
verification checks, I better actually do that ;-)
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95072
A resumed coroutine can potentially deallocate the token/value/group and destroy the mutex before the std::unique_ptr destructor runs.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95037
Like SubView, SubTensor/SubTensorInsertOp are allowed to have rank-reducing/expanding semantics. In the case of SubTensorInsertOp, the rank of offsets/sizes/strides should be the rank of the destination tensor.
Also, add a builder flavor for SubTensorOp to return a rank-reduced tensor.
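A minimal sketch of the rank-reducing semantics (shapes and names are
illustrative):

```mlir
func @slice(%t: tensor<8x16x32xf32>) -> tensor<8x16x32xf32> {
  // Rank-reducing extraction: the unit dimension is dropped from the result.
  %slice = subtensor %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
      : tensor<8x16x32xf32> to tensor<16x32xf32>
  // For subtensor_insert, offsets/sizes/strides have the rank of the
  // destination tensor, even though the source is rank-reduced.
  %updated = subtensor_insert %slice into %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
      : tensor<16x32xf32> into tensor<8x16x32xf32>
  return %updated : tensor<8x16x32xf32>
}
```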
Differential Revision: https://reviews.llvm.org/D95076
The patch adapts the ROCm runtime wrapper due to subtle differences between the CUDA and the ROCm/HIP runtime APIs.
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D95027
This patch adds support for producer-consumer fusion scenarios with
multiple producer stores to the AffineLoopFusion pass. The patch
introduces some changes to the producer-consumer algorithm, including:
* For a given consumer loop, producer-consumer fusion iterates over its
producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the
consumer loop and visited in reverse program order (not strictly guaranteed)
to maximize the number of loops fused per iteration.
In general, these changes were needed to simplify the multi-store producer
support and remove some of the workarounds that were introduced in the past
to support more fusion cases under the single-store producer limitation.
This patch also preserves the existing functionality of AffineLoopFusion with
one minor change in behavior. Producer-consumer fusion didn't fuse scenarios
with escaping memrefs and multiple outgoing edges (from a single store).
Multi-store producer scenarios will usually (always?) have multiple outgoing
edges so we couldn't fuse any with escaping memrefs, which would greatly limit
the applicability of this new feature. Therefore, the patch enables fusion for
these scenarios. Please see the modified tests for specific details.
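A minimal sketch of a multi-store producer scenario now considered for fusion
(shapes and names are illustrative):

```mlir
func @multi_store_producer(%a: memref<10xf32>, %b: memref<10xf32>,
                           %c: memref<10xf32>, %cst: f32) {
  // Producer loop with two stores to different memrefs.
  affine.for %i = 0 to 10 {
    affine.store %cst, %a[%i] : memref<10xf32>
    affine.store %cst, %b[%i] : memref<10xf32>
  }
  // Consumer loop reading both memrefs; a producer-consumer fusion candidate.
  affine.for %i = 0 to 10 {
    %0 = affine.load %a[%i] : memref<10xf32>
    %1 = affine.load %b[%i] : memref<10xf32>
    %2 = addf %0, %1 : f32
    affine.store %2, %c[%i] : memref<10xf32>
  }
  return
}
```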
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D92876
Add a check for regions that do not implement the RegionBranchOpInterface. These are not
allowed in the current deallocation steps. Furthermore, we handle edge cases
where a single region is attached and the parent operation has no results.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=48575
Differential Revision: https://reviews.llvm.org/D94586
The runtime-wrappers depend on LLVMSupport, pulling in static initialization code (e.g. command line arguments). Dynamically loading multiple such libraries results in ODR violations.
So far this has not been an issue, but in D94421, I would like to load both the async-runtime and the cuda-runtime-wrappers as part of a cuda-runner integration test. When doing this, code that asserts that an option category is only registered once fails (note that I've only experienced this in Google's bazel where the async-runtime depends on LLVMSupport, but a similar issue would happen in cmake if more than one runtime-wrapper starts to depend on LLVMSupport).
The underlying issue is that we have a mix of static and dynamic linking. If all dependencies were loaded as shared objects (i.e. if LLVMSupport was linked dynamically to the runtime wrappers), each dependency would only get loaded once. However, linking dependencies dynamically would require special attention to paths (one could dynamically load the dependencies first given explicit paths). The simpler approach seems to be to link all dependencies statically into a single shared object.
This change basically applies the same logic that we have in the c_runner_utils: we have a shared object target that can be loaded dynamically, and we have a static library target that can be linked to other runtime-wrapper shared object targets.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D94399
Use cases with 16- or even 8-bit pointer/index structures have been identified.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95015
* Matches how all of the other shaped types are declared.
* No super principled reason for this ordering beyond that it makes the one that was different be like the rest.
* Also matches the ordering of things like ndarray, et al.
Reviewed By: ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94812
* This isn't exclusive with other mechanisms for more ODS-centric op definitions, but based on discussions, we feel that we will always benefit from a Python escape hatch, and that is the most natural way to write things that don't fit the mold.
* I suspect this facility needs further tweaking, and once it settles, I'll document it and add more tests.
* Added extensions for linalg, since it is unusable without them, and continued to evolve my e2e example.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D94752
* This allows us to hoist trait-level information for regions and sized variadics to class-level attributes (_ODS_REGIONS, _ODS_OPERAND_SEGMENTS, _ODS_RESULT_SEGMENTS).
* Eliminates some splicey Python-generated code in favor of a native helper for it.
* Makes it possible to implement custom, variadic, and region-based builders with one line of Python, without needing to manually code access to the segment attributes.
* Needs follow-on work for region-based callbacks and support for SingleBlockImplicitTerminator.
* A follow-up will actually add ODS support for generating custom Python builders that delegate to this new method.
* Also includes the start of an e2e sample for constructing linalg ops where this limitation was discovered (working progressively through this example and cleaning up as I go).
Differential Revision: https://reviews.llvm.org/D94738