Define OrderedOp and UnorderedOp instructions in SPIR-V and convert
cmpf operations with the `ord` and `uno` tags to these instructions,
respectively.
Differential Revision: https://reviews.llvm.org/D95098
The SPIR-V spec uses OpSpecConstantOp. Using an inconsistent name
makes the dialect generation scripts fail. Update to use the right
operation name, and fix the auto generation scripts as well.
Differential Revision: https://reviews.llvm.org/D95097
I attempted to write a test case for this, but the situations in which the kind is used for RegionDirective and ResultsDirective have zero overlap, meaning that there isn't a situation in which sharing the kind creates a conflict.
Differential Revision: https://reviews.llvm.org/D94988
Having this function in a public scope is helpful for registering dialects that are
defined at runtime and thus need a runtime-defined TypeID.
Also, a similar function in DialectRegistry, insert(TypeID, StringRef, ...), already has
a public scope.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95091
An `unrealized_conversion_cast` operation represents an unrealized conversion
from one set of types to another, and is used to enable the intermixing of
different type systems. This operation should not be attributed any special
representational or execution semantics, and is generally only intended to be
used to satisfy the temporary intermixing of type systems during the conversion
of one type system to another.
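As a rough illustration (not taken from the patch itself), a conversion helper could materialize such a cast as sketched below; the helper name `bridgeTypeSystems` is made up, and only the usual OpBuilder API plus the new UnrealizedConversionCastOp are assumed:

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"

using namespace mlir;

// Hypothetical helper: bridge a value from one type system to another by
// inserting a cast that carries no semantics of its own. The cast merely
// records that a conversion is still "unrealized" and is expected to be
// reconciled (folded away) once both sides agree on a type.
static Value bridgeTypeSystems(OpBuilder &builder, Location loc,
                               Value fromValue, Type toType) {
  auto cast = builder.create<UnrealizedConversionCastOp>(
      loc, TypeRange{toType}, ValueRange{fromValue});
  return cast.getResult(0);
}
```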
This operation was discussed in the following RFC (and ODM):
https://llvm.discourse.group/t/open-meeting-1-14-dialect-conversion-and-type-conversion-the-question-of-cast-operations/
Differential Revision: https://reviews.llvm.org/D94832
A cast-like operation is one that converts from a set of input types to a set of output types. The arity of the inputs may be anything from 0 to N, whereas the arity of the outputs may be anything from 1 to N. Cast-like operations are removable in cases where they produce a "no-op", i.e. when the input types and output types match 1-1.
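For illustration, the removal condition boils down to a check like the following sketch (a hypothetical helper, not the interface added by this patch):

```cpp
#include "mlir/IR/Operation.h"
#include "llvm/ADT/STLExtras.h"

// Hypothetical helper sketching the condition described above: a cast-like
// op is a removable no-op when its input and output types match 1-1.
static bool isNoOpCast(mlir::Operation *op) {
  return op->getNumOperands() == op->getNumResults() &&
         llvm::equal(op->getOperandTypes(), op->getResultTypes());
}
```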
Differential Revision: https://reviews.llvm.org/D94831
Rationale:
Since I made the argument that metadata helps with extra
verification checks, I better actually do that ;-)
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95072
A resumed coroutine can potentially deallocate the token/value/group and destroy the mutex before the std::unique_ptr destructor runs.
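A minimal sketch of the hazard (illustrative only; it uses a std::unique_lock-style RAII guard and made-up names, not the actual runtime code):

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical token type: a mutex plus the awaiters waiting on it.
struct AsyncToken {
  std::mutex mu;
  std::vector<std::function<void()>> awaiters;
};

void emplaceAndNotify(AsyncToken *token) {
  std::unique_lock<std::mutex> lock(token->mu);
  auto awaiters = std::move(token->awaiters); // detach before resuming
  for (auto &resume : awaiters)
    resume(); // may resume a coroutine that deallocates `token`...
} // ...so the guard would be released against a destroyed mutex here.
```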
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D95037
Like SubView, SubTensor/SubTensorInsertOp are allowed to have rank-reducing/expanding semantics. In the case of SubTensorInsertOp, the rank of the offsets/sizes/strides should be the rank of the destination tensor.
Also, add a builder flavor for SubTensorOp to return a rank-reduced tensor.
Differential Revision: https://reviews.llvm.org/D95076
The patch adapts the ROCm runtime wrapper due to subtle differences between the CUDA and the ROCm/HIP runtime APIs.
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D95027
This patch adds support for producer-consumer fusion scenarios with
multiple producer stores to the AffineLoopFusion pass. The patch
introduces some changes to the producer-consumer algorithm, including:
* For a given consumer loop, producer-consumer fusion iterates over its
producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the
consumer loop and visited in reverse program order (not strictly guaranteed)
to maximize the number of loops fused per iteration.
In general, these changes were needed to simplify the multi-store producer
support and remove some of the workarounds that were introduced in the past
to support more fusion cases under the single-store producer limitation.
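Roughly, the new driver behaves like the following pseudocode sketch (all names here, such as gatherProducerCandidates and tryFuse, are hypothetical and not the actual AffineLoopFusion code):

```cpp
#include <vector>

struct Node; // placeholder for a producer/consumer loop nest
struct FusionState {
  std::vector<Node *> gatherProducerCandidates(Node *consumer);
  bool tryFuse(Node *producer, Node *consumer);
};

void fuseProducersInto(Node *consumer, FusionState &state) {
  bool changed = true;
  while (changed) { // iterate until a fixed point is reached
    changed = false;
    // Gather the candidates up front and visit them in (approximately)
    // reverse program order to maximize the loops fused per sweep.
    std::vector<Node *> producers = state.gatherProducerCandidates(consumer);
    for (auto it = producers.rbegin(); it != producers.rend(); ++it)
      if (state.tryFuse(*it, consumer))
        changed = true;
  }
}
```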
This patch also preserves the existing functionality of AffineLoopFusion with
one minor change in behavior. Producer-consumer fusion didn't fuse scenarios
with escaping memrefs and multiple outgoing edges (from a single store).
Multi-store producer scenarios will usually (always?) have multiple outgoing
edges, so we couldn't fuse any with escaping memrefs, which would greatly limit
the applicability of this new feature. Therefore, the patch enables fusion for
these scenarios. Please see the modified tests for specific details.
Reviewed By: andydavis1, bondhugula
Differential Revision: https://reviews.llvm.org/D92876
Add a check for regions that do not implement the RegionBranchOpInterface. This is not
allowed in the current deallocation steps. Furthermore, we handle edge cases
where a single region is attached and the parent operation has no results.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=48575
Differential Revision: https://reviews.llvm.org/D94586
The runtime-wrappers depend on LLVMSupport, pulling in static initialization code (e.g. command line arguments). Dynamically loading multiple such libraries results in ODR violations.
So far this has not been an issue, but in D94421, I would like to load both the async-runtime and the cuda-runtime-wrappers as part of a cuda-runner integration test. When doing this, code that asserts that an option category is only registered once fails (note that I've only experienced this in Google's bazel where the async-runtime depends on LLVMSupport, but a similar issue would happen in cmake if more than one runtime-wrapper starts to depend on LLVMSupport).
The underlying issue is that we have a mix of static and dynamic linking. If all dependencies were loaded as shared objects (i.e. if LLVMSupport was linked dynamically to the runtime wrappers), each dependency would only get loaded once. However, linking dependencies dynamically would require special attention to paths (one could dynamically load the dependencies first given explicit paths). The simpler approach seems to be to link all dependencies statically into a single shared object.
This change basically applies the same logic that we have in the c_runner_utils: we have a shared object target that can be loaded dynamically, and we have a static library target that can be linked to other runtime-wrapper shared object targets.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D94399
Use cases with 16- or even 8-bit pointer/index structures have been identified.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95015
* Matches how all of the other shaped types are declared.
* No super principled reason for this ordering beyond making the one that was different be like the rest.
* Also matches the ordering of things like ndarray, et al.
Reviewed By: ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94812
* This isn't mutually exclusive with other mechanisms for more ODS-centric op definitions, but based on discussions, we feel that we will always benefit from a Python escape hatch, and that is the most natural way to write things that don't fit the mold.
* I suspect this facility needs further tweaking, and once it settles, I'll document it and add more tests.
* Added extensions for linalg, since it is unusable without them, and continued to evolve my e2e example.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D94752
* This allows us to hoist trait-level information for regions and sized variadics to class-level attributes (_ODS_REGIONS, _ODS_OPERAND_SEGMENTS, _ODS_RESULT_SEGMENTS).
* Eliminates some splicey generated Python code in favor of a native helper for it.
* Makes it possible to implement custom, variadic, and region-based builders with one line of Python, without needing to manually code access to the segment attributes.
* Needs follow-on work for region-based callbacks and support for SingleBlockImplicitTerminator.
* A follow-up will actually add ODS support for generating custom Python builders that delegate to this new method.
* Also includes the start of an e2e sample for constructing linalg ops where this limitation was discovered (working progressively through this example and cleaning up as I go).
Differential Revision: https://reviews.llvm.org/D94738
This commit adds a new trait that can be attached to ops that have
signed semantics.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D94896
cmake_minimum_required(VERSION) calls cmake_policy(VERSION),
which sets all policies up to VERSION to NEW.
LLVM started requiring CMake 3.13 last year, so we can remove
a bunch of code setting policies prior to 3.13 to NEW as it
no longer has any effect.
Reviewed By: phosek, #libunwind, #libc, #libc_abi, ldionne
Differential Revision: https://reviews.llvm.org/D94374
In prehistoric times, AffineApplyOp was allowed to produce multiple values.
This allowed the creation of intricate SSA use-def chains.
AffineApplyNormalizer was originally introduced as a means of reusing the AffineMap::compose method to write SSA use-def chains.
Unfortunately, symbols that were produced by an AffineApplyOp needed to be promoted to dims and reordered for the mathematical composition to be valid.
Since then, single-result AffineApplyOp became the law of the land, but the original assumptions were not revisited.
This revision revisits these assumptions and retires AffineApplyNormalizer.
Differential Revision: https://reviews.llvm.org/D94920
* Development setup recommendations.
* Test updates to match what we actually do.
* Update CMake variable `PYTHON_EXECUTABLE` -> `Python3_EXECUTABLE` to match the repo-wide upgrade to Python 3.
This patch adds support for checking if two PresburgerSets are equal. In particular, one can check if two FlatAffineConstraints are equal by constructing PresburgerSets from them and comparing those.
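For example, the comparison could look roughly like the sketch below; the header paths and the exact constructor/method names are assumptions, not a definitive API reference:

```cpp
#include "mlir/Analysis/AffineStructures.h"
#include "mlir/Analysis/PresburgerSet.h"

using namespace mlir;

// Sketch (assumed API names): compare two FlatAffineConstraints by lifting
// each to a PresburgerSet and checking set equality.
static bool areEqual(const FlatAffineConstraints &a,
                     const FlatAffineConstraints &b) {
  return PresburgerSet(a).isEqual(PresburgerSet(b));
}
```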
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D94915
Use a cross-compilation approach for the `mlir-linalg-ods-gen` application,
similar to the TblGen tools.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94598
Added the ability to read (an extended version of) the FROSTT
file format, so that we can now read in sparse tensors of arbitrary
rank. Generalized the API to deal with more than two dimensions.
Also added the ability to sort the indices of sparse tensors
lexicographically. This is an important step towards supporting
auto-generation of initialization code, since sparse storage formats
are easier to initialize if the indices are sorted. Since most
external formats don't enforce such properties, it is convenient
to have this ability in our runtime support library.
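A rough sketch of what lexicographic coordinate sorting looks like (a hypothetical element type, not the actual support-library code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical element type: one coordinate per dimension plus the value.
struct Element {
  std::vector<uint64_t> indices;
  double value;
};

// Sort the nonzero elements lexicographically by their coordinates, which
// makes it easy to build compressed sparse storage formats afterwards.
static void sortLexicographically(std::vector<Element> &elements) {
  std::sort(elements.begin(), elements.end(),
            [](const Element &a, const Element &b) {
              return std::lexicographical_compare(
                  a.indices.begin(), a.indices.end(),
                  b.indices.begin(), b.indices.end());
            });
}
```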
Lastly, the re-entrancy problem of the original implementation
is fixed by passing an opaque object around (rather than having
a single static variable, ugh!).
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94852
The operation is an identity if the values yielded by the operation
are the arguments of the basic block of that operation. Add this missing check.
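The missing condition, roughly (a sketch with assumed accessors, not the exact patch code):

```cpp
#include "mlir/IR/Operation.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

// Sketch of the missing check: the op is only an identity if every yielded
// value is a block argument of the op's own body block (i.e. the values are
// passed through unchanged), not merely a value of the right type.
static bool yieldsOwnBlockArguments(Block &body) {
  Operation *yield = body.getTerminator();
  return llvm::all_of(yield->getOperands(), [&](Value v) {
    auto arg = v.dyn_cast<BlockArgument>();
    return arg && arg.getOwner() == &body;
  });
}
```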
Differential Revision: https://reviews.llvm.org/D94819