In a subsequent commit, getResultBuffer can return a "null" Value. This is the case when the returned buffer from an scf.if is not unique.
This commit is in preparation for scf.if support to keep the next commit smaller.
Differential Revision: https://reviews.llvm.org/D111927
This is required for bufferization of scf::IfOp, which is added in a subsequent commit.
Some ops (scf::ForOp, TiledLoopOp) require PreOrder traversal to make sure that bbArgs are mapped before bufferizing the loop body.
Differential Revision: https://reviews.llvm.org/D111924
This patch supports the ordered construct in the OpenMP dialect following
Section 2.19.9 of the OpenMP 5.1 standard. It also adds lowering to LLVM IR
using the OpenMP IRBuilder. Lowering to LLVM IR for the ordered simd directive
is not yet supported, since LLVM optimization passes do not currently
support it.
Reviewed By: kiranchandramohan, clementval, ftynse, shraiysh
Differential Revision: https://reviews.llvm.org/D110015
The current implementation used explicit index->int64_t casts for some, but
not all instances of passing values of type "index" to and from the sparse
support library. This revision makes the situation more consistent by
using the new "index_t" type at all such places (which reduces the amount of
trivial casting in the generated MLIR code). Note that the current revision still
assumes that "index" is 64-bit wide. If we want to support targets with
alternative "index" bit widths, we need to build the support library differently.
But the current revision is a step forward by making this requirement explicit
and more visible.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D112122
Add a pattern to take a rank-reducing subview and drop the innermost
contiguous unit dim.
This is useful when lowering vector ops to backends with 1-D vector types.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D111561
According to the OpenMP 5.0 standard, the names and hints of the critical construct are
closely related. The following are the restrictions on them:
- Unless the effect is as if `hint(omp_sync_hint_none)` was specified, the
critical construct must specify a name.
- If the hint clause is specified, each of the critical constructs with the
same name must have a hint clause for which the hint-expression evaluates to
the same value.
These restrictions will be enforced by design if the hint expression is a part
of the `omp.critical.declare` operation.
- Any operation with no "name" will be considered to have
`hint(omp_sync_hint_none)`.
- All the operations with the same "name" will have the same hint value.
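For illustration, the intended structure looks roughly like this (a sketch; the exact assembly syntax may differ):
  // The declaration carries the hint; every critical construct with this
  // name shares the same hint value.
  omp.critical.declare @mutex hint(contended)
  // Uses refer to the declaration by name.
  omp.critical(@mutex) {
    // critical region body
    omp.terminator
  }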
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D112134
Follow-up to also use the prefixed emitters in OpFormatGen (moved
getGetterName(s) and getSetterName(s) to Operator, as that is the most
convenient usage-wise even though it only depends on Dialect). Prefix
accessors in the Test dialect and follow up on missed changes in
OpDefinitionsGen.
Differential Revision: https://reviews.llvm.org/D112118
This revision uses the newly refactored StructuredGenerator to create a simple vectorization for conv1d_nwc_wcf.
Note that the pattern is not specific to the op and is technically not even specific to the ConvolutionOpInterface (modulo minor details related to dilations and strides).
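For reference, the op being vectorized has the following shape (illustrative operands, shapes, and attributes; not taken from this patch):
  %res = linalg.conv_1d_nwc_wcf
      {dilations = dense<1> : tensor<1xi64>, strides = dense<1> : tensor<1xi64>}
      ins(%input, %filter : tensor<4x10x3xf32>, tensor<2x3x8xf32>)
      outs(%init : tensor<4x9x8xf32>) -> tensor<4x9x8xf32>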
The overall design follows the same ideas as the lowering of vector::ContractionOp -> vector::OuterProduct: it seeks to be minimally complex, composable and extensible while avoiding inference analysis. Instead, we metaprogram the maps/indexings we expect and we match against them.
This is just a first stab and still needs to be evaluated for performance.
Other tradeoffs are possible that should be explored.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111894
This canonicalizer replaces a reshape of a constant tensor with a new constant that carries the updated shape, eliding the reshape operation.
Differential Revision: https://reviews.llvm.org/D112038
The functions are moved above the parseClauses function, as they
will be used inside it to parse the `hint` clause.
Reviewed By: clementval
Differential Revision: https://reviews.llvm.org/D112071
Code in OpenMPDialect.cpp is reorganized so that all functions corresponding to an operation are grouped together.
Added a parseClauses function to avoid code duplication while parsing clauses in OpenMP operations. Also added printers and verifiers for clauses that are used by multiple operations.
Reviewed By: kiranchandramohan, peixin
Differential Revision: https://reviews.llvm.org/D110903
The change is based on the proposal from the following discussion:
https://llvm.discourse.group/t/rfc-memreftype-affine-maps-list-vs-single-item/3968
* Introduce `MemRefLayoutAttr` interface to get `AffineMap` from an `Attribute`
(`AffineMapAttr` implements this interface).
* Store layout as a single generic `MemRefLayoutAttr`.
This change removes the affine map composition feature and related API.
While the `MemRefType` itself supported it, almost none of the upstream code
could work with more than one affine map in `MemRefType`.
The introduced `MemRefLayoutAttr` allows re-implementing this feature
in a more stable way, via a separate attribute class.
The interface also allows layout representations other than affine maps.
For example, the described "stride + offset" form, which is currently supported in the ASM parser only,
can now be expressed as a separate attribute.
Reviewed By: ftynse, bondhugula
Differential Revision: https://reviews.llvm.org/D111553
- `assign` with ArrayRef was calling `append`
- `assign` with empty ArrayRef was not clearing storage
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D112043
This helper function checks if two given ops are in mutually exclusive branches of the same scf::IfOp.
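A small illustration (the op names are placeholders): the two ops below live in mutually exclusive branches of the same scf.if, so at most one of them executes.
  scf.if %cond {
    "test.write_a"() : () -> ()
  } else {
    "test.write_b"() : () -> ()
  }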
Differential Revision: https://reviews.llvm.org/D111957
This revision lifts the artificial restriction on having exact matches between
source and destination type shapes. A static size may become dynamic. We still
reject changing a dynamic size into a static size to avoid the need for a
runtime "assert" on the conversion. This revision also refactors some of the
conversion code to share same-content buffers.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111915
The functionality already exists in AsmParser to parse optional ArrayAttrs and
StringAttrs, but only if they are added to a NamedAttrList. This moves the
code to parse an optional attribute and add it to a list into a common
template, and exposes the simpler functionality of just parsing the optional
attributes.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111918
Use a wider range for approximating Tanh to match results computed by Eigen with AVX.
Reviewed By: cota
Differential Revision: https://reviews.llvm.org/D112011
Starting with a mostly NFC change to be able to differentiate
mechanical changes from ones that require more detailed review.
This will be used to flush out the flow before flipping dialects used
outside local testing. As this dialect is not intended to be used
generally, only in tests in core, I will not be following the 2-week
staging approach here.
Besides accessing the record, there is currently no way to access all possible
constraint information, such as the base constraint of a variadic
constraint.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111719
AnyAttrOf, similar to AnyTypeOf, expects the attribute to be one of the
given attributes.
For instance, `AnyAttrOf<[I32Attr, StrAttr]>` expects either an `I32Attr`
or a `StrAttr`.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111739
This removes edge cases where the default flags we want to use
during printing (e.g. local scope, eliding attributes, etc.)
get missed/dropped.
Differential Revision: https://reviews.llvm.org/D111761
When folding A->B->C => A->C, only accept A->C if it is a valid shape cast.
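For example (illustrative types), the two casts below fold only because the combined cast is itself a valid shape cast:
  %b = vector.shape_cast %a : vector<4x2xf32> to vector<8xf32>
  %c = vector.shape_cast %b : vector<8xf32> to vector<2x2x2xf32>
  // folds to:
  %c = vector.shape_cast %a : vector<4x2xf32> to vector<2x2x2xf32>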
Reviewed By: ThomasRaoux, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111473
The no-result version of createOrFold calls 'tryFold' but
ignores the result since it doesn't matter what it produced.
Explicitly cast to void to silence this warning:
../llvm/mlir/include/mlir/IR/Builders.h:454:5: warning: ignoring return value of function declared with 'nodiscard' attribute [-Wunused-result]
tryFold(op.getOperation(), unused);
^~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
Differential Revision: https://reviews.llvm.org/D111951
The existing message hints that the dialect may not be loaded, but there
is also the possibility that the dialect was loaded and the initialize()
method didn't include the Type/Attribute.
The rules were too restrictive, causing out-of-place bufferization when the results of two ExtractSliceOps are fed into an InsertSliceOp.
Differential Revision: https://reviews.llvm.org/D111861
This patch removes code very specific to affine dependence analysis and
refactors it as a FlatAffineRelation.
A FlatAffineRelation represents a set of ordered pairs (domain -> range) where
"domain" and "range" are tuples of identifiers. These relations are used to
represent an "access relation" for memory access on a memref. An access
relation maps elements of an iteration domain to the element(s) of an array
domain accessed by that iteration of the associated statement through some
array reference. The dependence relation representing the dependence
constraints between two memory accesses can be built by composing the access
relation of the destination access with the inverse of the access relation of
the source access.
This patch does not change the functionality of the existing dependence
analysis in checkMemrefAccessDependence, but refactors it to use
FlatAffineRelations to deduplicate code and enable code reuse for future
development of features like scheduling, value-based dependence analysis, etc.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D110563
This revision also adds a few passes to the sparse compiler part to unify the transformation sequence with all other paths we currently use.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111900
This is the only lowering to Linalg that TOSA has, so the suffix is needlessly
verbose. Likely this was a carry-over from IREE's usage, where we
originally lowered to linalg on buffers (the only linalg that existed at
the time), so everything on tensors needed the suffix. We are dropping
it in IREE as well, having transitioned entirely to using Linalg on
tensors.
Reviewed By: sjarus
Differential Revision: https://reviews.llvm.org/D111911
Next step towards supporting sparse tensors outputs.
Also some minor refactoring of enum constants, as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general sizes arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D111771
This adds a new parser and printer for text which may be a keyword or a
string. When printing, it will attempt to print the text as a keyword,
but if it has any special or non-printable characters, it will be
printed as an escaped string. When parsing, it will parse either a
valid keyword or a potentially escaped string. The printer allows for an
empty string, in which case it prints `""`.
This new function is used for printing the name in NamedAttributes, and
for printing the symbol name after the `@`. In CIRCT we are using this
to print module port names, which are conceptually similar to named
function arguments.
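A rough illustration of the resulting output (hypothetical example):
  // A name that is a valid keyword prints bare; a name with special
  // characters is printed as an escaped string.
  "test.op"() {plain = 0 : i32, "needs quoting!" = 1 : i32} : () -> ()
  // Symbol names behave the same way after the `@`: @plain vs. @"needs quoting!"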
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111683
`DefaultValuedAttr<StrAttr, "">` and `ConstantAttr<StrAttr, "">`
result in bugs in which TableGen will not recognize that the attribute
has a default value, because `""` is an empty TableGen string.
Strings no longer have special treatment. Instead, string values must be
wrapped in quotes: "\"foo\"". Two helpers, `DefaultValuedStrAttr` and
`ConstantStrAttr` have been added to keep code clean.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111855
From the perspective of analysis, scf::ForOp is treated as a black box. Basic block arguments do not alias with their respective OpOperands on the ForOp, so they do not participate in conflict analysis with ops defined outside of the loop.
However, bufferizesToMemoryRead and bufferizesToMemoryWrite on the scf::ForOp itself are used to determine how the scf::ForOp interacts with its surrounding ops.
Differential Revision: https://reviews.llvm.org/D111775
For each memory read, follow SSA use-def chains to find the op that produces the data being read (i.e., the most recent write). A memory write to an alias is a conflict if it takes place after the "most recent write" but before the read.
This CL introduces two main changes:
* There is a concise definition of a conflict. Given a piece of IR with InPlaceSpec annotations and a computed alias set, it is easy to compute whether this program has a conflict. No need to consider multiple cases such as "read of operand after in-place write" etc.
* No need to check for clobbering.
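A minimal example of such a conflict, assuming the ops below bufferize in place (the ops and shapes are illustrative):
  // %w is the "most recent write" that produces the data read below.
  %w = linalg.fill(%cst, %t) : f32, tensor<16xf32> -> tensor<16xf32>
  // If this insert_slice bufferizes in place, it writes to an alias of %w's
  // buffer after the fill but before the read below, which is a conflict.
  %o = tensor.insert_slice %x into %w[0] [8] [1] : tensor<8xf32> into tensor<16xf32>
  // This read expects the data produced by the fill.
  %r = tensor.extract_slice %w[0] [8] [1] : tensor<16xf32> to tensor<8xf32>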
Differential Revision: https://reviews.llvm.org/D111287
Allow emitting a get & set prefix for accessors generated for ops. If
enabled, the argument/return/region name gets converted from
snake_case to UpperCamel and the prefix is added. The attribute also allows
generating both the current "raw" method along with the prefixed one to
make it easier to stage changes.
The option is added on the dialect and currently defaults to existing
raw behavior. The expectation is that the staging where both are
generated would be short lived and so optimized to keeping the changes
local/less invasive (it just generates two functions for each accessor
with the same body - most of these internally again call a helper
function). But generation can be optimized if needed.
I'm unsure about OpAdaptor classes as there it is all get methods (it is
a named view into raw data structures), so prefix doesn't add much.
This starts with emitting the raw-only form (current behavior) as the
default; then one can opt in to raw & prefixed, then to prefixed only. The
default in OpBase will switch to prefixed-only to be consistent with the
MLIR style guide. The option may be removed later (I considered
enabling a user-specified prefix, but the current discussion favors keeping it
limited, so I stuck with that).
Also add more explicit checking for pruned functions to avoid emitting
where no function was added (and so avoiding dereferencing nullptr)
during op def/decl generation.
See https://bugs.llvm.org/show_bug.cgi?id=51916 for further discussion.
Differential Revision: https://reviews.llvm.org/D111033
Emit the reduction during op vectorization instead of doing it when creating the
transfer write. This allows us to not broadcast output arguments for the reduction
initial value.
Differential Revision: https://reviews.llvm.org/D111825
Part of the arith update broke UiToFp32. Fixed the lowering and included a new
test to detect a regression.
Differential Revision: https://reviews.llvm.org/D111772
It is unclear whether this is reproducible with correct IR, but at the moment the verifier for InsertSliceOp
is not powerful enough, and this triggers an infinite loop that is worth fixing independently.
Differential Revision: https://reviews.llvm.org/D111812
Improve support for variadic regions in ODS-generated operation view classes.
In particular, make generated constructors take an extra argument that
specifies the number of variadic regions if the operation has them. Previously,
there was no mechanism to specify a non-zero number of variadic regions. Also
generate named accessors to regions.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111783
MemRefType was using a wrong `isa` function in the bindings code, which
could lead to invalid IR being constructed. Also run the verifier in
memref dialect tests.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111784
Setting the nofold attribute enables packing an operand. At the moment, the attribute is set by default. The patch introduces a callback to control the flag.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111718
After removing the last LinalgOps that have no region attached, we can verify there is a region. The patch performs the following changes:
- Move the SingleBlockImplicitTerminator trait further up to the structured op base class.
- Adapt the LinalgOp verification since the trait only checks if there is 0 or 1 block.
- Introduce a getBlock method on the LinalgOp interface.
- Access the LinalgOp body using either getBlock() or getBody() if the concrete operation type is known.
This patch is a follow up to https://reviews.llvm.org/D111233.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111393
* Incorporates a reworked version of D106419 (which I have closed but has comments on it).
* Extends the standalone example to include a minimal CAPI (for registering its dialect) and a test which, from out of tree, creates an aggregate dylib and links a little sample program against it. This will likely only work today in *static* MLIR builds (until the TypeID fiasco is finally put to bed). It should work on all platforms, though (including Windows - albeit I haven't tried this exact incarnation there).
* This is the biggest pre-requisite to being able to build out of tree MLIR Python-based projects from an installed MLIR/LLVM.
* I am rather nauseated by the CMake shenanigans I had to endure to get this working. The primary complexity, above and beyond the previous patch is because (with no reason given), it is impossible to export target properties that contain generator expressions... because, of course it isn't. In this case, the primary reason we use generator expressions on the individual embedded libraries is to support arbitrary ordering. Since that need doesn't apply to out of tree (which import everything via FindPackage at the outset), we fall back to a more imperative way of doing the same thing if we detect that the target was imported. Gross, but I don't expect it to need a lot of maintenance.
* There should be a relatively straight-forward path from here to rebase libMLIR.so on top of this facility and also make it include the CAPI.
Differential Revision: https://reviews.llvm.org/D111504
This is the first step towards supporting general sparse tensors as output
of operations. The init sparse tensor is used to materialize an empty sparse
tensor of given shape and sparsity into a subsequent computation (similar to
the dense tensor init operation counterpart).
Example:
%c = sparse_tensor.init %d1, %d2 : tensor<?x?xf32, #SparseMatrix>
%0 = linalg.matmul
ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>)
outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111684
The type can be inferred trivially, but it is currently done as string
stitching between ODS and C++ and is not easily exposed to Python.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111712
When writing the user-facing documentation, I noticed several inconsistencies
and asymmetries in the Python API we provide. Fix them by adding:
- the `owner` property to regions, similarly to blocks;
- the `isinstance` method to any class derived from `PyConcreteAttr`,
`PyConcreteValue` and `PyConcreteAffineExpr`, similar to `PyConcreteType`, to
enable `isa`-like calls without having to handle exceptions;
- a mechanism to create the first block in a region, as we could only create
blocks relative to other blocks, which is impossible in an empty region.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111556
Skip the check on "hasOperandStorage" since the array will be indexed anyway.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111696
This exposes creating a CallSiteLoc with a callee & a list of frames for
callers. It follows the creation approach on the C++ side, where a list of
frames may be provided.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111670
As discussed on discord, we have never actually been able to build with the project-wide published min version of 3.14.3. The buildbot that tests the Python configuration is currently pinned to 3.19.1, and there are a number of non-version/policy controlled features that Python building relies on that makes it unreliable with older versions. Some of the issues are pretty fundamental and I don't know how to do them on the older version. I think that, as an optional feature, at least advertising the PSA as in this patch is a good middle ground until the next project-wide CMake version bump.
Also moves setup logic to a macro so that everyone can use it.
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
1. To avoid two ExecutionModeOps using the same name, add the value of the execution mode to the name when converting to the LLVM dialect.
2. To avoid a syntax error in spv.OpLoad, add OpTypeSampledImage into SPV_Type.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D111193
By doing so, it is not necessary to get the OpOperand a second time via
getAliasingOpOperand. Also, the code is slightly more readable because we do
not have to deal with an Optional<> return value.
Differential Revision: https://reviews.llvm.org/D110918
We shouldn't broadcast the original value when doing reduction. Instead
we compute the reduction and then combine it with the original value.
Differential Revision: https://reviews.llvm.org/D111666
This patch teaches `isProjectedPermutation` and `inverseAndBroadcastProjectedPermutation`
utilities to deal with maps representing an explicit broadcast, e.g., (d0, d1) -> (d0, 0).
This extension is needed to enable vectorization of such explicit broadcast in Linalg.
Reviewed By: pifon2a, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111563
Average pool assumed the same input/output type. The result type for integers
is always i32, so it should be updated appropriately.
Reviewed By: GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D111590
Adapt CodegenStrategy to use the vector transfer lowering patterns by default.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111649
If I remember correctly this wasn't done previously because dim used to
be in the memref dialect.
Differential Revision: https://reviews.llvm.org/D111651
Some random changes that were hanging around in my workspace. Also,
a tiny step towards creating a header file for the sparse utils lib.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111589
Add a switch to the codegen strategy to enable/disable the vector transfer lowering, and disable it by default.
Differential Revision: https://reviews.llvm.org/D111647
Add the vector transfer patterns and introduce the max transfer rank option on the codegen strategy.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111635
This revision takes advantage of the recently added support for 0-d transfers and vector.multi_reduction that return a scalar.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D111626
This revision updates the op semantics, printer, parser and verifier to allow 0-d transfers.
Until 0-d vectors are available, such transfers have a special form that transits through vector<1xt>.
This is a stepping stone towards the longer term work of adding 0-d vectors and will help significantly reduce corner cases in vectorization.
Transformations and lowerings do not yet support this form, extensions will follow.
Differential Revision: https://reviews.llvm.org/D111559
vector.multi_reduction currently does not allow reducing down to a scalar.
This creates corner cases that are hard to handle during vectorization.
This revision extends the semantics and adds the proper transforms, lowerings and canonicalizations to allow lowering out of vector.multi_reduction to other abstractions all the way to LLVM.
In the future, when we also allow 0-d vectors, scalars will still be relevant: 0-d vectors and scalars are not equivalent on all hardware.
In the process, splice out the implementation patterns related to vector.multi_reduce into a new file.
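For instance, reducing over all dimensions now yields a plain scalar (a sketch; the printed form may differ slightly):
  %0 = vector.multi_reduction #vector.kind<add>, %v [0, 1] : vector<4x8xf32> to f32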
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D111442
`hint-expression` is an IntegerAttr, because it can be a combination of multiple values from the enum `omp_sync_hint_t` (Section 2.17.12 of OpenMP 5.0)
Reviewed By: ftynse, kiranchandramohan
Differential Revision: https://reviews.llvm.org/D111360
Make `raw_ostream operator<<` follow const-correctness semantics,
since this is a requirement of the FormatVariadic implementation.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111547
* Change the callback signature `bool(Operation *)` -> `Optional<bool>(Operation *)`
* addDynamicallyLegalOp adds the callback to the chain
* If a callback returns an empty `Optional`, the next callback in the chain is called
Differential Revision: https://reviews.llvm.org/D110487
Call `printType(subElemType)` instead of `os << subElemType` for them.
This allows handling type aliases inside complex types.
As a side effect, fixed `test.int` parsing.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111536
* Call `llvm_canonicalize_cmake_booleans` for all CMake options,
which are propagated to `lit.local.cfg` files.
* Use Python native boolean values instead of strings for such options.
This fixes cases when CMake variables have values other than `ON` (like `TRUE`).
This might happen due to IDE integration or due to CMake preset usage.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D110073
Operations that have the InferTypeOpInterface trait can now omit the return
types in their custom assembly formats.
Differential Revision: https://reviews.llvm.org/D111326
This relaxes vectorization of dense memrefs a bit so that affine expressions
are allowed in more outer dimensions. Vectorization of non-unit-stride
references is disabled though, since this seems ineffective anyway.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111469
Until now, we only had documentation oriented towards developers of the
bindings. Provide some documentation for users of the bindings that don't want
or need to understand the inner workings.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111540
1. Add support to vectorize induction variables of loops that are
not mapped to any vector dimension in SuperVectorize pass.
2. Fix a bug in getForInductionVarOwner.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D111370
This test is crashing 9 out of 10 runs in CI, but I can't reproduce
locally right now. Disabling to get the CI back to green and avoid
backsliding with more ASAN issues that would go unnoticed.
This moves the registry higher in the LLVM library dependency stack.
Every client of the target registry needs to link against MC anyway to
actually use the target, so we might as well move this out of Support.
This allows us to ensure that Support doesn't have includes from MC/*.
Differential Revision: https://reviews.llvm.org/D111454
Instead of hard-coding results for both Intel and AMD, let's relax
the checks to simplify the test while supporting both implementations.
Note that:
- If a new hardware implementation comes up in the future, it is likely
to pass the relaxed tests, i.e. no future maintenance burden for us.
- If something terribly wrong happens (e.g. instead of rsqrt we
execute 1/sqrt), the tests will probably catch it, since the relaxed
tests expect low precision (e.g. rsqrt(1) != 1.0).
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D111461
TensorLiteralParser::getHexAttr does an isIntOrIndexOrFloat check and properly handles index elements, but TensorLiteralParser::getAttr, which calls into it, has a mismatched check. This just makes the checks match so that index element attrs can be parsed when of tensor type.
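For example, a literal of the following form now parses (illustrative):
  %cst = arith.constant dense<[0, 1, 2]> : tensor<3xindex>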
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111374
The purpose of this revision is to make "write into non-writable memory" conflict detection easier to understand.
The main idea is that there is a conflict in the case of inplace bufferization if:
1. Someone writes to (an alias of) opOperand or opResult, or the to-be-bufferized op itself writes.
2. And, opOperand or opResult aliases a non-writable buffer.
Differential Revision: https://reviews.llvm.org/D111379
This commit adds a pattern to perform constant folding on linalg
generic ops which are essentially transposes. We see real cases
where model importers may generate such patterns.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D110597
Introduce support for accepting ops instead of values when constructing ops. A
single-result op can be used instead of a value, including in lists of values,
and any op can be used instead of a list of values. This is similar to, but
more powerful than, the C++ API that allows for implicitly casting an OpType to
Value if it is statically known to have a single result - the cast in Python is
based on the op dynamically having a single result, and also handles the
multi-result case. This allows building IR in a more concise way:
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other, op)
instead of having to access the results manually
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other.result, op.operation.results)
The dispatch is implemented directly in Python and is triggered automatically
for autogenerated OpView subclasses. Extension OpView classes should use the
functions provided in ods_common.py if they want to implement this behavior.
An alternative could be to implement the dispatch in the C++ bindings code, but
it would require to forward opaque types through all Python functions down to a
binding call, which makes it hard to inspect them in Python, e.g., to obtain
the types of values.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111306
The convolution op is one of the remaining hard-coded Linalg operations that have no region attached. It became obsolete due to the OpDSL convolution operations. Removing it allows us to delete specialized code and tests that are not needed for the OpDSL counterparts that rely on the standard code paths.
Tests that were needed only due to the specialized implementations are removed. Tiling and fusion tests are replaced by variants using linalg.conv_2d.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111233
This reverts commit 7aebdfc4fc.
The build is broken with errors like:
GPUPasses.cpp:(.text.pybind11_object_init[pybind11_object_init]+0x118): undefined reference to `PyExc_TypeError'
After CMake 3.18, we are able to limit the scope of the
find_package(Python3 ...) search to just Development.Module. Searching
for Development will fail in manylinux builds, and isn't necessary
since we are not embedding the Python interpreter. For more information, see:
https://pybind11.readthedocs.io/en/stable/compiling.html#findpython-mode
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D111383
These kinds of functions can behave differently on these X86 chips; there
isn't really "one true answer", so we'll accept both.
Also remove spurious passes and use mattr="avx" to match the instruction
used here.
Differential Revision: https://reviews.llvm.org/D111373
Currently Affine LICM checks iterOperands and does not hoist out any
instruction containing iterOperands. We should check iterArgs instead.
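A small illustration (hypothetical ops): the addf below uses the iter_arg block argument %acc rather than the iterOperand %init, so checking iterOperands alone would miss it and wrongly hoist the op.
  %r = affine.for %i = 0 to 10 iter_args(%acc = %init) -> (f32) {
    %next = arith.addf %acc, %c1 : f32  // must not be hoisted
    affine.yield %next : f32
  }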
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D111090
Add an interface for outlineable OpenMP operations.
This patch was initially done in fir-dev and is now needed
for the upstreaming.
Reviewed By: schweitz
Differential Revision: https://reviews.llvm.org/D111310
* Need to investigate the proper solution to https://github.com/pybind/pybind11/issues/3336 or engineer something different.
* The attempt to produce an empty buffer_info as a workaround triggers asan/ubsan.
* Usage of this API does not arise naturally in practice yet, and it is more important to be asan/crash clean than have a solution right now.
* Switching back to raising an exception, even though that triggers terminate().
* This already half existed in terms of reading the raw buffer backing a DenseElementsAttr.
* Documented the precise expectations of the buffer layout.
* Extended the Python API to support construction from bitcasted buffers, allowing construction of all primitive element types (even those that lack a compatible representation in Python).
* Specifically, the Python API can now load all integer types at all bit widths and all floating point types (f16, f32, f64, bf16).
Differential Revision: https://reviews.llvm.org/D111284
The signature of this function was confusing. Check for hasKnownBufferizationAliasingBehavior separately when needed.
Differential Revision: https://reviews.llvm.org/D110916
It was bundling quite a lot of patterns that convert high-D
vector ops into low-D elementary ops. It might not be good
for all of the patterns to happen for a particular downstream
user. For example, `ShapeCastOpRewritePattern` rewrites
`vector.shape_cast` into data movement extract/insert ops.
Instead, split the entry point into multiple ones so users
can pull in patterns on demand.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111225
Move getInplaceableOpResult() call into bufferizableInPlaceAnalysis.
Note: The only goal of this change is to make the signature of bufferizableInPlaceAnalysis smaller. (Fewer arguments.)
Differential Revision: https://reviews.llvm.org/D110915
ConstShapeOp has a constant shape, so its type can always be static.
We still allow it to have ShapeType though.
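For example, both of the following are intended to be accepted (a sketch; exact syntax may differ):
  %0 = shape.const_shape [1, 2, 3] : tensor<3xindex>
  %1 = shape.const_shape [4, 5] : !shape.shape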
Differential Revision: https://reviews.llvm.org/D111139