- String binary search does one fewer string comparison.
- The identifier linear scan on large attribute lists is switched to a string binary search (see the sketch below).
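A minimal standalone sketch of the lookup strategy, not the actual MLIR dictionary code; `NamedAttr` and `lookup` here are illustrative stand-ins for the real sorted NamedAttribute storage.

```cpp
#include <algorithm>
#include <optional>
#include <string>
#include <utility>
#include <vector>

using NamedAttr = std::pair<std::string, int>; // (name, attribute payload)

// Binary search over a name-sorted attribute list: a single comparison
// confirms the match, whereas a linear scan compares against every earlier
// entry as well.
std::optional<int> lookup(const std::vector<NamedAttr> &sortedAttrs,
                          const std::string &name) {
  auto it = std::lower_bound(
      sortedAttrs.begin(), sortedAttrs.end(), name,
      [](const NamedAttr &attr, const std::string &key) {
        return attr.first < key;
      });
  if (it != sortedAttrs.end() && it->first == name)
    return it->second;
  return std::nullopt;
}
```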
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D112970
OpAdaptor::verify performs string lookups on an attribute dictionary. By
calling OpAdaptor::verify, Op::verify is not able to use cached attribute
identifiers for faster lookups.
Reviewed By: jpienaar, rriddle
Differential Revision: https://reviews.llvm.org/D113039
This allows for external users of Comprehensive Bufferize to specify their own InitTensorOp elimination procedures.
Differential Revision: https://reviews.llvm.org/D112686
The main benefits of this change are faster access to operands
(no need to compute the offset, as the storage is now right after the
operation) and simpler code (no need to manage a lot of the "is the
operand storage trailing" logic we had to handle before). The major
downside, though, is that operand-holding operations now
grow in size by 1 word (no matter how we do this change, there
will need to be some additional bookkeeping).
Differential Revision: https://reviews.llvm.org/D111695
A quick grep for NDEBUG in MLIR revealed a use in DebugActions.h that breaks ABI. This patch changes the use of NDEBUG to LLVM_ENABLE_ABI_BREAKING_CHECKS, which has the advantage of being independent of whether clients build their own app in debug or release mode, as it depends purely on how MLIR itself was built.
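A hedged sketch of the pattern (the struct and member below are illustrative, not the actual contents of DebugActions.h): guard ABI-affecting state with a macro that reflects how the library was built rather than the client's NDEBUG setting.

```cpp
// Illustrative only: the extra member exists exactly when the library itself
// was configured with ABI-breaking checks, so client builds (debug or
// release) always agree with the library on the struct layout.
struct DebugActionBookkeeping {
#if LLVM_ENABLE_ABI_BREAKING_CHECKS
  unsigned numQueries = 0; // hypothetical member, only for illustration
#endif
};
```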
Differential Revision: https://reviews.llvm.org/D113088
- Provide the operator overloads for constructing (semi-)affine expressions in
Python by combining existing expressions with constants.
- Make AffineExpr, AffineMap and IntegerSet hashable in Python.
- Expose the AffineExpr composition functionality.
Reviewed By: gysit, aoyal
Differential Revision: https://reviews.llvm.org/D113010
This better decouples transfer read/write from vector-only rewrite of conv.
This form is close to ready to plop into a new vector.conv op, with the vector.transfer operations to be generalized as part of generic vectorization, once the properties of ConvolutionOpInterface are inferred from the indexing maps.
This also results in a nice perf boost in the dw == 1 cases.
Differential revision: https://reviews.llvm.org/D112822
This refactoring prepares conv1d vectorization for a future integration into
the generic codegen path.
Once transfer_read / transfer_write vectorization also supports sliding windows,
the special pattern for conv can disappear.
This will also likely need a vector.conv operation.
Differential Revision: https://reviews.llvm.org/D112797
The current setup of LinalgTransformationFilter allows a
transformation to trigger when either
1) The StringAttr is not set and no filter identifier is specified.
2) The StringAttr is set and its value matches (one of) the provided
identifiers.
This misses the case where the transformation should trigger either
when the attribute is not set or when its value matches (one of) the
provided identifiers. Since `Identifier` does not allow empty strings,
add a boolean option to match when the attribute is not set (sketched
below). This option is off by default.
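A standalone sketch of the resulting filtering logic; the names (`matchByDefault`, `matchDisjunction`) are illustrative and not the actual LinalgTransformationFilter API.

```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative filter: `matchDisjunction` lists accepted marker values and
// `matchByDefault` is the new option that also accepts ops with no marker set.
struct TransformationFilterSketch {
  std::vector<std::string> matchDisjunction;
  bool matchByDefault = false;

  bool matches(const std::optional<std::string> &marker) const {
    if (!marker)
      return matchDisjunction.empty() || matchByDefault;
    for (const std::string &candidate : matchDisjunction)
      if (*marker == candidate)
        return true;
    return false;
  }
};
```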
Differential Revision: https://reviews.llvm.org/D113057
The 2-D case can be rewritten to generate far fewer instructions and a single vector.shuffle, which seems to provide a nice performance boost.
Add this arrow to our quiver by exposing it with a new vector transform option.
Differential Revision: https://reviews.llvm.org/D113062
We'd like to take a progressive approach towards convolution op
CodeGen, by 1) tiling it to fit compute hierarchy first, and then
2) tiling along window dimensions with size 1 to reduce the problem
to be matmul-like. After that, we can 3) downscale high-D convolution
ops to low-D by removing the size-1 window dimensions. The final
step would be 4) vectorizing the low-D convolution op directly.
We have patterns for 1), 2), and 4). This commit adds a pattern for
3) for `linalg.conv_2d_nhwc_hwcf` ops as a starter. Supporting other
high-D convolution ops should be similar and mechanical.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112928
Symbol tables are a broadly useful top-level IR construct; for example, they
make it easy to access functions in a module by name instead of traversing the
list of the module's operations to find the corresponding function.
Depends On D112886
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D112821
Inserting a symbol into a SymbolTable may lead to the name of the symbol being
changed in order to ensure uniqueness of symbol names in the table. Return this
new name to spare the caller the need to extract it from the symbol operation.
Depends On D112700
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D112886
This commit moves parts of the existing bufferization code into external op interface implementations. Furthermore, Comprehensive Bufferize is adapted to use the new interface.
Future commits will decouple the interface and its op implementations from Comprehensive Bufferize and the Linalg dialect, as well as split them into multiple files with their own build targets. This commit leaves the file structure and build rules mostly unchanged.
Differential Revision: https://reviews.llvm.org/D112900
This commit adds a new op interface: BufferizableOpInterface. In the future, ops that implement this interface can be bufferized using Comprehensive Bufferize.
Note: The interface methods of this interface correspond to the "op interface" in ComprehensiveBufferize.cpp.
Differential Revision: https://reviews.llvm.org/D112974
In order to support fusion with the mma matrix type, we need to be able to
execute elementwise operations on it. This adds an op to support some
basic elementwise operations. This is not a full solution, as it only
supports a limited set of operations. Ideally we would
want to be able to fuse with more kinds of operations.
Differential Revision: https://reviews.llvm.org/D112857
wmma intrinsics have a large number of combinations; ideally we want to be able
to target all the different variants. To avoid a combinatorial explosion in the
number of MLIR ops, we use attributes to represent the different variations of
load/store/mma ops. We also generate TableGen helpers to know which
combinations are available. Using this we can avoid hardcoding a path
for specific shapes and can support more types.
This patch also adds boilerplate for tf32 op support.
Differential Revision: https://reviews.llvm.org/D112689
Add the shufflevector conversion. It only handles static indices, i.e., IntegerAttr.
Co-authored-by: Xinyi Liu <xyliuhelen@gmail.com>
Reviewed by: antiagainst
Differential revision: https://reviews.llvm.org/D112161
This makes the class usable with types that do not provide their own operator<.
Update MLIR Linalg ComprehensiveBufferize to take advantage of the new template param.
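A generic illustration of the technique rather than the actual class touched here: accepting a comparator as a template parameter so that element types need not define their own operator<.

```cpp
#include <functional>
#include <set>

// Illustrative container: the Compare template parameter (defaulting to
// std::less<T>) replaces the implicit reliance on T's operator<.
template <typename T, typename Compare = std::less<T>>
class SortedSetSketch {
public:
  explicit SortedSetSketch(Compare cmp = Compare()) : elements(cmp) {}
  void insert(const T &value) { elements.insert(value); }
  bool contains(const T &value) const { return elements.count(value) != 0; }

private:
  std::set<T, Compare> elements;
};
```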
Differential Revision: https://reviews.llvm.org/D112052
Provide support for removing an operation from the block that contains it and
moving it back to detached state. This allows for the operation to be moved to
a different block, a common IR manipulation for, e.g., module merging.
Also fix a potential one-past-end iterator dereference in Operation::moveAfter
discovered in the process.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D112700
The list of operations that do neither read nor write, but create an alias when bufferizing inplace, is getting longer. This commit adds a helper function so that we do not have to spell out the entire list each time.
Differential Revision: https://reviews.llvm.org/D112515
This patch reorders mergeLocalIds usage to merge locals only after the numbers of
dimensions and symbols are the same. This does not change any functionality,
because it does not matter in what order identifiers are merged;
the reason to do it is to ensure that the two FACs are aligned.
The order ensured in this patch simplifies a subsequent patch to improve
mergeLocalIds which requires dimensions and symbols to be aligned.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D112841
Added a type with different pointer/index bit width. Also
added some sanity CHECKs on the stored indices.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D112778
When the operand is a subview, we don't infer in_bounds, and some default cases (e.g., the case in the tests) will crash with `operand is NULL` when converting to LLVM.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D112772
Add a strategy pass that pads and hoists after tiling and fusion.
Depends On D112412
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112480
Adding a padding and hoisting pattern, a test pass, and tests. The patch prepares the split of tiling/fusion and padding.
Depends On D112255
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112412
Using [1] for representing the shape of a scalar is incorrect and will break with vectors of size 1.
- remove redundant helper functions
- fix couple of style warnings
Reviewed By: cota
Differential Revision: https://reviews.llvm.org/D112764
InsertionGuard's move constructor is currently the compiler-synthesized implementation, which is very bug-prone. A move-constructed InsertionGuard gets the same builder and insertion point as the one it is constructed from, leading to the insertion point being restored twice. This can even happen in non-obvious situations on some compilers, such as when returning a move-constructible struct from a function.
This patch fixes the issue by properly implementing the move constructor. An InsertionGuard that was used to move-construct another InsertionGuard is simply inactive and will not restore the insertion point.
I chose to explicitly delete the move assignment operator, as its semantics are not clear cut. If one were to strictly follow the rule of five, you'd have to restore the insertion point before taking ownership of the other guard's fields. I believe that would be rather confusing and/or surprising, however. One may still get such semantics using llvm::Optional or std::optional and the emplace method if really needed.
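A standalone sketch of the fixed move semantics, using a toy guard and builder rather than the real OpBuilder::InsertionGuard:

```cpp
struct Builder {
  int insertionPoint = 0;
};

class InsertionGuard {
public:
  explicit InsertionGuard(Builder &b)
      : builder(&b), savedPoint(b.insertionPoint) {}

  // Deactivate the moved-from guard so the insertion point is restored once.
  InsertionGuard(InsertionGuard &&other)
      : builder(other.builder), savedPoint(other.savedPoint) {
    other.builder = nullptr;
  }

  // Move assignment is deliberately deleted; its semantics would be unclear.
  InsertionGuard &operator=(InsertionGuard &&) = delete;

  ~InsertionGuard() {
    if (builder)
      builder->insertionPoint = savedPoint;
  }

private:
  Builder *builder;
  int savedPoint;
};
```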
Differential Revision: https://reviews.llvm.org/D112749
Adapt hoistPaddingOnTensors to leave replacing and erasing the old pad tensor operation to the caller. This change makes the function pattern-friendly.
Depends On D112003
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112255
Adapt the rewriteAsPaddedOp method to use the OpBuilder instead of the PatternRewriter.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112003
This patch extends the SubElementAttr interface to allow replacing a contained sub attribute. The attribute that should be replaced is identified by an index which denotes the n-th element returned by the accompanying walkImmediateSubElements method.
Using this addition the patch implements replacing SymbolRefAttrs contained within any dialect attributes.
Differential Revision: https://reviews.llvm.org/D111357
This patch fixes:
mlir/lib/IR/BuiltinAttributes.cpp:876:39: error: unused function
'isComplexOfIntType' [-Werror,-Wunused-function]
in a release build.
Rationale:
The silent exit(1) gives few clues about where the error occurs on failure
and may even be confusing at first. The CHECK testing of all computed values
and indices may be a little bit more elaborate, but it directly pinpoints
where errors happen if they occur. This style is also consistent with
the other tests, which I actually prefer.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D112688
* Move SmallVectors outside of inner loops to avoid frequent
allocations and deallocations
* Calculate linearized index and call flat range getters to
avoid internal shape querying behind `getValue`.
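A minimal illustration of the hoisting described in the first bullet, using std::vector in place of SmallVector; the loop body is a placeholder, not the actual code.

```cpp
#include <vector>

void processAll(int numElements) {
  std::vector<int> scratch; // hoisted: one allocation reused across iterations
  for (int i = 0; i < numElements; ++i) {
    scratch.clear();      // reuse capacity instead of reallocating each time
    scratch.push_back(i); // ... per-element work would go here ...
  }
}
```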
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D112099
Add llvm.mlir.global_ctors and global_dtors ops and their translation
support to LLVM global_ctors/global_dtors global variables.
Differential Revision: https://reviews.llvm.org/D112524
This patch adds the inclusive clause (which was missed in previous
reorganization - https://reviews.llvm.org/D110903) in omp.wsloop operation.
Added a test for validating it.
Also fixes the order clause, which was not accepting any values. It now accepts
"concurrent" as a value, as specified in the standard.
Reviewed By: kiranchandramohan, peixin, clementval
Differential Revision: https://reviews.llvm.org/D112198
Allow lowering of wmma ops with 64-bit indices. Change the default
version of the test to use the default layout.
Differential Revision: https://reviews.llvm.org/D112479
Analyze ops in a pseudo-random order to see if any assertions are triggered. Randomizing the order of analysis likely worsens the quality of the bufferization result (more out-of-place bufferizations). However, assertions should never fail, as that would indicate a problem with our implementation.
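A toy sketch of the idea of analyzing in a pseudo-random but reproducible order; the names here are placeholders, not the Comprehensive Bufferize API.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Shuffle the analysis order deterministically from a seed so that failures
// can be reproduced, then run the (placeholder) per-op analysis.
void analyzeInRandomOrder(std::vector<unsigned> &opIndices, unsigned seed) {
  std::mt19937 rng(seed);
  std::shuffle(opIndices.begin(), opIndices.end(), rng);
  for (unsigned idx : opIndices) {
    (void)idx; // analyzeOp(idx) would run here in the real pass
  }
}
```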
Differential Revision: https://reviews.llvm.org/D112581
This patch supports the atomic construct (read and write) following
section 2.17.7 of OpenMP 5.0 standard. Also added tests and
verifier for the same.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D111992
This also fixes the vector.shuffle C++ builder which had an incorrect type assumption that triggers with this new rewrite.
The vector.shuffle semantics were correct though.
Differential revision: https://reviews.llvm.org/D112578
This patch fixes:
mlir/lib/Transforms/Utils/DialectConversion.cpp:2775:5: error:
default label in switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]
by removing the default case. This way, the compiler should issue a
warning in the future when somebody adds a new enum value without a
corresponding case in the switch statement.
The current implementation invokes materializations
whenever an input operand does not have a mapping for the
desired type, i.e. it requires materialization at the earliest possible
point. This conflicts with the goal of dialect conversion (and also the
current documentation) which states that a materialization is only
required if the materialization is supposed to persist after the
conversion process has finished.
This revision refactors this such that whenever a target
materialization "might" be necessary, we insert an
unrealized_conversion_cast to act as a temporary materialization.
This allows for deferring the invocation of the user
materialization hooks until the end of the conversion process,
where we have a better sense of whether it's actually
necessary. This has several benefits:
* In some cases a target materialization hook is no longer
necessary
When performing a full conversion, there are some situations
where a temporary materialization is necessary. Moving forward,
these users won't need to provide any target materializations,
as the temporary materializations do not require the user to
provide materialization hooks.
* getRemappedValue can now handle values that haven't been
converted yet
Before this commit, it wasn't well supported to get the remapped
value of a value that hadn't been converted yet (making it
difficult/impossible to convert multiple operations in many
situations). This commit updates getRemappedValue to properly
handle this case by inserting temporary materializations when
necessary.
Another code-health benefit is that with this change we
can move the majority of the complexity related to materializations
to the end of the conversion process, instead of handling it ad hoc
while the conversion is happening.
Differential Revision: https://reviews.llvm.org/D111620
The dyn_cast result should be checked, and we should bail out if the dyn_cast failed.
Reviewed By: sjarus, NatashaKnk
Differential Revision: https://reviews.llvm.org/D112574
Rationale:
The currently used trait demanded that all types be the same,
which is not true (since the sparse part may change and the dim sizes
may be relaxed). This revision uses the correct trait and makes the
rank-match test explicit in the verify method.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D112576
This refactoring adds a few "event" functions (start/end loop-seq/loop) for
readability of the core function of codegen. This also prepares sparse tensor
output codegen, where these "event" functions will provide convenient
placeholders to start or stop insertion bookkeeping.
This revision also includes various minor changes that had been
pending in my local workspace.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D112506
Polynomial approximation can be extended to support N-d vectors.
N-dimensional vectors are useful when vectorizing operations on N-dimensional
tiles. Before lowering to LLVM these vectors are usually unrolled or flattened
to 1-dimensional vectors.
Differential Revision: https://reviews.llvm.org/D112566
Added a note in the placeholder section. While writing things
like the predicate of an attribute, we may embed certain placeholders in the C++
expression. Note that the type of the placeholder is only guaranteed to
be the base type like mlir::Type; it's better not to use the derived
type, which depends on the implementation.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D112396
1. The combining kinds min/max of the vector reduction op have been changed to
minf/maxf, minsi/maxsi, and minui/maxui. Modify getVectorReductionOp
accordingly.
2. Add min/max to the supported reductions.
Reviewed By: dcaballe, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D112246
Fix AffineExpr `getLargestKnownDivisor` for ceildiv/floordiv cases.
In these cases, nothing can be inferred about the divisor of the
result; for example, `d0 floordiv 4` evaluates to 2 at d0 = 8 and to 3 at
d0 = 12, so no divisor greater than 1 can be assumed for the result.
Add a test case for `mod` as well.
Differential Revision: https://reviews.llvm.org/D112523
The current behavior conveniently allows iterating over the regions of an operation
implicitly by exposing an operation as Iterable. However, this is also error-prone, and
code that intends to iterate over the results or the operands could end up apparently
"working" instead of throwing a runtime error.
The lack of static type checking in Python contributes to the ambiguity here; it seems
safer not to do this and to require an explicit qualification to iterate (`op.results`, `op.regions`, ...).
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111697
The specification specified that the output type for quantized average pool should be
an i32. Only the accumulator should be an i32; the result type should match the input
type.
Caused by https://reviews.llvm.org/D111590
Reviewed By: sjarus, GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D112484
Even though tensor.cast is not part of the sparse tensor dialect,
it may be used to cast static dimension sizes to dynamic dimension
sizes for sparse tensors without changing the actual sparse tensor
itself. Those cases should be lowered properly when replacing sparse
tensor types with their opaque pointers. Likewise, no-op sparse
conversions are handled by this revision in a similar manner.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D112173
Using callbacks for allocation/deallocation allows users to override
the default.
Also add an option to comprehensive bufferization pass to use `alloca`
instead of `alloc`s. Note that this option is just for testing. The
option to use `alloca` does not work well with the option to allow for
returning memrefs.
Differential Revision: https://reviews.llvm.org/D112166
In several cases, operation result types can be unambiguously inferred from
operands and attributes at operation construction time. Stop requiring the user
to provide these types as arguments in the ODS-generated constructors in Python
bindings. In particular, handle the SameOperandAndResultTypes and
FirstAttrDerivedResultType traits as well as InferTypeOpInterface using the
recently added interface support. This is a significant usability improvement
for IR construction, similar to what C++ ODS provides.
Depends On D111656
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111811
Introduce the initial support for operation interfaces in C API and Python
bindings. Interfaces are a key component of MLIR's extensibility and should be
available in bindings to make use of the full potential of MLIR.
This initial implementation exposes InferTypeOpInterface all the way to the
Python bindings since it can be later used to simplify the operation
construction methods by inferring their return types instead of requiring the
user to do so. The general infrastructure for binding interfaces is defined and
InferTypeOpInterface can be used as an example for binding other interfaces.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111656
Splitting the WsLoop tests, as they were getting harder to debug with offsets over 100 for some of them.
Reviewed By: clementval
Differential Revision: https://reviews.llvm.org/D112407
This removes duplication and makes nesting more clear.
It also reduces the amount of changes necessary for exposing future options.
Differential revision: https://reviews.llvm.org/D112344
This allows clearing an OpPassManager and populating it again with a new
pipeline, while preserving all the other options (including instrumentations).
Differential Revision: https://reviews.llvm.org/D112393
This patch fixes a bug in the implementation of `mergeSymbolIds` where symbol
identifiers were not unique after merging them. Asserts for checking uniqueness
before and after the merge are also added. The asserts checking uniqueness
after the merge fail without the fix on existing test cases.
Reviewed By: arjunp
Differential Revision: https://reviews.llvm.org/D111958
This patch adds a polynomial approximation that matches the
approximation in Eigen.
Note that the approximation only applies to vectorized inputs;
the scalar rsqrt is left unmodified.
The approximation is protected with a flag since it emits an AVX2
intrinsic (generated via the X86Vector dialect). This is the only reasonably
clean way that I could find to generate the exact approximation that
I wanted (i.e. an identical one to Eigen's).
I considered two alternatives:
1. Introduce a Rsqrt intrinsic in LLVM, which doesn't exist yet.
I believe this is because there is no definition of Rsqrt that
all backends could agree on, since hardware instructions that
implement it have widely varying degrees of precision.
This is something that the standard could mandate, but Rsqrt is
not part of IEEE754, so I don't think this option is feasible.
2. Emit fdiv(1.0, sqrt) with fast math flags to allow reciprocal
transformations. Although portable, this doesn't allow us
to generate exactly the code we want; it is the LLVM backend,
and not MLIR, that controls what code is generated based on the
target CPU.
Reviewed By: ezhulenev
Differential Revision: https://reviews.llvm.org/D112192
Pass the modifiers from the Flang parser to the FIR/MLIR workshare
loop operation.
Not yet supporting the SIMD modifier, which is a bit more work
than just adding it to the list of modifiers, so it will go in a
separate patch.
This adds a new field to the WsLoopOp.
Also add test for dynamic WSLoop, checking that dynamic schedule calls
the init and next functions as expected.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111053
This commit adds support for scf::IfOp to comprehensive bufferization. Support is currently limited to cases where both branches yield tensors that bufferize to the same buffer.
To keep the analysis simple, scf::IfOp are treated as memory writes for analysis purposes, even if no op inside any branch is writing. (scf::ForOps are handled in the same way.)
Differential Revision: https://reviews.llvm.org/D111929
ConstantOp should be used instead of ConstantIntOp to be able to support index type.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D112191
The summary can contain references to e.g. attribute defaults, which
can contain special characters. So these strings need to be C++
escaped.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D112249
When we escape strings for C++, make sure we use C++ escape
sequences. (In particular, \x22 instead of \22)
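A tiny self-contained check of why the distinction matters; this is just an illustration, not code from the patch.

```cpp
#include <cassert>

int main() {
  // "\x22" is a hex escape for 0x22, i.e. the double-quote character.
  assert('\x22' == '"');
  // "\22" is an octal escape: 022 octal == 18 decimal == 0x12.
  assert('\22' == 0x12);
  return 0;
}
```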
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D112269
Handle contraction op like all the other generic op reductions. This
simplifies the code. We now rely on contractionOp canonicalization to
keep the same code quality.
Differential Revision: https://reviews.llvm.org/D112171
Add several patterns that will simplify contraction vectorization in the
future. With those canonicalizations we will be able to remove the special
case for contraction during vectorization and rely on those transformations to
avoid materializing broadcast ops.
This effectively mirrors the logging in dialect conversion, which has proven
very useful for understanding the pattern application process.
Differential Revision: https://reviews.llvm.org/D112120
In the stride == 1 case, conv1d reads contiguous data along the input dimension. This can be advantageously used for bulk memory transfers and computation while avoiding unrolling. Experimentally, this can yield speedups of up to 50%.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D112139
An InitTensorOp is replaced with an ExtractSliceOp on the InsertSliceOp's destination. This optimization is applied after analysis and only to InsertSliceOps that were decided to bufferize inplace. Another analysis on the new ExtractSliceOp is needed after the rewrite.
Differential Revision: https://reviews.llvm.org/D111955
This commit is in preparation for scf.if support.
* `condition` in findValueInReverseUseDefChain takes a Value instead of OpOperand*.
* Return a SetVector<Value> instead of a single Value. This SetVector always contains exactly one Value at the moment.
Differential Revision: https://reviews.llvm.org/D111928
This patch supports the ordered construct in OpenMP dialect following
Section 2.19.9 of the OpenMP 5.1 standard. Also lowering to LLVM IR
using the OpenMP IRBuilder. Lowering to LLVM IR for the ordered simd directive
is not supported yet since LLVM optimization passes do not support it
for now.
Reviewed By: kiranchandramohan, clementval, ftynse, shraiysh
Differential Revision: https://reviews.llvm.org/D110015
In a subsequent commit, getResultBuffer can return a "null" Value. This is the case when the returned buffer from an scf.if is not unique.
This commit is in preparation for scf.if support to keep the next commit smaller.
Differential Revision: https://reviews.llvm.org/D111927
This is required for bufferization of scf::IfOp, which is added in a subsequent commit.
Some ops (scf::ForOp, TiledLoopOp) require PreOrder traversal to make sure that bbArgs are mapped before bufferizing the loop body.
Differential Revision: https://reviews.llvm.org/D111924
The current implementation used explicit index->int64_t casts for some, but
not all instances of passing values of type "index" into and from the sparse
support library. This revision makes the situation more consistent by
using a new "index_t" type at all such places (which allows for less trivial
casting in the generated MLIR code). Note that the current revision still
assumes that "index" is 64 bits wide. If we want to support targets with
alternative "index" bit widths, we need to build the support library differently.
But the current revision is a step forward by making this requirement explicit
and more visible.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D112122
Add a pattern to take a rank-reducing subview and drop the innermost
contiguous unit dim.
This is useful when lowering vector to backends with 1d vector types.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D111561
According to the OpenMP 5.0 standard, the names and hints of the critical operation
are closely related. The following are the restrictions on them:
- Unless the effect is as if `hint(omp_sync_hint_none)` was specified, the
critical construct must specify a name.
- If the hint clause is specified, each of the critical constructs with the
same name must have a hint clause for which the hint-expression evaluates to
the same value.
These restrictions will be enforced by design if the hint expression is a part
of the `omp.critical.declare` operation.
- Any operation with no "name" will be considered to have
`hint(omp_sync_hint_none)`.
- All the operations with the same "name" will have the same hint value.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D112134
Follow up to also use the prefixed emitters in OpFormatGen (moved
getGetterName(s) and getSetterName(s) to Operator as that is most
convenient usage wise even though it just depends on Dialect). Prefix
accessors in Test dialect and follow up on missed changes in
OpDefinitionsGen.
Differential Revision: https://reviews.llvm.org/D112118
This revision uses the newly refactored StructuredGenerator to create a simple vectorization for conv1d_nwc_wcf.
Note that the pattern is not specific to the op and is technically not even specific to the ConvolutionOpInterface (modulo minor details related to dilations and strides).
The overall design follows the same ideas as the lowering of vector::ContractionOp -> vector::OuterProduct: it seeks to be minimally complex, composable and extensible while avoiding inference analysis. Instead, we metaprogram the maps/indexings we expect and we match against them.
This is just a first stab and still needs to be evaluated for performance.
Other tradeoffs are possible that should be explored.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111894
This canonicalizer replaces reshapes of constant tensors with new constants that contain the updated shape (skipping the reshape operation).
Differential Revision: https://reviews.llvm.org/D112038
The functions are moved above the parseClauses function, as they
will be used inside it to parse the `hint` clause.
Reviewed By: clementval
Differential Revision: https://reviews.llvm.org/D112071
Code reorganized in OpenMPDialect.cpp to have all functions corresponding to an operation together.
Added parseClauses function to avoid code duplication while parsing clauses in OpenMP operations. Also added printers and verifiers for clauses, which are being used for multiple operations.
Reviewed By: kiranchandramohan, peixin
Differential Revision: https://reviews.llvm.org/D110903
The change is based on the proposal from the following discussion:
https://llvm.discourse.group/t/rfc-memreftype-affine-maps-list-vs-single-item/3968
* Introduce `MemRefLayoutAttr` interface to get `AffineMap` from an `Attribute`
(`AffineMapAttr` implements this interface).
* Store layout as a single generic `MemRefLayoutAttr`.
This change removes the affine map composition feature and related API.
Actually, while the `MemRefType` itself supported it, almost none of the upstream
code could work with more than one affine map in `MemRefType`.
The introduced `MemRefLayoutAttr` allows re-implementing this feature
in a more stable way, via a separate attribute class.
The interface also allows using layout representations other than affine maps.
For example, the described "stride + offset" form, which is currently supported in the ASM parser only,
can now be expressed as a separate attribute.
Reviewed By: ftynse, bondhugula
Differential Revision: https://reviews.llvm.org/D111553
- `assign` with an ArrayRef was calling `append`
- `assign` with an empty ArrayRef was not clearing the storage (see the sketch below)
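A standalone illustration of the corrected behavior, using std::vector in place of the actual storage class touched by this change:

```cpp
#include <vector>

// Illustrative assign(): clear the existing storage first, then copy the new
// elements; the buggy version skipped the clear and effectively appended.
template <typename T>
void assignRange(std::vector<T> &storage, const std::vector<T> &newValues) {
  storage.clear();
  storage.insert(storage.end(), newValues.begin(), newValues.end());
}
```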
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D112043
This helper function checks if two given ops are in mutually exclusive branches of the same scf::IfOp.
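A toy sketch of the check using stand-in data structures (not MLIR's Operation/Region classes): two ops are in mutually exclusive branches if they sit in different branches of the same if-construct.

```cpp
// Stand-in IR structures: a block belongs to either the "then" or "else"
// branch of its enclosing if, and every op lives in a block.
struct ToyIfOp;
struct ToyBlock {
  ToyIfOp *enclosingIf = nullptr; // set if this block is a branch of an if
  bool isThenBranch = false;
};
struct ToyIfOp {
  ToyBlock *parentBlock = nullptr; // block containing the if itself
};
struct ToyOp {
  ToyBlock *block = nullptr;
};

// Return 1/0 for the branch of `ifOp` that (transitively) contains `op`,
// or -1 if `op` is not nested inside `ifOp`.
static int branchContaining(const ToyOp &op, const ToyIfOp *ifOp) {
  for (ToyBlock *b = op.block; b != nullptr;
       b = b->enclosingIf ? b->enclosingIf->parentBlock : nullptr) {
    if (b->enclosingIf == ifOp)
      return b->isThenBranch ? 1 : 0;
  }
  return -1;
}

bool insideMutuallyExclusiveBranches(const ToyOp &a, const ToyOp &b,
                                     const ToyIfOp *ifOp) {
  int branchA = branchContaining(a, ifOp);
  int branchB = branchContaining(b, ifOp);
  return branchA >= 0 && branchB >= 0 && branchA != branchB;
}
```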
Differential Revision: https://reviews.llvm.org/D111957
This revision lifts the artificial restriction on having exact matches between
source and destination type shapes. A static size may become dynamic. We still
reject changing a dynamic size into a static size to avoid the need for a
runtime "assert" on the conversion. This revision also refactors some of the
conversion code to share same-content buffers.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111915
The functionality already exists in AsmParser to parse optional ArrayAttrs and
StringAttrs, but only if they are added to a NamedAttrList. This moves the
code to parse an optional attribute and add it to a list into a common
template, and exposes the simpler functionality of just parsing the optional
attributes.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111918
Use a wider range for approximating Tanh to match the results computed in Eigen with AVX.
Reviewed By: cota
Differential Revision: https://reviews.llvm.org/D112011
Starting with a mostly NFC change to be able to differentiate
mechanical changes from ones that require more detailed review.
This will be used to flush out the flow before flipping dialects used
outside local testing. As this dialect is not intended to be used
generally, but rather only in tests in core, I will not be following the
2-week staging approach here.
Besides accessing the record, there is currently no way to access all possible
constraint information, such as the base constraint of a variadic constraint,
for example.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111719
AnyAttrOf, similar to AnyTypeOf, expects the attribute to be one of the
given attributes.
For instance, `AnyAttrOf<[I32Attr, StrAttr]>` expects either an `I32Attr`
or a `StrAttr`.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111739
This removes edge cases where the default flags we want to use
during printing (e.g. local scope, eliding attributes, etc.)
get missed/dropped.
Differential Revision: https://reviews.llvm.org/D111761
When folding A->B->C => A->C, only accept an A->C that is a valid shape cast.
Reviewed By: ThomasRaoux, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111473
The no-result version of createOrFold calls 'tryFold' but
ignores the result since it doesn't matter what it produced.
Explicitly cast to void to silence this warning:
../llvm/mlir/include/mlir/IR/Builders.h:454:5: warning: ignoring return value of function declared with 'nodiscard' attribute [-Wunused-result]
tryFold(op.getOperation(), unused);
^~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
Differential Revision: https://reviews.llvm.org/D111951
The existing message hints that the dialect may not be loaded, but there
is also the possibility that the dialect was loaded and the initialize()
method didn't include the Type/Attribute.
The rules were too restrictive, causing out-of-place bufferization when the results of two ExtractSliceOps are fed into an InsertSliceOp.
Differential Revision: https://reviews.llvm.org/D111861
This patch removes code very specific to affine dependence analysis and
refactors it as a FlatAffineRelation.
A FlatAffineRelation represents a set of ordered pairs (domain -> range) where
"domain" and "range" are tuples of identifiers. These relations are used to
represent an "access relation" for memory access on a memref. An access
relation maps elements of an iteration domain to the element(s) of an array
domain accessed by that iteration of the associated statement through some
array reference. The dependence relation representing the dependence
constraints between two memory accesses can be built by composing the access
relation of the destination access by the inverse of the access relation of
source access.
This patch does not change the functionality of the existing dependence
analysis in checkMemrefAccessDependence, but refactors it to use
FlatAffineRelations to deduplicate code and enable code reuse for future
development of features like scheduling, value-based dependence analysis, etc.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D110563
This revision also adds a few passes to the sparse compiler part to unify the transformation sequence with all other paths we currently use.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111900
This is the only lowering to Linalg that Tosa has, so the suffix is needlessly
verbose. Likely this was a carry-over from IREE's usage, where we
originally lowered to Linalg on buffers (the only Linalg that existed at
the time), so everything on tensors needed the suffix. We're dropping
it in IREE also, having transitioned entirely to using Linalg on
tensors.
Reviewed By: sjarus
Differential Revision: https://reviews.llvm.org/D111911
Next step towards supporting sparse tensor outputs.
Also some minor refactoring of enum constants, as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general size arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D111771
This adds a new parser and printer for text which may be a keyword or a
string. When printing, it will attempt to print the text as a keyword,
but if it has any special or non-printable characters, it will be
printed as an escaped string. When parsing, it will parse either a
valid keyword or a potentially escaped string. The printer allows for an
empty string, in which case it prints `""`.
This new function is used for printing the name in NamedAttributes, and
for printing the symbol name after the `@`. In CIRCT we are using this
to print module port names, which are conceptually similar to named
function arguments.
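A standalone sketch of the printing side under the assumptions above (the keyword character set here is simplified, and this is not the actual AsmPrinter code):

```cpp
#include <cctype>
#include <ostream>
#include <string>

// Simplified keyword check: letter or underscore first, then letters, digits,
// '_', '$' or '.'; anything else forces the quoted-string form.
static bool isBareKeyword(const std::string &text) {
  if (text.empty())
    return false;
  if (!std::isalpha(static_cast<unsigned char>(text[0])) && text[0] != '_')
    return false;
  for (char c : text)
    if (!std::isalnum(static_cast<unsigned char>(c)) && c != '_' &&
        c != '$' && c != '.')
      return false;
  return true;
}

// Print `text` bare when it is a valid keyword, otherwise as a quoted string
// (quotes and backslashes escaped; other escapes omitted for brevity).
// An empty string prints as "".
void printKeywordOrString(const std::string &text, std::ostream &os) {
  if (isBareKeyword(text)) {
    os << text;
    return;
  }
  os << '"';
  for (char c : text) {
    if (c == '"' || c == '\\')
      os << '\\';
    os << c;
  }
  os << '"';
}
```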
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111683
`DefaultValuedAttr<StrAttr, "">` and `ConstantAttr<StrAttr, "">`
result in bugs in which TableGen will not recognize that the attribute
has a default value, because `""` is an empty TableGen string.
Strings no longer have special treatment. Instead, string values must be
wrapped in quotes: "\"foo\"". Two helpers, `DefaultValuedStrAttr` and
`ConstantStrAttr`, have been added to keep code clean.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111855