This was added without the fully qualified names used by other generated
methods, which leads to downstream dialects failing to compile the
generated code when they are not in the mlir namespace.
Differential Revision: https://reviews.llvm.org/D94132
If an operation defines an optional attribute (OptionalAttr or
UnitAttr), transformations may wish to remove these attributes while
maintaining invariants established by the operation. Currently, the only
way to do this is by calling `Operation::removeAttr("attrName")`, which
requires developers to know the exact attribute name used by TableGen.
Furthermore, if the attribute name changes, this is not detected at
compile time: `removeAttr` simply returns a null attribute and raises no
error unless the caller checks the returned value.
This patch adds TableGen support for generating `remove<AttrName>Attr`
methods for the optional attributes defined by operations.
Implementation choice: to keep the generated name in camelCase, the
first character of an attribute called `myAttr` is upper-cased, so the
final method is called `removeMyAttr`.
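For illustration, a minimal sketch of the string-based removal this improves upon; the attribute name "myAttr" is a placeholder, not one from this patch.
```
#include "mlir/IR/Operation.h"

void dropOptionalAttr(mlir::Operation *op) {
  // Returns the removed attribute, or null if no attribute with that name
  // exists; a typo or a renamed attribute goes unnoticed unless the caller
  // checks the result.
  if (!op->removeAttr("myAttr")) {
    // No error is raised here -- the attribute was simply absent.
  }
}
```
With this patch, an op declaring an optional attribute named `myAttr` additionally gets a generated `removeMyAttr` method, so a renamed attribute becomes a compile-time error at every call site.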
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D93903
Remove unnecessary `&` from loop variables.
Fix warnings: "loop variable is always a copy because the range does not
return a reference".
```
[240/2862] Building CXX object tools/mlir/tools/mlir-tblgen/CMakeFiles/mlir-tblgen.dir/TypeDefGen.cpp.o
llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:50:25: warning: loop variable 'typeDef' is always a copy because the range of type 'llvm::iterator_range<llvm::mapped_iterator<std::__1::__wrap_iter<llvm::Record **>, (lambda at llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:40:16), mlir::tblgen::TypeDef> >' does not return a reference [-Wrange-loop-analysis]
for (const TypeDef &typeDef : defs)
^
llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:50:10: note: use non-reference type 'mlir::tblgen::TypeDef'
for (const TypeDef &typeDef : defs)
^~~~~~~~~~~~~~~~~~~~~~~~
llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:64:23: warning: loop variable 'typeDef' is always a copy because the range of type 'llvm::iterator_range<llvm::mapped_iterator<std::__1::__wrap_iter<llvm::Record **>, (lambda at llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:40:16), mlir::tblgen::TypeDef> >' does not return a reference [-Wrange-loop-analysis]
for (const TypeDef &typeDef : defs)
^
llvm-project/mlir/tools/mlir-tblgen/TypeDefGen.cpp:64:8: note: use non-reference type 'mlir::tblgen::TypeDef'
for (const TypeDef &typeDef : defs)
^~~~~~~~~~~~~~~~~~~~~~~~
2 warnings generated.
[1934/2862] Building CXX object tools...Files/toyc-ch4.dir/mlir/MLIRGen.cpp.o
llvm-project/mlir/examples/toy/Ch4/mlir/MLIRGen.cpp:139:22: warning: loop variable 'name_value' is always a copy because the range of type 'detail::zippy<detail::zip_shortest, ArrayRef<unique_ptr<VariableExprAST, default_delete<VariableExprAST> > > &, MutableArrayRef<BlockArgument> >' does not return a reference [-Wrange-loop-analysis]
for (const auto &name_value :
^
llvm-project/mlir/examples/toy/Ch4/mlir/MLIRGen.cpp:139:10: note: use non-reference type 'std::__1::tuple<const std::__1::unique_ptr<toy::VariableExprAST, std::__1::default_delete<toy::VariableExprAST> > &, mlir::BlockArgument &>'
for (const auto &name_value :
^~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
[1940/2862] Building CXX object tools...Files/toyc-ch5.dir/mlir/MLIRGen.cpp.o
llvm-project/mlir/examples/toy/Ch5/mlir/MLIRGen.cpp:139:22: warning: loop variable 'name_value' is always a copy because the range of type 'detail::zippy<detail::zip_shortest, ArrayRef<unique_ptr<VariableExprAST, default_delete<VariableExprAST> > > &, MutableArrayRef<BlockArgument> >' does not return a reference [-Wrange-loop-analysis]
for (const auto &name_value :
^
llvm-project/mlir/examples/toy/Ch5/mlir/MLIRGen.cpp:139:10: note: use non-reference type 'std::__1::tuple<const std::__1::unique_ptr<toy::VariableExprAST, std::__1::default_delete<toy::VariableExprAST> > &, mlir::BlockArgument &>'
for (const auto &name_value :
^~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
```
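For reference, a self-contained sketch of the kind of change applied; the function and ranges here are illustrative, not taken from the files above. When a range yields its elements by value (as `llvm::zip` and `mapped_iterator` ranges do), binding them to `const auto &` still copies and triggers -Wrange-loop-analysis, so the loop variable is taken by value instead.
```
#include "llvm/ADT/STLExtras.h"
#include <tuple>
#include <vector>

int sumPairs(const std::vector<int> &lhs, const std::vector<int> &rhs) {
  int sum = 0;
  // Was: for (const auto &pair : llvm::zip(lhs, rhs)) -- always a copy.
  for (auto pair : llvm::zip(lhs, rhs))
    sum += std::get<0>(pair) + std::get<1>(pair);
  return sum;
}
```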
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D94003
[mlir] Remove LLVMType, LLVM dialect types now derive Type directly
This class has become a simple `isa` hook with no proper functionality.
Removing it will allow us to eventually make the LLVM dialect type infrastructure
open, i.e., support non-LLVM types inside container types, which itself will
make the type conversion more progressive.
Introduce a function, `LLVM::isCompatibleType`, to be used instead of
`isa<LLVMType>`. For now, the two are strictly equivalent.
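A minimal sketch of the replacement pattern at call sites; the helper function name is made up for illustration.
```
#include "mlir/Dialect/LLVMIR/LLVMTypes.h"
#include "mlir/IR/Types.h"

// Returns true if `type` can appear in LLVM dialect operations.
bool usableInLLVMDialect(mlir::Type type) {
  // Was: type.isa<mlir::LLVM::LLVMType>()
  return mlir::LLVM::isCompatibleType(type);
}
```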
Depends On D93681
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D93713
* When porting npcomp to use these bindings, I ran into enough patterns of collisions that I decided to be somewhat draconian about not polluting the namespace.
* With these changes all of the npcomp dialects generate and pass what tests we have.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D93920
This fixes an incorrect fatal error in TableGen. This code probably comes
from before attributes were allowed to interleave with operands in ODS.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D93915
Implement Bug 46698, making ODS synthesize a getType() method that returns a
specific C++ class for OneResult methods where we know that class. This eliminates
a common source of casts in things like:
myOp.getType().cast<FIRRTLType>().getPassive()
because we know that myOp always returns a FIRRTLType. This also encourages
op authors to type their results more tightly (which is also good for
verification).
I chose to implement this by splitting the OneResult trait into itself plus a
OneTypedResult trait, given that many things are using `hasTrait<OneResult>`
to conditionalize various logic.
While this change makes many ops return more specific getType() results, it
is generally drop-in compatible with the previous behavior because 'x.cast<T>()'
is allowed when x is already known to be a T. The one exception is that
we now need declarations of the types used by ops, which is why a couple of
headers needed additional #includes.
I updated a few things in tree to remove the now-redundant `.cast<>`'s, but there
are probably many more that can be removed.
Differential Revision: https://reviews.llvm.org/D93790
Previously we generated a separate serialization method for each op; those
methods duplicated the logic of handling operands/results/attributes and such.
This commit creates a generic method and lets suitable op-specific
serialization methods call into it.
`wc -l SPIRVSerialization.inc`: before 8304, after 5597 (2707 lines fewer).
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93535
Previously we generated a separate deserialization method for each op; those
methods duplicated the logic of parsing operands/results/attributes and such.
This commit creates a generic method and lets suitable op-specific
deserialization methods call into it.
`wc -l SPIRVSerialization.inc`: before 13290, after 8304 (4986 lines fewer).
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93504
This commit renames various SPIR-V related conversion files for
consistency. It drops the "Convert" prefix from various files and
fixes various comment headers.
Reviewed By: hanchung, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D93489
Adds rewrite patterns to convert select+cmp instructions into clamp
instructions whenever possible. Support is added to convert:
- FOrdLessThan, FOrdLessThanEqual to GLSLFClampOp.
- SLessThan, SLessThanEqual to GLSLSClampOp.
- ULessThan, ULessThanEqual to GLSLUClampOp.
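For context, the algebraic identity the patterns exploit, sketched in plain C++ rather than SPIR-V assembly; the function below is illustrative only.
```
// clamp(x, lo, hi) == min(max(x, lo), hi); expressed as compare+select
// pairs, this is the kind of shape the new patterns recognize and fold
// into a single GLSL clamp op.
float clampViaCmpSelect(float x, float lo, float hi) {
  float t = (lo < x) ? x : lo; // FOrdLessThan + Select == max(x, lo)
  return (t < hi) ? t : hi;    // FOrdLessThan + Select == min(t, hi)
}
```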
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D93618
This revision drops init_tensor arguments from Linalg on tensors and instead handles output buffers and output tensors uniformly.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.
Differential Revision: https://reviews.llvm.org/D93469
This class used to serve a few useful purposes:
* Allowed containing a null DictionaryAttr
* Provided some simple mutable API around a DictionaryAttr
The first of these is no longer an issue now that there is much better caching support for attributes in general, plus a cache in the context for empty dictionaries. The second causes more trouble than it's worth because it mutates the internal dictionary on every action, leading to a potentially large number of dictionary copies. NamedAttrList is a much better alternative for the second use case, and should be modified as needed to better fit its usage as a DictionaryAttrBuilder.
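A minimal sketch of the builder-style usage suggested here, assuming the standard NamedAttrList API; the attribute names are placeholders.
```
#include "mlir/IR/Builders.h"
#include "mlir/IR/OperationSupport.h"

mlir::DictionaryAttr buildAttrs(mlir::MLIRContext *ctx) {
  mlir::Builder b(ctx);
  mlir::NamedAttrList attrs; // cheap to mutate, no intermediate dictionaries
  attrs.append("alignment", b.getI64IntegerAttr(16));
  attrs.append("is_mutable", b.getUnitAttr());
  // A uniqued DictionaryAttr is only materialized once, at the end.
  return attrs.getDictionary(ctx);
}
```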
Differential Revision: https://reviews.llvm.org/D93442
This commit shuffles SPIR-V code around to better follow MLIR
convention. Specifically,
* Created IR/, Transforms/, Linking/, and Utils/ subdirectories and
moved suitable code inside.
* Created SPIRVEnums.{h|cpp} for SPIR-V C/C++ enums generated from
SPIR-V spec. Previously they were cluttered inside SPIRVTypes.{h|cpp}.
* Fixed include guards in various header files (both .h and .td).
* Moved serialization tests under test/Target/SPIRV.
* Renamed TableGen backend -gen-spirv-op-utils into -gen-spirv-attr-utils
as it is only generating utility functions for attributes.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D93407
This exposes several issues with the current generation that this revision also fixes.
* TypeDef now allows specifying the base class to use when generating.
* TypeDef now inherits from DialectType, which allows for using it as a TypeConstraint
* Parser/Printers are no longer generated in the header (removing duplicate symbols), and are now only generated when necessary.
- Now that generatedTypeParser/Printer are only generated in the definition file,
existing users will need to manually expose this functionality when necessary.
* ::get() is no longer generated for singleton types, because it isn't necessary.
Differential Revision: https://reviews.llvm.org/D93270
This revision adds a new `StaticVerifierFunctionEmitter` class that emits local static functions in the .cpp file for shared operation verification. This class deduplicates shared operation verification code by emitting static functions alongside the op definitions. These methods are local to the definition file, and are invoked within the operation verify methods. The first bit of shared verification is for the type constraints used when verifying operands and results. An example is shown below:
```
static LogicalResult localVerify(...) {
...
}
LogicalResult OpA::verify(...) {
if (failed(localVerify(...)))
return failure();
...
}
LogicalResult OpB::verify(...) {
if (failed(localVerify(...)))
return failure();
...
}
```
This allowed for saving >400kb of code size from a downstream TensorFlow project (~15% of MLIR code size).
Differential Revision: https://reviews.llvm.org/D91381
This revision adds a new `printNewline` hook to OpAsmPrinter that allows for printing a newline within the custom format of an operation; the text that follows is indented to the start of the operation. Support for the declarative assembly format is also added, in the form of a `\n` literal.
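For illustration, a sketch of how the hook might be used from a C++ custom printer; the helper and the literal text are made up.
```
#include "mlir/IR/OpImplementation.h"

// Prints two pieces of an op's custom syntax on separate lines; the second
// line is indented to the column where the op itself started.
void printOnTwoLines(mlir::OpAsmPrinter &printer) {
  printer << " first_piece";
  printer.printNewline();
  printer << "second_piece";
}
```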
Differential Revision: https://reviews.llvm.org/D93151
When printing verification errors for ops with the incorrect number of
operand segments, print the required number as well as the actual
number. Split off from D93005.
Differential Revision: https://reviews.llvm.org/D93145
The check for formatting enum attributes was missing a call to get the base attribute, which is necessary to strip off the top-level OptionalAttr<> wrapper.
Differential Revision: https://reviews.llvm.org/D92713
- Instead of hardcoding the parameters and return types of 'inferReturnTypes', use the
InferTypeOpInterface trait to generate the method declaration.
- Fix InferTypeOpInterface to use a fully qualified type for inferReturnTypes results.
Differential Revision: https://reviews.llvm.org/D92585
Given that OpState already implicitly converts to Operation*, this seems reasonable.
The alternative would be to add more functions to OpState which forward to Operation.
Reviewed By: rriddle, ftynse
Differential Revision: https://reviews.llvm.org/D92266
PDL patterns are now supported via a new `PDLPatternModule` class. This class contains a ModuleOp with the pdl::PatternOp operations representing the patterns, as well as a collection of registered C++ functions for native constraints/creations/rewrites/etc. that may be invoked via the pdl patterns. Instances of this class are added to an OwningRewritePatternList in the same fashion as C++ RewritePatterns, i.e. via the `insert` method.
The PDL bytecode is an in-memory representation of the PDL interpreter dialect that can be efficiently interpreted/executed. The representation of the bytecode boils down to a code array (for opcodes/memory locations/etc.) and a memory buffer (for storing attributes/operations/values/any other data necessary). The bytecode operations are effectively a 1-1 mapping to the PDLInterp dialect operations, with a few exceptions in cases where the in-memory representation of the bytecode can be more efficient than the MLIR representation. For example, a generic `AreEqual` bytecode op can be used to represent AreEqualOp, CheckAttributeOp, and CheckTypeOp.
The execution of the bytecode is split into two phases: matching and rewriting. When matching, all of the matched patterns are collected to avoid the overhead of re-running parts of the matcher. These matched patterns are then considered alongside the native C++ patterns, which rewrite immediately in-place via `RewritePattern::matchAndRewrite`, for the given root operation. When a PDL pattern is matched and has the highest benefit, it is passed back to the bytecode to execute its rewriter.
Differential Revision: https://reviews.llvm.org/D89107
- Change InferTypeOpInterface::inferResultTypes to use fully qualified types matching
the ones generated by genTypeInterfaceMethods, so the redundancy can be detected.
- Move genTypeInterfaceMethods() before genOpInterfaceMethods() so that the
inferResultTypes method generated by genTypeInterfaceMethods() takes precedence
over the declaration that might be generated by genOpInterfaceMethods()
- Modified an op in the test dialect to exercise this (the modified op would fail to
generate valid C++ code due to duplicate inferResultTypes methods).
Differential Revision: https://reviews.llvm.org/D92414
The InlineAsmOp mirrors the underlying LLVM semantics with a notable
exception: the embedded `asm_string` is not allowed to define or reference
any symbol or any global variable: only the operands of the op may be read,
written, or referenced.
Attempting to define or reference any symbol or any global variable is
considered undefined behavior at this time.
The asm dialect syntax is currently specified with an integer (0 [default] for the AT&T dialect, 1 for the Intel dialect) to circumvent the ODS limitation on string enums.
Translation to LLVM is provided; it surfaces the requirement that the asm constraints string be well-formed with respect to in/out operands. No check is performed on the asm_string.
An InlineAsm instruction in LLVM is a special call operation to a function that is constructed on the fly.
It does not fit the current model of MLIR calls with symbols.
As a consequence, the current implementation constructs the function type in ModuleTranslation.cpp.
This should be refactored in the future.
The mlir-cpu-runner is augmented with the global initialization of the X86 asm parser to allow proper execution in JIT mode. Previously, only the X86 asm printer was initialized.
Differential revision: https://reviews.llvm.org/D92166
Op with mapping from ops to corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions with functions.
The mapping of operand to shape function is kept separate from the shape
functions themselves as the operation is associated to the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.
Use fully qualified names and require a name for shape fn lib ops for
now and an explicit print/parse (based around the generated one & GPU
module op ones).
This commit reverts d9da4c3e73. Fixes
missing headers (don't know how that was working locally).
Differential Revision: https://reviews.llvm.org/D91672
Op with mapping from ops to corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions with functions.
The mapping of operand to shape function is kept separate from the shape
functions themselves as the operation is associated to the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.
Use fully qualified names and require a name for shape fn lib ops for
now and an explicit print/parse (based around the generated one & GPU
module op ones).
Differential Revision: https://reviews.llvm.org/D91672
The ops are very similar to the std variants, but support async GPU execution.
gpu.alloc does not currently support an alignment attribute, and the new ops do not have
canonicalizers/folders like their std siblings do.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D91698
Use the correct interface base type name when generating attribute interfaces
with TableGen.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D92023
Attributes represent additional data about an operation and are intended to be
modifiable during the lifetime of the operation. In the dialect-specific Python
bindings, attributes are exposed as properties on the operation class. Allow
for assigning values to these properties. Also support creating new and
deleting existing attributes through the generic "attributes" property of an
operation. Any validity checking must be performed by the op verifier after the
mutation, similarly to C++. Operations are not invalidated in the process: no
dangling pointers can be created as all attributes are owned by the context and
will remain live even if they are not used in any operation.
Introduce a Python Test dialect by analogy with the Test dialect and to avoid
polluting the latter with Python-specific constructs. Use this dialect to
implement a test for the attribute access and mutation API.
Reviewed By: stellaraccident, mehdi_amini
Differential Revision: https://reviews.llvm.org/D91652
Enhance the tile+fuse logic to allow fusing a sequence of operations.
Make sure the value used to obtain tile shape is a
SubViewOp/SubTensorOp. Current logic used to get the bounds of loop
depends on the use of `getOrCreateRange` method on `SubViewOp` and
`SubTensorOp`. Make sure that the value/dim used to compute the range
is from such ops. This fix is a reasonable WAR, but a btter fix would
be to make `getOrCreateRange` method be a method of `ViewInterface`.
Differential Revision: https://reviews.llvm.org/D90991
- Fixes bug 48242 point 3 crash.
- Makes the improvements from points 1 & 2.
https://bugs.llvm.org/show_bug.cgi?id=48262
```
def RTLValueType : Type<CPred<"isRTLValueType($_self)">, "Type"> {
string cppType = "::mlir::Type";
}
```
Works now, but merely by happenstance. Parameters expects a `TypeParameter` class def or a string representing a C++ type, but doesn't enforce it.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D91939
This reverts commit f8284d21a8.
Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion."
This reverts commit 0c59f51592.
Revert "Remove unused isZero function"
This reverts commit 0f9f0a4046.
Change f8284d21 led to multiple failures in IREE compilation.
This commit starts a new pass and patterns for converting Linalg
named ops to generic ops. This enables us to leverage the flexibility
of generic ops during transformations. Right now only linalg.conv
is supported; others will be added when useful.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D91357
For intrinsics with multiple returns where one or more operands are overloaded, the overloaded type is inferred from the corresponding field of the resulting struct, instead of accessing the result directly.
As such, the hasResult parameter of LLVM_IntrOpBase (and derived classes) is replaced with numResults. The TableGen definitions for intrinsics are also updated to populate this field with the total number of results.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D91680
This allows for operations that exclusively affect symbol operations to better describe their side effects.
Differential Revision: https://reviews.llvm.org/D91581
As discussed in https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020
this CL is the start of sparse tensor compiler support in MLIR. Starting with a
"dense" kernel expressed in the Linalg dialect together with per-dimension
sparsity annotations on the tensors, the compiler automatically lowers the
kernel to sparse code using the methods described in Fredrik Kjolstad's thesis.
Many details are still TBD. For example, the sparse "bufferization" is purely
done locally since we don't have a global solution for propagating sparsity
yet. Furthermore, code to input and output the sparse tensors is missing.
Nevertheless, with some hand modifications, the generated MLIR can be
easily converted into runnable code already.
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D90994
This utility function is helpful for dialect-specific builders that need
to access the context through the location, where the location itself may
either be provided as an argument or be recovered from the implicit
location stack.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91623