The Dominance analysis currently lacks a utility function to find the nearest common dominator of two given blocks. This is required for a wide variety of control-flow analyses and transformations. This commit adds this function and moves the getNode function from DominanceInfo to DominanceInfoBase, as it also works for post-dominators.
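A rough usage sketch follows (the helper and its use case are illustrative, not part of the patch); since the utility lives on DominanceInfoBase, the same call is also available for post-dominance:
```c++
#include "mlir/IR/Block.h"
#include "mlir/IR/Dominance.h"

// Pick a block that dominates both inputs, e.g. as an insertion point for a
// value hoisted above both of them.
static mlir::Block *findHoistBlock(mlir::DominanceInfo &domInfo,
                                   mlir::Block *a, mlir::Block *b) {
  // Assumed signature of the new utility; returns a block dominating both.
  return domInfo.findNearestCommonDominator(a, b);
}
```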
Differential Revision: https://reviews.llvm.org/D75507
Summary:
This revisions performs several cleanups to the generated dialect documentation:
* Standardizes format of attributes/operands/results sections
* Splits out operation/type/dialect documentation generation to allow for composing generated and hand-written documentation
* Adds a section for the declarative assembly syntax and successors
* General cleanup
Differential Revision: https://reviews.llvm.org/D76573
Summary:
This removes the static pass registration, and also cleans up some lingering technical debt.
Differential Revision: https://reviews.llvm.org/D76554
Summary:
This file only contains references to test passes, and was never removed when the test passes were moved to the test/ directory.
Differential Revision: https://reviews.llvm.org/D76553
Summary:
Change the AffineOps Dialect structure to better group both IR and Transforms. This includes extracting transforms directly related to AffineOps. Also move AffineOps to Affine.
Differential Revision: https://reviews.llvm.org/D76161
The Vector Dialect [document](https://mlir.llvm.org/docs/Dialects/Vector/) discusses the vector abstractions that MLIR supports and the various tradeoffs involved.
One of the layers that is missing in OSS at the moment is the Hardware Vector Ops (HWV) level.
This revision proposes adding a new Dialect/Targets/AVX512 dialect that would directly target AVX512-specific intrinsics.
At the moment, we rely too much on LLVM’s peephole optimizer to do a good job with small insertelement/extractelement/shufflevector sequences. In the future, when possible, generic abstractions such as VP intrinsics should be preferred.
The revision will allow trading off HW-specific vs generic abstractions in MLIR.
Differential Revision: https://reviews.llvm.org/D75987
Summary:
This adds support in RewriterGen for calling into the new `PatternRewriter::notifyMatchFailure` hook. This lets derived pattern rewriters display this information to users; an example from DialectConversion is shown below:
```
Legalizing operation : 'std.and'(0x60e0000066a0) {
* Fold {
} -> FAILURE : unable to fold
* Pattern : 'std.and -> (spv.BitwiseAnd)' {
** Failure : operand 0 of op 'std.and' failed to satisfy constraint: '8/16/32/64-bit integer or vector of 8/16/32/64-bit integer values of length 2/3/4'
} -> FAILURE : pattern failed to match
* Pattern : 'std.and -> (spv.LogicalAnd)' {
** Failure : operand 0 of op 'std.and' failed to satisfy constraint: 'bool or vector of bool values of length 2/3/4'
} -> FAILURE : pattern failed to match
} -> FAILURE : no matched legalization pattern
```
Differential Revision: https://reviews.llvm.org/D76335
A memref argument is converted into a pointer-to-struct argument
of type `{T*, T*, i64, i64[N], i64[N]}*` in the wrapper function,
where T is the converted element type and N is the memref rank.
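As a rough illustration (struct and field names are invented for this sketch; only the layout follows the type above), the C/C++-side view of that wrapper argument for a rank-2 memref of f32 could look like:
```c++
#include <cstdint>

// Mirrors {T*, T*, i64, i64[N], i64[N]} for T = float, N = 2.
struct MemRefDescriptor2DF32 {
  float *allocated;   // pointer returned by the allocator
  float *aligned;     // pointer actually used for element access
  int64_t offset;     // offset into the aligned pointer, in elements
  int64_t sizes[2];   // size of each dimension
  int64_t strides[2]; // stride of each dimension, in elements
};

// The wrapper function then receives a MemRefDescriptor2DF32 * argument.
```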
Differential Revision: https://reviews.llvm.org/D76059
Summary: This adds bitfields that map to the dialect attribute verifier hooks. This also moves over the Test dialect to have its declaration generated.
Differential Revision: https://reviews.llvm.org/D76254
Summary: This generates the class declarations for dialects using the existing 'Dialect' tablegen classes.
Differential Revision: https://reviews.llvm.org/D76185
Setting MLIR_TABLEGEN_EXE would prevent building the native tool which is used in cross-compiling
Differential Revision: https://reviews.llvm.org/D75299
Previously, extension and capability requirements were returned as
SmallVector<SmallVector>. That is an anti-pattern; this commit improves
things a bit by returning SmallVector<ArrayRef> instead. This is possible because
the inner sequence is always known statically (from the spec),
so we can use a static constant array for it and get an ArrayRef.
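A minimal sketch of the pattern (the enum and query function below are placeholders, not the actual SPIR-V dialect API):
```c++
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

enum class Extension { SPV_KHR_placeholder };

// The inner sequence is statically known, so it can live in a static constant
// array and be handed out as an ArrayRef instead of a nested SmallVector.
llvm::SmallVector<llvm::ArrayRef<Extension>, 1>
getRequiredExtensions(bool needsExtension) {
  llvm::SmallVector<llvm::ArrayRef<Extension>, 1> requirements;
  if (needsExtension) {
    static const Extension exts[] = {Extension::SPV_KHR_placeholder};
    requirements.push_back(llvm::ArrayRef<Extension>(exts));
  }
  return requirements;
}
```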
Differential Revision: https://reviews.llvm.org/D75874
This commit changes the definition of spv.module to use the #spv.vce
attribute for specifying the (version, capabilities, extensions) triple
so that we can have a better API and custom assembly form. Now that
we have proper modelling of the triple, (de)serialization is wired up
to use it.
With the new UpdateVCEPass, we don't need to manually specify the
required extensions and capabilities anymore when creating a spv.module.
One just needs to call UpdateVCEPass before serialization to get the
needed version/extensions/capabilities.
Differential Revision: https://reviews.llvm.org/D75872
This was a previous experiment that didn't pan out and needs to be
replaced. Given that it has no current use or tests, delete it instead;
a new version can be started fresh.
HasNoSideEffect can now be implemented using the MemoryEffectInterface, removing the need to check multiple things for the same information. This also removes an easy foot-gun for users as 'Operation::hasNoSideEffect' would ignore operations that dynamically, or recursively, have no side effects. This also leads to an immediate improvement in some of the existing users, such as DCE, now that they have access to more information.
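A hedged sketch of how a user such as DCE can query this through the interface; the header path and helper shown here are assumptions and may differ slightly from the actual revision:
```c++
#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/Operation.h"
#include "mlir/Interfaces/SideEffectInterfaces.h"

// Returns true if 'op' implements the memory effect interface and reports no
// effect instances at all; unknown ops are conservatively treated as effectful.
static bool hasNoMemoryEffects(mlir::Operation *op) {
  auto iface = llvm::dyn_cast<mlir::MemoryEffectOpInterface>(op);
  if (!iface)
    return false;
  llvm::SmallVector<mlir::MemoryEffects::EffectInstance, 4> effects;
  iface.getEffects(effects);
  return effects.empty();
}
```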
Differential Revision: https://reviews.llvm.org/D76036
The current mechanism for identifying constants is a bit hacky and extremely ad hoc: we explicitly check for 1 result, 0 operands, no side effects, and always-foldable, and then assume that the operation is a constant. Adding a trait adds structure to this, and makes checking for a constant much more efficient, as we can guarantee that all of these things have already been verified.
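A small sketch of the intended usage; the trait name (ConstantLike) is an assumption for illustration rather than a name taken from the commit text:
```c++
#include "mlir/IR/OpDefinition.h"

// With a dedicated trait, recognizing a constant is a single pre-verified
// trait query instead of the ad-hoc 1-result/0-operand/foldable check.
static bool isConstantLike(mlir::Operation *op) {
  return op->hasTrait<mlir::OpTrait::ConstantLike>();
}
```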
Differential Revision: https://reviews.llvm.org/D76020
These three libs were recently added to the `mlir-cpu-runner`'s
`CMakeLists.txt` file. This prevents the runner from compiling on other
platforms (e.g. Power in my case). Native codegen is pulled in
by the ExecutionEngine library, so this is redundant in any case.
Differential Revision: https://reviews.llvm.org/D75916
* Adds GpuLaunchFuncToVulkanLaunchFunc conversion pass.
* Moves a serialization of the `spirv::Module` from LaunchFuncToVulkanCalls pass to newly created pass.
* Updates LaunchFuncToVulkanCalls instrumentation pass, adds `initVulkan` and `deinitVulkan` runtime calls.
* Adds a `bindResource` call to bind a specific resource by the given descriptor set and descriptor binding.
* Eliminates static construction and destruction of `VulkanRuntimeManager`.
Differential Revision: https://reviews.llvm.org/D75192
Summary:
New classes are added to ODS to enable specifying additional information on the arguments and results of an operation. These classes, `Arg` and `Res`, allow for adding a description and a set of 'decorators' along with the constraint. This enables specifying the side effects of an operation directly on the arguments and results themselves.
Example:
```
def LoadOp : Std_Op<"load"> {
  let arguments = (ins Arg<AnyMemRef, "the MemRef to load from",
                           [MemRead]>:$memref,
                       Variadic<Index>:$indices);
}
```
Differential Revision: https://reviews.llvm.org/D74440
This revision introduces the infrastructure for defining side-effects and attaching them to operations. This infrastructure allows for defining different types of side effects that don't interact with each other but use the same internal mechanisms. At the base of this is an interface that allows operations to specify the different effect instances that are exhibited by a specific operation instance. An effect instance is composed of the following:
* Effect: The specific effect being applied.
For memory-related effects this may be reading from memory, storing to memory, etc.
* Value: A specific value (operand, result, or region argument) that the effect pertains to.
* Resource: This is a global entity that represents the domain within which the effect is being applied.
MLIR serves many different abstractions, which cover many different domains. Simple effects may have very different contexts, for example writing to an in-memory buffer vs. a database. This revision uses this infrastructure to define a set of initial MemoryEffects. These are effects that generally correspond to memory of some kind: Allocate, Free, Read, Write.
This set of memory effects will be used in follow-up revisions to generalize various parts of the compiler, and make others more powerful (e.g. DCE).
This infrastructure was originally proposed here:
https://groups.google.com/a/tensorflow.org/g/mlir/c/v2mNl4vFCUM
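As a rough sketch of the shape of the API (the helper, includes, and exact EffectInstance signature are assumptions), an op implementing the interface would populate effect instances along these lines:
```c++
#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/Value.h"
#include "mlir/Interfaces/SideEffectInterfaces.h"

// Record that an op reads the given memref value on the default resource;
// an op implementing the interface would do this inside its getEffects().
static void markReadEffect(
    llvm::SmallVectorImpl<mlir::MemoryEffects::EffectInstance> &effects,
    mlir::Value memref) {
  effects.emplace_back(mlir::MemoryEffects::Read::get(), memref);
}
```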
Differential Revision: https://reviews.llvm.org/D74439
Putting this up mainly for discussion on
how this should be done. I am interested in MLIR from
the Julia side and we currently have a strong preference
for dynamically linking against the LLVM shared library,
and would like to have an MLIR shared library.
This patch adds a new cmake function add_mlir_library()
which accumulates a list of targets to be compiled into
libMLIR.so. Note that not all libraries make sense to
be compiled into libMLIR.so. In particular, we want
to avoid libraries which primarily exist to support
certain tools (such as mlir-opt and mlir-cpu-runner).
Note that the resulting libMLIR.so depends on LLVM, but
does not contain any LLVM components. As a result, it
is necessary to link with libLLVM.so to avoid linkage
errors. So, libMLIR.so requires LLVM_BUILD_LLVM_DYLIB=on.
FYI, currently it appears that LLVM_LINK_LLVM_DYLIB is broken
because mlir-tblgen is linked against both libLLVM.so and
independent LLVM components.
A previous version of this patch broke dependencies on TableGen
targets. This appears to be because it compiled all
libraries to OBJECT libraries (probably because cmake
is generating different target names). Avoiding object
libraries results in correct dependencies.
(updated by Stephen Neuendorffer)
Differential Revision: https://reviews.llvm.org/D73130
add_llvm_library and add_llvm_executable may need to create new targets with
appropriate dependencies. As a result, it is not sufficient in some
configurations (namely LLVM_BUILD_LLVM_DYLIB=on) to only call
add_dependencies(). Instead, the explicit TableGen dependencies must
be passed to add_llvm_library() or add_llvm_executable() using the DEPENDS
keyword.
Differential Revision: https://reviews.llvm.org/D74930
CMake allows calling target_link_libraries() without a keyword,
but this usage is not preferred when also called with a keyword,
and has surprising behavior. This patch explicitly specifies a
keyword when using target_link_libraries().
Differential Revision: https://reviews.llvm.org/D75725
Summary:
This revision removes all of the functionality related to successor operands on the core Operation class. This greatly simplifies a lot of handling of operands, as well as successors. For example, DialectConversion no longer needs a special "matchAndRewrite" for branching terminator operations. (Note: the existing method was also broken for operations with variadic successors!)
This also enables terminator operations to define their own relationships with successor arguments, instead of the hardcoded "pass-through" behavior that exists today.
Differential Revision: https://reviews.llvm.org/D75318
This greatly simplifies the requirements for builders using this mechanism for managing variadic operands.
Differential Revision: https://reviews.llvm.org/D75317
This attribute details the segment sizes for operand groups within the operation. This revision adds support for automatically populating this attribute in the declarative parser.
Differential Revision: https://reviews.llvm.org/D75315
This interface contains the necessary components to provide the same builtin behavior that terminators have. This will be used in future revisions to remove many of the hardcoded constraints placed on successors and successor operands. The interface initially contains three methods:
```c++
// Return a set of values corresponding to the operands for successor 'index', or None if the operands do not correspond to materialized values.
Optional<OperandRange> getSuccessorOperands(unsigned index);
// Return true if this terminator can have its successor operands erased.
bool canEraseSuccessorOperand();
// Erase the operand of a successor. This is only valid to call if 'canEraseSuccessorOperand' returns true.
void eraseSuccessorOperand(unsigned succIdx, unsigned opIdx);
```
Differential Revision: https://reviews.llvm.org/D75314
This allows for simplifying OpDefGen, as well as providing specialized accessors for the different successor counts. This mirrors the existing traits for operands and results.
Differential Revision: https://reviews.llvm.org/D75313
This commit adds timestamp query commands in the Vulkan runner's
compute pipeline to gain insights into how long it takes to
run the compute shader. This commit also adds CPU-side timing
for vkQueueSubmit and vkQueueWaitIdle.
Differential Revision: https://reviews.llvm.org/D75531
For ODS-generated operations, enable querying whether there is a derived
attribute with a given name.
Rollforward of commit 5aa57c2 without using llvm::is_contained.
This reverts commit 5aa57c2812.
The source code generated by this ODS change does not compile,
as it passes too few arguments to llvm::is_contained.
Putting this up mainly for discussion on
how this should be done. I am interested in MLIR from
the Julia side and we currently have a strong preference
for dynamically linking against the LLVM shared library,
and would like to have an MLIR shared library.
This patch adds a new cmake function add_mlir_library()
which accumulates a list of targets to be compiled into
libMLIR.so. Note that not all libraries make sense to
be compiled into libMLIR.so. In particular, we want
to avoid libraries which primarily exist to support
certain tools (such as mlir-opt and mlir-cpu-runner).
Note that the resulting libMLIR.so depends on LLVM, but
does not contain any LLVM components. As a result, it
is necessary to link with libLLVM.so to avoid linkage
errors. So, libMLIR.so requires LLVM_BUILD_LLVM_DYLIB=on.
FYI, currently it appears that LLVM_LINK_LLVM_DYLIB is broken
because mlir-tblgen is linked against both libLLVM.so and
independent LLVM components.
A previous version of this patch broke dependencies on TableGen
targets. This appears to be because it compiled all
libraries to OBJECT libraries (probably because cmake
is generating different target names). Avoiding object
libraries results in correct dependencies.
(updated by Stephen Neuendorffer)
Differential Revision: https://reviews.llvm.org/D73130
add_llvm_library and add_llvm_executable may need to create new targets with
appropriate dependencies. As a result, it is not sufficient in some
configurations (namely LLVM_BUILD_LLVM_DYLIB=on) to only call
add_dependencies(). Instead, the explicit TableGen dependencies must
be passed to add_llvm_library() or add_llvm_executable() using the DEPENDS
keyword.
Differential Revision: https://reviews.llvm.org/D74930
When compiling libLLVM.so, add_llvm_library() manipulates the link libraries
being used. This means that when using add_llvm_library(), we need to pass
the list of libraries to be linked (using the LINK_LIBS keyword) instead of
using the standard target_link_libraries call. This is preparation for
properly dealing with creating libMLIR.so as well.
Differential Revision: https://reviews.llvm.org/D74864
Putting this up mainly for discussion on
how this should be done. I am interested in MLIR from
the Julia side and we currently have a strong preference
for dynamically linking against the LLVM shared library,
and would like to have an MLIR shared library.
This patch adds a new cmake function add_mlir_library()
which accumulates a list of targets to be compiled into
libMLIR.so. Note that not all libraries make sense to
be compiled into libMLIR.so. In particular, we want
to avoid libraries which primarily exist to support
certain tools (such as mlir-opt and mlir-cpu-runner).
Note that the resulting libMLIR.so depends on LLVM, but
does not contain any LLVM components. As a result, it
is necessary to link with libLLVM.so to avoid linkage
errors. So, libMLIR.so requires LLVM_BUILD_LLVM_DYLIB=on.
FYI, currently it appears that LLVM_LINK_LLVM_DYLIB is broken
because mlir-tblgen is linked against both libLLVM.so and
independent LLVM components.
(updated by Stephen Neuendorffer)
Differential Revision: https://reviews.llvm.org/D73130
add_llvm_library and add_llvm_executable may need to create new targets with
appropriate dependencies. As a result, it is not sufficient in some
configurations (namely LLVM_BUILD_LLVM_DYLIB=on) to only call
add_dependencies(). Instead, the explicit TableGen dependencies must
be passed to add_llvm_library() or add_llvm_executable() using the DEPENDS
keyword.
Differential Revision: https://reviews.llvm.org/D74930
When compiling libLLVM.so, add_llvm_library() manipulates the link libraries
being used. This means that when using add_llvm_library(), we need to pass
the list of libraries to be linked (using the LINK_LIBS keyword) instead of
using the standard target_link_libraries call. This is preparation for
properly dealing with creating libMLIR.so as well.
Differential Revision: https://reviews.llvm.org/D74864
Previously, lib/Support/JitRunner.cpp was essentially a complete application,
performing all library initialization, along with dealing with command line
arguments and actually running passes. This differs significantly from
mlir-opt and required a dependency on InitAllDialects.h. This dependency
is significant, since it requires a dependency on all of the resulting
libraries.
This patch refactors the code so that tools are responsible for library
initialization, including registering all dialects, prior to calling
JitRunnerMain. This places the concern about which dialects to support
with the end application, enabling more extensibility at the cost of
a small amount of code duplication between tools. It also fixes
BUILD_SHARED_LIBS=on.
Differential Revision: https://reviews.llvm.org/D75272
Collect a list of conversion libraries in cmake, so we don't have to
list these explicitly in most binaries.
Differential Revision: https://reviews.llvm.org/D75222
Instead of creating extra libraries we don't really need, collect a
list of all dialects and use that instead.
Differential Revision: https://reviews.llvm.org/D75221
Display the list of dialects known to mlir-opt. This is useful
for ensuring that linkage has happened correctly, for instance.
Differential Revision: https://reviews.llvm.org/D74865
Originally, the intrinsics generator for the LLVM dialect produced
customized code fragments for the translation of MLIR operations to LLVM IR
intrinsics. LLVM dialect ODS now provides a generalized version of the
translation code, parameterizable with the properties of the operation.
Generate ODS that uses this version of the translation code instead of
generating a new version of it for each intrinsic.
Differential Revision: https://reviews.llvm.org/D74893
Summary:
The mapper assigns annotations to loop.parallel operations that
are compatible with the loop to gpu mapping pass. The outermost
loop uses the grid dimensions, followed by block dimensions. All
remaining loops are mapped to sequential loops.
Differential Revision: https://reviews.llvm.org/D74963
Previously C++ test passes for SPIR-V were put under
test/Dialect/SPIRV. Move them to test/lib/Dialect/SPIRV
to create a better structure.
Also fixed one of the test passes to use the new
PassRegistration mechanism.
Differential Revision: https://reviews.llvm.org/D75066
This revision adds support for formatting successor variables in a similar way to operands, attributes, etc.
Differential Revision: https://reviews.llvm.org/D74789
This revision adds support in ODS for specifying the successors of an operation. Successors are specified via the `successors` list:
```
let successors = (successor AnySuccessor:$target, AnySuccessor:$otherTarget);
```
Differential Revision: https://reviews.llvm.org/D74783
This matches the '(print|parse)OptionalAttrDictWithKeyword' functionality provided by the assembly parser/printer.
Differential Revision: https://reviews.llvm.org/D74682
When operations have optional attributes, or optional operands (i.e. empty variadic operands), the assembly format often has an optional section to represent these arguments. This revision adds basic support for defining an "optional group" in the assembly format to support this. An optional group is defined by wrapping a set of elements in `()` followed by `?` and requires the following:
* The first element of the group must be either a literal or an operand argument.
- This is because the first element must be optionally parsable.
* There must be exactly one argument variable within the group that is marked as the anchor of the group. The anchor is the element whose presence controls whether the group should be printed/parsed. An element is marked as the anchor by adding a trailing `^`.
* The group must only contain literals, variables, and type directives.
- Any attribute variables may be used, but only optional attributes can be marked as the anchor.
- Only variadic, i.e. optional, operand arguments can be used.
- The elements of a type directive must be defined within the same optional group.
An example of this can be seen with the assembly format for ReturnOp, which has a variadic number of operands.
```
def ReturnOp : ... {
  let arguments = (ins Variadic<AnyType>:$operands);

  // We only print the operands+types if there are a non-zero number
  // of operands.
  let assemblyFormat = "attr-dict ($operands^ `:` type($operands))?";
}
```
Differential Revision: https://reviews.llvm.org/D74681
This allows for injecting type constraints that are not direct 1-1 mappings, for example when one type is equal to the element type of another. This allows for moving over several more parsers to the declarative form.
Differential Revision: https://reviews.llvm.org/D74648
Summary:
This trait takes three arguments: lhs, rhs, transformer. It verifies that the type of 'rhs' matches the type of 'lhs' when the given 'transformer' is applied to 'lhs'. This allows for adding constraints like: "the type of 'a' must match the element type of 'b'". A followup revision will add support in the declarative parser for using these equality constraints to port more c++ parsers to the declarative form.
Differential Revision: https://reviews.llvm.org/D74647
In some dialects, attributes may have default values that may be
determined only after shape inference. For example, attributes that
are dependent on the rank of the input cannot be assigned a default
value until the rank of the tensor is inferred.
While we can set attributes without explicit setters, referring to
the attributes via accessors instead of having to use the string
interface is better for compile-time verification.
The proposed patch adds one method per operation attribute that lets us
set its value. The code is a very small modification of the existing
getter methods.
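A minimal sketch of the resulting usage; the op and its 'axis' attribute are invented for this example and are not part of the patch:
```c++
#include "mlir/IR/Builders.h"

// 'OpTy' stands in for an ODS-generated op with an integer 'axis' attribute;
// both names are hypothetical and used only for this sketch.
template <typename OpTy>
void setInferredAxis(OpTy op, mlir::OpBuilder &builder, int64_t axis) {
  // Previously only possible via the string interface:
  //   op.setAttr("axis", builder.getI64IntegerAttr(axis));
  op.setAxisAttr(builder.getI64IntegerAttr(axis));
}
```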
Differential Revision: https://reviews.llvm.org/D74143
Add an initial version of the mlir-vulkan-runner execution driver:
a command-line utility that executes an MLIR file on Vulkan by
translating the MLIR GPU module to SPIR-V and the host part to LLVM IR before
JIT-compiling and executing the latter.
Differential Revision: https://reviews.llvm.org/D72696
This patch extends the affine data copy optimization utility with an
optional memref filter argument. When the memref filter is used, data
copy optimization will only generate copies for that memref.
Note: this patch is just porting the memref filter feature from Uday's
'hop' branch: https://github.com/bondhugula/llvm-project/tree/hop.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D74342
Summary: This revision adds support to the declarative parser for formatting enum attributes in the symbolized form. It uses this new functionality to port several of the SPIRV parsers over to the declarative form.
Differential Revision: https://reviews.llvm.org/D74525
Implement a pass to convert the gpu.launch_func op into a sequence of
Vulkan runtime calls. The Vulkan runtime API surface is huge, so currently we
don't expose a separate external function in IR for each of its calls; instead we
expose a few external functions to wrapper libraries which manage the
Vulkan runtime.
Differential Revision: https://reviews.llvm.org/D74549
Summary:
This was broken recently when moving from dialect registration via
static initializers to explicit initialization.
Differential Revision: https://reviews.llvm.org/D74480
In the previous state, we were relying on forcing the linker to include
all libraries in the final binary and on global initializers to self-register
every piece of the system. This change helps move away from this model and
allows users to compose pieces more freely. The current change only "fixes"
the dialect registration and avoids relying on "whole link" for the passes.
The translation still relies on the global registry, and some refactoring
is needed to make this all more convenient.
Differential Revision: https://reviews.llvm.org/D74461
* Rename CMake target MLIROptMain to MLIROptLib:
The target provides the main library
* Rename CMake target MLIRMlirOptLib to MLIRMlirOptMain:
The target provides the main() entry function
At the moment, the Bazel configuration of TensorFlow maps the target
MlirOptLib to "lib/Support/MlirOptMain.cpp" and MlirOptMain to
"tools/mlir-opt/mlir-opt.cpp". This is the other way around in the CMake
configuration. As discussed in the context of the pull request
https://github.com/tensorflow/tensorflow/pull/36301, it seems useful to
revise the naming in the MLIR repo.
Differential Revision: https://reviews.llvm.org/D73778
The existing (default) calling convention for memrefs in standard-to-LLVM
conversion was motivated by interfacing with LLVM IR produced from C sources.
In particular, it passes a pointer to the memref descriptor structure when
calling the function. Therefore, the descriptor is allocated on stack before
the call. This convention leads to several problems. PR44644 indicates a
problem with stack exhaustion when calling functions with memref-typed
arguments in a loop. Allocating outside of the loop may lead to concurrent
access problems in case the loop is parallel. When targeting GPUs, the contents
of the stack-allocated memory for the descriptor (passed by pointer) needs to
be explicitly copied to the device. Using an aggregate type makes it impossible
to attach pointer-specific argument attributes pertaining to alignment and
aliasing in the LLVM dialect.
Change the default calling convention for memrefs in standard-to-LLVM
conversion to transform a memref into a list of arguments, each of primitive
type, that are comprised in the memref descriptor. This avoids stack allocation
for ranked memrefs (and thus stack exhaustion and potential concurrent access
problems) and simplifies the device function invocation on GPUs.
Provide an option in the standard-to-LLVM conversion to generate auxiliary
wrapper functions with the same interface as the previous calling convention,
compatible with LLVM IR produced from C sources. These auxiliary functions
pack the individual values into a descriptor structure or unpack it. They also
handle descriptor stack allocation if necessary, serving as an allocation
scope: the memory reserved by `alloca` will be freed on exiting the auxiliary
function.
The effect of this change on purely MLIR-generated LLVM IR is minimal. When
interfacing MLIR-generated LLVM IR with C-generated LLVM IR, the integration
only needs to require auxiliary functions and change the function name to call
the wrapper function instead of the original function.
This also opens the door to forwarding aliasing and alignment information from
memrefs to LLVM IR pointers in the standard-to-LLVM conversion.
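For illustration only, the C-level view of the two conventions for a function taking a rank-1 memref of f32 might look as follows (names and the exact expansion are assumptions based on the descriptor layout, not text from the patch):
```c++
#include <cstdint>

struct MemRefDescriptor1DF32; // {float*, float*, i64, i64[1], i64[1]}

// Previous default (and the new auxiliary wrapper): pass a pointer to the
// stack-allocated descriptor.
void foo_wrapper(MemRefDescriptor1DF32 *desc);

// New default: pass the descriptor fields as individual primitive-typed
// arguments, avoiding the stack allocation at every call site.
void foo(float *allocated, float *aligned, int64_t offset, int64_t size0,
         int64_t stride0);
```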
This revision adds support in the declarative assembly form for printing attributes with buildable types without the type, and moves several more parsers over to the declarative form.
Differential Revision: https://reviews.llvm.org/D74276
mlir-opt needs to link against MLIRLoopAnalysis.
This shouldn't be needed, but the MLIR "hack" for
"whole-archive" linking is not compatible with
CMake's transitive dependency management.
Differential Revision: https://reviews.llvm.org/D74097
Summary:
This revision adds basic support for emitting line table information when exporting to LLVMIR. We don't yet have a story for supporting all of the LLVM debug metadata, so this revision stubs out some features (like subprograms) to enable emitting line tables.
Differential Revision: https://reviews.llvm.org/D73934
The recent refactoring of build files broke building with the MLIR CUDA
integration enabled. This fixes it by adding some additional
dependencies to mlir-opt.
Differential Revision: https://reviews.llvm.org/D74041
This binplaces `mlir-translate`, `mlir-cuda-runner`, and `mlir-cpu-runner` when building the CMake install target.
Differential Revision: https://reviews.llvm.org/D73986
Summary:
This patch is a step towards enabling BUILD_SHARED_LIBS=on, which
builds most libraries as DLLs instead of statically linked libraries.
The main effect of this is that incremental build times are greatly
reduced, since usually only one library need be relinked in response
to isolated code changes.
The bulk of this patch is fixing incorrect usage of cmake, where library
dependencies are listed under add_dependencies rather than under
target_link_libraries or under the LINK_LIBS tag. Correct usage should be
like this:
add_dependencies(MLIRfoo MLIRfooIncGen)
target_link_libraries(MLIRfoo MLIRlib1 MLIRlib2)
A separate issue is that in cmake, dependencies between static libraries
are automatically included in dependencies. In the above example, if MLIRlib1
depends on MLIRlib2, then it is sufficient to have only MLIRlib1 in the
target_link_libraries. When compiling with shared libraries, it is necessary
to have both MLIRlib1 and MLIRlib2 specified if MLIRfoo uses symbols from both.
Reviewers: mravishankar, antiagainst, nicolasvasilache, vchuravy, inouehrs, mehdi_amini, jdoerfert
Reviewed By: nicolasvasilache, mehdi_amini
Subscribers: Joonsoo, merge_guards_bot, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, csigg, arpith-jacob, mgester, lucyrfox, herhut, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73653
This revision makes sure that errors emitted outside of testing are treated as fatal errors. This avoids the current silent failures that occur when the format is invalid.
Summary:
Currently BuildableType is assumed to be preceded by a builder. This prevents constructing types that don't have a callable 'get' method with the builder. This revision reworks the format to be like attribute builders, i.e. by accepting $_builder within the format itself.
Differential Revision: https://reviews.llvm.org/D73736
Summary: This revision adds support for accepting a few type constraints, e.g. AllTypesMatch, when inferring types for operands and results. This is used to remove the c++ parsers for several additional operations.
Differential Revision: https://reviews.llvm.org/D73735
This commit adds a pattern to lower linalg.generic for reduction
to spv.GroupNonUniform* ops. Right now this only supports integer
reduction on 1-D input memref. Shader entry point ABI is queried
to make sure that the input memref's shape matches the local
workgroup's invocation configuration. This makes sure that the
workload fits in one local workgroup so that we can leverage
SPIR-V group non-uniform operations.
linalg.generic is a structured op that preserves the right level
of information. It is easier to recognize reduction at this level
than performing analysis on loops.
This commit also exposes `getElementPtr` in SPIRVLowering.h given
that it's a generally useful utility function.
Differential Revision: https://reviews.llvm.org/D73437
Summary:
MLIR materializes various enumeration-based LLVM IR operands as enumeration
attributes using ODS. This requires bidirectional conversion between different
but very similar enums, currently hardcoded. Extend the ODS modeling of
LLVM-specific enumeration attributes to include the name of the corresponding
enum in the LLVM C++ API as well as the names of specific enumerants. Use this
new information to automatically generate the conversion functions between enum
attributes and LLVM API enums in the two-way conversion between the LLVM
dialect and LLVM IR proper.
Differential Revision: https://reviews.llvm.org/D73468
Summary:
This revision adds support, and testing, for generating the parser and printer from the declarative operation format.
Differential Revision: https://reviews.llvm.org/D73406
Summary:
This is the first revision in a series that adds support for declaratively specifying the asm format of an operation. This revision
focuses solely on parsing the format. Future revisions will add support for generating the proper parser/printer, as well as
transitioning the syntax definition of many existing operations.
This was originally proposed here:
https://llvm.discourse.group/t/rfc-declarative-op-assembly-format/340
Differential Revision: https://reviews.llvm.org/D73405
Summary:
In some cases, one may want to use different names for the C++ symbol of an
enumerand than for its string representation. In particular, in the LLVM dialect,
for, e.g., Linkage, we would like to preserve the same enumerand names as the LLVM
API and the same textual IR form as LLVM IR, yet the two are different
(CamelCase vs. snake_case, with additional limitations on not being a C++
keyword).
Modify EnumAttrCaseInfo in OpBase.td to include both the integer value and its
string representation. By default, this representation is the same as the
C++ symbol name. Introduce a new IntStrAttrCaseBase that allows one to use different
names. Exercise it for LLVM Dialect Linkage attribute. Other attributes will
follow as separate changes.
Differential Revision: https://reviews.llvm.org/D73362
Summary:
Barrier is a simple operation that takes no arguments and returns
nothing, but implies a side effect (synchronization of all threads).
Reviewers: jdoerfert
Subscribers: mgorny, guansong, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72400
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.
This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.
This doesn't actually modify StringRef yet; I'll do that in a follow-up.
Summary:
LLVMIRIntrinsicGen is using LLVM_Op as the base class for intrinsics.
This works for LLVM intrinsics in the LLVM Dialect, but when we are
trying to convert custom intrinsics that originate from a custom
LLVM dialect (like NVVM or ROCDL) these usually have a different
"cppNamespace" that needs to be applied to these dialect.
These dialect specific characteristics (like "cppNamespace")
are typically organized by creating a custom op (like NVVM_Op or
ROCDL_Op) that passes the correct dialect to the LLVM_OpBase class.
It seems natural to allow LLVMIRIntrinsicGen to take that into
consideration when generating the conversion code from one of these
dialects to a set of target-specific intrinsics.
Reviewers: rriddle, andydavis1, antiagainst, nicolasvasilache, ftynse
Subscribers: jdoerfert, mehdi_amini, jpienaar, burmako, shauheen, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73233
Summary:
llvm::to_vector() accepts a Range value and not the pair of arguments
we are currently passing. Also we probably want the lowered LLVM
values in the vector, while operand_begin()/operand_end() on MLIR ops
returns MLIR types. lookupValues() seems the correct way to collect
such values.
Reviewers: rriddle, andydavis1, antiagainst, nicolasvasilache, ftynse
Subscribers: jdoerfert, mehdi_amini, jpienaar, burmako, shauheen, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73137
Summary:
Add a method in ODS to specify verification for operations implementing an
OpInterface. Use this with infer type op interface to verify that the
inferred type matches the return type and remove special case in
TestPatterns.
This could also have been achieved by using OpInterfaceMethod, but verify
seems pretty common, and it is not an arbitrary method that just happened
to be named verifyTrait, so having it be defined in a special way seems
appropriate and better documenting.
Differential Revision: https://reviews.llvm.org/D73122
Summary:
If an intrinsic has overloadable types like llvm_anyint_ty or
llvm_anyfloat_ty, then we need to pass getDeclaration() a list
of the types that are "undefined", essentially concretizing them.
This patch adds support for deriving such types from the MLIR op
that has been matched.
Reviewers: andydavis1, ftynse, nicolasvasilache, antiagainst
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72974
For the generated builder taking in unwrapped attribute values,
if the argument is a string, we should avoid wrapping it in quotes;
otherwise we are always setting the string attribute to contain
the string argument's name. The quotes come from StringAttr's
`constBuilderCall`, which is reasonable for string literals, but
not for function arguments containing strings.
Differential Revision: https://reviews.llvm.org/D72977
Summary:
This is based on the use of code constantly checking for an attribute on
a module; this change instead represents the distinct operation with a different
op, which can then be used to provide better filtering.
Reverts "Revert "[mlir] Create a gpu.module operation for the GPU Dialect.""
This reverts commit ac446302ca4145cdc89f377c0c364c29ee303be5 after
fixing internal Google issues.
This additionally updates ROCDL lowering to use the new gpu.module.
Reviewers: herhut, mravishankar, antiagainst, nicolasvasilache
Subscribers: jholewinski, mgorny, mehdi_amini, jpienaar, burmako, shauheen, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits, mravishankar, rriddle, antiagainst, bkramer
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72921
Summary:
Generalize broadcastable trait to variadic operands. Update the
documentation that still talked about element type as part of
broadcastable trait (that bug was already fixed). Also rename
Broadcastable to ResultBroadcastableShape to be more explicit that the
trait affects the result shape (it is possible for an op to allow
broadcastable operands but not have a result shape that is broadcast
compatible with the operands).
Doing some intermediate work to have getBroadcastedType take an optional
elementType as input and use that if specified, instead of the common
element type of type1 and type2 in this function.
Differential Revision: https://reviews.llvm.org/D72559
Introduce a new generator for MLIR tablegen driver that consumes LLVM IR
intrinsic definitions and produces MLIR ODS definitions. This is useful to
bulk-generate MLIR operations equivalent to existing LLVM IR intrinsics, such
as additional arithmetic instructions or NVVM.
A test exercising the generation is also added. It reads the main LLVM
intrinsics file and produces ODS to make sure the TableGen model remains in
sync with what is used in LLVM.
Differential Revision: https://reviews.llvm.org/D72926
This makes the local variable `implies` have the correct
type to satisfy ArrayRef's constructor:
/*implicit*/ constexpr ArrayRef(const T (&Arr)[N])
Hopefully this should please GCC 5.
Differential Revision: https://reviews.llvm.org/D72924
In SPIR-V, when a new version is introduced, it is possible that some
existing extensions will be incorporated into it so that they become
implicitly declared when targeting the new version. This affects
conversion target specification because we need to take this into
account when allowing what extensions to use.
A capability may also imply some other capabilities;
for example, the `Shader` capability implies the `Matrix` capability.
This should also be taken into consideration when preparing the
conversion target: when we specify that a capability is allowed, all
of its recursively implied capabilities are also allowed.
This commit adds utility functions to query implied extensions for
a given version and implied capabilities for a given capability
and updated SPIRVConversionTarget to use them.
This commit also fixes a bug in the availability spec. When a symbol
(op or enum case) can be enabled by an extension, we should drop
its minimal version requirement. Being enabled by an extension
naturally means the symbol can be used by *any* SPIR-V version
as long as the extension is supported. The grammar still encodes
the 'version' field for such cases, but it should be interpreted
in a different way: rather than meaning a minimal version
requirement, it says the symbol becomes core at that specific
version.
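A hedged sketch of how such a query might be used when preparing the conversion target; the enum and the query function below are placeholders following the commit description, not the actual SPIR-V dialect API:
```c++
#include "llvm/ADT/SmallVector.h"

// Placeholder standing in for spirv::Capability; the real dialect enum and
// the exact query signature may differ.
enum class Capability { Shader, Matrix };

// Hypothetical query mirroring the utility described above.
llvm::SmallVector<Capability, 4> getRecursiveImpliedCapabilities(Capability cap);

// Expand one allowed capability into itself plus everything it recursively
// implies (e.g. Shader implies Matrix).
llvm::SmallVector<Capability, 8> expandAllowedCapability(Capability cap) {
  llvm::SmallVector<Capability, 8> allowed;
  allowed.push_back(cap);
  for (Capability implied : getRecursiveImpliedCapabilities(cap))
    allowed.push_back(implied);
  return allowed;
}
```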
Differential Revision: https://reviews.llvm.org/D72765
Summary:
* Add shaped container type interface which allows inferring the shape, element
type and attribute of shaped container type separately. Show usage by way of
tensor type inference trait which combines the shape & element type in
inferring a tensor type;
- All components need not be specified;
- Attribute is added to allow for layout attribute that was previously
discussed;
* Expand the test driver to make it easier to test new creation instances
(adding new operands or ops with attributes or regions would trigger build
functions/type inference methods);
- The verification part will be moved out of the test and into the verify method
of ops implementing the type inference interface in a follow-up;
* Add MLIRContext as an arg to make it possible to create types for ops without arguments,
regions, or locations;
* Also move out the section in OpDefinitions doc to separate ShapeInference doc
where the shape function requirements can be captured;
- Part of this would move to the shape dialect and/or shape dialect ops be
included as subsection of this doc;
* Update ODS's variable usage to match camelBack format for builder,
state and arg variables;
- I could have split this out, but I had to make some changes around
these and the inconsistency bugged me :)
Differential Revision: https://reviews.llvm.org/D72432
Summary:
This is based on the use of code constantly checking for an attribute on
a module; this change instead represents the distinct operation with a different
op, which can then be used to provide better filtering.
Reviewers: herhut, mravishankar, antiagainst, rriddle
Reviewed By: herhut, antiagainst, rriddle
Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72336
Thus far we can only generate the same set of methods even for
operations in different dialects. This is problematic for dialects that
want to generate additional operation class methods programmatically,
e.g., a special builder method or attribute getter method. Apparently
we cannot update the OpDefinitionsGen backend every time when such
a need arises. So this CL introduces a hook into the OpDefinitionsGen
backend to allow dialects to emit additional methods and traits to
operation classes.
Differential Revision: https://reviews.llvm.org/D72514
Previously we only checked that each field is of the correct
mlir::Attribute subclass. This commit enhances the check to also consider
the attribute's type, by leveraging the constraints already
encoded in TableGen attribute definitions.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D72162
Summary:
This diff adds support to allow `linalg.generic` and
`linalg.indexed_generic` to take tensor input and output
arguments.
The subset of output tensor operand types must appear
verbatim in the result types after an arrow. The parser,
printer and verifier are extended to accommodate this
behavior.
The Linalg operations now support variadic ranked tensor
return values. This extension exhibited issues with the
current handling of NativeCall in RewriterGen.cpp. As a
consequence, an explicit cast to `SmallVector<Value, 4>`
is added in the proper place to support the new behavior
(better suggestions are welcome).
Relevant cleanups and name uniformization are applied.
Relevant invalid and roundtrip test are added.
Reviewers: mehdi_amini, rriddle, jpienaar, antiagainst, ftynse
Subscribers: burmako, shauheen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72022
Lots of SPIR-V ops take enum attributes and certain enum cases
need extra capabilities or extensions to be available. This commit
extends support to allow specifying the availability spec on enum cases.
Extra utility functions are generated for the corresponding enum
classes to return the availability requirement. The availability
interface implementation for a SPIR-V op now goes over all enum
attributes to collect the availability requirements.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D71947
for (const auto &x : llvm::zip(..., ...))
->
for (auto x : llvm::zip(..., ...))
The return type of zip() is a wrapper that wraps a tuple of references.
> warning: loop variable 'p' is always a copy because the range of type 'detail::zippy<detail::zip_shortest, ArrayRef<long> &, ArrayRef<long> &>' does not return a reference [-Wrange-loop-analysis]
SPIR-V has a few mechanisms to control op availability: version,
extension, and capabilities. These mechanisms are considered as
different availability classes.
This commit introduces basic definitions for modelling SPIR-V
availability classes. Specifically, an `Availability` class is
added to SPIRVBase.td, along with two subclasses: MinVersion
and MaxVersion for versioning. SPV_Op is extended to take a
list of `Availability`. Each `Availability` instance carries
information for generating op interfaces for the corresponding
availability class and also the concrete availability
requirements.
With the availability spec on ops, we can now auto-generate the
op interfaces of all SPIR-V availability classes and also
synthesize the op's implementations of these interfaces. The
interface generation is done via new TableGen backends
-gen-avail-interface-{decls|defs}. The op's implementation is
done via -gen-spirv-avail-impls.
Differential Revision: https://reviews.llvm.org/D71930
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
This enables providing a default implementation of an interface method. This method is defined on the Trait that is attached to the operation, and thus has all of the same constraints and properties as any other interface method. This allows for interface authors to provide a conservative default implementation for certain methods, without requiring that all users explicitly define it. The default implementation can be specified via the argument directly after the interface method body:
StaticInterfaceMethod<
  /*desc=*/"Returns whether two array of types are compatible result types for an op.",
  /*retTy=*/"bool",
  /*methodName=*/"isCompatibleReturnTypes",
  /*args=*/(ins "ArrayRef<Type>":$lhs, "ArrayRef<Type>":$rhs),
  /*methodBody=*/[{
    return ConcreteOp::isCompatibleReturnTypes(lhs, rhs);
  }],
  /*defaultImplementation=*/[{
    /// Returns whether two arrays are equal as strongest check for
    /// compatibility by default.
    return lhs == rhs;
  }]
>
PiperOrigin-RevId: 286226054
Scope and Memory Semantics attributes need to be serialized as a
constant integer value and the <id> needs to be used to specify the
value. Fix the auto-generated SPIR-V (de)serialization to handle this.
PiperOrigin-RevId: 285849431
This is needed for calling the generator on a .td file that contains both OpInterface definitions and op definitions with DeclareOpInterfaceMethods<...> Traits.
PiperOrigin-RevId: 285465784
Add a variant that does invoke the infer type op interface where defined. Also add an entry function that invokes the different separate-argument builders for the wrapped, unwrapped, and inference variants.
PiperOrigin-RevId: 285220709
Currently, named accessors are generated for attributes returning a consumer-friendly
type. But sometimes the attributes are used while transforming an
existing op, and then the returned type has to be converted back into an
attribute or the raw `getAttr` needs to be used. Generate raw named accessors
for attributes to reference the raw attributes without having to use the string
interface, for better compile-time verification. This allows calling
`blahAttr()` instead of `getAttr("blah")`.
Raw here refers to returning the underlying storage attribute.
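A tiny sketch using the 'blah' placeholder from the text; OpTy stands in for any ODS-generated op carrying that attribute:
```c++
#include "mlir/IR/Attributes.h"

// The generated raw accessor returns the underlying storage attribute,
// replacing the string-based lookup while keeping the same result.
template <typename OpTy>
mlir::Attribute getBlahRaw(OpTy op) {
  // Previously: return op.getAttr("blah");
  return op.blahAttr();
}
```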
PiperOrigin-RevId: 284583426
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.
PiperOrigin-RevId: 284360710
Previously the error case was using a sentinel, which was bad. Also make the one `build` invoke the other `build` to reuse verification there.
And follow up on suggestion to use formatv which I missed during previous review.
PiperOrigin-RevId: 284265762
For ops with the infer type op interface defined, generate a version that calls the inference method on build. This is an intermediate step to removing the special casing of SameOperandsAndResultType & FirstAttrDerivedResultType. After that would be generating the inference code, with the initial focus on shaped container types. In between I plan to refactor these a bit to reuse generated paths. The intention would not be to add the type inference trait in multiple places, but rather to take advantage of the current modelling in ODS where possible to emit it instead.
Switch the `inferReturnTypes` method to be static.
Skipping ops with regions here as I don't like the Region vs unique_ptr<Region> difference at the moment, and I want the infer return type trait to be useful for verification too. So instead, just skip it for now to avoid churn.
PiperOrigin-RevId: 284217913
This CL refactors some of the MLIR vector dependencies to allow decoupling VectorOps, vector analysis, vector transformations and vector conversions from each other.
This makes the system more modular and allows extracting VectorToVector into VectorTransforms that do not depend on vector conversions.
This refactoring exhibited a bunch of cyclic library dependencies that have been cleaned up.
PiperOrigin-RevId: 283660308
Existing builders generated by ODS require attributes to be passed
in as mlir::Attribute or its subclasses. This is okay for aggregate-
parameter builders, which are primarily to be used by programmatic
C++ code generation; it is inconvenient for separate-parameter
builders meant to be called in manually written C++ code because
it requires developers to wrap raw values into mlir::Attribute by
themselves.
This CL extends ODS to generate additional builder methods that
take raw values for attributes and handles the wrapping in the
builder implementation. Additionally, if an attribute appears
late in the arguments list and has a default value, the default
value is supplied in the declaration if possible.
PiperOrigin-RevId: 283355919
Right now op argument matching in DRR is position-based, meaning we need to
specify N arguments for an op with N ODS-declared arguments. This can be annoying
when we don't want to capture all the arguments. `$_` is to remedy the situation.
PiperOrigin-RevId: 283339992
Certain operations can have multiple variadic operands and their size
relationship is not always known statically. For such cases, we need
a per-op-instance specification to divide the operands into logical
groups or segments. This can be modeled by attributes.
This CL introduces C++ trait AttrSizedOperandSegments for operands and
AttrSizedResultSegments for results. The C++ trait just guarantees
that such a size attribute has the correct type (1D vector) and values
(non-negative), etc. It serves as the basis for ODS sugaring that
with ODS argument declarations we can further verify the number of
elements match the number of ODS-declared operands and we can generate
handy getter methods.
PiperOrigin-RevId: 282467075
This changes the OpDefinitionsGen to automatically add the OpAsmOpInterface for operations with multiple result groups using the provided ODS names. We currently just limit the generation to multi-result ops as most single result operations don't have an interesting name (result/output/etc.). An example is shown below:
// The following operation:
def MyOp : ... {
  let results = (outs AnyType:$first, Variadic<AnyType>:$middle, AnyType);
}
// May now be printed as:
%first, %middle:2, %0 = "my.op" ...
PiperOrigin-RevId: 281834156
This CL uses the pattern rewrite infrastructure to implement a simple VectorOps -> VectorOps legalization strategy to unroll coarse-grained vector operations into finer grained ones.
The transformation is written using local pattern rewrites to allow composition with other rewrites. It proceeds by iteratively introducing fake cast ops and then canonicalizing or lowering them away where appropriate.
This is an example of writing transformations as compositions of local pattern rewrites that should enable us to make them significantly more declarative.
PiperOrigin-RevId: 281555100
Thus far DRR always invokes the separate-parameter builder (i.e., requiring
a separate parameter for each result-type/operand/attribute) for creating
ops, no matter whether we can auto-generate a builder with type-deduction
ability or not.
This CL changes the path for ops for which we can auto-generate type-deduction
builders, i.e., those with SameOperandsAndResultType/FirstAttrDerivedResultType
traits. Now they are going through an aggregate-parameter builder (i.e.,
requiring one parameter for all result-types/operands/attributes).
It is expected this approach will be more friendly for future shape inference
function autogen and calling those autogen'd shape inference functions without
excessive packing and repacking operand/attribute lists.
Also, it would enable better support for creating ops with optional attributes
because we are not required to provide an Attribute() as placeholder for
an optional attribute anymore.
PiperOrigin-RevId: 280654800
Refactoring the conversion from StandardOps/GPU dialect to SPIR-V
dialect:
1) Move the SPIRVTypeConversion and SPIRVOpLowering class into SPIR-V
dialect.
2) Add header files that expose functions to add patterns for the
dialects to SPIR-V lowering, as well as a pass that does the
dialect to SPIR-V lowering.
3) Make SPIRVOpLowering derive from OpLowering class.
PiperOrigin-RevId: 280486871
Since VariableOp is serialized during processBlock, we add two more fields,
`functionHeader` and `functionBody`, to collect instructions for a function.
After all the blocks have been processed, we append them to the `functions`.
Also, fix a bug in processGlobalVariableOp. The global variables should be
encoded into `typesGlobalValues`.
PiperOrigin-RevId: 280105366
This CL adds an extra pointer to the memref descriptor to allow specifying alignment.
In a previous implementation, we used 2 types: `linalg.buffer` and `view` where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing.
After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.
This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).
PiperOrigin-RevId: 279959463
MLIR translation tools can emit diagnostics and we want to be able to check if
it is indeed the case in tests. Reuse the source manager error handlers
provided for mlir-opt to support the verification in mlir-translate. This
requires us to change the signature of the functions that are registered to
translate sources to MLIR: it now takes a source manager instead of a memory
buffer.
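Roughly, the shape of a registered translation callback changes as sketched below; the alias names are ours and the exact registration machinery is elided:
```
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Module.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>

// Before: translation callbacks consumed a memory buffer directly.
using BufferTranslateFn =
    mlir::OwningModuleRef (*)(std::unique_ptr<llvm::MemoryBuffer>,
                              mlir::MLIRContext *);
// After: callbacks take a source manager, so the diagnostic handlers used by
// mlir-opt can be attached to it for verification.
using SourceMgrTranslateFn =
    mlir::OwningModuleRef (*)(llvm::SourceMgr &, mlir::MLIRContext *);
```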
PiperOrigin-RevId: 279132972
This makes the generated doc easier to read and it is also
more friendly to certain markdown parsers like kramdown.
Fixes tensorflow/mlir#221
PiperOrigin-RevId: 278643469
BitEnumAttr is a mechanism for modelling attributes whose value is
a bitfield. It should not be scoped to the SPIR-V dialect and can
be used by other dialects too.
This CL is mostly shuffling code around and adding tests and docs.
Functionality changes are:
* Fixed to use `getZExtValue()` instead of `getSExtValue()` when
getting the value from the underlying IntegerAttr for a case.
* Changed to auto-detect whether there is a case whose value is all bits unset (i.e., zero). If so, handle it specially in all helper methods.
PiperOrigin-RevId: 277964926
Previously DRR assumed attributes to appear after operands. This was the previous requirement in ODS, but that changed some time ago. Fix DRR to also support interleaved operands and attributes.
PiperOrigin-RevId: 275983485
Otherwise, we'll see the following warning when compiling with GCC 8:
warning: this 'for' clause does not guard... [-Wmisleading-indentation]
PiperOrigin-RevId: 275735925
NativeCodeCall is handled differently than normal op creation in RewriterGen (because of its flexibility). It will only be materialized to the output stream if it is used. But when using it for auxiliary patterns, we still want the side effect even if it is not replacing the matched root op's results.
PiperOrigin-RevId: 275265467
It's usually hard to understand what went wrong if mlir-tblgen crashes on some input. This CL adds a few useful LLVM_DEBUG statements so that we can use mlir-tblgen -debug to figure out the culprit for a crash.
PiperOrigin-RevId: 275253532
Add a pass to decorate the composite types used by
composite objects in the StorageBuffer, PhysicalStorageBuffer,
Uniform, and PushConstant storage classes with layout information.
Closes tensorflow/mlir#156
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/156 from denis0x0D:sandbox/layout_info_decoration 7c50840fd38ca169a2da7ce9886b52b50c868b84
PiperOrigin-RevId: 273634140
MLIR uses symbol references to model references to many global entities, such as functions/variables/etc. Before this change, there is no way to actually reason about the uses of such entities. This change provides a walker for symbol references(via SymbolTable::walkSymbolUses), as well as 'use_empty' support(via SymbolTable::symbol_use_empty). It also resolves some deficiencies in the LangRef definition of SymbolRefAttr, namely the restrictions on where a SymbolRefAttr can be stored, ArrayAttr and DictionaryAttr, and the relationship with operations containing the SymbolTable trait.
PiperOrigin-RevId: 273549331
Originally, we were attaching attributes containing CUBIN blobs to the kernel
function called by `gpu.launch_func`. This kernel is now contained in a nested
module that is used as a compilation unit. Attach compiled CUBIN blobs to the
module rather than to the function since we were compiling the module. This
also avoids duplication of the attribute on multiple kernels within the same
module.
PiperOrigin-RevId: 273497303
Now that the accessor function is a trivial getter of the global variable, it
makes less sense to have the getter generation as a separate pass. Move the
getter generation into the lowering of `gpu.launch_func` to CUDA calls. This
change is mostly code motion, but the process can be simplified further by generating the addressof in place instead of using a call. This will be done in a follow-up.
PiperOrigin-RevId: 273492517
Add new `typeDescription` (description was already used by base constraint class) field to type to allow writing longer descriptions about a type being defined. This allows for providing additional information/rationale for a defined type. This currently uses `description` as the heading/name for the type in the generated documentation.
PiperOrigin-RevId: 273299332
This makes the name of the conversion pass more consistent with the naming
scheme, since it actually converts from the Loop dialect to the Standard
dialect rather than working with arbitrary control flow operations.
PiperOrigin-RevId: 272612112
This is a follow-up to the PRtensorflow/mlir#146 which introduced the ROCDL Dialect. This PR introduces a pass to lower GPU Dialect to the ROCDL Dialect. As with the previous PR, this one builds on the work done by @whchung, and addresses most of the review comments in the original PR.
Closes tensorflow/mlir#154
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/154 from deven-amd:deven-lower-gpu-to-rocdl 809893e08236da5ab6a38e3459692fa04247773d
PiperOrigin-RevId: 272390729
Add DeclareOpInterfaceFunctions to enable specifying whether OpInterfaceMethods for an OpInterface should be generated automatically. This avoids needing to declare the extra methods, while also allowing function declarations to be added by way of traits/inheritance.
Most of this change is mechanical/extracting classes to be reusable.
PiperOrigin-RevId: 272042739
This CL finishes the implementation of the lowering part of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Strided memrefs correspond conceptually to the following templated C++ struct:
```
template <typename Elem, size_t Rank>
struct {
  Elem *ptr;
  int64_t offset;
  int64_t sizes[Rank];
  int64_t strides[Rank];
};
```
The linearization procedure for address calculation for strided memrefs is the same as for linalg views:
`base_offset + SUM_i index_i * stride_i`.
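A standalone sketch of that address computation (plain C++, independent of the actual lowering code):
```
#include <cstddef>
#include <cstdint>

// Computes the linearized element index of a strided memref access:
// base_offset + sum_i(index_i * stride_i).
int64_t linearizeIndex(int64_t offset, const int64_t *indices,
                       const int64_t *strides, size_t rank) {
  int64_t linear = offset;
  for (size_t i = 0; i < rank; ++i)
    linear += indices[i] * strides[i];
  return linear;
}
```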
The following CL will unify Linalg and Standard by removing !linalg.view in favor of strided memrefs.
PiperOrigin-RevId: 272033399
The strided MemRef RFC discusses a normalized descriptor and interaction with library calls (https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
Lowering of nested LLVM structs as value types does not play nicely with externally compiled C/C++ functions due to ABI issues.
Solving the ABI problem generally is a very complex problem and most likely involves taking
a dependence on clang that we do not want atm.
A simple workaround is to pass pointers to memref descriptors at function boundaries, which this CL implements.
PiperOrigin-RevId: 271591708
This commit introduces the ROCDL Dialect (i.e. the ROCDL ops + the code to lower those ROCDL ops to LLVM intrinsics/functions). Think of the ROCDL Dialect as analogous to the NVVM Dialect, but for AMD GPUs. This patch contains just the essentials needed to get a simple example up and running. We expect to make further additions to the ROCDL Dialect.
This is the first of 3 commits, the follow-up will be:
* add a pass that lowers GPU Dialect to ROCDL Dialect
* add a "mlir-rocm-runner" utility
Closes tensorflow/mlir#146
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/146 from deven-amd:deven-rocdl-dialect e78e8005c75a78912631116c78dc844fcc4b0de9
PiperOrigin-RevId: 271511259
The support for functions taking and returning memrefs of floats was introduced
in the first version of the runner, created before MLIR had reliable lowering
of allocation/deallocation to library calls. It forcibly runs MLIR transformations converting the affine, loop and standard dialects into the LLVM dialect, unlike the other runner flows that accept the LLVM dialect directly.
Memref support leads to more complex layering and is generally fragile. Drop
it in favor of functions returning a scalar, or library-based function calls to
print memrefs and other data structures.
PiperOrigin-RevId: 271330839
1) Process and ignore the following debug instructions: OpSource,
OpSourceContinued, OpSourceExtension, OpString, OpModuleProcessed.
2) While processing the OpTypeInt instruction, ignore the signedness specification. Currently MLIR doesn't make a distinction between signed and unsigned integer types.
3) Process and ignore the BufferBlock decoration (similar to the Buffer decoration). StructType needs to be enhanced to track this attribute since it's needed for proper validation checks.
4) Report better error for unhandled instruction during
deserialization.
PiperOrigin-RevId: 271057060
This change adds support for documenting interfaces and their methods. A tablegen generator for the interface documentation is also added (gen-op-interface-doc).
Documentation is added to an OpInterface via the `description` field:
def MyOpInterface : OpInterface<"MyOpInterface"> {
  let description = [{
    My interface is very interesting.
  }];
}
Documentation is added to an InterfaceMethod via a new `description` field that comes right before the optional body:
InterfaceMethod<"void", "foo", (ins), [{
  This is the foo method.
}]>,
PiperOrigin-RevId: 270965485
Similar to mlir-opt, having a -split-input-file mode is quite useful in mlir-translate. It allows putting logically related tests in the same test file for better organization.
PiperOrigin-RevId: 270805467
Roll forward of commit 5684a12.
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.
PiperOrigin-RevId: 270639748
When outlining GPU kernels, put the kernel function inside a nested module. Then use a nested pipeline to generate the cubins, independently per kernel. In a final pass, move the cubins back to the parent module.
PiperOrigin-RevId: 269987720
This CL changes translation functions to take MemoryBuffer
as input and raw_ostream as output. It is generally better to
avoid handling files directly in a library (unless the library
is specifically for file manipulation) and we can unify all
file handling to the mlir-translate binary itself.
PiperOrigin-RevId: 269625911
A generic mechanism for (de)serialization of extended instruction sets
is added with this CL. To facilitate this, a new class
"SPV_ExtendedInstSetOp" is added which is a base class for all
operations corresponding to extended instruction sets. The methods to (de)serialize such ops, as well as their dispatch, are generated automatically.
The behavior controlled by autogenSerialization and hasOpcode is also
slightly modified to enable this. They are now decoupled.
1) Setting hasOpcode=1 means the operation has a corresponding
opcode in SPIR-V binary format, and its dispatch for
(de)serialization is automatically generated.
2) Setting autogenSerialization=1 generates the function for
(de)serialization automatically.
So now it is possible to have hasOpcode=0 and autogenSerialization=1
(for example SPV_ExtendedInstSetOp).
Since the dispatch functions are also auto-generated, the input file needs to contain all operations. To this effect, SPIRVGLSLOps.td is included into SPIRVOps.td. This makes the previously added SPIRVGLSLOps.h and SPIRVGLSLOps.cpp unnecessary, so they are deleted.
SPIRVUtilsGen.cpp is also changed to make better use of formatv, making the code more readable.
PiperOrigin-RevId: 269456263
Certain enum classes in SPIR-V, like function/loop control and memory
access, are bitmasks. This CL introduces a BitEnumAttr to properly
model this and drive auto-generation of verification code and utility
functions. We still store the attribute using a 32-bit IntegerAttr for minimal memory footprint and easy (de)serialization. But utility conversion functions are adjusted to inspect each bit and generate "|"-concatenated strings for the bits, and vice versa.
Each such enum class has a "None" case that means no bit is set. We
need special handling for "None". Because of this, the logic is not
general anymore. So right now the definition is placed in the SPIR-V
dialect. If later this turns out to be useful for other dialects,
then we can see how to properly adjust it and move to OpBase.td.
Added tests for SPV_MemoryAccess to check and demonstrate.
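For illustration, a hedged sketch of the kind of bit-wise stringification this drives; the enum values match the spirit of SPV_MemoryAccess but the names and helper are ours:
```
#include <cstdint>
#include <string>

enum class MemoryAccess : uint32_t { None = 0, Volatile = 1, Aligned = 2 };

// Returns a "|"-concatenated string of the set bits, or "None" if no bit is set.
std::string stringifyMemoryAccess(MemoryAccess value) {
  uint32_t bits = static_cast<uint32_t>(value);
  if (bits == 0)
    return "None";
  std::string result;
  auto append = [&](const char *name) {
    if (!result.empty())
      result += "|";
    result += name;
  };
  if (bits & 1)
    append("Volatile");
  if (bits & 2)
    append("Aligned");
  return result;
}
```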
PiperOrigin-RevId: 269350620
This allows for explicitly specifying the pipeline to add to the pass manager. This includes the nesting structure, as well as the passes/pipelines to run. A textual pipeline string is defined as a series of names, each of which may in itself recursively contain a nested pipeline description. A name is either the name of a registered pass or pass pipeline (e.g. "cse"), or the name of an operation type (e.g. "func").
For example, the following pipeline:
$ mlir-opt foo.mlir -cse -canonicalize -lower-to-llvm
Could now be specified as:
$ mlir-opt foo.mlir -pass-pipeline='func(cse, canonicalize), lower-to-llvm'
This will allow for running pipelines on nested operations, like say spirv modules. This does not remove any of the current functionality, and in fact can be used in unison. The new option is available via 'pass-pipeline'.
PiperOrigin-RevId: 268954279
This is done via a new set of instrumentation hooks runBeforePipeline/runAfterPipeline, that signal the lifetime of a pass pipeline on a specific operation type. These hooks also provide the parent thread of the pipeline, allowing for accurate merging of timers running on different threads.
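Conceptually (this is a toy sketch of the idea, not the actual PassInstrumentation API), an instrumentation can now bracket a whole pipeline and key its timer on the thread running it:
```
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Toy instrumentation mirroring the new hooks; the signatures are illustrative.
struct PipelineTimer {
  std::chrono::steady_clock::time_point start;

  void runBeforePipeline(const std::string &opName) {
    start = std::chrono::steady_clock::now();
    std::cerr << "pipeline on '" << opName << "' starting on thread "
              << std::this_thread::get_id() << "\n";
  }
  void runAfterPipeline(const std::string &opName) {
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::cerr << "pipeline on '" << opName << "' took " << elapsed.count()
              << "s\n";
  }
};
```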
PiperOrigin-RevId: 267909193
This change generalizes the structure of the pass manager to allow arbitrary nesting pass managers for other operations, at any level. The only user visible change to existing code is the fact that a PassManager must now provide an MLIRContext on construction. A new class `OpPassManager` has been added that represents a pass manager on a specific operation type. `PassManager` will remain the top-level entry point into the pipeline, with OpPassManagers being nested underneath. OpPassManagers will still be implicitly nested if the operation type on the pass differs from the pass manager. To explicitly build a pipeline, the 'nest' methods on OpPassManager may be used:
// Pass manager for the top-level module.
PassManager pm(ctx);
// Nest a pipeline operating on FuncOp.
OpPassManager &fpm = pm.nest<FuncOp>();
fpm.addPass(...);
// Nest a pipeline under the FuncOp pipeline that operates on spirv::ModuleOp
OpPassManager &spvModulePM = pm.nest<spirv::ModuleOp>();
// Nest a pipeline on FuncOps inside of the spirv::ModuleOp.
OpPassManager &spvFuncPM = spvModulePM.nest<FuncOp>();
To help accomplish this a new general OperationPass is added that operates on opaque Operations. This pass can be inserted in a pass manager of any type to operate on any operation opaquely. An example of this opaque OperationPass is a VerifierPass, that simply runs the verifier opaquely on the current operation.
/// Pass to verify an operation and signal failure if necessary.
class VerifierPass : public OperationPass<VerifierPass> {
  void runOnOperation() override {
    Operation *op = getOperation();
    if (failed(verify(op)))
      signalPassFailure();
    markAllAnalysesPreserved();
  }
};
PiperOrigin-RevId: 266840344
- the list of passes run by mlir-cpu-runner included -lower-affine and
-lower-to-llvm but was missing -lower-to-cfg (because -lower-affine at
some point used to lower straight to CFG); add -lower-to-cfg in
between. IR with affine ops can now be run by mlir-cpu-runner.
- update -lower-to-cfg to be consistent with other passes (create*Pass methods
were changed to return unique ptrs, but -lower-to-cfg appears to have been
missed).
- mlir-cpu-runner was unable to parse the custom form of affine ops - fix
link options
- drop unnecessary run options from test/mlir-cpu-runner/simple.mlir
(none of the test cases had loops)
- -convert-to-llvmir was changed to -lower-to-llvm at some point, but the
create pass method name wasn't updated (this pass converts/lowers to LLVM
dialect as opposed to LLVM IR). Fix this.
(If we prefer "convert", the cmd-line options could be changed to
"-convert-to-llvm/cfg" then.)
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closestensorflow/mlir#115
PiperOrigin-RevId: 266666909
Similar to enums, added a generator for structured data. This provides a Dictionary that stores a fixed set of values and guarantees the values are valid. It is intended to store a fixed number of values, each accessible by a given name.
PiperOrigin-RevId: 266437460
Instead of lowering the program in two steps (Standard->LLVM followed by GPU->NVVM), leading to invalid IR in between, the runner now uses one pattern-based rewrite step to go directly from Standard+GPU to LLVM+NVVM.
PiperOrigin-RevId: 265861934
Operation interfaces generally require a bit of boilerplate code to connect all of the pieces together. This cl introduces mechanisms in the ODS to allow for generating operation interfaces via the 'OpInterface' class.
Providing a definition of the `OpInterface` class will auto-generate the c++
classes for the interface. An `OpInterface` includes a name, for the c++ class,
along with a list of interface methods. There are two types of methods that can be used with an interface, `InterfaceMethod` and `StaticInterfaceMethod`. They are both comprised of the same core components, with the distinction that `StaticInterfaceMethod` models a static method on the derived operation.
An `InterfaceMethod` is comprised of the following components:
* ReturnType
- A string corresponding to the c++ return type of the method.
* MethodName
- A string corresponding to the desired name of the method.
* Arguments
- A dag of strings that correspond to a c++ type and variable name
respectively.
* MethodBody (Optional)
- An optional explicit implementation of the interface method.
def MyInterface : OpInterface<"MyInterface"> {
  let methods = [
    // A simple non-static method with no inputs.
    InterfaceMethod<"unsigned", "foo">,
    // A new non-static method accepting an input argument.
    InterfaceMethod<"Value *", "bar", (ins "unsigned":$i)>,
    // Query a static property of the derived operation.
    StaticInterfaceMethod<"unsigned", "fooStatic">,
    // Provide the definition of a static interface method.
    // Note: `ConcreteOp` corresponds to the derived operation typename.
    StaticInterfaceMethod<"Operation *", "create",
      (ins "OpBuilder &":$builder, "Location":$loc), [{
        return builder.create<ConcreteOp>(loc);
    }]>,
    // Provide a definition of the non-static method.
    // Note: `op` corresponds to the derived operation variable.
    InterfaceMethod<"unsigned", "getNumInputsAndOutputs", (ins), [{
      return op.getNumInputs() + op.getNumOutputs();
    }]>,
  ];
}
PiperOrigin-RevId: 264754898
This CL extends declarative rewrite rules to support matching and generating ops with variadic operands/results. For this, the generated `matchAndRewrite()` method for each pattern is now changed to
* Use "range" types for the local variables used to store captured
values (`operand_range` for operands, `ArrayRef<Value *>` for
values, *Op for results). This allows us to have a unified way
of handling both single values and value ranges.
* Create local variables for each operand for op creation. If the
operand is variadic, then a `SmallVector<Value*>` will be created
to collect all values for that operand; otherwise a `Value*` will
be created.
* Use a collective result type builder. All result types are
specified via a single parameter to the builder.
We can use one result pattern to replace multiple results of the
matched root op. When that happens, it will require specifying
types for multiple results. Add a new collective-type builder.
PiperOrigin-RevId: 264588559
Switch to C++14 standard method as llvm::make_unique has been removed (
https://reviews.llvm.org/D66259). Also mark some targets as c++14 to ease next
integrates.
PiperOrigin-RevId: 263953918
This CL is step 3/n towards building a simple, programmable and portable vector abstraction in MLIR that can go all the way down to generating assembly vector code via LLVM's opt and llc tools.
This CL adds support for converting MLIR n-D vector types to (n-1)-D arrays of 1-D LLVM vectors and a conversion VectorToLLVM that lowers the `vector.extractelement` and `vector.outerproduct` instructions to the proper mix of `llvm.vectorshuffle`, `llvm.extractelement` and `llvm.mulf`.
This has been independently verified to produce proper avx2 code.
Input:
```
func @vec_1d(%arg0: vector<4xf32>, %arg1: vector<8xf32>) -> vector<8xf32> {
  %2 = vector.outerproduct %arg0, %arg1 : vector<4xf32>, vector<8xf32>
  %3 = vector.extractelement %2[0 : i32]: vector<4x8xf32>
  return %3 : vector<8xf32>
}
```
Command:
```
mlir-opt vector-to-llvm.mlir -vector-lower-to-llvm-dialect --disable-pass-threading | mlir-opt -lower-to-cfg -lower-to-llvm | mlir-translate --mlir-to-llvmir | opt -O3 | llc -O3 -march=x86-64 -mcpu=haswell -mattr=fma,avx2
```
Output:
```
vec_1d: # @vec_1d
# %bb.0:
vbroadcastss %xmm0, %ymm0
vmulps %ymm1, %ymm0, %ymm0
retq
```
PiperOrigin-RevId: 262895929
In declarative rewrite rules, a symbol can be bound to op arguments or
results in the source pattern, and it can be bound to op results in the
result pattern. This means that, given a symbol in the pattern, it can stand for different things: an op operand, an op attribute, a single op result, or an op result pack. We need a better way to model this complexity so that we can handle each case according to the specific kind a symbol corresponds to.
Created SymbolInfo class for maintaining the information regarding a
symbol. Also created a companion SymbolInfoMap class for a map of
such symbols, providing insertion and querying depending on use cases.
PiperOrigin-RevId: 262675515
The translation code predates the introduction of LogicalResult and was relying
on the obsolete LLVM convention of returning false on success. Change it to
use MLIR's LogicalResult abstraction instead. NFC.
PiperOrigin-RevId: 262589432
Previously we are emitting separate match() and rewrite()
methods, which requires conveying a match state struct
in a unique_ptr across these two methods. Changing to
emit matchAndRewrite() simplifies the picture.
PiperOrigin-RevId: 261906804
Instead of setting the attributes for decorations one by one
after constructing the op, this CL changes to attach all
the attributes for decorations to the attribute vector for
constructing the op. This should be simpler and more
efficient.
PiperOrigin-RevId: 261905578
This allows for proper forward declaration, as opposed to leaking the internal implementation via a using directive. This also allows for all pattern building to go through 'insert' methods on the OwningRewritePatternList, replacing uses of 'push_back' and 'RewriteListBuilder'.
PiperOrigin-RevId: 261816316
verifyUnusedValue is a bit strange given that it is specified in a result pattern but used to generate match statements. Now that we are able to support multi-result ops better, we can retire it and replace it with a HasNoUseOf constraint. This reduces the number of mechanisms.
PiperOrigin-RevId: 261166863
We allow generating more ops than what is needed for replacing the matched root op. Only the last N static values generated are
used as replacement; the others serve as auxiliary ops/values for
building the replacement.
With the introduction of multi-result op support, an op, if used
as a whole, may be used to replace multiple static values of
the matched root op. We need to consider this when calculating the result range a generated op is to replace.
For example, we can have the following pattern:
```tblgen
def : Pattern<(ThreeResultOp ...),
[(OneResultOp ...), (OneResultOp ...), (OneResultOp ...)]>;
// Two ops to replace all three results
def : Pattern<(ThreeResultOp ...),
[(TwoResultOp ...), (OneResultOp ...)]>;
// One op to replace all three results
def : Pat<(ThreeResultOp ...), (ThreeResultOp ...)>;
def : Pattern<(ThreeResultOp ...),
[(AuxiliaryOp ...), (ThreeResultOp ...)]>;
```
PiperOrigin-RevId: 261017235
Previously we use one single method with lots of branches to
generate multiple builders. This makes the method difficult
to follow and modify. This CL splits the method into multiple
dedicated ones, by extracting common logic into helper methods
while leaving logic specific to each builder in their own
methods.
PiperOrigin-RevId: 261011082
During serialization, the operand number must be used to get the values associated with an operand. Using the argument number in the Op specification was wrong since some of the elements in the arguments list might be attributes on the operation. This resulted in a segfault during serialization.
Add a test that exercises that path.
PiperOrigin-RevId: 260977758
All non-argument attributes specified for an operation are treated as decorations on the result value and (de)serialized using the OpDecorate instruction. An error is generated if an attribute is not an argument and the name doesn't correspond to a Decoration enum. The names of attributes that represent decorations are expected to be the snake-case-ified version of the Decoration name.
Add utility methods to convert to snake-case and camel-case.
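For illustration, a standalone sketch of such a snake-case conversion (not the actual utility added in this change):
```
#include <cctype>
#include <cstddef>
#include <string>

// Converts e.g. "RelaxedPrecision" to "relaxed_precision".
std::string convertToSnakeCase(const std::string &input) {
  std::string result;
  for (std::size_t i = 0; i < input.size(); ++i) {
    unsigned char c = static_cast<unsigned char>(input[i]);
    if (std::isupper(c)) {
      if (i != 0)
        result += '_';
      result += static_cast<char>(std::tolower(c));
    } else {
      result += static_cast<char>(c);
    }
  }
  return result;
}
```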
PiperOrigin-RevId: 260792638
Add a missing library that needs to be linked with mlir-opt. Its absence results in an MLIR test failure due to the pass `-convert-gpu-to-spirv` not being found.
PiperOrigin-RevId: 260773067
This CL adds an initial implementation for translation of a kernel function in the GPU Dialect (used with a gpu.launch_kernel op) to a spv.Module. The original function is translated into an entry function.
Most of the heavy lifting is done by adding TypeConversion and other
utility functions/classes that provide most of the functionality to
translate from Standard Dialect to SPIR-V Dialect. These are intended
to be reusable in implementation of different dialect conversion
pipelines.
Note: Some of the files have been renamed to be consistent with the norm used by the other Conversion frameworks.
PiperOrigin-RevId: 260759165
RewriterGen was emitting invalid C++ code if the pattern required creating a zero-result operation, due to the absence of a special case that would avoid generating a spurious comma. Handle this case. Also add rewriter tests for zero-argument operations.
PiperOrigin-RevId: 260576998
Automatic generation of spirv::AccessChainOp (de)serialization needs the (de)serialization emitters to handle arguments specified as Variadic<...>. To handle this correctly, such an argument can only be the last entry in the arguments list.
Add a test to (de)serialize spirv::AccessChainOp
PiperOrigin-RevId: 260532598
It's quite common that we want to put further constraints on the matched
multi-result op's specific results. This CL enables referencing symbols
bound to source op with the `__N` syntax.
PiperOrigin-RevId: 260122401
Per tacit agreement, individual dialects should now live in lib/Dialect/Name
with headers in include/mlir/Dialect/Name and tests in test/Dialect/Name.
PiperOrigin-RevId: 259896851
* Let them return `LogicalResult` so we can chain them together
with other functions returning `LogicalResult`.
* Added "Into" as the suffix to the function name and made `binary` the first parameter so that it reads more naturally.
PiperOrigin-RevId: 259311636
We already have two levels of controls in SPIRVBase.td: hasOpcode and
autogenSerialization. The former controls whether to add an entry to
the dispatch table, while the latter controls whether to autogenerate
the op's (de)serialization method specialization. This is enough for
our cases. Remove the indirection from processOp to processOpImpl
to simplify the picture.
PiperOrigin-RevId: 259308711
Since the serialization of EntryPointOp contains the name of the
function as well, the function serialization emits the function name
using OpName instruction, which is used during deserialization to get
the correct function name.
PiperOrigin-RevId: 259158784
This specific PatternRewriter will allow for exposing hooks in the future that are only useful for the conversion framework, e.g. type conversions.
PiperOrigin-RevId: 258818122
For ops in SPIR-V dialect that are a direct mirror of SPIR-V
operations, the serialization/deserialization methods can be
automatically generated from the Op specification. To enable this an
'autogenSerialization' field is added to SPV_Ops. When set to non-zero, this will enable the automatic (de)serialization function generation.
Also adding tests that verify the spv.Load, spv.Store and spv.Variable ops are serialized and deserialized correctly. To fully support these tests, also add serialization and deserialization of float types and spv.ptr types.
PiperOrigin-RevId: 258684764
These ops should not belong to the std dialect.
This CL extracts them in their own dialect and updates the corresponding conversions and tests.
PiperOrigin-RevId: 258123853
Serialization/deserialization of the following SPIR-V instructions is added as well:
OpFunction
OpFunctionParameter
OpFunctionEnd
OpReturn
PiperOrigin-RevId: 257869806
This CL splits the lowering of affine to LLVM into 2 parts:
1. affine -> std
2. std -> LLVM
The conversions mostly consist of splitting concerns between the affine and non-affine worlds from existing conversions.
Short-circuiting of affine `if` conditions was never tested or exercised and is removed in the process, it can be reintroduced later if needed.
LoopParametricTiling.cpp is updated to reflect the newly added ForOp::build.
PiperOrigin-RevId: 257794436
cuMemHostRegister expects the size of registered memory in bytes whereas the
memref descriptor in memref_t contains the number of elements. Get the actual
size in bytes instead.
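In essence, the fix derives the byte count from the element count, roughly as sketched below (the descriptor field handling is illustrative):
```
#include <cstddef>
#include <cstdint>

// Number of elements times the element size gives the byte count that
// cuMemHostRegister expects for a contiguous buffer.
std::size_t registeredSizeInBytes(const int64_t *sizes, std::size_t rank,
                                  std::size_t elementSizeInBytes) {
  std::size_t numElements = 1;
  for (std::size_t i = 0; i < rank; ++i)
    numElements *= static_cast<std::size_t>(sizes[i]);
  return numElements * elementSizeInBytes;
}
```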
PiperOrigin-RevId: 257589116
JSON spec into the SPIRVBase.td file. This is done incrementally to only import those opcodes that are needed, through use of the added define_opcode.sh script.
PiperOrigin-RevId: 257517343
Extend the utility that converts affine loop nests to support other types of
loops by abstracting away common behavior through templates. This also
slightly simplifies the existing Affine to GPU conversion by always passing in
the loop step as an additional kernel argument even though it is a known
constant. If it is used, it will be propagated into the loop body by the
existing canonicalization pattern and can be further constant-folded, otherwise
it will be dropped by canonicalization.
This prepares for the common loop abstraction that will be used for converting to GPU kernels, which is conceptually close to Linalg loops, while keeping the existing conversion operational.
PiperOrigin-RevId: 257172216
Some operations need to override the default behavior of builders, in
particular region-holding operations such as affine.for or tf.graph want to
inject default terminators into the region upon construction, which default
builders won't do. Provide a flag that disables the generation of default
builders so that the custom builders could use the same function signatures.
This is an intentionally low-level and heavy-weight feature that requires the
entire builder to be implemented, and it should be used sparingly. Injecting
code into the end of a default builder would depend on the naming scheme of the
default builder arguments that is not visible in the ODS. Checking that the
signature of a custom builder conflicts with that of a default builder to
prevent emission would require teaching ODG to differentiate between types and
(optional) argument names in the generated C++ code. If this flag ends up
being used a lot, we should consider adding traits that inject specific code
into the default builder.
PiperOrigin-RevId: 256640069
This tool allows executing MLIR IR snippets written in the GPU dialect on a CUDA-capable GPU. For this to work, a working CUDA install is required and the build has to be configured with MLIR_CUDA_RUNNER_ENABLED set to 1.
PiperOrigin-RevId: 256551415
This CL introduces a new syntax for creating multi-result ops and access their
results in result patterns. Specifically, if a multi-result op is unbound or
bound to a name without a trailing `__N` suffix, it will act as a value pack
and expand to all its values. If a multi-result op is bound to a symbol with
`__N` suffix, only the N-th result will be extracted and used.
PiperOrigin-RevId: 256465208
As with Functions, Module will soon become an operation, which is value-typed. This eases the transition from Module to ModuleOp. A new class, OwningModuleRef, is provided to allow owning a reference to a Module; it will auto-delete the held module on destruction.
PiperOrigin-RevId: 256196193
Move the data members out of Function and into a new impl storage class 'FunctionStorage'. This allows for Function to become value typed, which will greatly simplify the transition of Function to FuncOp (given that FuncOp is also value typed).
PiperOrigin-RevId: 255983022
In ODS, right now we use StringAttrs to emulate enum attributes. It is
suboptimal if the op actually can and wants to store the enum as a
single integer value; we are paying extra cost on storing and comparing
the attribute value.
This CL introduces a new enum attribute subclass that is backed by IntegerAttr. The downside with IntegerAttr-backed enum attributes is
that the assembly form now uses integer values, which is less obvious
than the StringAttr-backed ones. However, that can be remedied by
defining custom assembly form with the help of the conversion utility
functions generated via EnumsGen.
Choices are given to the dialect writers to decide which one to use for
their enum attributes.
PiperOrigin-RevId: 255935542
Split out the command line parser for translate methods into a standalone class, similar to the splitting up of mlir-opt to reuse functionality with different initializations.
PiperOrigin-RevId: 255225790
The actual transformation from PTX source to a CUDA binary is now factored out,
enabling compiling and testing the transformations independently of a CUDA
runtime.
MLIR still has to be built with NVPTX target support for the conversions to be built and tested.
PiperOrigin-RevId: 255167139
Enable reusing the real mlir-opt main from unit tests and in cases where additional initialization needs to happen before main is invoked (e.g., when using different command line flag libraries).
PiperOrigin-RevId: 254764575
This CL adds the basic SPIR-V serializer and deserializer for converting
SPIR-V module into the binary format and back. Right now only an empty module with addressing model and memory model is supported; (de)serialization of other components will be added gradually in subsequent CLs.
The purpose of this library is to enable importing SPIR-V binary modules
to run transformations on them and exporting SPIR-V modules to be consumed
by execution environments. The focus is transformations, which inevitably
means changes to the binary module; so it is not designed to be a general
tool for investigating the SPIR-V binary module and does not guarantee
roundtrip equivalence (at least for now).
PiperOrigin-RevId: 254473019
https://www.khronos.org/registry/spir-v/specs/1.0/SPIRV.html#OpTypeImage.
Add new enums to describe Image dimensionality, Image Depth, Arrayed
information, Sampling, Sampler User information, and Image format.
Doesn't support the optional Access Qualifier at this stage.
Fix Enum generator for tblgen to add "_" at the beginning if the enum
starts with a number.
PiperOrigin-RevId: 254091423
Support for ops with variadic operands/results will come later; but right now
a proper message helps to avoid deciphering confusing error messages later in
the compilation stage.
PiperOrigin-RevId: 254071820
This name has caused some confusion because it suggests that it's running op verification (and that this verification isn't getting run by default).
PiperOrigin-RevId: 254035268
Conversions from dialect A to dialect B depend on both A and B. Therefore, it
is reasonable for them to live in a separate library that depends on both
DialectA and DialectB library, and does not force dependents of DialectA or DialectB to also link in the conversion. Create the directory layout for the
conversions and move the Standard to LLVM dialect conversion as the first
example.
PiperOrigin-RevId: 253312252
This converts entire loops into threads/blocks. No check on the size of the
block or grid, or on the validity of parallelization is performed, it is under
the responsibility of the caller to strip-mine the loops and to perform the
dependence analysis before calling the conversion.
PiperOrigin-RevId: 253189268
This CL enables verification code generation for variadic operands and results.
In verify(), we use fallback getter methods to access all the dynamic values
belonging to one static variadic operand/result to reuse the value range
calculation there.
PiperOrigin-RevId: 252288219
This CL added getODSOperands() and getODSResults() as fallback getter methods for
getting all the dynamic values corresponding to a static operand/result (which
can be variadic). It should provide a uniform way of calculating the value ranges.
All named getter methods are layered on top of these methods now.
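Illustratively (a sketch of the layering, not the verbatim generated code; the op and operand names are hypothetical):
```
#include "mlir/IR/Operation.h"
using namespace mlir;

class MyOp {
public:
  // Fallback getter: returns all dynamic values belonging to the index-th
  // static operand, which may be variadic.
  Operation::operand_range getODSOperands(unsigned index);

  // Named getters are thin wrappers over the fallback getter.
  Value *input() { return *getODSOperands(0).begin(); }
  Operation::operand_range inputs() { return getODSOperands(1); }
};
```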
PiperOrigin-RevId: 252284270
Enum attributes can be defined using `EnumAttr`, which requires all its cases
to be defined with `EnumAttrCase`. To facilitate the interaction between
`EnumAttr`s and their C++ consumers, add a new EnumsGen TableGen backend to generate a few common utilities, including an enum class, `llvm::DenseMapInfo` for the enum class, and conversion functions from/to strings.
This is controlled via the `-gen-enum-decls` and `-gen-enum-defs` command-line
options of `mlir-tblgen`.
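A hedged sketch of the flavor of generated utilities; the enum, names, and helpers below are illustrative of the pattern, not the exact generated API:
```
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/StringRef.h"
#include <cstdint>

enum class CmpKind : uint32_t { EQ = 0, NE = 1 };

// String conversion helpers of the kind EnumsGen emits.
llvm::StringRef stringifyCmpKind(CmpKind kind) {
  switch (kind) {
  case CmpKind::EQ:
    return "eq";
  case CmpKind::NE:
    return "ne";
  }
  return "";
}

llvm::Optional<CmpKind> symbolizeCmpKind(llvm::StringRef str) {
  if (str == "eq")
    return CmpKind::EQ;
  if (str == "ne")
    return CmpKind::NE;
  return llvm::None;
}
```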
PiperOrigin-RevId: 252209623
Considered adding more placeholders to designate types in the replacement pattern, but decided for now to stick to the simpler approach. This should at least enable specifying constraints across operands/results/attributes, and we can start getting rid of the special cases.
PiperOrigin-RevId: 251564893
When manipulating generic operations, such as in dialect conversion /
rewriting, it is often necessary to view a list of Values as operands to an
operation without creating the operation itself. The absence of such a view makes dialect conversion patterns, among others, use magic numbers to obtain specific operands from a list of rewritten values when converting an operation.
Introduce XOpOperandAdaptor classes that wrap an ArrayRef<Value *> and provide
accessor functions identical to those available in XOp. This makes it possible
for conversions to use these adaptors to address the operands with names rather
than rely on their position in the list. The adaptors are generated from ODS
together with the actual operation definitions.
This is another step towards making dialect conversion patterns specific for a
given operation.
Illustrate the approach on conversion patterns in the standard to LLVM dialect
conversion.
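For illustration, a hand-written sketch of what such an adaptor provides; the op and operand names are hypothetical:
```
#include "mlir/IR/Value.h"
#include "llvm/ADT/ArrayRef.h"
using namespace mlir;

// Hypothetical adaptor for an op with operands (lhs, rhs): named accessors
// over a plain list of values, without creating the operation itself.
struct AddFOpOperandAdaptor {
  explicit AddFOpOperandAdaptor(ArrayRef<Value *> values) : operands(values) {}
  Value *lhs() { return operands[0]; }
  Value *rhs() { return operands[1]; }

private:
  ArrayRef<Value *> operands;
};
```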
PiperOrigin-RevId: 251232899
Similar to arguments and results, now we require region definition in ops to
be specified as a DAG expression with the 'region' operator. This way we can
specify the constraints for each region and optionally give the region a name.
Two kinds of region constraints are added, one allowing any region, and the
other requires a certain number of blocks.
--
PiperOrigin-RevId: 250790211
This allows specifying $x to refer to an op's named argument (operand or attribute) or result. Skip variadic operands/results for now, pending discussion of their autogenerated accessors.
This adds a new predicate, following feedback on the naming, but does not remove the old one. Post feedback I'll do that, potentially in a follow-up.
--
PiperOrigin-RevId: 250720003
Report errors using the file and line location using SourceMgr's diagnostic reporting. Reduce some horizontal white spacing too.
--
PiperOrigin-RevId: 250193646
This CL sets up the basic structure for a SPIR-V dialect: operation
definition specification, dialect registration, testing, etc.
A single op, FMul, is defined and tested to showcase.
The SPIR-V dialect aims to be a simple proxy for the SPIR-V binary format
to enable straightforward and lightweight conversion from/to the binary
format. Ops in this dialect should stay at the same semantic level and
try to be a mechanical mapping to the corresponding SPIR-V instructions;
but they can deviate representationally to allow using MLIR mechanisms.
--
PiperOrigin-RevId: 250040830
This tracks the location by recording all the ops in the source pattern and using the fused location for the transformed op. The locations are tracked via the rewrite state, which is a bit heavyweight; this will be addressed (and the need for the extra array will go away) in a follow-up that changes to matchAndRewrite.
--
PiperOrigin-RevId: 249986555
This adds the basic passes needed and ties them into mlir-opt. Also adds two specific unit tests that exercise them.
Next step is a standalone quantizer tool and additional cleanup.
Tested:
ninja check-mlir
--
PiperOrigin-RevId: 249167690
Previously we forced the C++ namespace to be `NS` if `SomeOp` is defined as `NS_SomeOp`. This is too rigid as it does not support nested namespaces well. This CL adds a "namespace" field into the Dialect class to allow flexible namespaces.
--
PiperOrigin-RevId: 249064981
Originally, ExecutionEngine was created before MLIR had a proper pass
management infrastructure or an LLVM IR dialect (using the LLVM target
directly). It has been running a bunch of lowering passes to convert the input
IR from Standard+Affine dialects to LLVM IR and, later, to the LLVM IR dialect.
This is no longer necessary and is even undesirable for compilation flows that
perform their own conversion to the LLVM IR dialect. Drop this integration and
make ExecutionEngine accept only the LLVM IR dialect. Users of the
ExecutionEngine can call the relevant passes themselves.
--
PiperOrigin-RevId: 249004676
This CL performs post-commit cleanups.
It adds the ability to specify which shared libraries to load dynamically in ExecutionEngine. The linalg integration test is updated to use a shared library.
Additional minor cleanups related to LLVM lowering of Linalg are also included.
--
PiperOrigin-RevId: 248346589
Adding the additional layer of directory was discussed offline and matches the Target/ tree. The names match the de facto convention we seem to be following, where the C++ namespace is ^(.+)Ops/$ matched against the directory name.
This is in preparation for patching the Quantizer into this tree, which would have been confusing without moving the Quantization dialect to its more proper home. It is left to others to move other dialects if desired.
Tested:
ninja check-mlir
--
PiperOrigin-RevId: 248171982
This CL extends the execution engine to allow the additional resolution of symbol names that have been registered explicitly. This allows linking static library symbols that have not been explicitly exported with the -rdynamic linking flag (which is deemed too intrusive).
--
PiperOrigin-RevId: 247969504
If the attribute needs to exist for the op to be valid, then there is no need to use dyn_cast_or_null, as the op would be invalid in the cases where the cast fails; just use cast.
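For example (the attribute name and type below are illustrative):
```
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Operation.h"
using namespace mlir;

int64_t getRequiredWidth(Operation *op) {
  // Before: defensive, even though a missing/mistyped attribute means the op
  // is invalid anyway:
  //   auto attr = op->getAttr("width").dyn_cast_or_null<IntegerAttr>();
  // After: the attribute must exist on a valid op, so cast directly.
  auto attr = op->getAttr("width").cast<IntegerAttr>();
  return attr.getInt();
}
```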
--
PiperOrigin-RevId: 247617696
This CL adds support for functions in the Linalg dialect to run with mlir-cpu-runner.
For this purpose, this CL adds BufferAllocOp, BufferDeallocOp, LoadOp and StoreOp to the Linalg dialect as well as their lowering to LLVM. To avoid collisions with mlir::LoadOp/StoreOp (which should really become mlir::affine::LoadOp/StoreOp), the mlir::linalg namespace is added.
The execution uses a dummy linalg_dot function that just returns for now. In the future a proper library call will be used.
--
PiperOrigin-RevId: 247476061
Simple mechanism to allow specifying arbitrary function declarations. The modelling will not cover all cases, so allow a means for users to declare a method that they will define in their C++ files. The goal is to allow full C++ flexibility to cover cases not modelled.
--
PiperOrigin-RevId: 245889819
This CL implements the previously unsupported parsing for Range, View and Slice operations.
A pass is introduced to lower to the LLVM dialect.
Tests are moved out of C++ land and into mlir/test/Examples.
This allows better fitting within standard developer workflows.
--
PiperOrigin-RevId: 245796600