Introduce a new generator for the MLIR tablegen driver that consumes LLVM IR
intrinsic definitions and produces MLIR ODS definitions. This is useful to
bulk-generate MLIR operations equivalent to existing LLVM IR intrinsics, such
as additional arithmetic instructions or NVVM.
A test exercising the generation is also added. It reads the main LLVM
intrinsics file and produces ODS to make sure the TableGen model remains in
sync with what is used in LLVM.
Differential Revision: https://reviews.llvm.org/D72926
When lowering `loop.if` to `spv.selection` we explicitly create
a selection header block before the control flow diverges and a
merge block where control flow subsequently converges.
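The resulting structure is roughly as sketched below (the value and block
names are purely illustrative):

```mlir
spv.selection {
  spv.BranchConditional %cond, ^then, ^merge
^then:
  // ... converted contents of the `loop.if` then-region ...
  spv.Branch ^merge
^merge:
  spv._merge
}
```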
Differential Revision: https://reviews.llvm.org/D72836
This makes the local variable `implies` have the correct
type to satisfy ArrayRef's constructor:
/*implicit*/ constexpr ArrayRef(const T (&Arr)[N])
Hopefully this should please GCC 5.
Differential Revision: https://reviews.llvm.org/D72924
In SPIR-V, when a new version is introduced, it is possible that some
existing extensions will be incorporated into it so that they become
implicitly declared when targeting the new version. This affects
conversion target specification because we need to take this into
account when deciding which extensions are allowed.
A capability may also imply other capabilities; for example, the
`Shader` capability implies the `Matrix` capability. This should also
be taken into consideration when preparing the conversion target:
when we specify that a capability is allowed, all its recursively
implied capabilities are also allowed.
This commit adds utility functions to query implied extensions for
a given version and implied capabilities for a given capability,
and updates SPIRVConversionTarget to use them.
This commit also fixes a bug in the availability spec. When a symbol
(op or enum case) can be enabled by an extension, we should drop
its minimum version requirement. Being enabled by an extension
naturally means the symbol can be used by *any* SPIR-V version
as long as the extension is supported. The grammar still encodes
the 'version' field for such cases, but it should be interpreted
differently: rather than expressing a minimum version requirement,
it says the symbol becomes core at that specific version.
Differential Revision: https://reviews.llvm.org/D72765
By default, for an enum attribute, we generate a list of equality
comparisons for all supported cases inside its predicate. This list
can be fairly large for certain SPIR-V enum attributes. Instead, we
already have such a list generated by EnumsGen in the symbolize
functions. Leverage that to simplify the generated C++ code.
Differential Revision: https://reviews.llvm.org/D72763
Certain SPIR-V capabilities are only available in certain SPIR-V versions
or extensions. Also, a SPIR-V capability may implicitly declare other
capabilities.
This commit updates gen_spirv_dialect.py to support generating such
information into SPIRVBase.td. It requires us to topologically sort
all capabilities because now a capability can refer to another one.
This commit also registers a few extensions because their symbols are
used by capability availability.
Note that this commit hasn't updated SPIRVConversionTarget to take
such relationships into consideration yet. That will be done in a
follow-up commit.
Differential Revision: https://reviews.llvm.org/D72760
This is (more?) usable by GDB pretty printers and seems nicer to write.
There's one tricky caveat that in C++14 (LLVM's codebase today) the
static constexpr member declaration is not a definition - so odr use of
this constant requires an out of line definition, which won't be
provided (that'd make all these trait classes more annoying/expensive
to maintain). But the use of this constant in the library implementation
is/should always be in a non-odr context - only two unit tests needed to
be touched to cope with this/avoid odr using these constants.
Based on/expanded from D72590 by Christian Sigg.
Summary:
MLIR unlike LLVM IR supports multidimensional vector types. Such types are
lowered to nested LLVM IR arrays wrapping an LLVM IR vector for the innermost
dimension of the MLIR vector. MLIR supports constants of such types using
ElementsAttr for values. Introduce support for converting ElementsAttr into
LLVM IR Constant Aggregates recursively. This enables translation of
multidimensional vector constants from MLIR to LLVM IR.
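As an illustration (a sketch only; the exact quoted LLVM type spelling may
differ), a constant such as the following now translates to a nested LLVM IR
constant aggregate of type `[2 x <4 x float>]`:

```mlir
%0 = llvm.mlir.constant(dense<1.0> : vector<2x4xf32>) : !llvm<"[2 x <4 x float>]">
```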
Differential Revision: https://reviews.llvm.org/D72846
Summary:
* Add shaped container type interface which allows inferring the shape, element
type and attribute of a shaped container type separately. Show usage by way of
a tensor type inference trait which combines the shape & element type in
inferring a tensor type;
- All components need not be specified;
- Attribute is added to allow for layout attribute that was previously
discussed;
* Expand the test driver to make it easier to test new creation instances
(adding new operands or ops with attributes or regions would trigger build
functions/type inference methods);
- The verification part will be moved out of the test and to verify method
instead of ops implementing the type inference interface in a follow up;
* Add MLIRContext as an arg to make it possible to create types for ops without
arguments, regions or a location;
* Also move out the section in OpDefinitions doc to separate ShapeInference doc
where the shape function requirements can be captured;
- Part of this would move to the shape dialect and/or shape dialect ops be
included as subsection of this doc;
* Update ODS's variable usage to match camelBack format for builder,
state and arg variables;
- I could have split this out, but I had to make some changes around
these and the inconsistency bugged me :)
Differential Revision: https://reviews.llvm.org/D72432
Summary:
This diff moves the conversion pass declaration closer to its definition
and makes the namespacing of passes consistent with the rest of the
infrastructure (i.e. `mlir::linalg::createXXXPass` -> `mlir::createXXXPass`).
Reviewers: ftynse, jpienaar, mehdi_amini
Subscribers: rriddle, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72766
The current implementation of the LLVM-to-MLIR translation could not handle
functions used as constant values in instructions. The handling is added
trivially as `llvm.mlir.constant` can define constants of function type using
SymbolRef attributes, which works even for functions that have not been
declared yet.
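For example, the importer can now produce IR along these lines (a sketch with
an assumed callee name and type spelling):

```mlir
// @callee may be declared later in the module.
%fn = llvm.mlir.constant(@callee) : !llvm<"void ()*">
```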
This commit defines a new SPIR-V dialect attribute for specifying
a SPIR-V target environment. It is a dictionary attribute containing
the SPIR-V version, supported extension list, and allowed capability
list. A SPIRVConversionTarget subclass is created to take in the
target environment and set proper dynamically legal ops by querying
the op availability interface of SPIR-V ops to make sure they are
available in the specified target environment. All existing conversions
targeting SPIR-V are changed to use this SPIRVConversionTarget. It
probes whether the input IR has a `spv.target_env` attribute;
otherwise, it uses the default target environment: SPIR-V 1.0 with
Shader capability and no extra extensions.
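As an illustration, the attribute can be attached to a module roughly as
follows (the field names and encodings below are assumptions, not the
verbatim attribute syntax):

```mlir
module attributes {
  spv.target_env = {
    version = 0 : i32,        // SPIR-V 1.0
    extensions = ["SPV_KHR_storage_buffer_storage_class"],
    capabilities = [1 : i32]  // Shader
  }
} {
}
```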
Differential Revision: https://reviews.llvm.org/D72256
Summary:
This allows for users to cache printer state, which can be costly to recompute. Each of the IR print methods gain a new overload taking this new state class.
Depends On D72293
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72294
Summary:
This was previously disabled as FunctionType TypeAttrs could not be roundtripped in the IR. This has been fixed, so we can now generically print FuncOp.
Depends On D72429
Reviewed By: jpienaar, mehdi_amini
Differential Revision: https://reviews.llvm.org/D72642
Summary:
This diff fixes issues with the semantics of linalg.generic on tensors that appeared when converting directly from HLO to linalg.generic.
The changes are self-contained within MLIR and can be captured and tested independently of XLA.
The linalg.generic and indexed_generic ops are updated to:
To allow progressive lowering from the value world (a.k.a tensor values) to
the buffer world (a.k.a memref values), a linalg.generic op accepts
mixing input and output ranked tensor values with input and output memrefs.
```
%1 = linalg.generic #trait_attribute %A, %B {other-attributes} :
  tensor<?x?xf32>,
  memref<?x?xf32, stride_specification>
  -> (tensor<?x?xf32>)
```
In this case, the number of outputs (args_out) must match the sum of (1) the
number of output buffer operands and (2) the number of tensor return values.
The semantics is that the linalg.indexed_generic op produces (i.e.
allocates and fills) its return values.
Tensor values must be legalized by a buffer allocation pass before most
transformations can be applied. Such legalization moves tensor return values
into output buffer operands and updates the region argument accordingly.
Transformations that create control-flow around linalg.indexed_generic
operations are not expected to mix with tensors because SSA values do not
escape naturally. Still, transformations and rewrites that take advantage of
tensor SSA values are expected to be useful and will be added in the near
future.
Subscribers: bmahjour, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72555
Summary: bfloat16 doesn't have a valid APFloat format, so we have to use double semantics when storing it. This change makes sure that hexadecimal values can be round-tripped properly given this fact.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D72667
Summary:
When converting splat constants for nested sequential LLVM IR types wrapped in
MLIR, the constant conversion was erroneously assuming it was always possible
to recursively construct a constant of a sequential type given only one value.
Instead, wait until all sequential types are unpacked recursively before
constructing a scalar constant and wrapping it into the surrounding sequential
type.
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72688
Summary:
This is based on code constantly checking for an attribute on
a module and instead represents the distinct operation with a different
op, which can then be used to provide better filtering.
Reviewers: herhut, mravishankar, antiagainst, rriddle
Reviewed By: herhut, antiagainst, rriddle
Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72336
I used the codemod python tool to do this with the following commands:
codemod 'tensorflow/mlir/blob/master/include' 'llvm/llvm-project/blob/master/mlir/include'
codemod 'tensorflow/mlir/blob/master' 'llvm/llvm-project/blob/master/mlir'
codemod 'tensorflow/mlir' 'llvm-project/llvm'
Differential Revision: https://reviews.llvm.org/D72244
Summary:
The visibility defines the structural reachability of the symbol within the IR. Symbols can define one of three visibilities:
* Public
The symbol may be accessed from outside of the visible IR. We cannot assume that we can observe all of the uses of this symbol.
* Private
The symbol may only be referenced from within the operations in the current symbol table, via SymbolRefAttr.
* Nested
The symbol may be referenced by operations in symbol tables above the current symbol table, as long as each symbol table parent also defines a non-private symbol. This allows for referencing the symbol from outside of the defining symbol table, while retaining the ability for the compiler to see all uses.
These properties help to reason about the properties of a symbol, and will be used in a follow up to implement a dce pass on dead symbols.
A few examples of what this would look like in the IR are shown below:
module @public_module {
  // This function can be accessed by 'live.user'
  func @nested_function() attributes { sym_visibility = "nested" }

  // This function cannot be accessed outside of 'public_module'
  func @private_function() attributes { sym_visibility = "private" }
}

// This function can only be accessed from within this module.
func @private_function() attributes { sym_visibility = "private" }

// This function may be referenced externally.
func @public_function()

"live.user"() {uses = [@public_module::@nested_function,
                       @private_function,
                       @public_function]} : () -> ()
Depends On D72043
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D72044
Summary:
This enables tracking calls that cross symbol table boundaries. It also simplifies some of the implementation details of CallableOpInterface, i.e. there can only be one region within the callable operation.
Depends On D72042
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72043
Summary: This updates the use list algorithms to support querying from a specific symbol, allowing for the collection and detection of nested references. This works by walking the parent "symbol scopes" and applying the existing algorithm at each level.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72042
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflict with function types, making it currently impossible to round-trip function types(and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D72429
Summary: Introduce m_Constant() which allows matching a constant operation without forcing the user also to capture the attribute value.
Differential Revision: https://reviews.llvm.org/D72397
Summary:
This diff makes it easier to create a `linalg.reshape` op
and adds an EDSC builder api test to exercise the new builders.
Reviewers: ftynse, jpienaar
Subscribers: mehdi_amini, rriddle, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72580
Summary:
- update zero_extendi and sign_extendi in edsc/intrinsic namespace
- Builder API test for zero_extendi and sign_extendi
Differential Revision: https://reviews.llvm.org/D72298
Thus far we can only generate the same set of methods even for
operations in different dialects. This is problematic for dialects that
want to generate additional operation class methods programmatically,
e.g., a special builder method or attribute getter method. Apparently
we cannot update the OpDefinitionsGen backend every time when such
a need arises. So this CL introduces a hook into the OpDefinitionsGen
backend to allow dialects to emit additional methods and traits to
operation classes.
Differential Revision: https://reviews.llvm.org/D72514
Summary: Some data values have a different storage width than the corresponding MLIR type, e.g. bfloat is currently stored as a double.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D72478
We were seeing some occasional build failures that would come and go.
It appeared to be this missing dependence.
Differential Revision: https://reviews.llvm.org/D72419
This change fixes the build on Windows, so that cblas_interface.dll
exports functions correctly and an implib is created and installed
correctly.
Currently, LLVM cannot be consumed on Windows after it has been
installed in a location because cblas_interface.lib is not
created/installed, thus failing the import check in `LLVMExports.cmake`.
Differential Revision: https://reviews.llvm.org/D72384
Summary:
This reduces the complexity of OperationPrinter and simplifies the code by quite a bit. The SSANameState is now held by ModuleState. This is in preparation for a future revision that molds ModuleState into something that can be used by users for caching the printer state, as well as for implementing printAsOperand style methods.
Depends On D72292
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D72293
This patch fixes a test failure on a non-intel (PowerPC64) box.
The two affine.load ops are independent and hence LLVM may reorder them.
The CHECK lines are modified to support the reordered case.
Differential Revision: https://reviews.llvm.org/D72435
Introduce a set of functions that promote a memref argument of a `gpu.func` to
workgroup memory using memory attribution. The promotion boils down to
additional loops performing the copy from the original argument to the
attributed memory in the beginning of the function, and back at the end of the
function using all available threads. The loop bounds are specified so as to
adapt to any size of the workgroup. These utilities are intended to compose
with other existing utilities (loop coalescing and tiling) in cases where the
distribution of work across threads is uneven, e.g. copying a 2D memref with
only the threads along the "x" dimension. Similarly, specialization of the
kernel to specific launch sizes should be implemented as a separate pass
combining constant propagation and canonicalization.
Introduce a simple attribute-driven pass to test the promotion transformation
since we don't have a heuristic at the moment.
Differential revision: https://reviews.llvm.org/D71904
Summary:
This diff implements the progressive lowering of insert_strided_slice.
Two cases appear:
1. when the source and dest vectors have different ranks, extract the dest
   subvector at the proper offset and reduce to case 2.
2. when they have the same rank N:
   a. if the source and dest type are the same, the insertion is trivial:
      just forward the source
   b. otherwise, iterate over all N-1 D subvectors and create an
      extract/insert_strided_slice/insert replacement, reducing the problem
      to vectors of rank N-1.
This combines properly with the other conversion patterns to lower all the way to LLVM.
Reviewers: ftynse, rriddle, AlexEichenberger, andydavis1, tetuante, nicolasvasilache
Reviewed By: andydavis1
Subscribers: merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72317
Summary:
This diff implements the progressive lowering of strided_slice to either:
1. extractelement + insertelement for the 1-D case
2. extract + optional strided_slice + insert for the n-D case.
This combines properly with the other conversion patterns to lower all the way to LLVM.
Appropriate tests are added.
Reviewers: ftynse, rriddle, AlexEichenberger, andydavis1, tetuante
Reviewed By: andydavis1
Subscribers: merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72310
Summary: Right now the path for each lib in whole_archive_link when MSVC is used as the compiler is not a full path - and it's not even the correct path when VS is used to build. This patch sets the lib path to a full path using CMAKE_CFG_INTDIR which means the path will be correct regardless of whether ninja, make or VS is used and it will always be a full path.
Reviewers: denis13, jpienaar
Reviewed By: jpienaar
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, llvm-commits, asmith
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72403
Summary: This reduces the complexity of ModuleState and simplifies the code. A future revision will mold ModuleState into something that can be used by users for caching of printer state, as well as for implementing printAsOperand style methods.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D72292
Summary:
This diff adds lowering of the linalg.reshape op to LLVM.
A new descriptor is created with fields initialized as follows:
1. allocatedPtr, alignedPtr and offset are copied from the source descriptor
2. sizes are copied from the static destination shape
3. strides are copied from the static strides collected with `getStridesAndOffset`
Only the static case in which the target view conforms to strided memref
semantics is supported. Other cases are left for future work and will be added on
a per-need basis.
Reviewers: ftynse, mravishankar
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72316
Summary:
This diff adds a new operation to linalg to allow reshaping of an
existing view into a new view in the same buffer at the same offset.
More specifically:
The `linalg.reshape` op produces a new view whose sizes are a reassociation
of the original `view`. Depending on whether or not the reassociated
MemRefType is contiguous, the resulting memref may require explicit alloc
and copies.
A reassociation is defined as a continuous grouping of dimensions and is
represented with an affine map array attribute. In the future, non-continuous
groupings may be allowed (i.e. permutations, reindexings etc).
For now, it is assumed that either:
1. a reassociation produces and consumes contiguous MemRefType or,
2. the reshape op will be folded into its consumers (by changing the shape
of the computations).
All other cases are undefined behavior and a reshape op may not lower to
LLVM if it cannot be proven statically that it does not require alloc+copy.
A reshape may either collapse or expand dimensions, depending on the
relationship between source and target memref ranks. The verification rule
is that the reassociation maps are applied to the memref with the larger
rank to obtain the memref with the smaller rank. In the case of a dimension
expansion, the reassociation maps can be interpreted as inverse maps.
Examples:
```mlir
// Dimension collapse (i, j) -> i' and k -> k'
%1 = linalg.reshape %0 [(i, j, k) -> (i, j),
                        (i, j, k) -> (k)] :
  memref<?x?x?xf32, stride_spec> into memref<?x?xf32, stride_spec_2>
```
```mlir
// Dimension expansion i -> (i', j') and (k) -> (k')
%1 = linalg.reshape %0 [(i, j, k) -> (i, j),
                        (i, j, k) -> (k)] :
  memref<?x?xf32, stride_spec> into memref<?x?x?xf32, stride_spec_2>
```
The relevant invalid and roundtripping tests are added.
Reviewers: AlexEichenberger, ftynse, rriddle, asaadaldien, yangjunpro
Subscribers: kiszk, merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72168
Summary: This diff reimplements getStridesAndOffset in a significantly simpler way by operating on the AffineExpr and calling into simplifyAffineExpr instead of rolling its own saturating arithmetic.
As a consequence it becomes quite simple to extend the behavior of getStridesAndOffset to encompass more cases by manipulating the AffineExpr more directly.
The divisions are still filtered out and continue to yield fully dynamic strides.
Simplifying the divisions is left for a later time if compelling use cases arise.
Relevant tests are added.
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72098
This is needed to consume mlir after it has been installed out of the source
tree. Without this, consuming mlir results in a build error.
Differential Revision: https://reviews.llvm.org/D72232
Summary:
Prior to this, "ninja install-mlir-headers" failed with an error indicating
the missing target. Verified that from a clean build, the installed
headers include generated files.
Subscribers: mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72045
Summary: This fixes the return value of helper methods on the base range class.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72127
Previously we only checked that each field is of the correct
mlir::Attribute subclass. This commit enhances the check to also consider
the attributes' types, by leveraging the constraints already
encoded in TableGen attribute definitions.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D72162
This fixes the error:
```
mlir/include/mlir/Dialect/Linalg/Utils/Utils.h:72:3: error: from definition of 'template<class LoopTy> mlir::edsc::GenericLoopNestRangeBuilder<LoopTy>::GenericLoopNestRangeBuilder(llvm::ArrayRef<mlir::edsc::ValueHandle*>, llvm::ArrayRef<mlir::Value>)' [-fpermissive]
GenericLoopNestRangeBuilder(ArrayRef<edsc::ValueHandle *> ivs,
```
This was tested independently on a Docker image with gcc-5 by jpienaar@
This should fix the error:
```
mlir/include/mlir/Dialect/Linalg/Utils/Utils.h:72:3: error: from definition of 'template<class LoopTy> mlir::edsc::GenericLoopNestRangeBuilder<LoopTy>::GenericLoopNestRangeBuilder(llvm::ArrayRef<mlir::edsc::ValueHandle*>, llvm::ArrayRef<mlir::Value>)' [-fpermissive]
GenericLoopNestRangeBuilder(ArrayRef<edsc::ValueHandle *> ivs,
```
This commit fixes shader ABI attributes to use `spv.` as the prefix
so that they match the dialect's namespace. This enables us to add
verification hooks in the SPIR-V dialect to verify them.
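For example, the attributes now look roughly like this (a sketch; the exact
payload fields are illustrative):

```mlir
func @kernel(
    %arg0: f32 {spv.interface_var_abi = {binding = 0 : i32, descriptor_set = 0 : i32}})
    attributes {spv.entry_point_abi = {local_size = dense<[32, 1, 1]> : vector<3xi32>}} {
  return
}
```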
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72062
Summary:
This changes the implementation of OpResult to have some of the results be represented inline in Value, via a pointer-int pair of Operation*+result number, and the rest being trailing objects on the main operation. The full details of the new representation are given in the proposal here:
https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
The only difference between here and the above proposal is that we only steal 2-bits for the Value kind instead of 3. This means that we can only fit 2-results inline instead of 6. This allows for other users to steal the final bit for PointerUnion/etc. If necessary, we can always steal this bit back in the future to save more space if 3-6 results are common enough.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72020
This commit updates gen_spirv_dialect.py to query the grammar and
generate availability spec for various enum attribute definitions
and all defined ops.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72095
Summary:
This diff adds support to allow `linalg.generic` and
`linalg.indexed_generic` to take tensor input and output
arguments.
The subset of output tensor operand types must appear
verbatim in the result types after an arrow. The parser,
printer and verifier are extended to accommodate this
behavior.
The Linalg operations now support variadic ranked tensor
return values. This extension exhibited issues with the
current handling of NativeCall in RewriterGen.cpp. As a
consequence, an explicit cast to `SmallVector<Value, 4>`
is added in the proper place to support the new behavior
(better suggestions are welcome).
Relevant cleanups and name uniformization are applied.
Relevant invalid and roundtrip tests are added.
Reviewers: mehdi_amini, rriddle, jpienaar, antiagainst, ftynse
Subscribers: burmako, shauheen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72022
Lots of SPIR-V ops take enum attributes and certain enum cases
need extra capabilities or extensions to be available. This commit
extends the mechanism to allow specifying availability specs on enum cases.
Extra utility functions are generated for the corresponding enum
classes to return the availability requirement. The availability
interface implementation for a SPIR-V op now goes over all enum
attributes to collect the availability requirements.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D71947
* replaceAllUsesWith may be supplied with a null value.
* some compilers fail to implicitly convert single result operations to
OpaqueValue, so add an explicit OpOperand::set(Value) method.
Summary: This is part of an ongoing cleanup and uniformization work.
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72084
Summary:
This is part of an ongoing cleanup and uniformization work.
This diff performs 3 types of cleanups:
1. Uniformize transformation names.
2. Replace all pattern operands that need not be captured by `$_`
3. Replace all usage of pattern captured op by the normalized `op` name (instead of positional parameters such as `$0`)
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72081
Summary: This is part of an ongoing cleanup and uniformization work.
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72079
Summary: This is part of an ongoing cleanup and uniformization work.
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72078
This allows us to include the definitions of these attributes in
other files without pulling in all dependencies for lowering.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72054
Summary:
This commit fixes links to code directories and uses doc links on
mlir.llvm.org where possible. The docs in the TableGen dialect definition
are also updated to reflect recent developments.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72051
for (const auto &x : llvm::zip(..., ...))
->
for (auto x : llvm::zip(..., ...))
The return type of zip() is a wrapper that wraps a tuple of references.
> warning: loop variable 'p' is always a copy because the range of type 'detail::zippy<detail::zip_shortest, ArrayRef<long> &, ArrayRef<long> &>' does not return a reference [-Wrange-loop-analysis]
The issue is that /WHOLEARCHIVE is interpreted differently in LLD, which needs the same exact path as the .lib; whereas link.exe can take the library name, without a path or extension, if that was already supplied on the cmd-line. I'll write a follow-up patch to fix the issue in LLD.
Fixes: warning: comparison of integers of different signs: 'const unsigned int' and '(anonymous namespace)::OperationPrinter::(anonymous enum at F:\llvm-project\mlir\lib\IR\AsmPrinter.cpp:1444:3)' [-Wsign-compare]
Summary: A new class is added, IRMultiObjectWithUseList, that allows for representing an IR use list that holds multiple sub-values (used in this case for OpResults). This class provides all of the same functionality as the base IRObjectWithUseList, but for specific sub-values. This saves a word per operation result and is a necessary step in optimizing the layout of operation results. For now the use list is placed on the operation itself, so zero-result operations grow by a word. When the work for optimizing layout is finished, this can be moved back to being a trailing object based on memory/runtime benchmarking.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D71955
Currently when you build the `install` target, TableGen files don't get
installed.
TableGen files are needed when authoring new MLIR dialects, but right
now they're missing when using the pre-built binaries.
Differential Revision: https://reviews.llvm.org/D71958
Summary: The successor operand counts are directly tied to block operands anyways, and this simplifies the trailing objects of Operation(i.e. one less computation to perform).
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D71949
SPIR-V has a few mechanisms to control op availability: version,
extension, and capabilities. These mechanisms are considered as
different availability classes.
This commit introduces basic definitions for modelling SPIR-V
availability classes. Specifically, an `Availability` class is
added to SPIRVBase.td, along with two subclasses: MinVersion
and MaxVersion for versioning. SPV_Op is extended to take a
list of `Availability`. Each `Availability` instance carries
information for generating op interfaces for the corresponding
availability class and also the concrete availability
requirements.
With the availability spec on ops, we can now auto-generate the
op interfaces of all SPIR-V availability classes and also
synthesize the op's implementations of these interfaces. The
interface generation is done via new TableGen backends
-gen-avail-interface-{decls|defs}. The op's implementation is
done via -gen-spirv-avail-impls.
Differential Revision: https://reviews.llvm.org/D71930
This commit expands on the steps of defining a new SPIR-V op and
also provides pointers on how to define a new SPIR-V specific type.
Differential Revision: https://reviews.llvm.org/D71928
The conversion from std.and/std.or to spv.LogicalAnd/spv.LogicalOr is
only valid for boolean (i1) types. Modify BinaryOpPattern in
StandardToSPIRV.td to allow limiting the type of the operands for
which the pattern is applied.
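A sketch of the intended behavior (operand names are placeholders):

```mlir
%0 = and %a, %b : i1    // converted to: spv.LogicalAnd %a, %b : i1
%1 = and %x, %y : i32   // not matched by this pattern anymore
```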
Differential Revision: https://reviews.llvm.org/D71881
Summary:
This commit updates links to SPIR-V dialect code to LLVM monorepo
on GitHub. It also points to the operation doc on mlir.llvm.org.
Reviewers: mravishankar, denis13, ftynse
Reviewed By: ftynse
Subscribers: merge_guards_bot, mehdi_amini, rriddle, jpienaar, burmako, shauheen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71926
Summary:
`mlir-translate -import-llvm test.ll` was running into a segmentation fault if `test.ll` had `float` or `double` constants.
For example,
```
%3 = fadd double 3.030000e+01, %0
```
Now, it is handled in `Importer::getConstantAsAttr` (similar behaviour as normal integers)
Added tests for FP arithmetic
Reviewers: ftynse, mehdi_amini
Reviewed By: ftynse, mehdi_amini
Subscribers: shauheen, mehdi_amini, rriddle, jpienaar, burmako, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D71912
MSVC has trouble resolving the static 'printOptionValue' from the method on llvm::cl::opt/list. This change renames the static method to avoid this conflict.
These bindings were added as an experiment, and never had a CMake configuration.
We will bring back python bindings after picking carefully our dependency and the kind
of layering we expect to expose for these bindings.
PiperOrigin-RevId: 286963717
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remains the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.
PiperOrigin-RevId: 286953824
This means that in-place, or root, updates need to use explicit calls to `startRootUpdate`, `finalizeRootUpdate`, and `cancelRootUpdate`. The major benefit of this change is that it enables in-place updates in DialectConversion, which simplifies the FuncOp pattern for example. The major downside to this is that the cases that *may* modify an operation in-place will need an explicit cancel on the failure branches(assuming that they started an update before attempting the transformation).
PiperOrigin-RevId: 286933674
This CL updates SPIR-V.md to reflect recent developments
in the SPIR-V dialect and its conversions.
Along the way, also updates the doc for define_inst.sh.
PiperOrigin-RevId: 286933546
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286930047
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286919966
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.
Add also 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').
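Illustrative uses (operand names are placeholders):

```mlir
%0 = shift_left %lhs, %rhs : i32             // lowers to llvm.shl
%1 = shift_right_signed %lhs, %rhs : i32     // lowers to llvm.ashr
%2 = shift_right_unsigned %lhs, %rhs : i32   // lowers to llvm.lshr
```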
The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to omit
dots in standard operations for now.
Closes tensorflow/mlir#226
PiperOrigin-RevId: 286803388
This requires using explicitly default copy constructor and copy assignment
operator instead of hand-rolled ones. These classes are indeed cheap to copy
since they are wrappers around a pointer to the implementation. This change
makes sure templated code can use standard type traits to understand that
copying such objects is cheap and appeases analysis tools such as clang-tidy.
PiperOrigin-RevId: 286725565
- a block argument associated with an arbitrary op can't be a valid
dimensional identifier; it has to be the block argument of either
a function op or an affine.for.
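For instance (a sketch; `%A` is assumed to be a `memref<10xf32>` defined
earlier), the induction variable of an affine.for remains a valid dimensional
identifier:

```mlir
affine.for %i = 0 to 10 {
  %v = affine.load %A[%i] : memref<10xf32>  // ok: %i is an affine.for iv
}
```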
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#331
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/331 from bondhugula:valid_dim 3273b4fcbaa31fb7b6671d93c9e42a6b2a6a4e4c
PiperOrigin-RevId: 286593693
This will allow us to lower most of gpu.all_reduce (when all_reduce
doesn't exist in the target dialect) within the GPU dialect, and only do
target-specific lowering for the shuffle op.
PiperOrigin-RevId: 286548256
This is the block argument equivalent of the existing `getAsmResultNames` hook.
Closes tensorflow/mlir#329
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/329 from plaidml:flaub-region-arg-names fc7876f2d1335024e441083cd25263fd6247eb7d
PiperOrigin-RevId: 286523299
Concatting lists in TableGen is easy, creating unique lists less so. There is no reason for duplicated op traits so we could throw an error instead, but duplicates could occur due to concatting different lists of traits in ODS (e.g., for convenience reasons), so just dedup them during Operator trait construction instead.
PiperOrigin-RevId: 286488423
Update vector transfer_read/write ops to operate on memrefs with vector element type.
This handles cases where the memref vector element type represents the minimal memory transfer unit (or multiple of the minimal memory transfer unit).
PiperOrigin-RevId: 286482115
This CL allows specifying an additional name for specifying the .td file that is used to generate the doc for a dialect. This is necessary for a dialect like Linalg which has different "types" of ops that are used in different contexts.
This CL also restructures the Linalg documentation and renames LinalgLibraryOps -> LinalgStructuredOps but is otherwise NFC.
PiperOrigin-RevId: 286450414
Adds vector ReshapeOp to the VectorOps dialect. An aggregate vector reshape operation, which aggregates multiple hardware vectors, can enable optimizations during decomposition (e.g. loading one input hardware vector and performing multiple rotate and scatter store operations to the vector output).
PiperOrigin-RevId: 286440658
Introduces some centralized methods to move towards
consistent use of i32 as vector subscripts.
Note: sizes/strides/offsets attributes are still i64
PiperOrigin-RevId: 286434133
This function template has been introduced in the early days of MLIR to work
around the absence of common type for ranges of values (operands, block
arguments, vectors, etc). Core IR now provides ValueRange for exactly this
purpose. Use it instead of the template parameter.
PiperOrigin-RevId: 286431338
Added test cases for the newly added LLVM operations and lowering features.
Closes tensorflow/mlir#300
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/300 from dfki-jugr:std_to_llvm da6168bbc1a369ae2e99ad3881fdddd82f075dd4
PiperOrigin-RevId: 286231169
This enables providing a default implementation of an interface method. This method is defined on the Trait that is attached to the operation, and thus has all of the same constraints and properties as any other interface method. This allows for interface authors to provide a conservative default implementation for certain methods, without requiring that all users explicitly define it. The default implementation can be specified via the argument directly after the interface method body:
  StaticInterfaceMethod<
    /*desc=*/"Returns whether two array of types are compatible result types for an op.",
    /*retTy=*/"bool",
    /*methodName=*/"isCompatibleReturnTypes",
    /*args=*/(ins "ArrayRef<Type>":$lhs, "ArrayRef<Type>":$rhs),
    /*methodBody=*/[{
      return ConcreteOp::isCompatibleReturnTypes(lhs, rhs);
    }],
    /*defaultImplementation=*/[{
      /// Returns whether two arrays are equal as strongest check for
      /// compatibility by default.
      return lhs == rhs;
    }]
  >
PiperOrigin-RevId: 286226054
'```mlir' is used to indicate the code block is MLIR code/should use MLIR syntax
highlighting, while '{.mlir}' was a markdown extension that used a style file
to color the background of the code block differently. The background color
extension was a custom one that we can retire given we have syntax
highlighting.
Also change '```td' to '```tablegen' to match chroma syntax highlighting
designation.
PiperOrigin-RevId: 286222976
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers(mlir::) instead of wrapping the definition with the namespace.
PiperOrigin-RevId: 286222654
Introduce affine.prefetch: op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load but has no effect on
semantics, but only on performance.
Provide lowering through std.prefetch, llvm.prefetch and map to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering -
locality hint, rw, and instr/data cache.
affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#225
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
The definition of the function template LLVM::ModuleTranslation::lookupValues
has been located in a source file. As long as it has been the only file that
actually called into the function, this did not cause any problem. However, it
creates linking issues if the function is used from other translation units.
PiperOrigin-RevId: 286203078
When memory attributions are present in `gpu.func`, require that they are of
memref type and live in memory spaces 3 and 5 for workgroup and private memory
attributions, respectively. Adapt the conversion from the GPU dialect to the
NVVM dialect to drop the private memory space from attributions as NVVM is able
to model them as local `llvm.alloca`s in the default memory space.
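A sketch of a `gpu.func` with such memory attributions (names and types are
illustrative):

```mlir
gpu.func @kernel(%arg0: f32)
    workgroup(%wg: memref<16xf32, 3>)
    private(%priv: memref<4xf32, 5>) {
  gpu.return
}
```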
PiperOrigin-RevId: 286161763
The inline interface uses two methods to check legality of inlining:
1) Can a region be inlined into another.
2) Can an operation be inlined into another.
Setting the former to true, allows the inliner to use the second for
legality checks. Add this method to the SPIR-V dialect inlining
interface.
PiperOrigin-RevId: 286041734
The lowering of MemRef types to the LLVM dialect is connected to the underlying
runtime representation of structured memory buffers. It has changed several
times in the past and reached the current state of an LLVM struct-typed
descriptor containing two pointers and all sizes. In several reported use
cases, a different, often simpler, lowering scheme is required. For example,
lowering statically-shaped memrefs to bare LLVM pointers to simplify aliasing
annotation. Split the pattern population functions into those that include
memref-related operations and the remaining ones. Users are expected to extend
TypeConverter::convertType to handle the memref types differently.
PiperOrigin-RevId: 286030610
The syntax for LLVM dialect types changed twice since this document was
introduced. First, the quoted types are only prefixed with the dialect name
`!llvm` rather than with `!llvm.type`. Second, for types that are simple enough
(e.g., MLIR identifiers), the pretty form can be used instead of the quoted
form. The relevant commits updated the dialect documentation, but not the
conversion documentation. Use the valid type names in the conversion
documentation.
PiperOrigin-RevId: 286026153
This function has become redundant with MemRefDescriptor::getElementType and is
no longer necessary. Use the MemRefDescriptor pervasively to concentrate
descriptor-related logic in one place and drop the utility function.
PiperOrigin-RevId: 286024168
The conversion procedure has been updated to reflect the most recent MemRef
descriptor proposal, but the documentation was only updated for the type
conversion, omitting the address computation section. Make sure the two
sections agree.
PiperOrigin-RevId: 286022684
This class provides a simplified mechanism for defining a switch over a set of types using llvm casting functionality. More specifically, this allows for defining a switch over a value of type T where each case corresponds to a type(CaseT) that can be used with dyn_cast<CaseT>(...). An example is shown below:
  // Traditional piece of code:
  Operation *op = ...;
  if (auto constant = dyn_cast<ConstantOp>(op))
    ...;
  else if (auto returnOp = dyn_cast<ReturnOp>(op))
    ...;
  else
    ...;

  // New piece of code:
  Operation *op = ...;
  TypeSwitch<Operation *>(op)
    .Case<ConstantOp>([](ConstantOp constant) { ... })
    .Case<ReturnOp>([](ReturnOp returnOp) { ... })
    .Default([](Operation *op) { ... });
Aside from the above, TypeSwitch supports return values, void return, multiple types per case, etc. The usability is intended to be very similar to StringSwitch.
(Using c++14 template lambdas makes everything even nicer)
More complex example of how this makes certain things easier:
  LogicalResult process(ConstantOp op);
  LogicalResult process(ReturnOp op);
  LogicalResult process(FuncOp op);

  TypeSwitch<Operation *, LogicalResult>(op)
    .Case<ConstantOp, ReturnOp, FuncOp>([](auto op) { return process(op); })
    .Default([](Operation *op) { return op->emitError() << "could not be processed"; });
PiperOrigin-RevId: 286003613
Scope and Memory Semantics attributes need to be serialized as a
constant integer value and the <id> needs to be used to specify the
value. Fix the auto-generated SPIR-V (de)serialization to handle this.
PiperOrigin-RevId: 285849431
This CL adds more Linalg EDSC ops and tests to support building pointwise operations along with conv and dilated_conv.
This also fixes a bug in the existing linalg_matmul EDSC and beefs up the test.
The current set of ops is already enough to build an interesting, albeit simple, model used internally.
PiperOrigin-RevId: 285838012
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.
PiperOrigin-RevId: 285822272
This keeps the IR valid and consistent as it is expected that each block should have a valid parent region/operation. Previously, converted blocks were kept floating without a valid parent region.
PiperOrigin-RevId: 285821687
The conversion from the Loops dialect to the Standard dialect, also known as
loop-to-cfg lowering, has been historically a function pass. It can be required
on non-Standard function Ops, in particular the recently introduced GPU
functions. Make the conversion an operation pass instead of a function pass.
PiperOrigin-RevId: 285814560
This PR targets issue tensorflow/mlir#295. It exposes the already existing
subview promotion pass as a declarative pattern.
Change-Id: If901ebef9fb53fcd0b12ecc536f6b174ce320b92
Closes tensorflow/mlir#315
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/315 from tetuante:issue295 8e5f268b6d85f31015c33505329dbd7a4db97ac5
PiperOrigin-RevId: 285801463
Similar to insert/extract vector instructions but
(1) work on 1-D vectors only
(2) allow for a dynamic index
%c3 = constant 3 : index
%0 = vector.insertelement %arg0, %arg1[%c : index] : vector<4xf32>
%1 = vector.extractelement %arg0[%c3 : index] : vector<4xf32>
PiperOrigin-RevId: 285792205
ExtractSlicesOp extracts slices of its vector operand with a specified tiling scheme.
This operation centralizes the tiling scheme around a single op, which simplifies vector op unrolling and subsequent pattern rewrite transformations.
PiperOrigin-RevId: 285761129
During the conversion from the standard dialect to the LLVM dialect,
memref-typed arguments are promoted from registers to memory and passed into
functions by pointer. This had been introduced into the lowering to work around
the absence of calling convention modeling in MLIR to enable better
interoperability with LLVM IR generated from C, and has been exercised for
several months. Make this promotion the default calling convention when
converting to the LLVM dialect. This adds the documentation, simplifies the
code and makes the conversion consistent across function operations and
function types used in other places, e.g. in high-order functions or
attributes, which would not follow the same rule previously.
PiperOrigin-RevId: 285751280
- bring op description comment in sync with the doc
- fix misformat in doc
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#317
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/317 from bondhugula:quickfix 7fcd945b318c973b2488b702874c87526855c8ef
PiperOrigin-RevId: 285574527
This is needed for calling the generator on a .td file that contains both OpInterface definitions and op definitions with DeclareOpInterfaceMethods<...> Traits.
PiperOrigin-RevId: 285465784
This will be evolved into a simple programming model for custom ops and custom layers in followup CLs.
This CL also deletes the obsolete tablegen's reference-impl.td that was using EDSCs.
PiperOrigin-RevId: 285459545
This change allows for DialectConversion to attempt folding as a mechanism to legalize illegal operations. This also expands folding support in OpBuilder::createOrFold to generate new constants when folding, and also enables it to work in the context of a PatternRewriter.
PiperOrigin-RevId: 285448440
The clamp value determines the returned predicate. Previously, the clamp value was fixed to 31 and the predicate was therefore always true. This is incorrect for partial warp reductions, but went unnoticed because the returned values happened to be zero (but it could be anything).
PiperOrigin-RevId: 285343160
This cleans up the implementation of the various operation print methods. This is done via a combination of code cleanup, adding new streaming methods to the printer(e.g. operand ranges), etc.
PiperOrigin-RevId: 285285181
Add a variant that does invoke the infer type op interface where defined. Also add an entry function that invokes the different separate argument builders for the wrapped, unwrapped and inference variants.
PiperOrigin-RevId: 285220709
This type is not used anymore now that Linalg view and subview have graduated to std and that alignment is supported on alloc.
PiperOrigin-RevId: 285213424
This allows reusing the implementation in various places by just including and permits more easily writing test functions without explicit template instantiations.
This also modifies UnrankedMemRefType to take a template type parameter since it cannot be type agnostic atm.
PiperOrigin-RevId: 285187711
Both work for the current use case, but the latter allows implementing
prefix sums and is a little easier to understand for partial warps.
PiperOrigin-RevId: 285145287
It is sometimes useful to create operations separately from the builder before insertion as it may be easier to erase them in isolation if necessary. One example use case for this is folding, as we will only want to insert newly generated constant operations on success. This has the added benefit of fixing some silent PatternRewriter failures related to cloning, as the OpBuilder 'clone' methods don't call createOperation.
PiperOrigin-RevId: 285086242
This CL adds more common information to StructuredOpsUtils.h
The n_view attribute is retired in favor of args_in + args_out but the CL is otherwise NFC.
PiperOrigin-RevId: 285000621
Add one more simplification for floordiv and mod affine expressions.
Examples:
(2*d0 + 1) floordiv 2 is simplified to d0
(8*d0 + 4*d1 + d2) floordiv 4 is simplified to 2*d0 + d1 + d2 floordiv 4.
etc.
Similarly, (4*d1 + 1) mod 2 is simplified to 1,
(2*d0 + 8*d1) mod 8 simplified to 2*d0 mod 8.
Change getLargestKnownDivisor to return int64_t to be consistent and
to avoid casting at call sites (since the return value is used in expressions
of int64_t/index type).
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#202
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/202 from bondhugula:affine b13fcb2f1c00a39ca5434613a02408e085a80e77
PiperOrigin-RevId: 284866710
Move the definition of gpu.launch_func operation from hand-rolled C++
implementation to the ODS framework. Also move the documentation. This only
performs the move and remains a non-functional change, a follow-up will clean
up the custom functions that can be auto-generated using ODS.
PiperOrigin-RevId: 284842252
This has several benefits:
* The implementation is much cleaner and more efficient.
* The ranges now have support for many useful operations: operator[], slice, drop_front, size, etc.
* Value ranges can now directly query a range for their types via 'getTypes()': e.g:
  void foo(Operation::operand_range operands) {
    auto operandTypes = operands.getTypes();
  }
PiperOrigin-RevId: 284834912
This patch closes issue tensorflow/mlir#272
We add a standalone iterator permutation transformation to Linalg.
This transformation composes a permutation map with the maps in the
"indexing_maps" attribute. It also permutes "iterator_types"
accordingly.
Change-Id: I7c1e693b8203aeecc595a7c012e738ca1100c857
Closes tensorflow/mlir#307
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/307 from tetuante:issue272 f7908d58792f4111119721885e247045104f1131
PiperOrigin-RevId: 284824102
This reorganizes the vector transformations to be more easily testable as patterns and more easily composable into fused passes in the future.
PiperOrigin-RevId: 284817474
Add some convenience build methods to SPIR-V ops and update the
lowering to use these methods where possible.
For SPIRV::CompositeExtractOp move the method to deduce type of
element based on base and indices into a convenience function. Some
additional functionality needed to handle differences between parsing
and verification methods.
PiperOrigin-RevId: 284794404
These come from a non-standard extension that is not available on GitHub, so it
only clutters the documentation source with {.mlir} or {.ebnf} tags.
PiperOrigin-RevId: 284733003
Avoid `error: could not convert '(const char*)"reduction"' from 'const char*' to 'llvm::StringLiteral'`. Tested with gcc-5.5.
PiperOrigin-RevId: 284677810
For example,
  %0 = vector.shuffle %x, %y [3 : i32, 2 : i32, 1 : i32, 0 : i32] : vector<2xf32>, vector<2xf32>
yields a vector<4xf32> result with a permutation of the elements of %x and %y.
PiperOrigin-RevId: 284657191
Each of the support classes for Block are now moved into a new header BlockSupport.h. The successor iterator class is also reimplemented as an indexed_accessor_range. This makes the class more efficient, and expands on its available functionality.
PiperOrigin-RevId: 284646792
Many ranges want similar functionality from a range type (e.g. slice/drop_front/operator[]/etc.), so these classes provide a generic implementation that may be used by many different types of ranges. This removes some code duplication, and also empowers many of the existing range types in MLIR (e.g. result type ranges, operand ranges, ElementsAttr ranges, etc.). This change only updates RegionRange and ValueRange; more ranges will be updated in followup commits.
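As a rough illustration of what the shared range functionality provides (a sketch only; the helper function and header path are assumptions):
```
#include "mlir/IR/Operation.h"

// Sketch: operator[], drop_front, size, etc. are available directly on the
// range, with no copy into a SmallVector.
static void inspect(mlir::ValueRange values) {
  if (values.empty())
    return;
  auto first = values[0];                      // operator[]
  mlir::ValueRange rest = values.drop_front(); // drop_front (slice works too)
  (void)first;
  (void)rest.size();                           // size
}
```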
PiperOrigin-RevId: 284615679
This CL starts extracting commonalities between dialects that use the structured ops abstractions. Also fixes an OSS build issue where StringRef was incorrectly used with constexpr.
PiperOrigin-RevId: 284591114
Currently, the named accessors generated for attributes return a consumer-friendly
type. But sometimes an attribute is used while transforming an existing op, and
then the returned value has to be converted back into an attribute, or the raw
`getAttr` has to be used. Generate a raw named accessor for each attribute so the
underlying attribute can be referenced without going through the string-based
interface, which also gives better compile-time verification. This allows calling
`blahAttr()` instead of `getAttr("blah")`.
"Raw" here means the accessor returns the underlying storage attribute.
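A hypothetical illustration (the op class `MyOp` and its string attribute `blah` are made up for this sketch): the raw accessor can be fed straight back into attribute-taking APIs, while the friendly accessor unwraps the value.
```
// Sketch of generated-accessor usage for a hypothetical op.
void copyBlah(MyOp source, mlir::Operation *dest) {
  // Consumer-friendly accessor: returns the unwrapped value (here a StringRef).
  llvm::StringRef value = source.blah();
  (void)value;

  // Raw accessor: returns the underlying storage attribute directly, so no
  // round-trip through getAttr("blah") or re-wrapping is needed.
  mlir::StringAttr raw = source.blahAttr();
  dest->setAttr("blah", raw);
}
```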
PiperOrigin-RevId: 284583426
The existing GPU to SPIR-V lowering created a spv.module for every
function with the gpu.kernel attribute. A better approach is to lower the
module that the function lives in (which has the attribute
gpu.kernel_module) to a spv.module operation. This better captures the
host-device separation modeled by GPU dialect and simplifies the
lowering as well.
PiperOrigin-RevId: 284574688
Unifies the vector op unrolling transformation by using the same unrolling implementation for contraction and elementwise operations.
Removes fakefork/join operations, which are no longer needed now that we have the InsertStridedSlice operation.
PiperOrigin-RevId: 284570784
This CL uses the newly expanded matcher support to easily detect when a linalg.generic has a multiply-accumulate body. A linalg.generic with such a body is rewritten as a vector contraction.
This CL additionally limits the rewrite to the case of matrix multiplication on contiguous and statically shaped memrefs for now.
Before expanding further, we should harden the infrastructure for expressing custom ops with the structured ops abstraction.
PiperOrigin-RevId: 284566659
Follows ValueRange in representing a generic abstraction over the different
ways to represent a range of Regions. This wrapper is not as full-featured as
ValueRange and only considers the current cases of interest:
MutableArrayRef<Region> and ArrayRef<std::unique_ptr<Region>>, which occur
during op construction and op region querying, respectively.
Note: ArrayRef<std::unique_ptr<Region>> allows for unset regions, so this range
returns a pointer to a Region instead of a Region.
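A hedged sketch of a helper that works with either underlying representation through RegionRange (the helper name and header path are illustrative):
```
#include "mlir/IR/Region.h"

// Sketch: counts the non-empty regions in any range of regions. Because
// ArrayRef<std::unique_ptr<Region>> may contain unset regions, the range
// yields Region* and we guard against null.
static unsigned countNonEmptyRegions(mlir::RegionRange regions) {
  unsigned count = 0;
  for (mlir::Region *region : regions)
    if (region && !region->empty())
      ++count;
  return count;
}
```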
PiperOrigin-RevId: 284563229
This CL adds support for building matchers recursively.
The following matchers are provided:
1. `m_any()` can match any value
2. `m_val(Value *)` binds to a value and must match it
3. `RecursivePatternMatcher<OpType, Matchers...>`, an n-ary pattern that matches `OpType` and whose operands must be matched exactly by `Matchers...`.
This allows building expression templates for patterns, declaratively, in a very natural fashion.
For example, the pattern `p9` defined as follows:
```
auto mul_of_muladd = m_Op<MulFOp>(m_Op<MulFOp>(), m_Op<AddFOp>());
auto mul_of_anyadd = m_Op<MulFOp>(m_any(), m_Op<AddFOp>());
auto p9 = m_Op<MulFOp>(
    m_Op<MulFOp>(mul_of_muladd, m_Op<MulFOp>()),
    m_Op<MulFOp>(mul_of_anyadd, mul_of_anyadd));
```
successfully matches `%6` in:
```
%0 = addf %a, %b: f32
%1 = addf %a, %c: f32 // matched
%2 = addf %c, %b: f32
%3 = mulf %a, %2: f32 // matched
%4 = mulf %3, %1: f32 // matched
%5 = mulf %4, %4: f32 // matched
%6 = mulf %5, %5: f32 // matched
```
Note that 0-ary matchers can be used as leaves in place of n-ary matchers. This removes the need to pass explicit `m_any()` leaves.
In the future, we may add extra patterns to specify that operands may be matched in any order.
PiperOrigin-RevId: 284469446
This allows users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.
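For instance (a sketch; `MyOp` is a hypothetical op with variadic operands, and the wrapper function is illustrative), the results of another operation can now be forwarded directly:
```
#include "mlir/IR/Builders.h"
#include "mlir/IR/Operation.h"

// Sketch only: previously producer's results would have had to be copied into
// a SmallVector<Value> before being passed to create<>.
static mlir::Operation *forward(mlir::OpBuilder &builder, mlir::Location loc,
                                mlir::Type resultType,
                                mlir::Operation *producer) {
  return builder.create<MyOp>(loc, resultType, producer->getResults())
      .getOperation();
}
```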
PiperOrigin-RevId: 284360710
This class represents a generic abstraction over the different ways to represent a range of Values: ArrayRef<Value *>, operand_range, result_range. This class will allow for removing the many instances of explicit SmallVector<Value *, N> construction. It has the same memory cost as ArrayRef, and only suffers cost from indexing (if/else-ing over the different underlying representations).
This change only updates a few of the existing usages, with more to be changed in followups; e.g. 'build' API.
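A brief sketch of the intended usage (illustrative only; the helper name is an assumption): a single function taking ValueRange can be called with an op's operands, its results, or an explicit container of values, without building a temporary vector.
```
#include "mlir/IR/Operation.h"

// Sketch: works uniformly for op->getOperands(), op->getResults(), or an
// ArrayRef/SmallVector of values, with ArrayRef-like cost.
static bool allSameType(mlir::ValueRange values) {
  if (values.empty())
    return true;
  mlir::Type first = values.getTypes().front();
  for (mlir::Type type : values.getTypes())
    if (type != first)
      return false;
  return true;
}
```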
PiperOrigin-RevId: 284307996
This adds an additional filtering mode for printing after a pass that checks to see if the pass actually changed the IR before printing it. This "change" detection is implemented using a SHA1 hash of the current operation and its children.
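One way to approximate this kind of change detection (a sketch of the idea only, not the pass-manager implementation; the helper name is an assumption): hash the textual form of the operation and compare the fingerprint before and after the pass.
```
#include <array>
#include <cstdint>
#include <string>

#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/SHA1.h"
#include "llvm/Support/raw_ostream.h"
#include "mlir/IR/Operation.h"

// Sketch: printing is recursive, so nested children are covered by the hash.
static std::array<uint8_t, 20> fingerprint(mlir::Operation *op) {
  std::string buffer;
  llvm::raw_string_ostream os(buffer);
  op->print(os);
  os.flush();
  return llvm::SHA1::hash(llvm::ArrayRef<uint8_t>(
      reinterpret_cast<const uint8_t *>(buffer.data()), buffer.size()));
}
```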
PiperOrigin-RevId: 284291089
- for the symbol rules, the code was updated but the doc wasn't.
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#284
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/284 from bondhugula:doc 9aad8b8a715559f7ce61265f3da3f8a3c11b45ea
PiperOrigin-RevId: 284283712
Previously, the error case used a sentinel value, which was fragile. Also make the one `build` method invoke the other `build` to reuse the verification there.
And follow up on the suggestion to use formatv, which I missed during the previous review.
PiperOrigin-RevId: 284265762
During lowering, a spv.module might be nested within other modules (for example,
a GPU kernel module). Walk the module op to find the spv.module to
serialize.
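A minimal sketch of that walk (the helper name and header paths are assumptions; error handling for multiple or missing spv.modules is elided):
```
#include "mlir/Dialect/SPIRV/IR/SPIRVOps.h"
#include "mlir/IR/BuiltinOps.h"

// Sketch: find the first spv.module nested anywhere under the top-level
// module, rather than assuming it is an immediate child.
static mlir::spirv::ModuleOp findSPIRVModule(mlir::ModuleOp module) {
  mlir::spirv::ModuleOp result;
  module.walk([&](mlir::spirv::ModuleOp spvModule) {
    if (!result)
      result = spvModule;
  });
  return result;
}
```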
PiperOrigin-RevId: 284262550
Move the definition of the GPU launch operation from hand-rolled C++ code to
the ODS framework. This only does the move; a follow-up is necessary to clean up
users of custom functions that could be auto-generated by ODS.
PiperOrigin-RevId: 284261856
The "FunctionLike" and "IsIsolatedFromAbove" op traits are now defined as named
records in the base ODS file. Use those instead of NativeOpTrait referring to the
C++ class name in the ODS definition of LLVMFuncOp. NFC.
PiperOrigin-RevId: 284260891
Since these operations lower to [insert|extract][element|value] at LLVM
dialect level, neither element nor value would correctly reflect the meaning.
PiperOrigin-RevId: 284240727
Accept the address space of the global as a builder argument when constructing
an LLVM::GlobalOp instance. This decreases the reliance of LLVM::GlobalOp users
on the internal name of the attribute used for this purpose. Update several
uses of the address space in GPU to NVVM conversion.
PiperOrigin-RevId: 284233254
Move the definition of the GPU function operation from hand-rolled C++ code to
the ODS framework. This only does the move; a follow-up is necessary to clean up
users of custom functions that could be auto-generated by ODS.
PiperOrigin-RevId: 284233245
For ops with the infer type op interface defined, generate a build() variant that calls the inference method. This is an intermediate step towards removing the special casing of SameOperandsAndResultType & FirstAttrDerivedResultType. After that, the inference code itself would be generated, with the initial focus on shaped container types. In between, I plan to refactor these a bit to reuse generated paths. The intention is not to add the type inference trait in multiple places, but rather to take advantage of the current modelling in ODS where possible to emit it instead.
Switch the `inferReturnTypes` method to be static.
Skipping ops with regions here as I don't like the Region vs unique_ptr<Region> difference at the moment, and I want the infer return type trait to be useful for verification too. So instead, just skip it for now to avoid churn.
PiperOrigin-RevId: 284217913
GPU functions use memory attributions, a combination of Op attributes and
region arguments, to specify function-wide buffers placed in workgroup or
private memory spaces. Introduce a lowering pattern for GPU functions to be
converted to LLVM functions taking into account memory attributions. Workgroup
attributions get transformed into module-level globals with unique names
derived from function names. Private attributions get converted into
llvm.allocas inside the function body. In both cases, we inject at the
beginning of the function the IR that obtains the raw pointer to the data and
populates a MemRef descriptor based on the MemRef type of the buffer, making
attributions compose with the rest of the MemRef lowering and transparent for
use with std.load and std.store. While using raw pointers instead of
descriptors might have been more efficient, it is better implemented as a
canonicalization or a separate transformation so that non-attribution memrefs
could also benefit from it.
PiperOrigin-RevId: 284208396
It would be nice if we could detect if stats were enabled or not and use 'Requires', but this isn't possible to do at configure time.
Fixes tensorflow/mlir#296
PiperOrigin-RevId: 284200271
Updates vector ContractionOp to use proper vector masks (produced by CreateMaskOp/ConstantMaskOp).
Leverages the following canonicalizations in unrolling unit test: CreateMaskOp -> ConstantMaskOp, StridedSliceOp(ConstantMaskOp) -> ConstantMaskOp
Removes IndexTupleOp (no longer needed now that we have vector mask ops).
Updates all unit tests.
PiperOrigin-RevId: 284182168
The map entry pointed to by the iterator should be erased before adding a new
entry into blockMergeInfo, to avoid iterator invalidation.
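The underlying pitfall, illustrated generically (not the serializer's actual code; the key/value types and helper are assumptions, with blockMergeInfo standing in for the real map):
```
#include "llvm/ADT/DenseMap.h"

// Sketch: re-key an entry. Erase through the iterator first; inserting may
// trigger a rehash that invalidates 'it', so touching it after the insertion
// would be undefined behavior.
static void remapEntry(llvm::DenseMap<unsigned, unsigned> &blockMergeInfo,
                       llvm::DenseMap<unsigned, unsigned>::iterator it,
                       unsigned newKey) {
  unsigned value = it->second;
  blockMergeInfo.erase(it);                  // 'it' is still valid here...
  blockMergeInfo.try_emplace(newKey, value); // ...but would not be after this.
}
```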
Closes tensorflow/mlir#299
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/299 from denis0x0D:sandbox/reoder_erase 983be565809aa0aadfc7e92962e4d4b282f63c66
PiperOrigin-RevId: 284173235
The AddressOf operation in the LLVM dialect returns a pointer to a global
variable. The latter may be in a non-default address space as indicated by the
"addr_space" attribute. Check that the address space of the pointer returned by
AddressOfOp matches that of the referenced GlobalOp. Update the AddressOfOp
builder to respect this constraint.
PiperOrigin-RevId: 284138860