mlir currently fails to build on Solaris:
```
/vol/llvm/src/llvm-project/dist/mlir/lib/Conversion/VectorToLoops/ConvertVectorToLoops.cpp:78:20: error: reference to 'index_t' is ambiguous
  IndexHandle zero(index_t(0)), one(index_t(1));
                   ^
/usr/include/sys/types.h:103:16: note: candidate found by name lookup is 'index_t'
typedef short  index_t;
               ^
/vol/llvm/src/llvm-project/dist/mlir/include/mlir/EDSC/Builders.h:27:8: note: candidate found by name lookup is 'mlir::edsc::index_t'
struct index_t {
       ^
```
and many more.
Given that POSIX reserves all identifiers ending in `_t` (2.2.2 The Name Space,
<https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html>), it
seems quite unwise to use such identifiers in user code, even more so without a
distinguished prefix.
The following patch fixes this by renaming `index_t` to `index_type` in the
affected cases.
Tested on `amd64-pc-solaris2.11` and `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D72619
Summary:
Modernize some of the existing custom parsing code in the LLVM dialect.
While this reduces some boilerplate code, it also reduces the precision
of the diagnostic error messages.
Reviewers: ftynse, nicolasvasilache, rriddle
Reviewed By: rriddle
Subscribers: merge_guards_bot, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72967
Summary:
This is a simple extension to allow vectorization to work not only on GenericLinalgOp
but more generally across named ops too.
For now, this still only vectorizes matmul-like ops but is a step towards more
generic vectorization of Linalg ops.
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72942
Summary:
First step towards the consolidation
of a lot of vector related utilities
that are now all over the place
(or even duplicated).
Reviewers: nicolasvasilache, andydavis1
Reviewed By: nicolasvasilache, andydavis1
Subscribers: merge_guards_bot, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72955
Summary:
This op is the counterpart to LLVM's atomicrmw instruction. Note that
volatile and syncscope attributes are not yet supported.
This will be useful for upcoming parallel versions of `affine.for` and generally
for reduction-like semantics.
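As a hedged illustration (the exact assembly format and type spelling of the new
op are assumptions here, not its authoritative definition), an atomic
read-modify-write addition in the LLVM dialect could look roughly like:
```
// Atomically add %delta to the value stored at %ptr with acquire-release
// ordering, yielding the value that was previously stored there.
%old = llvm.atomicrmw add %ptr, %delta acq_rel : !llvm.i64
```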
Differential Revision: https://reviews.llvm.org/D72741
Summary:
This is a temporary implementation to support Flang. The LLVM-IR parser
will need to be extended in some way to support recursive types. The
exact approach here is still a work-in-progress.
Unfortunately, this won't pass roundtrip testing yet. Adding a comment
to the test file as a reminder.
Differential Revision: https://reviews.llvm.org/D72542
Summary: This field is currently not used by anything, and using a ClassID instance provides better support for more efficient classof.
Reviewers: mehdi_amini, nicolasvasilache
Reviewed By: mehdi_amini
Subscribers: merge_guards_bot, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72822
Introduce a new generator for the MLIR tablegen driver that consumes LLVM IR
intrinsic definitions and produces MLIR ODS definitions. This is useful to
bulk-generate MLIR operations equivalent to existing LLVM IR intrinsics, such
as additional arithmetic instructions or NVVM.
A test exercising the generation is also added. It reads the main LLVM
intrinsics file and produces ODS to make sure the TableGen model remains in
sync with what is used in LLVM.
Differential Revision: https://reviews.llvm.org/D72926
When lowering `loop.if` to `spv.selection` we explicitly create
a selection header block before the control flow diverges and a
merge block where control flow subsequently converges.
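A hedged sketch of the resulting structure (block layout and op spellings here
are assumptions for illustration):
```
// The selection header block branches conditionally; both paths converge at
// the merge block that terminates the `spv.selection` region.
spv.selection {
  spv.BranchConditional %cond, ^then, ^merge
^then:
  // ... body lowered from the `loop.if` then-region ...
  spv.Branch ^merge
^merge:
  spv._merge
}
```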
Differential Revision: https://reviews.llvm.org/D72836
This makes the local variable `implies` have the correct
type to satisfy ArrayRef's constructor:
  /*implicit*/ constexpr ArrayRef(const T (&Arr)[N])
Hopefully this should please GCC 5.
Differential Revision: https://reviews.llvm.org/D72924
In SPIR-V, when a new version is introduced, it is possible that some
existing extensions will be incorporated into it so that they become
implicitly declared when targeting the new version. This affects
conversion target specification because we need to take it into
account when deciding which extensions are allowed.
A capability may also imply other capabilities; for example, the
`Shader` capability implies the `Matrix` capability. This should
also be taken into consideration when preparing the conversion
target: when we specify that a capability is allowed, all of its
recursively implied capabilities are also allowed.
This commit adds utility functions to query the extensions implied by
a given version and the capabilities implied by a given capability,
and updates SPIRVConversionTarget to use them.
This commit also fixes a bug in the availability spec. When a symbol
(op or enum case) can be enabled by an extension, we should drop
its minimum version requirement: being enabled by an extension
naturally means the symbol can be used in *any* SPIR-V version
as long as the extension is supported. The grammar still encodes
the 'version' field for such cases, but it should be interpreted
in a different way: rather than a minimum version requirement,
it says the symbol becomes core at that specific version.
Differential Revision: https://reviews.llvm.org/D72765
By default, for an enum attribute, we will generate a list of equality
comparisons for all supported cases inside its predicate. This list
can be fairly large for certain SPIR-V enum attributes. Instead, we
already have such a list generated by EnumsGen in the symbolize
functions. Leverage that to simplify the generated C++ code.
Differential Revision: https://reviews.llvm.org/D72763
Certain SPIR-V capabilities are only available in certain SPIR-V versions
or extensions. Also, a SPIR-V capability may implicitly declare other
capabilities.
This commit updates gen_spirv_dialect.py to support generating such
information into SPIRVBase.td. It requires us to topologically sort
all capabilities because now a capability can refer to another one.
This commit also registers a few extensions because their symbols are
used by capability availability.
Note that this commit hasn't updated SPIRVConversionTarget to take
such relationships into consideration yet. That will be done in a
follow-up commit.
Differential Revision: https://reviews.llvm.org/D72760
This is (more?) usable by GDB pretty printers and seems nicer to write.
There's one tricky caveat that in C++14 (LLVM's codebase today) the
static constexpr member declaration is not a definition - so odr use of
this constant requires an out of line definition, which won't be
provided (that'd make all these trait classes more annoying/expensive
to maintain). But the use of this constant in the library implementation
is/should always be in a non-odr context - only two unit tests needed to
be touched to cope with this/avoid odr using these constants.
Based on/expanded from D72590 by Christian Sigg.
Summary:
MLIR, unlike LLVM IR, supports multidimensional vector types. Such types are
lowered to nested LLVM IR arrays wrapping an LLVM IR vector for the innermost
dimension of the MLIR vector. MLIR supports constants of such types using
ElementsAttr for values. Introduce support for converting ElementsAttr into
LLVM IR Constant Aggregates recursively. This enables translation of
multidimensional vector constants from MLIR to LLVM IR.
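As a hedged example (the LLVM dialect type spelling below is an assumption), a
2-D vector constant expressed with a dense ElementsAttr becomes a nested
constant aggregate:
```
// Translates to the LLVM IR constant aggregate
//   [2 x <2 x float>] [<2 x float> <float 1.0, float 2.0>,
//                      <2 x float> <float 3.0, float 4.0>]
%0 = llvm.mlir.constant(dense<[[1.0, 2.0], [3.0, 4.0]]> : vector<2x2xf32>)
     : !llvm<"[2 x <2 x float>]">
```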
Differential Revision: https://reviews.llvm.org/D72846
Summary:
* Add a shaped container type interface which allows inferring the shape, element
type and attribute of a shaped container type separately. Show usage by way of a
tensor type inference trait which combines the shape & element type in
inferring a tensor type;
- All components need not be specified;
- An attribute is added to allow for the layout attribute that was previously
discussed;
* Expand the test driver to make it easier to test new creation instances
(adding new operands or ops with attributes or regions would trigger build
functions/type inference methods);
- The verification part will be moved out of the test and to verify method
instead of ops implementing the type inference interface in a follow up;
* Add MLIRContext as an arg to make it possible to create types for ops without
arguments, regions or locations;
* Also move the relevant section of the OpDefinitions doc out to a separate
ShapeInference doc where the shape function requirements can be captured;
- Part of this would move to the shape dialect, and/or shape dialect ops would
be included as a subsection of this doc;
* Update ODS's variable usage to match camelBack format for builder,
state and arg variables;
- I could have split this out, but I had to make some changes around
these and the inconsistency bugged me :)
Differential Revision: https://reviews.llvm.org/D72432
Summary:
This diff moves the conversion pass declaration closer to its definition
and makes the namespacing of passes consistent with the rest of the
infrastructure (i.e. `mlir::linalg::createXXXPass` -> `mlir::createXXXPass`).
Reviewers: ftynse, jpienaar, mehdi_amini
Subscribers: rriddle, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72766
The current implementation of the LLVM-to-MLIR translation could not handle
functions used as constant values in instructions. The handling is added
trivially as `llvm.mlir.constant` can define constants of function type using
SymbolRef attributes, which works even for functions that have not been
declared yet.
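A hedged sketch of what this enables (the function name and type spelling are
illustrative assumptions):
```
// Materialize a reference to @callee as a constant of function type via a
// symbol reference; this works even if @callee is only declared further down
// in the module.
%fn = llvm.mlir.constant(@callee) : !llvm<"void (i32)*">
```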
This commit defines a new SPIR-V dialect attribute for specifying
a SPIR-V target environment. It is a dictionary attribute containing
the SPIR-V version, supported extension list, and allowed capability
list. A SPIRVConversionTarget subclass is created to take in the
target environment and set the proper dynamically legal ops by querying
the op availability interface of SPIR-V ops, making sure they are
available in the specified target environment. All existing conversions
targeting SPIR-V are changed to use this SPIRVConversionTarget. It
probes whether the input IR has a `spv.target_env` attribute;
otherwise, it uses the default target environment: SPIR-V 1.0 with
Shader capability and no extra extensions.
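As a hedged sketch (the dictionary keys and value encodings below are
hypothetical placeholders, not the exact attribute format), such a target
environment could be attached to a module as:
```
// Dictionary attribute describing the target: SPIR-V version, supported
// extensions, and allowed capabilities (key names and encodings are
// hypothetical).
module attributes {
  spv.target_env = {
    version = 0 : i32,
    extensions = ["SPV_KHR_storage_buffer_storage_class"],
    capabilities = [1 : i32]
  }
} {
}
```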
Differential Revision: https://reviews.llvm.org/D72256
Summary:
This allows for users to cache printer state, which can be costly to recompute. Each of the IR print methods gains a new overload taking this new state class.
Depends On D72293
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D72294
Summary:
This was previously disabled as FunctionType TypeAttrs could not be roundtripped in the IR. This has been fixed, so we can now generically print FuncOp.
Depends On D72429
Reviewed By: jpienaar, mehdi_amini
Differential Revision: https://reviews.llvm.org/D72642
Summary:
This diff fixes issues with the semantics of linalg.generic on tensors that appeared when converting directly from HLO to linalg.generic.
The changes are self-contained within MLIR and can be captured and tested independently of XLA.
The linalg.generic and indexed_generic are updated to:
To allow progressive lowering from the value world (a.k.a. tensor values) to
the buffer world (a.k.a. memref values), a linalg.generic op accepts
mixing input and output ranked tensor values with input and output memrefs.
```
%1 = linalg.generic #trait_attribute %A, %B {other-attributes} :
       tensor<?x?xf32>,
       memref<?x?xf32, stride_specification>
       -> (tensor<?x?xf32>)
```
In this case, the number of outputs (args_out) must match the sum of (1) the
number of output buffer operands and (2) the number of tensor return values.
The semantics is that the linalg.indexed_generic op produces (i.e.
allocates and fills) its return values.
Tensor values must be legalized by a buffer allocation pass before most
transformations can be applied. Such legalization moves tensor return values
into output buffer operands and updates the region argument accordingly.
Transformations that create control-flow around linalg.indexed_generic
operations are not expected to mix with tensors because SSA values do not
escape naturally. Still, transformations and rewrites that take advantage of
tensor SSA values are expected to be useful and will be added in the near
future.
Subscribers: bmahjour, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72555
Summary: bfloat16 doesn't have a valid APFloat format, so we have to use double semantics when storing it. This change makes sure that hexadecimal values can be round-tripped properly given this fact.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D72667
Summary:
When converting splat constants for nested sequential LLVM IR types wrapped in
MLIR, the constant conversion was erroneously assuming it was always possible
to recursively construct a constant of a sequential type given only one value.
Instead, wait until all sequential types are unpacked recursively before
constructing a scalar constant and wrapping it into the surrounding sequential
type.
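A hedged example of the kind of constant involved (the LLVM dialect type
spelling is an assumption): a splat over a 2-D vector type that lowers to a
nested sequential LLVM type:
```
// The conversion must first unpack the outer array dimension, then build the
// scalar constant and the innermost vector, and finally wrap the result back
// into the array type.
%cst = llvm.mlir.constant(dense<42.0> : vector<4x8xf32>)
       : !llvm<"[4 x <8 x float>]">
```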
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72688
Summary:
This is motivated by code constantly checking for an attribute on
a module; instead, the distinct operation is represented with a dedicated
op, which can be used to provide better filtering.
Reviewers: herhut, mravishankar, antiagainst, rriddle
Reviewed By: herhut, antiagainst, rriddle
Subscribers: liufengdb, aartbik, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72336
I used the codemod Python tool to do this with the following commands:
```
codemod 'tensorflow/mlir/blob/master/include' 'llvm/llvm-project/blob/master/mlir/include'
codemod 'tensorflow/mlir/blob/master' 'llvm/llvm-project/blob/master/mlir'
codemod 'tensorflow/mlir' 'llvm-project/llvm'
```
Differential Revision: https://reviews.llvm.org/D72244