Next step towards supporting sparse tensor outputs.
Also some minor refactoring of enum constants, as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general size arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D111771
This adds a new parser and printer for text which may be a keyword or a
string. When printing, it will attempt to print the text as a keyword,
but if it has any special or non-printable characters, it will be
printed as an escaped string. When parsing, it will parse either a
valid keyword or a potentially escaped string. The printer allows for an
empty string, in which case it prints `""`.
This new function is used for printing the name in NamedAttributes, and
for printing the symbol name after the `@`. In CIRCT we are using this
to print module port names, which are conceptually similar to named
function arguments.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111683
`DefaultValuedAttr<StrAttr, "">` and `ConstantAttr<StrAttr, "">`
result in bugs in which TableGen will not recognize that the attribute
has a default value, because `""` is an empty TableGen string.
Strings no longer have special treatment. Instead, string values must be
wrapped in quotes: "\"foo\"". Two helpers, `DefaultValuedStrAttr` and
`ConstantStrAttr`, have been added to keep the code clean.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111855
From the perspective of analysis, scf::ForOp is treated as a black box. Basic block arguments do not alias with their respective OpOperands on the ForOp, so they do not participate in conflict analysis with ops defined outside of the loop.
However, bufferizesToMemoryRead and bufferizesToMemoryWrite on the scf::ForOp itself are used to determine how the scf::ForOp interacts with its surrounding ops.
Differential Revision: https://reviews.llvm.org/D111775
For each memory read, follow SSA use-def chains to find the op that produces the data being read (i.e., the most recent write). A memory write to an alias is a conflict if it takes place after the "most recent write" but before the read.
This CL introduces two main changes:
* There is a concise definition of a conflict. Given a piece of IR with InPlaceSpec annotations and a computed alias set, it is easy to compute whether this program has a conflict (see the pseudologic sketch after this list). No need to consider multiple cases such as "read of operand after in-place write" etc.
* No need to check for clobbering.
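An illustrative pseudologic sketch of this conflict definition (a hypothetical, simplified model; not the C++ implementation):
```
from dataclasses import dataclass

@dataclass
class Access:
    buffer: str    # representative of the buffer's alias set (hypothetical model)
    position: int  # program position used to order reads and writes

def is_conflict(read: Access, most_recent_write: Access, writes: list) -> bool:
    # A write to an alias is a conflict if it happens after the "most recent write"
    # (the producer of the data being read) but before the read itself.
    return any(
        w.buffer == read.buffer
        and most_recent_write.position < w.position < read.position
        for w in writes
    )
```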
Differential Revision: https://reviews.llvm.org/D111287
Allow emitting a get & set prefix for accessors generated for ops. If
enabled, the argument/return/region name is converted from
snake_case to UpperCamel and the prefix is added. The attribute also allows
generating both the current "raw" method and the prefixed one, to
make it easier to stage changes.
The option is added on the dialect and currently defaults to the existing
raw behavior. The expectation is that the staging period where both are
generated will be short lived, so the implementation is optimized for keeping
the changes local/less invasive (it just generates two functions for each
accessor with the same body - most of these internally call a helper
function). Generation can be optimized later if needed.
I'm unsure about OpAdaptor classes, as they consist entirely of get methods
(an adaptor is a named view into raw data structures), so the prefix doesn't add much.
This starts with emitting the raw-only form (the current behavior) as the
default, then one can opt in to raw & prefixed, then just prefixed. The
default in OpBase will switch to prefixed-only to be consistent with the
MLIR style guide, and the option may be removed later (I considered
allowing the prefix to be specified, but the current discussion leans towards
keeping it limited, so I stuck with that).
Also add more explicit checking for pruned functions to avoid emitting
where no function was added (and so avoid dereferencing a nullptr)
during op def/decl generation.
See https://bugs.llvm.org/show_bug.cgi?id=51916 for further discussion.
Differential Revision: https://reviews.llvm.org/D111033
Emit reduction during op vectorization instead of doing it when creating the
transfer write. This allows us to avoid broadcasting output arguments for the
reduction initial value.
Differential Revision: https://reviews.llvm.org/D111825
Part of the arith update broke UiToFp32. Fixed the lowering and included a new
test to detect a regression.
Differential Revision: https://reviews.llvm.org/D111772
I am unclear whether this is reproducible with correct IR, but at the moment the verifier for InsertSliceOp
is not powerful enough, and this triggers an infinite loop that is worth fixing independently.
Differential Revision: https://reviews.llvm.org/D111812
Improve support for variadic regions in ODS-generated operation view classes.
In particular, make generated constructors take an extra argument that
specifies the number of variadic regions if the operation has them. Previously,
there was no mechanism to specify a non-zero number of variadic regions. Also
generate named accessors to regions.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111783
MemRefType was using a wrong `isa` function in the bindings code, which
could lead to invalid IR being constructed. Also run the verifier in
memref dialect tests.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111784
Setting the nofold attribute enables packing an operand. At the moment, the attribute is set by default. The patch introduces a callback to control the flag.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111718
After removing the last LinalgOps that have no region attached we can verify there is a region. The patch performs the following changes:
- Move the SingleBlockImplicitTerminator trait further up to the structured op base class.
- Adapt the LinalgOp verification since the trait only checks whether there is 0 or 1 block.
- Introduce a getBlock method on the LinalgOp interface.
- Access the LinalgOp body using either getBlock() or getBody() if the concrete operation type is known.
This patch is a follow up to https://reviews.llvm.org/D111233.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111393
* Incorporates a reworked version of D106419 (which I have closed but has comments on it).
* Extends the standalone example to include a minimal CAPI (for registering its dialect) and a test which, from out of tree, creates an aggregate dylib and links a little sample program against it. This will likely only work today in *static* MLIR builds (until the TypeID fiasco is finally put to bed). It should work on all platforms, though (including Windows - albeit I haven't tried this exact incarnation there).
* This is the biggest pre-requisite to being able to build out of tree MLIR Python-based projects from an installed MLIR/LLVM.
* I am rather nauseated by the CMake shenanigans I had to endure to get this working. The primary complexity, above and beyond the previous patch, is that (with no reason given) it is impossible to export target properties that contain generator expressions... because, of course it isn't. In this case, the primary reason we use generator expressions on the individual embedded libraries is to support arbitrary ordering. Since that need doesn't apply to out of tree (which imports everything via FindPackage at the outset), we fall back to a more imperative way of doing the same thing if we detect that the target was imported. Gross, but I don't expect it to need a lot of maintenance.
* There should be a relatively straightforward path from here to rebase libMLIR.so on top of this facility and also make it include the CAPI.
Differential Revision: https://reviews.llvm.org/D111504
This is the first step towards supporting general sparse tensors as output
of operations. The init sparse tensor operation is used to materialize an empty sparse
tensor of a given shape and sparsity into a subsequent computation (similar to
its dense tensor init operation counterpart).
Example:
  %c = sparse_tensor.init %d1, %d2 : tensor<?x?xf32, #SparseMatrix>
  %0 = linalg.matmul
    ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111684
The type can be inferred trivially, but it is currently done as string
stitching between ODS and C++ and is not easily exposed to Python.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111712
When writing the user-facing documentation, I noticed several inconsistencies
and asymmetries in the Python API we provide. Fix them by adding:
- the `owner` property to regions, similarly to blocks;
- the `isinstance` method to any class derived from `PyConcreteAttr`,
`PyConcreteValue` and `PyConcreteAffineExpr`, similarly to `PyConcreteType`, to
enable `isa`-like calls without having to handle exceptions;
- a mechanism to create the first block in a region, as we could previously only
create blocks relative to other blocks, which is impossible in an empty region
(see the usage sketch below).
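A hedged usage sketch of the `isinstance` addition (class and method names taken from the description and the existing `mlir.ir` API; treat details as assumptions):
```
from mlir.ir import Context, IntegerAttr, IntegerType

with Context():
    attr = IntegerAttr.get(IntegerType.get_signless(32), 42)
    # isa-like check on a concrete attribute class, no exception handling required.
    if IntegerAttr.isinstance(attr):
        print(IntegerAttr(attr).value)
```
The `owner` property on regions and the first-block creation mechanism would be exercised analogously on a region obtained from an operation.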
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111556
Skip the check on "hasOperandStorage" since the array will be indexed anyway.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111696
This exposes creating a CallSiteLoc with a callee & list of frames for
callers. It follows the creation approach on the C++ side, where a list of
frames may be provided.
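A small usage sketch of the exposed builder (assuming it mirrors the C++ CallSiteLoc creation with a callee location and a list of caller frames; argument names are assumptions):
```
from mlir.ir import Context, Location

with Context():
    callee = Location.file("callee.mlir", 1, 1)
    frames = [Location.file("caller1.mlir", 10, 2), Location.file("caller2.mlir", 20, 4)]
    loc = Location.callsite(callee, frames)
    print(loc)
```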
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111670
As discussed on discord, we have never actually been able to build with the project-wide published min version of 3.14.3. The buildbot that tests the Python configuration is currently pinned to 3.19.1, and there are a number of non-version/policy controlled features that Python building relies on that make it unreliable with older versions. Some of the issues are pretty fundamental and I don't know how to do them on the older version. I think that, as an optional feature, at least advertising the PSA as in this patch is a good middle ground until the next project-wide CMake version bump.
Also moves setup logic to a macro so that everyone can use it.
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
1. To avoid two ExecutionModeOp operations using the same name, add the value of the execution mode to the name when converting to the LLVM dialect.
2. To avoid a syntax error in spv.OpLoad, add OpTypeSampledImage to SPV_Type.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D111193
By doing so, it is not necessary to get the OpOperand a second time via
getAliasingOpOperand. Also, the code is slightly more readable because we do
not have to deal with an Optional<> return value.
Differential Revision: https://reviews.llvm.org/D110918
We shouldn't broadcast the original value when doing reduction. Instead
we compute the reduction and then combine it with the original value.
Differential Revision: https://reviews.llvm.org/D111666
This patch teaches `isProjectedPermutation` and `inverseAndBroadcastProjectedPermutation`
utilities to deal with maps representing an explicit broadcast, e.g., (d0, d1) -> (d0, 0).
This extension is needed to enable vectorization of such explicit broadcast in Linalg.
Reviewed By: pifon2a, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111563
Average pool assumed the same input/output type. The result type for integers
is always i32, so it should be updated appropriately.
Reviewed By: GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D111590
Adapt CodegenStrategy to use the vector transfer lowering patterns by default.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111649
If I remember correctly this wasn't done previously because dim used to
be in the memref dialect.
Differential Revision: https://reviews.llvm.org/D111651
Some random changes that were hanging around in my workspace. Also,
a tiny step towards creating a header file for the sparse utils lib.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111589
Add a switch to code gen strategy to disable/enable the vector transfer lowering and disable it by default.
Differential Revision: https://reviews.llvm.org/D111647
Add the vector transfer patterns and introduce the max transfer rank option on the codegen strategy.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111635
This revision takes advantage of the recently added support for 0-d transfers and vector.multi_reduction that return a scalar.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D111626
This revision updates the op semantics, printer, parser and verifier to allow 0-d transfers.
Until 0-d vectors are available, such transfers have a special form that transits through vector<1xt>.
This is a stepping stone towards the longer term work of adding 0-d vectors and will help significantly reduce corner cases in vectorization.
Transformations and lowerings do not yet support this form, extensions will follow.
Differential Revision: https://reviews.llvm.org/D111559
vector.multi_reduction currently does not allow reducing down to a scalar.
This creates corner cases that are hard to handle during vectorization.
This revision extends the semantics and adds the proper transforms, lowerings and canonicalizations to allow lowering out of vector.multi_reduction to other abstractions all the way to LLVM.
In the future, when we also allow 0-d vectors, scalars will still be relevant: 0-d vectors and scalars are not equivalent on all hardware.
In the process, splice out the implementation patterns related to vector.multi_reduce into a new file.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D111442
`hint-expression` is an IntegerAttr, because it can be a combination of multiple values from the enum `omp_sync_hint_t` (Section 2.17.12 of OpenMP 5.0)
Reviewed By: ftynse, kiranchandramohan
Differential Revision: https://reviews.llvm.org/D111360
Make `raw_ostream operator<<` follow const-correctness semantics,
since it is a requirement of the FormatVariadic implementation.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111547
* Change callback signature `bool(Operation *)` -> `Optional<bool>(Operation *)`
* addDynamicallyLegalOp adds the callback to the chain
* If a callback returns an empty `Optional`, the next callback in the chain will be called (see the sketch after this list)
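An illustrative sketch of the chaining behavior (pseudologic only; the order in which chained callbacks are consulted is an assumption, not taken from the patch):
```
from typing import Callable, Optional

# Each callback returns True/False for a definite verdict, or None to defer.
LegalityCallback = Callable[[object], Optional[bool]]

def is_dynamically_legal(op: object, callbacks: list) -> Optional[bool]:
    for callback in callbacks:
        verdict = callback(op)
        if verdict is not None:
            return verdict
    return None  # no callback in the chain decided

# The second callback decides when the first one defers.
chain = [lambda op: None, lambda op: True]
assert is_dynamically_legal("some-op", chain) is True
```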
Differential Revision: https://reviews.llvm.org/D110487
Call `printType(subElemType)` instead of `os << subElemType` for them.
This allows handling type aliases inside complex types.
As a side effect, fixed `test.int` parsing.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111536
* Call `llvm_canonicalize_cmake_booleans` for all CMake options,
which are propagated to `lit.local.cfg` files.
* Use Python native boolean values instead of strings for such options.
This fixes cases where CMake variables have values other than `ON` (like `TRUE`).
This might happen due to IDE integration or due to CMake preset usage.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D110073
Operations that have the InferTypeOpInterface trait can now omit the return
types in their custom assembly formats.
Differential Revision: https://reviews.llvm.org/D111326
This relaxes vectorization of dense memrefs a bit so that affine expressions
are allowed in more outer dimensions. Vectorization of non-unit-stride
references is disabled though, since this seems ineffective anyway.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111469
Until now, we only had documentation oriented towards developers of the
bindings. Provide some documentation for users of the bindings who don't want
or need to understand the inner workings.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111540
1. Add support to vectorize induction variables of loops that are
not mapped to any vector dimension in SuperVectorize pass.
2. Fix a bug in getForInductionVarOwner.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D111370
This test is crashing 9 out of 10 runs in CI, but I can't reproduce
locally right now. Disabling to get the CI back to green and avoid
backsliding with more ASAN issues that would go unnoticed.
This moves the registry higher in the LLVM library dependency stack.
Every client of the target registry needs to link against MC anyway to
actually use the target, so we might as well move this out of Support.
This allows us to ensure that Support doesn't have includes from MC/*.
Differential Revision: https://reviews.llvm.org/D111454
Instead of hard-coding results for both Intel and AMD, let's relax
the checks to simplify the test while supporting both implementations.
Note that:
- If a new hardware implementation comes up in the future, it is likely
to pass the relaxed tests, i.e. no future maintenance burden for us.
- If something terribly wrong happens (e.g. instead of rsqrt we
execute 1/sqrt), the tests will probably catch it, since the relaxed
tests expect low precision (e.g. rsqrt(1) != 1.0).
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D111461
TensorLiteralParser::getHexAttr does an isIntOrIndexOrFloat check and properly handles index elements, but TensorLiteralParser::getAttr, which calls into it, has a mismatched check. This just makes the checks match so that index element attrs can be parsed when they are of tensor type.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D111374
The purpose of this revision is to make "write into non-writable memory" conflict detection easier to understand.
The main idea is that there is a conflict in the case of inplace bufferization if:
1. Someone writes to (an alias of) opOperand or opResult, or the to-be-bufferized op itself writes.
2. And, opOperand or opResult aliases a non-writable buffer.
Differential Revision: https://reviews.llvm.org/D111379
This commit adds a pattern to perform constant folding on linalg
generic ops which are essentially transposes. We see real cases
where model importers may generate such patterns.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D110597
Introduce support for accepting ops instead of values when constructing ops. A
single-result op can be used instead of a value, including in lists of values,
and any op can be used instead of a list of values. This is similar to, but
more powerful than, the C++ API that allows for implicitly casting an OpType to
Value if it is statically known to have a single result - the cast in Python is
based on the op dynamically having a single result, and also handles the
multi-result case. This allows building IR in a more concise way:
  op = dialect.produce_multiple_results()
  other = dialect.produce_single_result()
  dialect.consume_multiple_results(other, op)
instead of having to access the results manually
  op = dialect.produce_multiple_results()
  other = dialect.produce_single_result()
  dialect.consume_multiple_results(other.result, op.operation.results)
The dispatch is implemented directly in Python and is triggered automatically
for autogenerated OpView subclasses. Extension OpView classes should use the
functions provided in ods_common.py if they want to implement this behavior.
An alternative could be to implement the dispatch in the C++ bindings code, but
it would require forwarding opaque types through all Python functions down to a
binding call, which makes it hard to inspect them in Python, e.g., to obtain
the types of values.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111306
The convolution op is one of the remaining hard-coded Linalg operations that have no region attached. It became obsolete due to the OpDSL convolution operations. Removing it allows us to delete specialized code and tests that are not needed for the OpDSL counterparts, which rely on the standard code paths.
Tests that were only needed for the specialized implementation are removed. Tiling and fusion tests are replaced by variants using linalg.conv_2d.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111233
This reverts commit 7aebdfc4fc.
The build is broken with errors like:
GPUPasses.cpp:(.text.pybind11_object_init[pybind11_object_init]+0x118): undefined reference to `PyExc_TypeError'
After CMake 3.18, we are able to limit the scope of the
find_package(Python3 ...) search to just Development.Module. Searching
for Development will fail in manylinux builds, and isn't necessary
since we are not embedding the Python interpreter. For more information, see:
https://pybind11.readthedocs.io/en/stable/compiling.html#findpython-mode
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D111383
These kinds of functions can behave differently on these X86 chips; there
isn't really "one true answer", so we'll accept both.
Also remove spurious passes and use mattr="avx" to match the instruction
used here.
Differential Revision: https://reviews.llvm.org/D111373
Currently Affine LICM checks iterOperands and does not hoist out any
instruction containing iterOperands. We should check iterArgs instead.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D111090
Add an interface for outlineable OpenMP operations.
This patch was initially done in fir-dev and is now needed
for the upstreaming.
Reviewed By: schweitz
Differential Revision: https://reviews.llvm.org/D111310
* Need to investigate the proper solution to https://github.com/pybind/pybind11/issues/3336 or engineer something different.
* The attempt to produce an empty buffer_info as a workaround triggers asan/ubsan.
* Usage of this API does not arise naturally in practice yet, and it is more important to be asan/crash clean than have a solution right now.
* Switching back to raising an exception, even though that triggers terminate().
* This already half existed in terms of reading the raw buffer backing a DenseElementsAttr.
* Documented the precise expectations of the buffer layout.
* Extended the Python API to support construction from bitcasted buffers, allowing construction of all primitive element types (even those that lack a compatible representation in Python).
* Specifically, the Python API can now load all integer types at all bit widths and all floating point types (f16, f32, f64, bf16); see the usage sketch below.
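A small usage sketch of the extended construction path (assuming NumPy is available; this exercises the newly supported f16 case from the description rather than the full bitcast path):
```
import numpy as np
from mlir.ir import Context, DenseElementsAttr

with Context():
    # f16 buffers (among other previously unsupported element types) can now be
    # turned into a DenseElementsAttr directly from a NumPy array.
    attr = DenseElementsAttr.get(np.array([1.0, 2.0, 3.0], dtype=np.float16))
    print(attr)
```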
Differential Revision: https://reviews.llvm.org/D111284
The signature of this function was confusing. Check for hasKnownBufferizationAliasingBehavior separately when needed.
Differential Revision: https://reviews.llvm.org/D110916
It was bundling quite a lot of patterns that convert high-D
vector ops into low-D elementary ops. It might not be good
for all of the patterns to happen for a particular downstream
user. For example, `ShapeCastOpRewritePattern` rewrites
`vector.shape_cast` into data movement extract/insert ops.
Instead, split the entry point into multiple ones so users
can pull in patterns on demand.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111225
Move getInplaceableOpResult() call into bufferizableInPlaceAnalysis.
Note: The only goal of this change is to make the signature of bufferizableInPlaceAnalysis smaller. (Fewer arguments.)
Differential Revision: https://reviews.llvm.org/D110915
ConstShapeOp has a constant shape, so its type can always be static.
We still allow it to have ShapeType though.
Differential Revision: https://reviews.llvm.org/D111139
Update OpDSL to support unsigned integers by adding unsigned min/max/cast signatures. Add tests in OpDSL and on the C++ side to verify the proper signed and unsigned operations are emitted.
The patch addresses an issue brought up in https://reviews.llvm.org/D111170.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D111230
Add folding rule for std.or op when an operand has all bits set.
or(x, <all bits set>) -> <all bits set>
Differential Revision: https://reviews.llvm.org/D111206
This was causing a subsequent assert/crash when a type converter failed to convert a block argument.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D110985
Create the Arithmetic dialect that contains basic integer and floating
point arithmetic operations. Ops that did not meet this criterion were
moved to the Math dialect.
First of two atomic patches to remove integer and floating point
operations from the standard dialect. Ops will be removed from the
standard dialect in a subsequent patch.
Reviewed By: ftynse, silvas
Differential Revision: https://reviews.llvm.org/D110200
clang-tidy: fix a redundant return statement at the end of a void method.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D111251
It's nice for users to have more information when debugging failures and
these are only triggered in a failure path.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D107676
For the type lattice, we (now) use the "less specialized or equal" partial
order, leading to the bottom representing the empty set, and the top
representing any type.
This naming is more in line with the generally used conventions, where the top
of the lattice is the full set, and the bottom of the lattice is the empty set.
A typical example is the powerset of a finite set: generally, meet would be the
intersection, and join would be the union.
```
  top:    {a,b,c}
         /   |   \
    {a,b}  {a,c}  {b,c}
      |   X     X   |
     {a}   { b }   {c}
         \   |   /
  bottom:   { }
```
This is in line with the examined lattice representations in LLVM:
* lattice for `BitTracker::BitValue` in `Hexagon/BitTracker.h`
* lattice for constant propagation in `HexagonConstPropagation.cpp`
* lattice in `VarLocBasedImpl.cpp`
* lattice for address space inference code in `InferAddressSpaces.cpp`
Reviewed By: silvas, jpienaar
Differential Revision: https://reviews.llvm.org/D110766
As described on D111049, we're trying to remove the <string> dependency from error handling and replace uses of report_fatal_error(const std::string&) with the Twine() variant which can be forward declared.
Instead just emit a warning that analysis failed and the result will be treated conservatively.
Differential Revision: https://reviews.llvm.org/D111217
Implement min and max using the newly introduced std operations instead of relying on compare and select.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D111170
This fixes round-trip / ambiguity when an operation in the standard dialect would
have the same name as an operation in the default dialect.
Differential Revision: https://reviews.llvm.org/D111204
This patch extends Linalg core vectorization with support for min/max reductions
in linalg.generic ops. It enables the reduction detection for min/max combiner ops.
It also renames MIN/MAX combining kinds to MINS/MAXS to make the sign explicit for
floating point and signed integer types. MINU/MAXU should be introduced in the future
for unsigned integer types.
Reviewed By: pifon2a, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D110854
This avoids keeping references to passes that may be freed by
the time that the pass manager has finished executing (in the
non-crash case).
Fixes PR#52069
Differential Revision: https://reviews.llvm.org/D111106
This otherwise loses a lot of debugging info and results in a painful
debugging experience.
Reviewed By: mravishankar, stellaraccident
Differential Revision: https://reviews.llvm.org/D111107
We have several ways to materialize sparse tensors (new and convert) but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.
*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that after the "release" call, no further memref related operations are allowed on the tensor value. We expect the design to evolve over time, however, and arrive at a more satisfactory view of tensors and buffers eventually.
Bug:
http://llvm.org/pr52046
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D111099
This allows us to generate interfaces in a namespace,
following other TableGen'erated code.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D108311
Move the generalization pattern to the other Linalg transforms to make it available to the codegen strategy.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D110728
These are considered noops.
Bufferization will still fail on scf.execute_region ops that yield values.
This is used to make comprehensive bufferization interoperate better with external clients.
Differential Revision: https://reviews.llvm.org/D111130
This option is needed for passes that are known to reach a fix point, but may
need many iterations depending on the size of the input IR.
Differential Revision: https://reviews.llvm.org/D111058
This change allows better interop with external clients of comprehensive bufferization functions
but is otherwise NFC for the MLIR pass itself.
Differential Revision: https://reviews.llvm.org/D111121
This fixes some typos in OpDefinitions.md and DeclarativeRewrites.md,
and wraps function/class names in backticks.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D110582
The discussion in https://reviews.llvm.org/D110425 demonstrated that "packing"
may be a confusing term to describe the behavior of this op in the presence of the
attribute. Instead, indicate the intended effect of preventing the folder from
being applied.
Reviewed By: nicolasvasilache, silvas
Differential Revision: https://reviews.llvm.org/D111046
This patch is mainly to propagate the location attribute from spv.GlobalVariable to llvm.mlir.global.
It also contains three small changes.
1. Remove the restriction on UniformConstant in SPIRVToLLVM.cpp;
2. Remove the errorCheck on relaxedPrecision when deserializing SPIR-V in Deserializer.cpp;
3. In SPIRVOps.cpp, let ConstantOp take signed integers too.
Co-authored: Alan Liu <alanliu.yf@gmail.com> and Xinyi Liu <xyliuhelen@gmail.com>
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D110207
The new constructor relies on type-based dynamic dispatch and allows one to
construct call operations given an object representing a FuncOp or its name as
a string, as opposed to requiring an explicitly constructed attribute.
Depends On D110947
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110948
Constructing a ConstantOp using the default-generated API is verbose and
requires specifying the constant type twice: for the result type of the
operation and for the type of the attribute. It also requires explicitly
constructing the attribute. Provide custom constructors that take the type once
and accept a raw value instead of the attribute. This requires dynamic dispatch
based on type in the constructor. Also provide the corresponding accessors to
raw values.
In addition, provide a "refinement" class ConstantIndexOp similar to what
exists in C++. Unlike other "op view" Python classes, operations cannot be
automatically downcasted to this class since it does not correspond to a
specific operation name. It only exists to simplify construction of the
operation.
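A hedged construction sketch following the description (module paths and constructor argument forms are assumptions, not the authoritative API):
```
from mlir.ir import Context, F32Type, InsertionPoint, Location, Module
from mlir.dialects import std

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        # Type given once, raw Python value instead of an explicitly built attribute.
        cst = std.ConstantOp(F32Type.get(), 42.0)
        # "Refinement" class for index constants, built directly from a raw value.
        idx = std.ConstantIndexOp(8)
    print(module)
```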
Depends On D110946
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110947
Provide a couple of quality-of-life usability improvements for Python bindings,
in particular:
* give access to the list of types for the list of op results or block
arguments, similarly to ValueRange->TypeRange,
* allow for constructing empty dictionary arrays,
* support construction of array attributes by concatenating an existing
attribute with a Python list of attributes.
All these are required for the upcoming customization of builtin and standard
ops.
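A hedged usage sketch of the array-attribute concatenation described above (the `+` operator form is an assumption based on the description):
```
from mlir.ir import ArrayAttr, BoolAttr, Context

with Context():
    arr = ArrayAttr.get([BoolAttr.get(True)])
    # Concatenate an existing array attribute with a Python list of attributes.
    bigger = arr + [BoolAttr.get(False)]
    print(bigger)
```
Result and block-argument type lists would be queried analogously, mirroring the ValueRange->TypeRange relationship in C++.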
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110946
Add missing mlir-capi-*-test tool substitutions in order to fix CAPI
test failures when mlir is not installed yet.
Differential Revision: https://reviews.llvm.org/D110991
Include mlir_tools_dir in the PATH used in test environment,
as otherwise mlir-reduce is unable to find mlir-opt when building
standalone (and hence mlir_tools_dir != llvm_tools_dir).
Differential Revision: https://reviews.llvm.org/D110992
First the leak sanitizer has to be disabled, as even an empty script
leads to leak detection with Python.
Then we need to preload the ASAN runtime, as the main binary (python)
won't be linked against it. This will only work on Linux right now.
Differential Revision: https://reviews.llvm.org/D111004
NFC. Drop unnecessary use of OpBuilder in buildTripCountMapAndOperands.
Rename this to getTripCountMapAndOperands and remove stale comments.
Differential Revision: https://reviews.llvm.org/D110993