This patch adds `affine.vector_load` and `affine.vector_store` ops to
the Affine dialect and lowers them to `vector.transfer_read` and
`vector.transfer_write`, respectively, in the Vector dialect.
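For illustration, a minimal sketch of the new ops and their lowering; the buffer
and index names are hypothetical and the padding value of the transfer is elided
into `%pad`:
```
%v = affine.vector_load %buf[%i, %j] : memref<100x100xf32>, vector<8xf32>
affine.vector_store %v, %buf[%i, %j] : memref<100x100xf32>, vector<8xf32>

// After lowering, roughly:
%r = vector.transfer_read %buf[%i, %j], %pad
    {permutation_map = affine_map<(d0, d1) -> (d1)>} : memref<100x100xf32>, vector<8xf32>
vector.transfer_write %r, %buf[%i, %j]
    {permutation_map = affine_map<(d0, d1) -> (d1)>} : vector<8xf32>, memref<100x100xf32>
```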
Reviewed By: bondhugula, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D79658
For IR generated by a compiler, this is really simple: you just take the
datalayout from the beginning of the file, and apply it to all the IR
later in the file. For optimization testcases that don't care about the
datalayout, this is also really simple: we just use the default
datalayout.
The complexity here comes from the fact that some LLVM tools allow
overriding the datalayout: some tools have an explicit flag for this,
some tools will infer a datalayout based on the code generation target.
Supporting this properly required plumbing through a bunch of new
machinery: we want to allow overriding the datalayout after the
datalayout is parsed from the file, but before we use any information
from it. Therefore, IR/bitcode parsing now has a callback to allow tools
to compute the datalayout at the appropriate time.
Not sure if I covered all the LLVM tools that want to use the callback.
(clang? lli? Misc IR manipulation tools like llvm-link?). But this is at
least enough for all the LLVM regression tests, and IR without a
datalayout is not something frontends should generate.
This change had some sort of weird effects for certain CodeGen
regression tests: if the datalayout is overridden with a datalayout with
a different program or stack address space, we now parse IR based on the
overridden datalayout, instead of the one written in the file (or the
default one, if none is specified). This broke a few AVR tests, and one
AMDGPU test.
Outside the CodeGen tests I mentioned, the test changes are all just
fixing CHECK lines and moving around datalayout lines in weird places.
Differential Revision: https://reviews.llvm.org/D78403
Summary:
The assembly format for std.rank expects the operand type and not the
result type after the colon.
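For example, with an unranked tensor operand (the `index` result type is implied):
```
%0 = rank %arg0 : tensor<*xf32>
```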
Differential Revision: https://reviews.llvm.org/D79857
Generally speaking, this is bad practice. It also causes the build to
break if there are editor temporary files.
Differential Revision: https://reviews.llvm.org/D79906
The Vulkan runtime wrapper will be compiled to a shared library
that is loaded by the JIT runner. Depending on LLVM libraries
means that LLVM symbols will be compiled into the shared library.
That can cause problems if we are using it with other shared
libraries depending on LLVM, notably Mesa, the open-source graphics
driver framework. The Vulkan API wrappers invoked by the JIT runner
link to the system libvulkan.so. If it's Mesa providing the
implementation, Mesa will normally try to load the system libLLVM.so
for its shader compilation. That causes issues because the JIT runner
has already loaded the Vulkan runtime wrapper, which has LLVM symbols
compiled in, so the system linker will instruct Mesa to use those symbols
instead.
Differential Revision: https://reviews.llvm.org/D79860
The CMake structure of the toy example is non-standard. Encourage people to
copy the standalone example instead.
Differential Revision: https://reviews.llvm.org/D79889
All ops of the SCF dialect now use the `scf.` prefix instead of `loop.`. This
is a part of dialect renaming.
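For example, for a hypothetical loop, only the visible prefix changes:
```
// Before:
loop.for %i = %lb to %ub step %c1 {
  "foo.op"(%i) : (index) -> ()
}
// After:
scf.for %i = %lb to %ub step %c1 {
  "foo.op"(%i) : (index) -> ()
}
```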
Differential Revision: https://reviews.llvm.org/D79844
The existing implementation of SubViewOp::getRanges relies on all
offsets/sizes/strides to be dynamic values and does not work in
combination with canonicalization. This revision adds a
SubViewOp::getOrCreateRanges to create the missing constants in the
canonicalized case.
This allows reactivating the fused pass with staged pattern
applications.
However, another issue surfaces: the SubViewOp verifier is now too
strict to allow folding. The existing folding pattern is turned into a
canonicalization pattern which rewrites memref_cast + subview into
subview + memref_cast.
The transform-patterns-matmul-to-vector can then be reactivated.
Differential Revision: https://reviews.llvm.org/D79759
Due to the extension of Liveness, Buffer Assignment can now work on nested regions. This PR provides a test case to show that the existing functionality of Buffer Assignment works properly.
Differential Revision: https://reviews.llvm.org/D79332
The current standard Alloca node is not annotated with the
MemEffect<Alloc> trait. This CL updates the Alloc and Alloca
memory-effect annotations using the latest Resource objects. Alloca
nodes will use a newly defined AutomaticAllocationScopeResource to
distinguish between Alloc and Alloca memory effects.
Differential Revision: https://reviews.llvm.org/D79620
This is only valid if the source tensor (result tensor) is statically
shaped with all unit extents when the reshape is collapsing
(expanding) dimensions.
Differential Revision: https://reviews.llvm.org/D79764
Summary:
Makes this operation runnable on CPU by generating MLIR instructions
that are eventually folded into an LLVM IR constant for the mask.
Reviewers: nicolasvasilache, ftynse, reidtatge, bkramer, andydavis1
Reviewed By: nicolasvasilache, ftynse, andydavis1
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79815
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.
In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.
The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.
The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.
Relevant tests are rewritten to use the idiomatic `offset: x, strides : [...]`-form. Bugs are corrected along the way that were not trivially visible in flattened strided memref form.
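As a hedged illustration of that form, a fully dynamic subview with the strided result type spelled out (operands and shapes are hypothetical):
```
%sv = subview %src[%i, %j][%m, %n][%s0, %s1]
    : memref<?x?xf32> to memref<?x?xf32, offset: ?, strides: [?, ?]>
```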
Lowering to LLVM is updated, simplified and now supports all cases.
A mixed static-dynamic mode test that wouldn't previously lower is added.
It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.
Differential Revision: https://reviews.llvm.org/D79662
Conversion/ folders were originally intended to store patterns for
DialectA->DialectB conversions that depend on both dialects and do not
conceptually belong to either of the dialects. As such, DialectA->DialectA
conversion does not make sense under Conversion/ and should rather live with
the dialect it operates on.
Differential Revision: https://reviews.llvm.org/D79569
cmake does not truly support dependencies on automatically generated files
which are not in the same directory as the targets which depend on them.
It works with ninja, but doesn't work with make.
This patch adds an explicit dependence so that all dialects are built
before the analysis libraries.
Differential Revision: https://reviews.llvm.org/D79805
This normalizes the name of the tablegen file to match the name of the
generated files (SideEffectInterfaces.h.inc) and the other interface
tablegen files, which all end in Interface(s).td.
Differential Revision: https://reviews.llvm.org/D79517
This reverts commit 80d133b24f.
Per Stephan Herhut: The canonicalizer pattern that was added creates
forms of the subview op that cannot be lowered.
This is shown by failing Tensorflow XLA tests such as:
tensorflow/compiler/xla/service/mlir_gpu/tests:abs.hlo.test
Will provide more details offline, they rely on logs from private CI.
Summary:
The scalar zero + splat yields more intermediate code than the direct
dense zero constant, and ultimately is lowered to exactly the same
LLVM IR operations, so no point wasting the intermediate code.
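A hedged sketch of the difference, shown with f32 for simplicity (the actual mask case uses the corresponding element type):
```
// Scalar zero followed by a splat:
%zero = constant 0.0 : f32
%v0 = splat %zero : vector<8xf32>
// Direct dense zero constant:
%v1 = constant dense<0.0> : vector<8xf32>
```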
Reviewers: nicolasvasilache, andydavis1, reidtatge
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79758
Summary:
- Fix comments in several places
- Eliminate extra ' in AST dump and adjust tests accordingly
Differential Revision: https://reviews.llvm.org/D78399
Summary:
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.
In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.
The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.
The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.
Relevant tests are rewritten to use the idiomatic `offset: x, strides : [...]`-form. Bugs are corrected along the way that were not trivially visible in flattened strided memref form.
It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.
Reviewers: ftynse, mravishankar, antiagainst, rriddle!, andydavis1, timshen, asaadaldien, stellaraccident
Reviewed By: mravishankar
Subscribers: aartbik, bondhugula, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79662
Summary:
This revision introduces a helper function to allow applying rewrite patterns, interleaved with more global transformations, in a staged fashion:
1. the first stage consists of an OwningRewritePatternList. The RewritePattern in this list are applied once and in order.
2. the second stage consists of a single OwningRewritePattern that is applied greedily until convergence.
3. the third stage consists of applying a lambda, generally used for non-local transformation effects.
This allows creating custom fused transformations where patterns can be ordered and applied at a finer granularity than a sequence of traditional compiler passes.
A test that exercises these behaviors is added.
Differential Revision: https://reviews.llvm.org/D79518
Summary:
This makes a common pattern of
`dyn_cast_or_null<OpTy>(v.getDefiningOp())` more concise.
Differential Revision: https://reviews.llvm.org/D79681
This [discussion](https://llvm.discourse.group/t/viewop-isnt-expressive-enough/991/2) raised some concerns with ViewOp.
In particular, the handling of offsets is incorrect and does not match the op description.
Note that with an elemental type change, offsets cannot be part of the type in general because sizeof(srcType) != sizeof(dstType).
However, offset is a poorly chosen term for this purpose and is renamed to byte_shift.
Additionally, for all intended purposes, trying to support non-identity layouts for this op does not bring expressive power but rather increases code complexity.
This revision simplifies the existing semantics and implementation.
This simplification effort is voluntarily restrictive and acts as a stepping stone towards supporting richer semantics: treat the non-common cases as YAGNI for now and reevaluate based on concrete use cases once a round of simplification occurred.
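A hedged sketch of the simplified form, using hypothetical SSA names for the byte shift and dynamic sizes:
```
%buf = alloc() : memref<2048xi8>
// Dynamic byte shift into the flat buffer, static result shape:
%v0 = view %buf[%shift][] : memref<2048xi8> to memref<64x4xf32>
// Dynamic byte shift and dynamic sizes:
%v1 = view %buf[%shift][%size0, %size1] : memref<2048xi8> to memref<?x4x?xf32>
```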
Differential revision: https://reviews.llvm.org/D79541
Summary: This revision introduces LinalgPromotionOptions to more easily control the application of promotion patterns. It also simplifies the different entry points into Promotion in preparation for some behavior change in subsequent revisions.
Differential Revision: https://reviews.llvm.org/D79489
This dialect contains various structured control flow operations, not only
loops; reflect this in the name. Drop the Ops suffix for consistency with other
dialects.
Note that this only moves the files and changes the C++ namespace from 'loop'
to 'scf'. The visible IR prefix remains the same and will be updated
separately. The conversions will also be updated separately.
Differential Revision: https://reviews.llvm.org/D79578
Summary:
Cast from a value interpreted as floating-point to the corresponding signed
integer value. Similar to an element-wise `static_cast` in C++, performs an
element-wise conversion operation.
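For example:
```
%0 = fptosi %arg0 : f32 to i32
```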
Differential Revision: https://reviews.llvm.org/D79373
Functions checking whether an SSA value is a valid dimension or symbol for
affine operations can be called on values defined in a detached region (a
region that is not yet attached to an operation), for example, during parsing
or operation construction. These functions will attempt to unconditionally
dereference a pointer to the parent operation of a region, which may be null
(as fixed by the previous commit, uninitialized before that). Since one cannot
know to which operation a region will be attached, conservatively treat such an
operation as not being a valid affine scope and act accordingly, instead of
crashing.
Region has a default constructor that is called when a region is constructed
while an operation is being created, and therefore before the region can be
attached to this operation. The `container` field is uninitialized, which makes
it impossible to check programmatically if a Region is attached to an operation
or not, leading to sly memory errors when this field is read. Initialize it to
nullptr by default and thus make sure one can check if a region is attached to
an operation or not.
Complex addition and subtraction are the first two binary operations on complex
numbers.
Remaining operations will follow the same pattern.
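A hedged sketch with hypothetical complex<f32> operands (op mnemonics as assumed here):
```
%sum  = addcf %a, %b : complex<f32>
%diff = subcf %a, %b : complex<f32>
```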
Differential Revision: https://reviews.llvm.org/D79479
The SideEffect interface definitions currently use string expressions to
reference custom resource objects. This CL introduces Resource objects in
tablegen definitions to simplify linking of resource reference to resource
objects.
Differential Revision: https://reviews.llvm.org/D78917
This is a wrapper around a vector of NamedAttributes that keeps track of whether it is sorted and makes a minimal effort to remain sorted (doing more, e.g., appending attributes in sorted order, could be done in a follow-up). It records whether it is sorted and, if a DictionaryAttr is queried, caches the returned DictionaryAttr along with the sortedness.
Change MutableDictionaryAttr to always return a non-null Attribute even when empty (reserving the null case for errors). To this end, change the getter to take a context as input so that the empty DictionaryAttr can be queried. Also create one instance of the empty dictionary attribute that can be reused without needing to lock the context, etc.
Update infer type op interface to use DictionaryAttr and use NamedAttrList to avoid incurring multiple conversion costs.
Fix bug in sorting helper function.
Differential Revision: https://reviews.llvm.org/D79463
vulkan-runtime-wrappers does not need MLIRSPIRVSerialization,
which is used by the ConvertGpuLaunchFuncToVulkanLaunchFunc pass
under the hood.
Differential Revision: https://reviews.llvm.org/D79577
SPIR-V ops can mix operands and attributes in the definition. These
operands and attributes are serialized in the exact order of the definition
to match SPIR-V binary format requirements. It can cause excessive
generated code bloat because we are emitting code to handle each
operand/attribute separately. So here we probe first to check whether all
the operands are ahead of attributes. Then we can serialize all operands
together.
This removes ~1000 lines of code from the generated inc file.
Differential Revision: https://reviews.llvm.org/D79446
These template functions are used in the serializer, where we can
actually directly query the opcode from the op's definition and
use that in the auto-generated serialization logic.
This removes a set of templates accounting for 319 lines from
the auto-generated inc file.
Differential Revision: https://reviews.llvm.org/D79444
Originally, these operations were folded only if all expressions in their
affine maps could be folded to a constant expression that can be then subject
to numeric min/max computation. This introduces a more advanced version that
partially folds the affine map by lifting individual constant expression in it
even if some of the expressions remain variable. The folding can update the
operation in place to use a simpler map. Note that this is not as powerful as
canonicalization, in particular this does not remove dimensions or symbols that
became useless. This allows for better composition of Linalg tiling and
promotion transformation, where the latter can handle some canonical forms of
affine.min that the folding can now produce.
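A hedged sketch of the kind of in-place simplification this enables (operand names are hypothetical; assume %c10 is a constant with value 10):
```
// Before:
%r0 = affine.min affine_map<(d0, d1) -> (d0, d1 + 2)>(%i, %c10)
// After partial folding: the constant expression is simplified in place,
// but the now-redundant dimension is not removed.
%r1 = affine.min affine_map<(d0, d1) -> (d0, 12)>(%i, %c10)
```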
Differential Revision: https://reviews.llvm.org/D79502
In the Vector to LLVM conversion, the `replaceTransferOp` function calls
into a type converter that may fail and suppresses the status. Change
the function to return the failure status instead. Since it is called
from a pattern, the failure can be readily propagated to the rest of the
infrastructure.
The list of destination load ops while evaluating producer-consumer
fusion wasn't being maintained as a set, and as such, duplicate load ops
were being added to it. Although this is harmless correctness-wise, it's
a killer efficiency-wise and it prevents interesting/useful fusions
(including, e.g., reshapes into a matmul). The reason the latter
fusions would be missed is that a slice union would be unnecessarily
required due to the duplicate load ops on a memref in the 'dst
loads' list. Since slice union is unimplemented for the local-var case,
a single destination load op that leads to local vars (such as a fusion
producing a floordiv / mod), a common case, would not get fused due to
an unnecessary union being tried with itself. (The union would actually
be the same thing, but we would bail out.)
Besides the above, this would also significantly speed up fusion as all
the unnecessary slice computations / unions, checks, etc. due to the
duplicates go away.
Differential Revision: https://reviews.llvm.org/D79547
When the folding is performed in place, the `::fold` function does not populate
its `results` argument to indicate that. (In the folding hook for single-result
operations, the result of the original operation is expected to be returned,
but it is then ignored by the wrapper.) `OperationFolder::create` would
erroneously rely on the _operation_ having zero results instead of on the
_folding_ producing zero new results to populate the list of results with those
of the original operation. This would lead to a crash for single-result ops
with in-place folds where the first result is accessed unconditionally because
the list of results was not properly populated. Use the list of values produced
by the folding instead.
Differential Revision: https://reviews.llvm.org/D79497
The types of forward references are checked to match other uses, but they
are not checked to match the definition.
```
func @forward_reference_type_check() -> (i8) {
  br ^bb2

^bb1:
  return %1 : i8

^bb2:
  %1 = "bar"() : () -> (f32)
  br ^bb1
}
```
Would be parsed and the use site of '%1' would be silently changed to
'f32'.
This commit adds a test for this case, and a check during parsing for
the types to match.
Patch by Matthew Parkinson <mattpark@microsoft.com>
Closes D79317.
Summary:
This revision adds a conservative canonicalization pattern for the MemRefCastOps that are typically inserted during ViewOp and SubViewOp canonicalization.
Ideally such canonicalizations would propagate the type to consumers, but this is not a local behavior. As a consequence, MemRefCastOps are introduced to keep type compatibility but need to be cleaned up later, in the case where more dynamic behavior than necessary is introduced.
Differential Revision: https://reviews.llvm.org/D79438
Essentially takes the lld/Common/Threads.h wrappers and moves them to
the llvm/Support/Parallel.h algorithm header.
The changes are:
- Remove policy parameter, since all clients use `par`.
- Rename the methods to `parallelSort` etc to match LLVM style, since
they are no longer C++17 pstl compatible.
- Move algorithms from llvm::parallel:: to llvm::, since they have
"parallel" in the name and are no longer overloads of the regular
algorithms.
- Add range overloads
- Use the sequential algorithm directly when 1 thread is requested
(skips task grouping)
- Fix the index type of parallelForEachN to size_t. Nobody in LLVM was
using any other parameter, and it made overload resolution hard for
for_each_n(par, 0, foo.size(), ...) because 0 is int, not size_t.
Remove Threads.h and update LLD for that.
This is a prerequisite for parallel public symbol processing in the PDB
library, which is in LLVM.
Reviewed By: MaskRay, aganea
Differential Revision: https://reviews.llvm.org/D79390
This revision allows for creating DenseElementsAttrs and accessing elements using std::complex<APInt>/std::complex<APFloat>. This allows for opaquely accessing and transforming complex values. This is used by the printer/parser to provide pretty printing for complex values. The form for complex values matches that of std::complex, i.e.:
```
// `(` element `,` element `)`
dense<(10,10)> : tensor<complex<i64>>
```
Differential Revision: https://reviews.llvm.org/D79296
This revision adds support for storing ComplexType elements inside of a DenseElementsAttr. We store complex objects as an array of two elements, matching the definition of std::complex. There is no current attribute storage for ComplexType, but DenseElementsAttr provides API for access/creation using std::complex<>. Given that the internal implementation of DenseElementsAttr is already fairly opaque, the only real complexity here is in the printing/parsing. This revision keeps it simple for now and always uses hex when printing complex elements. A followup will add prettier syntax for this.
Differential Revision: https://reviews.llvm.org/D79281
We see intermittent build errors on the windows buildbot because
mlir-opt is including Linalg headers which haven't been built yet.
This dependence should be resolved by declaring a PUBLIC dependence
on the Linalg library when building MLIROptMain.
This addresses a compilation failure on GCC 5:
error: #error This file requires compiler and library support for the
ISO C++ 2011 standard. This support must be enabled with the -std=c++11
or -std=gnu++11 compiler options.
#error This file requires compiler and library support
Differential Revision: https://reviews.llvm.org/D79439
DMA operation classes in the Standard dialect (`DmaStartOp` and `DmaWaitOp`)
provide helper functions that make numerous assumptions about the number and
order of operands, and about their types. However, these assumptions were not
checked in the verifier, leading to assertion failures or crashes when helper
functions were used on ill-formed ops. Some of the assumptions were checked in
the custom parser (and thus could not check assumption violations in ops
constructed programmatically, e.g., during rewrites) and others were not
checked at all. Introduce the verifiers for all these assumptions and drop
unnecessary checks in the parser that are now covered by the verifier.
Addresses PR45560.
Differential Revision: https://reviews.llvm.org/D79408
Summary:
Adds the loop unroll transformation for loop::ForOp.
Adds support for promoting the body of a single-iteration loop::ForOp into its containing block.
Adds check tests for loop::ForOps with dynamic and static lower/upper bounds and step.
Care was taken to share code (where possible) with the AffineForOp unroll transformation to ease maintenance and a potential future transition to a LoopLike construct on which loop transformations for different loop types can be implemented.
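As a hedged sketch (constants and the body op are hypothetical), a single-iteration loop such as:
```
loop.for %i = %c0 to %c1 step %c1 {
  "foo.op"(%i) : (index) -> ()
}
```
can be promoted by substituting the lower bound for the induction variable and inlining the body into the containing block:
```
"foo.op"(%c0) : (index) -> ()
```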
Reviewers: ftynse, nicolasvasilache
Reviewed By: ftynse
Subscribers: bondhugula, mgorny, zzheng, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79184
Adding this pattern reduces code duplication. There is no need to have a
custom implementation for lowering to llvm.cmpxchg.
Differential Revision: https://reviews.llvm.org/D78753
Portions of MLIR which depend on LLVMIR generally need to depend on
intrinsics_gen, to ensure that tablegen'd header files from LLVM are built
first. Without this, we get errors, typically about llvm/IR/Attributes.inc
not being found.
Note that previously the Linalg Dialect depended on intrinsics_gen, but it
doesn't need to, since it doesn't use LLVMIR.
Differential Revision: https://reviews.llvm.org/D79389
This revision adds support for merging identical blocks, or those with the same operations that branch to the same successors. Operands that mismatch between the different blocks are replaced with new block arguments added to the merged block.
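A hedged sketch with hypothetical ops: two blocks that contain the same operations and branch to the same successor,
```
  cond_br %cond, ^bb1, ^bb2
^bb1:
  "foo.use"(%a) : (i32) -> ()
  br ^bb3
^bb2:
  "foo.use"(%b) : (i32) -> ()
  br ^bb3
```
are merged into one block whose mismatching operand becomes a block argument supplied by the predecessors:
```
  cond_br %cond, ^bb1(%a : i32), ^bb1(%b : i32)
^bb1(%v: i32):
  "foo.use"(%v) : (i32) -> ()
  br ^bb3
```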
Differential Revision: https://reviews.llvm.org/D79134
This change removes tabs from the comments printed by the asmprinter after basic
block declarations in favor of two spaces. This is currently the only place in
the printed IR that uses tabs.
Differential Revision: https://reviews.llvm.org/D79377
Summary:
In the particular case of an insertion in a block without a terminator, the BlockBuilder insertion point should be block->end().
Adding a unit test to exercise this.
Differential Revision: https://reviews.llvm.org/D79363
This allows for walking the operations nested directly within a region, without traversing nested regions.
Differential Revision: https://reviews.llvm.org/D79056
Summary:
As with D78974, this patch implements the emulation for the store op. The
emulation is done with atomic operations. E.g., if the value being stored
is i8, rewrite the StoreOp to:
1) load a 32-bit integer
2) clear 8 bits in the loaded value
3) store the 32-bit value back
4) load a 32-bit integer
5) modify 8 bits in the loaded value
6) store the 32-bit value back
Steps 1 to 3 are done by AtomicAnd as one atomic step, and steps 4 to 6
are done by AtomicOr as another atomic step.
Differential Revision: https://reviews.llvm.org/D79272
This std::copy_n copies 8-byte data (APInt raw data) byte by byte from the
beginning of the char array. This is fine on little endian, but on big
endian the data is not copied correctly because it should be copied from
the end of the char array.
- Example of 4 byte data (such as float32)
Little endian (First 4 bytes):
Address | 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08
Data | 0xcd 0xcc 0x8c 0x3f 0x00 0x00 0x00 0x00
Big endian (Last 4 bytes):
Address | 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08
Data | 0x00 0x00 0x00 0x00 0x3f 0x8c 0xcc 0xcd
In general, when copying N (N < 8) bytes of data on big endian, the start
address should be incremented by (8 - N) bytes.
The original code has no problem when it handles 8-byte data (such as
double), even on big endian.
Differential Revision: https://reviews.llvm.org/D78076
- Exports MLIR targets to be used out-of-tree.
- mimics `add_clang_library` and `add_flang_library`.
- Fixes libMLIR.so
After https://reviews.llvm.org/D77515, libMLIR.so no longer contained
any object files. We originally had a kludge there that made it work with
the static initializers, and when switching away from that to the way the
clang shlib does it, I noticed that MLIR doesn't create an `obj.{name}` target
and doesn't export its targets to `lib/cmake/mlir`.
This is due to MLIR using `add_llvm_library` under the hood, which adds
the target to `llvmexports`.
Differential Revision: https://reviews.llvm.org/D78773
[MLIR] Fix libMLIR.so and LLVM_LINK_LLVM_DYLIB
Primarily, this patch moves all mlir references to LLVM libraries into
either LLVM_LINK_COMPONENTS or LINK_COMPONENTS. This enables magic in
the llvm cmake files to automatically replace reference to LLVM components
with references to libLLVM.so when necessary. Among other things, this
completes fixing libMLIR.so, which has been broken for some configurations
since D77515.
Unlike previously, the pattern is now that mlir libraries should almost
always use add_mlir_library. Previously, some libraries still used
add_llvm_library. However, this confuses the export of targets for use
out of tree because libraries specified with add_llvm_library are exported
by LLVM. Instead, libraries which don't need to be (or can't be) linked into
libMLIR.so can specify EXCLUDE_FROM_LIBMLIR.
A common error mode is linking with LLVM libraries outside of LINK_COMPONENTS.
This almost always results in symbol confusion or multiply defined options
in LLVM when the same object file is included as a static library and
as part of libLLVM.so. To catch these errors more directly, there's now
mlir_check_all_link_libraries.
To simplify usage of add_mlir_library, we assume that all mlir
libraries depend on LLVMSupport, so it's not necessary to separately specify
it.
tested with:
BUILD_SHARED_LIBS=on,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB + LLVM_LINK_LLVM_DYLIB.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79067
[MLIR] Move from using target_link_libraries to LINK_LIBS
This allows us to correctly generate dependencies for derived targets,
such as targets which are created for object libraries.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79243
Three commits have been squashed to avoid intermediate build breakage.
Linalg transformations are currently exposed as DRRs.
Unfortunately RewriterGen does not play well with the line of work on named linalg ops which require variadic operands and results.
Additionally, DRR is arguably not the right abstraction to expose compositions of such patterns that don't rely on SSA use-def semantics.
This revision abandons DRRs and exposes manually written C++ patterns.
Refactorings and cleanups are performed to uniformize APIs.
This refactoring will allow replacing the currently manually specified Linalg named ops.
A collateral victim of this refactoring is the `tileAndFuse` DRR, and the one associated test, which will be revived at a later time.
Lastly, the following 2 tests do not add value and are altered:
- a dot_perm tile + interchange test does not test anything new and is removed
- a dot tile + lower to loops does not need 2-D tiling and is trimmed.
Add `CreateComplexOp`, `ReOp`, and `ImOp` to the standard dialect.
This is the first step to support complex numbers.
Differential Revision: https://reviews.llvm.org/D79159
The current BufferPlacement implementation tries to find Alloc and Dealloc
operations in order to move them. However, this is a tight coupling to
standard-dialect ops which has been removed in this CL.
Differential Revision: https://reviews.llvm.org/D78993
This is useful for several reasons:
* In some situations the user can guarantee that thread-safety isn't necessary and doesn't want to pay the cost of synchronization, e.g., when parsing a very large module.
* For things like logging, threading is not desirable as the output is not guaranteed to be in a stable order.
This flag also subsumes the pass manager flag for multi-threading.
Differential Revision: https://reviews.llvm.org/D79266
In cmake, dependencies on generated files require some sophistication in the build system. At build time, files are parsed to determine which headers they depend on and these dependencies are injected into the build system. This works well with ninja, but has some constraints with the makefile generator. According to the cmake documentation, this only works reliably within the same directory.
This patch expands the usage of mlir-headers to include all generated headers and adds an mlir-generic-headers target which triggers generation of dialect-independent headers. These targets are used to express dependencies on generated headers. This is mostly handled in AddMLIR.cmake and only a few CMakeLists.txt files need to change.
Differential Revision: https://reviews.llvm.org/D79242
These libraries are distinct from other things in Analysis in that they
operate only on core IR concepts. This also simplifies dependencies
so that Dialect -> Analysis -> Parser -> IR. Previously, the parser depended
on portions of the Analysis directory as well, which sometimes
caused issues with the way the cmake makefile generator discovers
dependencies on generated files during compilation.
Differential Revision: https://reviews.llvm.org/D79240
Summary:
This is an initial version, currently supports OpString and OpLine
for autogenerated operations during (de)serialization.
Differential Revision: https://reviews.llvm.org/D79091
The current OpBuilder has a set of virtual functions required by the fact that the PatternRewriter inherits from it for convenience. The PatternRewriter is required to know about IR mutations for correctness. This revision changes the relationship to be explicit by having users register a listener with the builder instead of using inheritance/vtables. This still requires that users properly transfer the listener when creating new builders, but has several benefits:
* More than one builder can be created during pattern rewrites (assuming that the listener is properly forwarded)
* OpBuilder no longer requires a vtable, and thus does not incur the cost when a listener isn't present.
Differential Revision: https://reviews.llvm.org/D79206
Summary:
Maps ZeroExtendIOp and TruncateIOp to spirv::UConvertOp and spirv::SConvertOp.
Depends On D78974
Differential Revision: https://reviews.llvm.org/D79143
Summary:
The current implementation in SPIRVTypeConverter just unconditionally turns
everything into 32-bit if it doesn't meet the requirements of extensions or
capabilities. In this case, we can load a 32-bit value and then do bit
extraction to get the value.
Differential Revision: https://reviews.llvm.org/D78974
- Extract common logic between -convert-gpu-to-nvvm and -convert-gpu-to-rocdl.
- Cope with the fact that alloca operates on different addrspaces between NVVM
and ROCDL.
- Modernize unit tests for ROCDL dialect.
Differential Revision: https://reviews.llvm.org/D79021
Summary:
This revision cleans up a layer of complexity in ScopedContext and uses InsertGuard instead of previously manual bookkeeping.
The method `getBuilder` is renamed to `getBuilderRef` and spurious copies of OpBuilder are tracked.
This results in some canonicalizations not happening anymore in the Linalg matmul to vector test. This test is retired because relying on DRRs for this has been shaky at best. The solution will be better support to write fused passes in C++ with more idiomatic pattern composition and application.
Differential Revision: https://reviews.llvm.org/D79208
This revision adds support to allow named ops to lower to loops.
Linalg.batch_matmul successfully lowers to loops and to LLVM.
In the process, this test also activates linalg to affine loops.
However, padded convolutions do not lower to affine.load at the moment, so this revision overrides the type of the underlying load / store operation.
Differential Revision: https://reviews.llvm.org/D79135
There are three op conversion modes: Partial, Full, and Analysis. This change modifies the Partial mode to optionally take a set of non-legalizable ops. If this parameter is specified, all ops that are not legalizable (i.e. would cause full conversion to fail) are tracked throughout the partial legalization.
Differential Revision: https://reviews.llvm.org/D78788
This commit marks AllocLikeOp as MemAlloc in StandardOps.
Also, in Linalg dependency analysis, use memory effects to detect
allocation. This allows the dependency analysis to be more
general and recognize other allocation-like operations.
Differential Revision: https://reviews.llvm.org/D78705
Summary:
The purpose of this is to aid in having code behave differently on
Operations based on their Dialect without caring about the specific
Op. Additionally this is consistent with most other types supporting
isa<> and dyn_cast<>.
A Dialect matches isa<> based only on its namespace and relies on each
namespace being unique.
Differential Revision: https://reviews.llvm.org/D79088
This revision allows masked vector transfers with m-D buffers and n-D vectors to
progressively lower to m-D buffer and 1-D vector transfers.
For a vector.transfer_read, assuming a `memref<(leading_dims) x (major_dims) x (minor_dims) x type>` and a `vector<(minor_dims) x type>` are involved in the transfer, this generates pseudo-IR resembling:
```
if (any_of(%ivs_major + %offsets, <, major_dims)) {
%v = vector_transfer_read(
{%offsets_leading, %ivs_major + %offsets_major, %offsets_minor},
%ivs_minor):
memref<(leading_dims) x (major_dims) x (minor_dims) x type>,
vector<(minor_dims) x type>;
} else {
%v = splat(vector<(minor_dims) x type>, %fill)
}
```
Differential Revision: https://reviews.llvm.org/D79062
This range allows for performing many different operations on successor operands, including erasing/adding/setting. This removes the need for the explicit canEraseSuccessorOperand and eraseSuccessorOperand methods.
Differential Revision: https://reviews.llvm.org/D79077
Currently a declaration won't be generated if the method has a default implementation, meaning that operations that want to override the default have to explicitly declare the method in extraClassDeclaration. This revision adds an optional list parameter to DeclareOpInterfaceMethods to allow specifying a set of methods that should always have declarations generated, even if there is a default.
Differential Revision: https://reviews.llvm.org/D79030
This provides a general hash and comparison for checking if two operations are equivalent. This revision also optimizes the handling of result types to take advantage of how result types are stored on the operation.
Differential Revision: https://reviews.llvm.org/D79029
This class allows for mutating an operand range in-place, and provides vector like API for adding/erasing/setting. ODS now uses this class to generate mutable wrappers for named operands, with the name `MutableOperandRange <operand-name>Mutable()`
Differential Revision: https://reviews.llvm.org/D78892
The current implementation uses CrashRecoveryContext, but this only supports recovering in a certain number of cases. This revision adds a signal handler to support even more situations.
This revision was able to properly generate a reproducer for a segfault in the Inliner, that the current recovery couldn't.
Differential Revision: https://reviews.llvm.org/D78315
This revision adds a mode to the crash reproducer generator to attempt to generate a more local reproducer. This will attempt to generate a reproducer right before the offending pass that fails. This is useful for the majority of failures that are specific to a single pass, and situations where some passes in the pipeline are not registered with a specific tool.
Differential Revision: https://reviews.llvm.org/D78314
This moves the threading check to runOnOperation. This produces a much cleaner interface for the adaptor pass, and will allow for the ability to enable/disable threading in a much cleaner way in the future.
Differential Revision: https://reviews.llvm.org/D78313
Makes the relationship and function clearer. Accordingly rename getAttrList to getMutableAttrDict.
Differential Revision: https://reviews.llvm.org/D79125
Extract the sort, as expected by getWithSorted, into a static member function so that callers can get the same sorting behavior.
Differential Revision: https://reviews.llvm.org/D79011
The instructions used to convert std.cmpi cannot have i1-typed operands
according to the SPIR-V specification. A different set of operations is
specified in the SPIR-V spec for comparing boolean types. Enhance the
StandardToSPIRV lowering to target these instructions when operands to
std.cmpi operation are of i1 type.
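A hedged sketch of the intended lowering (exact assembly of the ops may differ):
```
// Comparing i1 values:
%eq = cmpi "eq", %a, %b : i1
// now lowers to a boolean comparison such as:
%0 = spv.LogicalEqual %a, %b : i1
```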
Differential Revision: https://reviews.llvm.org/D79049
On certain targets std.subview should be able to take memrefs from non-zero
addrspaces. Improve lowering logic to llvm dialect and amend the tests.
Differential Revision: https://reviews.llvm.org/D79024
Enhance lowering logic and tests so vector.transfer_read and
vector.transfer_write take memrefs on non-zero addrspaces.
Differential Revision: https://reviews.llvm.org/D79023
(A previous version of this, dd2c639c3c, was
reverted.)
Introduce op trait PolyhedralScope for ops to define a new scope for
polyhedral optimization / affine dialect purposes, thus generalizing
such scopes beyond FuncOp. Ops to which this trait is attached will
define a new scope for the consideration of SSA values as valid symbols
for the purposes of polyhedral analysis and optimization. Update methods
that check for dim/symbol validity to work based on this trait.
Differential Revision: https://reviews.llvm.org/D79060
OperationHandle mostly existed to mirror the behavior of ValueHandle.
This has become unnecessary and can be retired.
Differential Revision: https://reviews.llvm.org/D78692
Previously, they would only verify `isa<DictionaryAttr>` on such attrs
which resulted in crashes down the line from code assuming that the
verifier was doing the more thorough check introduced in this patch.
The key change here is for StructAttr to use
`CPred<"$_self.isa<" # name # ">()">` instead of `isa<DictionaryAttr>`.
To test this, introduce struct attrs to the test dialect. Previously,
StructAttr was only being tested by unittests/, which didn't verify how
StructAttr interacted with ODS.
Differential Revision: https://reviews.llvm.org/D78975
Summary:
When creating an operation with
* `AttrSizedOperandSegments` trait
* Variadic operands of only non-buildable types
* assemblyFormat to automatically generate the parser
the `builder` local variable is used, but never declared.
This adds a fix as well as a test for this case as existing ones use buildable types only.
Reviewers: rriddle, Kayjukh, grosser
Reviewed By: Kayjukh
Subscribers: mehdi_amini, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, llvm-commits
Tags: #mlir, #llvm
Differential Revision: https://reviews.llvm.org/D79004
Summary:
This change results in tests also being changed to prevent dead
affine.load operations from being folded away during rewrites.
Also move AffineStoreOp and AffineLoadOp to an ODS file.
Differential Revision: https://reviews.llvm.org/D78930
As we start defining more complex Ops, we increasingly see the need for
Ops-with-regions to be able to construct Ops within their regions in
their ::build methods. However, these methods only have access to
Builder, and not OpBuilder. Creating a local instance of OpBuilder
inside ::build and using it fails to trigger the operation creation
hooks in derived builders (e.g., ConversionPatternRewriter). In this
case, we risk breaking the logic of the derived builder. At the same
time, OpBuilder::create, which is by far the largest user of ::build
already passes "this" as the first argument, so an OpBuilder instance is
already available.
Update all ::build methods in all Ops in MLIR and Flang to take
"OpBuilder &" instead of "Builder *". Note the change from pointer and
to reference to comply with the common style in MLIR, this also ensures
all other users must change their ::build methods.
Differential Revision: https://reviews.llvm.org/D78713
We have provided a generic buffer assignment transformation ported from
TensorFlow. This generic transformation pass automatically analyzes the values
and their aliases (also in other blocks) and returns the valid positions for
Alloc and Dealloc operations. To find these positions, the algorithm uses the
block Dominator and Post-Dominator analyses. In our proposed algorithm, we have
considered aliasing, liveness, nested regions, branches, conditional branches,
critical edges, and independence from custom block terminators. This
implementation doesn't support block loops. However, we have considered this in
our design. For this purpose, it is only required to have a loop analysis to
insert Alloc and Dealloc operations outside of these loops in some special
cases.
Differential Revision: https://reviews.llvm.org/D78484