It is time to compose Linalg-related optimizations with SparseTensor-related
optimizations. This is a careful first step: it adds some general Linalg
optimizations "upstream" of the sparse compiler in the full sparse compiler
pipeline. Some minor changes were needed to make those optimizations aware
of sparsity.
Note that after this, we will add a sparse-specific fusion rule, just to
demonstrate the power of the new composition.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119971
This commit adds a pattern to wrap a tensor.pad op with
an scf.if op to separate the cases where we don't need padding
(all pad sizes are actually zero) and where we indeed need
padding.
This pattern is meant to handle padding inside tiled loops.
In such cases the padding sizes typically depend on the
loop induction variables. Splitting them would allow treating
perfect tiles and edge tiles separately.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D117018
The pad-slice swap pattern generates `scf.if` and `tensor.generate`
to guard against zero-sized slices if it cannot prove the slice is
always non-zero. This is safe but quite conservative. It can be
unnecessary for cases where we know, by problem definition, that such
cases do not exist, even with dynamically shaped ops or unknown
tile/slice sizes, e.g., convolution padding size = 1 with kernel dim size = 3.
So this commit introduces a control on the pattern to specify
whether to generate the if constructs, to handle such cases better,
given that once the if constructs are materialized, they are very
hard to analyze and simplify.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D117017
Rationale:
- an empty line between the main include for this file and the other includes
- moved the include that actually defines code into the right section
Note that this revision started as breaking up ops/attrs even more
(for bug https://github.com/llvm/llvm-project/issues/52748), but due
to the connection in Dialect.initialize(), this cannot be split further.
All the heavy lifting refactoring was already done by River in a previous cleanup.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119617
Makes lld-link work in a non-MSVC shell by autodetecting the MSVC toolchain.
Also adds support for /winsysroot and a few other switches.
All this is done by refactoring to share code with clang-cl's existing support
for the same.
Differential Revision: https://reviews.llvm.org/D118070
Although type punning through a union is defined in C, it is UB in C++.
This patch introduces a bit_cast function to convert between types in a safe way.
This is necessary to get llvm-libc to compile with GCC.
This patch is extracted from D119002.
Differential Revision: https://reviews.llvm.org/D119145
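As a rough illustration, a memcpy-based bit_cast along these lines avoids the
union-based UB (this is only a sketch based on the description above; the actual
llvm-libc function name, namespace, and constraints may differ):

```cpp
#include <cstring>      // std::memcpy
#include <type_traits>  // std::is_trivially_copyable

// Reinterpret the object representation of `from` as a value of type `To`.
// Unlike reading the inactive member of a union, this is well-defined C++.
template <typename To, typename From>
To bit_cast(const From &from) {
  static_assert(sizeof(To) == sizeof(From), "sizes must match");
  static_assert(std::is_trivially_copyable<From>::value &&
                    std::is_trivially_copyable<To>::value,
                "types must be trivially copyable");
  To to;
  std::memcpy(&to, &from, sizeof(To));
  return to;
}

// Example: inspect the bit pattern of a float without UB.
// uint32_t bits = bit_cast<uint32_t>(1.0f);  // 0x3f800000
```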
Build failures are not a particularly helpful way to enforce not using
deprecated APIs, and that isn't the point of the Bazel build.
At the same time, this removes `-Wno-unused`: this is a check that we do
enforce in the Google internal build, so we are OK maintaining it in the
upstream Bazel build (the comment about not wanting to do so was from a
time when this was in a separate repository and I was the only one
maintaining it).
Differential Revision: https://reviews.llvm.org/D118671
D116542 adds EmbedBufferInModule which introduces a layer violation
(https://llvm.org/docs/CodingStandards.html#library-layering).
See 2d5f857a1e for detail.
EmbedBufferInModule does not use BitcodeWriter functionality and should be
moved to LLVMTransformsUtils. While here, change the function case to the prevailing
convention.
It seems that EmbedBufferInModule just follows the steps of
EmbedBitcodeInModule. EmbedBitcodeInModule calls WriteBitcodeToFile but has IR
update operations which ideally should be refactored to another library.
Reviewed By: jhuber6
Differential Revision: https://reviews.llvm.org/D118666
There is a layer violation that can break clang -fmodule-name=X -fmodules-strict-decluse builds:
* LLVMTransformUtils has `#include "llvm/Bitcode/BitcodeWriterPass.h"`
* LLVMBitWriter depends on LLVMTransformUtils after D116542
Temporarily work around the issue.
This reduces the dependencies of the MLIRVector target and makes the dialect consistent with other dialects.
Differential Revision: https://reviews.llvm.org/D118533
Also reimplement `std-bufferize` in terms of BufferizableOpInterface-based bufferization. The old `std.select` bufferization pattern is no longer needed and deleted.
Differential Revision: https://reviews.llvm.org/D118559
Refactor sqrt implementations:
- Move architecture specific instructions from `src/math/<arch>` to `src/__support/FPUtil/<arch>` folder.
- Move generic implementation of `sqrt` to `src/__support/FPUtil/generic` folder and add it as a header library.
- Use `src/__support/FPUtil/sqrt.h` for architecture/generic selections.
- Add unit tests for generic implementation of `sqrt`.
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D118173
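The selection header described above could be organized roughly as follows
(a sketch only; the architecture macros and file names are assumptions, not
the exact llvm-libc layout):

```cpp
// Hypothetical sketch of src/__support/FPUtil/sqrt.h: include a hardware
// implementation when one exists for the target architecture, otherwise
// fall back to the generic software implementation.
#if defined(__x86_64__)
#include "x86_64/sqrt.h"   // uses the hardware sqrt instruction
#elif defined(__aarch64__)
#include "aarch64/sqrt.h"  // uses the hardware sqrt instruction
#else
#include "generic/sqrt.h"  // portable software implementation
#endif
```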
This would have enabled me to notice that the MLIR test file needed updating too, preventing the test file fix in 2074eef5db from being necessary.
layering_check is already enabled in mlir/BUILD.bazel. I don't know why I didn't see the other breakage there.
Differential Revision: https://reviews.llvm.org/D118125
The underlying issue here is that line 125 checks if linkopts was assigned (by line 120), but it's not initialized, so that becomes a crash. This was noticed by someone trying to use Docker, where no terminfo library is installed.
Reviewed By: GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D118270
No longer go through an external model. Also put BufferizableOpInterface into the same build target as the BufferizationDialect. This allows for some code reuse between BufferizationOps canonicalizers and BufferizableOpInterface implementations.
Differential Revision: https://reviews.llvm.org/D117987
This is in preparation of unifying the existing bufferization with One-Shot bufferization.
A subsequent commit will replace `tensor-bufferize`'s implementation with the BufferizableOpInterface-based implementation and move over missing test cases.
Differential Revision: https://reviews.llvm.org/D117984
This commit is the first step towards unifying core bufferization and One-Shot Bufferize.
This commit does not move over the implementations of BufferizableOpInterface yet. This will be done in separate commits. This change does also not move the unit tests yet. The tests will be moved together with op interface implementations and split into separate files.
Differential Revision: https://reviews.llvm.org/D117641
The code in `BufferizableOpInterface`'s header/source no longer contains any analysis code. This makes it easier to run the bufferization with a different analysis or without any analysis.
Differential Revision: https://reviews.llvm.org/D117478
If not allow-return-memref, raise an error if a new memory allocation is returned/yielded from a block. We do not check for new allocations directly, but for ops that yield/return values that are not equivalent to values defined outside of the current block.
Note: We still need to check that scf.for yield values and bbArgs are aliasing to ensure that getAliasingOpOperand/getAliasingOpResult is correct.
Differential Revision: https://reviews.llvm.org/D116687
This is important because *Objects targets need to only depend on other *Objects targets, not on the unsuffixed CAPI rules. Depending on how it is linked in the current setup, it can cause duplicate symbols.
Differential Revision: https://reviews.llvm.org/D117176
Dynamic batch for rescale, gather, max_pool, avg_pool, conv2D and depthwise_conv2D. Split helper functions into a separate header file.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D117031
This op is an example of how to deal with ops whose OpResult may alias with one of multiple OpOperands.
Differential Revision: https://reviews.llvm.org/D116868
This change simplifies BufferizableOpInterface and other functions. Overall, the API gets smaller: functions related to custom IR traversal are deleted entirely. This makes it easier to write BufferizableOpInterface implementations.
This is also in preparation of unifying Comprehensive Bufferize and core bufferization. While Comprehensive Bufferize could theoretically maintain its own IR traversal, there is no reason to do so, because all bufferize implementations in BufferizableOpInterface have to support partial bufferization anyway. And we can share a larger part of the code base between the two bufferizations.
Differential Revision: https://reviews.llvm.org/D116448
Although we moved to GitHub Issues, the bug report message still refers
to Bugzilla. This patch updates these URLs.
Reviewed By: MaskRay, Quuxplusone, jhenderson, libunwind, libc++
Differential Revision: https://reviews.llvm.org/D116351
Historically, the bindings for the Linalg dialect were included into the
"core" bindings library because they depended on the C++ implementation
of the "core" bindings. The other dialects followed the pattern. Now
that this dependency is gone, split out each dialect into a separate
Python extension library.
Depends On D116649, D116605
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D116662
So far, only the custom dialect types are exposed.
The build and packaging are the same as for Linalg and SparseTensor, and in
need of refactoring that is beyond the scope of this patch.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D116605
This moves a bunch of helper functions from `Transforms/SparseTensorConversion.cpp` into `Transforms/CodegenUtils.{cpp,h}` so that they can be reused by `Transforms/Sparsification.cpp`, etc.
See also the dependent D115010 which cleans up some corner cases in this change.
Reviewed By: aartbik, rriddle
Differential Revision: https://reviews.llvm.org/D115008
This reverts 3816c53f04 and removes follow-up
fixups.
The original intention was to show errors earlier (at posix_fallocate time) rather
than later for ld.lld, but it appears to cause some problems which make it not free.
* FreeBSD ZFS: EINVAL, not too bad.
* FreeBSD UFS: according to khng "devastatingly slow on freebsd because UFS on freebsd does not have preallocation support like illumos. It zero-fills."
* NetBSD: maybe EOPNOTSUPP
* Linux tmpfs: unless tmpfs is set up to use huge pages (requires CONFIG_TRANSPARENT_HUGE_PAGECACHE=y), I can consistently demonstrate ~300ms delay for a 1.4GiB output.
* Linux ext4: I don't measure any benefit, either backed by a hard disk or by a file in tmpfs.
* The current code organization of `defined(HAVE_POSIX_FALLOCATE)` costs us a macro dispatch for AIX.
I think we should just remove it. If posix_fallocate ever finds demonstrable benefit,
it is likely Linux-specific, will not need HAVE_POSIX_FALLOCATE, and could possibly be opt-in for some specific programs.
In a filesystem with CoW and compression, the ENOSPC benefit may be lost as well.
Reviewed By: khng300
Differential Revision: https://reviews.llvm.org/D115957
`EnumAttr` is a pure TableGen implementation of enum attributes using `AttrDef`. This is meant as a drop-in replacement for `StrEnumAttr`, which is soon to be deprecated. `StrEnumAttr` is often used over `IntEnumAttr` because it's more readable in MLIR assembly formats. However, storing and manipulating strings is not efficient. Defining `StrEnumAttr` can also be awkward; it relies on a lot of special logic in `EnumsGen` and has some hidden sharp edges.
Also, `EnumAttr` stores the enum directly, removing the need to convert to/from integers when calling attribute getters on ops.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D115181
This patch extends the GPU kernel outlining pass so that it can take an
optional data layout specification that will be attached to the generated
GPU module operation. If the data layout specification is not provided,
the default data layout is used instead.
Reviewed By: herhut, mehdi_amini
Differential Revision: https://reviews.llvm.org/D115722