Some tests still pass even though we don't claim big-endian support. Using
UNSUPPORTED is a better indicator than XFAIL that we don't guarantee that
the tests work.
Dialects can opt-in to providing custom encodings by implementing the
`BytecodeDialectInterface`. This interface provides hooks, namely
`readAttribute`/`readType` and `writeAttribute`/`writeType`, that will be used
by the bytecode reader and writer. These hooks are provided a reader and writer
implementation that can be used to encode various constructs in the underlying
bytecode format. A unique feature of this interface is that dialects may choose
to only encode a subset of their attributes and types in a custom bytecode
format, which can simplify adding new or experimental components that aren't
fully baked.
Differential Revision: https://reviews.llvm.org/D132498
This moves some parsing functionality from BytecodeReader to
AttrTypeReader, and removes some duplication between the attribute/type
code paths.
Differential Revision: https://reviews.llvm.org/D132497
This extracts the string section writer and reader into dedicated
classes, which better separates the logic and will also simplify future
patches that want to interact with the string section.
Differential Revision: https://reviews.llvm.org/D132496
Currently vector.gather only supports reading memory into a 1-D result vector.
This patch extends it to support an n-D result vector with the indices, masks,
and passthroughs in n-D vectors.
As we are trying to vectorize tensor.extract with vector.gather
(https://github.com/iree-org/iree/issues/9198), it will need to gather the
elements into an n-D vector. Having vector.gather with n-D results allows us
to avoid flattening and reshaping at the vectorization stage. The backends can then
decide the optimal ways to lower the vector.gather op.
Note that this is different from n-D gathering, which is about reading n-D
memory with the n-D indices. The indices here are still only 1-D offsets on
the base.
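As a rough illustration (the memref, shapes, and operand names here are made up, not taken from the patch), such an n-D gather over a 1-D base could look like:
```
// Gather 2x3 elements from a 1-D memref: the indices are still 1-D offsets
// from %base[%c0], but the result, indices, mask, and pass-through are 2x3.
%g = vector.gather %base[%c0] [%indices], %mask, %pass_thru
    : memref<?xf32>, vector<2x3xi32>, vector<2x3xi1>, vector<2x3xf32> into vector<2x3xf32>
```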
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D131905
GPUFuncOpLowering moves the body out of the gpu.func op and erases it. An empty gpu.func may fail verification but should not crash the verifier. Verification of an erased op is triggered e.g. when debug printing is on.
Reviewed By: akuegel
Differential Revision: https://reviews.llvm.org/D132446
Currently, buffer deallocation considers arith.select to be
non-aliasing, which results in deallocs being inserted incorrectly. Since
arith.select doesn't implement any useful interfaces, this change just handles
it explicitly. Eventually this should probably be fixed properly, if this pass
is going to be used long term.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D132460
There are several use cases where a destination style operation needs an interface
that contains a subset of the methods from LinalgStructuredInterface.
In this change, we move all such methods to a new interface, and add forwarding
methods to LinalgStructuredInterface to make the change less invasive.
It may be possible to refactor the code later to get rid of (some or all) of the
forwarding methods.
This change also removes the cloneWithMapper interface method, as it is not used anywhere.
RFC:
https://discourse.llvm.org/t/rfc-interface-for-destination-style-ops/64056
Differential Revision: https://reviews.llvm.org/D132125
- Add spv.store instead of init for spv.variable to fix data issues in
some GPU drivers
Reviewed By: antiagainst
Patch By: raikonenfnu
Differential Revision: https://reviews.llvm.org/D132427
Fix incorrect pointer usage in Vulkan buffer and Memref descriptor binding:
- Vulkan runtime produced incorrect result if there is alignment.
- There was an illegal memory access when binding an LLVM global materialized
from a constant op (0xdeadbeef).
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D132291
Default attributes were previously only handled by the generated ODS accessors,
with the intention that they behave as if the attribute were set. This addresses
the long-standing TODO to fix that inconsistency by moving the initialization to
construction instead of every access, and it removes the need for duplicated
default attribute population in the Python bindings.
Some of the OpenMP attributes are switched to optional attributes with a default,
as the currently set default values are not legal; that may need more digging.
The Linalg-generated ones are switched to optional attributes with a default as
well, since they are quite widely used and it is unclear which of the two
interpretations they fall under.
Differential Revision: https://reviews.llvm.org/D130916
tensor.pad is lowered to tensor.generate + tensor.insert_slice during bufferization. For best performance with constant padding values, users should vectorize the IR before bufferizing it.
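For illustration, a pad with a constant value like the following (shapes and pad amounts are made up) is rewritten into a tensor.generate of the padded shape followed by a tensor.insert_slice of %t into it:
```
%cst = arith.constant 0.0 : f32
%padded = tensor.pad %t low[1, 2] high[2, 1] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
```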
This change also relaxes the restriction that no new ops that bufferize to a memory write may be added during bufferization. Since bufferization was split into two steps a while ago (tensor copy insertion + bufferization), it is reasonable to allow this now.
Differential Revision: https://reviews.llvm.org/D132355
TileToForeachThreadOp now accepts mixed SSA value operands / index attributes for tile_sizes and num_threads. (Reusing OperandsOrIntegersSizesList.) In case of an operand, a PDL_Operation must be specified that is mapped to a payload op that returns the tile size or number of threads.
Differential Revision: https://reviews.llvm.org/D131949
Previously, only `arith.constant`, `arith.muli`, and `affine.min` were supported when deriving upper loop bounds during the conversion of parallel loops to GPU.
Reviewed By: akuegel
Differential Revision: https://reviews.llvm.org/D132354
This commit adds a new bytecode serialization format for MLIR.
The actual serialization of MLIR to binary is relatively straightforward,
given the very general structure of MLIR. The underlying basis for
this format is a variable-length encoding for integers, which gets heavily
used for nearly all aspects of the encoding (given that most of the encoding
is just indexing into lists).
The format currently does not provide support for custom attribute/type
serialization, and thus always uses an assembly format fallback. It also
doesn't provide support for resources. These will be added in followups,
the intention for this patch is to provide something that supports the
basic cases, and can be built on top of.
https://discourse.llvm.org/t/rfc-a-binary-serialization-format-for-mlir/63518
Differential Revision: https://reviews.llvm.org/D131747
Now that MLIR is built with C++17, use the standard library equivalent of
`llvm::is_invocable`: `std::is_invocable`.
Differential Revision: https://reviews.llvm.org/D132318
The `IR/AttributeTest.cpp` test fails to compile on Solaris:
```
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:223:36: error: no matching function for call to 'allocate'
AttrT::get(type, "resource", UnmanagedAsmResourceBlob::allocate(data));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:237:3: note: in instantiation of function template specialization 'checkNativeAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeAccess<AttrT, T>(builder.getContext(), llvm::makeArrayRef(data),
^
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:258:3: note: in instantiation of function template specialization 'checkNativeIntAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeIntAccess<DenseI8ResourceElementsAttr, int8_t>(builder, 8);
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:221:3: note: candidate template ignored: requirement '!std::is_same<char, char>::value' was not satisfied [with T = char]
allocate(ArrayRef<T> data, bool dataIsMutable = false) {
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:214:26: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
static AsmResourceBlob allocate(ArrayRef<char> data, size_t align,
^
```
Because `char` is `signed` by default on Solaris and `int8_t` is
`char`. `std::is_same<int8_t, char>` is `true` unlike elsewhere, rejecting
the one-arg `allocate` overload.
Fixed by renaming the two overloads to avoid the ambiguity.
Tested on `amd64-pc-solaris2.11`, `sparcv9-sun-solaris2.11`, and
`x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D131148
This change adds a new AffineDelinearizeIndexOp to the affine dialect.
The operation accepts an index type as well as a basis (array of index
values) representing how the index should be decomposed into a
multi-index. The decomposition obeys a canonical semantic that treats
the final basis element as "fastest varying" and the first basis element
as "slowest varying". A naive lowering of the operation using a sequence
of AffineApplyOps is given.
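A hypothetical use, presumably printed as `affine.delinearize_index` (basis values and names are chosen only for illustration):
```
%c4 = arith.constant 4 : index
%c8 = arith.constant 8 : index
// Decompose %linear over the basis (4, 8); %multi#1 is the fastest varying.
%multi:2 = affine.delinearize_index %linear into (%c4, %c8) : index, index
```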
RFC was discussed on discourse here: https://discourse.llvm.org/t/rfc-tensor-extracting-slices-from-tensor-collapse-shape/64034
Reviewed By: bondhugula, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D131997
The current llvm.prefetch intrinsic docs state "The rw, locality and
cache type arguments must be constant integers."
This change:
- Makes arg 3 (cache type) an ImmArg
- Improves the verifier error messages to reference the incorrect
argument.
- Fixes two tests which contradict the docs.
This is needed as the lowering to GlobalISel is different for ImmArgs
compared to other constants. The non-ImmArgs create a G_CONSTANT MIR
instruction, while for ImmArgs the constant is put directly on the
intrinsic's MIR instruction as an immediate.
Differential Revision: https://reviews.llvm.org/D132042
The current approach is conservative: whenever there is a non-normalizable
operation in a function, the whole function is labelled as non-normalizable.
In other words, it requires that all operations have the MemRefsNormalizable
trait.
This patch relaxes the requirement: if the memref layout maps of a
non-normalizable operation are identity, that operation does not block
the normalization of the other operations in the same function.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D125854
Add an option (cleanUpUnroll) to unroll the cleanup loop even if the trip count is smaller than the unroll factor.
Differential Revision: https://reviews.llvm.org/D129171
The tiling interface implementation was making assumptions about the code
generated by makeTiledShape that were wrong. The ExtractSliceOp created
may be combined with other ExtractSliceOps. To solve that, we compute
the offsets directly using the new utilities.
Differential Revision: https://reviews.llvm.org/D132182
This patch adds OMPIRBuilder support for the safelen clause for the
simd directive.
Reviewed By: shraiysh, Meinersbur
Differential Revision: https://reviews.llvm.org/D131526
We held off on this before as `LLVM_LIBDIR_SUFFIX` conflicted with it.
Now we return to this.
`LLVM_LIBDIR_SUFFIX` is kept as a deprecated way to set
`CMAKE_INSTALL_LIBDIR`. The other `*_LIBDIR_SUFFIX` are just removed
entirely.
I imagine this is too potentially-breaking to make LLVM 15. That's fine.
I have a more minimal version of this in the distro (NixOS) patches for
LLVM 15 (as for previous versions). I will test this more expansive version
more thoroughly after the release is cut.
Reviewed By: sebastian-ne, ldionne, #libc, #libc_abi
Differential Revision: https://reviews.llvm.org/D130586
This patch "modernizes" the implementation of these operations by
switching them to assembly formats and type inference traits.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131971
This patch moves `LLVM::ShuffleVectorOp` to assembly format and in the
process drops the extra type that can be inferred (both operand types
are required to be the same) and switches to a dense integer array.
The syntax change:
```
// Before
%0 = llvm.shufflevector %0, %1 [0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<4xf32>, vector<4xf32>
// After
%0 = llvm.shufflevector %0, %1 [0, 0, 0, 0] : vector<4xf32>
```
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D132038
This fixes a bug in SCF/AffineCanonicalizationUtils.cpp. Loop lb/ub were previously considered dimensions, which caused a crash when a (non-optimizable) affine.min / affine.max expression was processed (due to multiplication of two dims). Lb/ub are now considered symbols and symbols may be multiplied. (The scope of the analysis is "within the loop body", at which point lb/ub are constants.)
Differential Revision: https://reviews.llvm.org/D132021
bufferization.to_memref ops are not supported in One-Shot Analysis. They often trigger a failed assertion that can be confusing. Instead, scan for to_memref ops before running the analysis and immediately abort with a proper error message.
Differential Revision: https://reviews.llvm.org/D132027
Arrange the operations in alphabetical order, rather than what
appears to be the order they were added. This was suggested in
a review when adding new operations.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D132046
A number of mlir tests `FAIL` on Solaris/sparcv9 with `Target has no JIT
support`. This patch fixes that by mimicking `clang/test/lit.cfg.py`, which
implements a `host-supports-jit` keyword for this. The gtest-based unit
tests don't support `REQUIRES:`, so lack of support needs to be hardcoded
there.
Tested on `amd64-pc-solaris2.11` (`check-mlir` results unchanged) and
`sparcv9-sun-solaris2.11` (only one unrelated failure left).
Differential Revision: https://reviews.llvm.org/D131151
Remove hardcoded usage of memref.alloc/dealloc ops in affine utils; use memory
effects instead. This is NFC for existing test cases, but strictly more
general/powerful in terms of functionality: it allows other allocation and
freeing ops (like gpu.alloc/dealloc) to work in conjunction with affine ops and
utilities.
Differential Revision: https://reviews.llvm.org/D132011
This commit adds support for 0-D vectors in ReductionOp.
Reviewed By: nicolasvasilache, dcaballe
Differential Revision: https://reviews.llvm.org/D131896
This change omits default values from the sparse tensor type,
saving considerable text real estate for the common cases.
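As a sketch of the effect (the encoding fields shown are illustrative), a CSR-style type no longer needs to spell out an identity dimension ordering or the default pointer/index bit widths:
```
// With the defaults omitted, this is all that needs to be written:
#CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>
// used as: tensor<8x8xf64, #CSR>
```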
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132083
OptionalAttr does the wrapping with Optional explicitly in ODS, while a
default-valued optional attribute doesn't (it follows DefaultValuedAttr in
this behavior and is meant as a drop-in for updating the old behavior).
Update when the non-null check is emitted to account for this. Also add a
variant for optional default-valued string attributes to provide the same
convenience as default-valued string attributes.
The `arith.addi_carry` op implements integer addition with overflow detection: the carry is returned via the second result, as `i1`.
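A minimal sketch of the op in use (operand names and the i32 width are arbitrary, and the exact assembly is only approximated here):
```
// %sum holds the wrapped sum; %carry is true iff the addition overflowed.
%sum, %carry = arith.addi_carry %a, %b : i32, i1
```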
Reviewed By: antiagainst, bondhugula
Differential Revision: https://reviews.llvm.org/D131893
These are useful in builders of Ops taking DenseArrayAttrs or for use in Rewriter, to create constant instances.
Differential Revision: https://reviews.llvm.org/D132067
Adding more supported storage classes to the pointer type.
Removing support for arithmetic ops, as they are not available on the tensor hardware.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D132039
Implement the new sparse_tensor.reduce operation which
accepts a starting identity value and a code block
describing how to perform the reduction.
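A sketch of what such a reduction could look like, here a product reduction with identity 1.0 (the exact assembly is only approximated):
```
%cf1 = arith.constant 1.0 : f64
%acc = sparse_tensor.reduce %x, %y, %cf1 : f64 {
  ^bb0(%a: f64, %b: f64):
    %0 = arith.mulf %a, %b : f64
    sparse_tensor.yield %0 : f64
}
```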
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D130573
There is some support for this in various places, but the
basic conversion of individual operations to LLVM was not supported.
Add it to allow Flang to use these operations.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D131912
The simd loop collapse clause is supported in the same way
as the collapse clause for worksharing loops.
Reviewed By: kiranchandramohan
Differential Revision: https://reviews.llvm.org/D131674
Signed-off-by: Dominik Adamski <dominik.adamski@amd.com>
This reverts commit b8ecf32f81.
This commit introduces undefined behavior according to UBSan:
UndefinedBehaviorSanitizer: nullptr-with-nonzero-offset third_party/llvm/llvm-project/mlir/include/mlir/ExecutionEngine/CRunnerUtils.h:377:5
This patch removes the Operation *op from the argument list of the utility functions and directly passes the Location instead of calling op->getLoc().
This should make the code clearer, as the utility functions (logically) do not rely on the operation that we are currently rewriting and behave the same regardless of the operation.
Reviewed By: aartbik, wrengr
Differential Revision: https://reviews.llvm.org/D131991
The operation computes pow(b, p), where 'b' is floating point
and 'p' is a signed integer. The result's type matches the type of 'b'.
The operands must have the same shape.
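Assuming the op is surfaced as `math.fpowi`, a minimal sketch (names and types are illustrative):
```
// Raise the f32 base %b to the signed i32 power %p; the result is f32.
%r = math.fpowi %b, %p : f32, i32
```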
Differential Revision: https://reviews.llvm.org/D129811
This fixes some fallout from D131282. Currently, add_tablegen() will add the tablegen target to LLVM_EXPORTS and associates the install with LLVMExports. For non-standalone builds, this means that you end up with mlir-tblgen and clang-tblgen in LLVMExports.
However, these projects should instead be using MLIR_EXPORTS/MLIRTargets and CLANG_EXPORTS/ClangTargets. To fix this, add an extra EXPORT option and make use of get_target_export_arg() to create the correct export argument.
Reviewed By: ashay-github
Differential Revision: https://reviews.llvm.org/D131565
The latter has accuracy issues around 0. The lowering in MathToLLVM is kept for now.
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D131676
This change adds a helper function for computing a topological sorting of a list of ops. E.g. this can be useful in transforms where a subset of ops should be cloned without dominance errors.
The analysis reuses the existing implementation in TopologicalSortUtils.cpp.
Differential Revision: https://reviews.llvm.org/D131669
D130092 removed types from attributes. This patch fixes a minor
issue that was exposed when integrating that change in IREE. An
explicit cast is needed so that the template type of `makeArrayRef`
is automatically deduced.
Co-authored-by: Lei Zhang <antiagainst@google.com>
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D131604
We were hitting issues on Linux where this was only being caught at runtime, and different linkers (BFD vs LLD) are differently strict in such situations. Such libraries will also fail to build properly on Windows (but test coverage of that is limited, so it is better to enforce globally).
Differential Revision: https://reviews.llvm.org/D131911
Depends On D131660
`defaultInitialize()` was introduced for the "nudging" behavior, which has been deleted.
Reviewed By: Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D131746
Currently, in the MLIR `{Sparse,Dense}DataFlowAnalysis` API, there is a small optimization:
Before running a transfer function, if the "out state" is already at the pessimistic fixpoint (bottom lattice value), then we know that it cannot possibly be changed, therefore we can skip the transfer function.
I benchmarked and found that this optimization is ineffective, so we can remove it and simplify `{Sparse,Dense}DataFlowAnalysis`. In a subsequent patch, I plan to change/remove the concept of the pessimistic fixpoint so that the API is further simplified.
Benchmark: I ran the following tests 5 times (after 3 warmup runs), and timed the `initializeAndRun()` function.
| Test | Before (us) | After (us) |
| --- | --- | --- |
| mlir-opt -test-dead-code-analysis mlir/test/Analysis/DataFlow/test-dead-code-analysis.mlir | 181.2536 | 187.7074 |
| mlir-opt -- -test-dead-code-analysis mlir/test/Analysis/DataFlow/test-last-modified-callgraph.mlir | 109.5504 | 105.0654 |
| mlir-opt -- -test-dead-code-analysis mlir/test/Analysis/DataFlow/test-last-modified.mlir | 333.3646 | 322.4224 |
| mlir-opt -- -allow-unregistered-dialect -sccp mlir/test/Analysis/DataFlow/test-combined-sccp.mlir | 1027.1492 | 1081.818 |
Note: `test-combined-sccp.mlir` is crafted by combining `mlir/test/Transforms/sccp.mlir`, `mlir/test/Transforms/sccp-structured.mlir` and `mlir/test/Transforms/sccp-callgraph.mlir`.
Reviewed By: aartbik, Mogball
Differential Revision: https://reviews.llvm.org/D131660
This patch cleans up the way `LinalgLoopDistributionOptions` are meant
to be used. The option just contains a callback that takes the list
of loop ranges that represent the loops that are to be distributed.
These loops are the outer parallel loops of the tiled operation which
have non-zero tile sizes specified. The callback returns, for each of
the loops:
- The procId to use,
- The number of processors,
- The distribution method to use for that loop.
Reviewed By: antiagainst, hanchung
Differential Revision: https://reviews.llvm.org/D131232
The methods to perform such operations have been implemented for the DynamicMemRefType in a way that is similar to the implementation for StridedMemRefType. Up until here one could pass an unranked memref to the library, and thus obtain a “dynamic” memref descriptor, but then there would have been no possibility to operate on its content.
Differential Revision: https://reviews.llvm.org/D131359
AllocTensorElimination currently does not support chains where the type
changes. AllocTensorElimination used to generate invalid IR for such
inputs. With this commit, AllocTensorElimination no longer applies to
such inputs. (It can be extended to support such IR if needed.)
Differential Revision: https://reviews.llvm.org/D131880
Confined -> ConfinedAttr
AllAttrConstraintsOf -> AllOfAttr
To be in line with ConfinedType and AllOfType.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131822
This patch deprecates hasValue and getValue for consistency with
std::optional and llvm::Optional. Note that I've migrated all known
uses of them to has_value and value, respectively.
Differential Revision: https://reviews.llvm.org/D131366
An optional attribute dictionary before a region in an assembly format
is a potential format ambiguity because they both start with `{`.
Fixes #53077
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131636
This patch adds support for string literals as `custom` directive
arguments. This can be useful for re-using custom parsers and printers
when arguments have a known value. For example:
```
ParseResult parseTypedAttr(AsmParser &parser, Attribute &attr, Type type) {
  return parser.parseAttribute(attr, type);
}
void printTypedAttr(AsmPrinter &printer, Attribute attr, Type type) {
  printer.printAttributeWithoutType(attr);
}
```
And in TableGen:
```
def FooOp : ... {
let arguments = (ins AnyAttr:$a);
let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getI1Type()")
attr-dict }];
}
def BarOp : ... {
let arguments = (ins AnyAttr:$a);
let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getIndexType()")
attr-dict }];
}
```
Instead of writing two separate sets of custom parsers and printers.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131603
This reland includes changes to the Python bindings.
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
Depends on D131801
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131803
ReluN has been removed from the TOSA specification. It can be replaced
in all instances with Clamp(0, N).
Signed-off-by: Eric Kunze <eric.kunze@arm.com>
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D128683
`AllOfType` is a type constraint that satisfies all given type
constraints and `ConfinedType` is a type that satisfies additional
predicates. These shorthands simplify type constraint definition mostly
by removing the need to deal with `myType.predicate` manipulation.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D131788
The end-to-end test for this new feature also exposed a bug
in LLVM IR lowering (since then, fixed), where we need to account
for the min-poison bit as extra argument.
declare i32 @llvm.abs.i32(i32 <src>, i1 <is_int_min_poison>)
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D131712
The LLVM intrinsic has a bool flag `is_int_min_poison` that needs to be
set.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D131785
Introduce two different failure propagation modes in the Transform
dialect's Sequence operation. These modes specify whether silenceable
errors produced by nested ops are immediately propagated, thus stopping
the sequence, or suppressed. The latter is useful in end-to-end
transform application scenarios where the user cannot correct the
transformation, but the flow is robust enough to tolerate silenceable failures. It
can be combined with the "alternatives" operation. There is
intentionally no default value to avoid favoring one mode over the
other.
Downstreams can update their tests using:
S='s/sequence \(%.*\) {/sequence \1 failures(propagate) {/'
T='s/sequence {/sequence failures(propagate) {/'
git grep -l transform.sequence | xargs sed -i -e "$S"
git grep -l transform.sequence | xargs sed -i -e "$T"
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D131774
MemRefUtils.h uses UINT_MAX, which is not provided by the headers currently included. This patch includes climits to avoid a compilation error in this header file.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D131529
Using a loop init_arg inside of the loop is not supported. This change adds a pre-processing pass that resolves such IR with copies.
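A sketch of the unsupported pattern (ops and types are illustrative): the init value %t is used again inside the loop body in addition to being passed as an iter_arg, and the pre-processing resolves this with a copy:
```
%r = scf.for %iv = %lb to %ub step %c1 iter_args(%arg = %t) -> (tensor<?xf32>) {
  // Use of the init value %t itself inside the loop, next to its iter_arg %arg.
  %x = tensor.extract %t[%iv] : tensor<?xf32>
  %y = tensor.insert %x into %arg[%iv] : tensor<?xf32>
  scf.yield %y : tensor<?xf32>
}
```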
Differential Revision: https://reviews.llvm.org/D131689
This removes the type from EmitC's opaque attribute. The value provided
as a StringRefParameter can always be emitted as is. As a consequence, the
constant and variable ops explicitly need to handle opaque attributes, which
are no longer typed attributes.
Co-authored-by: Simon Camphausen <simon.camphausen@iml.fraunhofer.de>
Reviewed By: Mogball, jpienaar
Differential Revision: https://reviews.llvm.org/D131666
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
Depends on D131738
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D131702
Follow-up to D123774, where the syntax of dense arrays was discussed. It
was concluded that the syntax should be changed to `array<i32: 1, 2>`.
This patch changes the syntax but importantly preserves the `[1, 2]`
syntax when embedding these attributes in assembly formats through ODS.
Reviewed By: mehdi_amini, jpienaar
Differential Revision: https://reviews.llvm.org/D131738
Fix hasNoInterveningEffect in the presence of ops from different affine
scopes. Also, correctly check for dependence failures instead of
just for the existence of a dependence.
Differential Revision: https://reviews.llvm.org/D131641
This sets the `MLIR_PDLL_TABLEGEN_EXE` and `MLIR_PDLL_TABLEGEN_TARGET`
as cache variables which is necessary for cross-compiling projects that
rely on MLIR and the mlir-pdll-tblgen tool.
The patch is similar to https://reviews.llvm.org/D130350.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D131596
c7ec6e19d5 made LLVM adhere to the x86
psABI and pass bf16 in SSE registers instead of GPRs. This breaks the
custom versions of runtime functions we have for bf16 conversion. A
great fix for this would be to use __bf16 types instead which carry the
right ABI, but that type isn't widely available.
Instead just pretend it's a 32 bit float on the ABI boundary and
carefully cast it to the right type.
Fixes #57042
Refactor affine analysis helpers: the existing ones were using
unnecessary or improperly named arguments. NFC.
Differential Revision: https://reviews.llvm.org/D131557
Add missing check in affine dependence analysis when dependence analysis
isn't possible due to the ops being in different affine scopes. The
lack of such a check could lead to a crash or incorrect behavior in
several dependent utilities.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D131556
Made the passes that convert ops from other dialects to SPIR-V
OperationPasses, so that a downstream compiler can put them in a properly
nested pass manager to lower only the device code.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D131591
The operation computes pow(b, p), where 'b' and 'p' are signed integers
of the same width. The result's type matches the operands' type.
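Assuming the op is surfaced as `math.ipowi`, a minimal sketch (the i32 width is arbitrary):
```
// Raise the signed i32 base %b to the signed i32 power %p; the result is i32.
%r = math.ipowi %b, %p : i32
```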
Differential Revision: https://reviews.llvm.org/D129809
This patch "modernizes" the LLVM `insertvalue` and `extractvalue`
operations to use DenseI64ArrayAttr, since they only require an array of
indices and previously there was confusion about whether to use i32 or
i64 arrays, and to use assembly format.
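Roughly, the syntax changes as follows (the struct type is made up for illustration):
```
// Before: positions were an array of typed integer attributes.
%x = llvm.extractvalue %s[0 : i32] : !llvm.struct<(i32, f32)>
// After: positions are a dense i64 array, printed as a plain index list.
%x = llvm.extractvalue %s[0] : !llvm.struct<(i32, f32)>
```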
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D131537
This change separates computation of the actual parameters of the subset and
the materialization of subview/extract_slice. That way the users can still use
Linalg tiling logic even if they use different operations to materialize the
subsets.
Differential Revision: https://reviews.llvm.org/D131053
When emitting the declarations for interface methods defined in ODS,
also emit their descriptions as C++ comments. This makes the
documentation accessible to C++ tooling such as IDEs, which offers better
usability than reading it from the .td files or the website.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D130478
Flang was not able to convert the simd directive when it contains nested
Fortran loops. The nested Fortran loops inside the SIMD directive
are modelled as FIR loops and need to be translated into the LLVM
MLIR dialect.
Differential Revision: https://reviews.llvm.org/D131402
Reviewed by: peixin
Signed-off-by: Dominik Adamski <dominik.adamski@amd.com>
Previously, calling `Value.owner()` would trigger a C++ assertion in debug builds
if the `Value` was a block argument. Additionally, the behavior was just wrong
in release builds. This patch adds support for BlockArg Values.
This patch cleans up the definition of `DenseArrayAttrBase` by relying
more on ODS-generated methods. It also exposes an API for using the raw
data of a dense array, similar to `DenseIntOrFPElementsAttr::getRaw`.
Reviewed By: lattner, mehdi_amini
Differential Revision: https://reviews.llvm.org/D131450
They appeared at the wrong place in the switch, treating
them as a unary op rather than a binary op.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D131509
This makes it easier to use as a utility function to query the
mappings, including the reverse.
This commit also drops some storage classes that aren't needed
for now.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131411
This commit moves MemRef memory space to SPIR-V storage class
conversion out of the main SPIR-V type converter. Now the mapping
should happen as a preliminary step before performing the final
conversion to SPIR-V. Flows are expected to write their own memory
space mappings like the `MapMemRefStorageClassPass` to handle
memory space mappings according to their needs.
This is needed because SPIR-V is serving multiple client APIs,
including Vulkan and OpenCL. Different client APIs might want
to use different storage classes for buffers in a particular
memory space, e.g., `StorageBuffer` for Vulkan vs. `CrossWorkgroup`
for OpenCL when converting the default 0 memory space. Hardcoding
a specific mapping makes that hard. While it's possible to embed
selection logic further inside the main type converter, it will
make the main type converter even more complicated. So it's better to
separate the concerns, as mapping the memory space is really
concretizing the meaning of those numeric memory spaces in the
particular context of SPIR-V lowering.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131410
* Avoid restricting the pass to builtin module ops. The pass
should be able to run on any region ops.
* Avoid hardcoding func FuncOp when handling functions. Instead,
use the function op interface.
* Assigns the default mapping in the constructor. So for cases
where we are using the pass in a pipeline, we still have a
meaningful default.
Along the way, dropped unnecessary unrealized conversion casts and
used full conversion. The pass should be able to convert all sorts
of ops; there is really no need to have such bridges.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131409
In addition to memref, accept a ranked tensor as the base operand of vector.gather, similar to vector.transfer_read.
This will allow us to vectorize noncontiguous tensor.extract into vector.gather. Full discussion can be found here: https://github.com/iree-org/iree/issues/9198
Reviewed By: hanchung, dcaballe
Differential Revision: https://reviews.llvm.org/D130097
Previously we could only lower arith.index_cast with 1-D vectors to LLVM. This change adds support for n-D vectors.
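For example (the shape is illustrative), a cast like the following can now be converted:
```
%0 = arith.index_cast %v : vector<2x3xi32> to vector<2x3xindex>
```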
Reviewed By: ftynse, hanchung
Differential Revision: https://reviews.llvm.org/D129907
Previously we were using IntegerAttr to back all SPIR-V enum
attributes. Therefore, all such attributes were shown as
IntegerAttrs in the IR, which is barely readable and breaks
roundtripability of the IR. This commit changes to use
`EnumAttr` as the base directly so that we can have separate
attribute definitions and better IR printing.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131311
Resolve almost all clang tidy warnings in this file:
1. Clean up string constants.
2. Use consistent argument names across function declarations and definitions. Rename `state` -> `result`, which is consistent with the other dialects.
3. Remove misleading function parameter name comments (`argTypes`). This did not match the actual function argument (`bool enableNameShadowing`).
4. Simplify calls to `is_splat`.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D131297
Now that the `arith` dialect has maxf/minf, use them instead of cmp/select.
Also refactor the clamp helpers to make them simpler.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D131426
The test was using a missing prefix. Add the prefix and fix the naming.
Found by @csigg
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D131428
Moved some parts from comments (not user facing) to the actual description
(user facing). Rephrased a bit as well.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D131418
This further relaxes the requirement to allow aliased resources
to have different primitive types, and to allow some of them to be
scalars while others are vectors.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D131207
Currently, dense tensors are initialized in the sparse integration tests using
`buffer.tensor_alloc` and `scf.for`. This makes the code harder to read and maintain.
This diff uses tensor.generate instead to initialize dense tensors.
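A sketch of the new initialization style (shape and value are illustrative):
```
// Build a dense 4x8 tensor directly, instead of allocating a buffer and
// filling it with an scf.for loop nest.
%cst = arith.constant 1.0 : f64
%init = tensor.generate {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f64
} : tensor<4x8xf64>
```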
Testing: Ran integration tests after building with -DLLVM_USE_SANITIZER=Address flag.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D131404
This commit updates all SPIR-V enum definitions to match the latest
specification (v1.6 revision 2). Along the way, fixed some issues
in `gen_spirv_dialect.py` and added a new script for refreshing
all op definitions for such cases.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131293
Adds an integer absolute value op to the math dialect.
When converting to LLVM, this op is lowered to the LLVM `abs` intrinsic.
When converting to SPIRV, this op is lowered to `spv.GL.SAbs`.
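Assuming the op is surfaced as `math.absi`, a minimal sketch (the i32 width is arbitrary):
```
// Integer absolute value; lowers to the LLVM abs intrinsic or spv.GL.SAbs.
%a = math.absi %x : i32
```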
Depends on D131325
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D131327
IntegerPolyhedron::findIntegerLexmin currently does not return values of
the local ids, so when a test for sampling includes a set with locals, the
result of findIntegerLexmin should be checked using containsPointNoLocal,
not containsPoint.
Prior to this patch, the `add_tablegen()` macro in
llvm/cmake/modules/TableGen.cmake added the install rule only if
`project` matched `LLVM` or `MLIR`. This patch adds an optional
`DESTINATION` argument, which, if non-empty, decides whether (and where)
to install the tablegen tool, thus eliminating the need for
project-specific overrides. This patch also updates all other
invocations of the `add_tablegen()` macro.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D131282
This patch fixes:
llvm-project/mlir/include/mlir/IR/OpDefinition.h:1544:19: error: use
of bitwise '|' with boolean operands
[-Werror,-Wbitwise-instead-of-logical]
Now that C++17 is enabled in LLVM, a lot of the TODOs and patterns to emulate C++17 features can be eliminated.
The steps I have taken were essentially:
```
git grep C++17
git grep c++17
git grep "initializer_list<int>"
```
and address given comments and patterns.
Most of the changes boiled down to just using fold expressions rather than initializer_list.
While doing this I also discovered that Clang by default restricts the depth of fold expressions to 256 elements. I specifically hit this with `TestDialect` in `addOperations`. I opted to not replace it with fold expressions because of that but instead adding a comment documenting the issue.
If any other functions may be called with more than 256 elements in the future we might have to revert other parts as well.
I don't think this is a common occurrence besides the `TestDialect`, however. If need be, this could potentially be fixed via `mlir-tblgen` in the future.
Differential Revision: https://reviews.llvm.org/D131323
This has previously been done for `mlir-opt` and `mlir-reduce` and roughly the same approach has been done here.
The use case for having a separate library is that it is easier for downstream projects to make custom TableGen backends/executables that work on top of the utilities that are defined in `mlir/TableGen`.
The customization point here is the same one as for any upstream TableGen backends: One can add a new generator by simply creating a global instance of `mlir::GenRegistration`.
Differential Revision: https://reviews.llvm.org/D131112
Based on `spv.ISubBorrow` from D127909.
Also resolved some clang-tidy warnings.
Reviewed By: antiagainst, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D131281
MemRef types now can carry an attribute to represent the memory
space. Still, upper layers in the compilation stack mostly use
numeric values. They don't mean much (other than differentiating
separate memory domains) in MLIR's multi-level settings. Those
numeric memory spaces inside MemRef types need to be translated
into concrete SPIR-V storage classes during lowering to pin down
to concrete memory types.
Thus far we have been hardcoding an arbitrary mapping from memory
space to storage class for converting MemRef types. This works fine
for only targeting Vulkan; it falls apart if we want to target other
SPIR-V consumers like OpenCL, as different consumers might want
different storage classes for the buffer/variable of the same
lifetime. For example, StorageBuffer in Vulkan vs. CrossWorkgroup
in OpenCL.
So putting up a new pass to let the user control how to map
MemRef memory spaces into SPIR-V storage classes. This provides
more flexibility and can address the awkwardness in the current
SPIR-V type converter. This pass should be the preliminary step
towards lowering MemRef related types/ops into SPIR-V.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D130317
This patch adds constant folder for Atan2Op which only supports single and double precision floating-point.
Differential Revision: https://reviews.llvm.org/D131050
We were checking real-part twice, not real/imag-part.
The new test only passes after the bug fix.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D131190
As a precaution, truncate memory addresses passed to kernels to 48 bits,
since bits 48-63 of the buffer descriptor are used for the stride field
and, on gfx10, to control swizzling.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D131016
This reverts commit 07aaa35f74.
This breaks the Windows bot, and while the fix addressed the
`char`/`int8_t` case, it does not make sense for other cases like
`float`.
This can easily overflow and it is possible for these unsigned overflows to result in incorrect results.
For example, the two LCMs could be 641 and 6700417, which multiply to 2^32 + 1, which overflows to 1.
Unsigned overflows already occur in the existing tests.
Also, when switching to arbitrary-precision arithmetic, this results in many
large integer multiplications, causing a significant slowdown.
Reviewed By: Groverkss
Differential Revision: https://reviews.llvm.org/D131184
The `IR/AttributeTest.cpp` test fails to compile on Solaris:
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:223:36: error: no matching function for call to 'allocate'
AttrT::get(type, "resource", UnmanagedAsmResourceBlob::allocate(data));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:237:3: note: in instantiation of function template specialization 'checkNativeAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeAccess<AttrT, T>(builder.getContext(), llvm::makeArrayRef(data),
^
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:258:3: note: in instantiation of function template specialization 'checkNativeIntAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeIntAccess<DenseI8ResourceElementsAttr, int8_t>(builder, 8);
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:221:3: note: candidate template ignored: requirement '!std::is_same<char, char>::value' was not satisfied [with T = char]
allocate(ArrayRef<T> data, bool dataIsMutable = false) {
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:214:26: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
static AsmResourceBlob allocate(ArrayRef<char> data, size_t align,
^
I suspect this happens because `char` is `signed` by default on Solaris.
Tested on `amd64-pc-solaris2.11` and `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D131148
This prepares patterns that sometimes are generated by the front-end
and would prohibit fusion of SDDMM flavored kernels.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D131126
The functions are effectively independent of the interface already; however, they take it as an argument for no reason.
The current state complicates reuse outside of MLIR.
Differential Revision: https://reviews.llvm.org/D131120
`SymbolTable::lookupSymbolIn` is an expensive operation and we do not want to do it twice
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D131145
Previously, cf.br, cf.cond_br, and cf.switch always lowered to their LLVM
equivalents. These ops are all ops that take in some values of given
types and jump to other blocks with argument lists of the same types. If
the types are not the same, a verification failure will later occur. This led
to confusion, as everything works when func->llvm and cf->llvm lowering
both occur because func->llvm updates the blocks and argument lists
while cf->llvm updates the branching ops. Without func->llvm though,
there will potentially be a type mismatch.
This change now only lowers the CF ops if they will later pass
verification. This is possible because the parent op and its blocks will
be updated before the contained branching ops, so they can test their
new operand types against the types of the blocks they jump to.
Another plan was to have func->llvm only update the entry block
signature and to allow cf->llvm to update all other blocks, but this had
2 problems:
1. This would create a FuncOp lowering in cf->llvm lowering which is
awkward
2. This new pattern would only be applied if the containing FuncOp is
marked invalid. This is infeasible with the shared LLVM type
conversion/target infrastructure.
See previous discussions at
https://discourse.llvm.org/t/lowering-cf-to-llvm/63863 and
https://github.com/llvm/llvm-project/issues/55301
Differential Revision: https://reviews.llvm.org/D130971
This patch adds a DenseI1ArrayAttr to support arrays of i1. Importantly,
the implementation is as a simple `ArrayRef<bool>` instead of using bit
compression, which was problematic in DenseElementsAttr.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D130957
This change separates computation of the actual parameters of the subset and
the materialization of subview/extract_slice. That way the users can still use
Linalg tiling logic even if they use different operations to materialize the
subsets.
Differential Revision: https://reviews.llvm.org/D131053
Using if (TARGET ${LLVM_NATIVE_ARCH}) only works if MLIR is built
together with LLVM, but not for standalone builds of MLIR. The
correct way to check this is
if (${LLVM_NATIVE_ARCH} IN_LIST LLVM_TARGETS_TO_BUILD), as the
LLVM build system exports LLVM_TARGETS_TO_BUILD.
To avoid repeating the same check many times, add a
MLIR_ENABLE_EXECUTION_ENGINE variable.
Differential Revision: https://reviews.llvm.org/D131071
It seems only the default implementation is ever used, so it doesn't seem
necessary to include this method in the interface.
Differential Revision: https://reviews.llvm.org/D130986
The reconciliation pass has been improved to support chains of casts, so that reconciliation is no longer limited to considering just pairs of unrealized casts.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D130711
As reported in Issue #56850, mlir/lib/Analysis/Presburger/Utils.cpp doesn't
compile on Solaris 11.4/SPARC with the bundled GCC 11, as seen when testing
LLVM 15.0.0 rc1:
/var/llvm/reltest/llvm-15.0.0-rc1/rc1/llvm-project/mlir/include/mlir/Analysis/Presburger/MPInt.h:260:47:
error: inlining failed in call to ‘always_inline’ ‘int64_t
mlir::presburger::int64FromMPInt(const mlir::presburger::MPInt&)’: indirect
function call with a yet undetermined callee
This patch hacks around this and allows the build to finish.
Tested on `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D131060
This patch adds constant folder for AtanOp which only supports single and double precision floating-point.
Differential Revision: https://reviews.llvm.org/D130983
llvm::Optional is in the process of switching to the
std::optional-like interface with has_value/value as opposed to
hasValue/getValue.
This patch adds has_value and value to enable the same transition.
Differential Revision: https://reviews.llvm.org/D130819
In some cases the recursion will grow the `visited` hash table and
invalidate the cached iterator.
(caught with ASAN)
Differential Revision: https://reviews.llvm.org/D131027
In the ROCm runtime (and probably CUDA as well), all kernel arguments
are aligned. Therefore, enable using bare pointers for memref
arguments to kernels when these memrefs have static shape and a
trivial layout.
This is a substantial optimization to launching kernels that use
memrefs with known, static sizes, since it causes the kernel launch
packet to no longer include information already known to the kernel,
which can enable packing the kernel launch arguments into launch
packets instead of having to allocate an entire separate structure to
hold unneeded memref information.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D130716
This rewriting was no longer functional after recent migration to one shot
bufferization. However, this revision makes it work again, with a CHECK test
to ensure fusion happens. Note that functionality is tested by several
integration tests.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D130996
The generic allocation and deallocation instructions, which are optionally used during the MemRef -> LLVM conversion, should have a name that is specifically bound to their origin, that is the conversion pass itself.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D130588
This patch adds constant folder for TanhOp which only supports single and double precision floating-point.
Differential Revision: https://reviews.llvm.org/D130960
When the mlir-tblgen tool is set up, the `MLIR_TABLEGEN_EXE` variable
is set, which either points to the mlir-tblgen tool built in the current
cmake build, or points to one built in a nested cmake build (if cross
compiling, or if building with e.g. `LLVM_OPTIMIZED_TABLEGEN`).
The `MLIR_TABLEGEN_EXE` variable is only set within the scope of the
mlir/CMakeLists.txt file, so it's unavailable in sibling level projects
such as flang.
Set the `MLIR_TABLEGEN_EXE` and the `MLIR_TABLEGEN_TARGET` variables
as global, so that flang can use them properly without guessing.
Differential Revision: https://reviews.llvm.org/D130350
This patch adds constant folder for TanOp which only supports single and double precision floating-point.
Differential Revision: https://reviews.llvm.org/D130873
Previously, DenseArrayAttr used VectorType for its shaped type.
VectorType is problematic for arrays because it doesn't support zero
dimensions, meaning that an empty array would have `vector<i32>` as its
type. ElementsAttr would think that an empty dense array is size 1, not
0. This patch switches over to TensorType, which does support zero
dimensions.
Fixes#56860
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D130921
Adds an optional attribute to support tensor cores on the F32 datatype by lowering to `mma.sync` with TF32 operands. Since TF32 is not a native datatype in LLVM, we are adding `tf32Enabled` as an attribute to allow the IR to be aware of the `MmaSyncOp` datatype. Additionally, this patch adds placeholders for nvgpu-to-nvgpu transformation targeting higher precision tf32x3.
For mma.sync on f32 input using tensor cores there are two possibilities:
(a) tf32 (1 `mma.sync` per warp-level matrix-multiply-accumulate)
(b) tf32x3 (3 `mma.sync` per warp-level matrix-multiply-accumulate)
Typically, tf32 tensor core acceleration comes at a cost of accuracy from missing precision bits. While f32 has 23 precision bits, tf32 has only 10 precision bits. tf32x3 aims to recover the precision bits by splitting each operand into two tf32 values and issue three `mma.sync` tensor core operations.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D130294
This attribute is technical debt from the early stages of MLIR, before
ElementsAttr was an interface and when it was more difficult for
dialects to define their own types of attributes. At present it isn't
used at all in tree (aside from being convenient for eliding other
ElementsAttr), and has had little to no evolution in the past three years.
Differential Revision: https://reviews.llvm.org/D129917
This attribute is intended to cover the current set of use cases that abuse
DenseElementsAttr, e.g. when the data is large. Using resources for large
data is one of the major reasons why they were added; e.g. they can be
deallocated mid-compilation, they support a wide variety of data origins
(e.g, heap allocated, mmap'd, etc.), they can support mutation, etc.
I considered at length not having a builtin variant of this, and instead
having multiple versions of this attribute for dialects that are interested,
but they all boiled down to the exact same attribute definition. Given the
generality of this attribute, it feels more aligned to keep it next to DenseArrayAttr
(given that DenseArrayAttr covers the "small" case, and DenseResourcesElementsAttr
covers the "large" case). The underlying infra used to build this attribute is
general, and having a builtin attribute doesn't preclude users from defining
their own when it makes sense (they can even share a blob manager with the
builtin dialect to avoid data duplication).
Differential Revision: https://reviews.llvm.org/D130022
The DialectResourceBlobManager class provides functionality for managing resource blobs
in a generic, dialect-agnostic fashion. In addition to this class, a dialect interface and custom
resource handle are provided to simplify referencing and interacting with the manager. These
classes intend to simplify the work required for dialects that want to manage resource blobs
during compilation, such as for large elements attrs. The old manager for the resource example
in the test dialect has been updated to use this, which provides a cleaner and more consistent API.
This commit also adds new HeapAsmResourceBlob and ImmortalAsmResourceBlob to simplify
creating resource blobs in common scenarios.
Differential Revision: https://reviews.llvm.org/D130021
Attempting to apply the range analysis to real code revealed that
trunci wasn't correctly handling the case where truncation would
create wider ranges - for example, if we truncate [255, 257] : i16 to
i8, the result can be 255, 0, or 1, which isn't a contiguous range of
values.
The previous implementation would naively map this to [255, 1], which
would cause issues with unsigned ranges and unification.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D130501