* Add a getCurrentLocation that returns the location directly.
* Add parseOperandList/parseTrailingOperandList overloads without the required operand count.
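As a rough, version-dependent sketch (the parser hook signature and the op being parsed are hypothetical, and the operand-type spelling differs across MLIR releases), these additions could be used in a custom parser roughly like this:

static ParseResult parseMyOp(OpAsmParser &parser, OperationState &result) {
  // New: get the current location directly rather than through an SMLoc
  // out-parameter.
  llvm::SMLoc operandsLoc = parser.getCurrentLocation();

  // New overload: parse however many operands appear, without specifying a
  // required operand count up front.
  llvm::SmallVector<OpAsmParser::OperandType, 4> operands;
  if (parser.parseOperandList(operands))
    return failure();

  // ... resolve the operands and parse types/attributes as usual, using
  // operandsLoc for error reporting.
  return success();
}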
PiperOrigin-RevId: 251585488
Considered adding more placeholders to designate types in the replacement pattern, but decided for now to stick to the simpler approach. This should at least enable specifying constraints across operands/results/attributes, and we can start getting rid of the special cases.
PiperOrigin-RevId: 251564893
This CL adds SPV_ModuleEndOp for terminating the only block inside a
SPV_ModuleOp's only region. Verification now enforces that a spv.module only
contains func or spv.* ops and that no external or nested functions are
present. Because of the structural requirement of a block, spv.Return
is also added in this CL.
PiperOrigin-RevId: 251510706
- added a typed walk to Block (matching the equivalent on Function); see the sketch after this list
- added token parsers (including optional variants) for ':' and '('
- added applyConversionPatterns that takes a list of functions to apply patterns to
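As a hedged illustration of the typed walk (the op type below is just an example, and whether the op type is given explicitly or deduced from the callback argument depends on the MLIR version):

// Visits only operations of the matching type nested under the block.
void countLoads(Block &block) {
  unsigned numLoads = 0;
  block.walk([&](LoadOp loadOp) { ++numLoads; });
  (void)numLoads;
}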
PiperOrigin-RevId: 251481608
Currently, we use different names for block and grid dimensions in the GPU vs. NVVM dialect, which is confusing. With this change, the NVVM names match the ones from the GPU dialect. The NVVM intrinsics use different names anyway, so it does not add new confusion.
PiperOrigin-RevId: 251389281
To accomplish this, moving forward users will need to provide a legalization target that defines what operations are legal for the conversion. A target can mark an operation as legal by providing a specific legalization action. The initial actions are:
* Legal
  - This action signals that every instance of the given operation is legal,
    i.e. any combination of attributes, operands, types, etc. is valid.
* Dynamic
  - This action signals that only some instances of a given operation are legal.
    This allows for defining fine-grained constraints, e.g. std.add is only legal
    when operating on 32-bit integers.
An example target is shown below:
struct MyTarget : public ConversionTarget {
  MyTarget(MLIRContext &ctx) : ConversionTarget(ctx) {
    // All operations in the LLVM dialect are legal.
    addLegalDialect<LLVMDialect>();

    // std.constant op is always legal on this target.
    addLegalOp<ConstantOp>();

    // std.return op has dynamic legality constraints.
    addDynamicallyLegalOp<ReturnOp>();
  }

  /// Implement the custom legalization handler to handle std.return.
  bool isLegal(Operation *op) override {
    // Process the dynamic handling for a std.return op (and any others that
    // were marked "dynamic").
    ...
  }
};
PiperOrigin-RevId: 251289374
The initial implementation of SDBM mistakenly swapped the order of variables in
the inequalities induced by a stripe equality: y = x # B actually implies
y - x <= 0 and x - y <= B - 1 rather than x - y <= 0 and y - x <= B - 1 as
implemented. For example, with B = 4 and x = 7, the stripe y = x # B gives y = 4,
so y - x = -3 <= 0 and x - y = 3 <= B - 1. Textual comments in the test files
were correct but did not
correspond to the emitted IR. Round-tripping between SDBM and expression lists
was not affected because the wrong order was used in both directions of the
conversion. Use the correct order.
PiperOrigin-RevId: 251252980
When manipulating generic operations, such as in dialect conversion /
rewriting, it is often necessary to view a list of Values as operands to an
operation without creating the operation itself. The absence of such a view
forces dialect conversion patterns, among others, to use magic numbers to obtain
specific operands from a list of rewritten values when converting an operation.
Introduce XOpOperandAdaptor classes that wrap an ArrayRef<Value *> and provide
accessor functions identical to those available in XOp. This makes it possible
for conversions to use these adaptors to address the operands with names rather
than rely on their position in the list. The adaptors are generated from ODS
together with the actual operation definitions.
This is another step towards making dialect conversion patterns specific to a
given operation.
Illustrate the approach on conversion patterns in the standard to LLVM dialect
conversion.
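As a rough illustration (the class name, member name, and accessor bodies below are hypothetical, not the literal ODS output), the adaptor generated for a binary op with operands named lhs and rhs might look like:

#include "llvm/ADT/ArrayRef.h"

namespace mlir {
class Value;

// Wraps a plain list of values and exposes the same named accessors as the
// op class itself.
class AddFOpOperandAdaptor {
public:
  explicit AddFOpOperandAdaptor(llvm::ArrayRef<Value *> values)
      : operands(values) {}

  // Conversion patterns can refer to operands by name instead of by a magic
  // index into the rewritten value list.
  Value *lhs() { return operands[0]; }
  Value *rhs() { return operands[1]; }

private:
  llvm::ArrayRef<Value *> operands;
};
} // namespace mlir

A conversion pattern would then construct the adaptor from the rewritten operand list it receives and call lhs()/rhs() instead of indexing that list directly.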
PiperOrigin-RevId: 251232899
* Fix a miscompile on older clang versions when initializing an ArrayRef with an unsigned variadic template argument.
* Update the errors to use the streaming interface instead of Twine.
PiperOrigin-RevId: 251223929
This script parses the SPIR-V JSON grammar to extract operand kinds that
are enums and generate TableGen definitions for them.
Also added a shell script that points to the correct relative file location
to simplify command invocation.
PiperOrigin-RevId: 251041084
We want to support 64-bit shapes (even when the compiler itself runs on a 32-bit architecture). Using int64_t consistently lets us sidestep the pitfalls of unsigned arithmetic, as illustrated below.
Still unsigned: kind, memory space, and bit width. The first two are basically enums. We could have a discussion about the last one, but it's basically just a very large enum as well and we're not doing any math on it, I think.
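For illustration only (not code from this change), this is the kind of unsigned pitfall that consistent int64_t shape arithmetic avoids:

#include <cstdint>
#include <cstdio>

int main() {
  // Subtracting a larger extent from a smaller one wraps around with
  // unsigned types instead of producing a negative value.
  unsigned smallDim = 2, largeDim = 5;
  unsigned wrapped = smallDim - largeDim;  // a huge positive number

  // With int64_t the result is the expected -3, and 64-bit extents fit even
  // when the compiler itself runs on a 32-bit host.
  int64_t diff = static_cast<int64_t>(smallDim) - largeDim;

  std::printf("unsigned: %u, signed: %lld\n", wrapped,
              static_cast<long long>(diff));
  return 0;
}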
PiperOrigin-RevId: 250985791
These were just introduced by a previous CL moving MemRef getRank to return int64_t. size_t could be smaller than 64 bits, and in equality comparisons signed vs. unsigned doesn't matter. In these cases we know right now that the particular int64_t is not larger than the maximum size_t (because it currently comes directly from a size() call), so the alternative of casting and then comparing for equality is always safe; we might as well do it that way and no longer require reasoning deeper into the call stack (see the sketch below).
We are already assuming that size() calls fit into int64_t in a number of other cases, like the aforementioned getRank() (since exabytes of RAM are rare). If we want to avoid this assumption, we will have to come up with a principled way to do it throughout.
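A minimal sketch of the comparison pattern being described, with made-up names:

#include <cstdint>
#include <vector>

// Compare a 64-bit rank against a container's size() by casting the size to
// int64_t first, instead of comparing signed against unsigned directly. This
// is safe as long as size() fits in int64_t, which is the assumption above.
bool rankMatchesOperandCount(int64_t rank, const std::vector<int> &operands) {
  return rank == static_cast<int64_t>(operands.size());
}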
PiperOrigin-RevId: 250980297
Extract common methods into ShapedType.
Simplify methods.
Remove some extraneous asserts.
Replace sentinel value comparisons with a helper method that performs the same check.
PiperOrigin-RevId: 250945261
MemRefs have the same notion of shape, rank, and fixed element type as vectors and tensors. This allows us to reuse shape-based utilities for memrefs.
All dyn_cast and isa calls for ShapedType have been checked and either modified to explicitly check for vector or tensor, or confirmed to not depend on the result being a vector or tensor.
Discussion in https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/cHLoyfGu8y8
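A rough sketch of the resulting usage (the header location and exact casting API differ across MLIR versions, and the helper names are made up):

#include "mlir/IR/StandardTypes.h"  // ShapedType, VectorType, TensorType

// With MemRefType now a ShapedType, shape-based utilities apply uniformly to
// vectors, tensors, and memrefs.
int64_t getRankOrSentinel(mlir::Type type) {
  if (auto shaped = type.dyn_cast<mlir::ShapedType>())
    return shaped.hasRank() ? shaped.getRank() : -1;
  return -1;
}

// Code that really means "vector or tensor" must now check for those types
// explicitly instead of relying on isa<ShapedType>.
bool isVectorOrTensor(mlir::Type type) {
  return type.isa<mlir::VectorType>() || type.isa<mlir::TensorType>();
}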
PiperOrigin-RevId: 250945184