MSVC has trouble disambiguating the static 'printOptionValue' from the method of the same name on llvm::cl::opt/list. This change renames the static method to avoid this conflict.
This change refactors pass options to be more similar to how statistics are modeled. More specifically, the options are specified directly on the pass instead of in a separate options class. (Note that the behavior and specification for pass pipelines remains the same.) This brings about several benefits:
* The specification of options is much simpler
* The round-trip format of a pass can be generated automatically
* This gives a somewhat deeper integration with "configuring" a pass, which we could potentially expose to users in the future.
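For illustration, a rough sketch of the new style, assuming the Option member API; the pass name, option name, and body are hypothetical:
struct MyPass : public FunctionPass<MyPass> {
  // The option lives directly on the pass, like a statistic, which lets the
  // textual pipeline form of the pass be generated automatically.
  Option<bool> example{*this, "example",
                       llvm::cl::desc("An example boolean option"),
                       llvm::cl::init(false)};

  void runOnFunction() override {
    // Read the parsed value directly from the member.
    if (example)
      markAllAnalysesPreserved();
  }
};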
PiperOrigin-RevId: 286953824
This means that in-place, or root, updates need to use explicit calls to `startRootUpdate`, `finalizeRootUpdate`, and `cancelRootUpdate`. The major benefit of this change is that it enables in-place updates in DialectConversion, which simplifies the FuncOp pattern for example. The major downside is that cases that *may* modify an operation in-place need an explicit cancel on their failure branches (assuming they started an update before attempting the transformation).
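For illustration, a rough sketch of the resulting pattern shape; MyOp and tryRewriteInPlace are hypothetical names, not part of this change:
struct MyOpInPlacePattern : public OpRewritePattern<MyOp> {
  using OpRewritePattern<MyOp>::OpRewritePattern;

  PatternMatchResult matchAndRewrite(MyOp op,
                                     PatternRewriter &rewriter) const override {
    rewriter.startRootUpdate(op);
    if (failed(tryRewriteInPlace(op, rewriter))) {
      // An update was started, so the failure branch must explicitly cancel it.
      rewriter.cancelRootUpdate(op);
      return matchFailure();
    }
    rewriter.finalizeRootUpdate(op);
    return matchSuccess();
  }
};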
PiperOrigin-RevId: 286933674
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286930047
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286919966
This is an initial step toward refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.
Also add 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').
The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to be to
omit dots in standard operations for now.
Closes tensorflow/mlir#226
PiperOrigin-RevId: 286803388
This requires using explicitly defaulted copy constructors and copy assignment
operators instead of hand-rolled ones. These classes are indeed cheap to copy
since they are wrappers around a pointer to the implementation. This change
makes sure templated code can use standard type traits to understand that
copying such objects is cheap and appeases analysis tools such as clang-tidy.
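For illustration, a minimal sketch with a hypothetical wrapper class:
#include <type_traits>

class OpaqueHandle {
public:
  explicit OpaqueHandle(void *impl) : impl(impl) {}
  // Explicitly defaulted instead of hand-rolled: a copy is just a pointer copy.
  OpaqueHandle(const OpaqueHandle &) = default;
  OpaqueHandle &operator=(const OpaqueHandle &) = default;

private:
  void *impl;
};

// Templated code and tools such as clang-tidy can now see this through
// standard type traits.
static_assert(std::is_trivially_copyable<OpaqueHandle>::value,
              "handle copies are trivial");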
PiperOrigin-RevId: 286725565
This will allow us to lower most of gpu.all_reduce (when all_reduce
doesn't exist in the target dialect) within the GPU dialect, and only do
target-specific lowering for the shuffle op.
PiperOrigin-RevId: 286548256
This is the block argument equivalent of the existing `getAsmResultNames` hook.
Closes tensorflow/mlir#329
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/329 from plaidml:flaub-region-arg-names fc7876f2d1335024e441083cd25263fd6247eb7d
PiperOrigin-RevId: 286523299
Concatenating lists in TableGen is easy; creating unique lists is less so. There is no reason for duplicated op traits, so we could throw an error instead, but duplicates can occur when concatenating different lists of traits in ODS (e.g., for convenience), so just dedup them during Operator trait construction instead (see the sketch below).
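For illustration, a rough sketch of the order-preserving deduplication involved; this is a hypothetical helper, not the actual tblgen code:
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"

// Keep only the first occurrence of each trait, preserving declaration order.
template <typename T>
llvm::SmallVector<T *, 8> dedupTraits(llvm::ArrayRef<T *> traits) {
  llvm::SmallPtrSet<T *, 8> seen;
  llvm::SmallVector<T *, 8> result;
  for (T *trait : traits)
    if (seen.insert(trait).second)
      result.push_back(trait);
  return result;
}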
PiperOrigin-RevId: 286488423
Update vector transfer_read/write ops to operate on memrefs with a vector element type.
This handles cases where the memref's vector element type represents the minimal memory transfer unit (or a multiple of it).
PiperOrigin-RevId: 286482115
This CL allows specifying an additional name for the .td file that is used to generate the documentation for a dialect. This is necessary for a dialect like Linalg, which has different "types" of ops that are used in different contexts.
This CL also restructures the Linalg documentation and renames LinalgLibraryOps -> LinalgStructuredOps but is otherwise NFC.
PiperOrigin-RevId: 286450414
Adds vector ReshapeOp to the VectorOps dialect. An aggregate vector reshape operation, which aggregates multiple hardware vectors, can enable optimizations during decomposition (e.g. loading one input hardware vector and performing multiple rotate and scatter store operations to the vector output).
PiperOrigin-RevId: 286440658
Introduces some centralized methods to move towards
consistent use of i32 as vector subscripts.
Note: sizes/strides/offsets attributes are still i64.
PiperOrigin-RevId: 286434133
This function template was introduced in the early days of MLIR to work
around the absence of a common type for ranges of values (operands, block
arguments, vectors, etc.). Core IR now provides ValueRange for exactly this
purpose. Use it instead of the template parameter.
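Schematically, with a hypothetical function:
// Before: a template parameter standing in for "any range of Values".
template <typename ValueRangeT>
void addOperands(ValueRangeT operands);

// After: the core IR type expresses the same thing without a template.
void addOperands(ValueRange operands);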
PiperOrigin-RevId: 286431338
Added test cases for the newly added LLVM operations and lowering features.
Closes tensorflow/mlir#300
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/300 from dfki-jugr:std_to_llvm da6168bbc1a369ae2e99ad3881fdddd82f075dd4
PiperOrigin-RevId: 286231169
This enables providing a default implementation of an interface method. This method is defined on the Trait that is attached to the operation, and thus has all of the same constraints and properties as any other interface method. This allows interface authors to provide a conservative default implementation for certain methods without requiring that all users explicitly define them. The default implementation can be specified via the argument directly after the interface method body:
StaticInterfaceMethod<
  /*desc=*/"Returns whether two arrays of types are compatible result types for an op.",
  /*retTy=*/"bool",
  /*methodName=*/"isCompatibleReturnTypes",
  /*args=*/(ins "ArrayRef<Type>":$lhs, "ArrayRef<Type>":$rhs),
  /*methodBody=*/[{
    return ConcreteOp::isCompatibleReturnTypes(lhs, rhs);
  }],
  /*defaultImplementation=*/[{
    /// Returns whether the two arrays are equal as the strongest check for
    /// compatibility by default.
    return lhs == rhs;
  }]
>;
PiperOrigin-RevId: 286226054
'```mlir' is used to indicate that the code block is MLIR code and should use MLIR
syntax highlighting, while '{.mlir}' was a markdown extension that used a style file
to color the background of the code block differently. The background color
extension was a custom one that we can retire given that we have syntax
highlighting.
Also change '```td' to '```tablegen' to match the chroma syntax-highlighting
designation.
PiperOrigin-RevId: 286222976
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; it is similar to affine.load but has no effect on
semantics, only on performance.
Provide lowering through std.prefetch and llvm.prefetch, and map to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, rw, and instr/data cache.
affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#225
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
The definition of the function template LLVM::ModuleTranslation::lookupValues
was located in a source file. As long as that was the only file that
actually called into the function, this did not cause any problems. However, it
creates linking issues if the function is used from other translation units.
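A sketch of the failure mode, with hypothetical file and function names:
// lookup.h
template <typename T> T *lookupValue(T *value);  // declaration only

// lookup.cpp
template <typename T> T *lookupValue(T *value) { return value; }

// user.cpp (another translation unit)
// lookupValue(ptr);  // undefined symbol at link time: the definition is not
//                    // visible here, so no instantiation is ever emitted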
PiperOrigin-RevId: 286203078
When memory attributions are present in `gpu.func`, require that they are of
memref type and live in memory spaces 3 and 5 for workgroup and private memory
attributions, respectively. Adapt the conversion from the GPU dialect to the
NVVM dialect to drop the private memory space from attributions as NVVM is able
to model them as local `llvm.alloca`s in the default memory space.
PiperOrigin-RevId: 286161763
The lowering of MemRef types to the LLVM dialect is connected to the underlying
runtime representation of structured memory buffers. It has changed several
times in the past and reached the current state of an LLVM structure-typed
descriptor containing two pointers and all sizes. In several reported use
cases, a different, often simpler, lowering scheme is required: for example,
lowering statically-shaped memrefs to bare LLVM pointers to simplify aliasing
annotation. Split the pattern population functions into those that include
memref-related operations and the remaining ones. Users are expected to extend
TypeConverter::convertType to handle the memref types differently.
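For illustration, a bare-pointer lowering could be sketched as follows; the subclass and helper are hypothetical, and treating convertType as the overridable hook is an assumption:
struct BarePtrTypeConverter : public LLVMTypeConverter {
  using LLVMTypeConverter::LLVMTypeConverter;

  Type convertType(Type type) override {
    // Lower statically-shaped memrefs to a plain pointer instead of the
    // full descriptor; convertToBarePointer is a hypothetical helper.
    if (auto memrefType = type.dyn_cast<MemRefType>())
      if (memrefType.hasStaticShape())
        return convertToBarePointer(memrefType);
    return LLVMTypeConverter::convertType(type);
  }
};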
PiperOrigin-RevId: 286030610
This class provides a simplified mechanism for defining a switch over a set of types using the llvm casting functionality. More specifically, this allows for defining a switch over a value of type T where each case corresponds to a type (CaseT) that can be used with dyn_cast<CaseT>(...). An example is shown below:
// Traditional piece of code:
Operation *op = ...;
if (auto constant = dyn_cast<ConstantOp>(op))
  ...;
else if (auto returnOp = dyn_cast<ReturnOp>(op))
  ...;
else
  ...;

// New piece of code:
Operation *op = ...;
TypeSwitch<Operation *>(op)
    .Case<ConstantOp>([](ConstantOp constant) { ... })
    .Case<ReturnOp>([](ReturnOp returnOp) { ... })
    .Default([](Operation *op) { ... });
Aside from the above, TypeSwitch supports return values, void return, multiple types per case, etc. The usability is intended to be very similar to StringSwitch.
(Using C++14 generic lambdas makes everything even nicer)
A more complex example of how this makes certain things easier:
LogicalResult process(ConstantOp op);
LogicalResult process(ReturnOp op);
LogicalResult process(FuncOp op);

TypeSwitch<Operation *, LogicalResult>(op)
    .Case<ConstantOp, ReturnOp, FuncOp>([](auto op) { return process(op); })
    .Default([](Operation *op) { return op->emitError() << "could not be processed"; });
PiperOrigin-RevId: 286003613
This CL adds more Linalg EDSC ops and tests to support building pointwise operations along with conv and dilated_conv.
This also fixes a bug in the existing linalg_matmul EDSC and beefs up the test.
The current set of ops is already enough to build an interesting, albeit simple, model used internally.
PiperOrigin-RevId: 285838012
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.
PiperOrigin-RevId: 285822272
This keeps the IR valid and consistent as it is expected that each block should have a valid parent region/operation. Previously, converted blocks were kept floating without a valid parent region.
PiperOrigin-RevId: 285821687
The conversion from the Loops dialect to the Standard dialect, also known as
loop-to-cfg lowering, has been historically a function pass. It can be required
on non-Standard function Ops, in particular the recently introduced GPU
functions. Make the conversion an operation pass instead of a function pass.
PiperOrigin-RevId: 285814560
This PR targets issue tensorflow/mlir#295. It exposes the already existing
subview promotion pass as a declarative pattern.
Change-Id: If901ebef9fb53fcd0b12ecc536f6b174ce320b92
Closes tensorflow/mlir#315
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/315 from tetuante:issue295 8e5f268b6d85f31015c33505329dbd7a4db97ac5
PiperOrigin-RevId: 285801463
Similar to the insert/extract vector instructions, but they
(1) work on 1-D vectors only
(2) allow for a dynamic index
%c3 = constant 3 : index
%0 = vector.insertelement %arg0, %arg1[%c3 : index] : vector<4xf32>
%1 = vector.extractelement %arg0[%c3 : index] : vector<4xf32>
PiperOrigin-RevId: 285792205
ExtractSlicesOp extracts slices of its vector operand with a specified tiling scheme.
This operation centralizes the tiling scheme around a single op, which simplifies vector op unrolling and subsequent pattern rewrite transformations.
PiperOrigin-RevId: 285761129
- bring the op description comment in sync with the doc
- fix misformatting in the doc
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#317
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/317 from bondhugula:quickfix 7fcd945b318c973b2488b702874c87526855c8ef
PiperOrigin-RevId: 285574527
This will be evolved into a simple programming model for custom ops and custom layers in followup CLs.
This CL also deletes the obsolete TableGen reference-impl.td that was using EDSCs.
PiperOrigin-RevId: 285459545