This change refactors and cleans up the implementation of the operation walk methods. After this refactoring, the explicit template parameter for the operation type is no longer needed for explicit op walks. For example:
op->walk<AffineForOp>([](AffineForOp op) { ... });
is now accomplished via:
op->walk([](AffineForOp op) { ... });
PiperOrigin-RevId: 266209552
- extend canonicalizeMapAndOperands to propagate constant operands into
the map's expressions (and thus drop those operands).
- canonicalizeMapAndOperands previously only dropped duplicate and
unused operands; however, operands that were constants were
retained.
This change makes IR maps/expressions generated by various
utilities/passes even simpler; it also makes some of the test checks more
accurate and simpler -- e.g., '0' instead of symbol(%{{.*}}).
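A minimal sketch of how a caller might use the extended utility (the header
path, the pointer-based signature, and the values in the comments are
assumptions/illustrations, not taken from this change):
```
// Assumed header location and signature for this revision.
#include "mlir/AffineOps/AffineOps.h"
#include "mlir/IR/AffineMap.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

static void simplifyMapWithConstantOperand(AffineMap &map,
                                           SmallVectorImpl<Value *> &operands) {
  // Before: map = (d0)[s0] -> (d0 + s0), operands = {%i, %c42}
  canonicalizeMapAndOperands(&map, &operands);
  // After:  map = (d0) -> (d0 + 42), operands = {%i}
  // The constant operand is propagated into the map's expression and dropped
  // from the operand list, in addition to the existing removal of duplicate
  // and unused operands.
}
```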
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#107
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/107 from bondhugula:canonicalize-maps c889a51486d14fbf7db489f224f881e7e1ff7d72
PiperOrigin-RevId: 266085289
- some of it has been adapted from LLVM's vim utils
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#90
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/90 from bondhugula:vim 22b1c958818c4b09de0ec8e1d7a4893171a03dbf
PiperOrigin-RevId: 266071752
The pass manager is moving towards being able to run on operations at arbitrary nesting. An operation may have both parent and child operations, and the AnalysisManager must be able to handle this generalization. The AnalysisManager class now contains generic 'getCachedParentAnalysis' and 'getChildAnalysis/getCachedChildAnalysis' functions to query analyses on parent/child operations. This removes the hard coded nesting relationship between Module/Function.
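A rough sketch of querying analyses across the nesting hierarchy with these
functions (the header location, DominanceInfo as the example analysis, and the
exact parameter/return forms of the cached queries are assumptions):
```
#include "mlir/IR/Dominance.h"
#include "mlir/Pass/AnalysisManager.h"  // assumed header location

using namespace mlir;

static void queryNestedAnalyses(AnalysisManager am, Operation *parentOp,
                                Operation *childOp) {
  // Compute (or reuse) an analysis for an operation nested below the one this
  // AnalysisManager is attached to.
  DominanceInfo &childDomInfo = am.getChildAnalysis<DominanceInfo>(childOp);
  (void)childDomInfo;

  // Parent analyses are only looked up, never computed on demand, so the
  // cached query may come back empty.
  if (auto cachedParent = am.getCachedParentAnalysis<DominanceInfo>(parentOp))
    (void)cachedParent;  // use the cached parent analysis here.

  // The cached child query behaves the same way for nested operations.
  if (auto cachedChild = am.getCachedChildAnalysis<DominanceInfo>(childOp))
    (void)cachedChild;
}
```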
PiperOrigin-RevId: 266003636
Tweak to the pretty type parser to recognize that `->` is a special token that
shouldn't be split into two characters. This change allows dialect
types to wrap function types as in `!my.ptr_type<(i32) -> i32>`.
Closes tensorflow/mlir#105
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/105 from schweitzpgi:parse-arrow 8b2d768053f419daae5a1a864121a44c4319acbe
PiperOrigin-RevId: 265986240
Instead of lowering the program in two steps (Standard->LLVM followed
by GPU->NVVM), which produced invalid IR in between, the runner now uses
a single pattern-based rewrite step to go directly from Standard+GPU to
LLVM+NVVM.
PiperOrigin-RevId: 265861934
Refactor replaceAllMemRefUsesWith to split it into two methods: the new
method does the replacement on a single op, and is used by the existing
one.
- make the methods return LogicalResult instead of bool
- Earlier, when replacement failed (due to non-dereferencing uses of the
memref), the set of ops that had already been processed would have
been replaced, leaving the IR in an inconsistent state. Now, a
pass is first made over all ops to check for non-dereferencing
uses, and only then is the replacement performed. No test cases were
affected because all clients of this method were already checking for
non-dereferencing uses before calling it (for other reasons). This
isn't true for a use case in another upcoming PR (scalar replacement);
clients can now bail out with consistent IR when
replaceAllMemRefUsesWith fails (see the sketch below). Add a test case.
- multiple dereferencing uses of the same memref in a single op are
possible (we have no such use cases/scenarios), and this has always
remained unsupported. Add an assertion for this.
- minor fix to another pipeline-data-transfer test case.
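A minimal sketch of the new failure handling from a caller's point of view
(the header location and reliance on the utility's defaulted trailing
parameters are assumptions):
```
#include "mlir/Transforms/Utils.h"  // assumed header location

using namespace mlir;

static LogicalResult tryReplaceMemRef(Value *oldMemRef, Value *newMemRef) {
  // If oldMemRef has any non-dereferencing use (e.g. it escapes through a
  // call or a return), the utility now fails *before* rewriting anything, so
  // the IR is still consistent here and the caller can simply bail out.
  if (failed(replaceAllMemRefUsesWith(oldMemRef, newMemRef)))
    return failure();
  return success();
}
```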
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#87
PiperOrigin-RevId: 265808183
The code and documentation for this chapter of the tutorial have been updated to follow the new flow. The toy 'array' type has been replaced by usages of the MLIR tensor type. The code has also been cleaned up and modernized.
Closes tensorflow/mlir#101
PiperOrigin-RevId: 265744086
Each basic block in SPIR-V must start with an OpLabel instruction.
We don't support control flow yet, so this CL just makes sure that
the entry block follows this rule and is valid.
PiperOrigin-RevId: 265718841
To support the conversion of a simple load-compute-store kernel from the GPU
dialect to the SPIR-V dialect, operations like "gpu.block_dim" and
"gpu.thread_id", which allow threads to query the launch configuration, need
to be converted. In SPIR-V these are specified as global variables with
builtin attributes. This CL adds support for specifying builtin variables in
the SPIR-V conversion framework, which is used to convert the relevant
operations from the GPU dialect to the SPIR-V dialect.
Also add support for converting load/store operations in the Standard
dialect to the SPIR-V dialect.
To simplify the conversion, add a method to build a spv.AccessChain
operation that automatically determines the return type based on the
base pointer type and the indices provided.
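A sketch of what a call site could look like with such a builder (the header
paths, the helper name, and the exact builder overload are assumptions; the
point is that no explicit result type is passed):
```
#include "llvm/ADT/ArrayRef.h"
#include "mlir/Dialect/SPIRV/SPIRVOps.h"  // assumed header location
#include "mlir/IR/Builders.h"

using namespace mlir;

// Hypothetical helper for illustration.
static Value *buildElementAccess(OpBuilder &builder, Location loc,
                                 Value *basePtr, ArrayRef<Value *> indices) {
  // The result pointer type is inferred from basePtr's pointee type and the
  // indices, instead of being spelled out by the caller.
  auto chain = builder.create<spirv::AccessChainOp>(loc, basePtr, indices);
  return chain.getResult();
}
```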
PiperOrigin-RevId: 265718525
Change the use of 'array' to 'tensor' to reflect the new flow that the tutorial will follow. Also tidy up some of the documentation and code comments, and fix a few outdated links.
PiperOrigin-RevId: 265174676
Add an extra RewritePattern that does not convert types to rewrite a CopyOp that has non-identity permutations into a sequence of TransposeOp followed by a CopyOp without such permutations.
This RewritePattern is made to fail in the non-permutation case so that the conversion pattern can kick in to lower to LLVM.
This is an instance of A->A->B lowering where A->A is done by a RewritePattern in case_1 and A->B is done by a ConversionPatternRewriter when not(case_1).
PiperOrigin-RevId: 265171380
Add a conversion pattern that transforms a linalg.transpose op into:
1. A function entry `alloca` operation to allocate a ViewDescriptor.
2. A load of the ViewDescriptor from the pointer allocated in 1.
3. Updates to the ViewDescriptor to introduce the data ptr, offset, size
and stride. Size and stride are permutations of the original values.
4. A store of the resulting ViewDescriptor to the alloca'ed pointer.
The linalg.transpose op is replaced by the alloca'ed pointer.
PiperOrigin-RevId: 265169112
A linalg.transpose op is a pure metadata operation: it takes a view and a
permutation map and produces another view of the same underlying data with a
different reindexing, without touching the data itself.
Example:
```
%t = linalg.transpose %v (i, j) -> (j, i) : !linalg.view<?x?xf32>
```
PiperOrigin-RevId: 265139429
This CL extends the lowering of linalg to external C++ libraries to support CopyOp. Currently this only works when the permutation maps in the copies are the identity; support for non-identity permutations will be added later.
PiperOrigin-RevId: 265093025
This will allow iterating the values of a non-opaque ElementsAttr, with all of the types currently supported by DenseElementsAttr. This should help reduce the amount of specialization on DenseElementsAttr.
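A sketch of the kind of generic iteration this enables (the accessor is
assumed to mirror DenseElementsAttr's getValues<T>() form):
```
#include "llvm/ADT/APInt.h"
#include "mlir/IR/Attributes.h"

using namespace mlir;

static int64_t sumIntegerElements(ElementsAttr attr) {
  int64_t sum = 0;
  // Iterate the values directly on ElementsAttr, without casting to
  // DenseElementsAttr first; APInt is one of the element types
  // DenseElementsAttr already supports.
  for (llvm::APInt value : attr.getValues<llvm::APInt>())
    sum += value.getSExtValue();
  return sum;
}
```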
PiperOrigin-RevId: 264968151
* Add a section on dialect attribute values and attribute aliases
* Move FloatAttr into its alphabetically correct place
* Add a "Standard Attribute Values" section
PiperOrigin-RevId: 264959306
* Alphabetize the type definitions
* Make 'Dialect specific types' a type-system subsection
* Merge Builtin types and Standard types
PiperOrigin-RevId: 264947721
Both sections are out-of-date and need to be updated. The dialect section is particularly bad in that it never actually mentions what a 'Dialect' is.
PiperOrigin-RevId: 264937905
linalg.subview used to lower to a slice with a bounded range, resulting in correctly bounded accesses. However, linalg.slice could still index out of bounds. This CL moves the bounding to linalg.slice.
LLVM select and cmp ops gain a more idiomatic builder.
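A sketch of the kind of call site this enables (the header location, helper
name, and exact overloads are assumptions; the point is that the result types
no longer need to be constructed by hand):
```
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"  // assumed header location
#include "mlir/IR/Builders.h"

using namespace mlir;

// Hypothetical helper for illustration.
static Value *clampToZero(OpBuilder &b, Location loc, Value *x, Value *zero) {
  // Compare, then select, without spelling out the LLVM i1/result types.
  Value *isNegative =
      b.create<LLVM::ICmpOp>(loc, LLVM::ICmpPredicate::slt, x, zero)
          .getResult();
  Value *clamped =
      b.create<LLVM::SelectOp>(loc, isNegative, zero, x).getResult();
  return clamped;
}
```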
PiperOrigin-RevId: 264897125
This commit adds `PositiveI32Attr` and `PositiveI64Attr` to match positive
integers, excluding zero and negative integers. This commit also adds
`HasAnyRankOfPred` to match tensors with the specified ranks.
PiperOrigin-RevId: 264867046
Split the method out into specialized instances and add an early exit. Should be NFC, but it simplifies reading the logic slightly IMHO.
PiperOrigin-RevId: 264855529