Start a doc generation pass that generates simple markdown output. The output is formatted simply[1] in markdown, but this allows seeing what info we have, where we can refine the op description (e.g., the inputs are probably redundant), and what info is missing (e.g., the attributes could probably have a description).
The formatting of the description is still left up to whatever was in the op definition (which luckily, due to the uniformity in the .td file, turned out well, but relying on the indentation there is fragile). The mechanism to automatically regenerate these docs after changes has not been added yet either. The output file could also be run through a markdown formatter to remove extra spaces.
[1] This is not a proposal for the final style :) There could also be a discussion around single doc vs multiple (per dialect, per op), whether we want a TOC, whether operands/attributes should be headings or just formatted differently ...
PiperOrigin-RevId: 230354538
This is needed to allow binding to more constant types.
Tests that exercise this behavior will come in a followup CL.
In the meantime this does not break things.
PiperOrigin-RevId: 230320621
1) Fix FloatAttr type inconsistency in conversion from tf.FusedBatchNorm to TFLite ops
We used to compose the splat tensor out of the scalar epsilon attribute by using the
type of the variance operand. However, the epsilon attribute may have a different
bitwidth than the variance operand's element type, so we ended up creating
inconsistent types within the FloatAttr itself (see the sketch after this list).
2) Fix SplatElementsAttr type inconsistency in AnnotateInputArrays
We need to create the zero-valued attribute according to the type provided via the
command-line arguments.
3) Concretize the result type of tf.Shape constant folding test case
Currently the resultant constant is created by the constant folding harness, using
the result type of the original op as the constant's result type. That can be
a different type than the constant's internal DenseElementsAttr.
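As an illustration of the bitwidth issue in 1), here is a minimal sketch (not the actual pass code; matchVarianceSemantics is a hypothetical helper) of converting the scalar epsilon value to the float semantics of the variance operand's element type before building the splat:
```cpp
#include "llvm/ADT/APFloat.h"

// Hypothetical helper, not the code in the CL: convert the scalar epsilon
// value to the float semantics of the variance operand's element type so the
// resulting FloatAttr's value and type agree on bitwidth.
static llvm::APFloat matchVarianceSemantics(llvm::APFloat epsilon,
                                            const llvm::fltSemantics &varianceSema) {
  bool losesInfo = false;
  epsilon.convert(varianceSema, llvm::APFloat::rmNearestTiesToEven, &losesInfo);
  return epsilon;
}
```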
PiperOrigin-RevId: 230244665
- print multiplication by -1 as unary negate; expressions like s0 * -1 and d0 * -1
+ d1 will now appear as -s0 and -d0 + d1, respectively.
- while on this, a minor cleanup in printAffineExprInternal
PiperOrigin-RevId: 230222151
This CL also makes ScopedEDSCContexts reset the Bindable numbering when
creating a new context.
This is useful for writing minimal tests that don't use FileCheck pattern
captures for now.
PiperOrigin-RevId: 230079997
This CL performs a bunch of cleanups related to EDSCs that are generally
useful in the context of using them with a simple wrapping C API (not in this
CL) and with simple language bindings to Python and Swift.
PiperOrigin-RevId: 230066505
- detected with memref-bound-check
- fixes b/123072438; while on this, fix another test case which was reported
out of bounds
PiperOrigin-RevId: 229978187
*) Enables reduction of private memref size based on MemRef region accessed by fused slice.
*) Enables maximal fusion by creating a private memref to break a fusion-preventing dependence.
*) Adds a maximal fusion flag to enable fusing as much as possible (though it still fuses the minimum-cost computation slice).
PiperOrigin-RevId: 229936698
This CL adds a test reported by andydavis@ and fixes the corner case that
appears when operands do not come from an AffineApply and no Dim composition
is needed.
In such cases, we would need to create an empty map which is disallowed.
The composition in such cases becomes trivial: there is no composition.
This CL also renames AffineNormalizer to AffineApplyNormalizer.
PiperOrigin-RevId: 229819234
Change MinMaxAttr to match hasValidMinMaxAttribute behavior. After rewriting the other users of that function, it could be removed too. The currently generated error message is:
error: 'tfl.fake_quant' op attribute 'minmax' failed to satisfy constraint of MinMaxAttr
PiperOrigin-RevId: 229775631
This CL fixes a misunderstanding in how to build DimOp which triggered
execution issues in the CPU path.
The problem is that, given a `memref<?x4x?x8x?xf32>`, the expressions to
construct the dynamic dimensions should be:
`dim %arg, 0 : memref<?x4x?x8x?xf32>`
`dim %arg, 2 : memref<?x4x?x8x?xf32>`
and
`dim %arg, 4 : memref<?x4x?x8x?xf32>`
Before this CL, we would construct:
`dim %arg, 0 : memref<?x4x?x8x?xf32>`
`dim %arg, 1 : memref<?x4x?x8x?xf32>`
`dim %arg, 2 : memref<?x4x?x8x?xf32>`
and expect the other dimensions to be constants.
This assumption seems consistent at first glance with the syntax of alloc:
```
%tensor = alloc(%M, %N, %O) : memref<?x4x?x8x?xf32>
```
But this was actually incorrect.
This CL also makes the relevant functions available to EDSCs and removes
duplication of the incorrect function.
PiperOrigin-RevId: 229622766
The operand and result types of binary ops are not necessarily the
same. For those binary ops, we cannot use the short-form assembly.
Enhance impl::printBinaryOp to consider operand and result types
to select which assembly form to use.
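A minimal sketch of the check this implies, with a hypothetical helper name (the real impl::printBinaryOp operates on the operation and the printer):
```cpp
#include "mlir/IR/Types.h"

// Hypothetical helper, not the actual impl::printBinaryOp: the short form
// prints a single trailing type, so it is only usable when both operand types
// match the result type; otherwise fall back to the generic/verbose form.
static bool canUseShortForm(mlir::Type lhs, mlir::Type rhs, mlir::Type result) {
  return lhs == result && rhs == result;
}
```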
PiperOrigin-RevId: 229608142
A recent change in TableGen definitions allowed arbitrary AND/OR predicate
compositions at the cost of removing known-true predicate simplification.
Introduce a more advanced simplification mechanism instead.
In particular, instead of folding predicate C++ expressions directly in
TableGen, keep them as-is and build a predicate tree in the TableGen C++ library.
The predicate expression-substitution mechanism, necessary to implement complex
predicates for nested classes such as `ContainerType`, is replaced by a
dedicated predicate. This predicate appears in the predicate tree and can be
used for tree matching and separation. More specifically, subtrees defined
below such a predicate may be subject to different transformations than those
that appear above. For example, a subtree known to be true above the
substitution predicate is not necessarily true below it.
Use the predicate tree structure to eliminate known-true and known-false
predicates before code emission, as well as to collapse AND and OR predicates
if their value can be deduced based on the value of one child.
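A standalone sketch of that collapsing step, with hypothetical names rather than the actual TableGen C++ library classes:
```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for the predicate tree; the real classes in the
// TableGen C++ library differ in detail.
enum class Kind { Leaf, And, Or, True, False };

struct PredNode {
  Kind kind;
  std::string expr;                                 // C++ expression for leaves
  std::vector<std::unique_ptr<PredNode>> children;  // operands of And/Or
};

// Collapse And/Or nodes whose value is decided by one child: an And with a
// known-false child becomes False, an Or with a known-true child becomes True.
// Known-true (resp. known-false) children of And (resp. Or) are dropped.
static std::unique_ptr<PredNode> simplify(std::unique_ptr<PredNode> node) {
  if (node->kind != Kind::And && node->kind != Kind::Or)
    return node;

  bool isAnd = node->kind == Kind::And;
  std::vector<std::unique_ptr<PredNode>> kept;
  for (auto &child : node->children) {
    auto simplified = simplify(std::move(child));
    if (simplified->kind == (isAnd ? Kind::False : Kind::True))
      return simplified;  // the whole node collapses to this child
    if (simplified->kind == (isAnd ? Kind::True : Kind::False))
      continue;           // neutral element, drop it
    kept.push_back(std::move(simplified));
  }
  if (kept.empty()) {
    node->kind = isAnd ? Kind::True : Kind::False;
    node->children.clear();
    return node;
  }
  if (kept.size() == 1)
    return std::move(kept.front());
  node->children = std::move(kept);
  return node;
}
```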
PiperOrigin-RevId: 229605997
Start simple with single predicate match & transform rules for attributes.
* It's unclear whether modelling Attr predicates will be needed, so start with allowing matching attributes with a single predicate.
* The input and output attr types often differ, so add the ability to specify a transform between the input and output formats.
PiperOrigin-RevId: 229580879
*) Adds support for fusing into consumer loop nests with multiple loads from the same memref.
*) Adds support for reducing slice loop trip count by projecting out destination loop IVs greater than destination loop depth.
*) Removes dependence on src loop depth and simplifies cost model computation.
PiperOrigin-RevId: 229575126
This is mostly plumbing to start allowing testing of EDSC lowering. Prototype specifying a reference implementation using the verbose format without any generation/binding support. Add a test pass that dumps the constructed EDSC (of which there can only be one). The idea is to enable iterating from multiple sides; this is wrong on many dimensions at the moment.
PiperOrigin-RevId: 229570535
In TableGen definitions, the "Type" class has been used for types of things
that can be stored in Attributes, but not necessarily present in the MLIR type
system. As a consequence, records like "String" or "DerivedAttrBody" were of
class "Type", which can be confusing. Furthermore, the "builderCall" field of
the "Type" class serves only for attribute construction. Some TableGen "Type"
subclasses that correspond to MLIR kinds of types do not have a canonical way
of construction only from the data available in TableGen, e.g. MemRefType would
require the list of affine maps. This leads to the conclusion that the entities
that describe types of objects appearing in Attributes should be independent of
"Type": they have some properties "Type"s don't and vice versa.
Do not parameterize the TableGen "Attr" class by an instance of "Type". Instead,
provide a "constBuilderCall" field that can be used to build an attribute from
a constant value stored in TableGen instead of indirectly going through
Attribute.Type.builderCall. Some attributes still don't have a
"constBuilderCall" because they used to depend on types without a
"builderCall".
Drop definitions of class "Type" that don't correspond to MLIR Types. Provide
infrastructure to define type-dependent attributes and string-backed attributes
for convenience.
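For illustration only, assuming a string-backed attribute, the C++ a constBuilderCall ultimately expands to might look roughly like this (hypothetical wrapper, not generated code):
```cpp
#include "llvm/ADT/StringRef.h"
#include "mlir/IR/Builders.h"

// Hypothetical illustration: a constBuilderCall for a string-backed attribute
// boils down to constructing the attribute directly from a constant value via
// the builder, rather than going indirectly through a Type's builderCall.
static mlir::Attribute buildConstStringAttr(mlir::Builder &builder,
                                            llvm::StringRef value) {
  return builder.getStringAttr(value);
}
```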
PiperOrigin-RevId: 229570087
We also need the broadcast logic in the TensorFlow dialect. Move it to a
Dialect/ directory for a broader scope. This Dialect/ directory is intended
for code that is not in core IR but can potentially be shared by multiple dialects.
Apart from fixing TensorFlow op TableGen to use this trait, this CL only
contains mechanical code shuffling.
PiperOrigin-RevId: 229563911
The constant folding rules assume value attributes of operands are already
verified to be in good standing.
For each op in the above, the constant folding rules support both integer and
floating point cases. Broadcast behavior is also supported as per the semantics
of TFLite ops.
This CL does not handle overflow/underflow cases yet.
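A self-contained sketch of the folding idea for the integer case, simplified to a splat-vs-dense broadcast only (the dialect code works on ElementsAttrs and also handles the floating point case):
```cpp
#include <cstdint>
#include <vector>

// Simplified standalone sketch, not the TFLite dialect code: fold an
// elementwise add of two constant integer operands, broadcasting a
// single-element (splat) operand against the other. Assumes the operands have
// equal size or one of them has exactly one element. Overflow is ignored,
// matching the limitation stated above.
static std::vector<int64_t> foldAdd(const std::vector<int64_t> &lhs,
                                    const std::vector<int64_t> &rhs) {
  const auto &wide = lhs.size() >= rhs.size() ? lhs : rhs;
  const auto &narrow = lhs.size() >= rhs.size() ? rhs : lhs;
  std::vector<int64_t> result(wide.size());
  for (size_t i = 0; i < wide.size(); ++i)
    result[i] = wide[i] + narrow[narrow.size() == 1 ? 0 : i];
  return result;
}
```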
PiperOrigin-RevId: 229441221
LLVM IR types are defined using MLIR's extensible type system. The dialect
provides a single type kind, LLVMType, which wraps an llvm::Type*. Since LLVM
IR types are pointer-unique, the MLIR type system relies on those pointers to
perform its own type uniquing. Type parsing and printing are delegated to
LLVM libraries.
Define MLIR operations for the LLVM IR instructions currently used by the
translation to the LLVM IR Target to simplify the eventual transition. Operation
classes are defined using TableGen. LLVM IR instruction operands that are only
allowed to take constant values are accepted as attributes instead. All
operations use the verbose form for printing and parsing.
PiperOrigin-RevId: 229400375
MLIR has support for type-polymorphic instructions, i.e. instructions that may
take arguments of different types. For example, standard arithmetic operations
take scalars, vectors or tensors. In order to express such instructions in
TableGen, we need to be able to verify that a type object satisfies certain
constraints, but we don't need to construct an instance of this type. The
existing TableGen definition of Type requires both. Extract out a
TypeConstraint TableGen class to define restrictions on types. Define the Type
TableGen class as a subclass of TypeConstraint for consistency. Accept records
of the TypeConstraint class instead of the Type class as values in the
Arguments class when defining operators.
Replace the predicate logic TableGen class based on conjunctive normal form
with the predicate logic classes allowing for arbitrary combinations of
predicates using Boolean operators (AND/OR/NOT). The combination is
implemented using simple string rewriting of C++ expressions and, therefore,
respects the short-circuit evaluation order. No logic simplification is
performed at the TableGen level so all expressions must be valid C++.
Maintaining CNF using TableGen only would have been complicated when one needed
to introduce top-level disjunction. It is also unclear whether it would lead to
significantly simpler emitted C++ code. In the future, we may replace the in-place
predicate string combination with a tree structure that can be simplified in
TableGen's C++ driver.
Combined, these changes allow one to express traits like ArgumentsAreFloatLike
directly in TableGen instead of relying on C++ trait classes.
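A standalone sketch of the string-rewriting combination (hypothetical helper names; the expression with the "{0}" placeholder is only an example of what a leaf predicate might look like):
```cpp
#include <string>

// Hypothetical helpers, not the actual TableGen predicate classes: combine
// leaf predicates, written as C++ expression strings, with Boolean operators.
// Parenthesization preserves precedence, and the emitted && / || expressions
// short-circuit left to right when the generated verifier runs.
static std::string predAnd(const std::string &lhs, const std::string &rhs) {
  return "(" + lhs + ") && (" + rhs + ")";
}
static std::string predOr(const std::string &lhs, const std::string &rhs) {
  return "(" + lhs + ") || (" + rhs + ")";
}
static std::string predNot(const std::string &operand) {
  return "!(" + operand + ")";
}

// For example, a trait like ArgumentsAreFloatLike could be assembled from leaf
// expressions such as "{0}.isa<FloatType>()" combined with predOr and predAnd,
// instead of being hand-written as a C++ trait class.
```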
PiperOrigin-RevId: 229398247