This trait only works for tensor and vector types at the moment, verifying that the element types match for an op whose operands and results are only tensor and vector types. Added a unit test for it, as there is currently no op in core that uses this trait.
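For illustration, a minimal hedged sketch of the kind of check such a trait performs; the function name `verifySameElementType`, the headers, and the diagnostic text are assumptions rather than the actual trait implementation:

```cpp
#include "mlir/IR/Operation.h"
#include "mlir/IR/StandardTypes.h"

using namespace mlir;

// Hypothetical sketch of the check performed by such a trait: walk the operand
// and result types, consider only tensor and vector types, and require that
// they all share a single element type.
static LogicalResult verifySameElementType(Operation *op) {
  Type elementType;
  auto check = [&](Type type) -> LogicalResult {
    Type element;
    if (auto tensor = type.dyn_cast<TensorType>())
      element = tensor.getElementType();
    else if (auto vector = type.dyn_cast<VectorType>())
      element = vector.getElementType();
    else
      return success();  // Other types do not participate for now.
    if (!elementType)
      elementType = element;
    else if (element != elementType)
      return op->emitOpError("requires matching element types");
    return success();
  };
  for (Type type : op->getOperandTypes())
    if (failed(check(type)))
      return failure();
  for (Type type : op->getResultTypes())
    if (failed(check(type)))
      return failure();
  return success();
}
```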
--
PiperOrigin-RevId: 246661697
Since SDBM expressions are a subset of affine expressions, they can be
converted to affine expressions in a straightforward way. The inverse
conversion may fail when the affine expression is not an SDBM expression.
Implement the inverse conversion assuming affine expressions are simplified
and canonicalized, and detect the subtractive and multiplicative forms of the
stripe operation.
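As a rough illustration of the detection, here is a hedged sketch of recognizing the multiplicative form with the AffineExpr API; the helper name `isMultiplicativeStripe` and the exact canonical shapes it assumes are illustrative, not the code added by this change:

```cpp
#include "mlir/IR/AffineExpr.h"

using namespace mlir;

// The stripe operation "x # B" rounds x down to the nearest multiple of B.
// In simplified, canonicalized affine form it can appear either as
//   (x floordiv B) * B        (multiplicative form), or
//   x + (x mod B) * -1        (subtractive form, since affine has no "sub").
// Hypothetical detector for the multiplicative form only.
static bool isMultiplicativeStripe(AffineExpr expr) {
  auto mul = expr.dyn_cast<AffineBinaryOpExpr>();
  if (!mul || mul.getKind() != AffineExprKind::Mul)
    return false;
  auto floorDiv = mul.getLHS().dyn_cast<AffineBinaryOpExpr>();
  // The divisor of the floordiv and the multiplier must be the same B.
  return floorDiv && floorDiv.getKind() == AffineExprKind::FloorDiv &&
         floorDiv.getRHS() == mul.getRHS();
}
```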
--
PiperOrigin-RevId: 245494735
Striped difference-bound matrix expressions are a subset of affine expressions
supporting low-complexity algorithms that can be useful for loop
transformations. This introduces the basic data structures for building
such expressions and uniquing them in an MLIRContext.
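The uniquing scheme follows the usual pattern of context-owned, deduplicated storage. A self-contained toy sketch of that idea; the names `Context`, `StripeExprStorage`, and `getStripeExpr` are illustrative and not the actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>
#include <utility>

// Toy illustration (not the MLIR implementation) of uniquing striped
// expressions in a context: the context owns the storage, and requesting the
// same expression twice returns the same pointer.
namespace toy {

struct StripeExprStorage {
  unsigned dimPosition;  // the dimension being striped, e.g. d0
  int64_t stripeFactor;  // the constant B in "d0 # B"
};

class Context {
public:
  // Returns the uniqued storage for "d<position> # factor".
  const StripeExprStorage *getStripeExpr(unsigned position, int64_t factor) {
    std::unique_ptr<StripeExprStorage> &slot = uniqued[{position, factor}];
    if (!slot)
      slot = std::make_unique<StripeExprStorage>(
          StripeExprStorage{position, factor});
    return slot.get();
  }

private:
  std::map<std::pair<unsigned, int64_t>, std::unique_ptr<StripeExprStorage>>
      uniqued;
};

} // namespace toy

int main() {
  toy::Context ctx;
  // "d0 # 16": dimension 0 rounded down to the nearest multiple of 16.
  const auto *a = ctx.getStripeExpr(/*position=*/0, /*factor=*/16);
  const auto *b = ctx.getStripeExpr(/*position=*/0, /*factor=*/16);
  assert(a == b && "identical expressions are uniqued in the context");
  return 0;
}
```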
--
PiperOrigin-RevId: 245380206
Currently predicates are written with positional placeholders `{N}` and rely on
`formatv` as the engine to do substitution. The problem with this approach is that
the definitions of those positional placeholders are not consistent; they are
entirely up to the defining predicate in question. For example, `{0}` in various
attribute constraints is used to mean the attribute, while it is used to mean the
builder for certain attribute transformations. This can become very confusing.
This CL introduces `tgfmt` as a new mechanism to better support predicate and
rewrite rule specification. Instead of entirely relying on positional placeholders,
`tgfmt` supports both positional and special placeholders. The former are used for
DAG operands. The latter, including `$_builder`, `$_op`, and `$_self`, are used as special
"hooks" to entities in the context. With this, the predicate and rewrite rule
specification can be more consistent and more readable.
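For example, an attribute constraint's predicate can reference the attribute being checked through `$_self` instead of a positional `{0}`. A hedged C++ sketch of how the substitution could be driven; the helper names are assumed from the TableGen format utilities and may not match this revision exactly:

```cpp
#include "mlir/TableGen/Format.h"

#include <string>

using namespace mlir::tblgen;

// Special placeholders like $_self are bound through a format context, while
// positional placeholders are filled from trailing arguments (not shown here).
std::string emitFloatAttrPredicate() {
  FmtContext ctx;
  ctx.withSelf("attr");  // $_self -> "attr", the entity the predicate tests.
  // Expands to: attr.isa<FloatAttr>()
  return tgfmt("$_self.isa<FloatAttr>()", &ctx).str();
}
```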
--
PiperOrigin-RevId: 243249671
Includes a draft of documentation for the quantization setup.
Given how many comments such docs have garnered in the past, I've biased towards a lightly edited first draft so that people can argue about terminology, approach, and structure without having spent too much time on it.
Note that the sections under "Uniform quantization" were cribbed nearly verbatim from internal documentation that Daniel wrote.
PiperOrigin-RevId: 241768668
So that we can use this function to deduce broadcasted shapes elsewhere.
Also added support for unknown dimensions, following TensorFlow's behavior.
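A hedged, self-contained sketch of the kind of deduction described here; the name `getBroadcastedShapeSketch` and the use of -1 for unknown dimensions are assumptions for illustration, not the actual MLIR utility:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Deduce the broadcasted shape of two shapes, where -1 marks an unknown
// dimension. Returns false if the shapes cannot be broadcast together.
static bool getBroadcastedShapeSketch(const std::vector<int64_t> &lhs,
                                      const std::vector<int64_t> &rhs,
                                      std::vector<int64_t> &result) {
  result.assign(std::max(lhs.size(), rhs.size()), 1);
  auto lhsIt = lhs.rbegin(), rhsIt = rhs.rbegin();
  for (auto resIt = result.rbegin(); resIt != result.rend(); ++resIt) {
    // Missing leading dimensions behave as size 1.
    int64_t l = (lhsIt != lhs.rend()) ? *lhsIt++ : 1;
    int64_t r = (rhsIt != rhs.rend()) ? *rhsIt++ : 1;
    if (l == -1 || r == -1) {
      // Unknown dimension: a known size greater than 1 on the other side
      // fixes the output; otherwise the output stays unknown.
      int64_t known = (l == -1) ? r : l;
      *resIt = (known > 1) ? known : -1;
    } else if (l == 1 || r == 1 || l == r) {
      *resIt = std::max(l, r);  // Standard NumPy-style broadcasting.
    } else {
      return false;  // Incompatible static dimensions.
    }
  }
  return true;
}
```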
PiperOrigin-RevId: 237846065
* before/after pass execution
* after a pass fails
* before/after an analysis is computed
With this infrastructure in place, we can start providing common developer utilities like pass timing, IR printing after pass execution, etc.
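As a rough illustration of how such a utility could sit on top of these hooks, here is a self-contained sketch of pass timing; the class and callback names are hypothetical, and the actual instrumentation interface defined by the pass manager differs:

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Hypothetical instrumentation that times each pass using the before/after
// hooks described above; a failure hook would look similar.
class PassTimingSketch {
  using Clock = std::chrono::steady_clock;
  Clock::time_point start;

public:
  // Invoked right before a pass starts executing.
  void runBeforePass(const std::string &passName) { start = Clock::now(); }

  // Invoked right after the pass finishes; durations could be aggregated
  // per pass to implement a pass-timing report.
  void runAfterPass(const std::string &passName) {
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  Clock::now() - start)
                  .count();
    std::cout << passName << ": " << ms << " ms\n";
  }
};
```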
PiperOrigin-RevId: 237709692