Commit Graph

377 Commits

Author SHA1 Message Date
Uday Bondhugula b26900dce5 Update dma-generate pass to (1) work on blocks of instructions (instead of just
loops), (2) take into account fast memory space capacity and lower 'dmaDepth'
to fit, (3) add location information for debug info / errors

- change dma-generate pass to work on blocks of instructions (start/end
  iterators) instead of 'for' loops; complete TODOs - allows DMA generation for
  straight-line blocks of operation instructions interspersed b/w loops
- take into account fast memory capacity: check whether memory footprint fits
  in fastMemoryCapacity parameter, and recurse/lower the depth at which DMA
  generation is performed until it does fit in the provided memory (see the
  sketch below)
- add location information to MemRefRegion; any insufficient fast memory
  capacity errors or debug info w.r.t. DMA generation show location information
- allow DMA generation pass to be instantiated with a fast memory capacity
  option (besides command line flag)

- change getMemRefRegion to return unique_ptr's
- change getMemRefFootprintBytes to work on a 'Block' instead of 'ForInst'
- other helper methods; add postDomInstFilter option for
  replaceAllMemRefUsesWith; drop forInst->walkOps, add Block::walkOps methods
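
A toy sketch of the capacity-driven depth lowering described above (the
per-depth footprint list and names here are illustrative, not the pass's API):

```c++
#include <cstdint>
#include <vector>

// Pick the shallowest depth whose DMA buffer footprint fits in fast memory;
// deeper depths have smaller per-buffer footprints. Returns -1 when nothing
// fits, in which case the pass reports an error with location info.
int chooseDmaDepth(const std::vector<uint64_t> &footprintBytesAtDepth,
                   uint64_t fastMemCapacityBytes) {
  for (unsigned d = 0, e = footprintBytesAtDepth.size(); d < e; ++d)
    if (footprintBytesAtDepth[d] <= fastMemCapacityBytes)
      return static_cast<int>(d);
  return -1;
}
```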

Eg. output

$ mlir-opt  -dma-generate -dma-fast-mem-capacity=1 /tmp/single.mlir
/tmp/single.mlir:9:13: error: Total size of all DMA buffers for this block exceeds fast memory capacity

        for %i3 = (d0) -> (d0)(%i1) to (d0) -> (d0 + 32)(%i1) {
            ^

$ mlir-opt -debug-only=dma-generate  -dma-generate -dma-fast-mem-capacity=400 /tmp/single.mlir
/tmp/single.mlir:9:13: note: 8 KiB of DMA buffers in fast memory space for this block

        for %i3 = (d0) -> (d0)(%i1) to (d0) -> (d0 + 32)(%i1) {

PiperOrigin-RevId: 232297044
2019-03-29 16:09:52 -07:00
River Riddle 5052bd8582 Define the AffineForOp and replace ForInst with it. This patch is largely mechanical, i.e. changing usages of ForInst to OpPointer<AffineForOp>. An important difference is that upon construction an AffineForOp no longer automatically creates the body and induction variable. To generate the body/iv, 'createBody' can be called on an AffineForOp with no body.
PiperOrigin-RevId: 232060516
2019-03-29 16:06:49 -07:00
MLIR Team d7c824451f LoopFusion: insert the source loop nest slice at a depth in the destination loop nest which preserves dependences (above any loop carried or other dependences). This is accomplished by updating the maximum destination loop depth based on dependence checks between source loop nest loads and stores which access the memref on which the source loop nest has a store op. In addition, prevent fusing in source loop nests which write to memrefs which escape or are live out.
PiperOrigin-RevId: 231684492
2019-03-29 16:03:23 -07:00
River Riddle a642bb1779 Update tests using affine maps to not rely on specific map numbers in the output IR. This is necessary to remove the dependency on ForInst not numbering the AffineMap bounds it has custom formatting for.
PiperOrigin-RevId: 231634812
2019-03-29 16:03:08 -07:00
River Riddle b6928c945c Standardize the spelling of debug info to "debuginfo" in opt flags.
PiperOrigin-RevId: 231610337
2019-03-29 16:02:38 -07:00
River Riddle 994111238b Fold CallIndirectOp to CallOp when the callee operand is a known constant function.
PiperOrigin-RevId: 231511697
2019-03-29 16:01:23 -07:00
MLIR Team a0f3db4024 Support fusing loop nests which require insertion into a new instruction Block position while preserving dependences, opening up additional fusion opportunities.
- Adds SSA Value edges to the data dependence graph used in the loop fusion pass.

PiperOrigin-RevId: 231417649
2019-03-29 16:00:04 -07:00
River Riddle 755538328b Recommit: Define a AffineOps dialect as well as an AffineIfOp operation. Replace all instances of IfInst with AffineIfOp and delete IfInst.
PiperOrigin-RevId: 231342063
2019-03-29 15:59:30 -07:00
Nicolas Vasilache ae772b7965 Automated rollback of changelist 231318632.
PiperOrigin-RevId: 231327161
2019-03-29 15:42:38 -07:00
River Riddle 5ecef2b3f6 Define a AffineOps dialect as well as an AffineIfOp operation. Replace all instances of IfInst with AffineIfOp and delete IfInst.
PiperOrigin-RevId: 231318632
2019-03-29 15:42:08 -07:00
Uday Bondhugula fb679fc2b5 Drop unused result from affine map in test case - NFC
PiperOrigin-RevId: 231008044
2019-03-29 15:38:53 -07:00
Chris Lattner 607d1c2ca7 More updates of tests to move towards single result affine maps.
PiperOrigin-RevId: 230991929
2019-03-29 15:38:38 -07:00
Uday Bondhugula b4a1443508 Update replaceAllMemRefUsesWith to generate single result affine_apply's for
index remapping
- generate a sequence of single result affine_apply's for the index remapping
  (instead of one multi result affine_apply)
- update dma-generate and loop-fusion test cases; while on this, change test cases
  to use single result affine apply ops
- some fusion comment fix/cleanup

PiperOrigin-RevId: 230985830
2019-03-29 15:38:23 -07:00
Uday Bondhugula b588d58c5f Update createAffineComputationSlice to generate single result affine maps
- Update createAffineComputationSlice to generate a sequence of single result
  affine apply ops instead of one multi-result affine apply
- update pipeline-data-transfer test case; while on this, also update the test
  case to use only single result affine maps, and make it more robust to
  change.

PiperOrigin-RevId: 230965478
2019-03-29 15:37:53 -07:00
Uday Bondhugula f94b15c247 Update dma-generate: update for multiple load/store op's per memref
- introduce a way to compute union using symbolic rectangular bounding boxes
- handle multiple load/store op's to the same memref by taking a union of the regions
- command-line argument to provide capacity of the fast memory space
- minor change to replaceAllMemRefUsesWith to not generate affine_apply if the
  supplied index remap was identity

PiperOrigin-RevId: 230848185
2019-03-29 15:35:38 -07:00
Chris Lattner f60a0ba61c Incremental progress to move the testsuite towards single-result affine_apply
instructions.

PiperOrigin-RevId: 230775607
2019-03-29 15:34:53 -07:00
Uday Bondhugula 72e5c7f428 Minor updates + cleanup to dma-generate
- switch some debug info to emitError
- use a single constant op for zero index to make it easier to write/update
  test cases; avoid creating new constant op's for common zero index cases
- test case cleanup

This is in preparation for an upcoming major update to this pass.

PiperOrigin-RevId: 230728379
2019-03-29 15:34:06 -07:00
River Riddle f319bbbd28 Add a function pass to strip debug info from functions and instructions.
PiperOrigin-RevId: 230654315
2019-03-29 15:33:50 -07:00
MLIR Team b28009b681 Fix single producer check in loop fusion pass.
PiperOrigin-RevId: 230565482
2019-03-29 15:32:20 -07:00
Uday Bondhugula 864d9e02a1 Update fusion cost model + some additional infrastructure and debug information for -loop-fusion
- update fusion cost model to fuse while tolerating a certain amount of redundant
  computation; add cl option -fusion-compute-tolerance (see the sketch below)
- evaluate memory footprint and intermediate memory reduction
- emit debug info from -loop-fusion showing what was fused and why
- introduce function to compute memory footprint for a loop nest
- getMemRefRegion readability update - NFC
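
A minimal sketch of what such a tolerance check can look like (names and the
exact formula here are assumptions, not the pass's actual cost model):

```c++
#include <cstdint>

// Allow fusion only when the redundant computation it introduces stays
// within a tolerance fraction of the unfused cost (cf. the
// -fusion-compute-tolerance flag added by this change).
bool withinComputeTolerance(uint64_t fusedCost, uint64_t srcCost,
                            uint64_t dstCost, double tolerance) {
  double unfused = static_cast<double>(srcCost + dstCost);
  if (unfused == 0.0)
    return true; // nothing to lose: trivially within tolerance
  double additional = static_cast<double>(fusedCost) - unfused;
  return additional / unfused <= tolerance;
}
```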

PiperOrigin-RevId: 230541857
2019-03-29 15:32:06 -07:00
Uday Bondhugula 92e9d9484c loop unroll update: unroll factor one for a single iteration loop
- unrolling a single iteration loop by a factor of one should promote its body
  into its parent; this makes it consistent with the behavior/expectation that
  unrolling a loop by a factor equal to its trip count makes the loop go away.

PiperOrigin-RevId: 230426499
2019-03-29 15:31:35 -07:00
Uday Bondhugula 94a03f864f Allocate private/local buffers for slices accurately during fusion
- the size of the private memref created for the slice should be based on
  the memref region accessed at the depth at which the slice is being
  materialized, i.e., symbolic in the outer IVs up until that depth, as opposed
  to the region accessed based on the entire domain.

- leads to a significant contraction of the temporary / intermediate memref
  whenever the memref isn't reduced to a single scalar (through store fwd'ing).

Other changes

- update to promoteIfSingleIteration - avoid introducing unnecessary identity
  map affine_apply from IV; makes it much easier to write and read test cases
  and pass output for all passes that use promoteIfSingleIteration; loop-fusion
  test cases become much simpler

- fix replaceAllMemRefUsesWith bug that was exposed by the above update -
  'domInstFilter' could be one of the ops erased due to a memref replacement in
  it.

- fix getConstantBoundOnDimSize bug: a division by the coefficient of the identifier was
  missing (the latter need not always be 1); add lbFloorDivisors output argument

- rename getBoundingConstantSizeAndShape -> getConstantBoundingSizeAndShape

PiperOrigin-RevId: 230405218
2019-03-29 15:30:31 -07:00
MLIR Team 71495d58a7 Handle escaping memrefs in loop fusion pass:
*) Do not remove loop nests which write to memrefs which escape the function.
*) Do not remove memrefs which escape the function (e.g. are used in the return instruction).

PiperOrigin-RevId: 230398630
2019-03-29 15:30:14 -07:00
Uday Bondhugula c1880a857d AffineExpr pretty print - add missing handling to print expr * - 1 as -expr
- print multiplication by -1 as unary negate; expressions like s0 * -1, d0 * -1
  + d1 will now appear as -s0, -d0 + d1 resp.
- a minor cleanup while on printAffineExprInternal

PiperOrigin-RevId: 230222151
2019-03-29 15:28:44 -07:00
River Riddle 512d87cefc Add a constant folding hook to ExtractElementOp to fold extracting the element of a constant. This also adds a 'getValue' function to DenseElementsAttr and SparseElementsAttr to get the element at a constant index.
PiperOrigin-RevId: 230098938
2019-03-29 15:28:28 -07:00
Uday Bondhugula d7522eb264 Fix test cases that were accessing out of bounds to start with (b/123072438)
- detected with memref-bound-check

- fixes b/123072438; while on this, fix another test case which was reported
  out of bounds

PiperOrigin-RevId: 229978187
2019-03-29 15:27:29 -07:00
MLIR Team c4237ae990 LoopFusion: Creates private MemRefs which are used only by operations in the fused loop.
*) Enables reduction of private memref size based on MemRef region accessed by fused slice.
*) Enables maximal fusion by creating a private memref to break a fusion-preventing dependence.
*) Adds maximal fusion flag to enable fusing as much as possible (though it still fuses the minimum cost computation slice).

PiperOrigin-RevId: 229936698
2019-03-29 15:26:15 -07:00
Nicolas Vasilache 24e5a72dac Fix AffineApply corner case
This CL adds a test reported by andydavis@ and fixes the corner case that
appears when operands do not come from an AffineApply and no Dim composition
is needed.

In such cases, we would need to create an empty map which is disallowed.
The composition in such cases becomes trivial: there is no composition.

This CL also updates the name AffineNormalizer to AffineApplyNormalizer.

PiperOrigin-RevId: 229819234
2019-03-29 15:25:59 -07:00
Uday Bondhugula 40f7535571 Update stale / target-specific information in comments - NFC
PiperOrigin-RevId: 229800834
2019-03-29 15:25:29 -07:00
Nicolas Vasilache 4573a8da9a Fix improperly indexed DimOp in LowerVectorTransfers.cpp
This CL fixes a misunderstanding in how to build DimOp which triggered
execution issues in the CPU path.

The problem is that, given a `memref<?x4x?x8x?xf32>`, the expressions to
construct the dynamic dimensions should be:
`dim %arg, 0 : memref<?x4x?x8x?xf32>`
`dim %arg, 2 : memref<?x4x?x8x?xf32>`
and
`dim %arg, 4 : memref<?x4x?x8x?xf32>`

Before this CL, we would construct:
`dim %arg, 0 : memref<?x4x?x8x?xf32>`
`dim %arg, 1 : memref<?x4x?x8x?xf32>`
`dim %arg, 2 : memref<?x4x?x8x?xf32>`

and expect the other dimensions to be constants.
This assumption seems consistent at first glance with the syntax of alloc:

```
    %tensor = alloc(%M, %N, %O) : memref<?x4x?x8x?xf32>
```

But this was actually incorrect.

This CL also makes the relevant functions available to EDSCs and removes
duplication of the incorrect function.

PiperOrigin-RevId: 229622766
2019-03-29 15:24:13 -07:00
River Riddle 5843e5a7c0 Add a canonicalization pattern to remove Dealloc operations if the memref is an AllocOp that is only used by Dealloc operations.
PiperOrigin-RevId: 229606558
2019-03-29 15:23:13 -07:00
River Riddle ada685f352 Add canonicalization to remove AllocOps if there are no uses. AllocOp has side effects on the heap, but can still be deleted if it has zero uses.
PiperOrigin-RevId: 229596556
2019-03-29 15:22:28 -07:00
MLIR Team 27d067e164 LoopFusion improvements:
*) Adds support for fusing into consumer loop nests with multiple loads from the same memref.
*) Adds support for reducing slice loop trip count by projecting out destination loop IVs greater than destination loop depth.
*) Removes dependence on src loop depth and simplifies cost model computation.

PiperOrigin-RevId: 229575126
2019-03-29 15:21:59 -07:00
River Riddle ed26dd0421 Add a canonicalization pattern for conditional branch to fold constant branch conditions.
PiperOrigin-RevId: 229242007
2019-03-29 15:14:37 -07:00
MLIR Team 38c2fe3158 LoopFusion: automate selection of source loop nest slice depth and destination loop nest insertion depth based on a simple cost model (cost model can be extended/replaced at a later time).
*) LoopFusion: Adds fusion cost function which compares the cost of the fused loop nest with the cost of the two unfused loop nests to determine if it is profitable to fuse the candidate loop nests. The fusion cost function is run for various combinations of src/dst loop depths, attempting to find the minimum cost setting for src/dst loop depths which does not increase the computational cost when the loop nests are fused. Combinations of src/dst loop depth are evaluated attempting to maximize loop depth (i.e. take a bigger computation slice from the source loop nest, and insert it deeper in the destination loop nest for better locality).
*) LoopFusion: Adds utility to compute op instance count for loop nests, sliced loop nests, and to compute the cost of a loop nest fused with another sliced loop nest.
*) LoopFusion: canonicalizes slice bound AffineMaps (and updates related tests).
*) Analysis::Utils: Splits getBackwardComputationSlice into two functions: one which calculates and returns the slice loop bounds for analysis by LoopFusion, and the other for insertion of the computation slice (once fusion has calculated the min-cost src/dst loop depths).
*) Test: Adds multiple unit tests to test the new functionality.

PiperOrigin-RevId: 229219757
2019-03-29 15:13:53 -07:00
Nicolas Vasilache 0ab81776aa Fix typo in lower_vector_transfers.mlir
PiperOrigin-RevId: 229010160
2019-03-29 15:12:40 -07:00
Nicolas Vasilache d734c50c5f [MLIR] Clip all access dimensions during LowerVectorTransfers
This CL adds a short term remedy to an issue that was found during execution
tests.

Lowering of vector transfer ops uses the permutation map to determine which
ForInst have been super-vectorized. During materialization to HW vector sizes
however, some of those dimensions may be fully unrolled and do not appear in
the permutation map.
Such dimensions were then not clipped and may have accessed out of bounds.

This CL conservatively clips all dimensions to ensure no out of bounds access.
The longer term solution is still up for debate but will probably require
either passing more information between Materialization and lowering, or just
merging the 2 passes.

PiperOrigin-RevId: 228980787
2019-03-29 15:12:26 -07:00
Uday Bondhugula c35d6b4f2d Drop -canonicalize from -dma-generate test case cmd
- should be testing on the output of -dma-generate and not '-dma-generate
  -canonicalize'; save trouble for those updating -canonicalize in the future!

PiperOrigin-RevId: 228915192
2019-03-29 15:11:26 -07:00
Lei Zhang 311af4abf3 Const fold splat vectors/tensors in standard add, sub, and mul ops
The const folding logic is structurally similar, so use a template
to abstract the common part.

Moved mul(x, 0) to a legalization pattern to be consistent with
mul(x, 1).

Also promoted getZeroAttr() to be a method on Builder since it is
expected to be frequently used.

PiperOrigin-RevId: 228891989
2019-03-29 15:09:55 -07:00
Nicolas Vasilache cfa5831960 Uniformize composition of AffineApplyOp by construction
This CL is the 5th on the path to simplifying AffineMap composition.
This removes the distinction between normalized single-result AffineMap and
more general composed multi-result map.

One nice byproduct of making the implementation driven by single-result is
that the multi-result extension is a trivial change: the implementation is
still single-result and we just use:

```
unsigned idx = getIndexOf(...);
map.getResult(idx);
```

This CL also fixes an AffineNormalizer implementation issue related to symbols.
Namely it stops performing substitutions on symbols in AffineNormalizer and
instead concatenates them all to be consistent with the call to
`AffineMap::compose(AffineMap)`. This latter call to `compose` cannot perform
simplifications of symbols coming from different maps based on positions only:
i.e. dims are applied and renumbered but symbols must be concatenated.

The only way to determine whether symbols from different AffineApply are the
same is to look at the concrete values. The canonicalizeMapAndOperands is thus
extended with behavior to support replacing operands that appear multiple
times.

Lastly, this CL demonstrates that the implementation is correct by rewriting
ComposeAffineMaps using only `makeComposedAffineApply`. The implementation
uses a matcher because AffineApplyOp are introduced as composed operations on
the fly instead of iteratively forwardSubstituting. For this purpose, a walker
would revisit freshly introduced AffineApplyOp. Regardless, ComposeAffineMaps
is scheduled to disappear, this CL replaces the implementation based on
iterative `forwardSubstitute` by a composed-by-construction
`makeComposedAffineApply`.
Remaining calls to `forwardSubstitute` will be removed in the next CL.

PiperOrigin-RevId: 228830443
2019-03-29 15:08:40 -07:00
Uday Bondhugula 2370c601ba Add safeguard against FM explosion
- FM has a worst case exponential complexity. For our purposes, this worst case
  is rarely expected, but could still appear due to improperly constructed
  constraints (a logical/memory error in other methods for eg.) or artificially
  created arbitrarily complex integer sets (adversarial / fuzz tests).

  Add a check to detect such an explosion in the number of constraints and
  conservatively return false from isEmpty() (instead of running out of memory
  or running for too long); see the sketch below.

- Add an artificial virus test case.
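
A sketch of the kind of guard this describes (threshold and names are
illustrative, not the actual FlatAffineConstraints code):

```c++
// Fourier-Motzkin elimination of one identifier pairs every lower bound with
// every upper bound, so m constraints can grow toward (m/2)^2 per step. A
// cheap pre-check lets isEmpty() bail out and conservatively return false.
bool exceedsConstraintExplosionLimit(unsigned numLowerBounds,
                                     unsigned numUpperBounds,
                                     unsigned numUnrelated,
                                     unsigned cutoff /* e.g. a few thousand */) {
  return numLowerBounds * numUpperBounds + numUnrelated > cutoff;
}
```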

PiperOrigin-RevId: 228753496
2019-03-29 15:07:55 -07:00
Alex Zinenko 9003490287 Implement branch-free single-division lowering of affine division/remainder
This implements the lowering of `floordiv`, `ceildiv` and `mod` operators from
affine expressions to the arithmetic primitive operations.  Integer division
rules in affine expressions explicitly require rounding towards either negative
or positive infinity unlike machine implementations that round towards zero.
In the general case, implementing `floordiv` and `ceildiv` using machine signed
division requires computing both the quotient and the remainder.  When the
divisor is positive, this can be simplified by adjusting the dividend and the
quotient by one and switching signs.

In the current use cases, we are unlikely to encounter affine expressions with
negative divisors (affine divisions appear in loop transformations such as
tiling that guarantee that divisors are positive by construction).  Therefore,
it is reasonable to use branch-free single-division implementation.  In case of
affine maps, divisors can only be literals so we can check the sign and
implement the case for negative divisors when the need arises.

The affine lowering pass can still fail when applied to semi-affine maps
(division or modulo by a symbol).
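
A C++ sketch of the dividend/quotient adjustment for a positive divisor (the
ternaries stand in for the emitted compare + select; this is an illustration,
not the pass's exact output):

```c++
#include <cassert>
#include <cstdint>

// floordiv: shift negative dividends by one so a single truncating machine
// division lands on the floor, then shift the quotient back.
int64_t floorDiv(int64_t x, int64_t y) {
  assert(y > 0 && "sketch assumes a positive divisor");
  bool neg = x < 0;
  int64_t adjusted = neg ? -x - 1 : x;
  int64_t q = adjusted / y;
  return neg ? -q - 1 : q;
}

// ceildiv: the mirror image, shifting positive dividends instead.
int64_t ceilDiv(int64_t x, int64_t y) {
  assert(y > 0 && "sketch assumes a positive divisor");
  bool pos = x > 0;
  int64_t adjusted = pos ? x - 1 : -x;
  int64_t q = adjusted / y;
  return pos ? q + 1 : -q;
}

// mod: affine 'mod' is always non-negative for a positive divisor.
int64_t mod(int64_t x, int64_t y) {
  int64_t r = x % y;
  return r < 0 ? r + y : r;
}
```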

PiperOrigin-RevId: 228668181
2019-03-29 15:07:40 -07:00
Uday Bondhugula 742c37abc9 Fix DMA overlap pass buffer mapping
- the double buffer should be indexed (iv floordiv step) % 2 and NOT (iv % 2);
  step wasn't being accounted for; see the sketch below.

- fix test cases, enable failing test cases
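
The fixed indexing as a one-liner (assuming a zero lower bound and a positive
step; both are assumptions of this sketch):

```c++
#include <cstdint>

// Which of the two buffers iteration 'iv' uses: count iterations first, then
// alternate. With step = 4, iv = 0, 4, 8, ... maps to 0, 1, 0, ... whereas
// the old iv % 2 would always have picked buffer 0.
unsigned doubleBufferIndex(int64_t iv, int64_t step) {
  return static_cast<unsigned>((iv / step) % 2);
}
```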

PiperOrigin-RevId: 228635726
2019-03-29 15:07:10 -07:00
Uday Bondhugula 303c09299f Fix affine expr flattener bug + improve simplification in a particular scenario
- fix visitDivExpr: constraints constructed for localVarCst used the original
  divisor instead of the simplified divisor; fix this. Add a simple test case
  in memref-bound-check that reproduces this bug - although this was encountered in the
  context of slicing for fusion.

- improve mod expr flattening: when flattening mod expressions,
  cancel out the GCD of the numerator and denominator so that we can get a
  simpler flattened form along with a simpler floordiv local var for it
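
The cancellation for the single-term case, as a hedged sketch (assumes
positive coefficients and a non-negative operand so C++ '%' matches affine
'mod'):

```c++
#include <cstdint>
#include <numeric>

// (a*x) mod d == g * ((a/g)*x mod (d/g)) with g = gcd(a, d): cancelling the
// common factor gives a simpler mod and a simpler floordiv local variable.
int64_t simplifiedMod(int64_t a, int64_t x, int64_t d) {
  int64_t g = std::gcd(a, d);
  return g * (((a / g) * x) % (d / g));
}
```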

PiperOrigin-RevId: 228539928
2019-03-29 15:06:11 -07:00
Nicolas Vasilache 1f78d63f05 [MLIR] Make SuperVectorization use normalized AffineApplyOp
Supervectorization does not plan on handling multi-result AffineMaps and
non-canonical chains of > 1 AffineApplyOp.
This CL uses the simpler single-result unbounded AffineApplyOp in the
MaterializeVectors pass.

PiperOrigin-RevId: 228469085
2019-03-29 15:05:55 -07:00
Nicolas Vasilache c6f798a976 Introduce AffineMap::compose(AffineMap)
This CL is the 2nd on the path to simplifying AffineMap composition.
This CL uses the now accepted `AffineExpr::compose(AffineMap)` to
implement `AffineMap::compose(AffineMap)`.

Implications of keeping the simplification function in
Analysis are documented where relevant.

PiperOrigin-RevId: 228276646
2019-03-29 15:04:20 -07:00
Uday Bondhugula e94ba6815a Fix 0-d memref corner case for getMemRefRegion()
- fix crash on test/Transforms/canonicalize.mlir with
  -memref-bound-check

PiperOrigin-RevId: 228268486
2019-03-29 15:03:50 -07:00
Nicolas Vasilache c449e46ceb Introduce AffineExpr::compose(AffineMap)
This CL is the 1st on the path to simplifying AffineMap composition.
This CL uses the now accepted AffineExpr.replaceDimsAndSymbols to
implement `AffineExpr::compose(AffineMap)`.

Arguably, `simplifyAffineExpr` should be part of IR and not Analysis but
this CL does not yet pull the trigger on that.

PiperOrigin-RevId: 228265845
2019-03-29 15:03:36 -07:00
Uday Bondhugula 21baf86a2f Extend loop-fusion's slicing utility + other fixes / updates
- refactor toAffineMapFromEq and the code surrounding it; refactor code into
  FlatAffineConstraints::getSliceBounds
- add FlatAffineConstraints methods to detect identifiers as mod's and div's of other
  identifiers
- add FlatAffineConstraints::getConstantLower/UpperBound
- Address b/122118218 (don't assert on invalid fusion depths cmdline flags -
  instead, don't do anything); change cmdline flags
  src-loop-depth -> fusion-src-loop-depth
- AffineExpr/Map print method update: don't fail on null instances (since we have
  a wrapper around a pointer, it's avoidable); rationale: dump/print methods should
  never fail if possible.
- Update memref-dataflow-opt to add an optimization to avoid an unnecessary call to
  isRangeOneToOne when it's trivially going to be true.
- Add additional test cases to exercise the new support
- update a few existing test cases since the maps are now generated uniformly with
  all destination loop operands appearing for the backward slice
- Fix projectOut - fix wrong range for getBestElimCandidate.
- Fix for getConstantBoundOnDimSize() - didn't show up in any test cases since
  we didn't have any non-hyperrectangular ones.

PiperOrigin-RevId: 228265152
2019-03-29 15:03:20 -07:00
Uday Bondhugula b934d75b8f Convert expr - c * (expr floordiv c) to expr mod c in AffineExpr
- Detect 'mod' to replace the combination of floordiv, mul, and subtract when
  possible at construction time; when 'c' is a power of two, this reduces the number of
  operations; also more compact and readable. Update simplifyAdd for this (see
  the sketch below).

  On a side note:
  - with the affine expr flattening we have, a mod expression like d0 mod c
    would be flattened into d0 - c * q,  c * q <= d0 <= c*q + c - 1, with 'q'
    being added as the local variable (q = d0 floordiv c); as a result, a mod
    was turned into a floordiv whenever the expression was reconstructed back,
    i.e., as  d0 - c * (d0 floordiv c); as a result of this change, we recover
    the mod back.

- rename SimplifyAffineExpr -> SimplifyAffineStructures (pass had been renamed but
  the file hadn't been).
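
The identity behind the detection, as a runnable check (floorDiv here is
plain round-toward-negative-infinity division, not the simplifyAdd code):

```c++
#include <cassert>
#include <cstdint>

int64_t floorDiv(int64_t x, int64_t c) {
  return x < 0 ? -((-x - 1) / c) - 1 : x / c; // c > 0 assumed
}

int main() {
  // expr - c * (expr floordiv c) is exactly expr mod c (non-negative).
  for (int64_t e = -10; e <= 10; ++e) {
    int64_t m = e - 4 * floorDiv(e, 4);
    assert(m >= 0 && m < 4);
  }
  return 0;
}
```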

PiperOrigin-RevId: 228258120
2019-03-29 15:02:56 -07:00
Nicolas Vasilache 7c0bbe0939 Iterate on vector rather than DenseMap during AffineMap normalization
This CL removes a flakiness associated with a spurious iteration on DenseMap
iterators when normalizing AffineMap.

PiperOrigin-RevId: 228160074
2019-03-29 14:59:37 -07:00
Alex Zinenko c47ed53211 Add simple constant folding hook for CmpIOp
Integer comparisons can be constant folded if both of their arguments are known
constants, which we can compare in the compiler.  This requires implementing
all comparison predicates, but thanks to consistency between LLVM and MLIR
comparison predicates, we have a one-to-one correspondence between predicates
and llvm::APInt comparison functions.  Constant folding of comparisons with
maximum/minimum values of the integer type is left for future work.

This will be used to test the lowering of mod/floordiv/ceildiv in affine
expressions at compile time.
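
A sketch of the one-to-one predicate mapping (the enum is a local stand-in,
not MLIR's actual CmpIPredicate declaration):

```c++
#include "llvm/ADT/APInt.h"

enum class Pred { EQ, NE, SLT, SLE, SGT, SGE, ULT, ULE, UGT, UGE };

// Each comparison predicate maps onto one llvm::APInt method, so folding two
// constant operands is a single dispatch.
bool foldCmp(Pred p, const llvm::APInt &a, const llvm::APInt &b) {
  switch (p) {
  case Pred::EQ:  return a.eq(b);
  case Pred::NE:  return a.ne(b);
  case Pred::SLT: return a.slt(b);
  case Pred::SLE: return a.sle(b);
  case Pred::SGT: return a.sgt(b);
  case Pred::SGE: return a.sge(b);
  case Pred::ULT: return a.ult(b);
  case Pred::ULE: return a.ule(b);
  case Pred::UGT: return a.ugt(b);
  case Pred::UGE: return a.uge(b);
  }
  return false;
}
```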

PiperOrigin-RevId: 228077580
2019-03-29 14:59:22 -07:00
Alex Zinenko bc04556cf8 Introduce integer division and remainder operations
This adds signed/unsigned integer division and remainder operations to the
StandardOps dialect.  Two versions are required because MLIR integers are
signless, but the meaning of the leading bit is important in division and
affects the results.  LLVM IR made a similar choice.  Define the operations in
the tablegen file and add simple constant folding hooks in the C++
implementation.  Handle signed division overflow and division by zero errors in
constant folding.  Canonicalization is left for future work.

These operations are necessary to lower affine_apply's down to LLVM IR.
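
A small standalone illustration of why signless integers need both flavors
(values chosen purely for demonstration):

```c++
#include <cstdint>
#include <cstdio>

int main() {
  uint32_t bits = 0xFFFFFFFEu; // -2 when the leading bit is a sign bit
  int32_t asSigned = static_cast<int32_t>(bits);
  // Same bit pattern, different quotients:
  std::printf("signed:   %d\n", asSigned / 3); // 0 (truncated -0.66...)
  std::printf("unsigned: %u\n", bits / 3u);    // 1431655764
  return 0;
}
```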

PiperOrigin-RevId: 228077549
2019-03-29 14:58:52 -07:00
Uday Bondhugula 8496f2c30b Complete TODOs / cleanup for loop-fusion utility
- this is CL 1/2 that does a clean up and gets rid of one limitation in an
  underlying method - as a result, fusion works for more cases.
- fix bugs/incomplete impl. in toAffineMapFromEq
- fusing across rank changing reshapes for example now just works

  For eg. given a rank 1 memref to rank 2 memref reshape (64 -> 8 x 8) like this,
  -loop-fusion -memref-dataflow-opt now completely fuses and inlines/store-forward
  to get rid of the temporary:

INPUT

  // Rank 1 -> Rank 2 reshape
  for %i0 = 0 to 64 {
     %v = load %A[%i0]
     store %v, %B[%i0 floordiv 8, %i0 mod 8]
  }

  for %i1 = 0 to 8
    for %i2 = 0 to 8
      %w = load %B[%i1, %i2]
      "foo"(%w) : (f32) -> ()

OUTPUT

$ mlir-opt -loop-fusion -memref-dataflow-opt fuse_reshape.mlir

#map0 = (d0, d1) -> (d0 * 8 + d1)
mlfunc @fuse_reshape(%arg0: memref<64xf32>) {
  for %i0 = 0 to 8 {
    for %i1 = 0 to 8 {
      %0 = affine_apply #map0(%i0, %i1)
      %1 = load %arg0[%0] : memref<64xf32>
      "foo"(%1) : (f32) -> ()
    }
  }
}

AFAIK, there is no polyhedral tool / compiler that can perform such fusion -
because it's not really standard loop fusion, but possible through a
generalized slicing-based approach such as ours.

PiperOrigin-RevId: 227918338
2019-03-29 14:57:22 -07:00
Nicolas Vasilache 618c6a74c6 [MLIR] Introduce normalized single-result unbounded AffineApplyOp
Supervectorization does not plan on handling multi-result AffineMaps and
non-canonical chains of > 1 AffineApplyOp.
This CL introduces a simpler abstraction and composition of single-result
unbounded AffineApplyOp by using the existing unbounded AffineMap composition.

This CL adds a simple API call and relevant tests:

```c++
OpPointer<AffineApplyOp> makeNormalizedAffineApply(
  FuncBuilder *b, Location loc, AffineMap map, ArrayRef<Value*> operands);
```

which creates a single-result unbounded AffineApplyOp.
The operands of AffineApplyOp are not themselves results of AffineApplyOp by
construction.

This represent the simplest possible interface to complement the composition
of (mathematical) AffineMap, for the cases when we are interested in applying
it to Value*.

In this CL the composed AffineMap is not compressed (i.e. there exist operands
that are not part of the result). A followup commit will compress to normal
form.

The single-result unbounded AffineApplyOp abstraction will be used in a
followup CL to support the MaterializeVectors pass.

PiperOrigin-RevId: 227879021
2019-03-29 14:56:37 -07:00
Chris Lattner 7983bbc251 Introduce a simple canonicalization of affine_apply that drops unused dims and
symbols.

Included with this is some other infra:
 - Testcases for other canonicalizations that I will implement next.
 - Some helpers in AffineMap/Expr for doing simple walks without defining whole
   visitor classes.
 - A 'replaceDimsAndSymbols' facility that I'll be using to simplify maps and
   exprs, e.g. to fold one constant into a mapping and to drop/renumber unused dims.
 - Allow index (and everything else) to work in memref's, as we previously
   discussed, to make the testcase easier to write.
 - A "getAffineBinaryExpr" helper to produce a binop when you know the kind as
   an enum.

This line of work will eventually subsume the ComposeAffineApply pass, but it is nowhere close to that yet :-)

PiperOrigin-RevId: 227852951
2019-03-29 14:56:07 -07:00
Alex Zinenko 0c4ee54198 Merge LowerAffineApplyPass into LowerIfAndForPass, rename to LowerAffinePass
This change is mechanical and merges the LowerAffineApplyPass and
LowerIfAndForPass into a single LowerAffinePass.  It makes a step towards
defining an "affine dialect" that would contain all polyhedral-related
constructs.  The motivation for merging these two passes is based on retiring
MLFunctions and, eventually, transforming If and For statements into regular
operations.  After that happens, LowerAffinePass becomes yet another
legalization.

PiperOrigin-RevId: 227566113
2019-03-29 14:52:52 -07:00
Alex Zinenko fa710c17f4 LowerForAndIf: expand affine_apply's inplace
Existing implementation was created before ML/CFG unification refactoring and
did not concern itself with further lowering to separate concerns.  As a
result, it emitted `affine_apply` instructions to implement `for` loop bounds
and `if` conditions and required a follow-up function pass to lower those
`affine_apply` to arithmetic primitives.  In the unified function world,
LowerForAndIf is mostly a lowering pass with low complexity.  As we move
towards a dialect for affine operations (including `for` and `if`), it makes
sense to lower `for` and `if` conditions directly to arithmetic primitives
instead of relying on `affine_apply`.

Expose `expandAffineExpr` function in LoweringUtils.  Use this function
together with `expandAffineMaps` to emit primitives that implement loop and
branch conditions directly.

Also remove tests that become unnecessary after transforming LowerForAndIf into
a function pass.

PiperOrigin-RevId: 227563608
2019-03-29 14:52:22 -07:00
Chris Lattner bbf362b784 Eliminate extfunc/cfgfunc/mlfunc as a concept, and just use 'func' instead.
The entire compiler now looks at structural properties of the function (e.g.
does it have one block, does it contain an if/for stmt, etc) so the only thing
holding up this difference is round tripping through the parser/printer syntax.
Removing this shrinks the compile by ~140LOC.

This is step 31/n towards merging instructions and statements.  The last step
is updating the docs, which I will do as a separate patch in order to split it
from this mostly mechanical patch.

PiperOrigin-RevId: 227540453
2019-03-29 14:51:37 -07:00
Nicolas Vasilache 73f5c9c380 [MLIR] Sketch a simple set of EDSCs to declaratively write MLIR
This CL introduces a simple set of Embedded Domain-Specific Components (EDSCs)
in MLIR components:
1. a `Type` system of shell classes that closely matches the MLIR type system. These
types are subdivided into `Bindable` leaf expressions and non-bindable `Expr`
expressions;
2. an `MLIREmitter` class whose purpose is to:
  a. maintain a map of `Bindable` leaf expressions to concrete SSAValue*;
  b. provide helper functionality to specify bindings of `Bindable` classes to
     SSAValue* while verifying conformable types;
  c. traverse the `Expr` and emit the MLIR.

This is used on a concrete example to implement MemRef load/store with clipping in the
LowerVectorTransfer pass. More specifically, the following pseudo-C++ code:
```c++
MLFuncBuilder *b = ...;
Location location = ...;
Bindable zero, one, expr, size;
// EDSL expression
auto access = select(expr < zero, zero, select(expr < size, expr, size - one));
auto ssaValue = MLIREmitter(b)
    .bind(zero, ...)
    .bind(one, ...)
    .bind(expr, ...)
    .bind(size, ...)
    .emit(location, access);
```
is used to emit all the MLIR for a clipped MemRef access.

This simple EDSL can easily be extended to more powerful patterns and should
serve as the counterpart to pattern matchers (and could potentially be unified
once we get enough experience).

In the future, most of this code should be TableGen'd but for now it has
concrete valuable uses: make MLIR programmable in a declarative fashion.

This CL also adds Stmt, proper supporting free functions and rewrites
VectorTransferLowering fully using EDSCs.

The code for creating the EDSCs emitting a VectorTransferReadOp as loops
with clipped loads is:

```c++
  Stmt block = Block({
    tmpAlloc = alloc(tmpMemRefType),
    vectorView = vector_type_cast(tmpAlloc, vectorMemRefType),
    ForNest(ivs, lbs, ubs, steps, {
      scalarValue = load(scalarMemRef, accessInfo.clippedScalarAccessExprs),
      store(scalarValue, tmpAlloc, accessInfo.tmpAccessExprs),
    }),
    vectorValue = load(vectorView, zero),
    tmpDealloc = dealloc(tmpAlloc.getLHS())});
  emitter.emitStmt(block);
```

where `accessInfo.clippedScalarAccessExprs` is created with:

```c++
select(i + ii < zero, zero, select(i + ii < N, i + ii, N - one));
```

The generated MLIR resembles:

```mlir
    %1 = dim %0, 0 : memref<?x?x?x?xf32>
    %2 = dim %0, 1 : memref<?x?x?x?xf32>
    %3 = dim %0, 2 : memref<?x?x?x?xf32>
    %4 = dim %0, 3 : memref<?x?x?x?xf32>
    %5 = alloc() : memref<5x4x3xf32>
    %6 = vector_type_cast %5 : memref<5x4x3xf32>, memref<1xvector<5x4x3xf32>>
    for %i4 = 0 to 3 {
      for %i5 = 0 to 4 {
        for %i6 = 0 to 5 {
          %7 = affine_apply #map0(%i0, %i4)
          %8 = cmpi "slt", %7, %c0 : index
          %9 = affine_apply #map0(%i0, %i4)
          %10 = cmpi "slt", %9, %1 : index
          %11 = affine_apply #map0(%i0, %i4)
          %12 = affine_apply #map1(%1, %c1)
          %13 = select %10, %11, %12 : index
          %14 = select %8, %c0, %13 : index
          %15 = affine_apply #map0(%i3, %i6)
          %16 = cmpi "slt", %15, %c0 : index
          %17 = affine_apply #map0(%i3, %i6)
          %18 = cmpi "slt", %17, %4 : index
          %19 = affine_apply #map0(%i3, %i6)
          %20 = affine_apply #map1(%4, %c1)
          %21 = select %18, %19, %20 : index
          %22 = select %16, %c0, %21 : index
          %23 = load %0[%14, %i1, %i2, %22] : memref<?x?x?x?xf32>
          store %23, %5[%i6, %i5, %i4] : memref<5x4x3xf32>
        }
      }
    }
    %24 = load %6[%c0] : memref<1xvector<5x4x3xf32>>
    dealloc %5 : memref<5x4x3xf32>
```

In particular notice that only 3 out of the 4-d accesses are clipped: this
corresponds indeed to the number of dimensions in the super-vector.

This CL also addresses the cleanups resulting from the review of the previous
CL and performs some refactoring to simplify the abstraction.

PiperOrigin-RevId: 227367414
2019-03-29 14:50:23 -07:00
Chris Lattner ae618428f6 Greatly simplify the ConvertToCFG pass, converting it from a module pass to a
function pass, and eliminating the need to copy over code and do
interprocedural updates.  While here, also improve it to make fewer empty
blocks, and rename it to "LowerIfAndFor" since that is what it does.  This is
a net reduction of ~170 lines of code.

As drive-bys, change the splitBlock method to *not* insert an unconditional
branch, since that behavior is annoying for all clients.  Also improve the
AsmPrinter to not crash when a block is referenced that isn't linked into a
function.

PiperOrigin-RevId: 227308856
2019-03-29 14:48:13 -07:00
Uday Bondhugula b9fe6be6d4 Introduce memref store to load forwarding - a simple memref dataflow analysis
- the load/store forwarding relies on memref dependence routines as well as
  SSA/dominance to identify the memref store instance uniquely supplying a value
  to a memref load, and replaces the result of that load with the value being
  stored. The memref is also deleted when possible if only stores remain.

- add methods for post dominance for MLFunction blocks.

- remove duplicated getLoopDepth/getNestingDepth - move getNestingDepth,
  getMemRefAccess, getNumCommonSurroundingLoops into Analysis/Utils (were
  earlier static)

- add a helper method in FlatAffineConstraints - isRangeOneToOne.

PiperOrigin-RevId: 227252907
2019-03-29 14:47:28 -07:00
Uday Bondhugula 6e3462d251 Fix b/122139732; update FlatAffineConstraints::isEmpty() to eliminate IDs in a
better order.

- update isEmpty() to eliminate IDs in a better order. Speed improvement for
  complex cases (for eg. high-d reshape's involving mod's/div's).
- minor efficiency update to projectOut (was earlier making an extra albeit
  benign call to gaussianEliminateIds) (NFC).
- move getBestIdToEliminate further up in the file (NFC).
- add the failing test case.
- add debug info to checkMemRefAccessDependence.

PiperOrigin-RevId: 227244634
2019-03-29 14:47:13 -07:00
Chris Lattner 8ef2552df7 Have the asmprinter take advantage of the new capabilities of the asmparser, by
printing the entry block in a CFG function's argument line.  Since I'm touching
all of the testcases anyway, change the argument list from printing as
"%arg : type" to "%arg: type" which is more consistent with bb arguments.

In addition to being more consistent, this is a much nicer look for cfg functions.

PiperOrigin-RevId: 227240069
2019-03-29 14:46:29 -07:00
Chris Lattner 37579ae8c4 Introduce ^ as a basic block sigil, eliminating an ambiguity on the MLIR
syntax.

PiperOrigin-RevId: 227234174
2019-03-29 14:45:59 -07:00
Chris Lattner 456ad6a8e0 Standardize naming of statements -> instructions, revisting the code base to be
consistent and moving the using declarations over.  Hopefully this is the last
truly massive patch in this refactoring.

This is step 21/n towards merging instructions and statements, NFC.

PiperOrigin-RevId: 227178245
2019-03-29 14:44:30 -07:00
Uday Bondhugula b1d9cc4d1e Extend/complete dependence tester to utilize local var info.
- extend/complete dependence tester to utilize local var info while adding
  access function equality constraints; one more step closer to get slicing
  based fusion working in the general case of affine_apply's involving mod's/div's.
- update test case to reflect more accurate dependence information; remove
  inaccurate comment on test case mod_deps.
- fix a minor "bug" in equality addition in addMemRefAccessConstraints (doesn't
  affect correctness, but the fixed version is more intuitive).
- some more surrounding code clean up
- move simplifyAffineExpr out of anonymous AffineExprFlattener class - the
  latter has state, and the former should reside outside.

PiperOrigin-RevId: 227175600
2019-03-29 14:44:14 -07:00
Uday Bondhugula 294687ef59 Fix affine expr flattener bug introduced by cl/225452174.
- inconsistent local var constraint size when repeatedly using the same
  flattener for all expressions in a map.

PiperOrigin-RevId: 227067836
2019-03-29 14:40:37 -07:00
Alex Zinenko eb0f9f37af SuperVectorization: fix 'isa' assertion
Supervectorization uses null pointers to SSA values as a means of communicating
the failure to vectorize.  In operation vectorization, all operations producing
the values of operation arguments must be vectorized for the given operation to
be vectorized.  The existing check verified if any of the value "def"
statements was vectorized instead, sometimes leading to assertions inside `isa`
called on a null pointer.  Fix this to check that all "def" statements were
vectorized.

PiperOrigin-RevId: 226941552
2019-03-29 14:37:20 -07:00
MLIR Team 4eef795a1d Computation slice update: adds parameters to insertBackwardComputationSlice which specify the source loop nest depth at which to perform iteration space slicing, and the destination loop nest depth at which to insert the computation slice.
Updates LoopFusion pass to take these parameters as command line flags for experimentation.

PiperOrigin-RevId: 226514297
2019-03-29 14:35:03 -07:00
MLIR Team bcb7c4742d Do proper indexing for local variables when building access function equality constraints (working on test cases).
PiperOrigin-RevId: 226399089
2019-03-29 14:34:02 -07:00
MLIR Team 2570fb5bb7 Address some issues from memref dependence check bug (b/121216762), adds test cases.
PiperOrigin-RevId: 226277453
2019-03-29 14:33:17 -07:00
MLIR Team 6892ffb896 Improve loop fusion algorithm by using a memref dependence graph.
Fixed TODO for reduction fusion unit test.

PiperOrigin-RevId: 226277226
2019-03-29 14:33:02 -07:00
Uday Bondhugula 14d2618f63 Simplify memref-dependence-check's meta data structures / drop duplication and
reuse existing ones.

- drop IterationDomainContext, redundant since FlatAffineConstraints has
  MLValue information associated with its dimensions.
- refactor to use existing support
- leads to a reduction in LOC
- as a result of these changes, non-constant loop bounds get naturally
  supported for dep analysis.
- update test cases to include a couple with non-constant loop bounds
- rename addBoundsFromForStmt -> addForStmtDomain
- complete TODO for getLoopIVs (handle 'if' statements)

PiperOrigin-RevId: 226082008
2019-03-29 14:32:46 -07:00
Uday Bondhugula 1d72f2e47e Update / complete a TODO for addBoundsForForStmt
- when adding constraints from a 'for' stmt into FlatAffineConstraints,
  correctly add bound operands of the 'for' stmt as a dimensional identifier or
  a symbolic identifier depending on whether the bound operand is a valid
  MLFunction symbol
- update test case to exercise this.

PiperOrigin-RevId: 225988511
2019-03-29 14:32:31 -07:00
Uday Bondhugula 20531932f4 Refactor/update memref-dep-check's addMemRefAccessConstraints and
addDomainConstraints; add support for mod/div for dependence testing.

- add support for mod/div expressions in dependence analysis
- refactor addMemRefAccessConstraints to use getFlattenedAffineExprs (instead
  of getFlattenedAffineExpr); update addDomainConstraints.
- rename AffineExprFlattener::cst -> localVarCst

PiperOrigin-RevId: 225933306
2019-03-29 14:31:58 -07:00
Alex Zinenko 51c8a095a3 Materialize vector_type_cast operation in the SuperVector dialect
This operation is produced and used by the super-vectorization passes and has
been emitted as an abstract unregistered operation until now.  For end-to-end
testing purposes, it has to be eventually lowered to LLVM IR.  Matching
abstract operation by name goes into the opposite direction of the generic
lowering approach that is expected to be used for LLVM IR lowering in the
future.  Register vector_type_cast operation as a part of the SuperVector
dialect.

Arguably, this operation is a special case of the `view` operation from the
Standard dialect.  The semantics of `view` is not fully specified at this point
so it is safer to rely on a custom operation.  Additionally, using a custom
operation may help to achieve clear dialect separation.

PiperOrigin-RevId: 225887305
2019-03-29 14:31:13 -07:00
MLIR Team 3b69230b3a Loop Fusion pass update: introduce utilities to perform generalized loop fusion based on slicing; encompasses standard loop fusion.
*) Adds simple greedy fusion algorithm to drive experimentation. This algorithm greedily fuses loop nests with single-writer/single-reader memref dependences to improve locality.
*) Adds support for fusing slices of a loop nest computation: fusing one loop nest into another by adjusting the source loop nest's iteration bounds (after it is fused into the destination loop nest). This is accomplished by solving for the source loop nest's IVs in terms of the destination loop nest's IVs and symbols using the dependence polyhedron, then creating AffineMaps of these functions for the loop bounds of the fused source loop.
*) Adds utility function 'insertMemRefComputationSlice' which computes and inserts a computation slice from the loop nest surrounding a source memref access into the loop nest surrounding the destination memref access.
*) Adds FlatAffineConstraints::toAffineMap function which returns an AffineMap which represents an equality constraint where one dimension identifier is represented as a function of all others in the equality constraint.
*) Adds multiple fusion unit tests.

PiperOrigin-RevId: 225842944
2019-03-29 14:30:13 -07:00
Uday Bondhugula c41ee60647 'memref-bound-check': extend to store op's as well
- extend memref-bound-check to store op's
- make the bound check an analysis util and move to lib/Analysis/Utils.cpp (so that
  one doesn't need to always create a pass to use it)

PiperOrigin-RevId: 225564830
2019-03-29 14:29:13 -07:00
Uday Bondhugula 45a0f52519 Expression flattening improvement - reuse local expressions.
- if a local id already exists for a specific mod/div expression, just reuse it
  when the expression repeats (instead of adding a new one); see the sketch below.
- drastically reduces the number of local variables added during flattening for
  real use cases - since the same div's and mod expressions often repeat.
- add getFlattenedAffineExprs for AffineMap, IntegerSet based on the above

As a natural result of the above:

- FlatAffineConstraints(IntegerSet) ctor now deals with integer sets that have mod
  and div constraints as well, and these get simplified as well from -simplify-affine-structures
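
A toy analog of the reuse described in the first bullet (the string-keyed map
is illustrative; the flattener keys on the actual expression structure):

```c++
#include <map>
#include <string>

// Hand back the existing local variable id when the same mod/div expression
// is seen again, instead of materializing a fresh local each time.
struct LocalVarPool {
  std::map<std::string, unsigned> localForExpr;
  unsigned numLocals = 0;
  unsigned getOrCreateLocal(const std::string &canonicalExpr) {
    auto it = localForExpr.find(canonicalExpr);
    if (it != localForExpr.end())
      return it->second; // reuse: a repeated div/mod adds no new local
    return localForExpr[canonicalExpr] = numLocals++;
  }
};
```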

PiperOrigin-RevId: 225452174
2019-03-29 14:28:13 -07:00
Uday Bondhugula 8365bdc17f FlatAffineConstraints - complete TODOs: add method to remove duplicate /
trivially redundant constraints. Update projectOut to eliminate identifiers in
a more efficient order. Fix b/120801118.

- add method to remove duplicate / trivially redundant constraints from
  FlatAffineConstraints (use a hashing-based approach with DenseSet)
- update projectOut to eliminate identifiers in a more efficient order

(A sequence of affine_apply's like this (from a real use case) finally exposed
the lack of the above trivial/low hanging simplifications).

  for %ii = 0 to 64 {
    for %jj = 0 to 9 {
      %a0 = affine_apply (d0, d1) -> (d0 * (9 * 1024) + d1 * 128) (%ii, %jj)
      %a1 = affine_apply (d0) ->
        (d0 floordiv (2 * 3 * 3 * 128 * 128),
        (d0 mod 294912) floordiv (3 * 3 * 128 * 128),
        (((d0 mod 294912) mod 147456) floordiv 1152) floordiv 8,
        (((d0 mod 294912) mod 147456) mod 1152) floordiv 384,
        ((((d0 mod 294912) mod 147456) mod 1152) mod 384) floordiv 128,
        (((((d0 mod 294912) mod 147456) mod 1152) mod 384) mod 128)
          floordiv 128) (%a0)
      %v0 = load %in[%a1#0, %a1#1, %a1#3, %a1#4, %a1#2, %a1#5]
        : memref<2x2x3x3x16x1xi32>
    }
  }

- update FlatAffineConstraints::print to print number of constraints.

PiperOrigin-RevId: 225397480
2019-03-29 14:27:29 -07:00
Uday Bondhugula 4860f0e8fd Fix loop unrolling test cases
- These test cases had to be updated post the switch to exclusive upper bound;
  however, the test cases hadn't originally been written to check correctly; as
  a result, they didn't fail and weren't updated. Update test case and fix
  upper bound.

PiperOrigin-RevId: 225194016
2019-03-29 14:26:56 -07:00
Alex Zinenko 97d2f3cd3d ConvertToCFG: use affine_apply to implement loop steps
Originally, loop steps were implemented using `addi` and `constant` operations
because `affine_apply` was not handled in the first implementation.  The
support for `affine_apply` has been added, use it to implement the update of
the loop induction variable.  This is more consistent with the lower and upper
bounds of the loop that are also implemented as `affine_apply`, removes the
dependence of the converted function on the StandardOps dialect and makes it
clear from the CFG function that all operations on the loop induction variable
are purely affine.

PiperOrigin-RevId: 225165337
2019-03-29 14:26:22 -07:00
Uday Bondhugula b9f53dc0bd Update/Fix LoopUtils::stmtBodySkew to handle loop step.
- loop step wasn't handled and there wasn't a TODO or an assertion; fix this.
- rename 'delay' to shift for consistency/readability.
- other readability changes.
- remove duplicate attribute print for DmaStartOp; fix misplaced attribute
  print for DmaWaitOp
- add build method for AddFOp (unrelated to this CL, but add it anyway)

PiperOrigin-RevId: 224892958
2019-03-29 14:25:07 -07:00
Uday Bondhugula d59a95a05c Fix missing check for dependent DMAs in pipeline-data-transfer
- adding a conservative check for now (TODO: use the dependence analysis pass
  once the latter is extended to deal with DMA ops). Resolve an existing bug on
  a test case.

- update test cases

PiperOrigin-RevId: 224869526
2019-03-29 14:24:53 -07:00
Uday Bondhugula 6757fb151d FlatAffineConstraints API cleanup; add normalizeConstraintsByGCD().
- add method normalizeConstraintsByGCD (see the sketch below)
- call normalizeConstraintsByGCD() and GCDTightenInequalities() at the end of
  projectOut.
- remove call to GCDTightenInequalities() from getMemRefRegion
- change isEmpty() to check isEmptyByGCDTest() / hasInvalidConstraint() each
  time an identifier is eliminated (to detect emptiness early).
- make FourierMotzkinEliminate, gaussianEliminateId(s),
  GCDTightenInequalities() private
- improve / update stale comments
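
A sketch of GCD normalization plus inequality tightening for one row of the
form a1*x1 + ... + an*xn + c >= 0 (representation and names assumed here for
illustration):

```c++
#include <cstdint>
#include <cstdlib>
#include <numeric>
#include <vector>

// Divide the coefficients by their GCD; the constant of an inequality can
// then be tightened by flooring without cutting off any integer point.
void normalizeInequalityByGCD(std::vector<int64_t> &coeffs, int64_t &c) {
  int64_t g = 0;
  for (int64_t a : coeffs)
    g = std::gcd(g, std::llabs(a));
  if (g <= 1)
    return;
  for (int64_t &a : coeffs)
    a /= g;
  c = c >= 0 ? c / g : -((-c + g - 1) / g); // floor(c / g)
}
```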

PiperOrigin-RevId: 224866741
2019-03-29 14:24:37 -07:00
Uday Bondhugula 2ef57806ba Update/fix -pipeline-data-transfer; fix b/120770946
- fix replaceAllMemRefUsesWith call to replace only inside loop body.
- handle the case where DMA buffers are dynamic; extend doubleBuffer() method
  to handle dynamically shaped DMA buffers (pass the right operands to AllocOp)
- place alloc's for DMA buffers at the depth at which pipelining is being done
  (instead of at top-level)
- add more test cases

PiperOrigin-RevId: 224852231
2019-03-29 14:24:22 -07:00
Uday Bondhugula 2d6478fa92 Extend loop tiling utility to handle non-constant loop bounds and bounds that
are a max/min of several expressions.

- Extend loop tiling to handle non-constant loop bounds and bounds that
  are a max/min of several expressions, i.e., bounds using multi-result affine
  maps

- also fix b/120630124 as a result (the IR was in an invalid state when tiled
  loop generation failed; SSA uses were created that weren't plugged into the IR).

PiperOrigin-RevId: 224604460
2019-03-29 14:23:34 -07:00
Uday Bondhugula dfc752e42b Generate strided DMAs from -dma-generate
- generate DMAs correctly now using strided DMAs where needed
- add support for multi-level/nested strides; op still supports one level of
  stride for now.

Other things
- add test case for  symbolic lower/upper bound; cases where the DMA buffer
  size can't be bounded by a known constant
- add test case for dynamic shapes where the DMA buffers are however bounded by
  constants
- refactor some of the '-dma-generate' code

PiperOrigin-RevId: 224584529
2019-03-29 14:23:19 -07:00
Nicolas Vasilache d9b6420fc9 [MLIR] Add LowerVectorTransfersPass
This CL adds a pass that lowers VectorTransferReadOp and VectorTransferWriteOp
to a simple loop nest via local buffer allocations.

This is an MLIR->MLIR lowering based on builders.

A few TODOs are left to address in particular:
1. invert the permutation map so the accesses to the remote memref are coalesced;
2. pad the alloc for bank conflicts in local memory (e.g. GPUs shared_memory);
3. support broadcast / avoid copies when permutation_map is not of full column rank
4. add a proper "element_cast" op

One notable limitation is this does not plan on supporting boundary conditions.
It should be significantly easier to use pre-baked MLIR functions to handle such paddings.
This is left for future consideration.
Therefore the current CL only works properly for full-tile cases atm.

This CL also adds 2 simple tests:

```mlir
  for %i0 = 0 to %M step 3 {
    for %i1 = 0 to %N step 4 {
      for %i2 = 0 to %O {
        for %i3 = 0 to %P step 5 {
          vector_transfer_write %f1, %A, %i0, %i1, %i2, %i3 {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d0)} : vector<5x4x3xf32>, memref<?x?x?x?xf32, 0>, index, index, index, index
```

lowers into:
```mlir
for %i0 = 0 to %arg0 step 3 {
  for %i1 = 0 to %arg1 step 4 {
    for %i2 = 0 to %arg2 {
      for %i3 = 0 to %arg3 step 5 {
        %1 = alloc() : memref<5x4x3xf32>
        %2 = "element_type_cast"(%1) : (memref<5x4x3xf32>) -> memref<1xvector<5x4x3xf32>>
        store %cst, %2[%c0] : memref<1xvector<5x4x3xf32>>
        for %i4 = 0 to 5 {
          %3 = affine_apply (d0, d1) -> (d0 + d1) (%i3, %i4)
          for %i5 = 0 to 4 {
            %4 = affine_apply (d0, d1) -> (d0 + d1) (%i1, %i5)
            for %i6 = 0 to 3 {
              %5 = affine_apply (d0, d1) -> (d0 + d1) (%i0, %i6)
              %6 = load %1[%i4, %i5, %i6] : memref<5x4x3xf32>
              store %6, %0[%5, %4, %i2, %3] : memref<?x?x?x?xf32>
       dealloc %1 : memref<5x4x3xf32>
```

and
```mlir
  for %i0 = 0 to %M step 3 {
    for %i1 = 0 to %N {
      for %i2 = 0 to %O {
        for %i3 = 0 to %P step 5 {
          %f = vector_transfer_read %A, %i0, %i1, %i2, %i3 {permutation_map: (d0, d1, d2, d3) -> (d3, 0, d0)} : (memref<?x?x?x?xf32, 0>, index, index, index, index) -> vector<5x4x3xf32>

```

lowers into:
```mlir
for %i0 = 0 to %arg0 step 3 {
  for %i1 = 0 to %arg1 {
    for %i2 = 0 to %arg2 {
      for %i3 = 0 to %arg3 step 5 {
        %1 = alloc() : memref<5x4x3xf32>
        %2 = "element_type_cast"(%1) : (memref<5x4x3xf32>) -> memref<1xvector<5x4x3xf32>>
        for %i4 = 0 to 5 {
          %3 = affine_apply (d0, d1) -> (d0 + d1) (%i3, %i4)
          for %i5 = 0 to 4 {
            for %i6 = 0 to 3 {
              %4 = affine_apply (d0, d1) -> (d0 + d1) (%i0, %i6)
              %5 = load %0[%4, %i1, %i2, %3] : memref<?x?x?x?xf32>
              store %5, %1[%i4, %i5, %i6] : memref<5x4x3xf32>
        %6 = load %2[%c0] : memref<1xvector<5x4x3xf32>>
        dealloc %1 : memref<5x4x3xf32>
```

PiperOrigin-RevId: 224552717
2019-03-29 14:23:05 -07:00
Nicolas Vasilache 4adc169bd0 [MLIR] Add AffineMap composition and use it in Materialization
This CL adds the following free functions:
```
/// Returns the AffineExpr e o m.
AffineExpr compose(AffineExpr e, AffineMap m);
/// Returns the AffineExpr f o g.
AffineMap compose(AffineMap f, AffineMap g);
```

This addresses the issue that AffineMap composition is only available at a
distance via AffineValueMap and is thus unusable on Attributes.
This CL thus implements AffineMap composition in a more modular and composable
way.

This CL does not claim that it can be a good replacement for the
implementation in AffineValueMap, in particular it does not support bounded
maps atm.

Standalone tests are added that replicate some of the logic of the AffineMap
composition pass.

Lastly, affine map composition is used properly inside MaterializeVectors and
a standalone test is added that requires permutation_map composition with a
projection map.
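
A toy check of the `f o g` convention using plain lambdas (g applies first):

```c++
#include <cassert>
#include <cstdint>

int main() {
  auto g = [](int64_t d0) { return d0 * 2; };      // (d0) -> (d0 * 2)
  auto f = [](int64_t d0) { return d0 + 1; };      // (d0) -> (d0 + 1)
  auto fog = [&](int64_t d0) { return f(g(d0)); }; // (d0) -> (d0 * 2 + 1)
  assert(fog(3) == 7);
  return 0;
}
```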

PiperOrigin-RevId: 224376870
2019-03-29 14:20:22 -07:00
Nicolas Vasilache df0a25efee [MLIR] Add support for permutation_map
This CL hooks up and uses permutation_map in vector_transfer ops.
In particular, when going into the nuts and bolts of the implementation, it
became clear that cases arose that required supporting broadcast semantics.
Broadcast semantics are thus added to the general permutation_map.
The verify methods and tests are updated accordingly.

Examples of interest include.

Example 1:
The following MLIR snippet:
```mlir
   for %i3 = 0 to %M {
     for %i4 = 0 to %N {
       for %i5 = 0 to %P {
         %a5 = load %A[%i4, %i5, %i3] : memref<?x?x?xf32>
   }}}
```
may vectorize with {permutation_map: (d0, d1, d2) -> (d2, d1)} into:
```mlir
   for %i3 = 0 to %0 step 32 {
     for %i4 = 0 to %1 {
       for %i5 = 0 to %2 step 256 {
         %4 = vector_transfer_read %arg0, %i4, %i5, %i3
              {permutation_map: (d0, d1, d2) -> (d2, d1)} :
              (memref<?x?x?xf32>, index, index) -> vector<32x256xf32>
   }}}
```
Meaning that vector_transfer_read will be responsible for reading the 2-D slice:
`%arg0[%i4, %i5:%i5+256, %i3:%i3+32]` into vector<32x256xf32>. This will
require a transposition when vector_transfer_read is further lowered.

Example 2:
The following MLIR snippet:
```mlir
   %cst0 = constant 0 : index
   for %i0 = 0 to %M {
     %a0 = load %A[%cst0, %cst0] : memref<?x?xf32>
   }
```
may vectorize with {permutation_map: (d0) -> (0)} into:
```mlir
   for %i0 = 0 to %0 step 128 {
     %3 = vector_transfer_read %arg0, %c0_0, %c0_0
          {permutation_map: (d0, d1) -> (0)} :
          (memref<?x?xf32>, index, index) -> vector<128xf32>
   }
```
Meaning that vector_transfer_read will be responsible for reading the 0-D slice
`%arg0[%c0, %c0]` into vector<128xf32>. This will require a 1-D vector
broadcast when vector_transfer_read is further lowered.

Additionally, some minor cleanups and refactorings are performed.

One notable thing missing here is the composition with a projection map during
materialization. This is because I could not find an AffineMap composition
that operates on AffineMap directly: everything related to composition seems
to require going through SSAValue and only operates on AffineMap at a distance
via AffineValueMap. I have raised this concern a bunch of times already; the
followup CL will actually do something about it.

In the meantime, the projection is hacked at a minimum to pass verification
and materialization tests are temporarily incorrect.

PiperOrigin-RevId: 224376828
2019-03-29 14:20:07 -07:00
Alex Zinenko 7c89a225cf ConvertToCFG: support min/max in loop bounds.
The recently introduced `select` operation enables ConvertToCFG to support
min(max) in loop bounds.  Individual min(max) is implemented as
`cmpi "lt"`(`cmpi "gt"`) followed by a `select` between the compared values.
Multiple results of an `affine_apply` operation extracted from the loop bounds
are reduced using min(max) in a sequential manner.  While this may decrease the
potential for instruction-level parallelism, it is easier to recognize for the
following passes, in particular for the vectorizer.
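
As a minimal sketch (hypothetical value names), a lower bound max(%a, %b)
would lower roughly to:

```mlir
%cond = cmpi "gt", %a, %b : index
%lb   = select %cond, %a, %b : index
```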

PiperOrigin-RevId: 224376233
2019-03-29 14:19:52 -07:00
MLIR Team a53ed1b767 Fix bug in GCD calculation when flattening AffineExpr (adds unit test which triggers the bug and tests the fix).
PiperOrigin-RevId: 224246657
2019-03-29 14:19:07 -07:00
Uday Bondhugula a92130880e Complete multiple unhandled cases for DmaGeneration / getMemRefRegion;
update/improve/clean up API.

- update FlatAffineConstraints::getConstBoundDifference; return constant
  differences between symbolic affine expressions, look at equalities as well.
- fix buffer size computation when generating DMAs symbolic in outer loops,
  correctly handle symbols at various places (affine access maps, loop bounds,
  loop IVs outer to the depth at which DMA generation is being done)
- bug fixes / complete some TODOs for getMemRefRegion
- refactor common code b/w memref dependence check and getMemRefRegion
- FlatAffineConstraints API update; added methods employ trivial checks /
  detection - sufficient to handle hyper-rectangular cases in a precise way
  while being fast / low complexity. Hyper-rectangular cases fall out as
  trivial cases for these methods while other cases still do not cause failure
  (either return conservative or return failure that is handled by the caller).

PiperOrigin-RevId: 224229879
2019-03-29 14:18:22 -07:00
MLIR Team 753109547d During forward substitution, merge symbols from input AffineMap with the symbol list of the target AffineMap.
Symbols can be used as dim identifiers and symbolic identifiers, and so we must preserve the symbolic identifiers from the input AffineMap during forward substitution, even if that same identifier is used as a dimension identifier in the target AffineMap.
Test case added.

Going forward, we may want to explore solutions where we do not maintain this split between dimensions and symbols, and instead verify the validity of each AffineMap operand use in the contexts where it is required to be a symbol: in the denominator of floordiv/ceildiv/mod for semi-affine maps, and in instructions that can capture symbols (i.e., alloc).

PiperOrigin-RevId: 224017364
2019-03-29 14:16:40 -07:00
Alex Zinenko 7868abd9d8 ConvertToCFG: convert "if" statements.
The condition of the "if" statement is an integer set, defined as a conjunction
of affine constraints.  An affine constraint consists of an affine expression
and a flag indicating whether the expression is strictly equal to zero or is
also allowed to be greater than zero.  Affine maps, accepted by `affine_apply`
are also formed from affine expressions.  Leverage this fact to implement the
checking of "if" conditions.  Each affine expression from the integer set is
converted into an affine map.  This map is applied to the arguments of the "if"
statement.  The result of the application is compared with zero given the
equality flag to obtain the final boolean value.  The conjunction of conditions
is tested sequentially with short-circuit branching to the "else" branch if any
of the condition evaluates to false.
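
As a hedged sketch (hypothetical names), a single constraint such as
d0 - 10 >= 0 from the integer set would be checked roughly as:

```mlir
%v    = affine_apply (d0) -> (d0 - 10) (%i)
%zero = constant 0 : index
%cond = cmpi "ge", %v, %zero : index
// a conditional branch on %cond then short-circuits to the "else" branch
```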

Create an SESE region for the if statement (including its "then" and optional
"else" statement blocks) and append it to the end of the current region.  The
conditional region consists of a sequence of condition-checking blocks that
implement the short-circuit scheme, followed by a "then" SESE region and an
"else" SESE region, and the continuation block that post-dominates all blocks
of the "if" statement.  The flow of blocks that correspond to the "then" and
"else" clauses are constructed recursively, enabling easy nesting of "if"
statements and if-then-else-if chains.

Note that MLIR semantics does not require nor prohibit short-circuit
evaluation.  Since affine expressions do not have side effects, there is no
observable difference in the program behavior.  We may trade off extra
operations for operation-level parallelism opportunity by first performing all
`affine_apply` and comparison operations independently, and then performing a
tree pattern reduction of the resulting boolean values with the `muli i1`
operations (in absence of the dedicated bit operations).  The pros and cons are
not clear, and since MLIR does not include parallel semantics, we prefer to
minimize the number of sequentially executed operations.

PiperOrigin-RevId: 223970248
2019-03-29 14:16:10 -07:00
Nicolas Vasilache ebb3d38471 [MLIR] Separate and split vectorization tests
These tests have become too bulky and unwieldy.
Splitting simplifies modifications that will occur in the next CL.

PiperOrigin-RevId: 223874321
2019-03-29 14:15:40 -07:00
Nicolas Vasilache b39d1f0bdb [MLIR] Add VectorTransferOps
This CL implements and uses VectorTransferOps in lieu of the former custom
call op. Tests are updated accordingly.

VectorTransferOps come in 2 flavors: VectorTransferReadOp and
VectorTransferWriteOp.

VectorTransferOps can be thought of as a backend-independent
pseudo op/library call that needs to be legalized to MLIR (whiteboxed) before
it can be lowered to backend-dependent IR.

Note that the current implementation does not yet support a real permutation
map. Proper support will come in a followup CL.

VectorTransferReadOp
====================
VectorTransferReadOp performs a blocking read from a scalar memref
location into a super-vector of the same elemental type. This operation is
called 'read', in contrast to 'load', because the super-vector granularity
is generally not representable with a single hardware register. As a
consequence, memory transfers will generally be required when lowering
VectorTransferReadOp. A VectorTransferReadOp is thus a mid-level abstraction
that supports super-vectorization with non-effecting padding for full-tile
only code.

A vector transfer read has semantics similar to a vector load, with additional
support for:
  1. an optional value of the elemental type of the MemRef. This value
     supports non-effecting padding and is inserted in places where the
     vector read exceeds the MemRef bounds. If the value is not specified,
     the access is statically guaranteed to be within bounds;
  2. an attribute of type AffineMap to specify a slice of the original
     MemRef access and its transposition into the super-vector shape. The
     permutation_map is an unbounded AffineMap that must represent a
     permutation from the MemRef dim space projected onto the vector dim
     space.

Example:
```mlir
  %A = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
  ...
  %val = `ssa-value` : f32
  // let %i, %j, %k, %l be ssa-values of type index
  %v0 = vector_transfer_read %src, %i, %j, %k, %l
        {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
          (memref<?x?x?x?xf32>, index, index, index, index) ->
            vector<16x32x64xf32>
  %v1 = vector_transfer_read %src, %i, %j, %k, %l, %val
        {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
          (memref<?x?x?x?xf32>, index, index, index, index, f32) ->
            vector<16x32x64xf32>
```

VectorTransferWriteOp
=====================
VectorTransferWriteOp performs a blocking write from a super-vector to
a scalar memref of the same elemental type. This operation is
called 'write', in contrast to 'store', because the super-vector
granularity is generally not representable with a single hardware register. As
a consequence, memory transfers will generally be required when lowering
VectorTransferWriteOp. A VectorTransferWriteOp is thus a mid-level
abstraction that supports super-vectorization with non-effecting padding
for full-tile only code.
A vector transfer write has semantics similar to a vector store, with
additional support for handling out-of-bounds situations.

Example:
```mlir
  %A = alloc(%size1, %size2, %size3, %size4) : memref<?x?x?x?xf32>
  %val = `ssa-value` : vector<16x32x64xf32>
  // let %i, %j, %k, %l be ssa-values of type index
  vector_transfer_write %val, %src, %i, %j, %k, %l
    {permutation_map: (d0, d1, d2, d3) -> (d3, d1, d2)} :
  (vector<16x32x64xf32>, memref<?x?x?x?xf32>, index, index, index, index)
```
PiperOrigin-RevId: 223873234
2019-03-29 14:15:25 -07:00
Uday Bondhugula 89c41fdca1 FlatAffineConstraints::composeMap: return failure instead of asserting on semi-affine maps
FlatAffineConstraints::composeMap: should return false instead of asserting on
a semi-affine map. Make getMemRefRegion just propagate false when encountering
semi-affine maps (instead of crashing!)
PiperOrigin-RevId: 223828743
2019-03-29 14:14:56 -07:00
Uday Bondhugula 5f76245cfe Minor fix for replaceAllMemRefUsesWith.
The check for whether the memref was used in a non-dereferencing context had to
be done inside, i.e., only for the op stmt's that the replacement was specified
to be performed on (by the domStmtFilter arg if provided). As such, it is
completely fine, for example, for a function to return a memref while the replacement
is being performed only on a specific loop's body (as in the case of DMA
generation).

PiperOrigin-RevId: 223827753
2019-03-29 14:14:43 -07:00
River Riddle 7669a259c4 Add a simple common sub expression elimination pass.
The algorithm collects defining operations within a scoped hash table. The scopes within the hash table correspond to nodes within the dominance tree for a function. This CL only adds support for simple operations, i.e., non-side-effecting ones. Side-effecting operations, e.g. load/store/call, will be handled in later patches.
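
An illustrative before/after (not from this CL's tests):

```mlir
// Before CSE: %1 recomputes %0 with identical operands.
%0 = addf %a, %b : f32
%1 = addf %a, %b : f32
"use"(%0, %1) : (f32, f32) -> ()
// After CSE: %1 is replaced by %0 and erased.
%0 = addf %a, %b : f32
"use"(%0, %0) : (f32, f32) -> ()
```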

PiperOrigin-RevId: 223811328
2019-03-29 14:14:28 -07:00
Nicolas Vasilache 1ae66f6520 [MLIR] Reenable materialize_vectors test
Fixes one of the Filecheck'ed tests which was mistakenly disabled.

PiperOrigin-RevId: 223401978
2019-03-29 14:12:40 -07:00
Alex Zinenko 68e9721aa8 Rename Deaffinator to LowerAffineApply and patch it.
Several things were suggested in post-submission reviews.  In particular, use
pointers in function interfaces instead of references (still use references
internally).  Clarify the behavior of the pass in the presence of MLFunctions.

PiperOrigin-RevId: 222556851
2019-03-29 14:08:59 -07:00
Nicolas Vasilache a5782f0d40 [MLIR][MaterializeVectors] Add a MaterializeVector pass via unrolling.
This CL adds an MLIR-MLIR pass which materializes super-vectors to
hardware-dependent sized vectors.

While the physical vector size is target-dependent, the pass is written in
a target-independent way: the target vector size is specified as a parameter
to the pass. This pass is thus a partial lowering that opens the "greybox"
that is the super-vector abstraction.

This first CL adds a first materialization pass that iterates over vector_transfer_write operations and:
1. computes the program slice including the current vector_transfer_write;
2. computes the multi-dimensional ratio of super-vector shape to hardware
vector shape;
3. for each possible multi-dimensional value within the bounds of ratio, a new slice is
instantiated (i.e. cloned and rewritten) so that all operations in this instance operate on
the hardware vector type.

As a simple example, given:
```mlir
mlfunc @vector_add_2d(%M : index, %N : index) -> memref<?x?xf32> {
  %A = alloc (%M, %N) : memref<?x?xf32>
  %B = alloc (%M, %N) : memref<?x?xf32>
  %C = alloc (%M, %N) : memref<?x?xf32>
  for %i0 = 0 to %M {
    for %i1 = 0 to %N {
      %a1 = load %A[%i0, %i1] : memref<?x?xf32>
      %b1 = load %B[%i0, %i1] : memref<?x?xf32>
      %s1 = addf %a1, %b1 : f32
      store %s1, %C[%i0, %i1] : memref<?x?xf32>
    }
  }
  return %C : memref<?x?xf32>
}
```

and the following options:
```
-vectorize -virtual-vector-size 32 --test-fastest-varying=0 -materialize-vectors -vector-size=8
```

materialization emits:
```mlir
#map0 = (d0, d1) -> (d0, d1)
#map1 = (d0, d1) -> (d0, d1 + 8)
#map2 = (d0, d1) -> (d0, d1 + 16)
#map3 = (d0, d1) -> (d0, d1 + 24)
mlfunc @vector_add_2d(%arg0 : index, %arg1 : index) -> memref<?x?xf32> {
  %0 = alloc(%arg0, %arg1) : memref<?x?xf32>
  %1 = alloc(%arg0, %arg1) : memref<?x?xf32>
  %2 = alloc(%arg0, %arg1) : memref<?x?xf32>
  for %i0 = 0 to %arg0 {
    for %i1 = 0 to %arg1 step 32 {
      %3 = affine_apply #map0(%i0, %i1)
      %4 = "vector_transfer_read"(%0, %3tensorflow/mlir#0, %3tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %5 = affine_apply #map1(%i0, %i1)
      %6 = "vector_transfer_read"(%0, %5tensorflow/mlir#0, %5tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %7 = affine_apply #map2(%i0, %i1)
      %8 = "vector_transfer_read"(%0, %7tensorflow/mlir#0, %7tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %9 = affine_apply #map3(%i0, %i1)
      %10 = "vector_transfer_read"(%0, %9tensorflow/mlir#0, %9tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %11 = affine_apply #map0(%i0, %i1)
      %12 = "vector_transfer_read"(%1, %11tensorflow/mlir#0, %11tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %13 = affine_apply #map1(%i0, %i1)
      %14 = "vector_transfer_read"(%1, %13tensorflow/mlir#0, %13tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %15 = affine_apply #map2(%i0, %i1)
      %16 = "vector_transfer_read"(%1, %15tensorflow/mlir#0, %15tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %17 = affine_apply #map3(%i0, %i1)
      %18 = "vector_transfer_read"(%1, %17tensorflow/mlir#0, %17tensorflow/mlir#1) : (memref<?x?xf32>, index, index) -> vector<8xf32>
      %19 = addf %4, %12 : vector<8xf32>
      %20 = addf %6, %14 : vector<8xf32>
      %21 = addf %8, %16 : vector<8xf32>
      %22 = addf %10, %18 : vector<8xf32>
      %23 = affine_apply #map0(%i0, %i1)
      "vector_transfer_write"(%19, %2, %23tensorflow/mlir#0, %23tensorflow/mlir#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
      %24 = affine_apply #map1(%i0, %i1)
      "vector_transfer_write"(%20, %2, %24tensorflow/mlir#0, %24tensorflow/mlir#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
      %25 = affine_apply #map2(%i0, %i1)
      "vector_transfer_write"(%21, %2, %25tensorflow/mlir#0, %25tensorflow/mlir#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
      %26 = affine_apply #map3(%i0, %i1)
      "vector_transfer_write"(%22, %2, %26tensorflow/mlir#0, %26tensorflow/mlir#1) : (vector<8xf32>, memref<?x?xf32>, index, index) -> ()
    }
  }
  return %2 : memref<?x?xf32>
}
```

PiperOrigin-RevId: 222455351
2019-03-29 14:08:31 -07:00
Nicolas Vasilache 5c16564bca [MLIR][Slicing] Add utils for computing slices.
This CL adds tooling for computing slices as an independent CL.
The first consumer of this analysis will be super-vector materialization in a
followup CL.

In particular, this adds:
1. a getForwardStaticSlice function with documentation, example and a
standalone unit test;
2. a getBackwardStaticSlice function with documentation, example and a
standalone unit test;
3. a getStaticSlice function with documentation, example and a standalone unit
test;
4. a topologicalSort function that is exercised through the getStaticSlice
unit test.

The getXXXStaticSlice functions take an additional root (resp. terminators)
parameter which acts as a boundary that the transitive propagation algorithm
is not allowed to cross.
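
As an illustration (hypothetical ops), the forward static slice rooted at %0
contains every operation transitively using %0:

```mlir
%0 = "root"() : () -> index
%1 = "a"(%0) : (index) -> index    // in the forward slice of %0
%2 = "b"(%1) : (index) -> index    // in the forward slice of %0
%3 = "unrelated"() : () -> index   // not in the slice
```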

PiperOrigin-RevId: 222446208
2019-03-29 14:08:02 -07:00
Uday Bondhugula 2631b155a9 Fix bugs in DMA generation and FlatAffineConstraints; add more test
cases.

- fix bug in calculating index expressions for DMA buffers in certain cases
  (affected tiled loop nests); add more test cases for better coverage.
- introduce an additional optional argument to replaceAllMemRefUsesWith;
  additional operands to the index remap AffineMap can now be supplied by the
  client.
- FlatAffineConstraints::addBoundsForStmt - fix off by one upper bound,
  ::composeMap - fix position bug.
- Some clean up and more comments

PiperOrigin-RevId: 222434628
2019-03-29 14:07:31 -07:00
Alex Zinenko 615c41c788 Introduce Deaffinator pass.
This function pass replaces affine_apply operations in CFG functions with
sequences of primitive arithmetic instructions that form the affine map.

The actual replacement functionality is located in LoweringUtils as a
standalone function operating on an individual affine_apply operation and
inserting the result at the location of the original operation.  It is expected
to be useful for other, target-specific lowering passes that may start at
MLFunction level that Deaffinator does not support.
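
A minimal sketch of the replacement (hypothetical snippet), assuming
index-typed arithmetic is available in the target dialect:

```mlir
// %0 = affine_apply (d0, d1) -> (d0 + d1 * 2) (%i, %j) becomes:
%c2 = constant 2 : index
%t  = muli %j, %c2 : index
%0  = addi %i, %t : index
```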

PiperOrigin-RevId: 222406692
2019-03-29 14:07:16 -07:00
Uday Bondhugula b6c03917ad Remove allocations for memref's that become dead as a result of double
buffering in the auto DMA overlap pass.

This is done online in the pass.

PiperOrigin-RevId: 222313640
2019-03-29 14:05:19 -07:00
Uday Bondhugula 0328217eb8 Automated rollback of changelist 221863955.
PiperOrigin-RevId: 222299120
2019-03-29 14:04:05 -07:00
Nicolas Vasilache 87d46aaf4b [MLIR][Vectorize] Refactor Vectorize use-def propagation.
This CL refactors a few things in Vectorize.cpp:
1. a clear distinction is made between:
  a. the LoadOp are the roots of vectorization and must be vectorized
  eagerly and propagate their value; and
  b. the StoreOp which are the terminals of vectorization and must be
  vectorized late (i.e. they do not produce values that need to be
  propagated).
2. the StoreOp must be vectorized late because in general it can store a value
that is not reachable from the subset of loads defined in the
current pattern. One trivial such case is storing a constant defined at the
top-level of the MLFunction and that needs to be turned into a splat.
3. a description of the algorithm is given;
4. the implementation matches the algorithm;
5. the last example is made parametric, in practice it will fully rely on the
implementation of vector_transfer_read/write which will handle boundary
conditions and padding. This will happen by lowering to a lower-level
abstraction either:
  a. directly in MLIR (whether DMA or just loops or any async tasks in the
     future) (whiteboxing);
  b. in LLO/LLVM-IR/whatever blackbox library call / search + swizzle inventor
  one may want to use;
  c. a partial mix of a. and b. (grey-boxing)
6. minor cleanups are applied;
7. mistakenly disabled unit tests are re-enabled (oopsie).

With this CL, this MLIR snippet:
```
mlfunc @vector_add_2d(%M : index, %N : index) -> memref<?x?xf32> {
  %A = alloc (%M, %N) : memref<?x?xf32>
  %B = alloc (%M, %N) : memref<?x?xf32>
  %C = alloc (%M, %N) : memref<?x?xf32>
  %f1 = constant 1.0 : f32
  %f2 = constant 2.0 : f32
  for %i0 = 0 to %M {
    for %i1 = 0 to %N {
      // non-scoped %f1
      store %f1, %A[%i0, %i1] : memref<?x?xf32>
    }
  }
  for %i4 = 0 to %M {
    for %i5 = 0 to %N {
      %a5 = load %A[%i4, %i5] : memref<?x?xf32>
      %b5 = load %B[%i4, %i5] : memref<?x?xf32>
      %s5 = addf %a5, %b5 : f32
      // non-scoped %f1
      %s6 = addf %s5, %f1 : f32
      store %s6, %C[%i4, %i5] : memref<?x?xf32>
    }
  }
  return %C : memref<?x?xf32>
}
```

vectorized with these arguments:
```
-vectorize -virtual-vector-size 256 --test-fastest-varying=0
```

vectorization produces this standard innermost-loop vectorized code:
```
mlfunc @vector_add_2d(%arg0 : index, %arg1 : index) -> memref<?x?xf32> {
  %0 = alloc(%arg0, %arg1) : memref<?x?xf32>
  %1 = alloc(%arg0, %arg1) : memref<?x?xf32>
  %2 = alloc(%arg0, %arg1) : memref<?x?xf32>
  %cst = constant 1.000000e+00 : f32
  %cst_0 = constant 2.000000e+00 : f32
  for %i0 = 0 to %arg0 {
    for %i1 = 0 to %arg1 step 256 {
      %cst_1 = constant splat<vector<256xf32>, 1.000000e+00> : vector<256xf32>
      "vector_transfer_write"(%cst_1, %0, %i0, %i1) : (vector<256xf32>, memref<?x?xf32>, index, index) -> ()
    }
  }
  for %i2 = 0 to %arg0 {
    for %i3 = 0 to %arg1 step 256 {
      %3 = "vector_transfer_read"(%0, %i2, %i3) : (memref<?x?xf32>, index, index) -> vector<256xf32>
      %4 = "vector_transfer_read"(%1, %i2, %i3) : (memref<?x?xf32>, index, index) -> vector<256xf32>
      %5 = addf %3, %4 : vector<256xf32>
      %cst_2 = constant splat<vector<256xf32>, 1.000000e+00> : vector<256xf32>
      %6 = addf %5, %cst_2 : vector<256xf32>
      "vector_transfer_write"(%6, %2, %i2, %i3) : (vector<256xf32>, memref<?x?xf32>, index, index) -> ()
    }
  }
  return %2 : memref<?x?xf32>
}
```

Of course, much more intricate n-D imperfectly-nested patterns can be emitted too in a fully declarative fashion, but this is enough for now.

PiperOrigin-RevId: 222280209
2019-03-29 14:03:50 -07:00
Alex Zinenko f986d5920b ConvertToCFG: handle loop 1D affine loop bounds.
In the general case, loop bounds can be expressed as affine maps of the outer
loop iterators and function arguments.  Relax the check for loop bounds to be
known integer constants and also accept one-dimensional affine bounds in
ConvertToCFG ForStmt lowering.  Emit affine_apply operations for both the upper
and the lower bound.  The semantics of MLFunctions guarantees that both bounds
can be computed before the loop starts iterating.  Constant bounds are merely a
short-hand notation for zero-dimensional affine maps and get supported
transparently.

Multidimensional affine bounds are not yet supported because the target IR
dialect lacks min/max operations necessary to implement the corresponding
semantics.

PiperOrigin-RevId: 222275801
2019-03-29 14:03:20 -07:00
Nicolas Vasilache 89d9913a20 [MLIR][VectorAnalysis] Add a VectorAnalysis and standalone tests
This CL adds some vector support in prevision of the upcoming vector
materialization pass. In particular this CL adds 2 functions to:
1. compute the multiplicity of a subvector shape in a supervector shape;
2. help match operations on strict super-vectors. This is defined for a given
subvector shape as an operation that manipulates a vector type that is an
integral multiple of the subtype, with multiplicity at least 2.
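
For instance (illustrative shapes), the multiplicity is computed over the
trailing dimensions:

```
// subvector vector<8xf32> in supervector vector<4x8x16xf32>:
// trailing dims give 16 / 8 = 2; leading dims 4 and 8 carry over,
// for a multi-dimensional ratio of [4, 8, 2] and multiplicity 4 * 8 * 2 = 64.
```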

This CL also adds a TestUtil pass where we can dump arbitrary testing of
functions and analysis that operate at a much smaller granularity than a pass
(e.g. an analysis for which it is convenient to write a bit of artificial MLIR
and write some custom test). This is in order to keep using Filecheck for
things that essentially look and feel like C++ unit tests.

PiperOrigin-RevId: 222250910
2019-03-29 14:02:17 -07:00
Uday Bondhugula fff1efbaf5 Updates to transformation/analysis passes/utilities. Update DMA generation pass
and getMemRefRegion() to work with specified loop depths; add support for
outgoing DMAs, store op's.

- add support for getMemRefRegion symbolic in outer loops - hence support for
  DMAs symbolic in outer surrounding loops.

- add DMA generation support for outgoing DMAs (store op's to lower memory
  space); extend getMemoryRegion to store op's. -memref-bound-check now works
  with store op's as well.

- fix dma-generate (references to the old memref in the dma_start op were also
  being replaced with the new buffer); we need memref use replacement to work
  only on a subset of the uses - add a new optional argument for
  replaceAllMemRefUsesWith. update replaceAllMemRefUsesWith to take an optional
  'operation' argument to serve as a filter - if provided, only those uses that
  are dominated by the filter are replaced.

- Add missing print for attributes for dma_start, dma_wait op's.

- update the FlatAffineConstraints API

PiperOrigin-RevId: 221889223
2019-03-29 14:00:51 -07:00
Uday Bondhugula 6b52ac3aa6 Mark AllocOp as being free of side effects
PiperOrigin-RevId: 221863955
2019-03-29 14:00:37 -07:00
Alex Zinenko d030433443 ConvertToCFG: properly remap nested function attributes.
Array attributes can be nested and function attributes can appear anywhere at that
level.  They should be remapped to point to the generated CFGFunction after
ML-to-CFG conversion, similarly to plain function attributes.  Extract the
nested attribute remapping functionality from the Parser to Utils.  Extract out
the remapping function for individual Functions from the module remapping
function.  Use these new functions in the ML-to-CFG conversion pass and in the
parser.

PiperOrigin-RevId: 221510997
2019-03-29 13:57:58 -07:00
Nicolas Vasilache fefbf91314 [MLIR] Support for vectorizing operations.
This CL adds support for and a vectorization test to perform scalar 2-D addf.

The support extension notably comprises:
1. extend vectorizable test to exclude vector_transfer operations and
expose them to LoopAnalysis where they are needed. This is a temporary
solution until a concrete MLIR Op exists;
2. add some more functional sugar: mapKeys, apply and ScopeGuard (which became
relevant again);
3. fix improper shifting during coarsening;
4. rename unaligned load/store to vector_transfer_read/write and simplify the
design removing the unnecessary AllocOp that were introduced prematurely:
vector_transfer_read currently has the form:
  (memref<?x?x?xf32>, index, index, index) -> vector<32x64x256xf32>
vector_transfer_write currently has the form:
  (vector<32x64x256xf32>, memref<?x?x?xf32>, index, index, index) -> ()
5. adds vectorizeOperations which traverses the operations in a ForStmt and
rewrites them to their vector form;
6. add support for vector splat from a constant.

The relevant tests are also updated.

PiperOrigin-RevId: 221421426
2019-03-29 13:56:47 -07:00
Alex Zinenko cab24dc211 Homogenize branch instruction arguments.
Branch instruction arguments were defined and used inconsistently across
different instructions, in both the spec and the implementation.  In
particular, conditional and unconditional branch instructions were using
different syntax in the implementation.  This led to the IR we produce not
being accepted by the parser. Update the printer to use common syntax: `(`
list-of-SSA-uses `:` list-of-types `)`.  The motivation for choosing this
syntax as opposed to the one in the spec, `(` list-of-SSA-uses `)` `:`
list-of-types is double-fold.  First, it is tricky to differentiate the label
of the false branch from the type while parsing conditional branches (which is
what apparently motivated the implementation to diverge from the spec in the
first place).  Second, the ongoing convergence between terminator instructions
and other operations prompts for consistency between their operand list syntax.
After this change, the only remaining difference between the two is the use of
parentheses.  Update the comment of the parser that did not correspond to the
code.  Remove the unused isParenthesized argument from parseSSAUseAndTypeList.
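
An example of the common syntax after this change (hypothetical block and
value names):

```mlir
br bb1(%a, %b : index, f32)
cond_br %cond, bb1(%a, %b : index, f32), bb2
```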

Update the spec accordingly.  Note that the examples in the spec were _not_
using the EBNF defined a couple of lines above them, but were using the current
syntax.  Add a supplementary example of a branch to a basic block with multiple
arguments.

PiperOrigin-RevId: 221162655
2019-03-29 13:55:36 -07:00
Alex Zinenko 5a0d3d0204 Basic conversion of MLFunctions to CFGFunctions.
Implement a pass converting a subset of MLFunctions to CFGFunctions.  Currently
supports arbitrarily complex imperfect loop nests with statically constant
(i.e., not affine map) bounds filled with operations.  Does NOT support
branches and non-constant loop bounds.

Conversion is performed per-function and the function names are preserved to
avoid breaking any external references to the current module.  In-memory IR is
updated to point to the right functions in direct calls and constant loads.
This behavior is tested via a really hidden flag that enables function
renaming.

Inside each function, the control flow conversion is based on single-entry
single-exit regions, i.e. subgraphs of the CFG that have exactly one incoming
and exactly one outgoing edge.  Since an MLFunction must have a single "return"
statement as per MLIR spec, it constitutes an SESE region.  Individual
operations are appended to this region.  Control flow statements are
recursively converted into such regions that are concatenated with the current
region.  Bodies of the compound statement also form SESE regions, which allows
to nest control flow statements easily.  Note that SESE regions are not
materialized in the code.  It is sufficient to keep track of the end of the
region as the current instruction insertion point as long as all recursive
calls update the insertion point in the end.

The converter maintains a mapping between SSA values in ML functions and their
CFG counterparts.  The mapping is used to find the operands for each operation
and is updated to contain the results of each operation as the conversion
continues.

PiperOrigin-RevId: 221162602
2019-03-29 13:55:22 -07:00
MLIR Team b5424dd0cb Adds support for returning the direction of the dependence between memref accesses (distance/direction vectors).
Updates MemRefDependenceCheck to check and report on all memref access pairs at all loop nest depths.
Updates old and adds new memref dependence check tests.
Resolves multiple TODOs.

PiperOrigin-RevId: 220816515
2019-03-29 13:53:28 -07:00
Uday Bondhugula e0623d4b86 Automatic DMA generation for simple cases.
- constant bounded memory regions, static shapes, no handling of
  overlapping/duplicate regions (through union) for now; also only, load memory
  op's.
- add build methods for DmaStartOp, DmaWaitOp.
- move getMemoryRegion() into Analysis/Utils and expose it.
- fix addIndexSet, getMemoryRegion() post switch to exclusive upper bounds;
  update test cases for memref-bound-check and memref-dependence-check for
  exclusive bounds (missed in a previous CL)

PiperOrigin-RevId: 220729810
2019-03-29 13:53:14 -07:00
Uday Bondhugula 23ddd577ef Complete migration to exclusive upper bound
cl/220448963 had missed a part of the updates.

- while on this, clean up some of the test cases to use ops' custom forms.

PiperOrigin-RevId: 220675303
2019-03-29 13:52:17 -07:00
Nicolas Vasilache cde8248753 [MLIR] Make upper bound implementation exclusive
This CL implement exclusive upper bound behavior as per b/116854378.
A followup CL will update the semantics of the for loop.
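
For illustration, under exclusive upper bounds this loop now executes for
%i = 0, 1, ..., 7:

```mlir
for %i = 0 to 8 {
  "body"(%i) : (index) -> ()
}
```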

PiperOrigin-RevId: 220448963
2019-03-29 13:49:49 -07:00
Uday Bondhugula 6cd5d5c544 Introduce loop tiling code generation (hyper-rectangular case)
- simple perfectly nested band tiling with fixed tile sizes.
- only the hyper-rectangular case is handled, with other limitations of
  getIndexSet applying (constant loop bounds, etc.);  once
  the latter utility is extended, tiled code generation should become more
  general.
- Add FlatAffineConstraints::isHyperRectangular()
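
A minimal sketch of the generated structure for a 1-D band tiled by 32
(illustrative, constant bounds):

```mlir
for %i = 0 to 256 step 32 {
  for %ii = (d0) -> (d0)(%i) to (d0) -> (d0 + 32)(%i) {
    "body"(%ii) : (index) -> ()
  }
}
```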

PiperOrigin-RevId: 220324933
2019-03-29 13:49:05 -07:00
MLIR Team 239e328913 Adds MemRefDependenceCheck analysis pass, plus multiple dependence check tests.
Adds equality constraints to dependence constraint system for accesses using dims/symbols where the defining operation of the dim/symbol is a constant.

PiperOrigin-RevId: 219814740
2019-03-29 13:48:05 -07:00
Uday Bondhugula 74c62c8ce0 Complete memref bound checker for arbitrary affine expressions. Handle local
variables from mod's and div's when converting to flat form.

- propagate mod, floordiv, ceildiv / local variables constraint information
  when flattening affine expressions and converting them into flat affine
  constraints; resolve multiple TODOs.
- enables memref bound checker to work with arbitrary affine expressions
- update FlatAffineConstraints API with several new methods
- test/exercise functionality mostly through -memref-bound-check
- other analyses such as dependence tests, etc. should now be able to work in the
  presence of any affine composition of add, mul, floor, ceil, mod.
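
As a worked illustration of the local-variable mechanism:

```
// Flattening (d0 mod 4): introduce a local variable q (the quotient) with
//   0 <= d0 - 4 * q <= 3,
// so "d0 mod 4" becomes the linear expression d0 - 4 * q in the flat system.
```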

PiperOrigin-RevId: 219711806
2019-03-29 13:47:29 -07:00
MLIR Team f28e4df666 Adds a dependence check to test whether two accesses to the same memref access the same element.
- Builds access functions and iterations domains for each access.
- Builds dependence polyhedron constraint system which has equality constraints for equated access functions and inequality constraints for iteration domain loop bounds.
- Runs elimination on the dependence polyhedron to test if no dependence exists between the accesses.
- Adds a trivial LoopFusion transformation pass with a simple test policy to test dependence between accesses to the same memref in adjacent loops.
- The LoopFusion pass will be extended in subsequent CLs.
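
As a worked illustration of the dependence system (hypothetical accesses): for
a store to A[i] and a load from A[j], both in loops with trip count N, the
dependence polyhedron is roughly:

```
//   i - j = 0                          (equated access functions)
//   0 <= i <= N - 1,  0 <= j <= N - 1  (iteration domain bounds)
// The accesses are independent iff this system has no integer solution.
```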

PiperOrigin-RevId: 219630898
2019-03-29 13:47:13 -07:00
Nicolas Vasilache 21638dcda9 [MLIR] Extend vectorization to 2+-D patterns
This CL adds support for vectorization using more interesting 2-D and 3-D
patterns. Note in particular the fact that we match some pretty complex
imperfectly nested 2-D patterns with a quite minimal change to the
implementation: we just add a bit of recursion to traverse the matched
patterns and actually vectorize the loops.

For instance, vectorizing the following loop by 128:
```
for %i3 = 0 to %0 {
  %7 = affine_apply (d0) -> (d0)(%i3)
  %8 = load %arg0[%c0_0, %7] : memref<?x?xf32>
}
```

Currently generates:
```
#map0 = ()[s0] -> (s0 + 127)
#map1 = (d0) -> (d0)
for %i3 = 0 to #map0()[%0] step 128 {
  %9 = affine_apply #map1(%i3)
  %10 = alloc() : memref<1xvector<128xf32>>
  %11 = "n_d_unaligned_load"(%arg0, %c0_0, %9, %10, %c0) :
    (memref<?x?xf32>, index, index, memref<1xvector<128xf32>>, index) ->
    (memref<?x?xf32>, index, index, memref<1xvector<128xf32>>, index)
   %12 = load %10[%c0] : memref<1xvector<128xf32>>
}
```

The above is subject to evolution.

PiperOrigin-RevId: 219629745
2019-03-29 13:46:58 -07:00
Uday Bondhugula 8201e19e3d Introduce memref bound checking.
Introduce analysis to check memref accesses (in MLFunctions) for out of bound
ones. It works as follows:

$ mlir-opt -memref-bound-check test/Transforms/memref-bound-check.mlir

/tmp/single.mlir:10:12: error: 'load' op memref out of upper bound access along dimension #1
      %x = load %A[%idx#0, %idx#1] : memref<9 x 9 x i32>
           ^
/tmp/single.mlir:10:12: error: 'load' op memref out of lower bound access along dimension #1
      %x = load %A[%idx#0, %idx#1] : memref<9 x 9 x i32>
           ^
/tmp/single.mlir:10:12: error: 'load' op memref out of upper bound access along dimension #2
      %x = load %A[%idx#0, %idx#1] : memref<9 x 9 x i32>
           ^
/tmp/single.mlir:10:12: error: 'load' op memref out of lower bound access along dimension #2
      %x = load %A[%idx#0, %idx#1] : memref<9 x 9 x i32>
           ^
/tmp/single.mlir:12:12: error: 'load' op memref out of upper bound access along dimension #1
      %y = load %B[%idy] : memref<128 x i32>
           ^
/tmp/single.mlir:12:12: error: 'load' op memref out of lower bound access along dimension #1
      %y = load %B[%idy] : memref<128 x i32>
           ^
           ^
#map0 = (d0, d1) -> (d0, d1)
#map1 = (d0, d1) -> (d0 * 128 - d1)
mlfunc @test() {
  %0 = alloc() : memref<9x9xi32>
  %1 = alloc() : memref<128xi32>
  for %i0 = -1 to 9 {
    for %i1 = -1 to 9 {
      %2 = affine_apply #map0(%i0, %i1)
      %3 = load %0[%2#0, %2#1] : memref<9x9xi32>
      %4 = affine_apply #map1(%i0, %i1)
      %5 = load %1[%4] : memref<128xi32>
    }
  }
  return
}

- Improves productivity while manually / semi-automatically developing MLIR for
  testing / prototyping; also provides an indirect way to catch errors in
  transformations.

- This pass is an easy way to test the underlying affine analysis
  machinery including low level routines.

Some code (in getMemoryRegion()) borrowed from @andydavis cl/218263256.

While on this:

- create mlir/Analysis/Passes.h; move Pass.h up from mlir/Transforms/ to mlir/

- fix a bug in AffineAnalysis.cpp::toAffineExpr

TODO: extend to non-constant loop bounds (straightforward). Will transparently
work for all accesses once floordiv, mod, ceildiv are supported in the
AffineMap -> FlatAffineConstraints conversion.
PiperOrigin-RevId: 219397961
2019-03-29 13:46:08 -07:00
Nicolas Vasilache af7f56fdf8 [MLIR] Implement 1-D vectorization for fastest varying load/stores
This CL is a first in a series that implements early vectorization of
increasingly complex patterns. In particular, early vectorization will support
arbitrary loop nesting patterns (both perfectly and imperfectly nested), at
arbitrary depths in the loop tree.

This first CL builds the minimal support for applying 1-D patterns.
It relies on an unaligned load/store op abstraction that can be inplemented
differently on different HW.
Future CLs will support higher dimensional patterns, but 1-D patterns already
exhibit interesting properties.
In particular, we want to separate pattern matching (i.e. legality both
structural and dependency analysis based), from profitability analysis, from
application of the transformation.
As a consequence patterns may intersect and we need to verify that a pattern
can still apply by the time we get to applying it.

A non-greedy analysis on profitability that takes into account pattern
intersection is left for future work.

Additionally the CL makes the following cleanups:
1. the matches method now returns a value, not a reference;
2. added comments about the MLFunctionMatcher and MLFunctionMatches usage by
value;
3. added size and empty methods to matches;
4. added a negative vectorization test with a conditional; this exhibited a
bug in the iterators. Iterators now return nullptr if the underlying storage
is nullptr.

PiperOrigin-RevId: 219299489
2019-03-29 13:44:26 -07:00
Lei Zhang 582b0761c6 Use matcher sugars for canonicalization pattern matching
- Added a mechanism for specifying pattern matching more concisely like LLVM.
- Added support for canonicalization of addi/muli over vector/tensor splat
- Added ValueType to Attribute class hierarchy
- Allowed creating constant splat

PiperOrigin-RevId: 219149621
2019-03-29 13:43:44 -07:00
Uday Bondhugula 1ec77cecf2 FourierMotzkinEliminate trivial bug fix
PiperOrigin-RevId: 219148982
2019-03-29 13:43:30 -07:00
Lei Zhang 60b5184c8b Canonicalize muli(x, 1) into x
PiperOrigin-RevId: 218885877
2019-03-29 13:42:01 -07:00
Alex Zinenko aae372ecb8 Drop trivial identity affine mappings in MemRef construction.
As per MLIR spec, the absence of affine maps in MemRef type is interpreted as
an implicit identity affine map.  Therefore, MemRef types declared with
explicit or implicit identity map should be considered equal at the MemRefType
level.  During MemRefType construction, drop trivial identity affine map
compositions.  A trivial identity composition consists of a single unbounded
identity map.  It is unclear whether affine maps should be composed in-place to
a single map during MemRef type construction, so non-trivial compositions that
could have been simplified to an identity are NOT removed.  We chose to drop
the trivial identity map rather than inject it in places that assume it is
present implicitly, because that makes the code simpler by reducing boilerplate;
identity mappings are obvious defaults.
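
For illustration, these two types now compare equal, since the explicit map is
a trivial identity composition:

```mlir
memref<16x32xf32, (d0, d1) -> (d0, d1)>
memref<16x32xf32>
```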

Update tests that were checking for the presence of trivial identity map
compositions in the outputs.

PiperOrigin-RevId: 218862454
2019-03-29 13:41:47 -07:00
Chris Lattner 967d934180 Fix two issues:
1) We incorrectly reassociated non-reassociative operations like subi, causing
    miscompilations.
 2) When constant folding, we didn't add users of the new constant back to the
    worklist for reprocessing, causing us to miss some cases (pointed out by
    Uday).

The code for #2 is gross, but I'll add the new APIs in a followup patch.

PiperOrigin-RevId: 218803984
2019-03-29 13:40:35 -07:00
Uday Bondhugula 988ce3387f Change sigil for integer set: @@ -> #
PiperOrigin-RevId: 218786684
2019-03-29 13:40:21 -07:00
MLIR Team 13f6cc0187 Run GCD test before elimination. Adds test case with rational solutions, but no integer solutions.
PiperOrigin-RevId: 218772332
2019-03-29 13:39:34 -07:00
Uday Bondhugula 80610c2f49 Introduce Fourier-Motzkin variable elimination + other cleanup/support
- Introduce Fourier-Motzkin variable elimination to eliminate a dimension from
  a system of linear equalities/inequalities. Update isEmpty to use this.
  Since FM is only exact on rational/real spaces, an emptiness check based on
  this is guaranteed to be exact whenever it says the underlying set is empty;
  if it says it's not empty, there may still be no integer points in it.
  Also, supports a version that computes "dark shadows".

- Test this by checking for "always false" conditionals in if statements.

- Unique IntegerSet's that are small (few constraints, few variables). This
  basically means the canonical empty set and other small sets that are
  likely commonly used get uniqued; allows checking for the canonical empty set
  by pointer. IntegerSet::kUniquingThreshold gives the threshold constraint size
  for uniquing.

- rename simplify-affine-expr -> simplify-affine-structures

Other cleanup

- IntegerSet::numConstraints, AffineMap::numResults are no longer needed;
  remove them.
- add copy assignment operators for AffineMap, IntegerSet.
- rename Invalid() -> Null() on AffineExpr, AffineMap, IntegerSet
- Misc cleanup for FlatAffineConstraints API
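
As a small worked example of the Fourier-Motzkin elimination step described in
the first bullet:

```
// Eliminating x from { x >= y, x >= 0, x <= 5 } pairs each lower bound with
// each upper bound, yielding y <= 5 and 0 <= 5: a system over y alone that
// is exact over the rationals.
```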

PiperOrigin-RevId: 218690456
2019-03-29 13:38:24 -07:00
MLIR Team 5413239350 Adds Gaussian Elimination to FlatAffineConstraints.
- Adds FlatAffineConstraints::isEmpty method to test if there are no solutions to the system.
- Adds GCD test check if equality constraints have no solution.
- Adds unit test cases.

PiperOrigin-RevId: 218546319
2019-03-29 13:38:10 -07:00
Chris Lattner bd01f9541f Teach canonicalize pass to unique and hoist constants to the entry block. This
is a straight-forward change, but required adding missing moveBefore() methods
on operations (requiring moving some traits around to make C++ happy).  This
also fixes a constness issue with the getBlock/getFunction() methods on
Instruction, and adds a missing getFunction() method on MLFuncBuilder.

PiperOrigin-RevId: 218523905
2019-03-29 13:36:59 -07:00
Chris Lattner 301f83f906 Implement shape folding in the canonicalization pass:
- Add a few canonicalization patterns to fold memref_cast into
   load/store/dealloc.
 - Canonicalize alloc(constant) into an alloc with a constant shape followed by
   a cast.
 - Add a new PatternRewriter::updatedRootInPlace API to make this more convenient.
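
A hedged sketch of the alloc(constant) canonicalization (illustrative values):

```mlir
%c16 = constant 16 : index
%0 = alloc(%c16) : memref<?xf32>
// becomes an alloc with a constant shape followed by a cast:
%1 = alloc() : memref<16xf32>
%2 = memref_cast %1 : memref<16xf32> to memref<?xf32>
```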

SimplifyAllocConst and the testcase are heavily based on Uday's implementation work, just
in a different framework.

PiperOrigin-RevId: 218361237
2019-03-29 13:36:31 -07:00
Chris Lattner a03051b9c4 Add a pattern (x+0) -> x, generalize Canonicalize to CFGFunc's, address a few TODOs,
and add some casting support to Operation.

PiperOrigin-RevId: 218219340
2019-03-29 13:35:33 -07:00
Chris Lattner 7850258c49 Introduce a new Operation::erase helper to generalize some code in
the pattern matcher / canonicalizer, and rename existing eraseFromBlock methods
to align with it.

PiperOrigin-RevId: 218104455
2019-03-29 13:34:51 -07:00
Uday Bondhugula 18e666702c Generalize / improve DMA transfer overlap; nested and multiple DMA support; resolve
multiple TODOs.

- replace the fake test pass (that worked on just the first loop in the
  MLFunction) to perform DMA pipelining on all suitable loops.
- nested DMAs work now (DMAs in an outer loop, more DMAs in nested inner loops)
- fix bugs / assumptions: correctly copy memory space and elemental type of source
  memref for double buffering.
- correctly identify matching start/finish statements, handle multiple DMAs per
  loop.
- introduce dominates/properlyDominates utilities for MLFunction statements.
- move checkDominancePreservationOnShifts to LoopAnalysis.h; rename it
  getShiftValidity
- refactor getContainingStmtPos -> findAncestorStmtInBlock - move into
  Analysis/Utils.h; has two users.
- other improvements / cleanup for related API/utilities
- add size argument to dma_wait - for nested DMAs or in general, it makes it
  easy to obtain the size to use when lowering the dma_wait since we wouldn't
  want to identify the matching dma_start, and more importantly, in general/in the
  future, there may not always be a dma_start dominating the dma_wait.
- add debug information in the pass

PiperOrigin-RevId: 217734892
2019-03-29 13:32:28 -07:00
Nicolas Vasilache 3013dadb7c [MLIR] Basic infrastructure for vectorization test
This CL implements a very simple loop vectorization **test** and the basic
infrastructure to support it.

The test simply consists of:
1. matching the loops in the MLFunction and all the Load/Store operations
nested under the loop;
2. testing whether all the Load/Store are contiguous along the innermost
memory dimension along that particular loop. If any reference is
non-contiguous (i.e. the ForStmt SSAValue appears in the expression), then
the loop is not vectorizable.

The simple test above can gradually be extended with more interesting
behaviors to account for the fact that a layout permutation may exist that
enables contiguity etc. All these will come in due time but it is worthwhile
noting that the test already supports detection of outer-vectorizable loops.

In implementing this test, I also added a recursive MLFunctionMatcher and some
sugar that can capture patterns
such as `auto gemmLike = Doall(Doall(Red(LoadStore())))` and allows iterating
on the matched IR structures. For now it just uses in-order traversal, but
post-order DFS will be useful in the future once IR rewrites start occurring.

One may note that the memory management design decision follows a different
pattern from MLIR. After evaluating different designs and how they quickly
increase cognitive overhead, I decided to opt for the simplest solution in my
view: a class-wide (threadsafe) RAII context.

This way, a pass that needs MLFunctionMatcher can just have its own locally
scoped BumpPtrAllocator and everything is cleaned up when the pass is destroyed.
If passes are expected to have a longer lifetime, then the contexts can easily
be scoped inside the runOnMLFunction call and storage lifetime reduced.
Lastly, whatever the scope of threading (module, function, pass), this is
expected to also be future-proof wrt concurrency (but this is a detail atm).

PiperOrigin-RevId: 217622889
2019-03-29 13:32:13 -07:00
Chris Lattner 80e884a9f8 Add constant folding and binary operator reassociation to the canonicalize
pass, build up the worklist infra in anticipation of improving the pattern
matcher to match more than one node.

PiperOrigin-RevId: 217330579
2019-03-29 13:31:44 -07:00
MLIR Team 0114e232d8 Adds method to AffineApplyOp which forward substitutes its results into any of its users which are also AffineApplyOps.
Updates ComposeAffineMaps test pass to use this method.
Updates affine map composition test cases to handle the new pass, which can be reused when this method is used in a future instruction combine pass.

PiperOrigin-RevId: 217163351
2019-03-29 13:30:49 -07:00
Uday Bondhugula 86eac4618c Create private exclusive / single use affine computation slice for an op stmt.
- add util to create a private / exclusive / single use affine
  computation slice for an op stmt (see method doc comment); a single
  multi-result affine_apply op is prepended to the op stmt to provide all
  results needed for its operands as a function of loop iterators and symbols.
- use it for DMA pipelining (to create private slices for DMA start stmt's);
  resolve TODOs/feature request (b/117159533)
- move createComposedAffineApplyOp to Transforms/Utils; free it from taking a
  memref as input / generalize it.
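
An illustrative shape of such a slice (hypothetical op and maps):

```mlir
// A single multi-result affine_apply prepended to the op stmt provides all
// of its index operands as a function of loop IVs:
%2 = affine_apply (d0) -> (d0 mod 2, d0 floordiv 2) (%i)
"compute"(%2#0, %2#1) : (index, index) -> ()
```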

PiperOrigin-RevId: 216926818
2019-03-29 13:29:21 -07:00
Chris Lattner 9e3b928e32 Implement a super sketched out pattern match/rewrite framework and a sketched
out canonicalization pass to drive it, and a simple (x-x) === 0 pattern match
as a test case.

There is a tremendous number of improvements that need to land, and the
matcher/rewriter and patterns will be split out of this file, but this is a
starting point.

PiperOrigin-RevId: 216788604
2019-03-29 13:29:07 -07:00
Uday Bondhugula 82e55750d2 Add target independent standard DMA ops: dma.start, dma.wait
Add target independent standard DMA ops: dma.start, dma.wait. Update pipeline
data transfer to use these to detect DMA ops.

While on this
- return failure from mlir-opt::performActions if a pass generates invalid output
- improve error message for verify 'n' operand traits

PiperOrigin-RevId: 216429885
2019-03-29 13:26:10 -07:00
MLIR Team fe490043b0 Affine map composition.
*) Implements AffineValueMap forward substitution for AffineApplyOps.
*) Adds ComposeAffineMaps transformation pass, which composes affine maps for all loads/stores in an MLFunction.
*) Adds multiple affine map composition tests.

PiperOrigin-RevId: 216216446
2019-03-29 13:24:59 -07:00
Chris Lattner d2d89cbc19 Rename affineint type to index type. The name 'index' may not be perfect, but is better than the old name. Here is some justification:
1) affineint (as it is named) is not a type suitable for general computation (e.g. the multiply/adds in an integer matmul).  It has undefined width and is undefined on overflow.  They are used as the indices for forstmt because they are intended to be used as indexes inside the loop.

2) It can be used in both cfg and ml functions, and in cfg functions.  As you mention, “symbols” are not affine, and we use affineint values for symbols.

3) Integers aren’t affine, the algorithms applied to them can be. :)

4) The only suitable use for affineint in MLIR is for indexes and dimension sizes (i.e. the bounds of those indexes).

PiperOrigin-RevId: 216057974
2019-03-29 13:24:16 -07:00
Uday Bondhugula d18ae9e2c7 Constant folding for loop bounds.
- Fold the lower/upper bound of a loop to a constant whenever the result of the
  application of the bound's affine map on the operand list yields a constant.

- Update/complete 'for' stmt's API to set lower/upper bounds with operands.
  Resolve TODOs for ForStmt::set{Lower,Upper}Bound.

- Moved AffineExprConstantFolder into AffineMap.cpp and added
  AffineMap::constantFold to be used by both AffineApplyOp and
  ForStmt::constantFoldBound.
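
For illustration (hypothetical snippet), applying the bound map to a constant
operand folds the bound:

```mlir
%c8 = constant 8 : affineint
for %i = 0 to (d0) -> (d0 + 8) (%c8) {
}
// folds to:
for %i = 0 to 16 {
}
```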

PiperOrigin-RevId: 215997346
2019-03-29 13:24:01 -07:00
Chris Lattner 6822c4e29c Implement support for constant folding operations even when their operands are
not all constant.  Implement support for folding dim, x*0, and affine_apply.

PiperOrigin-RevId: 215917432
2019-03-29 13:23:32 -07:00
Uday Bondhugula 6cfdb756b1 Introduce memref replacement/rewrite support: to replace an existing memref
with a new one (of a potentially different rank/shape) with an optional index
remapping.

- introduce Utils::replaceAllMemRefUsesWith
- use this for DMA double buffering

(This CL also adds a few temporary utilities / code that will be done away with
once:
1) abstract DMA op's are added
2) memref dereferencing side-effect / trait is available on op's
3) b/117159533 is resolved (memref index computation slices).
PiperOrigin-RevId: 215831373
2019-03-29 13:23:19 -07:00
Feng Liu 7d016fd352 Add support to Add, Sub, Mul for both Integer and Float types.
The new operations are registered and also the const folding of them are implemented.

PiperOrigin-RevId: 215575999
2019-03-29 13:21:40 -07:00
Uday Bondhugula 041817a45e Introduce loop body skewing / loop pipelining / loop shifting utility.
- loopBodySkew shifts statements of a loop body by stmt-wise delays, and is
  typically meant to be used to:
  - allow overlap of non-blocking start/wait until completion operations with
    other computation
  - allow shifting of statements (for better register
    reuse/locality/parallelism)
  - software pipelining (when applied to the innermost loop)
- an additional argument specifies whether to unroll the prologue and epilogue.
- add method to check SSA dominance preservation.
- add a fake loop pipeline pass to test this utility.

Sample input/output are below. While on this, fix/add the following:

- fix minor bug in getAddMulPureAffineExpr
- add additional builder methods for common affine map cases
- fix const_operand_iterator's for ForStmt, etc. When there is no such thing
  as 'const MLValue', the iterator shouldn't be returning const MLValue's.
  Returning MLValue is const correct.

Sample input/output examples:

1) Simplest case: shift second statement by one.

Input:

for %i = 0 to 7 {
  %y = "foo"(%i) : (affineint) -> affineint
  %x = "bar"(%i) : (affineint) -> affineint
}

Output:

#map0 = (d0) -> (d0 - 1)
mlfunc @loop_nest_simple1() {
  %c8 = constant 8 : affineint
  %c0 = constant 0 : affineint
  %0 = "foo"(%c0) : (affineint) -> affineint
  for %i0 = 1 to 7 {
    %1 = "foo"(%i0) : (affineint) -> affineint
    %2 = affine_apply #map0(%i0)
    %3 = "bar"(%2) : (affineint) -> affineint
  }
  %4 = affine_apply #map0(%c8)
  %5 = "bar"(%4) : (affineint) -> affineint
  return
}

2) DMA overlap: shift dma.wait and compute by one.

Input
  for %i = 0 to 7 {
    %pingpong = affine_apply (d0) -> (d0 mod 2) (%i)
    "dma.enqueue"(%pingpong) : (affineint) -> affineint
    %pongping = affine_apply (d0) -> (d0 mod 2) (%i)
    "dma.wait"(%pongping) : (affineint) -> affineint
    "compute1"(%pongping) : (affineint) -> affineint
  }

Output

#map0 = (d0) -> (d0 mod 2)
#map1 = (d0) -> (d0 - 1)
#map2 = ()[s0] -> (s0 + 7)
mlfunc @loop_nest_dma() {
  %c8 = constant 8 : affineint
  %c0 = constant 0 : affineint
  %0 = affine_apply #map0(%c0)
  %1 = "dma.enqueue"(%0) : (affineint) -> affineint
  for %i0 = 1 to 7 {
    %2 = affine_apply #map0(%i0)
    %3 = "dma.enqueue"(%2) : (affineint) -> affineint
    %4 = affine_apply #map1(%i0)
    %5 = affine_apply #map0(%4)
    %6 = "dma.wait"(%5) : (affineint) -> affineint
    %7 = "compute1"(%5) : (affineint) -> affineint
  }
  %8 = affine_apply #map1(%c8)
  %9 = affine_apply #map0(%8)
  %10 = "dma.wait"(%9) : (affineint) -> affineint
  %11 = "compute1"(%9) : (affineint) -> affineint
  return
}

3) With arbitrary affine bound maps:

Shift last two statements by two.

Input:

  for %i = %N to ()[s0] -> (s0 + 7)()[%N] {
    %y = "foo"(%i) : (affineint) -> affineint
    %x = "bar"(%i) : (affineint) -> affineint
    %z = "foo_bar"(%i) : (affineint) -> (affineint)
    "bar_foo"(%i) : (affineint) -> (affineint)
  }

Output

#map0 = ()[s0] -> (s0 + 1)
#map1 = ()[s0] -> (s0 + 2)
#map2 = ()[s0] -> (s0 + 7)
#map3 = (d0) -> (d0 - 2)
#map4 = ()[s0] -> (s0 + 8)
#map5 = ()[s0] -> (s0 + 9)

  for %i0 = %arg0 to #map0()[%arg0] {
    %0 = "foo"(%i0) : (affineint) -> affineint
    %1 = "bar"(%i0) : (affineint) -> affineint
  }
  for %i1 = #map1()[%arg0] to #map2()[%arg0] {
    %2 = "foo"(%i1) : (affineint) -> affineint
    %3 = "bar"(%i1) : (affineint) -> affineint
    %4 = affine_apply #map3(%i1)
    %5 = "foo_bar"(%4) : (affineint) -> affineint
    %6 = "bar_foo"(%4) : (affineint) -> affineint
  }
  for %i2 = #map4()[%arg0] to #map5()[%arg0] {
    %7 = affine_apply #map3(%i2)
    %8 = "foo_bar"(%7) : (affineint) -> affineint
    %9 = "bar_foo"(%7) : (affineint) -> affineint
  }

4) Shift the first statement by zero, the second by one, and the third by two:

  for %i = 0 to 7 {
    %y = "foo"(%i) : (affineint) -> affineint
    %x = "bar"(%i) : (affineint) -> affineint
    %z = "foobar"(%i) : (affineint) -> affineint
  }

#map0 = (d0) -> (d0 - 1)
#map1 = (d0) -> (d0 - 2)
#map2 = ()[s0] -> (s0 + 7)

  %c9 = constant 9 : affineint
  %c8 = constant 8 : affineint
  %c1 = constant 1 : affineint
  %c0 = constant 0 : affineint
  %0 = "foo"(%c0) : (affineint) -> affineint
  %1 = "foo"(%c1) : (affineint) -> affineint
  %2 = affine_apply #map0(%c1)
  %3 = "bar"(%2) : (affineint) -> affineint
  for %i0 = 2 to 7 {
    %4 = "foo"(%i0) : (affineint) -> affineint
    %5 = affine_apply #map0(%i0)
    %6 = "bar"(%5) : (affineint) -> affineint
    %7 = affine_apply #map1(%i0)
    %8 = "foobar"(%7) : (affineint) -> affineint
  }
  %9 = affine_apply #map0(%c8)
  %10 = "bar"(%9) : (affineint) -> affineint
  %11 = affine_apply #map1(%c8)
  %12 = "foobar"(%11) : (affineint) -> affineint
  %13 = affine_apply #map1(%c9)
  %14 = "foobar"(%13) : (affineint) -> affineint

5) SSA dominance would be violated; no shifting is performed if a shift is
specified for the second statement:

  for %i = 0 to 7 {
    %x = "foo"(%i) : (affineint) -> affineint
    "bar"(%x) : (affineint) -> affineint
  }

PiperOrigin-RevId: 214975731
2019-03-29 13:21:26 -07:00
Uday Bondhugula 591fa9698e Change behavior of loopUnrollFull with unroll factor 1
Using loopUnrollFull with unroll factor 1 should promote the loop body as
opposed to doing nothing.
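
For illustration (a hypothetical input/output pair in the syntax used
throughout these commits), a single-iteration loop such as

  for %i = 0 to 0 {
    %x = "foo"(%i) : (affineint) -> affineint
  }

is now promoted to

  %c0 = constant 0 : affineint
  %0 = "foo"(%c0) : (affineint) -> affineint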

PiperOrigin-RevId: 214812126
2019-03-29 13:20:59 -07:00
Nicolas Vasilache 54e5b4b4c0 [MLIR] Fix AsmPrinter for short-hand bound notation
This CL restricts shorthand notation printing to only the bounds that can
be roundtripped unambiguously; i.e.:
1. ()[]->(%some_cst) ()[]
2. ()[s0]->(s0) ()[%some_symbol]

Upon inspection, it turns out that the constant case was lossy, so this CL
also updates it.
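
For illustration (an assumed rendering, not taken from the CL), case 2 covers
a bound whose map is ()[s0] -> (s0) applied to a single symbol, which can
still be printed in shorthand:

  for %i0 = 0 to ()[s0] -> (s0)()[%N]   // long form
  for %i0 = 0 to %N                     // shorthand form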

Note however that fixing this issue exposes a potential issue in unroll.mlir.
L488 exhibits a map ()[s0] -> (1)()[%arg0] which could be simplified down to
()[]->(1)()[].
This does not seem like a bug but maybe an undesired complexity in the maps
generated by unrolling.
bondhugula@, care to take a look?

PiperOrigin-RevId: 214531410
2019-03-29 13:19:04 -07:00
MLIR Team 99188b9d98 Adds constant folding hook for AffineApplyOp.
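
An illustrative sketch of the fold (hypothetical): an affine_apply whose
operands are all constants can be folded to a constant:

  %c5 = constant 5 : affineint
  %0 = affine_apply (d0) -> (d0 + 1) (%c5)   // folds %0 to constant 6 : affineint
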
PiperOrigin-RevId: 214287780
2019-03-29 13:18:19 -07:00
Chris Lattner 82eb284a53 Implement support for constant folding operations and a simple constant folding
optimization pass:

 - Give the ability for operations to implement a constantFold hook (a simple
   one for single-result ops as well as general support for multi-result ops).
 - Implement folding support for constant and addf (see the sketch after this
   list).
 - Implement support in AbstractOperation and Operation to make this usable by
   clients.
 - Implement a very simple constant folding pass that does top down folding on
   CFG and ML functions, with a testcase that exercises all the above stuff.
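
A minimal sketch of the addf fold (hypothetical; the constant/addf spellings
are assumed):

  %a = constant 1.0 : f32
  %b = constant 2.0 : f32
  %c = addf %a, %b : f32   // the pass replaces %c with constant 3.0 : f32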

Random cleanups:
 - Improve the build APIs for ConstantOp.
 - Stop passing "-o -" to mlir-opt in the testsuite, since that is the default.

PiperOrigin-RevId: 213749809
2019-03-29 13:16:33 -07:00
Uday Bondhugula ab4797229c Extend loop unroll/unroll-and-jam to affine bounds + refactor related code.
- extend loop unroll-jam, similar to loop unroll, to affine bounds
- extend both loop unroll/unroll-jam to generate a cleanup loop when the trip
  count is not a multiple of the unroll factor.
- extend promotion of single iteration loops to work with affine bounds
- fix typo bugs in loop unroll
- refactor common code b/w loop unroll and loop unroll-jam
- move prototypes of non-pass transforms to LoopUtils.h
- add additional builder methods.
- introduce loopUnrollUpTo(factor) to unroll by either the factor or the trip
  count, whichever is less (see the sketch after this list).
- remove Statement::isInnermost (not used for now; it will come back at the
  right place and in the right form later)
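
A sketch of loopUnrollUpTo (a hypothetical input/output pair): with factor 4
but a constant trip count of only 2, the loop is unrolled by 2, the lesser of
the two:

  for %i = 0 to 1 {
    %x = "foo"(%i) : (affineint) -> affineint
  }

becomes

  for %i0 = 0 to 1 step 2 {
    %0 = "foo"(%i0) : (affineint) -> affineint
    %1 = affine_apply (d0) -> (d0 + 1) (%i0)
    %2 = "foo"(%1) : (affineint) -> affineint
  }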

PiperOrigin-RevId: 213471227
2019-03-29 13:15:06 -07:00
Uday Bondhugula 64812a56c7 Extend getConstantTripCount to deal with a larger subset of loop bounds; make loop
unroll/unroll-and-jam more powerful; add additional affine expr builder methods

- use the previously added analysis/simplification to infer trip counts that
  are a multiple of the unroll factor, making loop unroll/unroll-and-jam more
  general.

- for loop unroll, support bounds that are single-result affine maps with the
  same set of operands. For unknown loop bounds, loop unroll will now work as
  long as the trip count can be determined to be a multiple of the unroll
  factor.

- extend getConstantTripCount to deal with single-result affine maps with the
  same operands; move it to mlir/Analysis/LoopAnalysis.cpp

- add additional builder utility methods for affine expr arithmetic
  (difference, mod/floordiv/ceildiv w.r.t. positive constants); simplify code
  to use the utility methods.

- move affine analysis routines to AffineAnalysis.cpp/.h from
  AffineStructures.cpp/.h.

- Rename LoopUnrollJam to LoopUnrollAndJam to match class name.

- add an additional simplification for simplifyFloorDiv, simplifyCeilDiv

- Rename AffineMap::getNumOperands() to getNumInputs: an affine map by itself
  does not have operands. Operands are passed to it through affine_apply, loop
  bounds, if conditions, etc., and are stored in the latter.

This should be sufficiently powerful for now as far as unroll/unroll-and-jam
go for TPU code generation; we can now move on to other
analyses/transformations.

Loop nests like these are now unrolled without any cleanup loop being generated.

  for %i = 1 to 100 {
    // unroll factor 4: no cleanup loop will be generated.
    for %j = (d0) -> (d0) (%i) to (d0) -> (5*d0 + 3) (%i) {
      %x = "foo"(%j) : (affineint) -> i32
    }
  }

  for %i = 1 to 100 {
    // unroll factor 4: no cleanup loop will be generated.
    for %j = (d0) -> (d0) (%i) to (d0) -> (d0 - d0 mod 4 - 1) (%i) {
      %y = "foo"(%j) : (affineint) -> i32
    }
  }

  for %i = 1 to 100 {
    for %j = (d0) -> (d0) (%i) to (d0) -> (d0 + 128) (%i) {
      %x = "foo"() : () -> i32
    }
  }

TODO(bondhugula): extend this to LoopUnrollAndJam as well in the next CL (with minor
changes).

PiperOrigin-RevId: 212661212
2019-03-29 13:13:00 -07:00
Uday Bondhugula 3bae041e5d Add utility to promote single iteration loops. Add methods for getting constant
loop counts. Improve / refactor loop unroll / loop unroll and jam.

- add utility to remove single iteration loops.
- use this utility to promote single iteration loops after unroll/unroll-and-jam
- use loopUnrollByFactor for loopUnrollFull and remove most of the latter.
- add methods for getting constant loop trip count

PiperOrigin-RevId: 212039569
2019-03-29 13:11:21 -07:00
Uday Bondhugula d5416f299e Complete AffineExprFlattener based simplification for floordiv/ceildiv.
- handle floordiv/ceildiv in AffineExprFlattener; update the simplification to
  work even if mod/floordiv/ceildiv expressions appearing in the tree can't be
  eliminated (see the example after this list).
- refactor the flattening / analysis to move it out of lib/Transforms/
- fix MutableAffineMap::isMultipleOf
- add AffineBinaryOpExpr:getAdd/getMul/... utility methods
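
An example of the kind of simplification flattening enables (illustrative):
the flattener introduces a local q = d0 floordiv 4, rewrites d0 mod 4 as
d0 - 4 * q, and the terms cancel:

  #map = (d0) -> ((d0 floordiv 4) * 4 + d0 mod 4)

simplifies to

  #map = (d0) -> (d0)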

PiperOrigin-RevId: 211540536
2019-03-29 13:09:18 -07:00
Uday Bondhugula 0122a99cbb Affine expression analysis and simplification.
Outside of IR/
- simplify a MutableAffineMap by flattening the affine expressions
- add a simplify affine expression pass that uses this analysis
- update the FlatAffineConstraints API (to be used in the next CL)

In IR:
- add isMultipleOf and getKnownGCD for AffineExpr, and make the in-IR
  simplification of simplifyMod simpler and more powerful (see the example
  after this list).
- rename the AffineExpr visitor methods to distinguish b/w visiting and
  walking, and to simplify API names based on context.
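
For example (illustrative), since every term of the left operand below is a
multiple of 16, isMultipleOf detects this and the improved simplifyMod can
fold the expression:

  (d0 * 32 + d1 * 16) mod 16   simplifies to   0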

The next CL will use some of these for the loop unrolling/unroll-jam to make
the detection for the need of cleanup loop powerful/non-trivial.

A future CL will finally move this simplification to FlatAffineConstraints to
make it more powerful. For example, currently, even if a mod expr appearing in
a part of the expression tree can't be simplified, the whole thing won't be
simplified.

PiperOrigin-RevId: 211012256
2019-03-29 13:07:44 -07:00
Uday Bondhugula e9fb4b492d Introduce loop unroll jam transformation.
- for test purposes, the unroll-jam pass unroll-jams the first outermost loop
  (see the sketch below).

While on this:
- fix StmtVisitor to allow overriding the function that walks the children of
  a stmt.
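
A sketch of the transformation (a hypothetical input/output pair),
unroll-jamming the outer loop by a factor of 2:

  for %i = 0 to 3 {
    for %j = 0 to 7 {
      %x = "foo"(%i, %j) : (affineint, affineint) -> affineint
    }
  }

becomes

  #map0 = (d0) -> (d0 + 1)
  for %i0 = 0 to 3 step 2 {
    for %i1 = 0 to 7 {
      %0 = "foo"(%i0, %i1) : (affineint, affineint) -> affineint
      %1 = affine_apply #map0(%i0)
      %2 = "foo"(%1, %i1) : (affineint, affineint) -> affineint
    }
  }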

PiperOrigin-RevId: 210644813
2019-03-29 13:07:30 -07:00
Uday Bondhugula 00bed4bd99 Extend loop unrolling to unroll by a given factor; add builder for affine
apply op.

- add builder for AffineApplyOp (first one for an operation that has
  non-zero operands)
- add support for loop unrolling by a given factor; uses the affine apply op
  builder.

While on this, change 'step' of ForStmt to be 'unsigned' instead of
AffineConstantExpr *. Add setters for ForStmt lb, ub, step.

Sample Input:

// CHECK-LABEL: mlfunc @loop_nest_unroll_cleanup() {
mlfunc @loop_nest_unroll_cleanup() {
  for %i = 1 to 100 {
    for %j = 0 to 17 {
      %x = "addi32"(%j, %j) : (affineint, affineint) -> i32
      %y = "addi32"(%x, %x) : (i32, i32) -> i32
    }
  }
  return
}

Output:

$ mlir-opt -loop-unroll -unroll-factor=4 /tmp/single2.mlir
#map0 = (d0) -> (d0 + 1)
#map1 = (d0) -> (d0 + 2)
#map2 = (d0) -> (d0 + 3)
mlfunc @loop_nest_unroll_cleanup() {
  for %i0 = 1 to 100 {
    for %i1 = 0 to 17 step 4 {
      %0 = "addi32"(%i1, %i1) : (affineint, affineint) -> i32
      %1 = "addi32"(%0, %0) : (i32, i32) -> i32
      %2 = affine_apply #map0(%i1)
      %3 = "addi32"(%2, %2) : (affineint, affineint) -> i32
      %4 = affine_apply #map1(%i1)
      %5 = "addi32"(%4, %4) : (affineint, affineint) -> i32
      %6 = affine_apply #map2(%i1)
      %7 = "addi32"(%6, %6) : (affineint, affineint) -> i32
    }
    for %i2 = 16 to 17 {
      %8 = "addi32"(%i2, %i2) : (affineint, affineint) -> i32
      %9 = "addi32"(%8, %8) : (i32, i32) -> i32
    }
  }
  return
}

PiperOrigin-RevId: 209676220
2019-03-29 13:03:38 -07:00
Uday Bondhugula 98a24881d3 ShortLoopUnroll - bug fix.
Collect loops through a post-order walk instead of a pre-order walk so that
inner loops are collected before the outer loops surrounding them.

Add a complex test case.

PiperOrigin-RevId: 209041057
2019-03-29 13:01:22 -07:00
Chris Lattner d6c4c748d7 Escape and unescape strings in the parser and printer so they can roundtrip,
print floating point in a structured form that we know can round trip,
enumerate attributes in the visitor so we print affine mapping attributes
symbolically (the majority of the testcase updates).

We still have an issue where the hexadecimal floating point syntax is reparsed
as an integer, but that can evolve in subsequent patches.

PiperOrigin-RevId: 208828876
2019-03-29 13:00:05 -07:00
Uday Bondhugula d8490d8d4f Loop unrolling pass update
- fix/complete forStmt cloning for unrolling to work for outer loops
- create IV const's only when needed
- test outer loop unrolling by creating a short trip count unroll pass for
  loops with trip counts <= <parameter>
- add unrolling test cases for multiple op results, outer loop unrolling
- fix/clean up StmtWalker class while on this
- switch unroll loop iterator values from i32 to affineint

PiperOrigin-RevId: 207645967
2019-03-29 12:56:16 -07:00
Tatiana Shpeisman a0a6414ca2 Implement ML function arguments. Add a representation for the argument list in MLFunction using the TrailingObjects template. Implement argument iterators, parsing, and printing.
Unrelated minor change - remove OperationStmt::dropReferences(). Since MLFunction does not have cyclic operand references (it's an AST), destruction can safely be done without a special pass to drop references.
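
A plausible example of the new argument syntax (illustrative, not taken from
the CL):

  mlfunc @example(%arg0 : affineint, %arg1 : f32) {
    %0 = "use"(%arg0, %arg1) : (affineint, f32) -> i32
    return
  }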

PiperOrigin-RevId: 207583024
2019-03-29 12:55:47 -07:00
Uday Bondhugula 65b6e73245 Loop unrolling update.
- deal with non-operation stmt's (if/for stmt's) in loops being unrolled
  (unrolling of non-innermost loops works).
- update uses in unrolled bodies to use results of new operations that may be
  introduced in the unrolled bodies.

Unrolling now works for all kinds of loop nests - perfect nests, imperfect
nests, loops at any depth, and with any kind of operation in the body. (IfStmt
support not done, hence untested there).

Added missing dump/print method for StmtBlock.

TODO: add test case for outer loop unrolling.
PiperOrigin-RevId: 207314286
2019-03-29 12:55:19 -07:00
Uday Bondhugula 2a003256ae MLStmt cloning and IV replacement for loop unrolling, add constant pool to
MLFunctions.

- MLStmt cloning and IV replacement
- While at this, fix the innermostLoopGatherer to actually gather all the
  innermost loops (it was stopping its walk at the first innermost loop it
  found)
- Improve comments for MLFunction statement classes, fix inheritance order.

- Fixed StmtBlock destructor.

PiperOrigin-RevId: 207049173
2019-03-29 12:53:02 -07:00
Tatiana Shpeisman c8b0273f19 Implement induction variables. Pretty print induction variable operands as %i<ssa value number>. Add support for future pretty printing of ML function arguments as %arg<ssa value number>.
Induction variables are implemented by inheriting ForStmt from MLValue. ForStmt provides APIs that make this design decision invisible to the ForStmt users.
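
For illustration, induction variables of nested loops now pretty-print as
%i0, %i1, ... in nesting order (as in the samples throughout this log):

  for %i0 = 1 to 10 {
    for %i1 = 1 to 10 {
      %0 = "foo"(%i0, %i1) : (affineint, affineint) -> affineint
    }
  }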

This CL in combination with cl/206253643 resolves  http://b/111769060.

PiperOrigin-RevId: 206655937
2019-03-29 12:49:36 -07:00
Uday Bondhugula a0abd666a7 Sketch out loop unrolling transformation.
- Implement a full loop unroll for innermost loops.
- Use it to implement a pass that unrolls all the innermost loops of all
  mlfunction's in a module. ForStmts, as currently parsed, have constant trip
  counts (and constant loop bounds).
- Implement StmtVisitor (based on the Visitor pattern)

Loop IVs aren't currently parsed and represented as SSA values. Replacing uses
of loop IVs in unrolled bodies is thus a TODO. Class comments are sparse in
some places; they will be added after one round of review comments.

A cmd-line flag triggers this for now.

Original:

mlfunc @loops() {
  for x = 1 to 100 step 2 {
    for x = 1 to 4 {
      "Const"(){value: 1} : () -> ()
    }
  }
  return
}

After unrolling:

mlfunc @loops() {
  for x = 1 to 100 step 2 {
    "Const"(){value: 1} : () -> ()
    "Const"(){value: 1} : () -> ()
    "Const"(){value: 1} : () -> ()
    "Const"(){value: 1} : () -> ()
  }
  return
}

PiperOrigin-RevId: 205933235
2019-03-29 12:43:01 -07:00
Tatiana Shpeisman 1b24c48b91 Scaffolding for a convertToCFG pass that replaces all instances of ML functions with equivalent CFG functions. Traverses the module's MLIR, generates CFG functions (empty for now), and removes ML functions. Adds a Transforms library and tests.
PiperOrigin-RevId: 205848367
2019-03-29 12:41:15 -07:00