Merge LowerAffineApplyPass into LowerIfAndForPass, rename to LowerAffinePass

This change is mechanical and merges the LowerAffineApplyPass and
LowerIfAndForPass into a single LowerAffinePass.  It is a step towards
defining an "affine dialect" that would contain all polyhedral-related
constructs.  The motivation for merging these two passes comes from the plan
to retire MLFunctions and, eventually, to turn If and For statements into
regular operations.  After that happens, LowerAffinePass becomes yet another
legalization.

PiperOrigin-RevId: 227566113
Alex Zinenko 2019-01-02 12:52:41 -08:00 committed by jpienaar
parent 3633becf8a
commit 0c4ee54198
8 changed files with 265 additions and 417 deletions


@ -4,19 +4,19 @@ This document describes the available MLIR passes and their contracts.
[TOC]
## Lower `if` and `for` (`-lower-if-and-for`) {#lower-if-and-for}
## Affine control lowering (`-lower-affine`) {#lower-affine-apply}
Lower the `if` and `for` instructions to the CFG equivalent.
Convert instructions related to affine control into a graph of blocks using
operations from the standard dialect.
Individual operations are preserved. Loops are converted to a subgraph of blocks
(initialization, condition checking, subgraph of body blocks) with loop
induction variable being passed as the block argument of the condition checking
block.
## `affine_apply` lowering (`-lower-affine-apply`) {#lower-affine-apply}
Convert `affine_apply` operations into arithmetic operations they comprise.
Arguments and results of all operations are of the `index` type.
Loop statements are converted to a subgraph of blocks (initialization, condition
checking, subgraph of body blocks) with the loop induction variable being passed
as
the block argument of the condition checking block. Conditional statements are
converted to a subgraph of blocks (chain of condition checking with
short-circuit logic, subgraphs of 'then' and 'else' body blocks). `affine_apply`
operations are converted into sequences of primitive arithmetic operations that
have the same effect, using operands of the `index` type. Consequently, named
maps and sets may be removed from the module.
For example, `%r = affine_apply (d0, d1)[s0] -> (d0 + 2*d1 + s0)(%d0, %d1)[%s0]`
can be converted into:
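The converted form itself falls outside this hunk; based on the expander added in this commit (which emits `constant`, `muli`, and `addi` operations on `index` values), it would look roughly like the following sketch, where the SSA value names are illustrative:

```mlir
// Hedged sketch of the expansion of d0 + 2*d1 + s0; the exact sequence shown
// in the documentation may differ in naming and ordering.
%c2 = constant 2 : index
%0 = muli %d1, %c2 : index    // 2*d1
%1 = addi %d0, %0 : index     // d0 + 2*d1
%r = addi %1, %s0 : index     // d0 + 2*d1 + s0
```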
@ -33,14 +33,20 @@ can be converted into:
### Input invariant
`if` and `for` instructions should be eliminated before this pass.
- no `Tensor` types;
These restrictions may be lifted in the future.
### Output IR
Functions that do not contain any `affine_apply` operations. Consequently, named
maps may be removed from the module. CFG functions may use any operations from
the StandardOps dialect in addition to the already used dialects.
Functions with `for` and `if` instructions eliminated. These functions may
contain operations from the Standard dialect in addition to those already
present before the pass.
### Invariants
- Operations other than `affine_apply` are not modified.
- Functions without a body are not modified.
- The semantics of the other functions is preserved.
- Individual operations other than those mentioned above are not modified if
they do not depend on the loop iterator value or on the result of
`affine_apply`.
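To make the loop conversion described above concrete, here is a hedged sketch (not part of this commit) of the block structure the pass produces for a trivial loop such as `for %i = 0 to 10`; the actual output may differ in block numbering, bound materialization, and constant placement:

```mlir
func @lowered_loop() {
^bb0:                                  // initialization: materialize bounds
  %c0 = constant 0 : index
  %c10 = constant 10 : index
  br ^bb1(%c0 : index)
^bb1(%i: index):                       // condition check; %i is the induction
  %cmp = cmpi "slt", %i, %c10 : index  // variable passed as a block argument
  cond_br %cmp, ^bb2, ^bb3
^bb2:                                  // loop body
  %c1 = constant 1 : index
  %next = addi %i, %c1 : index         // advance by the step
  br ^bb1(%next : index)
^bb3:                                  // code after the loop
  return
}
```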


@ -1,56 +0,0 @@
//===- LoweringUtils.h ---- Utilities for Lowering Passes -------*- C++ -*-===//
//
// Copyright 2019 The MLIR Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// =============================================================================
//
// This file implements miscellaneous utility functions for lowering passes.
//
//===----------------------------------------------------------------------===//
#ifndef MLIR_INCLUDE_MLIR_TRANSFORMS_LOWERINGUTILS_H
#define MLIR_INCLUDE_MLIR_TRANSFORMS_LOWERINGUTILS_H
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/Location.h"
#include "mlir/Support/LLVM.h"
namespace mlir {
class AffineApplyOp;
class FuncBuilder;
class Value;
/// Expand the affine expression `expr` applied to the given dimension and
/// symbol values into a sequence of primitive arithmetic instructions that have
/// the same effect. Report errors at location `loc`. Return the resulting
/// value that the expression evaluates to, or `nullptr` in case of error.
mlir::Value *expandAffineExpr(FuncBuilder *builder, Location loc,
AffineExpr expr, ArrayRef<Value *> dimValues,
ArrayRef<Value *> symbolValues);
/// Expand the `affineMap` applied to `operands` into a sequence of primitive
/// arithmetic instructions that have the same effect. The list of operands
/// contains the values of dimensions, followed by those of symbols. Use
/// `builder` to create new instructions. Report errors at the specified
/// location `loc`. Return a list of results, or `None` if any expansion
/// failed.
Optional<SmallVector<Value *, 8>> expandAffineMap(FuncBuilder *builder,
Location loc,
AffineMap affineMap,
ArrayRef<Value *> operands);
} // namespace mlir
#endif // MLIR_INCLUDE_MLIR_TRANSFORMS_LOWERINGUTILS_H


@ -79,8 +79,10 @@ FunctionPass *createPipelineDataTransferPass();
/// Creates a pass which composes all affine maps applied to loads and stores.
FunctionPass *createComposeAffineMapsPass();
/// Lowers IfInst and ForInst to the equivalent lower level CFG structures.
FunctionPass *createLowerIfAndForPass();
/// Lowers affine control flow instructions (ForStmt, IfStmt and AffineApplyOp)
/// to equivalent lower-level constructs (flow of basic blocks and arithmetic
/// primitives).
FunctionPass *createLowerAffinePass();
/// Creates a pass to perform tiling on loop nests.
FunctionPass *createLoopTilingPass();
@ -91,12 +93,6 @@ FunctionPass *createDmaGenerationPass(unsigned lowMemorySpace,
unsigned highMemorySpace,
int minDmaTransferSize = 1024);
/// Replaces affine_apply operations in CFGFunctions with the arithmetic
/// primitives (addition, multiplication) they comprise. Errors out on
/// any Function since it may contain affine_applies baked into the For loop
/// bounds that cannot be replaced.
FunctionPass *createLowerAffineApplyPass();
/// Creates a pass to lower VectorTransferReadOp and VectorTransferWriteOp.
FunctionPass *createLowerVectorTransfersPass();


@ -1,4 +1,4 @@
//===- LowerIfAndFor.cpp - Lower If and For instructions to CFG -----------===//
//===- LowerAffine.cpp - Lower affine constructs to primitives ------------===//
//
// Copyright 2019 The MLIR Authors.
//
@ -15,33 +15,141 @@
// limitations under the License.
// =============================================================================
//
// This file lowers If and For instructions within a function into their lower
// level CFG equivalent blocks.
// This file lowers affine constructs (If and For statements, AffineApply
// operations) within a function into their lower level CFG equivalent blocks.
//
//===----------------------------------------------------------------------===//
#include "mlir/IR/AffineExprVisitor.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/Pass.h"
#include "mlir/StandardOps/StandardOps.h"
#include "mlir/Transforms/LoweringUtils.h"
#include "mlir/Support/Functional.h"
#include "mlir/Transforms/Passes.h"
using namespace mlir;
namespace {
class LowerIfAndForPass : public FunctionPass {
// Visit affine expressions recursively and build the sequence of instructions
// that correspond to it. Visitation functions return a Value of the
// expression subtree they visited or `nullptr` on error.
class AffineApplyExpander
: public AffineExprVisitor<AffineApplyExpander, Value *> {
public:
LowerIfAndForPass() : FunctionPass(&passID) {}
// This internal class expects arguments to be non-null, checks must be
// performed at the call site.
AffineApplyExpander(FuncBuilder *builder, ArrayRef<Value *> dimValues,
ArrayRef<Value *> symbolValues, Location loc)
: builder(*builder), dimValues(dimValues), symbolValues(symbolValues),
loc(loc) {}
template <typename OpTy> Value *buildBinaryExpr(AffineBinaryOpExpr expr) {
auto lhs = visit(expr.getLHS());
auto rhs = visit(expr.getRHS());
if (!lhs || !rhs)
return nullptr;
auto op = builder.create<OpTy>(loc, lhs, rhs);
return op->getResult();
}
Value *visitAddExpr(AffineBinaryOpExpr expr) {
return buildBinaryExpr<AddIOp>(expr);
}
Value *visitMulExpr(AffineBinaryOpExpr expr) {
return buildBinaryExpr<MulIOp>(expr);
}
// TODO(zinenko): implement when the standard operators are made available.
Value *visitModExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc, "unsupported binary operator: mod");
return nullptr;
}
Value *visitFloorDivExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc,
"unsupported binary operator: floor_div");
return nullptr;
}
Value *visitCeilDivExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc,
"unsupported binary operator: ceil_div");
return nullptr;
}
Value *visitConstantExpr(AffineConstantExpr expr) {
auto valueAttr =
builder.getIntegerAttr(builder.getIndexType(), expr.getValue());
auto op =
builder.create<ConstantOp>(loc, valueAttr, builder.getIndexType());
return op->getResult();
}
Value *visitDimExpr(AffineDimExpr expr) {
assert(expr.getPosition() < dimValues.size() &&
"affine dim position out of range");
return dimValues[expr.getPosition()];
}
Value *visitSymbolExpr(AffineSymbolExpr expr) {
assert(expr.getPosition() < symbolValues.size() &&
"symbol dim position out of range");
return symbolValues[expr.getPosition()];
}
private:
FuncBuilder &builder;
ArrayRef<Value *> dimValues;
ArrayRef<Value *> symbolValues;
Location loc;
};
} // namespace
// Create a sequence of instructions that implement the `expr` applied to the
// given dimension and symbol values.
static mlir::Value *expandAffineExpr(FuncBuilder *builder, Location loc,
AffineExpr expr,
ArrayRef<Value *> dimValues,
ArrayRef<Value *> symbolValues) {
return AffineApplyExpander(builder, dimValues, symbolValues, loc).visit(expr);
}
// Create a sequence of instructions that implement the `affineMap` applied to
// the given `operands` (as if it were an AffineApplyOp).
Optional<SmallVector<Value *, 8>> static expandAffineMap(
FuncBuilder *builder, Location loc, AffineMap affineMap,
ArrayRef<Value *> operands) {
auto numDims = affineMap.getNumDims();
auto expanded = functional::map(
[numDims, builder, loc, operands](AffineExpr expr) {
return expandAffineExpr(builder, loc, expr,
operands.take_front(numDims),
operands.drop_front(numDims));
},
affineMap.getResults());
if (llvm::all_of(expanded, [](Value *v) { return v; }))
return expanded;
return None;
}
namespace {
class LowerAffinePass : public FunctionPass {
public:
LowerAffinePass() : FunctionPass(&passID) {}
PassResult runOnFunction(Function *function) override;
bool lowerForInst(ForInst *forInst);
bool lowerIfInst(IfInst *ifInst);
bool lowerAffineApply(AffineApplyOp *op);
static char passID;
};
} // end anonymous namespace
char LowerIfAndForPass::passID = 0;
char LowerAffinePass::passID = 0;
// Given a range of values, emit the code that reduces them with "min" or "max"
// depending on the provided comparison predicate. The predicate defines which
@ -112,7 +220,7 @@ static Value *buildMinMaxReductionSeq(Location loc, CmpIPredicate predicate,
// | <code after the ForInst> |
// +--------------------------------+
//
bool LowerIfAndForPass::lowerForInst(ForInst *forInst) {
bool LowerAffinePass::lowerForInst(ForInst *forInst) {
auto loc = forInst->getLoc();
// Start by splitting the block containing the 'for' into two parts. The part
@ -244,7 +352,7 @@ bool LowerIfAndForPass::lowerForInst(ForInst *forInst) {
// | <code after the IfInst> |
// +--------------------------------+
//
bool LowerIfAndForPass::lowerIfInst(IfInst *ifInst) {
bool LowerAffinePass::lowerIfInst(IfInst *ifInst) {
auto loc = ifInst->getLoc();
// Start by splitting the block containing the 'if' into two parts. The part
@ -341,6 +449,26 @@ bool LowerIfAndForPass::lowerIfInst(IfInst *ifInst) {
return false;
}
// Convert an "affine_apply" operation into a sequence of arithmetic
// instructions using the StandardOps dialect. Return true on error.
bool LowerAffinePass::lowerAffineApply(AffineApplyOp *op) {
FuncBuilder builder(op->getInstruction());
auto maybeExpandedMap =
expandAffineMap(&builder, op->getLoc(), op->getAffineMap(),
llvm::to_vector<8>(op->getOperands()));
if (!maybeExpandedMap)
return true;
for (auto pair : llvm::zip(op->getResults(), *maybeExpandedMap)) {
Value *original = std::get<0>(pair);
Value *expanded = std::get<1>(pair);
if (!expanded)
return true;
original->replaceAllUsesWith(expanded);
}
op->erase();
return false;
}
// Entry point of the function convertor.
//
// Conversion is performed by recursively visiting instructions of a Function.
@ -359,14 +487,17 @@ bool LowerIfAndForPass::lowerIfInst(IfInst *ifInst) {
// construction. When a Value is used, it gets replaced with the
// corresponding Value that has been defined previously. The value flow
// starts with function arguments converted to basic block arguments.
PassResult LowerIfAndForPass::runOnFunction(Function *function) {
PassResult LowerAffinePass::runOnFunction(Function *function) {
SmallVector<Instruction *, 8> instsToRewrite;
// Collect all the If and For statements. We do this as a prepass to avoid
// invalidating the walker with our rewrite.
// Collect all the If and For instructions as well as AffineApplyOps. We do
// this as a prepass to avoid invalidating the walker with our rewrite.
function->walkInsts([&](Instruction *inst) {
if (isa<IfInst>(inst) || isa<ForInst>(inst))
instsToRewrite.push_back(inst);
auto op = dyn_cast<OperationInst>(inst);
if (op && op->isa<AffineApplyOp>())
instsToRewrite.push_back(inst);
});
// Rewrite all of the ifs and fors. We walked the instructions in preorder,
@ -375,8 +506,12 @@ PassResult LowerIfAndForPass::runOnFunction(Function *function) {
if (auto *ifInst = dyn_cast<IfInst>(inst)) {
if (lowerIfInst(ifInst))
return failure();
} else if (auto *forInst = dyn_cast<ForInst>(inst)) {
if (lowerForInst(forInst))
return failure();
} else {
if (lowerForInst(cast<ForInst>(inst)))
auto op = cast<OperationInst>(inst);
if (lowerAffineApply(op->cast<AffineApplyOp>()))
return failure();
}
@ -385,10 +520,8 @@ PassResult LowerIfAndForPass::runOnFunction(Function *function) {
/// Lowers If and For instructions within a function into their lower level CFG
/// equivalent blocks.
FunctionPass *mlir::createLowerIfAndForPass() {
return new LowerIfAndForPass();
}
FunctionPass *mlir::createLowerAffinePass() { return new LowerAffinePass(); }
static PassRegistration<LowerIfAndForPass>
pass("lower-if-and-for",
"Lower If and For instructions to CFG equivalents");
static PassRegistration<LowerAffinePass>
pass("lower-affine",
"Lower If, For, AffineApply instructions to primitive equivalents");


@ -1,95 +0,0 @@
//===- LowerAffineApply.cpp - Convert affine_apply to primitives ----------===//
//
// Copyright 2019 The MLIR Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// =============================================================================
//
// This file defines an MLIR function pass that replaces affine_apply operations
// in CFGFunctions with sequences of corresponding elementary arithmetic
// operations.
//
//===----------------------------------------------------------------------===//
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass.h"
#include "mlir/Transforms/LoweringUtils.h"
#include "mlir/Transforms/Passes.h"
using namespace mlir;
namespace {
// TODO: This shouldn't be its own pass, it should be a legalization (once we
// have the proper infra).
struct LowerAffineApply : public FunctionPass {
explicit LowerAffineApply() : FunctionPass(&LowerAffineApply::passID) {}
PassResult runOnFunction(Function *f) override;
static char passID;
};
} // end anonymous namespace
char LowerAffineApply::passID = 0;
// Given an affine expression `expr` extracted from `op`, build the sequence of
// primitive instructions that correspond to the affine expression in the
// `builder`.
static bool expandAffineApply(AffineApplyOp *op) {
if (!op)
return true;
FuncBuilder builder(op->getInstruction());
auto maybeExpandedMap =
expandAffineMap(&builder, op->getLoc(), op->getAffineMap(),
llvm::to_vector<8>(op->getOperands()));
if (!maybeExpandedMap)
return true;
for (auto pair : llvm::zip(op->getResults(), *maybeExpandedMap)) {
Value *original = std::get<0>(pair);
Value *expanded = std::get<1>(pair);
if (!expanded)
return true;
original->replaceAllUsesWith(expanded);
}
op->erase();
return false;
}
PassResult LowerAffineApply::runOnFunction(Function *f) {
SmallVector<OpPointer<AffineApplyOp>, 8> affineApplyInsts;
// Find all the affine_apply operations.
f->walkOps([&](OperationInst *inst) {
auto applyOp = inst->dyn_cast<AffineApplyOp>();
if (applyOp)
affineApplyInsts.push_back(applyOp);
});
// Rewrite them in a second pass, avoiding invalidation of the walker
// iterator.
for (auto applyOp : affineApplyInsts)
if (expandAffineApply(applyOp))
return failure();
return success();
}
static PassRegistration<LowerAffineApply>
pass("lower-affine-apply",
"Decompose affine_applies into primitive operations");
FunctionPass *mlir::createLowerAffineApplyPass() {
return new LowerAffineApply();
}


@ -1,137 +0,0 @@
//===- LoweringUtils.cpp - Utilities for Lowering Passes ------------------===//
//
// Copyright 2019 The MLIR Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// =============================================================================
//
// This file implements utility functions for lowering passes, for example
// lowering affine_apply operations to individual components.
//
//===----------------------------------------------------------------------===//
#include "mlir/Transforms/LoweringUtils.h"
#include "mlir/IR/AffineExprVisitor.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/StandardOps/StandardOps.h"
#include "mlir/Support/Functional.h"
#include "mlir/Support/LLVM.h"
using namespace mlir;
namespace {
// Visit affine expressions recursively and build the sequence of instructions
// that correspond to it. Visitation functions return a Value of the
// expression subtree they visited or `nullptr` on error.
class AffineApplyExpander
: public AffineExprVisitor<AffineApplyExpander, Value *> {
public:
// This internal class expects arguments to be non-null, checks must be
// performed at the call site.
AffineApplyExpander(FuncBuilder *builder, ArrayRef<Value *> dimValues,
ArrayRef<Value *> symbolValues, Location loc)
: builder(*builder), dimValues(dimValues), symbolValues(symbolValues),
loc(loc) {}
template <typename OpTy> Value *buildBinaryExpr(AffineBinaryOpExpr expr) {
auto lhs = visit(expr.getLHS());
auto rhs = visit(expr.getRHS());
if (!lhs || !rhs)
return nullptr;
auto op = builder.create<OpTy>(loc, lhs, rhs);
return op->getResult();
}
Value *visitAddExpr(AffineBinaryOpExpr expr) {
return buildBinaryExpr<AddIOp>(expr);
}
Value *visitMulExpr(AffineBinaryOpExpr expr) {
return buildBinaryExpr<MulIOp>(expr);
}
// TODO(zinenko): implement when the standard operators are made available.
Value *visitModExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc, "unsupported binary operator: mod");
return nullptr;
}
Value *visitFloorDivExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc,
"unsupported binary operator: floor_div");
return nullptr;
}
Value *visitCeilDivExpr(AffineBinaryOpExpr) {
builder.getContext()->emitError(loc,
"unsupported binary operator: ceil_div");
return nullptr;
}
Value *visitConstantExpr(AffineConstantExpr expr) {
auto valueAttr =
builder.getIntegerAttr(builder.getIndexType(), expr.getValue());
auto op =
builder.create<ConstantOp>(loc, valueAttr, builder.getIndexType());
return op->getResult();
}
Value *visitDimExpr(AffineDimExpr expr) {
assert(expr.getPosition() < dimValues.size() &&
"affine dim position out of range");
return dimValues[expr.getPosition()];
}
Value *visitSymbolExpr(AffineSymbolExpr expr) {
assert(expr.getPosition() < symbolValues.size() &&
"symbol dim position out of range");
return symbolValues[expr.getPosition()];
}
private:
FuncBuilder &builder;
ArrayRef<Value *> dimValues;
ArrayRef<Value *> symbolValues;
Location loc;
};
} // namespace
// Create a sequence of instructions that implement the `expr` applied to the
// given dimension and symbol values.
mlir::Value *mlir::expandAffineExpr(FuncBuilder *builder, Location loc,
AffineExpr expr,
ArrayRef<Value *> dimValues,
ArrayRef<Value *> symbolValues) {
return AffineApplyExpander(builder, dimValues, symbolValues, loc).visit(expr);
}
// Create a sequence of instructions that implement the `affineMap` applied to
// the given `operands` (as if it were an AffineApplyOp).
Optional<SmallVector<Value *, 8>>
mlir::expandAffineMap(FuncBuilder *builder, Location loc, AffineMap affineMap,
ArrayRef<Value *> operands) {
auto numDims = affineMap.getNumDims();
auto expanded = functional::map(
[numDims, builder, loc, operands](AffineExpr expr) {
return expandAffineExpr(builder, loc, expr,
operands.take_front(numDims),
operands.drop_front(numDims));
},
affineMap.getResults());
if (llvm::all_of(expanded, [](Value *v) { return v; }))
return expanded;
return None;
}


@ -1,84 +0,0 @@
// RUN: mlir-opt -lower-affine-apply %s | FileCheck %s
#map0 = () -> (0)
#map1 = ()[s0] -> (s0)
#map2 = (d0) -> (d0)
#map3 = (d0)[s0] -> (d0 + s0 + 1)
#map4 = (d0,d1,d2,d3)[s0,s1,s2] -> (d0 + 2*d1 + 3*d2 + 4*d3 + 5*s0 + 6*s1 + 7*s2)
#map5 = (d0,d1,d2) -> (d0,d1,d2)
#map6 = (d0,d1,d2) -> (d0 + d1 + d2)
// CHECK-LABEL: func @affine_applies()
func @affine_applies() {
^bb0:
// CHECK: %c0 = constant 0 : index
%zero = affine_apply #map0()
// Identity maps are just discarded.
// CHECK-NEXT: %c101 = constant 101 : index
%101 = constant 101 : index
%symbZero = affine_apply #map1()[%zero]
// CHECK-NEXT: %c102 = constant 102 : index
%102 = constant 102 : index
%copy = affine_apply #map2(%zero)
// CHECK-NEXT: %0 = addi %c0, %c0 : index
// CHECK-NEXT: %c1 = constant 1 : index
// CHECK-NEXT: %1 = addi %0, %c1 : index
%one = affine_apply #map3(%symbZero)[%zero]
// CHECK-NEXT: %c103 = constant 103 : index
// CHECK-NEXT: %c104 = constant 104 : index
// CHECK-NEXT: %c105 = constant 105 : index
// CHECK-NEXT: %c106 = constant 106 : index
// CHECK-NEXT: %c107 = constant 107 : index
// CHECK-NEXT: %c108 = constant 108 : index
// CHECK-NEXT: %c109 = constant 109 : index
%103 = constant 103 : index
%104 = constant 104 : index
%105 = constant 105 : index
%106 = constant 106 : index
%107 = constant 107 : index
%108 = constant 108 : index
%109 = constant 109 : index
// CHECK-NEXT: %c2 = constant 2 : index
// CHECK-NEXT: %2 = muli %c104, %c2 : index
// CHECK-NEXT: %3 = addi %c103, %2 : index
// CHECK-NEXT: %c3 = constant 3 : index
// CHECK-NEXT: %4 = muli %c105, %c3 : index
// CHECK-NEXT: %5 = addi %3, %4 : index
// CHECK-NEXT: %c4 = constant 4 : index
// CHECK-NEXT: %6 = muli %c106, %c4 : index
// CHECK-NEXT: %7 = addi %5, %6 : index
// CHECK-NEXT: %c5 = constant 5 : index
// CHECK-NEXT: %8 = muli %c107, %c5 : index
// CHECK-NEXT: %9 = addi %7, %8 : index
// CHECK-NEXT: %c6 = constant 6 : index
// CHECK-NEXT: %10 = muli %c108, %c6 : index
// CHECK-NEXT: %11 = addi %9, %10 : index
// CHECK-NEXT: %c7 = constant 7 : index
// CHECK-NEXT: %12 = muli %c109, %c7 : index
// CHECK-NEXT: %13 = addi %11, %12 : index
%four = affine_apply #map4(%103,%104,%105,%106)[%107,%108,%109]
return
}
// CHECK-LABEL: func @multiresult_affine_apply()
func @multiresult_affine_apply() {
// CHECK-NEXT: %c1 = constant 1 : index
// CHECK-NEXT: %0 = addi %c1, %c1 : index
// CHECK-NEXT: %1 = addi %0, %c1 : index
%one = constant 1 : index
%tuple = affine_apply #map5 (%one, %one, %one)
%three = affine_apply #map6 (%tuple#0, %tuple#1, %tuple#2)
return
}
// CHECK-LABEL: func @args_ret_affine_apply(%arg0: index, %arg1: index)
func @args_ret_affine_apply(index, index) -> (index, index) {
^bb0(%0 : index, %1 : index):
// CHECK-NEXT: return %arg0, %arg1 : index, index
%00 = affine_apply #map2 (%0)
%11 = affine_apply #map1 ()[%1]
return %00, %11 : index, index
}


@ -1,4 +1,4 @@
// RUN: mlir-opt -lower-if-and-for %s | FileCheck %s
// RUN: mlir-opt -lower-affine %s | FileCheck %s
// CHECK-LABEL: func @empty() {
func @empty() {
@ -491,3 +491,88 @@ func @min_reduction_tree(%v : index) {
}
return
}
/////////////////////////////////////////////////////////////////////
#map0 = () -> (0)
#map1 = ()[s0] -> (s0)
#map2 = (d0) -> (d0)
#map3 = (d0)[s0] -> (d0 + s0 + 1)
#map4 = (d0,d1,d2,d3)[s0,s1,s2] -> (d0 + 2*d1 + 3*d2 + 4*d3 + 5*s0 + 6*s1 + 7*s2)
#map5 = (d0,d1,d2) -> (d0,d1,d2)
#map6 = (d0,d1,d2) -> (d0 + d1 + d2)
// CHECK-LABEL: func @affine_applies()
func @affine_applies() {
^bb0:
// CHECK: %c0 = constant 0 : index
%zero = affine_apply #map0()
// Identity maps are just discarded.
// CHECK-NEXT: %c101 = constant 101 : index
%101 = constant 101 : index
%symbZero = affine_apply #map1()[%zero]
// CHECK-NEXT: %c102 = constant 102 : index
%102 = constant 102 : index
%copy = affine_apply #map2(%zero)
// CHECK-NEXT: %0 = addi %c0, %c0 : index
// CHECK-NEXT: %c1 = constant 1 : index
// CHECK-NEXT: %1 = addi %0, %c1 : index
%one = affine_apply #map3(%symbZero)[%zero]
// CHECK-NEXT: %c103 = constant 103 : index
// CHECK-NEXT: %c104 = constant 104 : index
// CHECK-NEXT: %c105 = constant 105 : index
// CHECK-NEXT: %c106 = constant 106 : index
// CHECK-NEXT: %c107 = constant 107 : index
// CHECK-NEXT: %c108 = constant 108 : index
// CHECK-NEXT: %c109 = constant 109 : index
%103 = constant 103 : index
%104 = constant 104 : index
%105 = constant 105 : index
%106 = constant 106 : index
%107 = constant 107 : index
%108 = constant 108 : index
%109 = constant 109 : index
// CHECK-NEXT: %c2 = constant 2 : index
// CHECK-NEXT: %2 = muli %c104, %c2 : index
// CHECK-NEXT: %3 = addi %c103, %2 : index
// CHECK-NEXT: %c3 = constant 3 : index
// CHECK-NEXT: %4 = muli %c105, %c3 : index
// CHECK-NEXT: %5 = addi %3, %4 : index
// CHECK-NEXT: %c4 = constant 4 : index
// CHECK-NEXT: %6 = muli %c106, %c4 : index
// CHECK-NEXT: %7 = addi %5, %6 : index
// CHECK-NEXT: %c5 = constant 5 : index
// CHECK-NEXT: %8 = muli %c107, %c5 : index
// CHECK-NEXT: %9 = addi %7, %8 : index
// CHECK-NEXT: %c6 = constant 6 : index
// CHECK-NEXT: %10 = muli %c108, %c6 : index
// CHECK-NEXT: %11 = addi %9, %10 : index
// CHECK-NEXT: %c7 = constant 7 : index
// CHECK-NEXT: %12 = muli %c109, %c7 : index
// CHECK-NEXT: %13 = addi %11, %12 : index
%four = affine_apply #map4(%103,%104,%105,%106)[%107,%108,%109]
return
}
// CHECK-LABEL: func @multiresult_affine_apply()
func @multiresult_affine_apply() {
// CHECK-NEXT: %c1 = constant 1 : index
// CHECK-NEXT: %0 = addi %c1, %c1 : index
// CHECK-NEXT: %1 = addi %0, %c1 : index
%one = constant 1 : index
%tuple = affine_apply #map5 (%one, %one, %one)
%three = affine_apply #map6 (%tuple#0, %tuple#1, %tuple#2)
return
}
// CHECK-LABEL: func @args_ret_affine_apply(%arg0: index, %arg1: index)
func @args_ret_affine_apply(index, index) -> (index, index) {
^bb0(%0 : index, %1 : index):
// CHECK-NEXT: return %arg0, %arg1 : index, index
%00 = affine_apply #map2 (%0)
%11 = affine_apply #map1 ()[%1]
return %00, %11 : index, index
}