llvm-project/mlir/unittests/Pass/AnalysisManagerTest.cpp


//===- AnalysisManagerTest.cpp - AnalysisManager unit tests ---------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "mlir/Pass/AnalysisManager.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/Function.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Pass/PassManager.h"
#include "gtest/gtest.h"
using namespace mlir;
using namespace mlir::detail;
namespace {
/// Minimal class definitions for the analyses used in these tests.
struct MyAnalysis {
  MyAnalysis(Operation *) {}
};
struct OtherAnalysis {
  OtherAnalysis(Operation *) {}
};
struct OpSpecificAnalysis {
  OpSpecificAnalysis(ModuleOp) {}
};

TEST(AnalysisManagerTest, FineGrainModuleAnalysisPreservation) {
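  // Create a context that does not pre-load the globally registered dialects.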
  MLIRContext context(false);

  // Test fine-grained invalidation of the module analysis manager.
  OwningModuleRef module(ModuleOp::create(UnknownLoc::get(&context)));
  ModuleAnalysisManager mam(*module, /*passInstrumentor=*/nullptr);
  AnalysisManager am = mam;

  // Query two different analyses, but only preserve one before invalidating.
  am.getAnalysis<MyAnalysis>();
  am.getAnalysis<OtherAnalysis>();

  detail::PreservedAnalyses pa;
  pa.preserve<MyAnalysis>();
  am.invalidate(pa);

  // Check that only MyAnalysis is preserved.
  EXPECT_TRUE(am.getCachedAnalysis<MyAnalysis>().hasValue());
  EXPECT_FALSE(am.getCachedAnalysis<OtherAnalysis>().hasValue());
}

TEST(AnalysisManagerTest, FineGrainFunctionAnalysisPreservation) {
  MLIRContext context(false);
  Builder builder(&context);

  // Create a function and a module.
  OwningModuleRef module(ModuleOp::create(UnknownLoc::get(&context)));
  FuncOp func1 =
      FuncOp::create(builder.getUnknownLoc(), "foo",
                     builder.getFunctionType(llvm::None, llvm::None));
  module->push_back(func1);

  // Test fine-grained invalidation of the function analysis manager.
  ModuleAnalysisManager mam(*module, /*passInstrumentor=*/nullptr);
  AnalysisManager am = mam;
  AnalysisManager fam = am.nest(func1);

  // Query two different analyses, but only preserve one before invalidating.
  fam.getAnalysis<MyAnalysis>();
  fam.getAnalysis<OtherAnalysis>();

  detail::PreservedAnalyses pa;
  pa.preserve<MyAnalysis>();
  fam.invalidate(pa);

  // Check that only MyAnalysis is preserved.
  EXPECT_TRUE(fam.getCachedAnalysis<MyAnalysis>().hasValue());
  EXPECT_FALSE(fam.getCachedAnalysis<OtherAnalysis>().hasValue());
}

TEST(AnalysisManagerTest, FineGrainChildFunctionAnalysisPreservation) {
  MLIRContext context(false);
  Builder builder(&context);

  // Create a function and a module.
  OwningModuleRef module(ModuleOp::create(UnknownLoc::get(&context)));
  FuncOp func1 =
      FuncOp::create(builder.getUnknownLoc(), "foo",
                     builder.getFunctionType(llvm::None, llvm::None));
  module->push_back(func1);

  // Test fine-grained invalidation of a function analysis from within a
  // module analysis manager.
  ModuleAnalysisManager mam(*module, /*passInstrumentor=*/nullptr);
  AnalysisManager am = mam;

  // Check that the analysis cache is initially empty.
  EXPECT_FALSE(am.getCachedChildAnalysis<MyAnalysis>(func1).hasValue());

  // Query two different analyses, but only preserve one before invalidating.
  am.getChildAnalysis<MyAnalysis>(func1);
  am.getChildAnalysis<OtherAnalysis>(func1);

  detail::PreservedAnalyses pa;
  pa.preserve<MyAnalysis>();
  am.invalidate(pa);

  // Check that only MyAnalysis is preserved.
  EXPECT_TRUE(am.getCachedChildAnalysis<MyAnalysis>(func1).hasValue());
  EXPECT_FALSE(am.getCachedChildAnalysis<OtherAnalysis>(func1).hasValue());
}

/// Test analyses with custom invalidation logic.
struct TestAnalysisSet {};
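/// An analysis can opt out of the default invalidation behavior by defining
/// an `isInvalidated` hook that inspects the set of preserved analyses.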
struct CustomInvalidatingAnalysis {
  CustomInvalidatingAnalysis(Operation *) {}

  bool isInvalidated(const AnalysisManager::PreservedAnalyses &pa) {
    return !pa.isPreserved<TestAnalysisSet>();
  }
};

TEST(AnalysisManagerTest, CustomInvalidation) {
  MLIRContext context(false);
  Builder builder(&context);

  // Create a module.
  OwningModuleRef module(ModuleOp::create(UnknownLoc::get(&context)));
  ModuleAnalysisManager mam(*module, /*passInstrumentor=*/nullptr);
  AnalysisManager am = mam;
  detail::PreservedAnalyses pa;

  // Check that the analysis is invalidated properly.
  am.getAnalysis<CustomInvalidatingAnalysis>();
  am.invalidate(pa);
  EXPECT_FALSE(am.getCachedAnalysis<CustomInvalidatingAnalysis>().hasValue());

  // Check that the analysis is preserved properly.
  am.getAnalysis<CustomInvalidatingAnalysis>();
  pa.preserve<TestAnalysisSet>();
  am.invalidate(pa);
  EXPECT_TRUE(am.getCachedAnalysis<CustomInvalidatingAnalysis>().hasValue());
}

TEST(AnalysisManagerTest, OpSpecificAnalysis) {
  MLIRContext context;

  // Create a module.
  OwningModuleRef module(ModuleOp::create(UnknownLoc::get(&context)));
  ModuleAnalysisManager mam(*module, /*passInstrumentor=*/nullptr);
  AnalysisManager am = mam;

  // Query the op-specific analysis for the module and verify that it is
  // cached.
  am.getAnalysis<OpSpecificAnalysis, ModuleOp>();
  EXPECT_TRUE(am.getCachedAnalysis<OpSpecificAnalysis>().hasValue());
}

} // end namespace