Simple CPU runner

This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class, which compiles and
links the module and provides an interface for obtaining function pointers to
the JIT-compiled MLIR functions and for invoking those functions directly.
Since function pointers need to be cast to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)`, where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, followed, if the function returns a value, by a pointer to the
result. Additionally, the ExecutionEngine is set up to resolve library
functions to those available in the current process, enabling support for,
e.g., simple C library calls.
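
To illustrate the packed calling convention (a hypothetical sketch, not part of
this change; the function name `foo` is made up), an MLIR function
`foo : (f32) -> f32` is exposed by the engine through a wrapper of the form:

    // args[0] points to the f32 argument, args[1] to storage for the result.
    void _mlir_foo(void **args);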

For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (which can be mapped to C
structs) with a pointer to the data, followed by the dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extended if necessary.
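
For a statically-shaped float memref, the descriptor can therefore be modeled
on the C/C++ side roughly as follows (a sketch for illustration only):

    struct MemRefDescriptor {
      float *data; // dynamic sizes would follow for dynamically-shaped memrefs
    };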

Provide a binary for the runner and a test that exercises it.

PiperOrigin-RevId: 230876363

//===- ExecutionEngine.cpp - MLIR Execution engine and utils --------------===//
//
// Copyright 2019 The MLIR Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// =============================================================================
//
// This file implements the execution engine for MLIR modules based on LLVM Orc
// JIT engine.
//
//===----------------------------------------------------------------------===//
#include "mlir/ExecutionEngine/ExecutionEngine.h"
|
|
|
|
#include "mlir/IR/Function.h"
|
|
|
|
#include "mlir/IR/Module.h"
|
2019-03-11 09:43:55 +08:00
|
|
|
#include "mlir/LLVMIR/Transforms.h"
|
2019-02-28 06:45:36 +08:00
|
|
|
#include "mlir/Pass/Pass.h"
|
2019-02-28 02:59:29 +08:00
|
|
|
#include "mlir/Pass/PassManager.h"
|
Simple CPU runner
This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class that compiles and links
the module, and provides an interface for obtaining function pointers to the
JIT-compiled MLIR functions and for invoking those functions directly. Since
function pointers need to be casted to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)` where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, eventually followed by a pointer to the result of the function.
Additionally, the ExecutionEngine is set up to resolve library functions to
those available in the current process, enabling support for, e.g., simple C
library calls.
For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (can be mapped to C
structs) with a pointer to the data, followed by dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extened if necessary.
Provide a binary for the runner and a test that exercises it.
PiperOrigin-RevId: 230876363
2019-01-25 19:16:06 +08:00
|
|
|
#include "mlir/Target/LLVMIR.h"
|
|
|
|
#include "mlir/Transforms/Passes.h"
|
|
|
|
|
|
|
|
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
|
|
|
|
#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
|
|
|
|
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
|
2019-02-08 00:12:14 +08:00
|
|
|
#include "llvm/ExecutionEngine/Orc/IRTransformLayer.h"
|
Simple CPU runner
This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class that compiles and links
the module, and provides an interface for obtaining function pointers to the
JIT-compiled MLIR functions and for invoking those functions directly. Since
function pointers need to be casted to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)` where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, eventually followed by a pointer to the result of the function.
Additionally, the ExecutionEngine is set up to resolve library functions to
those available in the current process, enabling support for, e.g., simple C
library calls.
For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (can be mapped to C
structs) with a pointer to the data, followed by dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extened if necessary.
Provide a binary for the runner and a test that exercises it.
PiperOrigin-RevId: 230876363
2019-01-25 19:16:06 +08:00
|
|
|
#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
|
|
|
|
#include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
|
|
|
|
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
|
|
|
|
#include "llvm/IR/IRBuilder.h"
|
|
|
|
#include "llvm/Support/Error.h"
|
|
|
|
#include "llvm/Support/TargetRegistry.h"
|
|
|
|
|
|
|
|

using namespace mlir;
using llvm::Error;
using llvm::Expected;

namespace {
// Memory manager for the JIT's objectLayer. Its main goal is to fall back to
// resolving functions in the current process if they cannot be resolved in the
// JIT-compiled modules.
class MemoryManager : public llvm::SectionMemoryManager {
public:
  MemoryManager(llvm::orc::ExecutionSession &execSession)
      : session(execSession) {}

  // Resolve the named symbol. First, try looking it up in the main library of
  // the execution session. If there is no such symbol, try looking it up in
  // the current process (for example, if it is a standard library function).
  // Return `nullptr` if the lookup fails.
  llvm::JITSymbol findSymbol(const std::string &name) override {
    auto mainLibSymbol = session.lookup({&session.getMainJITDylib()}, name);
    if (mainLibSymbol)
      return mainLibSymbol.get();
    auto address = llvm::RTDyldMemoryManager::getSymbolAddressInProcess(name);
    if (!address) {
      llvm::errs() << "Could not look up: " << name << '\n';
      return nullptr;
    }
    return llvm::JITSymbol(address, llvm::JITSymbolFlags::Exported);
  }

private:
  llvm::orc::ExecutionSession &session;
};
} // end anonymous namespace

namespace mlir {
namespace impl {

/// Wrapper class around DynamicLibrarySearchGenerator to allow searching
/// in-process symbols that have not been explicitly exported.
/// This first tries to resolve a symbol by using DynamicLibrarySearchGenerator.
/// For symbols that are not found this way, it then uses
/// `llvm::sys::DynamicLibrary::SearchForAddressOfSymbol` to find symbols that
/// have previously been registered with `llvm::sys::DynamicLibrary::AddSymbol`.
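///
/// For example (an illustrative sketch, not code from this file; the helper
/// name `print_f32` is hypothetical), a host program can make a function
/// visible to JIT-compiled code by registering it before any lookup happens:
///
///   extern "C" void print_f32(float f) { llvm::outs() << f << '\n'; }
///   ...
///   llvm::sys::DynamicLibrary::AddSymbol("print_f32",
///                                        reinterpret_cast<void *>(&print_f32));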
class SearchGenerator {
public:
  SearchGenerator(char GlobalPrefix)
      : defaultGenerator(cantFail(
            llvm::orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(
                GlobalPrefix))) {}

  // This function forwards to DynamicLibrarySearchGenerator::operator() and
  // adds an extra resolution for names explicitly registered via
  // `llvm::sys::DynamicLibrary::AddSymbol`.
  Expected<llvm::orc::SymbolNameSet>
  operator()(llvm::orc::JITDylib &JD, const llvm::orc::SymbolNameSet &Names) {
    auto res = defaultGenerator(JD, Names);
    if (!res)
      return res;
    llvm::orc::SymbolMap newSymbols;
    for (auto &Name : Names) {
      if (res.get().count(Name) > 0)
        continue;
      res.get().insert(Name);
      auto addedSymbolAddress =
          llvm::sys::DynamicLibrary::SearchForAddressOfSymbol(*Name);
      if (!addedSymbolAddress)
        continue;
      llvm::JITEvaluatedSymbol Sym(
          reinterpret_cast<uintptr_t>(addedSymbolAddress),
          llvm::JITSymbolFlags::Exported);
      newSymbols[Name] = Sym;
    }
    if (!newSymbols.empty())
      cantFail(JD.define(absoluteSymbols(std::move(newSymbols))));
    return res;
  }

private:
  llvm::orc::DynamicLibrarySearchGenerator defaultGenerator;
};

// Simple layered Orc JIT compilation engine.
class OrcJIT {
public:
  using IRTransformer = std::function<Error(llvm::Module *)>;

  // Construct a JIT engine for the target host defined by `machineBuilder`,
  // using the data layout provided as `layout`. Set up the object layer to use
  // our custom memory manager in order to resolve calls to library functions
  // present in the process.
  OrcJIT(llvm::orc::JITTargetMachineBuilder machineBuilder,
         llvm::DataLayout layout, IRTransformer transform)
      : irTransformer(transform),
        objectLayer(
            session,
            [this]() { return llvm::make_unique<MemoryManager>(session); }),
        compileLayer(
            session, objectLayer,
            llvm::orc::ConcurrentIRCompiler(std::move(machineBuilder))),
        transformLayer(session, compileLayer, makeIRTransformFunction()),
        dataLayout(layout), mangler(session, this->dataLayout),
        threadSafeCtx(llvm::make_unique<llvm::LLVMContext>()) {
    session.getMainJITDylib().setGenerator(
        SearchGenerator(layout.getGlobalPrefix()));
  }

  // Create a JIT engine for the current host.
  static Expected<std::unique_ptr<OrcJIT>>
  createDefault(IRTransformer transformer) {
    auto machineBuilder = llvm::orc::JITTargetMachineBuilder::detectHost();
    if (!machineBuilder)
      return machineBuilder.takeError();

    auto dataLayout = machineBuilder->getDefaultDataLayoutForTarget();
    if (!dataLayout)
      return dataLayout.takeError();

    return llvm::make_unique<OrcJIT>(std::move(*machineBuilder),
                                     std::move(*dataLayout), transformer);
  }

  // Add an LLVM module to the main library managed by the JIT engine.
  Error addModule(std::unique_ptr<llvm::Module> M) {
    return transformLayer.add(
        session.getMainJITDylib(),
        llvm::orc::ThreadSafeModule(std::move(M), threadSafeCtx));
  }

  // Look up a symbol in the main library managed by the JIT engine.
  Expected<llvm::JITEvaluatedSymbol> lookup(StringRef Name) {
    return session.lookup({&session.getMainJITDylib()}, mangler(Name.str()));
  }

private:
  // Wrap the `irTransformer` into a function that can be called by the
  // IRTransformLayer. If `irTransformer` is not set up, return the module
  // unchanged, without errors.
  llvm::orc::IRTransformLayer::TransformFunction makeIRTransformFunction() {
    return [this](llvm::orc::ThreadSafeModule module,
                  const llvm::orc::MaterializationResponsibility &resp)
               -> Expected<llvm::orc::ThreadSafeModule> {
      (void)resp;
      if (!irTransformer)
        return std::move(module);
      if (Error err = irTransformer(module.getModule()))
        return std::move(err);
      return std::move(module);
    };
  }

  IRTransformer irTransformer;
  llvm::orc::ExecutionSession session;
  llvm::orc::RTDyldObjectLinkingLayer objectLayer;
  llvm::orc::IRCompileLayer compileLayer;
  llvm::orc::IRTransformLayer transformLayer;
  llvm::DataLayout dataLayout;
  llvm::orc::MangleAndInterner mangler;
  llvm::orc::ThreadSafeContext threadSafeCtx;
};
} // end namespace impl
} // namespace mlir

// Wrap a string into an llvm::StringError.
static inline Error make_string_error(const llvm::Twine &message) {
  return llvm::make_error<llvm::StringError>(message.str(),
                                             llvm::inconvertibleErrorCode());
}

// Given a list of PassRegistryEntry coming from a higher level, populates the
// given pass manager and appends the default set of passes required to lower
// to LLVM IR.
// Currently, these passes are:
// - constant folding
// - CSE
// - canonicalization
// - affine lowering
static void getDefaultPasses(
    PassManager &manager,
    const std::vector<const mlir::PassRegistryEntry *> &mlirPassRegistryList) {
  // Run each of the passes that were selected.
  for (const auto *passEntry : mlirPassRegistryList)
    passEntry->addToPipeline(manager);

  // Append the extra passes for lowering to LLVM IR.
  manager.addPass(mlir::createCanonicalizerPass());
  manager.addPass(mlir::createCSEPass());
  manager.addPass(mlir::createCanonicalizerPass());
  manager.addPass(mlir::createLowerAffinePass());
  manager.addPass(mlir::createConvertToLLVMIRPass());
}

// Set up the LLVM target triple from the current machine.
bool ExecutionEngine::setupTargetTriple(llvm::Module *llvmModule) {
  // Set up the machine properties from the current architecture.
  auto targetTriple = llvm::sys::getDefaultTargetTriple();
  std::string errorMessage;
  auto target = llvm::TargetRegistry::lookupTarget(targetTriple, errorMessage);
  if (!target) {
    llvm::errs() << "NO target: " << errorMessage << "\n";
    return true;
  }
  auto machine =
      target->createTargetMachine(targetTriple, "generic", "", {}, {});
  llvmModule->setDataLayout(machine->createDataLayout());
  llvmModule->setTargetTriple(targetTriple);
  return false;
}

static std::string makePackedFunctionName(StringRef name) {
  return "_mlir_" + name.str();
}

// For each function in the LLVM module, define an interface function that wraps
// all the arguments of the original function and all its results into an i8**
// pointer to provide a unified invocation interface.
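//
// As an illustration (a rough sketch of the intent, not the exact IR that gets
// emitted; `foo` is a made-up example), for a function `float foo(float a,
// float b)` the generated wrapper behaves like:
//
//   void _mlir_foo(void **args) {
//     float a = *reinterpret_cast<float *>(args[0]);
//     float b = *reinterpret_cast<float *>(args[1]);
//     *reinterpret_cast<float *>(args[2]) = foo(a, b);
//   }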
void packFunctionArguments(llvm::Module *module) {
  auto &ctx = module->getContext();
  llvm::IRBuilder<> builder(ctx);
  llvm::DenseSet<llvm::Function *> interfaceFunctions;
  for (auto &func : module->getFunctionList()) {
    if (func.isDeclaration()) {
      continue;
    }
    if (interfaceFunctions.count(&func)) {
      continue;
    }

    // Given a function `foo(<...>)`, define the interface function
    // `_mlir_foo(i8**)`.
    auto newType = llvm::FunctionType::get(
        builder.getVoidTy(), builder.getInt8PtrTy()->getPointerTo(),
        /*isVarArg=*/false);
    auto newName = makePackedFunctionName(func.getName());
    auto funcCst = module->getOrInsertFunction(newName, newType);
    llvm::Function *interfaceFunc =
        llvm::cast<llvm::Function>(funcCst.getCallee());
    interfaceFunctions.insert(interfaceFunc);

    // Extract the arguments from the type-erased argument list and cast them
    // to the proper types.
    auto bb = llvm::BasicBlock::Create(ctx);
    bb->insertInto(interfaceFunc);
    builder.SetInsertPoint(bb);
    llvm::Value *argList = interfaceFunc->arg_begin();
    llvm::SmallVector<llvm::Value *, 8> args;
    args.reserve(llvm::size(func.args()));
    for (auto &indexedArg : llvm::enumerate(func.args())) {
      llvm::Value *argIndex = llvm::Constant::getIntegerValue(
          builder.getInt64Ty(), llvm::APInt(64, indexedArg.index()));
      llvm::Value *argPtrPtr = builder.CreateGEP(argList, argIndex);
      llvm::Value *argPtr = builder.CreateLoad(argPtrPtr);
      argPtr = builder.CreateBitCast(
          argPtr, indexedArg.value().getType()->getPointerTo());
      llvm::Value *arg = builder.CreateLoad(argPtr);
      args.push_back(arg);
    }

    // Call the implementation function with the extracted arguments.
    llvm::Value *result = builder.CreateCall(&func, args);

    // The function produces at most one result, which may be of type `void`.
    if (!result->getType()->isVoidTy()) {
      llvm::Value *retIndex = llvm::Constant::getIntegerValue(
          builder.getInt64Ty(), llvm::APInt(64, llvm::size(func.args())));
      llvm::Value *retPtrPtr = builder.CreateGEP(argList, retIndex);
      llvm::Value *retPtr = builder.CreateLoad(retPtrPtr);
      retPtr = builder.CreateBitCast(retPtr, result->getType()->getPointerTo());
      builder.CreateStore(result, retPtr);
    }

    // The interface function returns void.
    builder.CreateRetVoid();
  }
}

// Out of line for PIMPL unique_ptr.
ExecutionEngine::~ExecutionEngine() = default;

Expected<std::unique_ptr<ExecutionEngine>> ExecutionEngine::create(
    Module *m, PassManager *pm,
    std::function<llvm::Error(llvm::Module *)> transformer) {
  auto engine = llvm::make_unique<ExecutionEngine>();
  auto expectedJIT = impl::OrcJIT::createDefault(transformer);
  if (!expectedJIT)
    return expectedJIT.takeError();

  if (pm && failed(pm->run(m)))
    return make_string_error("passes failed");

  auto llvmModule = translateModuleToLLVMIR(*m);
  if (!llvmModule)
    return make_string_error("could not convert to LLVM IR");
  // FIXME: the triple should be passed to the translation or dialect conversion
  // instead of this. Currently, the LLVM module created above has no triple
  // associated with it.
  setupTargetTriple(llvmModule.get());
  packFunctionArguments(llvmModule.get());

  if (auto err = (*expectedJIT)->addModule(std::move(llvmModule)))
    return std::move(err);
  engine->jit = std::move(*expectedJIT);

  return std::move(engine);
}

Expected<std::unique_ptr<ExecutionEngine>> ExecutionEngine::create(
    Module *m, std::function<llvm::Error(llvm::Module *)> transformer) {
  // Construct and run the default MLIR pipeline.
  PassManager manager;
  getDefaultPasses(manager, {});
  return create(m, &manager, transformer);
}

Expected<void (*)(void **)> ExecutionEngine::lookup(StringRef name) const {
  auto expectedSymbol = jit->lookup(makePackedFunctionName(name));
  if (!expectedSymbol)
    return expectedSymbol.takeError();
  auto rawFPtr = expectedSymbol->getAddress();
  auto fptr = reinterpret_cast<void (*)(void **)>(rawFPtr);
  if (!fptr)
    return make_string_error("looked up function is null");
  return fptr;
}

llvm::Error ExecutionEngine::invoke(StringRef name,
                                    MutableArrayRef<void *> args) {
  auto expectedFPtr = lookup(name);
  if (!expectedFPtr)
    return expectedFPtr.takeError();
  auto fptr = *expectedFPtr;

  (*fptr)(args.data());

  return llvm::Error::success();
}
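
// Example client-side usage of the engine (an illustrative sketch; the module
// `m`, the `transformer` callback, and a JIT-compiled function `foo` with a
// `(f32) -> f32` signature are assumptions made for this example):
//
//   auto expectedEngine = ExecutionEngine::create(m, transformer);
//   if (!expectedEngine)
//     return expectedEngine.takeError();
//   auto &engine = *expectedEngine;
//   float input = 42.0f, output = 0.0f;
//   llvm::SmallVector<void *, 2> args{&input, &output};
//   if (llvm::Error err = engine->invoke("foo", args))
//     llvm::logAllUnhandledErrors(std::move(err), llvm::errs(), "JIT error: ");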