Simple CPU runner

This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class that compiles and links
the module, and provides an interface for obtaining function pointers to the
JIT-compiled MLIR functions and for invoking those functions directly. Since
function pointers need to be cast to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)`, where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, optionally followed by a pointer to the result of the function.
Additionally, the ExecutionEngine is set up to resolve library functions to
those available in the current process, enabling support for, e.g., simple C
library calls.

For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (which can be mapped to
C structs) with a pointer to the data, followed by dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extended if necessary.

Provide a binary for the runner and a test that exercises it.

PiperOrigin-RevId: 230876363
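As a caller-side illustration of the packed `void (void **)` convention, the
sketch below shows how a host program might create an engine and invoke a
JIT-compiled function. This is a minimal sketch, not part of this change: the
function name `foo` and its `(float, float) -> float` signature are
hypothetical; only `ExecutionEngine::create`, `ExecutionEngine::invoke`, and
the argument-packing scheme come from the code in this file.

```cpp
#include "mlir/ExecutionEngine/ExecutionEngine.h"
#include "mlir/IR/Module.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/Error.h"

// Invoke a hypothetical JIT-compiled function `float foo(float, float)`
// through the packed `void (void **)` interface.
llvm::Error runFoo(mlir::ModuleOp module) {
  auto expectedEngine = mlir::ExecutionEngine::create(
      module, /*transformer=*/{}, /*sharedLibPaths=*/{},
      /*enableObjectCache=*/false);
  if (!expectedEngine)
    return expectedEngine.takeError();
  auto &engine = *expectedEngine;

  // Pack a pointer to each argument, followed by a pointer to the result slot.
  float lhs = 1.0f, rhs = 2.0f, result = 0.0f;
  llvm::SmallVector<void *, 3> args{&lhs, &rhs, &result};

  // `invoke` looks up the `_mlir_foo` wrapper generated by
  // packFunctionArguments and calls it with the packed argument list.
  if (llvm::Error error = engine->invoke("foo", args))
    return error;
  // `result` now holds foo(1.0f, 2.0f).
  return llvm::Error::success();
}
```
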
//===- ExecutionEngine.cpp - MLIR Execution engine and utils --------------===//
//
// Copyright 2019 The MLIR Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// =============================================================================
//
// This file implements the execution engine for MLIR modules based on LLVM Orc
// JIT engine.
//
//===----------------------------------------------------------------------===//

#include "mlir/ExecutionEngine/ExecutionEngine.h"
#include "mlir/IR/Function.h"
#include "mlir/IR/Module.h"
#include "mlir/Support/FileUtilities.h"
#include "mlir/Target/LLVMIR.h"

#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/IRTransformLayer.h"
#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
#include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/TargetRegistry.h"
#include "llvm/Support/ToolOutputFile.h"

using namespace mlir;
using llvm::dbgs;
using llvm::Error;
using llvm::errs;
using llvm::Expected;
using llvm::LLVMContext;
using llvm::MemoryBuffer;
using llvm::MemoryBufferRef;
using llvm::Module;
using llvm::SectionMemoryManager;
using llvm::StringError;
using llvm::Triple;
using llvm::orc::DynamicLibrarySearchGenerator;
using llvm::orc::ExecutionSession;
using llvm::orc::IRCompileLayer;
using llvm::orc::JITTargetMachineBuilder;
using llvm::orc::RTDyldObjectLinkingLayer;
using llvm::orc::ThreadSafeModule;
using llvm::orc::TMOwningSimpleCompiler;

// Wrap a string into an llvm::StringError.
static inline Error make_string_error(const llvm::Twine &message) {
  return llvm::make_error<StringError>(message.str(),
                                       llvm::inconvertibleErrorCode());
}

namespace mlir {

void SimpleObjectCache::notifyObjectCompiled(const Module *M,
                                             MemoryBufferRef ObjBuffer) {
  cachedObjects[M->getModuleIdentifier()] = MemoryBuffer::getMemBufferCopy(
      ObjBuffer.getBuffer(), ObjBuffer.getBufferIdentifier());
}

std::unique_ptr<MemoryBuffer> SimpleObjectCache::getObject(const Module *M) {
  auto I = cachedObjects.find(M->getModuleIdentifier());
  if (I == cachedObjects.end()) {
    dbgs() << "No object for " << M->getModuleIdentifier()
           << " in cache. Compiling.\n";
    return nullptr;
  }
  dbgs() << "Object for " << M->getModuleIdentifier()
         << " loaded from cache.\n";
  return MemoryBuffer::getMemBuffer(I->second->getMemBufferRef());
}

void SimpleObjectCache::dumpToObjectFile(llvm::StringRef outputFilename) {
  // Set up the output file.
  std::string errorMessage;
  auto file = openOutputFile(outputFilename, &errorMessage);
  if (!file) {
    llvm::errs() << errorMessage << "\n";
    return;
  }

  // Dump the object generated for a single module to the output file.
  assert(cachedObjects.size() == 1 && "Expected only one object entry.");
  auto &cachedObject = cachedObjects.begin()->second;
  file->os() << cachedObject->getBuffer();
  file->keep();
}

void ExecutionEngine::dumpToObjectFile(llvm::StringRef filename) {
  cache->dumpToObjectFile(filename);
}

// Set up the LLVM target triple from the current machine.
bool ExecutionEngine::setupTargetTriple(Module *llvmModule) {
  // Set up the machine properties from the current architecture.
  auto targetTriple = llvm::sys::getDefaultTargetTriple();
  std::string errorMessage;
  auto target = llvm::TargetRegistry::lookupTarget(targetTriple, errorMessage);
  if (!target) {
    errs() << "NO target: " << errorMessage << "\n";
    return true;
  }
  auto machine =
      target->createTargetMachine(targetTriple, "generic", "", {}, {});
  llvmModule->setDataLayout(machine->createDataLayout());
  llvmModule->setTargetTriple(targetTriple);
  return false;
}

static std::string makePackedFunctionName(StringRef name) {
  return "_mlir_" + name.str();
}

// For each function in the LLVM module, define an interface function that
// wraps all the arguments of the original function and all its results into
// an i8** pointer to provide a unified invocation interface.
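//
// For instance, for an LLVM function `float foo(float, float)`, the generated
// `_mlir_foo(i8**)` wrapper behaves conceptually like the following C sketch
// (illustrative only; the code below emits the equivalent LLVM IR directly):
//
//   void _mlir_foo(void **args) {
//     float a = *(float *)args[0];
//     float b = *(float *)args[1];
//     *(float *)args[2] = foo(a, b);
//   }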
void packFunctionArguments(Module *module) {
  auto &ctx = module->getContext();
  llvm::IRBuilder<> builder(ctx);
  llvm::DenseSet<llvm::Function *> interfaceFunctions;
  for (auto &func : module->getFunctionList()) {
    if (func.isDeclaration()) {
      continue;
    }
    if (interfaceFunctions.count(&func)) {
      continue;
    }

    // Given a function `foo(<...>)`, define the interface function
    // `_mlir_foo(i8**)`.
    auto newType = llvm::FunctionType::get(
        builder.getVoidTy(), builder.getInt8PtrTy()->getPointerTo(),
        /*isVarArg=*/false);
    auto newName = makePackedFunctionName(func.getName());
    auto funcCst = module->getOrInsertFunction(newName, newType);
    llvm::Function *interfaceFunc =
        llvm::cast<llvm::Function>(funcCst.getCallee());
    interfaceFunctions.insert(interfaceFunc);

    // Extract the arguments from the type-erased argument list and cast them
    // to the proper types.
    auto bb = llvm::BasicBlock::Create(ctx);
    bb->insertInto(interfaceFunc);
    builder.SetInsertPoint(bb);
    llvm::Value *argList = interfaceFunc->arg_begin();
    llvm::SmallVector<llvm::Value *, 8> args;
    args.reserve(llvm::size(func.args()));
    for (auto &indexedArg : llvm::enumerate(func.args())) {
      llvm::Value *argIndex = llvm::Constant::getIntegerValue(
          builder.getInt64Ty(), llvm::APInt(64, indexedArg.index()));
      llvm::Value *argPtrPtr = builder.CreateGEP(argList, argIndex);
      llvm::Value *argPtr = builder.CreateLoad(argPtrPtr);
      argPtr = builder.CreateBitCast(
          argPtr, indexedArg.value().getType()->getPointerTo());
      llvm::Value *arg = builder.CreateLoad(argPtr);
      args.push_back(arg);
    }

    // Call the implementation function with the extracted arguments.
    llvm::Value *result = builder.CreateCall(&func, args);

    // Assuming the result is one value, potentially of type `void`.
    if (!result->getType()->isVoidTy()) {
      llvm::Value *retIndex = llvm::Constant::getIntegerValue(
          builder.getInt64Ty(), llvm::APInt(64, llvm::size(func.args())));
      llvm::Value *retPtrPtr = builder.CreateGEP(argList, retIndex);
      llvm::Value *retPtr = builder.CreateLoad(retPtrPtr);
      retPtr = builder.CreateBitCast(retPtr, result->getType()->getPointerTo());
      builder.CreateStore(result, retPtr);
    }

    // The interface function returns void.
    builder.CreateRetVoid();
  }
}

ExecutionEngine::ExecutionEngine(bool enableObjectCache)
    : cache(enableObjectCache ? new SimpleObjectCache() : nullptr) {}

Expected<std::unique_ptr<ExecutionEngine>> ExecutionEngine::create(
    ModuleOp m, std::function<Error(llvm::Module *)> transformer,
    ArrayRef<StringRef> sharedLibPaths, bool enableObjectCache) {
  auto engine = std::make_unique<ExecutionEngine>(enableObjectCache);
  std::unique_ptr<llvm::LLVMContext> ctx(new llvm::LLVMContext);
  auto llvmModule = translateModuleToLLVMIR(m);
  if (!llvmModule)
    return make_string_error("could not convert to LLVM IR");
  // FIXME: the triple should be passed to the translation or dialect conversion
  // instead of this. Currently, the LLVM module created above has no triple
  // associated with it.
  setupTargetTriple(llvmModule.get());
  packFunctionArguments(llvmModule.get());

  // Clone module in a new LLVMContext since translateModuleToLLVMIR buries
  // ownership too deeply.
  // TODO(zinenko): Reevaluate model of ownership of LLVMContext in LLVMDialect.
  SmallVector<char, 1> buffer;
  {
    llvm::raw_svector_ostream os(buffer);
    WriteBitcodeToFile(*llvmModule, os);
  }
  llvm::MemoryBufferRef bufferRef(llvm::StringRef(buffer.data(), buffer.size()),
                                  "cloned module buffer");
  auto expectedModule = parseBitcodeFile(bufferRef, *ctx);
  if (!expectedModule)
    return expectedModule.takeError();
  std::unique_ptr<Module> deserModule = std::move(*expectedModule);

  // Callback to create the object layer with symbol resolution to current
  // process and dynamically linked libraries.
  auto objectLinkingLayerCreator = [&](ExecutionSession &session,
                                       const Triple &TT) {
    auto objectLayer = std::make_unique<RTDyldObjectLinkingLayer>(
        session, []() { return std::make_unique<SectionMemoryManager>(); });
    auto dataLayout = deserModule->getDataLayout();

    // Resolve symbols that are statically linked in the current process.
    session.getMainJITDylib().addGenerator(
        cantFail(DynamicLibrarySearchGenerator::GetForCurrentProcess(
            dataLayout.getGlobalPrefix())));

    // Resolve symbols from shared libraries.
    for (auto libPath : sharedLibPaths) {
      auto mb = llvm::MemoryBuffer::getFile(libPath);
      if (!mb) {
        errs() << "Fail to create MemoryBuffer for: " << libPath << "\n";
        continue;
      }
      auto &JD = session.createJITDylib(libPath);
      auto loaded = DynamicLibrarySearchGenerator::Load(
          libPath.data(), dataLayout.getGlobalPrefix());
      if (!loaded) {
        errs() << "Could not load: " << libPath << "\n";
        continue;
      }
      JD.addGenerator(std::move(*loaded));
      cantFail(objectLayer->add(JD, std::move(mb.get())));
    }

    return objectLayer;
  };

  // Callback to inspect the cache and recompile on demand. This follows Lang's
  // LLJITWithObjectCache example.
  auto compileFunctionCreator = [&](JITTargetMachineBuilder JTMB)
      -> Expected<IRCompileLayer::CompileFunction> {
    auto TM = JTMB.createTargetMachine();
    if (!TM)
      return TM.takeError();
    return IRCompileLayer::CompileFunction(
        TMOwningSimpleCompiler(std::move(*TM), engine->cache.get()));
  };

  // Create the LLJIT by calling the LLJITBuilder with the two callbacks above.
  auto jit =
      cantFail(llvm::orc::LLJITBuilder()
                   .setCompileFunctionCreator(compileFunctionCreator)
                   .setObjectLinkingLayerCreator(objectLinkingLayerCreator)
                   .create());

  // Add a ThreadSafeModule to the engine and return.
  ThreadSafeModule tsm(std::move(deserModule), std::move(ctx));
  if (transformer)
    cantFail(tsm.withModuleDo(
        [&](llvm::Module &module) { return transformer(&module); }));
  cantFail(jit->addIRModule(std::move(tsm)));
  engine->jit = std::move(jit);

  return std::move(engine);
}
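
// Note on usage: the `transformer` passed to ExecutionEngine::create runs over
// the llvm::Module before it is added to the JIT, which is where clients hook
// in LLVM-level transformations or optimization passes. A minimal hand-written
// client sketch (hypothetical code, not part of this file):
//
//   auto expectedEngine = mlir::ExecutionEngine::create(
//       module,
//       [](llvm::Module *m) -> llvm::Error {
//         // Inspect or transform the module here, e.g. run an optimization
//         // pipeline, then report success.
//         return llvm::Error::success();
//       },
//       /*sharedLibPaths=*/{}, /*enableObjectCache=*/false);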

Expected<void (*)(void **)> ExecutionEngine::lookup(StringRef name) const {
  auto expectedSymbol = jit->lookup(makePackedFunctionName(name));
  if (!expectedSymbol)
    return expectedSymbol.takeError();
  auto rawFPtr = expectedSymbol->getAddress();
  auto fptr = reinterpret_cast<void (*)(void **)>(rawFPtr);
  if (!fptr)
    return make_string_error("looked up function is null");
  return fptr;
}

Error ExecutionEngine::invoke(StringRef name, MutableArrayRef<void *> args) {
  auto expectedFPtr = lookup(name);
  if (!expectedFPtr)
    return expectedFPtr.takeError();
  auto fptr = *expectedFPtr;

  (*fptr)(args.data());

  return Error::success();
}

} // end namespace mlir