Summary:
Prototype of a JIT compiler that utilizes ThinLTO summaries to compile modules ahead of time. This is an implementation of the concept I presented in my "ThinLTO Summaries in JIT Compilation" talk at the 2018 Developers' Meeting: http://llvm.org/devmtg/2018-10/talk-abstracts.html#lt8
Up front, the JIT populates the *combined ThinLTO module index*, which provides fast access to the global call graph and to module paths by function. Next, it loads the main function's module and compiles it. All functions in the module are emitted with prolog instructions that *fire a discovery flag* once execution reaches them. In parallel, the *discovery thread* busy-watches the existing flags. Once it detects that one has fired, it uses the module index to find all functions that are reachable from it within a given number of calls and submits their defining modules to the compilation pipeline.
While execution continues, more flags are fired and further modules are added. Ideally, the JIT can be tuned so that in the majority of cases the code on the execution path has already been compiled by the time it is reached. Where that doesn't work out, the JIT has a *definition generator* in place that loads the defining module when a missing function is reached.
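As a rough illustration of the discovery mechanism (no real LLVM APIs here; NumFunctions, reachableModules, and submitForCompilation are placeholders), the emitted prologs and the busy-watching discovery thread behave roughly like this:

  #include <atomic>
  #include <chrono>
  #include <string>
  #include <thread>
  #include <vector>

  constexpr unsigned NumFunctions = 1024;       // placeholder capacity
  static std::atomic<bool> Flags[NumFunctions]; // one discovery flag per function

  // Placeholder for a combined-index query: modules defining functions
  // reachable from FnId within Depth calls.
  std::vector<std::string> reachableModules(unsigned FnId, unsigned Depth);

  // Placeholder for handing a module to the compilation pipeline.
  void submitForCompilation(const std::string &ModulePath);

  // What the emitted prolog of function FnId amounts to.
  inline void fireDiscoveryFlag(unsigned FnId) {
    Flags[FnId].store(true, std::memory_order_release);
  }

  // The discovery thread busy-watches the flags and reacts to fired ones.
  void discoveryLoop(const std::atomic<bool> &ShutDown, unsigned Depth) {
    while (!ShutDown.load()) {
      for (unsigned I = 0; I != NumFunctions; ++I)
        if (Flags[I].exchange(false)) // handle each firing exactly once
          for (const std::string &M : reachableModules(I, Depth))
            submitForCompilation(M);
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
  }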
Reviewers: lhames, dblaikie, jfb, tejohnson, pree-jackie, AlexDenisov, kavon
Subscribers: mgorny, mehdi_amini, inglorion, hiraditya, steven_wu, dexonsmith, arphaman, jfb, merge_guards_bot, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72486
ObjectLinkingLayer::Plugin instances can be used to receive events from
ObjectLinkingLayer, and to inspect/modify JITLink linker graphs. This example
shows how to write and set up a plugin to dump the linker graph at various
points in the linking process.
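A minimal sketch of such a plugin is shown below. The exact modifyPassConfig signature and the LinkGraph inspection API have changed between LLVM releases (and later releases require overriding additional notification methods), so treat the details as assumptions rather than a drop-in snippet:

  #include "llvm/ExecutionEngine/JITLink/JITLink.h"
  #include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;
  using namespace llvm::orc;

  class GraphDumpPlugin : public ObjectLinkingLayer::Plugin {
  public:
    void modifyPassConfig(MaterializationResponsibility &MR, const Triple &TT,
                          jitlink::PassConfiguration &Config) override {
      // Run after pruning, before layout: print the defined symbols.
      Config.PostPrunePasses.push_back([](jitlink::LinkGraph &G) -> Error {
        for (auto *Sym : G.defined_symbols())
          errs() << "defined symbol: " << Sym->getName() << "\n";
        return Error::success();
      });
    }
  };

  // Attach to an existing ObjectLinkingLayer:
  //   ObjLinkingLayer.addPlugin(std::make_unique<GraphDumpPlugin>());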
These examples were all copied and adapted from the original HowToUseLLJIT
example code; however, the calls to cl::ParseCommandLineOptions were not
updated.
This patch makes the target triple available via the LLJIT interface, and moves
the IRTransformLayer from LLLazyJIT down into LLJIT. Together these changes make
it easier to use the lazyReexports utility with LLJIT, and to apply IR
transforms to code as it is compiled in LLJIT (rather than requiring transforms
to be applied manually before code is added). A code example is added in
llvm/examples/LLJITExamples/LLJITWithLazyReexports
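For example, with these changes an IR transform can be installed directly on an LLJIT instance along these lines (a sketch; the exact parameter types of the transform function vary slightly between LLVM versions):

  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Assumes an already-constructed LLJIT instance J.
  void installTransform(LLJIT &J) {
    errs() << "JITing for target: " << J.getTargetTriple().str() << "\n";
    J.getIRTransformLayer().setTransform(
        [](ThreadSafeModule TSM, const MaterializationResponsibility &R)
            -> Expected<ThreadSafeModule> {
          TSM.withModuleDo([](Module &M) {
            // Apply IR transforms to M here (e.g. run an optimization pipeline).
          });
          return std::move(TSM);
        });
  }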
- Update documentation now that the move to monorepo has been made
- Do not tie compiler extension testing to LLVM_BUILD_EXAMPLES
- No need to specify LLVM libraries for plugins
- Add NO_MODULE option to match Polly-specific requirements (i.e. building the
module *and* linking it statically)
- Issue a warning when building the compiler extension with
LLVM_BYE_LINK_INTO_TOOLS=ON, as it modifies the behavior of clang, which only
makes sense for testing purposes.
Still mark llvm/test/Feature/load_extension.ll as XFAIL because of a
ManagedStatic dependency that's going to be fixed in a separate commit.
Differential Revision: https://reviews.llvm.org/D72327
There are quite a lot of references to Polly in the LLVM CMake codebase. However,
the registration pattern used by Polly could be useful to other external
projects: thanks to that mechanism, it becomes possible to develop LLVM
extensions without touching the LLVM code base.
This patch has two effects:
1. Remove all code specific to Polly in the llvm/clang codebase, replacing it
with a generic mechanism
2. Provide a generic mechanism to register compiler extensions.
A compiler extension is similar to a pass plugin, with the notable difference
that the compiler extension can be configured to be built dynamically (like
plugins) or statically (like regular passes).
As a result, people who want to add extra passes to clang/opt can do so from a
separate code repository while still having their passes linked into clang/opt
as built-in passes.
Differential Revision: https://reviews.llvm.org/D61446
LLJIT now uses JITLink/ObjectLinkingLayer by default where available, so
these steps aren't required to use it. The tutorial is still useful though:
Clients can use it to test alternative linking layer implementations (e.g.
handing off to the system linker) or to test implementations of JITLink that
are still under development.
This patch removes the magic "main" JITDylib from ExecutionEngine. The main
JITDylib was created automatically at ExecutionSession construction time, and
all subsequently created JITDylibs were added to the main JITDylib's
links-against list by default. This saves a couple of lines of boilerplate for
simple JIT setups, but this isn't worth introducing magical behavior for.
ORCv2 clients should now construct their own main JITDylib using
ExecutionSession::createJITDylib and set up its linkages manually using
JITDylib::setSearchOrder (or related methods in JITDylib).
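A minimal sketch of the new explicit setup follows; the argument shape of setSearchOrder (and whether the dylib itself must be listed) differs across LLVM versions, so treat the call as illustrative:

  #include "llvm/ExecutionEngine/Orc/Core.h"

  using namespace llvm::orc;

  void setUpDylibs(ExecutionSession &ES) {
    JITDylib &MainJD = ES.createJITDylib("main");
    JITDylib &UtilsJD = ES.createJITDylib("utils");
    // Make lookups in MainJD also search UtilsJD (matching non-exported
    // symbols). Exact argument types vary by LLVM version.
    MainJD.setSearchOrder({{&UtilsJD, true}});
  }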
The runAsMain function takes a pointer to a function with a standard C main
signature, int(*)(int, char*[]), and invokes it using the given arguments and
program name. The arguments are copied into writable temporary storage as
required by the C and C++ specifications, so runAsMain is safe to use when calling
main functions that modify their arguments in-place.
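A short usage sketch, assuming the JITEvaluatedSymbol-based lookup API of this era (the lookup return type has changed in later LLVM versions, and "jit-program" is an arbitrary placeholder program name):

  #include "llvm/ExecutionEngine/JITSymbol.h"
  #include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include "llvm/Support/Error.h"

  using namespace llvm;
  using namespace llvm::orc;

  int runJITedMain(LLJIT &J, ArrayRef<std::string> Args) {
    ExitOnError ExitOnErr;
    auto MainSym = ExitOnErr(J.lookup("main"));
    auto *MainFn =
        jitTargetAddressToPointer<int (*)(int, char *[])>(MainSym.getAddress());
    // runAsMain copies Args into writable storage before invoking MainFn.
    return runAsMain(MainFn, Args, StringRef("jit-program"));
  }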
This patch also uses the new runAsMain function to replace hand-rolled versions
in lli, llvm-jitlink, and the SpeculativeJIT example.
Adds a DumpObjects utility that can be used to dump JIT'd objects to disk.
Instances of DumpObjects may be used by ObjectTransformLayer as no-op
transforms.
This patch also adds an ObjectTransformLayer to LLJIT and an example of how
to use this utility to dump JIT'd objects in LLJIT.
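For example, dumping objects from LLJIT can look roughly like this (a sketch: the header location of DumpObjects and the accessor name may differ between LLVM versions, and "jit-objects" is an arbitrary dump directory):

  #include "llvm/ExecutionEngine/Orc/DebugUtils.h" // DumpObjects (location may vary)
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Dump every JIT'd object into the "jit-objects" directory as it passes
  // through LLJIT's ObjectTransformLayer.
  void enableObjectDumps(LLJIT &J) {
    J.getObjTransformLayer().setTransform(
        DumpObjects("jit-objects", /*IdentifierOverride=*/""));
  }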
Avoids the need to include TargetMachine.h from various places just for
an enum. Various other enums live here, such as the optimization level,
TLS model, etc. Data suggests that this change probably doesn't matter,
but it seems nice to have anyway.
This file lists every pass in LLVM, and is included by Pass.h, which is
very popular. Every time we add, remove, or rename a pass in LLVM, this
causes lots of recompilation.
I found this fact by looking at this table, which is sorted by the
number of times a file was changed over the last 100,000 git commits
multiplied by the number of object files that depend on it in the
current checkout:
  recompiles  touches  affected_files  header
  342380      95       3604            llvm/include/llvm/ADT/STLExtras.h
  314730      234      1345            llvm/include/llvm/InitializePasses.h
  307036      118      2602            llvm/include/llvm/ADT/APInt.h
  213049      59       3611            llvm/include/llvm/Support/MathExtras.h
  170422      47       3626            llvm/include/llvm/Support/Compiler.h
  162225      45       3605            llvm/include/llvm/ADT/Optional.h
  158319      63       2513            llvm/include/llvm/ADT/Triple.h
  140322      39       3598            llvm/include/llvm/ADT/StringRef.h
  137647      59       2333            llvm/include/llvm/Support/Error.h
  131619      73       1803            llvm/include/llvm/Support/FileSystem.h
Before this change, touching InitializePasses.h would cause 1345 files
to recompile. After this change, touching it only causes 550 compiles in
an incremental rebuild.
Reviewers: bkramer, asbirlea, bollu, jdoerfert
Differential Revision: https://reviews.llvm.org/D70211
This patch adds a new IRTransformations directory to llvm/examples/. This is
intended to serve as a new home for example transformations/analysis
code used by various tutorials.
If LLVM_BUILD_EXAMPLES is enabled, the ExamplesIRTransforms library is
linked into the opt binary and the example passes become available.
To start off with, it contains the CFG simplifications used in the IR
part of the 'Getting Started With LLVM: Basics' tutorial at the US LLVM
Developers Meeting 2019.
Reviewers: paquette, jfb, meikeb, lhames, kbarton
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D69416
Summary:
When creating an ORC remote JIT target, the current library split forces the target process to link large portions of LLVM (Core, ExecutionEngine, JITLink, Object, MC, Passes, RuntimeDyld, Support, Target, and TransformUtils). This occurs because the ORC RPC interfaces rely on the static globals that the ORC Error types require, which starts a cycle of pulling in more and more.
This patch breaks the ORC RPC Error implementations out into an "OrcError" library which only depends on LLVM Support. It also pulls the ORC RPC headers into their own subdirectory.
With this patch code can include the Orc/RPC/*.h headers and will only incur link dependencies on LLVMOrcError and LLVMSupport.
Reviewers: lhames
Reviewed By: lhames
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68732
ExecutionEngine.cpp contains the anchor() for the ObjectCache base class, so we
need an explicit dependency on it.
Patch by Stephen Neuendorffer. Thanks Stephen!
llvm-svn: 375461
JITLink is LLVM's newer jit-linker. It is an alternative to (and hopefully
eventually a replacement for) LLVM's older jit-linker, RuntimeDyld. Unlike
RuntimeDyld which requries JIT'd code to be complied with the large code
model, JITlink can link code compiled with the small code model, which is
the native code model for a number of targets (including all supported MachO
targets).
This example shows how to:
-- Create a JITLink InProcessMemoryManager
-- Set the code model to small
-- Use a JITLink backed ObjectLinkingLayer as the linking layer for LLJIT
(rather than the default RTDyldObjectLinkingLayer).
Note: This example will only work on platforms supported by JITLink. As of
this commit that's MachO/x86-64 and MachO/arm64.
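A rough sketch of these steps is shown below. How the InProcessMemoryManager is constructed and handed to ObjectLinkingLayer (and the exact ObjectLinkingLayerCreator signature) differ between LLVM releases, so treat the constructor calls as assumptions:

  #include "llvm/ExecutionEngine/JITLink/JITLinkMemoryManager.h"
  #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"

  using namespace llvm;
  using namespace llvm::orc;

  Expected<std::unique_ptr<LLJIT>> createJITLinkLLJIT() {
    auto JTMB = JITTargetMachineBuilder::detectHost();
    if (!JTMB)
      return JTMB.takeError();
    // Step 2: JITLink expects small-code-model code.
    JTMB->setCodeModel(CodeModel::Small);

    return LLJITBuilder()
        .setJITTargetMachineBuilder(std::move(*JTMB))
        // Steps 1 + 3: a JITLink-backed ObjectLinkingLayer with an
        // InProcessMemoryManager instead of RTDyldObjectLinkingLayer.
        .setObjectLinkingLayerCreator([](ExecutionSession &ES, const Triple &) {
          return std::make_unique<ObjectLinkingLayer>(
              ES, std::make_unique<jitlink::InProcessMemoryManager>());
        })
        .create();
  }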
llvm-svn: 375266
Summary:
This patch introduces SequenceBBQuery, a new heuristic for finding the functions that are likely to be called next. It tries to find the blocks containing calls in the order of the blocks' execution sequence.
It still uses BlockFrequencyAnalysis to find high-frequency blocks. For a handful of the hottest blocks (planned to be customizable), the algorithm traverses the CFG and discovers the caller blocks on the way to the entry basic block and the exit basic block. It uses a block hint to stop traversing already-visited blocks in both directions, implicitly assuming that once a block has been visited while discovering entry or exit nodes, revisiting it does not add much. It also uses branch probability info (cached results) to traverse only hot edges (planned to be customizable) from hot blocks. Without BPI, the algorithm would mostly return all the blocks in the CFG that contain calls.
It also changes the heuristic queries so that they don't maintain state, which makes them safe to call from multiple threads.
It also implements new instrumentation to avoid jumping into the JIT on every call to a function, with the help of _orc_speculate.decision.block and _orc_speculate.block.
The speculator registration mechanism is also changed - kudos to @lhames.
Open to review; mostly looking to improve the implementation of the SequenceBBQuery heuristic with good data structure choices.
Reviewers: lhames, dblaikie
Reviewed By: lhames
Subscribers: mgorny, hiraditya, mgrang, llvm-commits, lhames
Tags: #speculative_compilation_in_orc, #llvm
Differential Revision: https://reviews.llvm.org/D66399
llvm-svn: 370092
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
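For illustration (Widget is a placeholder type), the replacement is purely mechanical:

  #include <memory>

  struct Widget { int Value = 42; };

  int main() {
    // Previously: auto W = llvm::make_unique<Widget>();  // from STLExtras.h
    auto W = std::make_unique<Widget>(); // C++14 standard-library equivalent
    return W->Value == 42 ? 0 : 1;
  }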
llvm-svn: 369013
ThreadSafeModule/ThreadSafeContext are used to manage lifetimes and locking
for LLVMContexts in ORCv2. Prior to this patch contexts were locked as soon
as an associated Module was emitted (to be compiled and linked), and were not
unlocked until the emit call returned. This could lead to deadlocks if
interdependent modules that shared contexts were compiled on different threads:
when, during emission of the first module, the dependence was discovered the
second module (which would provide the required symbol) could not be emitted as
the thread emitting the first module still held the lock.
This patch eliminates this possibility by moving to a finer-grained locking
scheme. Each client holds the module lock only while they are actively operating
on it. To make this finer grained locking simpler/safer to implement this patch
removes the explicit lock method, 'getContextLock', from ThreadSafeModule and
replaces it with a new method, 'withModuleDo', that implicitly locks the context,
calls a user-supplied function object to operate on the Module, then implicitly
unlocks the context before returning the result.
  ThreadSafeModule TSM = getModule(...);
  size_t NumFunctions = TSM.withModuleDo(
      [](Module &M) { // <- context locked before entry to lambda.
        return M.size();
      });
Existing ORCv2 layers that operate on ThreadSafeModules are updated to use the
new method, which introduces Module locking into each of them.
llvm-svn: 367686
Summary:
ORCv1 is deprecated. The current aim is to remove it before the LLVM 10.0
release. This patch adds deprecation attributes to the ORCv1 layers and
utilities to warn clients of the change.
Reviewers: dblaikie, sgraenitz, AlexDenisov
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64609
llvm-svn: 366344
LLJITBuilder now has a setCompileFunctionCreator method which can be used to
construct a CompileFunction for the LLJIT instance being created. The motivating
use-case for this is supporting ObjectCaches, which can now be set up at
compile-function construction time. To demonstrate this an example project,
LLJITWithObjectCache, is included.
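A sketch of the idea, assuming MyCache is an llvm::ObjectCache implementation owned by the caller; the exact type the creator must return (a CompileFunction vs. an IRCompiler instance) has changed across LLVM versions, so this is not a drop-in snippet:

  #include "llvm/ExecutionEngine/ObjectCache.h"
  #include "llvm/ExecutionEngine/Orc/CompileUtils.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"

  using namespace llvm;
  using namespace llvm::orc;

  Expected<std::unique_ptr<LLJIT>> createCachingJIT(ObjectCache &MyCache) {
    return LLJITBuilder()
        .setCompileFunctionCreator(
            [&](JITTargetMachineBuilder JTMB)
                -> Expected<std::unique_ptr<IRCompileLayer::IRCompiler>> {
              auto TM = JTMB.createTargetMachine();
              if (!TM)
                return TM.takeError();
              // Compile with a SimpleCompiler that writes to / reads from MyCache.
              return std::make_unique<TMOwningSimpleCompiler>(std::move(*TM),
                                                              &MyCache);
            })
        .create();
  }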
llvm-svn: 365671
Recommit r352791 after tweaking DerivedTypes.h slightly, so that gcc
doesn't choke on it, hopefully.
Original Message:
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.
Then:
- update the CallInst/InvokeInst instruction creation functions to
take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.
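For example, code that previously needed a Function* can now pass the FunctionCallee straight through to call creation (the "log_i32" callee below is purely illustrative, and V is assumed to be an i32 value):

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  void emitLogCall(Module &M, IRBuilder<> &B, Value *V) {
    LLVMContext &Ctx = M.getContext();
    // getOrInsertFunction now returns a FunctionCallee ({FunctionType*, Value*}).
    FunctionCallee LogFn = M.getOrInsertFunction(
        "log_i32", Type::getVoidTy(Ctx), Type::getInt32Ty(Ctx));
    // CreateCall accepts the FunctionCallee directly; no cast to Function* needed.
    B.CreateCall(LogFn, {V});
  }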
One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.
However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)
Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
Differential Revision: https://reviews.llvm.org/D57315
llvm-svn: 352827