Renames the llvm/examples/LLJITExamples directory to llvm/examples/OrcV2Examples
since it is becoming a home for all OrcV2 examples, not just LLJIT.
See http://llvm.org/PR31103.
Initializers and deinitializers are used to implement C++ static constructors
and destructors, runtime registration for some languages (e.g. with the
Objective-C runtime for Objective-C/C++ code) and other tasks that would
typically be performed when a shared-object/dylib is loaded or unloaded by a
statically compiled program.
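For illustration (a minimal example of my own, not taken from this patch), the
following C++ translation unit is the kind of code that produces
llvm.global_ctors and llvm.global_dtors entries:

  // Hypothetical example: a static constructor/destructor pair.
  #include <cstdio>

  struct FileLogger {
    FileLogger() { File = std::fopen("jit.log", "w"); }  // runs at load (initializer)
    ~FileLogger() { if (File) std::fclose(File); }       // runs at unload (deinitializer)
    std::FILE *File = nullptr;
  };

  static FileLogger TheLogger;  // recorded via llvm.global_ctors / llvm.global_dtors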
MCJIT and ORC have historically provided limited support for discovering and
running initializers/deinitializers by scanning the llvm.global_ctors and
llvm.global_dtors variables and recording the functions to be run. This approach
suffers from several drawbacks: (1) It only works for IR inputs, not for object
files (including cached JIT'd objects). (2) It only works for initializers
described by llvm.global_ctors and llvm.global_dtors, however not all
initializers are described in this way (Objective-C, for example, describes
initializers via specially named metadata sections). (3) To make the
initializer/deinitializer functions described by llvm.global_ctors and
llvm.global_dtors searchable, they must be promoted to external linkage, polluting
the JIT symbol table (extra care must be taken to ensure this promotion does
not result in symbol name clashes).
This patch introduces several interdependent changes to ORCv2 to support the
construction of new initialization schemes, and includes an implementation of a
backwards-compatible llvm.global_ctors/llvm.global_dtors scanning scheme, and a
MachO-specific scheme that handles Objective-C runtime registration (if the
Objective-C runtime is available) enabling execution of LLVM IR compiled from
Objective-C and Swift.
The major changes included in this patch are:
(1) The MaterializationUnit and MaterializationResponsibility classes are
extended to describe an optional "initializer" symbol for the module (see the
getInitializerSymbol method on each class). The presence or absence of this
symbol indicates whether the module contains any initializers or
deinitializers. The initializer symbol otherwise behaves like any other:
searching for it triggers materialization.
(2) A new Platform interface is introduced in llvm/ExecutionEngine/Orc/Core.h
which provides the following callback interface:
- Error setupJITDylib(JITDylib &JD): Can be used to install standard symbols
in JITDylibs upon creation. E.g. __dso_handle.
- Error notifyAdding(JITDylib &JD, const MaterializationUnit &MU): Generally
used to record initializer symbols.
- Error notifyRemoving(JITDylib &JD, VModuleKey K): Used to notify a platform
that a module is being removed.
Platform implementations can use these callbacks to track outstanding
initializers and implement a platform-specific approach for executing them. For
example, the MachOPlatform installs a plugin in the JIT linker to scan for both
__mod_inits sections (for C++ static constructors) and ObjC metadata sections.
If discovered, these are processed in the usual platform order: Objective-C
registration is carried out first, then static initializers are executed,
ensuring that calls to Objective-C from static initializers will be safe.
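As a rough sketch of the intended usage (everything here other than the three
callbacks above and getInitializerSymbol is invented for illustration), a
minimal platform that just records pending initializer symbols per JITDylib
might look like this:

  #include "llvm/ExecutionEngine/Orc/Core.h"
  #include "llvm/Support/Error.h"
  #include <map>
  #include <vector>

  using namespace llvm;
  using namespace llvm::orc;

  class RecordingPlatform : public Platform {
  public:
    Error setupJITDylib(JITDylib &JD) override {
      // A real platform would define standard symbols (e.g. __dso_handle) here.
      return Error::success();
    }

    Error notifyAdding(JITDylib &JD, const MaterializationUnit &MU) override {
      // Record the initializer symbol, if the unit has one, so the platform
      // can look it up (triggering materialization) when initialization runs.
      if (auto InitSym = MU.getInitializerSymbol())
        PendingInits[&JD].push_back(std::move(InitSym));
      return Error::success();
    }

    Error notifyRemoving(JITDylib &JD, VModuleKey K) override {
      // Drop any state associated with the removed module (omitted here).
      return Error::success();
    }

  private:
    std::map<JITDylib *, std::vector<SymbolStringPtr>> PendingInits;
  };

A platform along these lines would be registered with the ExecutionSession
(the registration hook is assumed here to be ExecutionSession::setPlatform) so
that the callbacks fire for every JITDylib and MaterializationUnit.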
This patch updates LLJIT to use the new scheme for initialization. Two
LLJIT::PlatformSupport classes are implemented: a GenericIR platform and a MachO
platform. The GenericIR platform implements a modified version of the previous
llvm.global_ctors scraping scheme to provide support for Windows and
Linux. LLJIT's MachO platform uses the MachOPlatform class to provide
MachO-specific initialization as described above.
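As a hedged sketch of how a client drives this (method names follow the LLJIT
headers of this era, where lookup returns a JITEvaluatedSymbol, and "main" is
assumed to be defined in the added module):

  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include <cstdint>

  using namespace llvm;
  using namespace llvm::orc;

  Expected<int> runWithInitializers(ThreadSafeModule TSM) {
    auto J = LLJITBuilder().create();  // sets up a default PlatformSupport
    if (!J)
      return J.takeError();

    if (auto Err = (*J)->addIRModule(std::move(TSM)))
      return std::move(Err);

    // Run pending initializers (e.g. C++ static constructors) for main.
    if (auto Err = (*J)->initialize((*J)->getMainJITDylib()))
      return std::move(Err);

    auto MainSym = (*J)->lookup("main");
    if (!MainSym)
      return MainSym.takeError();
    auto *Main = reinterpret_cast<int (*)()>(
        static_cast<uintptr_t>(MainSym->getAddress()));
    int Result = Main();

    // Run deinitializers (static destructors, registered atexit handlers).
    if (auto Err = (*J)->deinitialize((*J)->getMainJITDylib()))
      return std::move(Err);

    return Result;
  }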
Reviewers: sgraenitz, dblaikie
Subscribers: mgorny, hiraditya, mgrang, ributzka, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74300
Summary:
Most libraries are defined in the lib/ directory but there are also a
few libraries defined in tools/ e.g. libLLVM, libLTO. I'm defining
"Component Libraries" as libraries defined in lib/ that may be included in
libLLVM.so. Explicitly marking the libraries in lib/ as component
libraries allows us to remove some fragile checks that attempt to
differentiate between lib/ libraries and tools/ libraries:
1. In tools/llvm-shlib, because
llvm_map_components_to_libnames(LIB_NAMES "all") returned a list of
all libraries defined in the whole project, there was custom code
needed to filter out libraries defined in tools/, none of which should
be included in libLLVM.so. This code assumed that any library
defined as static was from lib/ and everything else should be
excluded.
With this change, llvm_map_components_to_libnames(LIB_NAMES "all")
only returns libraries that have been added to the LLVM_COMPONENT_LIBS
global cmake property, so this custom filtering logic can be removed.
Doing this also fixes the build with BUILD_SHARED_LIBS=ON
and LLVM_BUILD_LLVM_DYLIB=ON.
2. There was some code in llvm_add_library that assumed that
libraries defined in lib/ would not have LLVM_LINK_COMPONENTS or
ARG_LINK_COMPONENTS set. This is only true because libraries
defined in lib/ use LLVMBuild.txt and don't set these values.
This code has now been fixed to check whether the library has been
explicitly marked as a component library, which should make it
easier to remove LLVMBuild at some point in the future.
I have tested this patch on Windows, macOS and Linux with release builds
and the following combinations of CMake options:
- "" (No options)
- -DLLVM_BUILD_LLVM_DYLIB=ON
- -DLLVM_LINK_LLVM_DYLIB=ON
- -DBUILD_SHARED_LIBS=ON
- -DBUILD_SHARED_LIBS=ON -DLLVM_BUILD_LLVM_DYLIB=ON
- -DBUILD_SHARED_LIBS=ON -DLLVM_LINK_LLVM_DYLIB=ON
Reviewers: beanz, smeenai, compnerd, phosek
Reviewed By: beanz
Subscribers: wuzish, jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, mgorny, mehdi_amini, sbc100, jgravelle-google, hiraditya, aheejin, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, steven_wu, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, dang, Jim, lenary, s.egerton, pzheng, sameer.abuasal, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70179
Adds a DumpObjects utility that can be used to dump JIT'd objects to disk.
Instances of DumpObjects may be used with ObjectTransformLayer as no-op
transforms: each object is dumped to disk and then returned unchanged.
This patch also adds an ObjectTransformLayer to LLJIT and an example of how
to use this utility to dump JIT'd objects in LLJIT.
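A hedged sketch of the wiring (the getObjTransformLayer/setTransform names and
the DebugUtils.h location are assumptions based on the LLJIT example mentioned
above):

  #include "llvm/ExecutionEngine/Orc/DebugUtils.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"

  using namespace llvm;
  using namespace llvm::orc;

  void enableObjectDumps(LLJIT &J) {
    // DumpObjects("", "") writes each object to the current directory using
    // the object's own identifier, then hands the buffer back unmodified, so
    // the transform is a no-op from the layer's point of view.
    J.getObjTransformLayer().setTransform(DumpObjects("", ""));
  }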
Summary:
When creating an ORC remote JIT target, the current library split forces the target process to link large portions of LLVM (Core, Execution Engine, JITLink, Object, MC, Passes, RuntimeDyld, Support, Target, and TransformUtils). This occurs because the ORC RPC interfaces rely on the static globals required by the ORC Error types, which starts a cycle of pulling in more and more.
This patch breaks the ORC RPC Error implementations out into an "OrcError" library which only depends on LLVM Support. It also pulls the ORC RPC headers into their own subdirectory.
With this patch, code can include the Orc/RPC/*.h headers and will only incur link dependencies on LLVMOrcError and LLVMSupport.
Reviewers: lhames
Reviewed By: lhames
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D68732
Summary:
rL367756 (f5c40cb) increases the dependency of LLVMOrcJIT on LLVMPasses.
In particular, symbols defined in LLVMPasses that are referenced by the
destructor of `PassBuilder` are now referenced by LLVMOrcJIT through
`Speculation.cpp.o`.
We believe that referencing symbols defined in LLVMPasses in the
destructor of `PassBuilder` is valid, and that adding to the set of such
symbols is legitimate. To support such cases, this patch adds LLVMPasses to
the set of libraries linked whenever linking in LLVMOrcJIT causes such
symbols from LLVMPasses to be referenced.
Reviewers: Whitney, anhtuyen, pree-jackie
Reviewed By: pree-jackie
Subscribers: mgorny, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66441
llvm-svn: 369310
Summary:
JITLink is a JIT linker that performs the same high-level task as RuntimeDyld:
it parses relocatable object files and makes their contents runnable in a target
process.
JITLink aims to improve on RuntimeDyld in several ways:
(1) A clear design intended to maximize code-sharing while minimizing coupling.
RuntimeDyld has been developed in an ad-hoc fashion for a number of years and
this has led to intermingling of code for multiple architectures (e.g. in
RuntimeDyldELF::processRelocationRef) in a way that makes the code more
difficult to read, reason about, and extend. JITLink is designed to isolate
format- and architecture-specific code, while still sharing generic code.
(2) Support for native code models.
RuntimeDyld required the use of large code models (where calls to external
functions are made indirectly via registers) on many platforms due to its
restrictive model for stub generation (one "stub" per symbol). JITLink allows
arbitrary mutation of the atom graph, allowing both GOT and PLT atoms to be
added naturally.
(3) Native support for asynchronous linking.
JITLink uses asynchronous calls for symbol resolution and finalization: these
callbacks are passed a continuation function that they must call to complete the
linker's work. This allows for cleaner interoperation with the new concurrent
ORC JIT APIs, while still being easily implementable in synchronous style if
asynchrony is not needed.
To maximise sharing, the design has a hierarchy of common code:
  (1) Generic atom-graph data structure and algorithms (e.g. dead stripping and
  |   memory allocation) that are intended to be shared by all architectures.
  |
  + -- (2) Shared per-format code that utilizes (1), e.g. Generic MachO to
  |        atom-graph parsing.
  |
  + -- (3) Architecture specific code that uses (1) and (2). E.g.
           JITLinkerMachO_x86_64, which adds x86-64 specific relocation
           support to (2) to build and patch up the atom graph.
To support asynchronous symbol resolution and finalization, the callbacks for
these operations take continuations as arguments:
  using JITLinkAsyncLookupContinuation =
      std::function<void(Expected<AsyncLookupResult> LR)>;

  using JITLinkAsyncLookupFunction =
      std::function<void(const DenseSet<StringRef> &Symbols,
                         JITLinkAsyncLookupContinuation LookupContinuation)>;

  using FinalizeContinuation = std::function<void(Error)>;

  virtual void finalizeAsync(FinalizeContinuation OnFinalize);
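For example (an illustrative sketch against the declarations above, assuming
AsyncLookupResult maps symbol names to JITEvaluatedSymbols and that
llvm/Support/Error.h is available), a synchronous lookup just invokes the
continuation before returning:

  // Resolve symbols from a fixed table and call the continuation immediately
  // on the caller's thread -- no extra threads or queueing required.
  JITLinkAsyncLookupFunction
  makeSyncLookup(DenseMap<StringRef, JITEvaluatedSymbol> Table) {
    return [Table = std::move(Table)](
               const DenseSet<StringRef> &Symbols,
               JITLinkAsyncLookupContinuation LookupContinuation) {
      AsyncLookupResult Result;
      for (auto &Name : Symbols) {
        auto I = Table.find(Name);
        if (I == Table.end()) {
          LookupContinuation(make_error<StringError>(
              "unresolved symbol: " + Name.str(), inconvertibleErrorCode()));
          return;
        }
        Result[Name] = I->second;
      }
      LookupContinuation(std::move(Result));
    };
  }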
In addition to its headline features, JITLink also makes other improvements:
- Dead stripping support: symbols that are not used (e.g. redundant ODR
definitions) are discarded and take up no memory in the target process.
(In contrast, RuntimeDyld supported pointer equality for weak definitions,
but the redundant definitions stayed resident in memory.)
- Improved exception handling support. JITLink provides a much more extensive
eh-frame parser than RuntimeDyld, and is able to correctly fix up many
eh-frame sections that RuntimeDyld currently (silently) fails on.
- More extensive validation and error handling throughout.
This initial patch supports linking MachO/x86-64 only. Work on support for
other architectures and formats will happen in-tree.
Differential Revision: https://reviews.llvm.org/D58704
llvm-svn: 358818
This will allow other utilities (including a future RuntimeDyld replacement) to
use these types without pulling in the major Core types (JITDylib, etc.).
llvm-svn: 351138
(1) Adds comments for the API.
(2) Removes the setArch method, which is redundant: the setArchStr method on the
triple should be used instead.
(3) Turns EmulatedTLS on by default. This matches EngineBuilder's behavior.
llvm-svn: 343423
Adds a "lazy call-through" utility based on the same trampoline
implementation as lazy compile callbacks, and a "lazy re-exports" utility that
builds lazy call-throughs.
Lazy call-throughs are similar to lazy compile callbacks (and are based on the
same underlying state saving/restoring trampolines) but resolve their targets
by performing a standard ORC lookup rather than invoking a user supplied
compiler callback. This allows them to inherit the thread-safety of ORC lookups
while blocking only the calling thread (whereas compile callbacks also block one
compile thread).
Lazy re-exports provide a simple way of building lazy call-throughs. Unlike a
regular re-export, a lazy re-export generates a new address (a stub entry point)
that will act like the re-exported symbol when called. The first call via a
lazy re-export will trigger compilation of the re-exported symbol before calling
through to it.
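A hedged sketch of defining one (helper names such as MangleAndInterner,
SymbolAliasMap and lazyReexports follow later ORC headers rather than the
interface at the time of this commit; "entry" and "entry_impl" are hypothetical
symbols):

  #include "llvm/ExecutionEngine/Orc/Core.h"
  #include "llvm/ExecutionEngine/Orc/IndirectionUtils.h"
  #include "llvm/ExecutionEngine/Orc/LazyReexports.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Expose "entry" in FacadeJD as a lazy re-export of "entry_impl" in ImplJD.
  // The first call to "entry" performs a normal ORC lookup of "entry_impl"
  // (triggering its compilation) and then calls through to it.
  Error addLazyEntryPoint(JITDylib &FacadeJD, JITDylib &ImplJD,
                          MangleAndInterner &Mangle,
                          LazyCallThroughManager &LCTM,
                          IndirectStubsManager &ISM) {
    SymbolAliasMap Aliases;
    Aliases[Mangle("entry")] = SymbolAliasMapEntry(
        Mangle("entry_impl"),
        JITSymbolFlags::Exported | JITSymbolFlags::Callable);
    return FacadeJD.define(lazyReexports(LCTM, ISM, ImplJD, std::move(Aliases)));
  }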
llvm-svn: 343061
Adds ThreadSafeModule and ThreadSafeContext wrappers to support concurrent
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext are destructed, and provided the
client has not held on to a copy of the shared context pointer, the context will
be automatically destructed.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
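A minimal sketch of the shared, single-context case (header path and the
getContext accessor are assumptions based on the ORC headers):

  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include <memory>

  using namespace llvm;
  using namespace llvm::orc;

  ThreadSafeModule makeModule(ThreadSafeContext TSCtx, StringRef Name) {
    // Build the module against the context owned by TSCtx. The returned
    // ThreadSafeModule holds a reference to TSCtx's shared state, so the
    // context stays alive for as long as the module exists.
    auto M = std::make_unique<Module>(Name, *TSCtx.getContext());
    return ThreadSafeModule(std::move(M), std::move(TSCtx));
  }

  // Non-concurrent clients can share one context across every module:
  //   ThreadSafeContext TSCtx(std::make_unique<LLVMContext>());
  //   auto TSM1 = makeModule(TSCtx, "m1");
  //   auto TSM2 = makeModule(TSCtx, "m2");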
llvm-svn: 343055
Summary:
CompileOnDemandLayer.cpp uses functions in these libraries, and builds
with `-DSHARED_LIB=ON` fail without this.
Reviewers: lhames
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D48995
llvm-svn: 336389
LLJIT is a prefabricated ORC-based JIT class that is meant to be the go-to
replacement for MCJIT. Unlike OrcMCJITReplacement (which will continue to be
supported) it is not API- or bug-for-bug compatible, but targets the same
use cases: simple, non-lazy compilation and execution of LLVM IR.
LLLazyJIT extends LLJIT with support for function-at-a-time lazy compilation,
similar to what was provided by LLVM's original (now long deprecated) JIT APIs.
This commit also contains some simple utility classes (CtorDtorRunner2,
LocalCXXRuntimeOverrides2, JITTargetMachineBuilder) to support LLJIT and
LLLazyJIT.
Both of these classes are works in progress. Feedback from JIT clients is very
welcome!
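A hedged sketch of the lazy path (builder and method names follow later
versions of these APIs and are assumptions relative to this commit; "entry" is
a hypothetical function in the added module):

  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include <cstdint>

  using namespace llvm;
  using namespace llvm::orc;

  Expected<int> runLazily(ThreadSafeModule TSM) {
    auto J = LLLazyJITBuilder().create();
    if (!J)
      return J.takeError();

    // Nothing in TSM is compiled here; each function is compiled on first call.
    if (auto Err = (*J)->addLazyIRModule(std::move(TSM)))
      return std::move(Err);

    // Looking up "entry" materializes its stub, not its body.
    auto EntrySym = (*J)->lookup("entry");
    if (!EntrySym)
      return EntrySym.takeError();

    auto *Entry = reinterpret_cast<int (*)()>(
        static_cast<uintptr_t>(EntrySym->getAddress()));
    return Entry();  // the first call triggers compilation of "entry"
  }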
llvm-svn: 335670
CompileOnDemandLayer2 is a replacement for CompileOnDemandLayer built on the ORC
Core APIs. Functions in added modules are extracted and compiled lazily.
CompileOnDemandLayer2 supports multithreaded JIT'd code, and compilation on
multiple threads.
llvm-svn: 334967
Adds a new orc::SymbolResolver interface and an orc::SymbolResolver to
JITSymbolResolver adapter.
The new orc::SymbolResolver interface uses asynchronous queries for better
performance. (Asynchronous queries with bulk lookup minimize RPC/IPC overhead,
support parallel incoming queries, and expose more available work for
distribution). Existing ORC layers will soon be updated to use the
orc::SymbolResolver API rather than the legacy llvm::JITSymbolResolver API.
Because RuntimeDyld still uses JITSymbolResolver, this patch also includes an
adapter that wraps an orc::SymbolResolver with a JITSymbolResolver API.
llvm-svn: 323073
Works around a bug in the libc++ version being used on some of the green
dragon builders (plus a clang-format).
Workaround: AsynchronousSymbolQuery and VSO want to work with
JITEvaluatedSymbols anyway, so just use them (instead of JITSymbol, which
happens to tickle the bug).
The libcxx bug being worked around was fixed in r276003, and there are plans to
update the offending builders.
llvm-svn: 322140
The original commit broke the builders due to a think-o in an assertion:
AsynchronousSymbolQuery's constructor needs to check the callback member
variables, not the constructor arguments.
llvm-svn: 321853
Adds new ORC core APIs: VSO, AsynchronousSymbolQuery, and SymbolSource.
These new APIs are a first stab at tackling some current shortcomings of ORC,
especially in performance and threading support.
VSO (Virtual Shared Object) is a symbol table representing the symbol
definitions of a set of modules that behave as if they had been statically
linked together into a shared object or dylib. Symbol definitions, either
pre-defined addresses or lazy definitions, can be added and queries for symbol
addresses made. The table applies the same linkage strength rules that static
linkers do when constructing a dylib or shared object: duplicate definitions
result in errors, strong definitions override weak or common ones. This class
should improve symbol lookup speed by providing centralized symbol tables (as
compared to the findSymbol implementation in the in-tree ORC layers, which
maintain one symbol table per object file / module added).
AsynchronousSymbolQuery is a query for the addresses of a set of symbols.
Query results are returned via a callback once they become available. By querying
for a set of symbols, rather than one symbol at a time (as the current lookup
scheme does), the JIT has the opportunity to make better use of available
resources (e.g. by spawning multiple jobs to materialize the requested symbols
if possible). Returning results via a callback makes queries asynchronous, so
queries from multiple threads of JIT'd code can proceed simultaneously.
SymbolSource represents a source of symbol definitions. It is used when
adding lazy symbol definitions to a VSO. Symbol definitions can be materialized
when needed or discarded if a stronger definition is found. Materializing on
demand via SymbolSources should (eventually) allow us to remove the lazy
materializers from JITSymbol, which will in turn allow the removal of many
current error checks and reduce the number of RPC round-trips involved in
materializing remote symbols. Adding a discard function allows sources to
discard symbol definitions (or mark them as available_externally), reducing the
amount of redundant code generated by the JIT for ODR symbols.
llvm-svn: 321838
This patch makes several improvements to the ORC RPC library:
(1) Add support for function key negotiation.
The previous version of the RPC required both sides to maintain the same
enumeration for functions in the API. This meant that any version skew between
the client and server would result in communication failure.
With this version of the patch, functions (and serializable types) are defined
with string names, and the derived function signature strings are used to
negotiate the actual function keys (which are used for efficient call
serialization). This allows clients to connect to any server that supports a
superset of the API (based on the function signatures it supports).
(2) Add a callAsync primitive.
The callAsync primitive can be used to install a return value handler that will
run as soon as the RPC function's return value is sent back from the remote.
(3) Launch policies for RPC function handlers.
The new addHandler method, which installs handlers for RPC functions, takes two
arguments: (1) the handler itself, and (2) an optional "launch policy". When the
RPC function is called, the launch policy (if present) is invoked to actually
launch the handler. This allows the handler to be spawned on a background
thread, or added to a work list. If no launch policy is used, the handler is run
on the server thread itself. This should only be used for short-running
handlers, or entirely synchronous RPC APIs.
(4) Zero-cost cross-type serialization.
You can now define serialization from any type to a different "wire" type. For
example, this allows you to call an RPC function that's defined to take a
std::string while passing a StringRef argument. If a serializer from StringRef
to std::string has been defined for the channel type this will be used to
serialize the argument without having to construct a std::string instance.
This allows buffer reference types to be used as arguments to RPC calls without
requiring a copy of the buffer to be made.
llvm-svn: 286620
This patch adds utilities to ORC for managing a remote JIT target. It consists
of:
1. A very primitive RPC system for making calls over a byte-stream. See
RPCChannel.h, RPCUtils.h.
2. An RPC API defined in the above system for managing memory, looking up
symbols, creating stubs, etc. on a remote target. See OrcRemoteTargetRPCAPI.h.
3. An interface for creating high-level JIT components (memory managers,
callback managers, stub managers, etc.) that operate over the RPC API. See
OrcRemoteTargetClient.h.
4. A helper class for building servers that can handle the RPC calls. See
OrcRemoteTargetServer.h.
The system is designed to work neatly with the existing ORC components and
functionality. In particular, the ORC callback API (and consequently the
CompileOnDemandLayer) is supported, enabling lazy compilation of remote code.
Assuming this doesn't trigger any builder failures, a follow-up patch will be
committed which tests these utilities by using them to replace LLI's existing
remote-JITing demo code.
llvm-svn: 257305
Summary:
This is an implementation of RuntimeDyld::SymbolResolver that simply
rejects all resolution requests; useful for clients that do not have any
cross-object symbol references.
Reviewers: lhames
Reviewed By: lhames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10455
llvm-svn: 240288
`LLVM_ENABLE_MODULES` builds sometimes fail because `Intrinsics.td`
needs to regenerate `Intrinsics.h` before anyone can include anything
from the LLVM_IR module. Represent the dependency explicitly to prevent
that.
llvm-svn: 239796
Makes the CompileOnDemandLayer wait until functions are called before extracting
their bodies, and avoids cloning unused decls into every partition.
Module partitioning showed up as a source of significant overhead when I
profiled some trivial test cases. Avoiding the overhead of partitioning
for uncalled functions helps to mitigate this.
This change also means that it is no longer necessary to have a
LazyEmittingLayer underneath the CompileOnDemand layer, since the
CompileOnDemandLayer will not extract or emit function bodies until they are
called.
llvm-svn: 236465
Adds utilities for running C++ static constructors and destructors, and uses
these to add support for C++ static ctors/dtors to the Orc-lazy JIT in LLI.
Replaces the trivial_retval_1 regression test - the new 'hello' test covers
strictly more code.
llvm-svn: 233885
This allows IDEs to recognize the entire set of header files for
each of the core LLVM projects.
Differential Revision: http://reviews.llvm.org/D7526
Reviewed By: Chris Bieneman
llvm-svn: 228798