LLJITBuilder and LLLazyJITBuilder construct LLJIT and LLLazyJIT instances
respectively. Over time these will allow more configurable options to be
added while remaining easy to use in the default case, which for default
in-process JITing is now:
auto J = ExitOnErr(LLJITBuilder().create());
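A slightly fuller sketch of the pattern this enables (illustrative only; it
assumes the LLJIT addIRModule/lookup API and a ThreadSafeModule TSM built
elsewhere):
  auto J = ExitOnErr(LLJITBuilder().create());
  ExitOnErr(J->addIRModule(std::move(TSM)));
  auto MainSym = ExitOnErr(J->lookup("main"));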
llvm-svn: 359511
Summary:
JITLink is a jit-linker that performs the same high-level task as RuntimeDyld:
it parses relocatable object files and makes their contents runnable in a target
process.
JITLink aims to improve on RuntimeDyld in several ways:
(1) A clear design intended to maximize code-sharing while minimizing coupling.
RuntimeDyld has been developed in an ad-hoc fashion for a number of years and
this has led to intermingling of code for multiple architectures (e.g. in
RuntimeDyldELF::processRelocationRef) in a way that makes the code more
difficult to read, reason about, and extend. JITLink is designed to isolate
format- and architecture-specific code, while still sharing generic code.
(2) Support for native code models.
RuntimeDyld required the use of large code models (where calls to external
functions are made indirectly via registers) for many platforms due to its
restrictive model for stub generation (one "stub" per symbol). JITLink allows
arbitrary mutation of the atom graph, allowing both GOT and PLT atoms to be
added naturally.
(3) Native support for asynchronous linking.
JITLink uses asynchronous calls for symbol resolution and finalization: these
callbacks are passed a continuation function that they must call to complete the
linker's work. This allows for cleaner interoperation with the new concurrent
ORC JIT APIs, while still being easily implementable in synchronous style if
asynchrony is not needed.
To maximise sharing, the design has a hierarchy of common code:
(1) Generic atom-graph data structure and algorithms (e.g. dead stripping and
 |  memory allocation) that are intended to be shared by all architectures.
 |
 +-- (2) Shared per-format code that utilizes (1), e.g. Generic MachO to
      |  atom-graph parsing.
      |
      +-- (3) Architecture specific code that uses (1) and (2). E.g.
              JITLinkerMachO_x86_64, which adds x86-64 specific relocation
              support to (2) to build and patch up the atom graph.
To support asynchronous symbol resolution and finalization, the callbacks for
these operations take continuations as arguments:
using JITLinkAsyncLookupContinuation =
    std::function<void(Expected<AsyncLookupResult> LR)>;
using JITLinkAsyncLookupFunction =
    std::function<void(const DenseSet<StringRef> &Symbols,
                       JITLinkAsyncLookupContinuation LookupContinuation)>;
using FinalizeContinuation = std::function<void(Error)>;
virtual void finalizeAsync(FinalizeContinuation OnFinalize);
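To illustrate the continuation-passing style (a hedged sketch, not the in-tree
implementation; finalizeSections is a hypothetical synchronous helper), a
synchronous linker can satisfy the asynchronous interface simply by invoking
the continuation before returning:
  void finalizeAsync(FinalizeContinuation OnFinalize) override {
    if (auto Err = finalizeSections()) // hypothetical helper doing the real work
      return OnFinalize(std::move(Err));
    OnFinalize(Error::success());
  }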
In addition to its headline features, JITLink also makes other improvements:
- Dead stripping support: symbols that are not used (e.g. redundant ODR
definitions) are discarded and take up no memory in the target process.
(In contrast, RuntimeDyld supported pointer equality for weak definitions,
but the redundant definitions stayed resident in memory.)
- Improved exception handling support. JITLink provides a much more extensive
eh-frame parser than RuntimeDyld, and is able to correctly fix up many
eh-frame sections that RuntimeDyld currently (silently) fails on.
- More extensive validation and error handling throughout.
This initial patch supports linking MachO/x86-64 only. Work on support for
other architectures and formats will happen in-tree.
Differential Revision: https://reviews.llvm.org/D58704
llvm-svn: 358818
Recommit r352791 after tweaking DerivedTypes.h slightly, so that gcc
doesn't choke on it, hopefully.
Original Message:
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.
Then:
- update the CallInst/InvokeInst instruction creation functions to
take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.
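As a sketch of the intended pattern (illustrative only; Ctx, M, Builder and
SomeI32Value are assumed to be in scope), getOrInsertFunction now yields a
FunctionCallee carrying both the FunctionType and the callee Value, which
CreateCall can consume directly:
  FunctionType *LogTy =
      FunctionType::get(Type::getVoidTy(Ctx), {Type::getInt32Ty(Ctx)}, false);
  FunctionCallee LogFn = M.getOrInsertFunction("log_value", LogTy);
  Builder.CreateCall(LogFn, {SomeI32Value});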
One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.
However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)
Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
Differential Revision: https://reviews.llvm.org/D57315
llvm-svn: 352827
This reverts commit f47d6b38c7 (r352791).
Seems to run into compilation failures with GCC (but not clang, where
I tested it). Reverting while I investigate.
llvm-svn: 352800
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.
Then:
- update the CallInst/InvokeInst instruction creation functions to
take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.
One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.
However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)
Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
Differential Revision: https://reviews.llvm.org/D57315
llvm-svn: 352791
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
This shortcut mechanism for creating types was added 10 years ago, but
has seen almost no uptake since then, neither internally nor in
external projects.
The very small number of characters saved by using it does not seem
worth the mental overhead of an additional type-creation API, so,
delete it.
Differential Revision: https://reviews.llvm.org/D56573
llvm-svn: 351020
In a lot of places an empty string was passed as the ErrorBanner to
logAllUnhandledErrors. This patch makes that argument optional to
simplify the call sites.
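For example (a sketch; Err is an llvm::Error in scope), a call site that
previously had to pass an empty banner can now simply omit it:
  logAllUnhandledErrors(std::move(Err), errs(), "");   // before
  logAllUnhandledErrors(std::move(Err), errs());       // after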
llvm-svn: 346604
Doesn't build on Windows. The call to 'lookup' is ambiguous. Clang and
MSVC agree, anyway.
http://lab.llvm.org:8011/builders/clang-x64-windows-msvc/builds/787
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): error C2668: 'llvm::orc::ExecutionSession::lookup': ambiguous call to overloaded function
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(823): note: could be 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(llvm::ArrayRef<llvm::orc::JITDylib *>,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(817): note: or 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(const llvm::orc::JITDylibSearchList &,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): note: while trying to match the argument list '(initializer list, llvm::orc::SymbolStringPtr)'
llvm-svn: 345078
In the new scheme the client passes a list of (JITDylib&, bool) pairs, rather
than a list of JITDylibs. For each JITDylib the boolean indicates whether or not
to match against non-exported symbols (true means that they should be found,
false means that they should not). The MatchNonExportedInJD and MatchNonExported
parameters on lookup are removed.
The new scheme is more flexible, and easier to understand.
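For example (a hedged sketch assuming an ExecutionSession ES, a JITDylib JD,
and a SymbolStringPtr Name), a lookup that should also match non-exported
symbols in JD passes true in the corresponding pair:
  auto Sym = ES.lookup(JITDylibSearchList({{&JD, true}}), Name);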
This patch also updates JITDylib search orders to be lists of (JITDylib&, bool)
pairs to match the new lookup scheme. Error handling is also plumbed through
the LLJIT class to allow regression tests to fail predictably when a lookup from
a lazy call-through fails.
llvm-svn: 345077
This commit adds a 'Legacy' prefix to old ORC layers and utilities, and removes
the '2' suffix from the new ORC layers. If you wish to continue using the old
ORC layers you will need to add a 'Legacy' prefix to your classes. If you were
already using the new ORC layers you will need to drop the '2' suffix.
The legacy layers will remain in-tree until the new layers reach feature
parity with them. This will involve adding support for removing code from the
new layers, and ensuring that performance is comparable.
llvm-svn: 344572
Renames:
JITDylib's setFallbackDefinitionGenerator method to setGenerator.
DynamicLibraryFallbackGenerator class to DynamicLibrarySearchGenerator.
ReexportsFallbackDefinitionGenerator to ReexportsGenerator.
llvm-svn: 344489
for the target machine.
This simplifies usage during setup of concurrent JIT stacks where the client
needs a DataLayout, but not a TargetMachine (TargetMachines are created on
the fly by the compile threads later).
llvm-svn: 343429
(1) Adds comments for the API.
(2) Removes the setArch method: it is redundant, since the setArchStr method on
the triple should be used instead.
(3) Turns EmulatedTLS on by default. This matches EngineBuilder's behavior.
llvm-svn: 343423
CompileOnDemandLayer2 now supports user-supplied partition functions (the
original CompileOnDemandLayer already supported these).
Partition functions are called with the list of requested global values
(i.e. global values that currently have queries waiting on them) and have an
opportunity to select extra global values to materialize at the same time.
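A hedged sketch of what a client-supplied partition function might look like
(the setPartitionFunction and GlobalValueSet names are assumptions based on the
description above, not verified signatures); returning the requested set
unchanged materializes exactly what was asked for, while adding extra globals
would pull them in eagerly:
  CODLayer.setPartitionFunction(
      [](CompileOnDemandLayer2::GlobalValueSet Requested)
          -> Optional<CompileOnDemandLayer2::GlobalValueSet> {
        return std::move(Requested);
      });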
Also adds testing infrastructure for the new feature to lli.
llvm-svn: 343396
(1) A const accessor for the LLVMContext held by a ThreadSafeContext.
(2) A const accessor for the ThreadSafeModules held by an IRMaterializationUnit.
(3) A const MaterializationResponsibility reference to IRTransformLayer2's
transform function. This makes IRTransformLayer2 useful for JIT debugging
(since it can inspect JIT state through the responsibility argument) as well
as program transformations.
llvm-svn: 343365
one SymbolLinkagePromoter utility.
SymbolLinkagePromoter renames anonymous and private symbols, and bumps all
linkages to at least global/hidden-visibility. Modules whose symbols have been
promoted by this utility can be decomposed into sub-modules without introducing
link errors. This is used by the CompileOnDemandLayer to extract single-function
modules for lazy compilation.
llvm-svn: 343257
Modifies lit to add a 'thread_support' feature that can be used in lit test
REQUIRES clauses. The thread_support flag is set if -DLLVM_ENABLE_THREADS=ON
and unset if -DLLVM_ENABLE_THREADS=OFF. The lit flag is used to disable the
multiple-compile-threads-basic.ll testcase when threading is disabled.
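For example, a test that needs threading can guard itself with a line like
(sketch):
  ; REQUIRES: thread_support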
llvm-svn: 343122
This doesn't work well in builds configured with LLVM_ENABLE_THREADS=OFF,
causing the following assert when running
ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll:
lib/ExecutionEngine/Orc/Core.cpp:1748: Expected<llvm::JITEvaluatedSymbol>
llvm::orc::lookup(const llvm::orc::JITDylibList &, llvm::orc::SymbolStringPtr):
Assertion `ResultMap->size() == 1 && "Unexpected number of results"' failed.
> LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
> argument. If this is non-zero then a thread-pool will be created with the
> given number of threads, and compile tasks will be dispatched to the thread
> pool.
>
> To enable testing of this feature, two new flags are added to lli:
>
> (1) -compile-threads=N (N = 0 by default) controls the number of compile threads
> to use.
>
> (2) -thread-entry can be used to execute code on additional threads. For each
> -thread-entry argument supplied (multiple are allowed) a new thread will be
> created and the given symbol called. These additional thread entry points are
> called after static constructors are run, but before main.
llvm-svn: 343099
LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
argument. If this is non-zero then a thread-pool will be created with the
given number of threads, and compile tasks will be dispatched to the thread
pool.
To enable testing of this feature, two new flags are added to lli:
(1) -compile-threads=N (N = 0 by default) controls the number of compile threads
to use.
(2) -thread-entry can be used to execute code on additional threads. For each
-thread-entry argument supplied (multiple are allowed) a new thread will be
created and the given symbol called. These additional thread entry points are
called after static constructors are run, but before main.
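For example (a sketch; the module and entry-point names are hypothetical):
  lli -jit-kind=orc-lazy -compile-threads=2 -thread-entry=thread_main foo.ll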
llvm-svn: 343058
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext are destructed, and providing the
client has not held on to a copy of shared context pointer, the context will be
automatically destructed.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
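A minimal sketch of that single-context pattern (assuming the IR for the module
is built elsewhere):
  auto TSCtx = ThreadSafeContext(std::make_unique<LLVMContext>());
  auto M = std::make_unique<Module>("m", *TSCtx.getContext());
  // ... build IR into M ...
  ThreadSafeModule TSM(std::move(M), TSCtx);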
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
llvm-svn: 343055
The addObjectFile method adds the given object file to the JIT session, making
its code available for execution.
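A hedged sketch of the new call (assuming an LLJIT instance J and an object
file on disk; the file name is illustrative):
  auto Obj = ExitOnErr(errorOrToExpected(MemoryBuffer::getFile("extra.o")));
  ExitOnErr(J->addObjectFile(std::move(Obj)));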
Support for the -extra-object flag is added to lli when operating in
-jit-kind=orc-lazy mode to support testing of this feature.
llvm-svn: 340870
VSO was a little close to VDSO (an acronym on Linux for Virtual Dynamic Shared
Object) for comfort. It also risks giving the impression that instances of this
class could be shared between ExecutionSessions, which they can not.
JITDylib seems moderately less confusing, while still hinting at how this
class is intended to be used, i.e. as a JIT-compiled stand-in for a dynamic
library (code that would have been a dynamic library if you had wanted to
compile it ahead of time).
llvm-svn: 340084
This new JIT event listener supports generating profiling data for
the linux 'perf' profiling tool, allowing it to generate function and
instruction level profiles.
Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes. Given that the listener has no
dependencies, it might be sensible to enable by default once the
initial issues have been shaken out.
I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on linux,
that should probably be changed.
Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.
Disregarding the previous comment, here's an example:
$ cat /tmp/expensive_loop.c
#include <stdbool.h>  /* needed for bool */
#include <stdint.h>   /* needed for uint64_t */

bool stupid_isprime(uint64_t num)
{
  if (num == 2)
    return true;
  if (num < 1 || num % 2 == 0)
    return false;
  for (uint64_t i = 3; i < num / 2; i += 2) {
    if (num % i == 0)
      return false;
  }
  return true;
}

int main(int argc, char **argv)
{
  int numprimes = 0;
  for (uint64_t num = argc; num < 100000; num++)
  {
    if (stupid_isprime(num))
      numprimes++;
  }
  return numprimes;
}
$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll
$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1
$ perf inject --jit -i perf.data -o perf.jit.data
$ perf report -i perf.jit.data
- 92.59% lli jitted-5881-2.so [.] stupid_isprime
stupid_isprime
main
llvm::MCJIT::runFunction
llvm::ExecutionEngine::runFunctionAsMain
main
__libc_start_main
0x4bf6258d4c544155
+ 0.85% lli ld-2.27.so [.] do_lookup_x
And line-level annotations also work:
│ for(uint64_t i = 3; i < num / 2; i+= 2) {
│1 30: movq $0x3,-0x18(%rbp)
0.03 │1 38: mov -0x18(%rbp),%rax
0.03 │ mov -0x10(%rbp),%rcx
│ shr $0x1,%rcx
3.63 │ ┌──cmp %rcx,%rax
│ ├──jae 6f
│ │ if (num % i == 0)
0.03 │ │ mov -0x10(%rbp),%rax
│ │ xor %edx,%edx
89.00 │ │ divq -0x18(%rbp)
│ │ cmp $0x0,%rdx
0.22 │ │↓ jne 5f
│ │ return false;
│ │ movb $0x0,-0x1(%rbp)
│ │↓ jmp 73
│ │ }
3.22 │1 5f:│↓ jmp 61
│ │ for(uint64_t i = 3; i < num / 2; i+= 2) {
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D44892
llvm-svn: 337789
The verifier identified several modules that were broken due to incorrect
linkage on declarations. To fix this, CompileOnDemandLayer2::extractFunction
has been updated to change decls to external linkage.
llvm-svn: 336150
LLJIT is a prefabricated ORC based JIT class that is meant to be the go-to
replacement for MCJIT. Unlike OrcMCJITReplacement (which will continue to be
supported) it is not API or bug-for-bug compatible, but targets the same
use cases: Simple, non-lazy compilation and execution of LLVM IR.
LLLazyJIT extends LLJIT with support for function-at-a-time lazy compilation,
similar to what was provided by LLVM's original (now long deprecated) JIT APIs.
This commit also contains some simple utility classes (CtorDtorRunner2,
LocalCXXRuntimeOverrides2, JITTargetMachineBuilder) to support LLJIT and
LLLazyJIT.
Both of these classes are works in progress. Feedback from JIT clients is very
welcome!
llvm-svn: 335670
Previously JITCompileCallbackManager only supported single threaded code. This
patch embeds a VSO (see include/llvm/ExecutionEngine/Orc/Core.h) in the callback
manager. The VSO ensures that the compile callback is only executed once and that
the resulting address is cached for use by subsequent re-entries.
llvm-svn: 333490
VSOs now track dependencies for materializing symbols. Each symbol must have its
dependencies registered with the VSO prior to finalization. Usually this will
involve registering the dependencies returned in
AsynchronousSymbolQuery::ResolutionResults for queries made while linking the
symbols being materialized.
Queries against symbols are notified that a symbol is ready once it and all of
its transitive dependencies are finalized, allowing compilation work to be
broken up and moved between threads without queries returning until their
symbols are fully safe to access / execute.
Related utilities (VSO, MaterializationUnit, MaterializationResponsibility) are
updated to support dependence tracking and more explicitly track responsibility
for symbols from the point of definition until they are finalized.
llvm-svn: 332541
The DEBUG() macro is very generic so it might clash with other projects.
The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change DOCS as the regex doesn't match it.
In the transition period the DEBUG() macro is still present and aliased
to the LLVM_DEBUG() one.
Differential Revision: https://reviews.llvm.org/D43624
llvm-svn: 332240
See r331124 for how I made a list of files missing the include.
I then ran this Python script:
for f in open('filelist.txt'):
    f = f.strip()
    fl = open(f).readlines()
    found = False
    for i in xrange(len(fl)):
        p = '#include "llvm/'
        if not fl[i].startswith(p):
            continue
        if fl[i][len(p):] > 'Config':
            fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
            found = True
            break
    if not found:
        print 'not found', f
    else:
        open(f, 'w').write(''.join(fl))
and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.
No intended behavior change.
llvm-svn: 331184
We have a few functions that virtually every command wants to run on
process startup/shutdown. This patch adds an InitLLVM class to do that
all at once, so that we don't need to copy-n-paste boilerplate code
to each llvm command's main() function.
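A minimal sketch of the intended usage in a tool's main():
  #include "llvm/Support/InitLLVM.h"
  int main(int argc, char **argv) {
    llvm::InitLLVM X(argc, argv); // install signal handlers, print stack traces, etc.
    // ... tool-specific code ...
    return 0;
  }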
Differential Revision: https://reviews.llvm.org/D45602
llvm-svn: 330046
These aren't the .def style files used in LLVM that require a macro
defined before their inclusion - they're just basic non-modular includes
to stamp out command line flag variables.
llvm-svn: 329840
This reverts commit r327566, it breaks
test/ExecutionEngine/OrcMCJIT/test-global-ctors.ll.
The test doesn't crash with a stack trace, unfortunately. It merely
returns 1 as the exit code.
ASan didn't produce a report, and I reproduced this on my Linux machine
and Windows box.
llvm-svn: 327576
Layer implementations typically mutate module state, and this is better
reflected by having layers own the Module they are operating on.
llvm-svn: 327566
Handles were returned by addModule and used as keys for removeModule,
findSymbolIn, and emitAndFinalize. Their job is now subsumed by VModuleKeys,
which simplify resource management by providing a consistent handle across all
layers.
llvm-svn: 324700
In particular this patch switches RTDyldObjectLinkingLayer to use
orc::SymbolResolver and threads the required changes (ExecutionSession
references and VModuleKeys) through the existing layer APIs.
The purpose of the new resolver interface is to improve query performance and
better support parallelism, both in JIT'd code and within the compiler itself.
The most visible change is the switch of the <Layer>::addModule signatures from:
Expected<Handle> addModule(std::shared_ptr<ModuleType> Mod,
                           std::shared_ptr<JITSymbolResolver> Resolver)
to:
Expected<Handle> addModule(VModuleKey K, std::shared_ptr<ModuleType> Mod);
Typical usage of addModule will now look like:
auto K = ES.allocateVModuleKey();
Resolvers[K] = createSymbolResolver(...);
Layer.addModule(K, std::move(Mod));
See the BuildingAJIT tutorial code for example usage.
llvm-svn: 324405
ExternalSymbolMap now stores the string key (rather than using a StringRef),
as the object file backing the key may be removed at any time.
llvm-svn: 323001