In a lot of places an empty string was passed as the ErrorBanner to
logAllUnhandledErrors. This patch makes that argument optional to
simplify the call sites.
llvm-svn: 346604
Doesn't build on Windows: the call to 'lookup' is ambiguous. Clang and
MSVC agree on that, at least.
http://lab.llvm.org:8011/builders/clang-x64-windows-msvc/builds/787
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): error C2668: 'llvm::orc::ExecutionSession::lookup': ambiguous call to overloaded function
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(823): note: could be 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(llvm::ArrayRef<llvm::orc::JITDylib *>,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(817): note: or 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(const llvm::orc::JITDylibSearchList &,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): note: while trying to match the argument list '(initializer list, llvm::orc::SymbolStringPtr)'
llvm-svn: 345078
In the new scheme the client passes a list of (JITDylib&, bool) pairs, rather
than a list of JITDylibs. For each JITDylib the boolean indicates whether or not
to match against non-exported symbols (true means that they should be found,
false means that they should not). The MatchNonExportedInJD and MatchNonExported
parameters on lookup are removed.
The new scheme is more flexible and easier to understand.
This patch also updates JITDylib search orders to be lists of (JITDylib&, bool)
pairs to match the new lookup scheme. Error handling is also plumbed through
the LLJIT class to allow regression tests to fail predictably when a lookup from
a lazy call-through fails.
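For illustration, a lookup under the new scheme might look like the sketch
below, assuming an existing ExecutionSession and two JITDylibs (here the
search order is expressed as (JITDylib*, bool) pairs via the
JITDylibSearchList overload of ExecutionSession::lookup):

  #include "llvm/ExecutionEngine/Orc/Core.h"
  using namespace llvm;
  using namespace llvm::orc;

  // Search JD1 including its non-exported symbols, then JD2 exports-only.
  Expected<JITEvaluatedSymbol> lookupFoo(ExecutionSession &ES, JITDylib &JD1,
                                         JITDylib &JD2) {
    JITDylibSearchList SearchOrder({{&JD1, true}, {&JD2, false}});
    return ES.lookup(SearchOrder, ES.intern("foo"));
  }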
llvm-svn: 345077
Non-loaded sections (whose unused load-address defaults to zero) should not
be taken into account when calculating ImageBase, or ImageBase will be
incorrectly set to 0.
Patch by Andrew Scheidecker. Thanks Andrew!
https://reviews.llvm.org/D51343
+ // The Sections list may contain sections that weren't loaded for
+ // whatever reason: they may be debug sections, and ProcessAllSections
+ // is false, or they may be sections that contain 0 bytes. If the
+ // section isn't loaded, the load address will be 0, and it should not
+ // be included in the ImageBase calculation.
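A rough sketch of the calculation described in the comment (names and shape
are illustrative only, not the exact RuntimeDyldCOFF internals):

  #include <algorithm>
  #include <cstdint>

  // Illustrative only: take the minimum load address over loaded sections,
  // skipping sections with load address 0 (not loaded) so they cannot drag
  // ImageBase down to 0.
  template <typename SectionRange>
  uint64_t computeImageBase(const SectionRange &Sections) {
    uint64_t ImageBase = UINT64_MAX;
    for (const auto &Section : Sections) {
      uint64_t LoadAddr = Section.getLoadAddress();
      if (LoadAddr == 0) // Debug section, zero-byte section, etc.
        continue;
      ImageBase = std::min(ImageBase, LoadAddr);
    }
    return ImageBase == UINT64_MAX ? 0 : ImageBase;
  }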
llvm-svn: 344995
Otherwise we can end up with a data-race when linking concurrently.
This should fix an intermittent failure in the multiple-compile-threads-basic.ll
testcase.
llvm-svn: 344956
MaterializationResponsibility.
VModuleKeys are intended to enable selective removal of modules from a JIT
session, however for a wide variety of use cases selective removal is not
needed and introduces unnecessary overhead. As of this commit, the default
constructed VModuleKey value is reserved as a "do not track" value, and
becomes the default when adding a new module to the JIT.
This commit also changes the propagation of VModuleKeys. They were passed
alongside the MaterializationResponsibility instance in XXLayer::emit methods,
but are now propagated as part of the MaterializationResponsibility instance
itself (and as part of MaterializationUnit when stored in a JITDylib).
Associating VModuleKeys with MaterializationUnits in this way should allow
for a thread-safe module removal mechanism in the future, even when a module
is in the process of being compiled, by having the
MaterializationResponsibility object check in on its VModuleKey's state
before committing its results to the JITDylib.
llvm-svn: 344643
This commit adds a 'Legacy' prefix to old ORC layers and utilities, and removes
the '2' suffix from the new ORC layers. If you wish to continue using the old
ORC layers you will need to add a 'Legacy' prefix to your classes. If you were
already using the new ORC layers you will need to drop the '2' suffix.
The legacy layers will remain in-tree until the new layers reach feature
parity with them. This will involve adding support for removing code from the
new layers, and ensuring that performance is comparable.
llvm-svn: 344572
The new name is a better fit: This class does not actually spawn any new
threads for compilation, it is just safe to call from multiple threads
concurrently.
The "Simple" part of the name did not convey much either, so it was
dropped.
llvm-svn: 344567
Renames:
JITDylib's setFallbackDefinitionGenerator method to setGenerator.
DynamicLibraryFallbackGenerator class to DynamicLibrarySearchGenerator.
ReexportsFallbackDefinitionGenerator to ReexportsGenerator.
llvm-svn: 344489
This adds two arguments to the main ExecutionSession::lookup method:
MatchNonExportedInJD, and MatchNonExported. These control whether and where
hidden symbols should be matched when searching a list of JITDylibs.
A similar effect could have been achieved by filtering search results, but
this would have involved materializing symbol definitions (since materialization
is triggered on lookup) only to throw the results away, among other issues.
llvm-svn: 344467
rather than require them to have been promoted before being passed in.
Dropping this precondition is better for layer composition (CompileOnDemandLayer
was the only one that placed pre-conditions on the modules that could be added).
It also means that the promoted private symbols do not show up in the target
JITDylib's symbol table. Instead, they are confined to the hidden implementation
dylib that contains the actual definitions.
For the 403.gcc testcase this cut down the public symbol table size from ~15,000
symbols to ~4000, substantially reducing symbol dependence tracking costs.
llvm-svn: 344078
Symbols can be removed provided that all are present in the JITDylib and none
are currently in the materializing state. On success all requested symbols are
removed. On failure an error is returned and no symbols are removed.
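For illustration, a removal call under this scheme might look like the sketch
below (assuming an ExecutionSession ES and JITDylib JD; the exact spelling of
the remove method is assumed here):

  #include "llvm/ExecutionEngine/Orc/Core.h"
  using namespace llvm;
  using namespace llvm::orc;

  Error removeFooAndBar(ExecutionSession &ES, JITDylib &JD) {
    // Fails, removing nothing, if either symbol is absent or materializing.
    return JD.remove({ES.intern("foo"), ES.intern("bar")});
  }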
llvm-svn: 343928
(1) Adds comments for the API.
(2) Removes the setArch method: it is redundant, as the setArchStr method on the
triple should be used instead.
(3) Turns EmulatedTLS on by default. This matches EngineBuilder's behavior.
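Typical usage, for reference (error handling collapsed to cantFail for
brevity):

  #include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
  #include "llvm/Support/Error.h"
  #include "llvm/Target/TargetMachine.h"
  using namespace llvm;
  using namespace llvm::orc;

  std::unique_ptr<TargetMachine> buildHostTargetMachine() {
    auto JTMB = cantFail(JITTargetMachineBuilder::detectHost());
    // EmulatedTLS is now on by default, matching EngineBuilder.
    return cantFail(JTMB.createTargetMachine());
  }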
llvm-svn: 343423
CompileOnDemandLayer2 now supports user-supplied partition functions (the
original CompileOnDemandLayer already supported these).
Partition functions are called with the list of requested global values
(i.e. global values that currently have queries waiting on them) and have an
opportunity to select extra global values to materialize at the same time.
Also adds testing infrastructure for the new feature to lli.
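For illustration, a partition function that materializes exactly the requested
set might look like the sketch below (spelled against the later
CompileOnDemandLayer interface; the CompileOnDemandLayer2 setter at this
revision may differ):

  // Assumes CODLayer is the CompileOnDemandLayer(2) instance in use.
  CODLayer.setPartitionFunction(
      [](CompileOnDemandLayer::GlobalValueSet Requested) {
        // Compile only what was asked for; pull in no extra globals.
        return Requested;
      });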
llvm-svn: 343396
This makes it available for use in IRTransformLayer2::TransformFunction
instances (since a const MaterializationResponsibility& parameter was
added in r343365).
llvm-svn: 343367
(1) A const accessor for the LLVMContext held by a ThreadSafeContext.
(2) A const accessor for the ThreadSafeModules held by an IRMaterializationUnit.
(3) A const MaterializationResponsibility reference to IRTransformLayer2's
transform function. This makes IRTransformLayer2 useful for JIT debugging
(since it can inspect JIT state through the responsibility argument) as well
as program transformations.
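A transform function written against this interface might look like the
following sketch (getRequestedSymbols is used as an example of read-only
inspection; the function would be installed via IRTransformLayer2's transform
setter or constructor):

  #include "llvm/ExecutionEngine/Orc/IRTransformLayer.h"
  #include "llvm/Support/Debug.h"
  using namespace llvm;
  using namespace llvm::orc;

  // Inspects JIT state through the (const) responsibility argument, then
  // passes the module through unchanged.
  Expected<ThreadSafeModule>
  debugTransform(ThreadSafeModule TSM, const MaterializationResponsibility &R) {
    for (const auto &Name : R.getRequestedSymbols())
      dbgs() << "transforming module for request: " << *Name << "\n";
    return std::move(TSM);
  }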
llvm-svn: 343365
(1) Print debugging output under a session lock to avoid garbled messages when
compiling on multiple threads.
(2) Name MaterializationUnits and add an ostream operator for them so that they
can be easily referenced in debugging output. The ostream operator can
optionally print the code/data/hidden symbols provided by the materialization
unit, based on command line options.
llvm-svn: 343323
flag to true in LLJIT when running in multithreaded mode.
The IRLayer::setCloneToNewContextOnEmit method sets a flag within the IRLayer
that causes modules added to that layer to be moved to a new context (by
serializing to/from a memory buffer) when they are emitted. This allows modules
that were all loaded on the same context to be compiled in parallel.
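For example (assuming a reference to the client's top-level IRLayer; how that
reference is obtained depends on the setup):

  // Clone each module onto a fresh LLVMContext at emit time so that modules
  // loaded on a shared context can be compiled in parallel.
  TopLevelIRLayer.setCloneToNewContextOnEmit(true);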
llvm-svn: 343266
one SymbolLinkagePromoter utility.
SymbolLinkagePromoter renames anonymous and private symbols, and bumps all
linkages to at least global/hidden-visibility. Modules whose symbols have been
promoted by this utility can be decomposed into sub-modules without introducing
link errors. This is used by the CompileOnDemandLayer to extract single-function
modules for lazy compilation.
llvm-svn: 343257
Modifies lit to add a 'thread_support' feature that can be used in lit test
REQUIRES clauses. The thread_support flag is set if -DLLVM_ENABLE_THREADS=ON
and unset if -DLLVM_ENABLE_THREADS=OFF. The lit flag is used to disable the
multiple-compile-threads-basic.ll testcase when threading is disabled.
llvm-svn: 343122
This doesn't work well in builds configured with LLVM_ENABLE_THREADS=OFF,
causing the following assert when running
ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll:
lib/ExecutionEngine/Orc/Core.cpp:1748: Expected<llvm::JITEvaluatedSymbol>
llvm::orc::lookup(const llvm::orc::JITDylibList &, llvm::orc::SymbolStringPtr):
Assertion `ResultMap->size() == 1 && "Unexpected number of results"' failed.
> LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
> arguments. If this is non-zero then a thread-pool will be created with the
> given number of threads, and compile tasks will be dispatched to the thread
> pool.
>
> To enable testing of this feature, two new flags are added to lli:
>
> (1) -compile-threads=N (N = 0 by default) controls the number of compile threads
> to use.
>
> (2) -thread-entry can be used to execute code on additional threads. For each
> -thread-entry argument supplied (multiple are allowed) a new thread will be
> created and the given symbol called. These additional thread entry points are
> called after static constructors are run, but before main.
llvm-svn: 343099
for lazy compilation, rather than a callback manager.
The new mechanism does not block compile threads, and does not require
function bodies to be renamed.
Future modifications should allow laziness on a per-module basis to work
without any modification of the input module.
llvm-svn: 343065
implementation as lazy compile callbacks, and a "lazy re-exports" utility that
builds lazy call-throughs.
Lazy call-throughs are similar to lazy compile callbacks (and are based on the
same underlying state saving/restoring trampolines) but resolve their targets
by performing a standard ORC lookup rather than invoking a user supplied
compiler callback. This allows them to inherit the thread-safety of ORC lookups
while blocking only the calling thread (whereas compile callbacks also block one
compile thread).
Lazy re-exports provide a simple way of building lazy call-throughs. Unlike a
regular re-export, a lazy re-export generates a new address (a stub entry point)
that will act like the re-exported symbol when called. The first call via a
lazy re-export will trigger compilation of the re-exported symbol before calling
through to it.
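A rough sketch of setting one up (the lazyReexports and SymbolAliasMap
spellings follow the ORC APIs as they later stabilized, and LCTM, ISM, MainJD
and ImplJD are assumed to exist already):

  #include "llvm/ExecutionEngine/Orc/LazyReexports.h"
  using namespace llvm;
  using namespace llvm::orc;

  Error addLazyFn(ExecutionSession &ES, LazyCallThroughManager &LCTM,
                  IndirectStubsManager &ISM, JITDylib &MainJD,
                  JITDylib &ImplJD) {
    // "fn" in MainJD becomes a stub; the first call looks up (and thereby
    // materializes) "fn_impl" in ImplJD, then calls through to it.
    SymbolAliasMap LazySyms;
    LazySyms[ES.intern("fn")] =
        SymbolAliasMapEntry(ES.intern("fn_impl"), JITSymbolFlags::Exported);
    return MainJD.define(lazyReexports(LCTM, ISM, ImplJD, std::move(LazySyms)));
  }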
llvm-svn: 343061
This will allow trampoline pools to be re-used for a new lazy-reexport utility
that looks up function bodies using the standard symbol lookup process (rather
than using a user-provided compile function). This new utility provides the
same capabilities as JITCompileCallbackManager (since MaterializationUnits
already allow user-supplied compile functions to be run), but can use the new
asynchronous lookup functions to avoid blocking a compile thread.
This patch also updates createLocalCompileCallbackManager to return an error
if a callback manager cannot be created, and updates clients of that API to
account for the change. Finally, the OrcCBindingsStack is updated so that a
valid stack (without support for lazy compilation) can still be constructed
when a callback manager is not available for the target platform.
llvm-svn: 343059
LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
arguments. If this is non-zero then a thread-pool will be created with the
given number of threads, and compile tasks will be dispatched to the thread
pool.
To enable testing of this feature, two new flags are added to lli:
(1) -compile-threads=N (N = 0 by default) controls the number of compile threads
to use.
(2) -thread-entry can be used to execute code on additional threads. For each
-thread-entry argument supplied (multiple are allowed) a new thread will be
created and the given symbol called. These additional thread entry points are
called after static constructors are run, but before main.
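For reference, a minimal sketch of requesting compile threads, written against
the later LLJITBuilder interface rather than the exact Create overload added
here:

  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  using namespace llvm;
  using namespace llvm::orc;

  Expected<std::unique_ptr<LLJIT>> makeConcurrentJIT() {
    // Two compile threads: emitted modules are dispatched to the pool rather
    // than compiled on the calling thread.
    return LLJITBuilder().setNumCompileThreads(2).create();
  }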
llvm-svn: 343058
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext have been destructed, and provided
the client has not held on to a copy of the shared context pointer, the context
will be destructed automatically.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
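A minimal sketch of the one-context-per-module arrangement recommended above
for concurrent clients:

  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;
  using namespace llvm::orc;

  ThreadSafeModule makeModuleOnFreshContext() {
    // The context is owned by the ThreadSafeContext and stays alive as long
    // as any ThreadSafeModule still refers to it.
    ThreadSafeContext TSCtx(std::make_unique<LLVMContext>());
    auto M = std::make_unique<Module>("m", *TSCtx.getContext());
    // ... build IR into M here ...
    return ThreadSafeModule(std::move(M), std::move(TSCtx));
  }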
llvm-svn: 343055
switch RTDyldObjectLinkingLayer2 to use it.
RuntimeDyld::loadObject is currently a blocking operation. This means that any
JIT'd code whose call-graph contains an embedded complete graph on K nodes will
require at least K threads to link, which precludes the use of a fixed-size
thread pool for concurrent JITing of arbitrary code (whatever size K the thread
pool is set to, any code containing a complete subgraph on K+1 nodes will
deadlock at JIT-link time).
To address this issue, this commit introduces a function called jitLinkForORC
that uses continuation-passing style to pass the fix-up and finalization steps
to the asynchronous symbol resolver interface so that linking can be performed
without blocking.
llvm-svn: 343043
This reverts commit r342939.
MSVC's promise/future implementation does not like types that are not default
constructible. Reverting while I figure out a solution.
llvm-svn: 342941
Asynchronous resolution (where the caller receives a callback once the requested
set of symbols are resolved) is a core part of the new concurrent ORC APIs. This
change extends the asynchronous resolution model down to RuntimeDyld, which is
necessary to prevent deadlocks when compiling/linking on a fixed number of
threads: If RuntimeDyld's linking process were a blocking operation, then any
complete K-graph in a program will require at least K threads to link in the
worst case, as each thread would block waiting for all the others to complete.
Using callbacks instead allows the work to be passed between dependent threads
until it is complete.
For backwards compatibility, all existing RuntimeDyld functions will continue
to operate in blocking mode as before. This change will enable the introduction
of a new async finalization process in a subsequent patch to enable asynchronous
JIT linking.
llvm-svn: 342939