flag to true in LLJIT when running in multithreaded mode.
The IRLayer::setCloneToNewContextOnEmit method sets a flag within the IRLayer
that causes modules added to that layer to be moved to a new context (by
serializing to/from a memory buffer) when they are emitted. This allows modules
that were all loaded on the same context to be compiled in parallel.
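As a rough illustration of the clone step (not the commit's actual implementation; cloneToContext and the header choices below are for exposition only), a module can be moved to a fresh context by round-tripping it through bitcode:

#include "llvm/ADT/SmallVector.h"
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Serialize M to an in-memory bitcode buffer, then re-parse it on NewCtx.
static Expected<std::unique_ptr<Module>> cloneToContext(const Module &M,
                                                        LLVMContext &NewCtx) {
  SmallVector<char, 0> Buffer;
  raw_svector_ostream OS(Buffer);
  WriteBitcodeToFile(M, OS);
  MemoryBufferRef Ref(StringRef(Buffer.data(), Buffer.size()),
                      M.getModuleIdentifier());
  return parseBitcodeFile(Ref, NewCtx);
}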
llvm-svn: 343266
one SymbolLinkagePromoter utility.
SymbolLinkagePromoter renames anonymous and private symbols, and bumps all
linkages to at least global/hidden-visibility. Modules whose symbols have been
promoted by this utility can be decomposed into sub-modules without introducing
link errors. This is used by the CompileOnDemandLayer to extract single-function
modules for lazy compilation.
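Conceptually, the promotion amounts to something like the following sketch (illustrative only; the name promoteSymbols and the renaming scheme are made up, not the utility's real implementation):

#include "llvm/ADT/Twine.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Give every definition a name and at least external linkage with hidden
// visibility, so the module can be split without breaking references.
static void promoteSymbols(Module &M) {
  unsigned NextId = 0;
  for (GlobalValue &GV : M.global_values()) {
    if (GV.isDeclaration())
      continue;
    // Rename anonymous and private symbols so they survive extraction.
    if (!GV.hasName() || GV.hasPrivateLinkage())
      GV.setName("__orc_promoted." + Twine(NextId++));
    // Bump local linkages up to external, but keep the symbols hidden.
    if (GV.hasLocalLinkage()) {
      GV.setLinkage(GlobalValue::ExternalLinkage);
      GV.setVisibility(GlobalValue::HiddenVisibility);
    }
  }
}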
llvm-svn: 343257
Modifies lit to add a 'thread_support' feature that can be used in lit test
REQUIRES clauses. The thread_support feature is set if -DLLVM_ENABLE_THREADS=ON
and unset if -DLLVM_ENABLE_THREADS=OFF. The feature is used to disable the
multiple-compile-threads-basic.ll test case when threading is disabled.
llvm-svn: 343122
This doesn't work well in builds configured with LLVM_ENABLE_THREADS=OFF,
causing the following assert when running
ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll:
lib/ExecutionEngine/Orc/Core.cpp:1748: Expected<llvm::JITEvaluatedSymbol>
llvm::orc::lookup(const llvm::orc::JITDylibList &, llvm::orc::SymbolStringPtr):
Assertion `ResultMap->size() == 1 && "Unexpected number of results"' failed.
> LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
> arguments. If this is non-zero then a thread-pool will be created with the
> given number of threads, and compile tasks will be dispatched to the thread
> pool.
>
> To enable testing of this feature, two new flags are added to lli:
>
> (1) -compile-threads=N (N = 0 by default) controls the number of compile threads
> to use.
>
> (2) -thread-entry can be used to execute code on additional threads. For each
> -thread-entry argument supplied (multiple are allowed) a new thread will be
> created and the given symbol called. These additional thread entry points are
> called after static constructors are run, but before main.
llvm-svn: 343099
for lazy compilation, rather than a callback manager.
The new mechanism does not block compile threads, and does not require
function bodies to be renamed.
Future modifications should allow laziness on a per-module basis to work
without any modification of the input module.
llvm-svn: 343065
implementation as lazy compile callbacks, and a "lazy re-exports" utility that
builds lazy call-throughs.
Lazy call-throughs are similar to lazy compile callbacks (and are based on the
same underlying state saving/restoring trampolines) but resolve their targets
by performing a standard ORC lookup rather than invoking a user supplied
compiler callback. This allows them to inherit the thread-safety of ORC lookups
while blocking only the calling thread (whereas compile callbacks also block one
compile thread).
Lazy re-exports provide a simple way of building lazy call-throughs. Unlike a
regular re-export, a lazy re-export generates a new address (a stub entry point)
that will act like the re-exported symbol when called. The first call via a
lazy re-export will trigger compilation of the re-exported symbol before calling
through to it.
llvm-svn: 343061
This will allow trampoline pools to be re-used for a new lazy-reexport utility
that looks up function bodies using the standard symbol lookup process
(rather than using a user-provided compile function). This new utility provides
the same capabilities (since MaterializationUnits already allow user supplied
compile functions to be run) as JITCompileCallbackManager, but can use the new
asynchronous lookup functions to avoid blocking a compile thread.
This patch also updates createLocalCompileCallbackManager to return an error if
a callback manager can not be created, and updates clients of that API to
account for the change. Finally, the OrcCBindingsStack is updated so that if
a callback manager is not available for the target platform a valid stack
(without support for lazy compilation) can still be constructed.
llvm-svn: 343059
LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
argument. If this is non-zero then a thread-pool will be created with the
given number of threads, and compile tasks will be dispatched to the thread
pool.
To enable testing of this feature, two new flags are added to lli:
(1) -compile-threads=N (N = 0 by default) controls the number of compile threads
to use.
(2) -thread-entry can be used to execute code on additional threads. For each
-thread-entry argument supplied (multiple are allowed) a new thread will be
created and the given symbol called. These additional thread entry points are
called after static constructors are run, but before main.
llvm-svn: 343058
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext are destructed, and providing the
client has not held on to a copy of shared context pointer, the context will be
automatically destructed.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
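For example, a per-module context setup might look roughly like this (a sketch; assumes the ThreadSafeModule.h header introduced alongside these classes):

#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>

using namespace llvm;
using namespace llvm::orc;

// One ThreadSafeContext per module: modules built this way can be compiled
// on different threads without sharing (or locking) an LLVMContext.
static ThreadSafeModule makeThreadSafeModule(StringRef Name) {
  auto Ctx = std::make_unique<LLVMContext>();
  auto M = std::make_unique<Module>(Name, *Ctx);
  // ... populate *M ...
  return ThreadSafeModule(std::move(M), ThreadSafeContext(std::move(Ctx)));
}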
llvm-svn: 343055
switch RTDyldObjectLinkingLayer2 to use it.
RuntimeDyld::loadObject is currently a blocking operation. This means that any
JIT'd code whose call graph contains an embedded complete graph on K nodes will
require at least K threads to link, which precludes the use of a fixed-size
thread pool for concurrent JITing of arbitrary code (whatever K the thread pool
is sized at, any code with a complete subgraph on K+1 nodes will deadlock at
JIT-link time).
To address this issue, this commit introduces a function called jitLinkForORC
that uses continuation-passing style to pass the fix-up and finalization steps
to the asynchronous symbol resolver interface so that linking can be performed
without blocking.
llvm-svn: 343043
This reverts commit r342939.
MSVC's promise/future implementation does not like types that are not default
constructible. Reverting while I figure out a solution.
llvm-svn: 342941
Asynchronous resolution (where the caller receives a callback once the requested
set of symbols are resolved) is a core part of the new concurrent ORC APIs. This
change extends the asynchronous resolution model down to RuntimeDyld, which is
necessary to prevent deadlocks when compiling/linking on a fixed number of
threads: If RuntimeDyld's linking process were a blocking operation, then any
complete graph on K nodes in a program would require at least K threads to link
in the worst case, as each thread would block waiting for all the others to
complete.
Using callbacks instead allows the work to be passed between dependent threads
until it is complete.
For backwards compatibility, all existing RuntimeDyld functions will continue
to operate in blocking mode as before. This change will enable the introduction
of a new async finalization process in a subsequent patch to enable asynchronous
JIT linking.
llvm-svn: 342939
This replaces instances of the LLVMOrcErrorCode type with LLVMErrorRef,
simplifying the implementation of the OrcCBindingsStack class and ORC
C API bindings and making it possible to return arbitrary (wrapped)
llvm::Errors.
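On the C API side, errors from the updated bindings can then be handled like any other LLVMErrorRef (a sketch using the generic helpers from llvm-c/Error.h; checkError is just an illustrative name):

#include "llvm-c/Error.h"
#include <stdio.h>

/* Report and consume an LLVMErrorRef returned by an ORC C API call.
   Returns 0 on success, 1 if an error was reported. */
static int checkError(LLVMErrorRef Err) {
  if (!Err)
    return 0;
  char *Msg = LLVMGetErrorMessage(Err); /* consumes Err */
  fprintf(stderr, "ORC error: %s\n", Msg);
  LLVMDisposeErrorMessage(Msg);
  return 1;
}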
llvm-svn: 342828
template methods in JITDylib out-of-line.
This also splits JITDylib::define into a pair of template methods, one taking an
lvalue reference and the other an rvalue reference. This simplifies the
templates at the cost of a small amount of code duplication.
llvm-svn: 342087
construction, a new convenience lookup method, and add-to layer methods.
ExecutionSession now creates a special 'main' JITDylib upon construction. All
subsequently created JITDylibs are added to the main JITDylib's search order by
default (controlled by the AddToMainDylibSearchOrder parameter to
ExecutionSession::createDylib). The main JITDylib's search order will be used in
the future to properly handle cross-JITDylib weak symbols, with the first
definition in this search order selected.
This commit also adds a new ExecutionSession::lookup convenience method that
performs a blocking lookup using the main JITDylib's search order, as this will
be a very common operation for clients.
Finally, new convenience overloads of IRLayer and ObjectLayer's add methods are
introduced that add the given program representations to the main dylib, which
is likely to be the common case.
llvm-svn: 342086
This patch adds support for ORC JIT for mips/mips64 architecture.
In common code $static is changed to __ORCstatic because "$" is a reserved
character on the MIPS architecture.
Patch by Luka Ercegovcevic
Differential Revision: https://reviews.llvm.org/D49665
llvm-svn: 341934
The existing memory manager API can not be shared between objects when linking
concurrently (since there is no way to know which concurrent allocations were
performed on behalf of which object, and hence which allocations would be safe
to finalize when finalizeMemory is called). For now, we can work around this by
requiring a new memory manager for each object.
This change only affects the concurrent version of the ORC APIs.
llvm-svn: 341579
Section address mappings can be applied using the RuntimeDyld instance passed to
the RuntimeDyld::MemoryManager::notifyObjectLoaded method. Providing an alternate
route via RuntimeDyldObjectLinkingLayer2 is redundant.
llvm-svn: 341578
Removes the implicit conversion to the underlying type for
JITSymbolFlags::FlagNames and replaces it with some bitwise and comparison
operators.
llvm-svn: 341282
management and materialization responsibility registration.
The setOverrideObjectFlagsWithResponsibilityFlags method instructs
RTDyldObjectLinkingLayer2 to override the symbol flags produced by RuntimeDyld with
the flags provided by the MaterializationResponsibility instance. This can be used
to enable symbol visibility (hidden/exported) for COFF object files, which do not
currently support the SF_Exported flag.
The setAutoClaimResponsibilityForObjectSymbols method instructs
RTDyldObjectLinkingLayer2 to claim responsibility for any symbols provided by a
given object file that were not already in the MaterializationResponsibility
instance. Setting this flag allows higher-level program representations (e.g.
LLVM IR) to be added based on only a subset of the symbols they provide, without
having to write intervening layers to scan and add the additional symbols. This
trades diagnostic quality for convenience however: If all symbols are enumerated
up-front then clashes can be detected and reported early. If this option is set,
clashes for the additional symbols may not be detected until late, and detection
may depend on the flow of control through JIT'd code.
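In client code the two knobs are just setters on the layer; roughly (a sketch; the header path is assumed, and the class was later renamed to RTDyldObjectLinkingLayer):

#include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"

using namespace llvm::orc;

// Sketch: opt in to both behaviours on an existing object linking layer,
// e.g. when JITing COFF objects whose exported-ness lives in the
// MaterializationResponsibility rather than in the object's symbol flags.
static void configureObjectLayer(RTDyldObjectLinkingLayer2 &ObjLayer) {
  ObjLayer.setOverrideObjectFlagsWithResponsibilityFlags(true);
  ObjLayer.setAutoClaimResponsibilityForObjectSymbols(true);
}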
llvm-svn: 341154
The new method name/behavior more closely models the way it was being used.
It also fixes an assertion that can occur when using the new ORC Core APIs,
where flags alone don't necessarily provide enough context to decide whether
the caller is responsible for materializing a given symbol (which was always
the reason this API existed).
The default implementation of getResponsibilitySet uses lookupFlags to determine
responsibility as before, so existing JITSymbolResolvers should continue to
work.
llvm-svn: 340874
The addObjectFile method adds the given object file to the JIT session, making
its code available for execution.
Support for the -extra-object flag is added to lli when operating in
-jit-kind=orc-lazy mode to support testing of this feature.
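From C++, the new method can be used roughly as follows (a sketch; addExtraObject is an illustrative helper, and error handling is deliberately minimal):

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"

using namespace llvm;
using namespace llvm::orc;

// Load an object file from disk and hand it to an existing LLJIT instance.
static Error addExtraObject(LLJIT &J, StringRef Path) {
  auto Buf = errorOrToExpected(MemoryBuffer::getFile(Path));
  if (!Buf)
    return Buf.takeError();
  // The object's definitions become available for lookup/execution.
  return J.addObjectFile(std::move(*Buf));
}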
llvm-svn: 340870
Private symbols are not visible outside the object file, and so are not defined
by the object file from ORC's perspective.
No test case yet. Ideally this would be a unit test parsing a checked-in binary,
but I am not aware of any way to reference the LLVM source root from a unit
test.
llvm-svn: 340703
An emitted symbol has had its contents written and its memory protections
applied, but it is not automatically ready to execute.
Prior to ORC supporting concurrent compilation, the term "finalized" could be
interpreted two different (but effectively equivalent) ways: (1) The finalized
symbol's contents have been written and its memory protections applied, and (2)
the symbol is ready to run. Now that ORC supports concurrent compilation, sense
(1) no longer implies sense (2). We have already introduced a new term, 'ready',
to capture sense (2), so rename sense (1) to 'emitted' to avoid any lingering
confusion.
llvm-svn: 340115
VSO was a little too close to VDSO (an acronym on Linux for Virtual Dynamic
Shared Object) for comfort. It also risks giving the impression that instances
of this class could be shared between ExecutionSessions, which they can not.
JITDylib seems moderately less confusing, while still hinting at how this
class is intended to be used, i.e. as a JIT-compiled stand-in for a dynamic
library (code that would have been a dynamic library if you had wanted to
compile it ahead of time).
llvm-svn: 340084
MCJIT::getSymbolAddress was handling a non-fatal error condition of JITSymbol
as fatal. JITSymbol::operator bool returns false if no address is available
but no error is set. This can occur e.g. if the symbol name was not found.
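The distinction matters whenever a JITSymbol is probed; a sketch of the intended pattern (getAddressOrZero is an illustrative helper, not an existing API):

#include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/Support/Error.h"

using namespace llvm;

// Distinguish "symbol not found" (no address, no error) from a real error.
static Expected<JITTargetAddress> getAddressOrZero(JITSymbol Sym) {
  if (Sym)
    return Sym.getAddress();   // have a definition; may still fail to resolve
  if (auto Err = Sym.takeError())
    return std::move(Err);     // a genuine error: propagate it
  return 0;                    // not found: non-fatal, report as address 0
}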
Patch by Jascha Wetzel. Thanks Jascha!
llvm-svn: 339809
This code was moved out of BasicObjectLayerMaterializationUnit, which required
the supplied object to be well formed. The getObjectSymbolFlags function does
not require a well-formed object, so we have to propagate the error here.
llvm-svn: 338975
An instance of ReexportsFallbackDefinitionGenerator can be attached to a VSO
(via setFallbackDefinitionGenerator) to re-export symbols on demand from a
backing VSO.
llvm-svn: 338764
The callable flag can be used to indicate that a symbol is callable. If present,
the symbol is callable. If absent, the symbol may or may not be callable (the
client must determine this by context, for example by examining the program
representation that will provide the symbol definition).
This flag will be used in the near future to enable creation of lazy compilation
stubs based on SymbolFlagsMap instances only (without having to provide
additional information to determine which symbols need stubs).
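A sketch of the intended consumer-side use: with only a SymbolFlagsMap in hand, a layer can decide which symbols could be fronted by lazy compilation stubs (countStubCandidates is illustrative only):

#include "llvm/ExecutionEngine/Orc/Core.h"

using namespace llvm::orc;

// Count the symbols that could safely be hidden behind lazy compilation
// stubs, using only their flags (no inspection of the IR or object).
static unsigned countStubCandidates(const SymbolFlagsMap &Symbols) {
  unsigned NumCallable = 0;
  for (const auto &KV : Symbols)
    if (KV.second.isCallable())
      ++NumCallable;
  return NumCallable;
}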
llvm-svn: 338649
Initially, in https://reviews.llvm.org/D44890, I had these defined as
empty functions inside the header when the respective event listener
was not built in. As implemented in that commit, this wasn't correct, because
it was an ODR violation. Krasimir hot-fixed that in r333265, but that
wasn't quite right either, because it'd lead to the symbol not being
available.
Instead just move the fallbacks to ExecutionEngineBindings.cpp. Could
define them as static inlines in the header too, but I don't think it
matters.
Reviewers: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D49654
llvm-svn: 337930
This new JIT event listener supports generating profiling data for
the Linux 'perf' profiling tool, allowing it to generate function- and
instruction-level profiles.
Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes. Given that the listener has no
dependencies, it might be sensible to enable it by default once the
initial issues have been shaken out.
I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on Linux,
that should probably be changed.
Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.
Disregarding the previous comment, here's an example:
$ cat /tmp/expensive_loop.c
#include <stdbool.h>
#include <stdint.h>

bool stupid_isprime(uint64_t num)
{
  if (num == 2)
    return true;
  if (num < 1 || num % 2 == 0)
    return false;
  for(uint64_t i = 3; i < num / 2; i+= 2) {
    if (num % i == 0)
      return false;
  }
  return true;
}

int main(int argc, char **argv)
{
  int numprimes = 0;
  for (uint64_t num = argc; num < 100000; num++)
  {
    if (stupid_isprime(num))
      numprimes++;
  }
  return numprimes;
}
$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll
$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1
$ perf inject --jit -i perf.data -o perf.jit.data
$ perf report -i perf.jit.data
- 92.59% lli jitted-5881-2.so [.] stupid_isprime
stupid_isprime
main
llvm::MCJIT::runFunction
llvm::ExecutionEngine::runFunctionAsMain
main
__libc_start_main
0x4bf6258d4c544155
+ 0.85% lli ld-2.27.so [.] do_lookup_x
And line-level annotations also work:
│ for(uint64_t i = 3; i < num / 2; i+= 2) {
│1 30: movq $0x3,-0x18(%rbp)
0.03 │1 38: mov -0x18(%rbp),%rax
0.03 │ mov -0x10(%rbp),%rcx
│ shr $0x1,%rcx
3.63 │ ┌──cmp %rcx,%rax
│ ├──jae 6f
│ │ if (num % i == 0)
0.03 │ │ mov -0x10(%rbp),%rax
│ │ xor %edx,%edx
89.00 │ │ divq -0x18(%rbp)
│ │ cmp $0x0,%rdx
0.22 │ │↓ jne 5f
│ │ return false;
│ │ movb $0x0,-0x1(%rbp)
│ │↓ jmp 73
│ │ }
3.22 │1 5f:│↓ jmp 61
│ │ for(uint64_t i = 3; i < num / 2; i+= 2) {
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D44892
llvm-svn: 337789
deprecating SymbolResolver and AsynchronousSymbolQuery.
Both lookup overloads take a VSO search order to perform the lookup. The first
overload is non-blocking and takes OnResolved and OnReady callbacks. The second
is blocking, takes a boolean flag to indicate whether to wait until all symbols
are ready, and returns a SymbolMap. Both overloads take a RegisterDependencies
function to register symbol dependencies (if any) on the query.
llvm-svn: 337595
This discards the unresolved symbols set and returns the flags map directly
(rather than mutating it via the first argument).
The unresolved symbols result made it easy to chain lookupFlags calls, but such
chaining should be rare to non-existent (especially now that symbol resolvers
are being deprecated) so the simpler method signature is preferable.
llvm-svn: 337594
A search order is a list of VSOs to be searched linearly to find symbols. Each
VSO now has a search order that will be used when fixing up definitions in that
VSO. Each VSO's search order defaults to just that VSO itself.
This is a first step towards removing symbol resolvers from ORC altogether. In
practice symbol resolvers tended to be used to implement a search order anyway,
sometimes with additional programmatic generation of symbols. Now that VSOs
support programmatic generation of definitions via fallback generators, search
orders provide a cleaner way to achieve the desired effect (while removing a lot
of boilerplate).
llvm-svn: 337593
delegate method (and unit test).
The name 'replace' better captures what the old delegate method did: it
returned materialization responsibility for a set of symbols to the VSO.
The new delegate method delegates responsibility for a set of symbols to a new
MaterializationResponsibility instance. This can be used to split responsibility
between multiple threads, or multiple materialization methods.
llvm-svn: 336603
Summary:
CompileOnDemandLayer.cpp uses function in these libraries, and builds
with `-DSHARED_LIB=ON` fail without this.
Reviewers: lhames
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D48995
llvm-svn: 336389
writing them to a buffer and re-loading them.
Also introduces a multithreaded variant of SimpleCompiler
(MultiThreadedSimpleCompiler) for compiling IR concurrently on multiple
threads.
These changes are required to JIT IR on multiple threads correctly.
No test case yet. I will be looking at how to modify LLI / LLJIT to test
multithreaded JIT support soon.
llvm-svn: 336385
The verifier identified several modules that were broken due to incorrect
linkage on declarations. To fix this, CompileOnDemandLayer2::extractFunction
has been updated to change decls to external linkage.
llvm-svn: 336150
LLJIT is a prefabricated ORC based JIT class that is meant to be the go-to
replacement for MCJIT. Unlike OrcMCJITReplacement (which will continue to be
supported) it is not API or bug-for-bug compatible, but targets the same
use cases: Simple, non-lazy compilation and execution of LLVM IR.
LLLazyJIT extends LLJIT with support for function-at-a-time lazy compilation,
similar to what was provided by LLVM's original (now long deprecated) JIT APIs.
This commit also contains some simple utility classes (CtorDtorRunner2,
LocalCXXRuntimeOverrides2, JITTargetMachineBuilder) to support LLJIT and
LLLazyJIT.
Both of these classes are works in progress. Feedback from JIT clients is very
welcome!
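A minimal end-to-end sketch of the intended usage pattern (note: this uses the later LLJITBuilder convenience API and the ThreadSafeModule constructor that owns the context; exact names have drifted across LLVM versions):

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"
#include <memory>

using namespace llvm;
using namespace llvm::orc;

// Build a JIT, add one IR module, and force compilation by looking up "main".
static Error runModule(std::unique_ptr<Module> M,
                       std::unique_ptr<LLVMContext> Ctx) {
  auto J = LLJITBuilder().create();
  if (!J)
    return J.takeError();
  if (auto Err = (*J)->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))))
    return Err;
  auto MainSym = (*J)->lookup("main");
  if (!MainSym)
    return MainSym.takeError();
  // ... cast the returned address to int(*)(int, char**) and call it ...
  return Error::success();
}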
llvm-svn: 335670
AsynchronousSymbolQuery::canStillFail checks the value of the callback to
prevent sending it redundant error notifications, so we need to reset it after
running it.
llvm-svn: 335664
CompileOnDemandLayer2 is a replacement for CompileOnDemandLayer built on the ORC
Core APIs. Functions in added modules are extracted and compiled lazily.
CompileOnDemandLayer2 supports multithreaded JIT'd code, and compilation on
multiple threads.
llvm-svn: 334967
materializing weak symbols as strong.
This removes some elaborate flag tweaking and plays nicer with RuntimeDyld,
which relies on weak/common flags to determine whether it should emit a given
weak definition. (Switching to strong up-front makes it appear as if there is
already an overriding definition, which would require an extra back-channel to
override).
llvm-svn: 334966
symbols in debug mode.
The MaterializationResponsibility class hijacks the Materializing flag to track
symbols that have not yet been resolved in order to guard against redundant
resolution. Since this is an API contract check and only enforced in debug mode
there is no reason to maintain the flag state in release mode.
llvm-svn: 334909
Add support for the "@high" and "@higha" symbol modifiers in powerpc64 assembly.
The modifiers represent accessing the segment consisting of bits 16-31 of a
64-bit address/offset.
Differential Revision: https://reviews.llvm.org/D47729
llvm-svn: 334855
Once a symbol has been selected for materialization it can no longer be
overridden. Stripping the weak flag guarantees this (override attempts will
then be treated as duplicate definitions and result in a DuplicateDefinition
error).
llvm-svn: 334771
If WaitUntilReady is set to true then blockingLookup will return once all
requested symbols are ready. If WaitUntilReady is set to false then
blockingLookup will return as soon as all requested symbols have been
resolved. In the latter case, if any error occurs in finalizing the symbols it
will be reported to the ExecutionSession, rather than returned by
blockingLookup.
llvm-svn: 334722
If a VSO has a fallback definition generator attached it will be called during
lookup (and lookupFlags) for any unresolved symbols. The definition generator
can add new definitions to the VSO for any unresolved symbol. This allows VSOs
to generate new definitions on demand.
The immediate use case for this code is supporting VSOs that can import
definitions found via dlsym on demand.
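The dlsym use case eventually took the following shape in the ORC APIs; a sketch using the modern names (DynamicLibrarySearchGenerator and JITDylib::addGenerator postdate this commit, which used VSO::setFallbackDefinitionGenerator):

#include "llvm/ExecutionEngine/Orc/Core.h"
#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
#include "llvm/Support/Error.h"

using namespace llvm;
using namespace llvm::orc;

// Let unresolved symbols in JD fall back to the host process, i.e. resolve
// them via dlsym on demand. GlobalPrefix is the platform mangling prefix
// (e.g. '_' on MachO).
static Error addProcessSymbolFallback(JITDylib &JD, char GlobalPrefix) {
  auto Gen = DynamicLibrarySearchGenerator::GetForCurrentProcess(GlobalPrefix);
  if (!Gen)
    return Gen.takeError();
  JD.addGenerator(std::move(*Gen));
  return Error::success();
}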
llvm-svn: 334538
Resolvers are required to find results for all requested symbols or return an
error, but if a resolver fails to adhere to this contract (by returning results
for only a subset of the requested symbols) then this code will infinite loop.
This assertion catches resolvers that fail to adhere to the contract.
llvm-svn: 334536
This only affects modules with lazy GVMaterializers attached (usually modules
read off disk using the lazy bitcode reader). For such modules, materializing
before compiling prevents crashes due to missing function bodies /
initializers.
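The fix boils down to a call like the following before handing the module to the compiler (a sketch; ensureMaterialized is an illustrative name):

#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"

using namespace llvm;

// Force a lazily-loaded module (e.g. from the lazy bitcode reader) to pull in
// all function bodies and initializers before it is compiled.
static Error ensureMaterialized(Module &M) {
  return M.materializeAll();
}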
llvm-svn: 334535
pre-existing SymbolFlags and SymbolToDefinition maps.
This constructor is useful when delegating work from an existing
IRMaterializationUnit to a new one, as it avoids the cost of re-computing these
maps.
llvm-svn: 333852
This method returns the set of symbols in the target VSO that have queries
waiting on them. This can be used to make decisions about which symbols to
delegate to another MaterializationUnit (typically this will involve
delegating all symbols that have *not* been requested to another
MaterializationUnit so that materialization of those symbols can be
deferred until they are requested).
llvm-svn: 333684
and make it protected rather than private.
The new name reflects the actual information in the map, and this information
can be useful to derived classes (for example, to quickly look up the IR
definition of a requested symbol).
llvm-svn: 333683
The relocation for branch instructions in the dynamic loader of ExecutionEngine assumes that branch instructions with the R_PPC64_REL24 relocation type are only bl. However, with tail call optimization, b instructions can also be used to jump into another function.
This patch makes the relocation keep the bits of the branch instruction other than the jump offset, to avoid the relocation rewriting a b instruction into bl.
Differential Revision: https://reviews.llvm.org/D47456
llvm-svn: 333502
Previously JITCompileCallbackManager only supported single threaded code. This
patch embeds a VSO (see include/llvm/ExecutionEngine/Orc/Core.h) in the callback
manager. The VSO ensures that the compile callback is only executed once and that
the resulting address is cached for use by subsequent re-entries.
llvm-svn: 333490
Currently RTDyldObjectLinkingLayer makes it hard to support
JITEventListeners, which in turn makes debugging and profiling
JIT-generated code hard.
Supporting JITEventListeners at minimum requires a freed
callback (added here).
As listeners expect the ObjectFile to be passed as well, an adaptor
between RTDyldObjectLinkingLayer and JITEventListeners would currently
need to also maintain ObjectFiles for all loaded modules. To make that
less awkward, extend the callbacks to pass the ObjectFile to both
Finalized and Freed callbacks. That requires extending the lifetime
of the object file when callbacks are present.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44890
llvm-svn: 333227
Re-appply r333147, reverted in r333152 due to a pre-existing bug. As
D47308 has been merged in r333206, the OSX issue should now be
resolved.
In many cases JIT users will know in which module a symbol
resides. Avoiding searching other modules can be more efficient. It
also allows handling of duplicate symbol names between modules.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44889
llvm-svn: 333215
The lack of name mangling caused a unittest failure after r333147 (in
TestEagerIRCompilation), as OSX prefixes symbol names with '_'. The
lack of name mangling therefore leads to a NULL pointer being returned
and then called, hence the failure.
While it may look like it, this isn't an actual behavioral change, as
findSymbolIn() previously was not exposed externally and was essentially
dead code, which explains why nobody noticed the issue previously.
Reviewers: lhames
Reviewed By: lhames
Subscribers: chandlerc, llvm-commits
Differential Revision: https://reviews.llvm.org/D47308
llvm-svn: 333206
This reverts r333147 until https://reviews.llvm.org/D47308 is ready to
be reviewed. r333147 exposed a behavioural difference between
OrcCBindingsStack::findSymbolIn() and OrcCBindingsStack::findSymbol(),
where only the latter does name mangling. After r333147 that causes a
test failure on OSX, because the new test looks for main using
findSymbolIn() but the mangled name is _main.
llvm-svn: 333152
In many cases JIT users will know in which module a symbol
resides. Avoiding searching other modules can be more efficient. It
also allows handling of duplicate symbol names between modules.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44889
llvm-svn: 333147
to a base class (IRMaterializationUnit).
The new class, IRMaterializationUnit, provides a convenient base for any client
that wants to write a materializer for LLVM IR.
llvm-svn: 332993
Also tightens the behavior of ExecutionSession::failQuery. Queries can usually
only be failed by marking a symbol as failed-to-materialize, but
ExecutionSession::failQuery provides a second route, and both routes may be
executed from different threads. In the case that a query has already been
failed due to a materialization error, ExecutionSession::failQuery will
direct the error to ExecutionSession::reportError instead.
llvm-svn: 332898
The lookup function provides blocking symbol resolution for JIT clients (not
layers themselves) so it does not need to track symbol dependencies via a
MaterializationResponsibility.
llvm-svn: 332897
notifyFailed method rather than passing in an error generator.
VSO::notifyFailed is responsible for notifying queries that they will not
succeed due to error. In practice the queries don't care about the details
of the failure, just the fact that a failure occurred for some symbols.
Having VSO::notifyFailed take care of this simplifies the interface.
llvm-svn: 332666
VSOs now track dependencies for materializing symbols. Each symbol must have its
dependencies registered with the VSO prior to finalization. Usually this will
involve registering the dependencies returned in
AsynchronousSymbolQuery::ResolutionResults for queries made while linking the
symbols being materialized.
Queries against symbols are notified that a symbol is ready once it and all of
its transitive dependencies are finalized, allowing compilation work to be
broken up and moved between threads without queries returning until their
symbols are fully safe to access / execute.
Related utilities (VSO, MaterializationUnit, MaterializationResponsibility) are
updated to support dependence tracking and more explicitly track responsibility
for symbols from the point of definition until they are finalized.
llvm-svn: 332541
The DEBUG() macro is very generic so it might clash with other projects.
The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change the docs, as the regex doesn't match them.
In the transition period the DEBUG() macro is still present and aliased
to the LLVM_DEBUG() one.
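A typical use site after the rename looks like this (standard debug-output pattern; the pass name "my-pass" is arbitrary):

#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

#define DEBUG_TYPE "my-pass"

using namespace llvm;

static void traceValue(int Value) {
  // Printed only in assertion-enabled builds when -debug (or
  // -debug-only=my-pass) is passed; compiled out otherwise.
  LLVM_DEBUG(dbgs() << "traceValue: Value = " << Value << "\n");
}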
Differential Revision: https://reviews.llvm.org/D43624
llvm-svn: 332240
Previously Thumb bits were only checked for external relocations (Thumb to ARM
code and vice versa). This patch adds detection for Thumb callees in the same
section as the (also Thumb) caller.
The MachO/Thumb test case is updated to cover this, and redundant checks
(handled by the MachO/ARM test) are removed.
llvm-svn: 331838
This is a follow-up to r331272.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
https://reviews.llvm.org/D46290
llvm-svn: 331275
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46290
llvm-svn: 331272
See r331124 for how I made a list of files missing the include.
I then ran this Python script:
for f in open('filelist.txt'):
f = f.strip()
fl = open(f).readlines()
found = False
for i in xrange(len(fl)):
p = '#include "llvm/'
if not fl[i].startswith(p):
continue
if fl[i][len(p):] > 'Config':
fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
found = True
break
if not found:
print 'not found', f
else:
open(f, 'w').write(''.join(fl))
and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.
No intended behavior change.
llvm-svn: 331184
This forces these operations to be carried out via a
MaterializationResponsibility instance, ensuring responsibility is explicitly
tracked.
llvm-svn: 330356
materializing function definitions.
MaterializationUnit instances are responsible for resolving and finalizing
symbol definitions when their materialize method is called. By contract, the
MaterializationUnit must materialize all definitions it is responsible for and
no others. If it can not materialize all definitions (because of some error)
then it must notify the associated VSO about each definition that could not be
materialized. The MaterializationResponsibility class tracks this
responsibility, asserting that all required symbols are resolved and finalized,
and that no extraneous symbols are resolved or finalized. In the event of an
error it provides a convenience method for notifying the VSO about each
definition that could not be materialized.
llvm-svn: 330142
notifyMaterializationFailed.
The notifyMaterializationFailed method can determine which error to raise by
looking at which queue the pending queries are in (resolution or finalization).
llvm-svn: 330141
Summary: As discussed in https://reviews.llvm.org/D45606, it makes more sense to name the class as SmallVectorMemoryBuffer
Reviewers: bkramer, dblaikie
Reviewed By: dblaikie
Subscribers: mehdi_amini, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D45661
llvm-svn: 330107
Summary:
Since the class is used by both MCJIT and LTO, it makes more sense to move it to Support lib.
This is a follow up patch to r329929 and https://reviews.llvm.org/D45244
Reviewers: bkramer, dblaikie
Reviewed By: bkramer
Subscribers: mehdi_amini, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D45606
llvm-svn: 330093
Functions in different objects may use different TOCs, so calls between such
functions should use the global entry point of the callee which updates the
TOC pointer.
This should fix a bug that the Numba developers encountered (see
https://github.com/numba/numba/issues/2451).
Patch by Olexa Bilaniuk. Thanks Olexa!
No RuntimeDyld checker test case yet as I am not familiar enough with how
RuntimeDyldELF fixes up call-sites, but I do not want to hold up landing
this. I will continue to work on it and see if I can rope some powerpc
experts in.
llvm-svn: 329335
Previously this crashed because a null functor (returned by
createLocalIndirectStubsManagerBuilder() on platforms without
indirection support) was unconditionally invoked.
Patch by Andres Freund. Thanks Andres!
llvm-svn: 328687
This includes llvm-c/TargetMachine.h which is logically part of
libTarget (since libTarget implements llvm-c/TargetMachine.h's
functions).
llvm-svn: 328394
operation all-or-nothing, rather than allowing materialization on a per-symbol
basis.
This addresses a shortcoming of per-symbol materialization: If a
MaterializationUnit (/SymbolSource) wants to materialize more symbols than
requested (which is likely: most materializers will want to materialize whole
modules) then it needs a way to notify the symbol table about the extra symbols
being materialized. This process (checking what has been requested against what
is being provided and notifying the symbol table about the difference) has to
be repeated at every level of the JIT stack. Making materialization
all-or-nothing eliminates this issue, simplifying both materializer
implementations and the symbol table (VSO class) API. The cost is that
per-symbol materialization (e.g. for individual symbols in a module) now
requires multiple MaterializationUnits.
llvm-svn: 327946
This reverts commit r327566, it breaks
test/ExecutionEngine/OrcMCJIT/test-global-ctors.ll.
The test doesn't crash with a stack trace, unfortunately. It merely
returns 1 as the exit code.
ASan didn't produce a report, and I reproduced this on my Linux machine
and Windows box.
llvm-svn: 327576
Layer implementations typically mutate module state, and this is better
reflected by having layers own the Module they are operating on.
llvm-svn: 327566
The Error locals need to be protected by a mutex. (This could be fixed by
having the promises / futures contain Expected and Error values, but
MSVC's future implementation does not support this yet).
Hopefully this will fix some of the errors seen on the builders due to
r327474.
llvm-svn: 327477
This can be used to extract the symbol table from a RuntimeDyld instance prior
to disposing of it.
This patch also updates RTDyldObjectLinkingLayer to use the new method, rather
than requesting symbols one at a time via getSymbol.
llvm-svn: 327476
The lookup function takes a list of VSOs, a set of symbol names (or just one
symbol name) and a materialization function object. It returns an
Expected<SymbolMap> (if given a set of names) or an Expected<JITEvaluatedSymbol>
(if given just one name). The lookup method constructs an
AsynchronousSymbolQuery for the given names, applies that query to each VSO in
the list in turn, and then blocks waiting for the query to complete. If
threading is enabled then the materialization function object can be used to
execute the materialization on different threads. If threading is disabled the
MaterializeOnCurrentThread utility must be used.
llvm-svn: 327474
test case.
r326290 fixed the assertion for decodeAddend, but not encodeAddend. The
regression test failed to catch this because it was missing the
subsections_via_symbols flag, so the desired relocation was not applied.
This patch also fixes the formatting of the assertion from r326290.
llvm-svn: 326406
Emulated TLS is enabled by the llc flag -emulated-tls,
which is passed by the clang driver.
When llc is called explicitly or from other drivers like LTO, a
missing -emulated-tls flag would generate wrong TLS code for targets
that support only this mode.
Now use useEmulatedTLS() instead of Options.EmulatedTLS to decide whether
emulated TLS code should be generated.
Unit tests are modified to run with and without the -emulated-tls flag.
Differential Revision: https://reviews.llvm.org/D42999
llvm-svn: 326341