Doesn't build on Windows. The call to 'lookup' is ambiguous. Clang and
MSVC agree, anyway.
http://lab.llvm.org:8011/builders/clang-x64-windows-msvc/builds/787
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): error C2668: 'llvm::orc::ExecutionSession::lookup': ambiguous call to overloaded function
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(823): note: could be 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(llvm::ArrayRef<llvm::orc::JITDylib *>,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\include\llvm/ExecutionEngine/Orc/Core.h(817): note: or 'llvm::Expected<llvm::JITEvaluatedSymbol> llvm::orc::ExecutionSession::lookup(const llvm::orc::JITDylibSearchList &,llvm::orc::SymbolStringPtr)'
C:\b\slave\clang-x64-windows-msvc\build\llvm.src\unittests\ExecutionEngine\Orc\CoreAPIsTest.cpp(315): note: while trying to match the argument list '(initializer list, llvm::orc::SymbolStringPtr)'
llvm-svn: 345078
In the new scheme the client passes a list of (JITDylib&, bool) pairs, rather
than a list of JITDylibs. For each JITDylib the boolean indicates whether or not
to match against non-exported symbols (true means that they should be found,
false means that they should not). The MatchNonExportedInJD and MatchNonExported
parameters on lookup are removed.
The new scheme is more flexible, and easier to understand.
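For illustration, here is a minimal sketch of a lookup under the new scheme. This
is not code from the patch; the dylib and symbol names are hypothetical, and it
assumes the search list is a vector of (JITDylib*, bool) pairs and that symbol
names are interned via the ExecutionSession:

// JD1 and JD2 are hypothetical JITDylibs registered with ExecutionSession ES.
// Each search entry pairs a JITDylib with a "match non-exported symbols" flag.
auto Sym = ES.lookup({{&JD1, true},    // search JD1, including non-exported symbols
                      {&JD2, false}},  // search JD2, exported symbols only
                     ES.intern("foo"));
if (!Sym)
  return Sym.takeError();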
This patch also updates JITDylib search orders to be lists of (JITDylib&, bool)
pairs to match the new lookup scheme. Error handling is also plumbed through
the LLJIT class to allow regression tests to fail predictably when a lookup from
a lazy call-through fails.
llvm-svn: 345077
This commit adds a 'Legacy' prefix to old ORC layers and utilities, and removes
the '2' suffix from the new ORC layers. If you wish to continue using the old
ORC layers you will need to add a 'Legacy' prefix to your classes. If you were
already using the new ORC layers you will need to drop the '2' suffix.
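To illustrate the renaming pattern with one layer (IRCompileLayer is used here
purely as an example):

// Before this commit:               After this commit:
//   IRCompileLayer   (old ORC)  ->  LegacyIRCompileLayer
//   IRCompileLayer2  (new ORC)  ->  IRCompileLayer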
The legacy layers will remain in-tree until the new layers reach feature
parity with them. This will involve adding support for removing code from the
new layers, and ensuring that performance is comparable.
llvm-svn: 344572
Renames:
JITDylib's setFallbackDefinitionGenerator method to setGenerator.
DynamicLibraryFallbackGenerator class to DynamicLibrarySearchGenerator.
ReexportsFallbackDefinitionGenerator to ReexportsGenerator.
llvm-svn: 344489
for the target machine.
This simplifies usage during setup of concurrent JIT stacks where the client
needs a DataLayout, but not a TargetMachine (TargetMachines are created on
the fly by the compile threads later).
llvm-svn: 343429
(1) Adds comments for the API.
(2) Removes the setArch method: this is redundant, as the setArchStr method on the
triple should be used instead.
(3) Turns EmulatedTLS on by default. This matches EngineBuilder's behavior.
llvm-svn: 343423
CompileOnDemandLayer2 now supports user-supplied partition functions (the
original CompileOnDemandLayer already supported these).
Partition functions are called with the list of requested global values
(i.e. global values that currently have queries waiting on them) and have an
opportunity to select extra global values to materialize at the same time.
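As a sketch, a trivial user-supplied partition function that materializes
exactly the requested set (the setPartitionFunction and GlobalValueSet
spellings below are assumptions based on the description above, not verbatim
API):

// Hypothetical: materialize only what was requested, nothing extra.
CODLayer.setPartitionFunction(
    [](llvm::orc::CompileOnDemandLayer2::GlobalValueSet Requested)
        -> llvm::Optional<llvm::orc::CompileOnDemandLayer2::GlobalValueSet> {
      return std::move(Requested);
    });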
Also adds testing infrastructure for the new feature to lli.
llvm-svn: 343396
(1) A const accessor for the LLVMContext held by a ThreadSafeContext.
(2) A const accessor for the ThreadSafeModules held by an IRMaterializationUnit.
(3) A const MaterializationResponsibility reference to IRTransformLayer2's
transform function. This makes IRTransformLayer2 useful for JIT debugging
(since it can inspect JIT state through the responsibility argument) as well
as program transformations.
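For example, a debugging transform that inspects JIT state and returns the
module unchanged might look roughly like this (the setTransform call and the
callback signature below are assumptions based on (3) above):

// Sketch: log the symbols currently being requested, then pass the module on.
TransformLayer.setTransform(
    [](llvm::orc::ThreadSafeModule TSM,
       const llvm::orc::MaterializationResponsibility &R)
        -> llvm::Expected<llvm::orc::ThreadSafeModule> {
      for (auto &Name : R.getRequestedSymbols())
        llvm::dbgs() << "Requested: " << *Name << "\n";
      return std::move(TSM);
    });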
llvm-svn: 343365
one SymbolLinkagePromoter utility.
SymbolLinkagePromoter renames anonymous and private symbols, and bumps all
linkages to at least global/hidden-visibility. Modules whose symbols have been
promoted by this utility can be decomposed into sub-modules without introducing
link errors. This is used by the CompileOnDemandLayer to extract single-function
modules for lazy compilation.
llvm-svn: 343257
Modifies lit to add a 'thread_support' feature that can be used in lit test
REQUIRES clauses. The thread_support flag is set if -DLLVM_ENABLE_THREADS=ON
and unset if -DLLVM_ENABLE_THREADS=OFF. The lit flag is used to disable the
multiple-compile-threads-basic.ll testcase when threading is disabled.
llvm-svn: 343122
This doesn't work well in builds configured with LLVM_ENABLE_THREADS=OFF,
causing the following assert when running
ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll:
lib/ExecutionEngine/Orc/Core.cpp:1748: Expected<llvm::JITEvaluatedSymbol>
llvm::orc::lookup(const llvm::orc::JITDylibList &, llvm::orc::SymbolStringPtr):
Assertion `ResultMap->size() == 1 && "Unexpected number of results"' failed.
> LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
> argument. If this is non-zero then a thread-pool will be created with the
> given number of threads, and compile tasks will be dispatched to the thread
> pool.
>
> To enable testing of this feature, two new flags are added to lli:
>
> (1) -compile-threads=N (N = 0 by default) controls the number of compile threads
> to use.
>
> (2) -thread-entry can be used to execute code on additional threads. For each
> -thread-entry argument supplied (multiple are allowed) a new thread will be
> created and the given symbol called. These additional thread entry points are
> called after static constructors are run, but before main.
llvm-svn: 343099
LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
argument. If this is non-zero then a thread-pool will be created with the
given number of threads, and compile tasks will be dispatched to the thread
pool.
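A rough sketch of the client-side setup (the Create signature and helper calls
below are assumptions about the API at this point, with illustrative values):

// Build an LLJIT instance that dispatches compile tasks to four threads.
auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
if (!JTMB)
  return JTMB.takeError();
auto DL = JTMB->getDefaultDataLayoutForTarget();
if (!DL)
  return DL.takeError();
auto J = llvm::orc::LLJIT::Create(std::move(*JTMB), std::move(*DL),
                                  /*NumCompileThreads=*/4);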
To enable testing of this feature, two new flags are added to lli:
(1) -compile-threads=N (N = 0 by default) controls the number of compile threads
to use.
(2) -thread-entry can be used to execute code on additional threads. For each
-thread-entry argument supplied (multiple are allowed) a new thread will be
created and the given symbol called. These additional thread entry points are
called after static constructors are run, but before main.
llvm-svn: 343058
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext are destructed, and providing the
client has not held on to a copy of the shared context pointer, the context will be
automatically destructed.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
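A minimal sketch of the per-module-context style described above (names are
illustrative; the constructor shapes are assumptions):

// One context per module, as recommended for concurrent compilation.
llvm::orc::ThreadSafeContext TSCtx(llvm::make_unique<llvm::LLVMContext>());
auto M = llvm::make_unique<llvm::Module>("my_module", *TSCtx.getContext());
// ... populate M ...
// The ThreadSafeModule shares ownership of the context, keeping it alive for
// as long as any module that uses it is alive.
llvm::orc::ThreadSafeModule TSM(std::move(M), TSCtx);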
llvm-svn: 343055
The addObjectFile method adds the given object file to the JIT session, making
its code available for execution.
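A sketch of the intended use (J is a hypothetical LLJIT instance and the
MemoryBuffer-based signature is an assumption):

// Load an object file from disk and hand it to the JIT session.
auto ObjBuf = llvm::MemoryBuffer::getFile("extra.o");
if (!ObjBuf)
  return llvm::errorCodeToError(ObjBuf.getError());
if (auto Err = J->addObjectFile(std::move(*ObjBuf)))
  return Err;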
Support for the -extra-object flag is added to lli when operating in
-jit-kind=orc-lazy mode to support testing of this feature.
llvm-svn: 340870
VSO was a little close to VDSO (an acronym on Linux for Virtual Dynamic Shared
Object) for comfort. It also risks giving the impression that instances of this
class could be shared between ExecutionSessions, which they can not.
JITDylib seems moderately less confusing, while still hinting at how this
class is intended to be used, i.e. as a JIT-compiled stand-in for a dynamic
library (code that would have been a dynamic library if you had wanted to
compile it ahead of time).
llvm-svn: 340084
This new JIT event listener supports generating profiling data for
the Linux 'perf' profiling tool, allowing it to generate function- and
instruction-level profiles.
Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes. Given that the listener has no
dependencies, it might be sensible to enable it by default once the
initial issues have been shaken out.
I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on Linux,
that should probably be changed.
Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.
Disregarding the previous comment, here's an example:
$ cat /tmp/expensive_loop.c
#include <stdbool.h>
#include <stdint.h>

bool stupid_isprime(uint64_t num)
{
  if (num == 2)
    return true;
  if (num < 1 || num % 2 == 0)
    return false;
  for (uint64_t i = 3; i < num / 2; i += 2) {
    if (num % i == 0)
      return false;
  }
  return true;
}

int main(int argc, char **argv)
{
  int numprimes = 0;
  for (uint64_t num = argc; num < 100000; num++) {
    if (stupid_isprime(num))
      numprimes++;
  }
  return numprimes;
}
$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll
$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1
$ perf inject --jit -i perf.data -o perf.jit.data
$ perf report -i perf.jit.data
- 92.59% lli jitted-5881-2.so [.] stupid_isprime
stupid_isprime
main
llvm::MCJIT::runFunction
llvm::ExecutionEngine::runFunctionAsMain
main
__libc_start_main
0x4bf6258d4c544155
+ 0.85% lli ld-2.27.so [.] do_lookup_x
And line-level annotations also work:
│ for(uint64_t i = 3; i < num / 2; i+= 2) {
│1 30: movq $0x3,-0x18(%rbp)
0.03 │1 38: mov -0x18(%rbp),%rax
0.03 │ mov -0x10(%rbp),%rcx
│ shr $0x1,%rcx
3.63 │ ┌──cmp %rcx,%rax
│ ├──jae 6f
│ │ if (num % i == 0)
0.03 │ │ mov -0x10(%rbp),%rax
│ │ xor %edx,%edx
89.00 │ │ divq -0x18(%rbp)
│ │ cmp $0x0,%rdx
0.22 │ │↓ jne 5f
│ │ return false;
│ │ movb $0x0,-0x1(%rbp)
│ │↓ jmp 73
│ │ }
3.22 │1 5f:│↓ jmp 61
│ │ for(uint64_t i = 3; i < num / 2; i+= 2) {
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D44892
llvm-svn: 337789
The verifier identified several modules that were broken due to incorrect
linkage on declarations. To fix this, CompileOnDemandLayer2::extractFunction
has been updated to change decls to external linkage.
llvm-svn: 336150
LLJIT is a prefabricated ORC-based JIT class that is meant to be the go-to
replacement for MCJIT. Unlike OrcMCJITReplacement (which will continue to be
supported) it is not API or bug-for-bug compatible, but targets the same
use cases: Simple, non-lazy compilation and execution of LLVM IR.
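To give a feel for the intended usage, here is a rough sketch of the non-lazy
flow (the factory, add, and lookup signatures, and the "entry" symbol, are
illustrative assumptions, not verbatim API):

// Create the JIT, add an IR module, then look up and call a function.
auto J = llvm::orc::LLJIT::Create(std::move(TM), DL); // host TargetMachine/DataLayout
if (!J)
  return J.takeError();
if (auto Err = (*J)->addIRModule(std::move(M)))       // M: module to be JIT'd
  return Err;
auto EntrySym = (*J)->lookup("entry");
if (!EntrySym)
  return EntrySym.takeError();
auto *Entry = (int (*)())(intptr_t)EntrySym->getAddress();
int Result = Entry();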
LLLazyJIT extends LLJIT with support for function-at-a-time lazy compilation,
similar to what was provided by LLVM's original (now long deprecated) JIT APIs.
This commit also contains some simple utility classes (CtorDtorRunner2,
LocalCXXRuntimeOverrides2, JITTargetMachineBuilder) to support LLJIT and
LLLazyJIT.
Both of these classes are works in progress. Feedback from JIT clients is very
welcome!
llvm-svn: 335670
Previously JITCompileCallbackManager only supported single-threaded code. This
patch embeds a VSO (see include/llvm/ExecutionEngine/Orc/Core.h) in the callback
manager. The VSO ensures that the compile callback is only executed once and that
the resulting address is cached for use by subsequent re-entries.
llvm-svn: 333490
The DEBUG() macro is very generic so it might clash with other projects.
The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change DOCS, as the regex doesn't match it.
In the transition period the DEBUG() macro is still present and aliased
to the LLVM_DEBUG() one.
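Usage is unchanged apart from the name; for example (the DEBUG_TYPE and message
are illustrative):

#include "llvm/Support/Debug.h"
#define DEBUG_TYPE "my-pass"
// Only compiled in +Asserts builds, and only printed when the matching
// -debug / -debug-only=my-pass option is given.
LLVM_DEBUG(llvm::dbgs() << "Visiting block " << BB.getName() << "\n");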
Differential Revision: https://reviews.llvm.org/D43624
llvm-svn: 332240
See r331124 for how I made a list of files missing the include.
I then ran this Python script:
for f in open('filelist.txt'):
    f = f.strip()
    fl = open(f).readlines()
    found = False
    for i in xrange(len(fl)):
        p = '#include "llvm/'
        if not fl[i].startswith(p):
            continue
        if fl[i][len(p):] > 'Config':
            fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
            found = True
            break
    if not found:
        print 'not found', f
    else:
        open(f, 'w').write(''.join(fl))
and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.
No intended behavior change.
llvm-svn: 331184
We have a few functions that virtually all commands want to run on
process startup/shutdown. This patch adds an InitLLVM class to do all of
that at once, so that we don't need to copy-n-paste boilerplate code
into each llvm command's main() function.
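With it, a typical tool's main() reduces to something like this (sketch):

#include "llvm/Support/InitLLVM.h"

int main(int argc, char **argv) {
  // Runs the usual startup/shutdown hooks: stack-trace printing on crashes,
  // UTF-8 re-encoding of argv on Windows, and llvm_shutdown at exit.
  llvm::InitLLVM X(argc, argv);
  // ... tool-specific code ...
  return 0;
}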
Differential Revision: https://reviews.llvm.org/D45602
llvm-svn: 330046
These aren't the .def style files used in LLVM that require a macro
defined before their inclusion - they're just basic non-modular includes
to stamp out command line flag variables.
llvm-svn: 329840
llc, opt, and clang can all autodetect the CPU and supported features; as far as I could tell, lli cannot.
This patch uses getCPUStr(), introduces a new getCPUFeatureList(), and uses those in lli in place of MCPU and MAttrs.
Ideally, we would merge getCPUFeatureList and getCPUFeatureStr, but opt and llc need a string and lli wants a list. Maybe we should just return the SubtargetFeature object and let the caller decide what it needs?
Differential Revision: https://reviews.llvm.org/D41833
llvm-svn: 322100
This isn't a real header: it includes static functions and had
external-linkage variables (though this change makes them static, since
that's what they should be), so it can't be included more than once in a
program.
llvm-svn: 319082
code duplication in the client, and improve error propagation.
This patch moves the OrcRemoteTarget rpc::Function declarations from
OrcRemoteTargetRPCAPI into their own namespaces under llvm::orc::remote so that
they can be used in new contexts (in particular, a remote-object-file adapter
layer that I will commit shortly).
Code duplication in OrcRemoteTargetClient (especially in loops processing the
code, rw-data and ro-data allocations) is removed by moving the loop bodies
into their own functions.
Error propagation is (slightly) improved by adding an ErrorReporter functor to
the OrcRemoteTargetClient -- Errors that can't be returned (because they occur
in destructors, or behind stable APIs that don't provide error returns) can be
sent to the ErrorReporter instead. Some methods in the Client API are also
changed to make better use of the Expected class: returning Expected<T>s rather
than returning Errors and taking T&s to store the results.
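The Expected-based style referred to above looks roughly like this (the
readMem function below is illustrative, not an actual client method):

// Before: an Error return plus an out-parameter for the result.
//   Error readMem(std::vector<uint8_t> &Result, JITTargetAddress Src, uint64_t Size);
// After: the result and the error travel together in one Expected value.
llvm::Expected<std::vector<uint8_t>> readMem(llvm::JITTargetAddress Src,
                                             uint64_t Size);

// Caller side:
if (auto Bytes = readMem(Src, Size))
  consume(*Bytes);            // success: dereference to get the value
else
  return Bytes.takeError();   // failure: propagate the Error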
llvm-svn: 312500
IMHO it is an antipattern to have an enum value that is Default.
At any given piece of code it is not clear whether we have to handle
Default or whether it has already been mapped to a concrete value. In this
case in particular, only the target can do the mapping and it is nice
to make sure it is always done.
This deletes the two default enum values of CodeModel and uses an
explicit Optional<CodeModel> when it is possible that it is
unspecified.
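In code, the pattern looks like this (the helper below is illustrative):

// Callers pass None when they have no opinion; only the target maps that to
// a concrete value.
llvm::CodeModel::Model
selectCodeModel(llvm::Optional<llvm::CodeModel::Model> CM) {
  if (CM)
    return *CM;                    // explicitly requested by the caller
  return llvm::CodeModel::Small;   // illustrative target default
}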
llvm-svn: 309911
From a user perspective, it forces the use of an annoying nullptr to mark the end of the varargs, and there's no type checking on the arguments.
The variadic template is an obvious solution to both issues.
Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299949
Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.
From a user perspective, it forces the use of an annoying nullptr
to mark the end of the varargs, and there's no type checking on the
arguments. The variadic template is an obvious solution to both
issues.
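Sketched with an illustrative callee (the function name and types are
hypothetical):

// Old C-style vararg form: untyped, and required a trailing nullptr sentinel.
//   M.getOrInsertFunction("my_func", Int32Ty, Int8PtrTy, nullptr);
// New variadic-template form: type-checked, no sentinel needed.
auto *Callee = M.getOrInsertFunction("my_func", Int32Ty, Int8PtrTy);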
llvm-svn: 299925
Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.
From a user perspective, it forces the use of an annoying nullptr
to mark the end of the varargs, and there's no type checking on the
arguments. The variadic template is an obvious solution to both
issues.
Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>
Differential Revision: https://reviews.llvm.org/D31070
llvm-svn: 299699
(1) Add support for function key negotiation.
The previous version of the RPC required both sides to maintain the same
enumeration for functions in the API. This means that any version skew between
the client and server would result in communication failure.
With this version of the patch, functions (and serializable types) are defined
with string names, and the derived function signature strings are used to
negotiate the actual function keys (which are used for efficient call
serialization). This allows clients to connect to any server that supports a
superset of the API (based on the function signatures it supports). A sketch of
a function definition under this scheme appears after this list.
(2) Add a callAsync primitive.
The callAsync primitive can be used to install a return value handler that will
run as soon as the RPC function's return value is sent back from the remote.
(3) Launch policies for RPC function handlers.
The new addHandler method, which installs handlers for RPC functions, takes two
arguments: (1) the handler itself, and (2) an optional "launch policy". When the
RPC function is called, the launch policy (if present) is invoked to actually
launch the handler. This allows the handler to be spawned on a background
thread, or added to a work list. If no launch policy is used, the handler is run
on the server thread itself. This should only be used for short-running
handlers, or entirely synchronous RPC APIs.
(4) Zero-cost cross-type serialization.
You can now define serialization from any type to a different "wire" type. For
example, this allows you to call an RPC function that's defined to take a
std::string while passing a StringRef argument. If a serializer from StringRef
to std::string has been defined for the channel type this will be used to
serialize the argument without having to construct a std::string instance.
This allows buffer reference types to be used as arguments to RPC calls without
requiring a copy of the buffer to be made.
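As promised above, a sketch of how a function is declared under the new scheme
(the class and method shapes below are assumptions about the API, not
verbatim):

// The function's identity is its name string plus its derived signature
// string; compact numeric keys are negotiated when the connection is set up.
class AllocateMemory
    : public llvm::orc::rpc::Function<AllocateMemory,
                                      uint64_t(uint64_t /*Size*/)> {
public:
  static const char *getName() { return "AllocateMemory"; }
};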
llvm-svn: 286620
Summary:
Split ReaderWriter.h which contains the APIs into both the BitReader and
BitWriter libraries into BitcodeReader.h and BitcodeWriter.h.
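Clients now include whichever header matches the half of the API they use,
e.g.:

#include "llvm/Bitcode/BitcodeReader.h" // parseBitcodeFile and friends
#include "llvm/Bitcode/BitcodeWriter.h" // WriteBitcodeToFile and friends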
This is to address Chandler's concern about sharing the same API header
between multiple libraries (BitReader and BitWriter). That concern is
why we create a single bitcode library in our downstream build of clang,
which led to r286297 being reverted as it added a dependency that
created a cycle only when there is a single bitcode library (not two as
in upstream).
Reviewers: mehdi_amini
Subscribers: dlj, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D26502
llvm-svn: 286566
../tools/llvm-extract/llvm-extract.cpp: In function ‘int main(int, char**)’:
warning: ISO C++ forbids zero-size array ‘argv’ [-Wpedantic]
GCC reference bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61259
llvm-svn: 286396