Summary:
The capability was lost with D10429, which moved the personality function from the landing pad to the function level. Since then there has been no way to get or set the personality function from the C API, which is a problem.
Note that this whole situation could have been avoided by improving the C API test coverage, as started in D10725.
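For reference, a minimal sketch of function-level personality accessors through
the C API (names as in the LLVM-C Core header; that this patch re-adds exactly
this pair is an assumption):

  #include "llvm-c/Core.h"

  /* Fn is a function; Personality is e.g. a reference to __gxx_personality_v0. */
  void setAndReadPersonality(LLVMValueRef Fn, LLVMValueRef Personality) {
    LLVMSetPersonalityFn(Fn, Personality);           /* set at function level */
    LLVMValueRef Current = LLVMGetPersonalityFn(Fn); /* read it back */
    (void)Current;
  }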
Reviewers: chandlerc, bogner, majnemer, andrew.w.kaylor, rafael, rnk, axw
Subscribers: rafael, llvm-commits
Differential Revision: http://reviews.llvm.org/D10946
llvm-svn: 242104
Summary:
This introduces the new instructions necessary to implement MSVC-compatible
exception handling support. Most of the middle-end has not been audited or
updated to take them into account, and none of the back-end has.
Reviewers: rnk, JosephTremoulet, reames, nlewycky, rjmccall
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11041
llvm-svn: 241888
Originally added in r139314.
Back then it didn't actually get the address; it got whatever value the
relocation used: an address or an offset.
The values in different object formats are:
* MachO: Always an offset.
* COFF: Always an address, but when talking about the virtual address of
sections the spec says: "for simplicity, compilers should set this to zero".
* ELF: An offset for .o files and an address for .so files. In the case of the
.so, the relocation is not linked to any section (sh_info is 0), so we can't
really compute an offset.
Some API mappings would be:
* Use getAddress for everything. This would be quite cumbersome. To compute the
address, ELF has to follow sh_info, which can be corrupted, and therefore the
method would have to return an ErrorOr. The address of the section is also the
same for every relocation in a section, so we shouldn't have to check the error
and fetch the value for every relocation.
* Use a getValue and leave it up to the user to know what it is getting.
* Use a getOffset (see the sketch after this list) and:
* Assert for dynamic ELF objects. That is a very peculiar case and it is
probably fair to ask any tool that wants to support it to use ELF.h. The
only tool we have that reads those (llvm-readobj) already does that. The
only other use case I can think of is a dynamic linker.
* Check that COFF .obj files have sections with a zero virtual address. If
it turns out that some assembler/compiler produces nonzero values, we can
change COFFObjectFile::getRelocationOffset to subtract it. Given the COFF
format, this can be done without the need for ErrorOr.
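For illustration, a minimal consumer of relocation offsets through the
llvm::object C++ API (a sketch; the exact accessor signatures at this
revision are an assumption):

  #include "llvm/Object/ObjectFile.h"
  using namespace llvm::object;

  // Walk every relocation and use only its section-relative offset,
  // never an absolute address.
  void walkRelocations(const ObjectFile &Obj) {
    for (const SectionRef &Sec : Obj.sections())
      for (const RelocationRef &Rel : Sec.relocations()) {
        uint64_t Offset = Rel.getOffset(); // offset within Sec
        (void)Offset;
      }
  }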
The getRelocationAddress method was never implemented for COFF. It also
had exactly one use in a very peculiar case: a shortcut for adding the
section value to a pcrel reloc on MachO.
Given that, I don't expect that there is any use of it out there through the C API.
If that is not the case, let me know and I will add it back with the implementation
inlined and do a proper deprecation.
llvm-svn: 241450
This is needed for COFF linkers to distinguish between weak external aliases
and regular symbols with LLVM weak linkage, which are represented as strong
symbols in COFF.
llvm-svn: 241389
Specifically, remove the dependent library interface and replace the existing
linker option interface with a new one that returns a single list of flags.
Differential Revision: http://reviews.llvm.org/D10820
llvm-svn: 241018
This change unifies how LTOModule and the backend obtain linker flags
for globals: via a new TargetLoweringObjectFile member function named
emitLinkerFlagsForGlobal. A new function LTOModule::getLinkerOpts() returns
the list of linker flags as a single concatenated string.
This change affects the C libLTO API: the function lto_module_get_*deplibs now
exposes an empty list, and lto_module_get_*linkeropts exposes a single element
which combines the contents of all observed flags. libLTO should never have
tried to parse the linker flags; it is the linker's job to do so. Because
linkers will need to be able to parse flags in regular object files, it
makes little sense for libLTO to have a redundant mechanism for doing so.
The new API is compatible with the old one. It is valid for a user to specify
multiple linker flags in a single pragma directive like this:
  #pragma comment(linker, "/defaultlib:foo /defaultlib:bar")
The previous implementation would not have exposed
either flag via lto_module_get_*deplibs (as the test in
TargetLoweringObjectFileCOFF::getDepLibFromLinkerOpt was case sensitive)
and would have exposed "/defaultlib:foo /defaultlib:bar" as a single flag via
lto_module_get_*linkeropts. This may have been a bug in the implementation,
but it does give us a chance to fix the interface.
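To make the division of labor concrete, here is a hypothetical sketch of the
splitting a linker now has to do itself (plain string handling; the helper
name is made up, and quoting rules are ignored for brevity):

  #include <sstream>
  #include <string>
  #include <vector>

  // Split "/defaultlib:foo /defaultlib:bar" into individual flags.
  std::vector<std::string> splitLinkerOpts(const std::string &Opts) {
    std::istringstream In(Opts);
    std::vector<std::string> Flags;
    for (std::string Flag; In >> Flag;)
      Flags.push_back(Flag);
    return Flags;
  }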
Differential Revision: http://reviews.llvm.org/D10548
llvm-svn: 241010
any tests, and I don't even know how to run the tests. This seems like a
minimal change to make them work again, although I can't really verify it
at this point. Additionally, it probably makes sense to propagate the
personality parameter removal further.
llvm-svn: 240010
constants in the commented-out part of the LLVMAttribute enum. Add tests that verify
that the safestack attribute is only allowed as a function attribute.
llvm-svn: 239772
This represents some of the functionality we expose in the llvmlite Python
binding.
Patch by Antoine Pitrou
Differential Revision: http://reviews.llvm.org/D10222
llvm-svn: 239411
Reverse libLTO's default behaviour for preserving use-list order in
bitcode, and add API for controlling it. The default setting is now
`false` (don't preserve them), which is consistent with `clang`'s
default behaviour.
Users of libLTO should call `lto_codegen_set_should_embed_uselists(CG, true)`
prior to calling `lto_codegen_write_merged_modules()` whenever the
output file isn't part of the production workflow in order to reproduce
results with subsequent calls to `llc`.
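In code, the intended ordering looks like this (a sketch, assuming a live
lto_code_gen_t CG; error handling omitted, and the output path is
hypothetical):

  /* Embed use-list order so the saved module reproduces results under llc. */
  lto_codegen_set_should_embed_uselists(CG, true);
  lto_codegen_write_merged_modules(CG, "merged.bc");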
(I haven't added tests since `llvm-lto` (the test tool for LTO) doesn't
support bitcode output, and even if it did: there isn't actually a good
way to test whether a tool has passed the flag. If the order is already
"natural" (if the order will already round-trip) then no use-list
directives are emitted at all. At some point I'll circle back to add
tests to `llvm-as` (etc.) that they actually respect the flag, at which
point I can somehow add a test here as well.)
llvm-svn: 235943
These look like copy/paste errors, and shouldn't have the "prior to"
qualifier. Each API was introduced at the given values of
`LTO_API_VERSION`. The "prior to" in other doxygen comments is because
I couldn't easily differentiate between versions 1 and 2 when I added
these comments.
llvm-svn: 235925
When debugging LTO issues with ld64, we use -save-temps to save the merged
optimized bitcode file, then invoke ld64 again on the single bitcode file.
The saved bitcode file is already internalized, so we can call
lto_codegen_set_should_internalize and skip running internalization again.
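A sketch of that debugging flow (assuming a live lto_code_gen_t CG and a saved
module that was already internalized):

  /* Internalization already ran when the bitcode was saved, so skip it now. */
  lto_codegen_set_should_internalize(CG, false);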
rdar://20227235
llvm-svn: 235211
Summary:
If a pointer is marked as dereferenceable_or_null(N), LLVM assumes it
is either `null` or `dereferenceable(N)` or both. This change only
introduces the attribute and adds a token round-trip test case for
`llvm-as` / `llvm-dis`. It does not hook up other parts of the optimizer to
actually exploit the attribute -- those changes will come later.
For pointers in address space 0, `dereferenceable(N)` is now exactly
equivalent to `dereferenceable_or_null(N)` && `nonnull`. For other
address spaces, `dereferenceable(N)` is potentially weaker than
`dereferenceable_or_null(N)` && `nonnull` (since we could have a null
`dereferenceable(N)` pointer).
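For example, in IR (a minimal illustration; the declarations are made up):

  ; In address space 0, these two forms are equivalent:
  declare void @f(i8* dereferenceable(8))
  declare void @g(i8* dereferenceable_or_null(8) nonnull)
  ; while this pointer may be null:
  declare void @h(i8* dereferenceable_or_null(8))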
The motivating case for this change is Java (and other managed
languages), where pointers are either `null` or dereferenceable up to
some usually known-at-compile-time constant offset.
Reviewers: rafael, hfinkel
Reviewed By: hfinkel
Subscribers: nicholas, llvm-commits
Differential Revision: http://reviews.llvm.org/D8650
llvm-svn: 235132
Add the enum "LLVMLinkerMode" back for backwards compatibility and add the
linker mode parameter back to the "LLVMLinkModules" function. The parameter
is ignored and has no effect.
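The restored C surface looks roughly like this (a sketch; the exact enumerator
spellings at this revision are an assumption):

  typedef enum {
    LLVMLinkerDestroySource = 0, /* the only supported behavior */
    LLVMLinkerPreserveSource = 1 /* accepted, but ignored */
  } LLVMLinkerMode;

  LLVMBool LLVMLinkModules(LLVMModuleRef Dest, LLVMModuleRef Src,
                           LLVMLinkerMode Unused, char **OutMessage);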
Patch provided by: Filip Pizlo
Reviewed by: Rafael and Sean
llvm-svn: 230988
When debugging LTO issues with ld64, we use -save-temps to save the merged
optimized bitcode file, then invoke ld64 again on the single bitcode file to
speed up debugging code generation passes and ld64 stuff after code generation.
Linking a single bitcode file via lto_codegen_add_module will generate a
bitcode file that differs from the single input. With the newly-added
lto_codegen_set_module, we can make sure the destination module is the same
as the input.
lto_codegen_set_module transfers ownership of the module to the code
generator.
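A sketch of the intended use (assuming a live lto_code_gen_t CG; the path is
hypothetical):

  lto_module_t M = lto_module_create("merged.opt.bc");
  lto_codegen_set_module(CG, M); /* CG now owns M: do not lto_module_dispose it */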
rdar://19024554
llvm-svn: 230290
BDCE is a bit-tracking dead code elimination pass. It is based on ADCE (the
"aggressive DCE" pass), with the added capability to track dead bits of integer
valued instructions and remove those instructions when all of the bits are
dead.
Currently, it does not actually do this all-bits-dead removal, but rather
replaces the instruction's uses with a constant zero and lets instcombine (and
the later run of ADCE) do the rest. Because we essentially get a run of ADCE
"for free" while tracking the dead bits, we also do what ADCE does and remove
actually-dead instructions (this includes instructions newly trivially dead
because all of their bits were dead, though not all such instructions can be
removed).
The motivation for this is a case like:
  int __attribute__((const)) foo(int i);
  int bar(int x) {
    x |= (4   & foo(5));
    x |= (8   & foo(3));
    x |= (16  & foo(2));
    x |= (32  & foo(1));
    x |= (64  & foo(0));
    x |= (128 & foo(4));
    return x >> 4;
  }
As it turns out, if you order the bit-field insertions so that all of the dead
ones come last, then instcombine will remove them. However, if you pick some
other order (such as the one above), the fact that some of the calls to foo()
are useless is not locally obvious, and we don't remove them (without this
pass).
I did a quick compile-time overhead check using sqlite from the test suite
(Release+Asserts). BDCE took ~0.4% of the compilation time (making it about
twice as expensive as ADCE).
I've not looked at why yet, but we eliminate instructions due to having
all-dead bits in:
External/SPEC/CFP2006/447.dealII/447.dealII
External/SPEC/CINT2006/400.perlbench/400.perlbench
External/SPEC/CINT2006/403.gcc/403.gcc
MultiSource/Applications/ClamAV/clamscan
MultiSource/Benchmarks/7zip/7zip-benchmark
llvm-svn: 229462
lto_codegen_compile_optimized. Also add lto_api_version.
Before this commit, we can only dump the optimized bitcode after running
lto_codegen_compile, but the result includes the effects of running codegen
passes; one example is the StackProtector pass. We will get an assertion
failure when running llc on the optimized bitcode, because StackProtector is
effectively run twice.
After splitting lto_codegen_compile, the linker can choose to dump the bitcode
before running lto_codegen_compile_optimized.
lto_api_version is added so ld64 can check for runtime-availability of the new
API.
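A sketch of the resulting flow (assuming a live lto_code_gen_t CG; the version
constant and output path are hypothetical, and the lto_codegen_optimize
spelling of the first half of the split is an assumption):

  if (lto_api_version() >= 11 /* hypothetical minimum */) {
    lto_codegen_optimize(CG);                              /* IR passes only */
    lto_codegen_write_merged_modules(CG, "merged.opt.bc"); /* clean dump */
    size_t Len;
    const void *Obj = lto_codegen_compile_optimized(CG, &Len); /* codegen only */
    (void)Obj;
  }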
rdar://19565500
llvm-svn: 228000
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532. Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.
I have a follow-up patch prepared for `clang`. If this breaks other
sub-projects, I apologize in advance :(. Help me compile it on Darwin and
I'll try to fix it. FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.
This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.
Here's a quick guide for updating your code:
- `Metadata` is the root of a class hierarchy with three main classes:
`MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from
the `Value` class hierarchy. It is typeless -- i.e., instances do
*not* have a `Type`.
- `MDNode`'s operands are all `Metadata *` (instead of `Value *`).
- `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.
If you're referring solely to resolved `MDNode`s -- post graph
construction -- just use `MDNode*`.
- `MDNode` (and the rest of `Metadata`) have only limited support for
`replaceAllUsesWith()`.
As long as an `MDNode` is pointing at a forward declaration -- the
result of `MDNode::getTemporary()` -- it maintains a side map of its
uses and can RAUW itself. Once the forward declarations are fully
resolved, RAUW support is dropped on the ground. This means that
uniquing collisions on changing operands cause nodes to become
"distinct". (This already happened fairly commonly, whenever an
operand went to null.)
If you're constructing complex (non self-reference) `MDNode` cycles,
you need to call `MDNode::resolveCycles()` on each node (or on a
top-level node that somehow references all of the nodes). Also,
don't do that. Metadata cycles (and the RAUW machinery needed to
construct them) are expensive.
- An `MDNode` can only refer to a `Constant` through a bridge called
`ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).
As a side effect, accessing an operand of an `MDNode` that is known
to be, e.g., `ConstantInt`, takes three steps: first, cast from
`Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
third, cast down to `ConstantInt`.
The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
metadata schema owners transition away from using `Constant`s when
the type isn't important (and they don't care about referring to
`GlobalValue`s).
In the meantime, I've added transitional API to the `mdconst`
namespace that matches semantics with the old code, in order to
avoid adding the error-prone three-step equivalent to every call
site. If your old code was:
  MDNode *N = foo();
  bar(isa <ConstantInt>(N->getOperand(0)));
  baz(cast <ConstantInt>(N->getOperand(1)));
  bak(cast_or_null <ConstantInt>(N->getOperand(2)));
  bat(dyn_cast <ConstantInt>(N->getOperand(3)));
  bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));
you can trivially match its semantics with:
  MDNode *N = foo();
  bar(mdconst::hasa <ConstantInt>(N->getOperand(0)));
  baz(mdconst::extract <ConstantInt>(N->getOperand(1)));
  bak(mdconst::extract_or_null <ConstantInt>(N->getOperand(2)));
  bat(mdconst::dyn_extract <ConstantInt>(N->getOperand(3)));
  bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));
and when you transition your metadata schema to `MDInt`:
  MDNode *N = foo();
  bar(isa <MDInt>(N->getOperand(0)));
  baz(cast <MDInt>(N->getOperand(1)));
  bak(cast_or_null <MDInt>(N->getOperand(2)));
  bat(dyn_cast <MDInt>(N->getOperand(3)));
  bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));
- A `CallInst` -- specifically, intrinsic instructions -- can refer to
metadata through a bridge called `MetadataAsValue`. This is a
subclass of `Value` where `getType()->isMetadataTy()`.
`MetadataAsValue` is the *only* class that can legally refer to a
`LocalAsMetadata`, which is a bridged form of non-`Constant` values
like `Argument` and `Instruction`. It can also refer to any other
`Metadata` subclass.
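For example (a sketch; assumes an LLVMContext &Ctx, an MDNode *N, and an
Instruction *I in scope):

  // Wrap metadata as a Value, e.g. to pass it to an intrinsic call:
  Value *V = MetadataAsValue::get(Ctx, N); // V->getType()->isMetadataTy()
  // Bridge a local value into metadata (yields a LocalAsMetadata):
  Metadata *MD = ValueAsMetadata::get(I);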
(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)
llvm-svn: 223802
Add API for specifying which `LLVMContext` each `lto_module_t` and
`lto_code_gen_t` is in.
In particular, this enables the following flow:
  for (auto &File : Files) {
    lto_module_t M = lto_module_create_in_local_context(File...);
    querySymbols(M);
    lto_module_dispose(M);
  }

  lto_code_gen_t CG = lto_codegen_create_in_local_context();
  for (auto &File : FilesToLink) {
    lto_module_t M = lto_module_create_in_codegen_context(File..., CG);
    lto_codegen_add_module(CG, M);
    lto_module_dispose(M);
  }
  lto_codegen_compile(CG);
  lto_codegen_write_merged_modules(CG, ...);
  lto_codegen_dispose(CG);
This flow has a few benefits.
- Only one module (two if you count the combined module in the code
generator) is in memory at a time.
- Metadata (and constants) from files that are parsed to query symbols
but not linked into the code generator don't pollute the global
context.
- The first for loop can be parallelized, since each module is in its
own context.
- When the code generator is disposed, the memory from LTO gets freed.
rdar://problem/18767512
llvm-svn: 221733
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant ((a + q) & b) == 0
expressions. Many of the simple cases we can get with ValueTracking, but we
still need something like this for the more complicated cases (such as those
with an offset) that require some algebra. Note that the optional third
argument of gcc's __builtin_assume_aligned provides exactly this kind of
'misalignment' offset, for which this kind of logic is necessary.
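For reference, the builtin with its misalignment argument (a minimal example):

  double sum2(double *p) {
    /* Asserts that (char *)p - 8 is 32-byte aligned, i.e. p == 32*k + 8. */
    double *q = (double *)__builtin_assume_aligned(p, 32, 8);
    return q[0] + q[1];
  }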
The primary motivation is to fixup alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could). Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are any assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).
llvm-svn: 217344
Add header guards to files that were missing guards. Remove #endif comments,
as they don't seem common in LLVM (we can easily add them back if we decide
they're useful).
Changes made by clang-tidy with minor tweaks.
llvm-svn: 215558
be deleted. This will be reapplied as soon as possible and before
the 3.6 branch date at any rate.
Approved by Jim Grosbach, Lang Hames, Rafael Espindola.
This reverts commits r215111, 215115, 215116, 215117, 215136.
llvm-svn: 215154
This commit adds scoped noalias metadata. The primary motivations for this
feature are:
1. To preserve noalias function attribute information when inlining
2. To provide the ability to model block-scope C99 restrict pointers
Neither of these two abilities are added here, only the necessary
infrastructure. In fact, there should be no change to existing functionality,
only the addition of new features. The logic that converts noalias function
parameters into this metadata during inlining will come in a follow-up commit.
What is added here is the ability to generally specify noalias memory-access
sets. Regarding the metadata, alias-analysis scopes are defined similarly to TBAA
nodes:
  !scope0 = metadata !{ metadata !"scope of foo()" }
  !scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
  !scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
  !scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
  !scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }
Loads and stores can be tagged with an alias-analysis scope, and also, with a
noalias tag for a specific scope:
  ... = load %ptr1, !alias.scope !{ !scope1 }
  ... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }
When evaluating an aliasing query, if one of the instructions is associated
with an alias.scope id that is identical to the noalias scope associated with
the other instruction, or is a descendant (in the scope hierarchy) of the
noalias scope associated with the other instruction, then the two memory
accesses are assumed not to alias.
Note that if the first element of the scope metadata is a string, then it can
be combined across functions and translation units. The string can be replaced
by a self-reference to create globally unique scope identifiers.
[Note: This overview is slightly stylized, since the metadata nodes really need
to just be numbers (!0 instead of !scope0), and the scope lists are also global
unnamed metadata.]
Existing noalias metadata in a callee is "cloned" for use by the inlined code.
This is necessary because the aliasing scopes are unique to each call site
(because of possible control dependencies on the aliasing properties). For
example, consider a function: foo(noalias a, noalias b) { *a = *b; } that gets
inlined into bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); } --
now, just because we know that a1 does not alias with b1 at the first call site,
and a2 does not alias with b2 at the second call site, we cannot let the
metadata from inlining these functions imply that a1 does not alias with b2.
llvm-svn: 213864
Merges equivalent loads on both sides of a hammock/diamond and hoists them
into the header.
Merges equivalent stores on both sides of a hammock/diamond and sinks them
to the footer.
This can enable if-conversion and improve tolerance of load misses and
store operand latencies.
llvm-svn: 213396
This attribute indicates that the parameter or return pointer is
dereferenceable. Practically speaking, loads from such a pointer within the
associated byte range are safe to speculatively execute. Such pointer
parameters are common in source languages (C++ references, for example).
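For example (a minimal illustration):

  ; Loads of up to 16 bytes from this argument may be speculated:
  declare void @use(i32* dereferenceable(16))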
llvm-svn: 213385
After a number of previous small iterations, the functions
llvm_start_multithreaded() and llvm_stop_multithreaded() have
been reduced essentially to no-ops. This change removes them
entirely.
Reviewed by: rnk, dblaikie
Differential Revision: http://reviews.llvm.org/D4216
llvm-svn: 211287