This option is a subset of -Bsymbolic-functions. It applies to STB_GLOBAL
STT_FUNC definitions.
The address of a vague linkage function (STB_WEAK STT_FUNC, e.g. an inline
function or a template instantiation) seen from within a shared object linked
with -Bsymbolic-functions may differ from the address seen from outside the
shared object. Such cases are uncommon. (ELF/Mach-O programs may use
`-fvisibility-inlines-hidden` to break such pointer equality. On Windows,
correct dllexport and dllimport are needed to make pointer equality work.
Windows link.exe enables /OPT:ICF by default so different inline functions may
have the same address.)
```
// a.cc -> a.o -> a.so (-Bsymbolic-functions)
inline void f() {}
void *g() { return (void *)&f; }
// b.cc -> b.o -> exe
// The address is different!
inline void f() {}
```
-Bsymbolic-non-weak-functions is a safer (C++-conforming) subset of
-Bsymbolic-functions that keeps such programs working.
Implementations usually emit a vague linkage definition in a COMDAT group. We
could detect the group (with more code) but I feel that we should just check
STB_WEAK for simplicity. A weak definition will thus serve as an escape hatch
for rare cases when users want interposition on definitions.
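For illustration, a hypothetical sketch of what the escape hatch looks like in source (the function names are made up):
```
// c.cc -> c.o -> c.so (-Bsymbolic-non-weak-functions)
void fast_path() {}                   // STB_GLOBAL STT_FUNC: bound within c.so
__attribute__((weak)) void hook() {}  // STB_WEAK: remains interposable
```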
GNU ld feature request: https://sourceware.org/bugzilla/show_bug.cgi?id=27871
Longer write-up: https://maskray.me/blog/2021-05-16-elf-interposition-and-bsymbolic
If Linux distributions migrate to protected non-vague-linkage external linkage
functions by default, the linker option can still be handy because it allows
rapid experiments without recompilation. Protected function addresses currently
have deep issues in GNU ld.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D102570
This is somewhat of a repeat of D66658 but for sections in PT_TLS
segments. Although such sections don't need to be aligned such that
address and offset are congruent modulo the page size, they do need
to be congruent modulo the segment alignment, otherwise the
whole PT_TLS will be unaligned. We therefore use the normal calculation
to determine the section's address within the PT_LOAD rather than
bailing out early due to being SHT_NOBITS.
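A minimal example, under assumed variable names, of a .tbss section that exercises this path:
```
// t.cc -> t.o
// The SHT_NOBITS .tbss piece below must keep its address and offset congruent
// modulo the PT_TLS alignment (64 here), or the whole segment ends up unaligned.
thread_local int counter = 1;               // .tdata (SHT_PROGBITS)
alignas(64) thread_local char scratch[128]; // .tbss (SHT_NOBITS), 64-byte aligned
```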
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D106987
clang may place the dynamic initialization for an explicitly specialized class
template static data member in a comdat.
Such an in-comdat SHT_INIT_ARRAY is an abuse, but we have to work around it for a while.
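A hypothetical reproduction of that pattern (names are illustrative):
```
// x.cc
int computeDefault();
template <class T> struct Config { static int value; };
// The dynamic initializer for this explicit specialization may be emitted in a
// comdat group, dragging its SHT_INIT_ARRAY fragment into the group as well.
template <> int Config<int>::value = computeDefault();
```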
Change removeUnusedSyntheticSections() to actually remove empty
SyntheticSections in inputSections.
In addition to doing what removeUnusedSyntheticSections() was meant
to do, this also makes the shuffle-sections tests, which shuffle
inputSections, less sensitive to empty SyntheticSections that
will not appear in the final image.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D106427
Change-Id: I589eaf596472161a4395fb658aea0fad73318088
This generalizes D70146 (SHT_NOTE) to more reserved sections and makes our rules
more consistent. Now SHF_GROUP is more similar to SHF_LINK_ORDER.
For SHT_INIT_ARRAY/SHT_FINI_ARRAY, the rule will be closer to PE/COFF link.exe.
Previously, sanitizers used llvm.global_ctors to make module_ctor a GC
root, which is considered an abuse.
https://groups.google.com/g/generic-abi/c/TpleUEkNoQI
We can squeak through on compatibility issues because compilers otherwise don't
use SHF_GROUP special sections.
In PGO, a C++ external linkage function `foo` has a private counter
`__profc_foo` and a private `__profd_foo` in a `comdat nodeduplicate`.
A `__attribute__((weak))` function `foo` has a weak hidden counter `__profc_foo`
and a private `__profd_foo` in a `comdat nodeduplicate`.
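A hedged C++ sketch of the two cases (instrumented with PGO; file names are illustrative):
```
// a.cc: external linkage foo -> private __profc_foo/__profd_foo in a
//       comdat nodeduplicate group
void foo() {}
// b.cc: weak foo -> weak hidden __profc_foo and a private __profd_foo in a
//       comdat nodeduplicate group
__attribute__((weak)) void foo() {}
```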
In `ld.lld a.o b.o`, say a.o defines an external linkage `foo` and b.o
defines a weak `foo`. Because we currently treat `comdat nodeduplicate` as
`comdat any`, ld.lld incorrectly considers `b.o:__profc_foo` non-prevailing.
In the worst case, when `b.o:__profd_foo` is retained and `b.o:__profc_foo`
isn't, there will be a dangling reference causing an `undefined hidden symbol` error.
Add SelectionKind to `Comdat` in IRSymtab and let linkers ignore nodeduplicate comdat.
Differential Revision: https://reviews.llvm.org/D106228
An executable produced by `clang -fuse-ld=lld -static-pie -fpie` currently
crashes; this patch makes it work.
See https://sourceware.org/bugzilla/show_bug.cgi?id=27164
and https://sourceware.org/pipermail/libc-alpha/2021-July/128810.html
While it seems unreasonable to keep csu/libc-start.c ARCH_APPLY_IREL unclear in
static-pie mode and have an unneeded diff -u =(ld.bfd --verbose) =(ld.bfd -pie
--verbose) difference, glibc folks don't want to fix their code.
I feel sad about that but this patch can remove an iffy condition for lld/ELF
as well: `needsInterpSection()`.
The ELF specification says "The link editor honors the common definition and
ignores the weak ones." GNU ld and our Symbol::compare follow this, but the
--fortran-common code (D86142) made a mistake in the precedence.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51082
Reviewed By: peter.smith, sfertile
Differential Revision: https://reviews.llvm.org/D105945
This is a follow-up to https://reviews.llvm.org/D104080 and ca3bdb57fa (diff-e64a48fabe31db213a631fdc5f2acb51bdddf3f16a8fb2928784f4c579229585). The call graph profile implementation was changed from a black-box section to a relocation-based approach. This was done for compatibility with post-processing tools like strip/objcopy and their llvm equivalents: when they are invoked on object files before the final link, the new approach keeps the symbol indices correct.
Unlike the llvm tools, the GNU binutils tools convert the REL section to a RELA section, for example when strip -S is run on ELF object files as an intermediate step before linking. To preserve compatibility, this patch extends the implementation in LLD and ELFDumper to support both REL and RELA sections for the call graph profile.
Reviewed By: MaskRay, jhenderson
Differential Revision: https://reviews.llvm.org/D105217
This is a follow-up to https://reviews.llvm.org/D105760, which adds this relocation; this patch handles the relocation in lld.
The s_branch family of instructions does the following:
PC = PC + signext(simm * 4) + 4
so we do the opposite on the target address before writing it into the instruction stream.
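As a sketch (not lld's actual code), the immediate can be recovered from the target and the branch's PC like this:
```
// PC_new = PC + signext(simm * 4) + 4  =>  simm = (target - pc - 4) / 4
static int64_t computeSimm(uint64_t target, uint64_t pc) {
  return static_cast<int64_t>(target - pc - 4) / 4;
}
```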
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D105761
Since D100490 this case is diagnosed for -z rel. This commit implements
R_AARCH64_TLSDESC cases for AArch64::getImplicitAddend() and
AArch64::relocate(). However, there are probably further relocation types
that need to be handled for full support of -z rel.
Fixes https://bugs.llvm.org/show_bug.cgi?id=47009
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100544
I found this missing case with the new --check-dynamic-relocations flag
while running the lld tests with --apply-dynamic-relocs enabled by default.
This is the same as D101452 just for RISC-V
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101454
I found this missing case with the new --check-dynamic-relocations flag
while running the lld tests with --apply-dynamic-relocs enabled by default.
This also fixes a broken CHECK in lld/test/ELF/x86-64-gotpc-relax.s:
The test wasn't using CHECK-NEXT, so it was passing despite the output
actually containing relocations. I am not sure when this changed, but I
think this behaviour is correct.
Found with D101450 + enabling --apply-dynamic-relocs by default.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101452
There used to be many cases where addends for Elf_Rel were not emitted in
the final object file (mostly when building for MIPS64 since the input .o
files use RELA but the output uses REL). These cases have been fixed since,
but this patch adds a check to ensure that the written values are correct.
It is based on a previous patch that I added to the CHERI fork of LLD since
we were using MIPS64 as a baseline. The work has now almost entirely
shifted to RISC-V and Arm Morello (which use Elf_Rela), but I thought
it would be useful to upstream our local changes anyway.
This patch adds a (hidden) command line flag --check-dynamic-relocations
that can be used to enable these checks. It is also on by default in
assertions builds for targets that handle all dynamic relocation kinds
that LLD can emit in Target::getImplicitAddend(). Currently this is
enabled for ARM, MIPS, and I386.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101450
This patch changes the DynamicReloc class to store an enum instead
of the overloaded useSymVA member to make it easier to understand
and fix incorrect addends being written in some corner cases. The
change is motivated by a follow-up review that checks the value of
implicit Elf_Rel addends written to the output file.
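For illustration, such an enum might look roughly like this (only AgainstSymbolWithTargetVA is named later in this message; the other enumerators here are assumptions):
```
enum class Kind {
  AddendOnly,                // write only the addend, no symbol
  AgainstSymbol,             // emit the relocation against a dynamic symbol
  AgainstSymbolWithTargetVA, // against a symbol, with the target VA folded in
};
```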
This patch fixes an incorrect output when using `-z rela` for i386 files
with R_386_GOT32 relocations (not that this really matters since it's an
unsupported configuration).
Storing the relocation expression kind also addresses an incorrect addend
FIXME in ppc64-abs64-dyn.s introduced in D63383.
DynamicReloc now also has a special case for the MIPS TLS relocations
(DynamicReloc::AgainstSymbolWithTargetVA) since the
R_MIPS_TLS_TPREL{32/64} relocations write the symbol VA to the GOT for
preemptible symbols. I'm not sure if the symbol value actually should be written
for R_MIPS_TLS_TPREL32, but this patch does not attempt to change
that behaviour.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100490
For
```
SECTIONS {
text.0 : {}
text.1 : {}
text.2 : {}
} INSERT AFTER .data;
```
the current order is `.data text.2 text.1 text.0`. It makes more sense to
preserve the specified order and thus improve compatibility with GNU ld.
For
```
SECTIONS { text.0 : {} } INSERT AFTER .data;
SECTIONS { text.3 : {} } INSERT AFTER .data;
```
GNU ld somehow collects sections with `INSERT AFTER .data` together (IMO
inconsistent) but I think it makes more sense to execute the commands in order
and get `.data text.3 text.0` instead.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D105158
See the comment for my understanding of the -no-pie and -shared expectations.
For -no-pie there is freedom in the choice; we choose dynamic relocations to be
consistent with the handling of GOT-generating relocations.
Note: GNU ld has arch-varying behaviors and its x86 -pie has a very
complex rule:
if there is at least one GOT-generating or PLT-generating relocation and
-z dynamic-undefined-weak (enabled by default) is in effect, generate a
dynamic relocation.
We don't emulate its rule.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D105164
There are a couple of problems with the code to patch
unrelocated BLX instructions:
1. The calculation of the PC needs to take into account
the alignment of the instruction. The Thumb BLX
uses alignDown(PC, 4) for the source address.
2. The calculation of the PC bias is hard-coded to 4
which works for Thumb, but when there is a BLX the
branch will be in Arm state so it needs an 8 byte
PC bias.
No assembler generates an unrelocated BLX instruction,
so these problems do not affect real-world programs.
However, we should still fix them.
Differential Revision: https://reviews.llvm.org/D104905
Modify the D13209 logic: for a script inside the sysroot, if an absolute path
does not exist, report an error instead of falling back to the path without the
sysroot prefix.
This matches GNU ld, which makes sense to me: we don't want to find an arbitrary
file in the host.
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D104894
... even on targets preferring RELA. The section is only consumed by ld.lld
which can handle REL.
Follow-up to D104080 as I explained in the review. There are two advantages:
* The D104080 code only handles RELA, so arm/i386/mips32 etc may warn for -fprofile-use=/-fprofile-sample-use= usage.
* Decrease object file size for RELA targets
While here, change the relocations so that they relocate the weights, instead of offsets 0,1,2,3,..
I failed to catch the issue during review.
Currently, when .llvm.call-graph-profile is created by llvm, it explicitly encodes the symbol indices. This section is basically a black box for post-processing tools. For example, if we run strip -s on the object files, the symbol table changes but the indices in that section do not. In the non-visible case, the indices point at the wrong symbols; in the visible case, they point outside the symbol table and tools report "invalid symbol index".
This patch changes the format to use R_*_NONE relocations to indicate the from/to symbols. The frequency (weight) stays in .llvm.call-graph-profile, but the symbol information moves to the relocation section. In LLD, information from both sections is used to reconstruct the call graph profile; the relocations themselves are never applied.
With this approach, post-processing tools that handle relocations correctly also work for this section. Tools can add/remove symbols, and as long as they handle relocation sections, the information stays correct.
In a quick experiment with clang-13, the aggregate size of these input sections went up from 107KB to 322KB; the clang-13 binary itself is ~118MB. For users of -fprofile-use/-fprofile-sample-use the object files grow slightly, but the final binary size is not affected.
Reviewed By: jhenderson, MaskRay
Differential Revision: https://reviews.llvm.org/D104080
getLineNumber() was counting the number of line feeds from the start of
the buffer to the current token. For large linker scripts this became a
performance bottleneck. For one 4MB linker script over 4 minutes was
spent in getLineNumber's StringRef::count.
Store the line number from the last token, and only count the additional
line feeds since the last token.
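A self-contained sketch of that idea, with hypothetical names rather than lld's actual ScriptLexer interface:
```
#include "llvm/ADT/StringRef.h"

// Cache the offset and line number of the previously queried token and only
// count the line feeds between it and the current token, instead of rescanning
// from the start of the buffer. Assumes tokens are queried in buffer order.
struct IncrementalLineCounter {
  size_t lastOffset = 0;
  size_t lastLine = 0;
  size_t getLineNumber(llvm::StringRef buf, size_t tokOffset) {
    lastLine += buf.substr(lastOffset, tokOffset - lastOffset).count('\n');
    lastOffset = tokOffset;
    return lastLine + 1; // line numbers are 1-based
  }
};
```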
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D104137
During PHDR creation, the case where an output section does not require a
PT_LOAD header but still occupies memory in the current VMA region was not handled.
If such an output section sits between two output sections that have the same
VMA and LMA regions set, we would previously reuse the existing PT_LOAD header
for the second output section.
However, since the memory region is not contiguous, we need to start a new PT_LOAD
segment.
This fixes https://bugs.llvm.org/show_bug.cgi?id=50558
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D103815
This implements https://sourceware.org/bugzilla/show_bug.cgi?id=26404
An `OVERWRITE_SECTIONS` command is a `SECTIONS` variant which contains several
output section descriptions. The output sections do not have to specify an order.
Similar to `INSERT [BEFORE|AFTER]`, `LinkerScript::hasSectionsCommand` is not
set, so the built-in rules (see `docs/ELF/linker_script.rst`) still apply.
`OVERWRITE_SECTIONS` can be more convenient than `INSERT` because it does not
need an anchor section.
The initial syntax is intentionally narrow to facilitate backward compatible
extensions in the future. Symbol assignments cannot be used.
This feature is versatile. To list a few uses:
* Use `section : { KEEP(...) }` to retain input sections under GC
* Define encapsulation symbols (start/end) for an output section
* Use `section : ALIGN(...) : { ... }` to overalign an output section (similar to ld64 `-sectalign`)
When an output section is specified by both `OVERWRITE_SECTIONS` and
`INSERT`, `INSERT` is processed after overwrite sections. To make this work,
this patch changes `InsertCommand` to use name based matching instead of pointer
based matching. (This may cause a difference when `INSERT` moves one output
section more than once. Such duplicate commands should not be used in practice
(seems that in GNU ld the output sections may just disappear).)
A linker script can be used without -T/--script. The traditional `SECTIONS`
commands are concatenated, so a wrong rule can be more noticeable from the
section order. This feature, if misused, can be less noticeable, just like
`INSERT`.
Differential Revision: https://reviews.llvm.org/D103303
In a -no-pie link we optimize R_PLT_PC to R_PC. Currently we resolve such a branch
relocation to the link-time zero address. However, such a choice tends to cause
relocation overflow on RISC architectures.
* aarch64: GNU ld: rewrite the instruction to a NOP; ld.lld: branch to the next instruction
* mips: GNU ld: branch to the start of the text segment (?); ld.lld: branch to zero
* ppc32: GNU ld: rewrite the instruction to a NOP; ld.lld: branch to the current instruction
* ppc64: GNU ld: rewrite the instruction to a NOP; ld.lld: branch to the current instruction
* riscv: GNU ld: branch to the absolute zero address (with instruction rewriting)
* i386/x86_64: GNU ld/ld.lld: branch to the link-time zero address
I think that resolving to the same location is a good choice. Executing the
instruction, if it is ever reached, is clearly undefined behavior. Resolving to
the same location can cause an infinite loop (making the user aware of the
issue) while ensuring no overflow.
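A hypothetical example of the scenario, where a -no-pie link optimizes the call's R_PLT_PC to R_PC and must still resolve the branch to something:
```
// If no definition of maybe_hook is linked in, executing this call is
// undefined behavior; resolving the branch to its own location loops
// (making the bug visible) instead of risking relocation overflow.
__attribute__((weak)) void maybe_hook();
void run() { maybe_hook(); }
```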
Reviewed By: jrtc27
Differential Revision: https://reviews.llvm.org/D103001
Given the following scenario:
```
// Cat.cpp
struct Animal { virtual void makeNoise() const = 0; };
struct Cat : Animal { void makeNoise() const override; };
extern "C" int puts(char const *);
void Cat::makeNoise() const { puts("Meow"); }
void doThingWithCat(Animal *a) { static_cast<Cat *>(a)->makeNoise(); }
// CatUser.cpp
struct Animal { virtual void makeNoise() const = 0; };
struct Cat : Animal { void makeNoise() const override; };
void doThingWithCat(Animal *a);
void useDoThingWithCat() {
Cat *d = new Cat;
doThingWithCat(d);
}
// cat.ver
{
global: _Z17useDoThingWithCatv;
local: *;
};
$ clang++ Cat.cpp CatUser.cpp -fpic -flto=thin -fwhole-program-vtables
-shared -O3 -fuse-ld=lld -Wl,--lto-whole-program-visibility
-Wl,--version-script,cat.ver
```
We cannot devirtualize `Cat::makeNoise`. The issue is complex:
Due to `-fsplit-lto-unit` and usage of type metadata, we place the Cat
vtable declaration into module 0 and the Cat vtable definition with type
metadata into module 1, causing duplicate entries (Undefined followed by
Defined) in the `lto::InputFile::symbols()` output.
In `BitcodeFile::parse`, after processing the `Undefined` then the
`Defined`, the final state is `Defined`.
In `BitcodeCompiler::add`, for the first symbol, `computeBinding`
returns `STB_LOCAL`, then we reset it to `Undefined` because it is
prevailing (`versionId` is `preserved`). For the second symbol, because
the state is now `Undefined`, `computeBinding` returns `STB_GLOBAL`,
causing `ExportDynamic` to be true and suppressing devirtualization.
In D77280, the `computeBinding` change used a stricter `isDefined()`
condition to make a weak `Lazy` symbol work.
This patch relaxes the condition to the weaker `!isLazy()` to keep that
working while making the devirtualization work as well.
Differential Revision: https://reviews.llvm.org/D98686
D62727 removed GotEntrySize and GotPltEntrySize with a comment that they
are always equal to wordsize(), but that is not entirely true: X32 has a
word size of 4, but needs 8-byte GOT entries. This restores gotEntrySize
for both, adjusted for current naming conventions, but defaults it to
config->wordsize to keep things simple for architectures other than
x86_64.
This partially reverts D62727.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D102509
This option will be available in GNU ld 2.37 (https://sourceware.org/bugzilla/show_bug.cgi?id=27834).
This option can cancel previously specified -Bsymbolic and
-Bsymbolic-functions. This is useful for opting individual links out when
-Bsymbolic-functions is used by default.
Reviewed By: jhenderson, peter.smith
Differential Revision: https://reviews.llvm.org/D102383
Currently, when reporting unresolved symbols in shared libraries, if an
undefined symbol is first seen in a regular object file, that occurrence
shadows the reference to the same symbol in a shared object. As a result, the
error for the unresolved symbol in the shared library is not reported.
If the referencing sections in regular object files are discarded because of
'--gc-sections', no reports about such symbols are generated, and the
linker finishes successfully, producing an output image that fails at
run time.
The patch fixes the issue by keeping symbols, which should be checked,
for each shared library separately.
Differential Revision: https://reviews.llvm.org/D101996
This fixes an issue where mixed TOC / NOTOC calls can call the incorrect
thunks if a previous thunk already exists. The issue appears when a TOC
function calls a NOTOC callee and then a different NOTOC function calls the same
NOTOC callee. In this case the linker would sometimes incorrectly use the
same thunk for both calls.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101837
This is a slight improvement to the help text, as I was slightly
surprised when strip-all did more than remove the symbol table.
Currently, we match gold's help text for strip-all and strip-debug.
I think that the GNU documentation for these options is not particularly
clear. However, I have opted to make only a minor change here and keep
the help text similar to gold's as these are mature options that are
well understood.
ld.bfd (https://sourceware.org/binutils/docs/ld/Options.html) has a
similar implication, although it defines strip-debug as a subset of
strip-all. However, I felt that noting that strip-all implies strip-debug
is better, because with the ld.bfd approach you have to read both the
--strip-debug and the --strip-all help text to understand the behaviour
of --strip-all (and the --strip-all help text doesn't indicate that the
--strip-debug help text is related).
Differential Revision: https://reviews.llvm.org/D101890
Adopt my suggestion in https://reviews.llvm.org/D91426#2653926,
generalizing the ppc64-specific code.
GNU ld and glibc ld.so have a contract about the first few entries of .got.
There are somewhat complex conditions for when the header is needed. This patch
switches to a simpler approach: add a header unconditionally if
_GLOBAL_OFFSET_TABLE_ is used or the number of entries is more than just the
header.
Stop using the compatibility spellings of `OF_{None,Text,Append}`
left behind by 1f67a3cba9. A follow-up
will remove them.
Differential Revision: https://reviews.llvm.org/D101650
GNU ld -r can create .rela.eh_frame with unordered r_offset values.
(With LLD, we can craft such a case by reordering sections in .eh_frame.)
This is currently unsupported and will trigger
`assert(pieces[i].inputOff <= off ...` in `OffsetGetter::get`
(the content is corrupted in a -DLLVM_ENABLE_ASSERTIONS=off build).
This patch supports this case.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D101116
The original page no longer works, so use a web.archive.org link instead.
Reviewed By: atanasyan
Differential Revision: https://reviews.llvm.org/D100949
This is the same problem as 127176e59e, but for static TLS rather than
dynamic TLS. Although we know the symbol will be the one in our own TLS
segment, and thus the offset of it within that, we don't know where in
the static TLS block our data will be allocated and thus we must emit a
dynamic relocation for this case.
Reviewed By: MaskRay, atanasyan
Differential Revision: https://reviews.llvm.org/D101381
Whilst not wrong (unless using static PIE where the relocations are
likely not implemented by the runtime), this is inefficient, as the TLS
module indices and offsets are independent of the executable's load
address.
Reviewed By: MaskRay, atanasyan
Differential Revision: https://reviews.llvm.org/D101382