Implement pass 3 of bind opcodes from ld64 (which supports both 32-bit and 64-bit).
Pass 3 condenses BIND_OPCODE_DO_BIND_ADD_ADDR_ULEB opcodes into
BIND_OPCODE_DO_BIND_ADD_ADDR_IMM_SCALED. This change is already behind an
O2 flag, so it shouldn't impact current performance. I verified ld64's output against
x86_64 LLD's: both emit the same optimized bind opcodes, though in a slightly different
order. I also tested arm64_32 LLD against x86 LLD and confirmed that the order of the
bind opcodes is the same (the offset values differ, which is expected).
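For context, a minimal sketch of the condensing idea (heavily simplified; the opcode values mirror <mach-o/loader.h>, but the `BindIR` shape and function name are assumptions rather than the actual lld code): a DO_BIND_ADD_ADDR_ULEB whose addend is a small multiple of the pointer size can be re-encoded as DO_BIND_ADD_ADDR_IMM_SCALED, dropping the ULEB payload and storing the scale in the opcode's 4-bit immediate.
```
#include <cstdint>
#include <vector>

// Opcode values as in <mach-o/loader.h>.
enum : uint8_t {
  BIND_OPCODE_DO_BIND_ADD_ADDR_ULEB = 0xA0,
  BIND_OPCODE_DO_BIND_ADD_ADDR_IMM_SCALED = 0xB0,
};

// Assumed intermediate representation: one opcode plus its operand.
struct BindIR {
  uint8_t opcode;
  uint64_t data;
};

void condensePass3(std::vector<BindIR> &opcodes, uint64_t pointerSize) {
  for (BindIR &op : opcodes) {
    // Both opcodes advance the bind address by (operand + pointerSize);
    // IMM_SCALED just measures the operand in pointer-size units and must
    // fit it into a 4-bit immediate.
    if (op.opcode == BIND_OPCODE_DO_BIND_ADD_ADDR_ULEB &&
        op.data % pointerSize == 0 && op.data / pointerSize < 16) {
      op.opcode = BIND_OPCODE_DO_BIND_ADD_ADDR_IMM_SCALED;
      op.data /= pointerSize;
    }
  }
}
```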
Reviewed By: int3, #lld-macho, MaskRay
Differential Revision: https://reviews.llvm.org/D106128
In PGO, a C++ external-linkage function `foo` has a private counter
`__profc_foo` and a private `__profd_foo` in a `comdat nodeduplicate`.
An `__attribute__((weak))` function `foo` has a weak hidden counter `__profc_foo`
and a private `__profd_foo` in a `comdat nodeduplicate`.
In `ld.lld a.o b.o`, say a.o defines an external-linkage `foo` and b.o
defines a weak `foo`. Because we currently treat `comdat nodeduplicate` as `comdat any`,
ld.lld incorrectly considers `b.o:__profc_foo` non-prevailing. In the worst
case, when `b.o:__profd_foo` is retained but `b.o:__profc_foo` isn't, there will
be a dangling reference causing an `undefined hidden symbol` error.
Add SelectionKind to `Comdat` in IRSymtab and let linkers ignore nodeduplicate comdat.
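For concreteness, a hypothetical pair of translation units (shown together in one listing) that matches the scenario above; per the description, instrumented builds give the external-linkage `foo` private counters and the weak `foo` a weak hidden `__profc_foo`, all in `comdat nodeduplicate`:
```
// ---- a.cc ---- external-linkage definition of foo; its __profc_foo and
// __profd_foo are both private, in a `comdat nodeduplicate`.
int foo() { return 1; }

// ---- b.cc ---- weak definition of foo; its __profc_foo is weak hidden while
// __profd_foo stays private, again in a `comdat nodeduplicate`.
__attribute__((weak)) int foo() { return 2; }
```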
Differential Revision: https://reviews.llvm.org/D106228
This avoids duplication and simplifies the code in several places
without increasing the size of the symbol union (at least not
above the assert'd limit of 120 bytes).
Original commit: 9b965b37c7
Reverted in: 16aac493e5.
Differential Revision: https://reviews.llvm.org/D106026
This reverts commit 321b2bef09.
`for (BindIR *p = &opcodes[0]; p->opcode != BIND_OPCODE_DONE; ++p) {` causes a heap-buffer-overflow with test/MachO/bind-opcodes.
Implement pass 3 of bind opcodes from ld64 (which supports both 32-bit and 64-bit).
Pass 3 condenses BIND_OPCODE_DO_BIND_ADD_ADDR_ULEB opcodes into
BIND_OPCODE_DO_BIND_ADD_ADDR_IMM_SCALED. This change is already behind an
O2 flag, so it shouldn't impact current performance. I verified ld64's output against
x86_64 LLD's: both emit the same optimized bind opcodes, though in a slightly different
order. I also tested arm64_32 LLD against x86 LLD and confirmed that the order of the
bind opcodes is the same (the offset values differ, which is expected).
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D106128
This avoids duplication and simplifies the code in several places
without increasing the size of the symbol union (at least not
above the assert'd limit of 120 bytes).
Differential Revision: https://reviews.llvm.org/D106026
Debug info sections need R_WASM_FUNCTION_OFFSET_I32 relocs (with FK_Data_4 fixup
kinds) to refer to functions (instead of R_WASM_TABLE_INDEX as is used in data
sections). Usually this is done in a convoluted way, with unnamed temp data
symbols which target the start of the function, in which case
WasmObjectWriter::recordRelocation converts it to use the section symbol
instead. However in some cases the function can actually be undefined; in this
case the dwarf generator uses the function symbol (a named undefined function
symbol) instead. In that case the section-symbol transform doesn't work and we
need to generate the correct reloc type a different way. In this change
WebAssemblyWasmObjectWriter::getRelocType takes the fixup section type into
account to choose the correct reloc type.
Fixes PR50408
Differential Revision: https://reviews.llvm.org/D103557
ICF previously operated only within a given OutputSection. We would
merge all CFStrings first, then merge all regular code sections in a
second phase. This worked fine since CFStrings would never reference
regular `__text` sections. However, I would like to expand ICF to merge
functions that reference unwind info. Unwind info references the LSDA
section, which can in turn reference the `__text` section, so we cannot
perform ICF in phases.
In order to have ICF operate on InputSections spanning multiple
OutputSections, we need a way to distinguish InputSections that are
destined for different OutputSections, so that we don't fold across
section boundaries. We achieve this by creating OutputSections early,
and setting `InputSection::parent` to point to them. This is what
LLD-ELF does. (This change should also make it easier to implement the
`section$start$` symbols.)
This diff also folds InputSections w/o checking their flags, which I
think is the right behavior -- if they are destined for the same
OutputSection, they will have the same flags in the output (even if
their input flags differ). I.e. the `parent` pointer check subsumes the
`flags` check. In practice this has nearly no effect (ICF did not become
any more effective on chromium_framework).
I've also updated ICF.cpp's block comment to better reflect its current
status.
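A minimal sketch of the folding precondition described above (field names assumed, the content comparison heavily simplified): the `parent` pointer lets ICF refuse to fold across output-section boundaries without a separate flags check.
```
#include <cstdint>
#include <vector>

struct OutputSection;

struct InputSection {
  OutputSection *parent;      // assigned early, before ICF runs
  std::vector<uint8_t> data;  // stand-in for the real contents/relocs
};

// Two sections may only be folded if they are headed for the same output
// section; the parent check subsumes a separate flags check because sections
// in the same output section end up with the same flags anyway.
bool mayFold(const InputSection &a, const InputSection &b) {
  return a.parent == b.parent && a.data == b.data;
}
```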
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D105641
In D105866, we used an intermediate container to store a list of opcodes. Here,
we use that data structure to help us perform optimization passes that would allow
a more efficient encoding of bind opcodes. Currently, the functionality mirrors
optimization passes 1 and 2 from ld64's bind-opcode handling, gated behind an
optimization flag to prevent slight regressions.
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D105867
We want to incorporate some of ld64's bind-opcode optimization passes.
This revision makes no functional changes; it only starts storing opcodes in intermediate
containers in preparation for implementing the optimization passes in a follow-up revision.
Differential Revision: https://reviews.llvm.org/D105866
An executable produced by `clang -fuse-ld=lld -static-pie -fpie`
currently crashes; this patch makes it work.
See https://sourceware.org/bugzilla/show_bug.cgi?id=27164
and https://sourceware.org/pipermail/libc-alpha/2021-July/128810.html
While it seems unreasonable to leave csu/libc-start.c's ARCH_APPLY_IREL unclear in
static-pie mode and to carry an unneeded `diff -u =(ld.bfd --verbose) =(ld.bfd -pie
--verbose)` difference, the glibc folks don't want to fix their code.
That is unfortunate, but this patch also lets us remove an iffy condition from lld/ELF:
`needsInterpSection()`.
This adds support for the lld-only `--thinlto-cache-policy` option, as well as
implementations for ld64's `-cache_path_lto`, `-prune_interval_lto`,
`-prune_after_lto`, and `-max_relative_cache_size_lto`.
Test is adapted from lld/test/ELF/lto/cache.ll
Differential Revision: https://reviews.llvm.org/D105922
The ELF specification says "The link editor honors the common definition and
ignores the weak ones." GNU ld and our Symbol::compare follow this, but the
--fortran-common code (D86142) got the precedence wrong.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51082
Reviewed By: peter.smith, sfertile
Differential Revision: https://reviews.llvm.org/D105945
This is a follow-up to https://reviews.llvm.org/D104080 and ca3bdb57fa (diff-e64a48fabe31db213a631fdc5f2acb51bdddf3f16a8fb2928784f4c579229585). The call graph profile implementation was changed from a black-box section to a relocation-based approach. This was done to stay compatible with post-processing tools like strip/objcopy and their llvm equivalents: with the new approach, when those tools are invoked on an object file before the final linking step, the symbol indices remain correct.
Unlike the llvm tools, the GNU binutils tools convert the REL section into a RELA section, for example when strip -S is run on ELF object files as an intermediate step before linking. To preserve compatibility, this patch extends the implementation in LLD and ELFDumper to support both REL and RELA sections for the call graph profile.
Reviewed By: MaskRay, jhenderson
Differential Revision: https://reviews.llvm.org/D105217
This patch is a follow-up to https://reviews.llvm.org/D105760, which adds this relocation; this patch handles the relocation in lld.
The s_branch family of instructions does the following:
PC = PC + signext(simm * 4) + 4
so we apply the inverse transform to the target address before writing it into the instruction stream.
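A small sketch of that inverse transform (the helper name and exact types are assumptions, not the actual lld code), where `targetVA` is the branch target and `pc` is the address of the s_branch instruction:
```
#include <cassert>
#include <cstdint>

int16_t encodeSBranchImm(uint64_t targetVA, uint64_t pc) {
  // Invert PC = PC + signext(simm * 4) + 4 to recover simm. For a valid
  // branch, (delta - 4) is a multiple of 4, so the division is exact.
  int64_t delta = static_cast<int64_t>(targetVA) - static_cast<int64_t>(pc);
  int64_t simm = (delta - 4) / 4;
  assert(simm >= INT16_MIN && simm <= INT16_MAX && "branch target out of range");
  return static_cast<int16_t>(simm);
}
```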
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D105761
* Adjust strsize so llvm-objdump doesn't complain about it extending
past the end of file
* Remove symbol that was referencing a deleted section
* Adjust n_sect of the remaining `_main` symbol to point at the right
section
The mappings we were using had a small number of keys, so a vector is
probably better. This allows us to remove the last usage of std::map in
our codebase.
I also used `removeSimulator` to simplify the code a bit further.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105786
lld currently only references dyld_stub_binder when it's needed.
ld64 always references it when libSystem is linked.
Match ld64.
The (somewhat lame) motivation is that `nm` on a binary without any
exports writes a "no symbols" warning to stderr, and this change makes
it so that every binary in practice has at least a reference to
dyld_stub_binder, which suppresses that.
Every "real" output file will reference dyld_stub_binder, so most
of the time this shouldn't make much of a difference. And if you
really don't want to have this reference for whatever reason, you
can stop passing -lSystem, like you have to for ld64 anyways.
(After linking any dylib, we dump the exported list of symbols to
a txt file with `nm` and only relink downstream deps if that txt
file changes. A nicer fix is to make lld optionally write .tbd files
with the public interface of a linked dylib and use that instead,
but for now the txt files are what we do.)
Differential Revision: https://reviews.llvm.org/D105782
This is for aesthetic reasons; I'm not aware of anything that needs
this in practice. It does have a few effects:
- `-undefined dynamic_lookup` now has an effect for dyld_stub_binder.
This matches ld64.
- `-U dyld_stub_binder` now works like you'd expect (it doesn't work in ld64).
- The error message for a missing dyld_stub_binder symbol now looks like
  other undefined reference errors; it changes from
      symbol dyld_stub_binder not found (normally in libSystem.dylib). Needed to perform lazy binding.
  to
      error: undefined symbol: dyld_stub_binder
      >>> referenced by lazy binding (normally in libSystem.dylib)
Also add test coverage for that error message.
But in practice, this should have no interesting effects since everything links
in dyld_stub_binder via libSystem anyways.
Differential Revision: https://reviews.llvm.org/D105781
Add a bit more detail to the comments, and check that the final binary
does indeed have a `__unwind_info` section (D105557 previously regressed
this).
Also rename the test to emphasize that we are testing relocations in
compact unwind, not relocations in general.
Two changes:
- Drop assertions that all symbols are in GOT
- Set allEntriesAreOmitted correctly
Related bug: 50812
Differential Revision: https://reviews.llvm.org/D105364
This is to protect against nonsensical instruction sequences being assembled,
which would either cause asserts/crashes further down, or a Wasm module being output that doesn't validate.
Unlike a validator, this type checker is able to give type errors as part of the parsing process, which makes the assembler much friendlier for humans writing manual input.
Because the MC system is single pass (instructions aren't even stored in MC format, they are directly output), the type checker has to be single pass as well, which means that from now on .globaltype and .functype decls must come before their use. An extra pass is added to Codegen to collect information for this purpose, since AsmPrinter is normally single pass / streaming as well, and would otherwise generate this information on the fly.
A `-no-type-check` flag was added to llvm-mc (and any other tools that take asm input) that suppresses type errors, as a quick escape hatch for tests that were not intended to be type correct.
This is a first version of the type checker that ignores control flow, i.e. it checks that types are correct along the linear path, but not the branch path. This will still catch most errors. Branch checking could be added in the future.
Differential Revision: https://reviews.llvm.org/D104945
Since D100490 this case is diagnosed for -z rel. This commit implements
R_AARCH64_TLSDESC cases for AArch64::getImplicitAddend() and
AArch64::relocate(). However, there are probably further relocation types
that need to be handled for full support of -z rel.
Fixes https://bugs.llvm.org/show_bug.cgi?id=47009
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100544
I found this missing case with the new --check-dynamic-relocation flag
while running the lld tests with --apply-dynamic-relocs enabled by default.
This is the same as D101452 just for RISC-V
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101454
I found this missing case with the new --check-dynamic-relocation flag
while running the lld tests with --apply-dynamic-relocs enabled by default.
This also fixes a broken CHECK in lld/test/ELF/x86-64-gotpc-relax.s:
The test wasn't using CHECK-NEXT, so it was passing despite the output
actually containing relocations. I am not sure when this changed, but I
think this behaviour is correct.
Found with D101450 + enabling --apply-dynamic-relocs by default.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101452
There used to be many cases where addends for Elf_Rel were not emitted in
the final object file (mostly when building for MIPS64 since the input .o
files use RELA but the output uses REL). These cases have been fixed since,
but this patch adds a check to ensure that the written values are correct.
It is based on a previous patch that I added to the CHERI fork of LLD since
we were using MIPS64 as a baseline. The work has now almost entirely
shifted to RISC-V and Arm Morello (which use Elf_Rela), but I thought
it would be useful to upstream our local changes anyway.
This patch adds a (hidden) command line flag --check-dynamic-relocations
that can be used to enable these checks. It is also on by default in
assertions builds for targets that handle all dynamic relocation kinds
that LLD can emit in Target::getImplicitAddend(). Currently this is
enabled for ARM, MIPS, and I386.
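A rough sketch of what such a check looks like (the shape and names here are assumptions; in lld the per-target decoding hook is `Target::getImplicitAddend()`, as mentioned above):
```
#include <cstdint>
#include <cstdio>

// Per-target hook that decodes the addend stored in the relocated word.
int64_t getImplicitAddend(const uint8_t *loc, uint32_t relType);

// After the output buffer has been written, verify that the implicit addend
// we stored for a REL-style dynamic relocation matches what we intended.
void checkDynamicRelocation(const uint8_t *loc, uint32_t relType,
                            int64_t expectedAddend) {
  int64_t written = getImplicitAddend(loc, relType);
  if (written != expectedAddend)
    std::fprintf(stderr,
                 "internal linker error: wrote incorrect addend %lld, "
                 "expected %lld\n",
                 static_cast<long long>(written),
                 static_cast<long long>(expectedAddend));
}
```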
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101450
This patch changes the DynamicReloc class to store an enum instead
of the overloaded useSymVA member to make it easier to understand
and fix incorrect addends being written in some corner cases. The
change is motivated by a follow-up review that checks the value of
implicit Elf_Rel addends written to the output file.
This patch fixes an incorrect output when using `-z rela` for i386 files
with R_386_GOT32 relocations (not that this really matters since it's an
unsupported configuration).
Storing the relocation expression kind also addresses an incorrect addend
FIXME in ppc64-abs64-dyn.s introduced in D63383.
DynamicReloc now also has a special case for the MIPS TLS relocations
(DynamicReloc::AgainstSymbolWithTargetVA), since R_MIPS_TLS_TPREL{32/64}
write the symbol VA to the GOT for preemptible symbols. I'm not sure if
the symbol value actually should be written for R_MIPS_TLS_TPREL32, but
this patch does not attempt to change that behaviour.
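A sketch of the resulting shape (only `AgainstSymbolWithTargetVA` is named above; the other enumerator names and the struct are assumptions, and the real class carries more state):
```
#include <cstdint>

// Replaces the old boolean `useSymVA`: each kind spells out what is written
// into the addend slot and whether a dynamic symbol is referenced.
enum class DynamicRelocKind {
  AgainstSymbol,             // relocate against a dynamic symbol; write addend
  AgainstSymbolWithTargetVA, // against a symbol, but also write the target VA
                             // (the MIPS TLS special case mentioned above)
  AddendOnly,                // no dynamic symbol; write only the addend
  AddendOnlyWithTargetVA,    // no dynamic symbol; write the relocated VA
};

struct DynamicRelocSketch {
  DynamicRelocKind kind;
  uint32_t type;    // e.g. R_386_GOT32, R_MIPS_TLS_TPREL32, ...
  int64_t addend;
};
```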
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100490
C++23 will make these conversions ambiguous, so fix them to make the
codebase forward-compatible with C++23. (A follow-up change I've made
will make this ambiguous/invalid even before C++23 so we don't regress
this, and it generally improves the code anyway.)
Change "dyn_cast" to "isa" to get rid of the unused
variable "bitcodeFile".
gcc warned with
lld/MachO/Driver.cpp:531:17: warning: unused variable 'bitcodeFile' [-Wunused-variable]
531 | if (auto *bitcodeFile = dyn_cast<BitcodeFile>(file)) {
| ^~~~~~~~~~~
When memory is declared in the Wasm module, we rely on the implicit zero
initialization behavior and do not explicitly output .bss sections. This means
that they do not have associated `outputSec` entries, which was causing
segfaults in the mapfile support. Fix the issue by guarding against null
`outputSec` and falling back to using a zero offset.
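A minimal sketch of that guard (type and field names are assumptions):
```
#include <cstdint>

struct OutputSection {
  uint64_t offset = 0;
};

struct InputChunk {
  const OutputSection *outputSec = nullptr;  // null for .bss-style chunks
};

// .bss chunks have no output section in the Wasm binary, so fall back to a
// zero offset instead of dereferencing a null pointer in the mapfile writer.
uint64_t mapFileOffset(const InputChunk &chunk) {
  return chunk.outputSec ? chunk.outputSec->offset : 0;
}
```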
Differential Revision: https://reviews.llvm.org/D102951
LLD on 32-bit Windows would frequently fail on large projects with
an exception "thread constructor failed: Exec format error". The stack
trace pointed to this usage of std::async, and looking at the
implementation in libc++ it seems using std::async with
std::launch::async results in the immediate creation of a new thread
for every call. This could result in a potentially unbounded number
of threads, depending on the number of input files. This seems to
be hitting some limit on 32-bit Windows hosts.
I took the easy route, and only use threads on 64-bit Windows, not all
Windows as before. I was thinking a more proper solution might
involve using a thread pool rather than blindly spawning any number
of new threads, but that may have other unforeseen consequences.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D105506
If the input has compact unwind info but all of it is removed
after dead stripping, we would crash. Now we don't write any
__unwind_info section at all, like ld64.
This is a bit awkward to implement because we only know the final
state of unwind info after UnwindInfoSectionImpl<Ptr>::finalize(),
which is called after sections are added. So add a small amount of
bookkeeping to relocateCompactUnwind() instead (which runs earlier)
so that we can predict what finalize() will do before it runs.
Fixes PR51010.
Differential Revision: https://reviews.llvm.org/D105557
This implements the part of -export_dynamic that adds external
symbols as dead strip roots even for executables.
It does not yet implement the effect -export_dynamic has for LTO.
I tried just replacing `config->outputType != MH_EXECUTE` with
`(config->outputType != MH_EXECUTE || config->exportDynamic)` in
LTO.cpp, but then local symbols make it into the symbol table too,
which is too much (and also doesn't match ld64). So punt on this
for now until I understand it better.
(D91583 may or may not be related too).
Differential Revision: https://reviews.llvm.org/D105482
This is the other flag clang passes to the linker when clang is invoked with two -arch
flags (which means that with this, `clang -arch x86_64 -arch arm64 -fuse-ld=lld ...`
now no longer prints any warnings \o/). Since clang calls the linker several
times in that setup, it's not clear to the user which invocation the
errors come from. The flag's help text is:
    Specifies that the linker should augment error and warning messages
    with the architecture name.
In ld64, the only effect of the flag is that undefined symbols are prefaced
with
Undefined symbols for architecture x86_64:
instead of the usual "Undefined symbols:". So for now, let's add this
only to undefined symbol errors too. That's probably the most common
linker diagnostic.
Another idea would be to prefix errors and warnings with "ld64.lld(x86_64):"
instead of the usual "ld64.lld:", but I'm not sure if people would
misunderstand that as a comment about the arch of ld itself.
But open to suggestions on what effect this flag should have :) And we
don't have to get it perfect now, we can iterate on it.
Differential Revision: https://reviews.llvm.org/D105450
This is one of two flags clang passes to the linker when clang is called
with multiple -arch flags.
I think it'd make sense to also use finalOutput instead of outputFile
in CodeSignatureSection() and when replacing @executable_path, but
ld64 doesn't do that, so I'll at least put those in separate commits.
Differential Revision: https://reviews.llvm.org/D105449
I think this is an old way of doing what is done with
-reexport_library these days, but it's e.g. still used in libunwind's
build (the opensource.apple.com one, not the llvm one).
Differential Revision: https://reviews.llvm.org/D105448
Size-wise, BIND_OPCODE_SET_SYMBOL_TRAILING_FLAGS_IMM is the most
expensive opcode, since it comes with an associated symbol string. We
were previously emitting it once per binding, instead of once per
symbol. This diff groups all bindings for a given symbol together and
ensures we only emit one such opcode per symbol. This matches ld64's
behavior.
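A simplified sketch of the grouping (the types and the commented-out emit helpers are placeholders, not the actual lld code):
```
#include <algorithm>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

struct Binding {
  std::string symbolName;
  uint64_t address;
};

void emitBindingsGroupedBySymbol(std::vector<Binding> &bindings) {
  // Make all bindings for one symbol adjacent, ordered by address within
  // each group.
  std::sort(bindings.begin(), bindings.end(),
            [](const Binding &a, const Binding &b) {
              return std::tie(a.symbolName, a.address) <
                     std::tie(b.symbolName, b.address);
            });
  const std::string *current = nullptr;
  for (const Binding &b : bindings) {
    if (!current || *current != b.symbolName) {
      // emitSetSymbolTrailingFlagsImm(b.symbolName);  // once per symbol
      current = &b.symbolName;
    }
    // emitDoBind(b.address);  // once per binding
  }
}
```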
While this is a relatively small win on chromium_framework (-72KiB), for
programs that have more dynamic bindings, the difference can be quite
large.
This change is perf-neutral when linking chromium_framework.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105075
clang and gcc both seem to emit relocations in reverse order of
address. That means we can match relocations to their containing
subsections in `O(relocs + subsections)` rather than the `O(relocs *
log(subsections))` that our previous binary search implementation
required.
Unfortunately, `ld -r` can still emit unsorted relocations, so we have a
fallback code path for that (less common) case.
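A sketch of the linear matching (data structures simplified and names assumed; the real code works on relocation and subsection offsets within a section, and assumes at least one subsection starting at or below the lowest reloc offset):
```
#include <cstddef>
#include <cstdint>
#include <vector>

struct Subsection {
  uint64_t offset;  // start offset within the containing section
};

struct Reloc {
  uint64_t offset;                   // offset of the relocated location
  Subsection *containing = nullptr;  // filled in below
};

// `relocs` is in descending offset order (as clang/gcc emit them) and
// `subsections` is ascending. Walking the relocs backwards lets a single
// cursor advance through the subsections: O(relocs + subsections).
void assignRelocsToSubsections(std::vector<Reloc> &relocs,
                               std::vector<Subsection> &subsections) {
  size_t cursor = 0;
  for (auto it = relocs.rbegin(); it != relocs.rend(); ++it) {
    while (cursor + 1 < subsections.size() &&
           subsections[cursor + 1].offset <= it->offset)
      ++cursor;
    it->containing = &subsections[cursor];
  }
}
```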
Numbers for linking chromium_framework on my 3.2 GHz 16-Core Intel Xeon W:
     N      Min      Max   Median      Avg       Stddev
x   20     4.04     4.11    4.075   4.0775  0.018027756
+   20     3.95     4.02     3.98    3.985  0.020900768
Difference at 95.0% confidence
    -0.0925 +/- 0.0124919
    -2.26855% +/- 0.306361%
    (Student's t, pooled s = 0.0195172)
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105410
Two bugs:
1. This tries to take the address of the last symbol plus the length
of the last symbol. However, the sorted vector is cuPtrVector,
not cuVector. Also, cuPtrVector has tombstone values removed
and cuVector doesn't. If there was a stripped value at the end,
the "last" element's value was UINT64_MAX, which meant the
sentinel value was one less than the length of that "last"
dead symbol.
2. We have to subtract in.header->addr. For 64-bit binaries that's
(1 << 32) and functionAddress is 32-bit so this is a no-op, but
for 32-bit binaries the sentinel's value was too large.
I believe this has no effect in practice since the first-level
binary search code in libunwind (in UnwindCursor.hpp) does:
    uint32_t low = 0;
    uint32_t high = sectionHeader.indexCount();
    uint32_t last = high - 1;
    while (low < high) {
      uint32_t mid = (low + high) / 2;
      if ((mid == last) ||
          (topIndex.functionOffset(mid + 1) > targetFunctionOffset)) {
        low = mid;
        break;
      } else {
        low = mid + 1;
      }
So the address of the last entry in the first-level table isn't really
checked -- except for the very end, but the check against `last` means
we just run the loop once more than necessary. Still, the fix makes
`unwinddump` output look less confusing, and it looks like that was the
intention here.
(No test since I can't think of a way to make FileCheck check that one
number is larger than another.)
Differential Revision: https://reviews.llvm.org/D105404
If linking directly against a DLL without an import library, the
DLL export symbols might not contain stdcall decorations.
If we have an undefined symbol with decoration, and we happen to have
a matching undecorated symbol (which either is lazy and can be loaded,
or already defined), then alias it against that instead.
This matches what's done in reverse, when we have a def file
declaring to export a symbol without decoration, but we only have
a defined decorated symbol. In that case we do a fuzzy match
(SymbolTable::findMangle). This case is more straightforward; if we
have a decorated undefined symbol, just strip the decoration and look
for the corresponding undecorated symbol name.
Add warnings and options for either silencing the warning or disabling
the whole feature, corresponding to how ld.bfd does it.
(This feature works for any symbol decoration mismatch, not only when
linking against a DLL directly; ld.bfd also tolerates it anywhere,
and also fixes up mismatches in the other direction, like
SymbolTable::findMangle, for any symbol, not only exports. But in
practice, at least for lld, it would primarily end up used for linking
against DLLs.)
Differential Revision: https://reviews.llvm.org/D104532
As the COFF linker is capable of linking directly against a DLL now
(after D104530, as long as it is running in mingw mode), don't error
out here but successfully load libraries specified with "-l" from DLLs
if that's what ld.bfd would have matched.
Differential Revision: https://reviews.llvm.org/D104531
GNU ld.bfd supports linking directly against DLLs without using an
import library, and some projects have picked up on this habit.
(There's no single insurmountable issue with using import
libraries, but this is a regularly surfacing missing feature.)
As long as one is linking by name (instead of by ordinal), the DLL
export table contains most of the information needed. (One can
inspect what section a symbol points at, to see if it's a function
or data symbol. The practical implementation of this loops over all
sections for each symbol, but as long as they're not very many, that
should hopefully be tolerable performance wise.)
One exception where the information in the DLL isn't entirely enough
is on i386 with stdcall functions; depending on how they're done,
the exported function name can be a plain undecorated name, while
the import library would contain the full decorated symbol name. This
issue is addressed separately in a different patch.
This is implemented mimicking the structure of a regular import library,
with one InputFile corresponding to the static archive that just adds
lazy symbols, which then are fetched when they are needed. When such
a symbol is fetched, we synthesize a coff_import_header structure
in memory and create a regular ImportFile out of it.
The implementation could be even smaller by just creating ImportFiles
for every symbol available immediately, but that would have the
drawback of actually ending up importing all symbols unless running
with GC enabled (and mingw mode defaults to having it disabled for
historical reasons).
Differential Revision: https://reviews.llvm.org/D104530
We have been creating many ConcatInputSections with identical values due
to .subsections_via_symbols. This diff factors out the identical values
into a Shared struct, to reduce memory consumption and make copying
cheaper.
I also changed `callSiteCount` from a uint32_t to a 31-bit field to save an
extra word.
All in all, this takes InputSection from 120 to 72 bytes (and
ConcatInputSection from 160 to 112 bytes), i.e. 30% size reduction in
ConcatInputSection.
Numbers for linking chromium_framework on my 3.2 GHz 16-Core Intel Xeon W:
     N      Min      Max   Median      Avg       Stddev
x   20     4.14     4.24     4.18    4.183  0.027548999
+   20     4.04     4.11    4.075   4.0775  0.018027756
Difference at 95.0% confidence
    -0.1055 +/- 0.0149005
    -2.52211% +/- 0.356215%
    (Student's t, pooled s = 0.0232803)
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105305
`__cfstring` is a special literal section, so instead of breaking it up
at symbol boundaries, we break it up at fixed-width boundaries (since
each literal is the same size). Symbols can only occur at one of those
boundaries, so this is strictly more powerful than
`.subsections_via_symbols`.
With that in place, we then run the section through ICF.
This change is about perf-neutral when linking chromium_framework.
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D105045
This is a pretty big refactoring diff, so here are the motivations:
Previously, ICF ran after scanRelocations(), where we emit
bind/rebase opcodes etc., so we had a bunch of redundant leftovers after
ICF. Having ICF run before the Writer seems like a better design, and is
what LLD-ELF does, so this diff refactors it accordingly.
However, ICF had two dependencies on things occurring in Writer: 1) it
needs literals to be deduplicated beforehand and 2) it needs to know
which functions have unwind info, which was being handled by
`UnwindInfoSection::prepareRelocations()`.
In order to do literal deduplication earlier, we need to add literal
input sections to their corresponding output sections. So instead of
putting all input sections into the big `inputSections` vector, and then
filtering them by type later on, I've changed things so that literal
sections get added directly to their output sections during the 'gather'
phase. Likewise for compact unwind sections -- they get added directly
to the UnwindInfoSection now. This latter change is not strictly
necessary, but makes it easier for ICF to determine which functions have
unwind info.
Adding literal sections directly to their output sections means that we
can no longer determine `inputOrder` from iterating over
`inputSections`. Instead, we store that order explicitly on
InputSection. Bloating the size of InputSection for this purpose would
be unfortunate -- but LLD-ELF has already solved this problem: it reuses
`outSecOff` to store this order value.
One downside of this refactor is that we now make an additional pass
over the unwind info relocations to figure out which functions have
unwind info, since we want to know that before `processRelocations()`. I've
made sure to run that extra loop only if ICF is enabled, so there should
be no overhead in non-optimizing runs of the linker.
The upside of all this is that the `inputSections` vector now contains
only ConcatInputSections that are destined for ConcatOutputSections, so
we can clean up a bunch of code that just existed to filter out other
elements from that vector.
I will test for the lack of redundant binds/rebases in the upcoming
cfstring deduplication diff. While binds/rebases can also happen in the
regular `.text` section, they're more common in `.data` sections, so it
seems more natural to test it that way.
This change is perf-neutral when linking chromium_framework.
Reviewed By: oontvoo
Differential Revision: https://reviews.llvm.org/D105044
Everything (including test) modified from ELF/COFF. Using the same syntax
(--lto-O3, etc) as ELF.
Differential Revision: https://reviews.llvm.org/D105223
Previously, we only applied the renames to
ConcatOutputSections.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D105079
For
```
SECTIONS {
text.0 : {}
text.1 : {}
text.2 : {}
} INSERT AFTER .data;
```
the current order is `.data text.2 text.1 text.0`. It makes more sense to
preserve the specified order and thus improve compatibility with GNU ld.
For
```
SECTIONS { text.0 : {} } INSERT AFTER .data;
SECTIONS { text.3 : {} } INSERT AFTER .data;
```
GNU ld somehow collects sections with `INSERT AFTER .data` together (IMO
inconsistent) but I think it makes more sense to execute the commands in order
and get `.data text.3 text.0` instead.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D105158
See the comment for my understanding of the -no-pie and -shared expectations.
-no-pie leaves some freedom of choice. We choose dynamic relocations to be consistent
with the handling of GOT-generating relocations.
Note: GNU ld has arch-varying behaviors and its x86 -pie has a very
complex rule:
if there is at least one GOT-generating or PLT-generating relocation and
-z dynamic-undefined-weak (enabled by default) is in effect, generate a
dynamic relocation.
We don't emulate its rule.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D105164
A couple of FileCheck patterns had not been hooked up, and the
patterns had suffered some drift. As this test is old
and llvm-objdump has improved a lot, take this opportunity to
hide the instruction encoding. I've also taken out a lot of
the explanatory comments that llvm-objdump improvements make
redundant, as these comments often don't get updated when addresses
change.
Differential Revision: https://reviews.llvm.org/D104907
There are a couple of problems with the code to patch
unrelocated BLX instructions:
1. The calculation of the PC needs to take into account
the alignment of the instruction. The Thumb BLX
uses alignDown(PC, 4) for the source address.
2. The calculation of the PC bias is hard-coded to 4
which works for Thumb, but when there is a BLX the
branch will be in Arm state so it needs an 8 byte
PC bias.
No assembler generates an unrelocated BLX instruction,
so these problems do not affect real world programs.
However we should still fix them.
Differential Revision: https://reviews.llvm.org/D104905
SymtabSection::emitStabs() writes the symbol table in the order
of externalSymbols, which has the order of symtab->getSymbols(),
which is just the order symbols are added to the symbol table.
In practice, symbols in the symbol tables of input .o files are
sorted, but since that's not guaranteed we sort them in
ObjFile::parseSymbols(). To make sure several symbols with the same
address keep the order they're in the input file, we have to use
stable_sort().
In practice, std::sort() on already-sorted inputs won't change the order
of just adjacent elements, and while in theory std::sort() could use a
random pivot, in practice the code should be deterministic as it was
previously too.
But now lld/test/MachO/stabs.s passes with LLVM_ENABLE_EXPENSIVE_CHECKS=ON
(the last test that was failing with that set).
Fixes a regression from D99972.
While here, remove an empty section in stabs.s and move
.subsections_via_symbols to the end where it usually is (this part no
behavior change).
Differential Revision: https://reviews.llvm.org/D105071
Fixes PR50637.
Downstream bug: https://crbug.com/1218958
Currently, we split __cstring along symbol boundaries with .subsections_via_symbols
when not deduplicating, and along null bytes when deduplicating. This change splits
along null bytes unconditionally, and preserves original alignment in the non-
deduplicated case.
Removing subsections-section-relocs.s because with this change, __cstring
is never reordered based on the order file.
Differential Revision: https://reviews.llvm.org/D104919
The two different thread_local_regular sections (__thread_data and
more_thread_data) had nondeterministic ordering for two reasons:
1. https://reviews.llvm.org/D102972 changed concatOutputSections
from MapVector to DenseMap, so when we iterate it to make
output segments, we would add the two sections to the __DATA
output segment in nondeterministic order.
2. The same change also moved the two stable_sort()s for segments
and sections to sort(). Since sections with assigned priority
(such as TLV data) have the same priority for all sections,
this is incorrect -- we must use stable_sort() so that the
initial (input-order-based) order remains.
As a side effect, we now (deterministically) put the __common
section in front of __bss (while previously we happened to
put it after it). (__common and __bss are both zerofill so
both have order INT_MAX, but common symbols are added to
inputSections before normal sections are collected.)
Makes lld/test/MachO/tlv.s and lld/test/MachO/tlv-dylib.s pass with
LLVM_ENABLE_EXPENSIVE_CHECKS=ON.
Differential Revision: https://reviews.llvm.org/D105054
Make sure we don't wrongly fold two sections that refer to
symbols with the same value if they are not both absolute /
non-absolute.
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D104876
Literal sections can be deduplicated before running ICF. That makes it
easy to compare them during ICF: we can tell if two literals are
constant-equal by comparing their offsets in their OutputSection.
LLD-ELF takes a similar approach.
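A sketch of the comparison this enables (field and function names are assumptions): once literals are deduplicated into their output section, constant-equality reduces to comparing coordinates instead of bytes.
```
#include <cstdint>

struct OutputSection;

// After deduplication, each literal reference resolves to a unique slot in
// its output section.
struct LiteralRef {
  const OutputSection *osec;
  uint64_t offset;
};

bool literalsEqual(const LiteralRef &a, const LiteralRef &b) {
  return a.osec == b.osec && a.offset == b.offset;
}
```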
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D104671
This test has always failed on 32-bit armv8 bots:
https://lab.llvm.org/buildbot/#/builders/178/builds/42
due to the output order of some symbols changing.
I don't think this is an Arm-specific issue, so disable it
on 32-bit while it's investigated.
The patch reuses the common code to print memory operand addresses as
instruction comments. This helps to align the comments and enables using
target-specific comment markers when `evaluateMemoryOperandAddress()` is
implemented for them.
Differential Revision: https://reviews.llvm.org/D104861
libunwind uses unwind info to find the function address belonging
to the current instruction pointer. libunwind/src/CompactUnwinder.hpp's
step functions read functionStart for UNWIND_X86_64_MODE_STACK_IND
(and for nothing else), so these encodings need a dedicated entry
per function, so that the runtime can get the stack size off the
`subq` instruction in the function's prologue.
This matches ld64.
(CompactUnwinder.hpp from https://opensource.apple.com/source/libunwind/
also reads functionStart in a few more cases if `SUPPORT_OLD_BINARIES` is set,
but it defaults to 0, and ld64 seems to not worry about these additional
cases.)
Related upstream bug: https://crbug.com/1220175
Differential Revision: https://reviews.llvm.org/D104978
Modify the D13209 logic: for a script inside the sysroot, if an absolute path
does not exist, report an error instead of falling back to the path without the
sysroot prefix.
This matches GNU ld, which makes sense to me: we don't want to find an arbitrary
file in the host.
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D104894
Commit 728cc0075e made comdat symbols
from LTO objects be treated as any regular comdat symbol. This works
great for symbols that actually are IMAGE_COMDAT_SELECT_ANY, but
if the symbols have a less trivial selection type that requires comparing
either the section chunk size or contents, we can't check that before
actually doing the LTO compilation.
Therefore bring back one aspect of the previous handling: comdat
resolution with a leader from an LTO symbol is essentially skipped,
as it was before 728cc0075e.
Differential Revision: https://reviews.llvm.org/D104605
... even on targets preferring RELA. The section is only consumed by ld.lld
which can handle REL.
Follow-up to D104080 as I explained in the review. There are two advantages:
* The D104080 code only handles RELA, so arm/i386/mips32 etc may warn for -fprofile-use=/-fprofile-sample-use= usage.
* Decrease object file size for RELA targets
While here, change the relocation to relocate weights, instead of 0,1,2,3,..
I failed to catch the issue during review.
`icfEqClass` only makes sense on ConcatInputSections since (in contrast
to literal sections) they are deduplicated as an atomic unit.
Similarly, `hasPersonality` and `replacement` don't make sense on
literal sections.
This mirrors LLD-ELF, which stores `icfEqClass` only on non-mergeable
sections.
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D104670
We previously did this only for x86_64, but it turns out that
arm64 needs this too -- see PR50791.
Ultimately this is a hack, and we should avoid over-aligning strings
that don't need it. I'm just having a hard time figuring out how ld64 is
determining the right alignment.
No new test for this since we were already testing this behavior for
x86_64, and extending it to arm64 seems too trivial.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104835
Currently, when .llvm.call-graph-profile is created by llvm, it explicitly encodes the symbol indices. This section is basically a black box for post-processing tools. For example, if we run strip -s on the object files, the symbol table changes but the indices in that section do not. In the non-visible failure mode, the indices point to the wrong symbols; in the visible one, the indices point outside of the symbol table: "invalid symbol index".
This patch changes the format by using R_*_NONE relocations to indicate the from/to symbols. The frequency (weight) will still be in .llvm.call-graph-profile, but the symbol information will be in the relocation section. In LLD, information from both sections is used to reconstruct the call graph profile. The relocations themselves will never be applied.
With this approach, post-processing tools that handle relocations correctly also work for this section. Tools can add or remove symbols, and as long as they handle relocation sections properly, the information stays correct.
In a quick experiment with clang-13, the aggregate size of all the input sections went up from 107KB to 322KB; the clang-13 binary itself is ~118MB.
For users of -fprofile-use/-fprofile-sample-use, the size of object files will go up slightly, but it will not impact the final binary size.
Reviewed By: jhenderson, MaskRay
Differential Revision: https://reviews.llvm.org/D104080
Add tests for pending TODOs, plus some global cleanups:
* No fold: func has personality/LSDA
* Fold: reference to absolute symbol with different name but identical value
* No fold: reloc references to absolute symbols with different values
* No fold: N_ALT_ENTRY symbols
Differential Revision: https://reviews.llvm.org/D104721
"""Bitcode symbols only exist before LTO runs, and only serve the purpose of
resolving visibility so LTO can better optimize. Running LTO creates ObjFiles
from BitcodeFiles, and those ObjFiles contain regular Defined symbols (with
isec set and all) that will replace the bitcode symbols. So things should
(hopefully) work as-is :)"""
-- https://reviews.llvm.org/rGdbbc8d8333f29cf4ad6f4793da1adf71bbfdac69#inline-6081
This particular linker invocation is only run to check that we accept
options, but we don't inspect the generated command line. As all other
commands in the file have their output piped to FileCheck, the lit test
doesn't print any other output; therefore silence this one for consistency
as well.
This is consistent with how clang prints its internal commands with
-### and -v.
When linking with -verbose, we get log messages from the actual
linking written to stderr. By printing the command to the same stream,
we make sure they appear in a sensible chronological order.
Differential Revision: https://reviews.llvm.org/D104527
getLineNumber() was counting the number of line feeds from the start of
the buffer to the current token. For large linker scripts this became a
performance bottleneck. For one 4MB linker script over 4 minutes was
spent in getLineNumber's StringRef::count.
Store the line number from the last token, and only count the additional
line feeds since the last token.
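A sketch of that caching scheme (member names are assumptions, not the actual lexer code):
```
#include <cstddef>
#include <string>

struct LexerLineCounter {
  std::string buffer;         // the whole linker script
  size_t lastOffset = 0;      // where the previous query stopped
  size_t lastLineNumber = 1;  // line number at lastOffset

  // Count only the line feeds between the previous token and this one,
  // instead of rescanning from the start of the buffer every time.
  size_t getLineNumber(size_t tokenOffset) {
    if (tokenOffset < lastOffset) {  // rare backwards query: start over
      lastOffset = 0;
      lastLineNumber = 1;
    }
    for (size_t i = lastOffset; i < tokenOffset; ++i)
      if (buffer[i] == '\n')
        ++lastLineNumber;
    lastOffset = tokenOffset;
    return lastLineNumber;
  }
};
```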
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D104137
This reverts commit e1adf90826.
This appears to affect the way that C++ mangled symbols appear in the
import library when using a .def file that names a C++ free function
with no name decoration. I will follow up with a reduced test case
shortly.
Fixes PR50529. With this, lld-linked Chromium base_unittests passes on arm macs.
Surprisingly, no measurable impact on link time.
Differential Revision: https://reviews.llvm.org/D104681
The variable used to need the wider scope, but doesn't after the
reland. See LC_LINKER_OPTIONS-related discussion on
https://reviews.llvm.org/D104353 for background.
Real zerofill sections go after __thread_bss, since zerofill sections
must all be at the end of their segment and __thread_bss must be right
after __thread_data.
Works fine already, but wasn't tested as far as I can tell.
Also tweak comment about zerofill sections a bit.
No behavior change.
Differential Revision: https://reviews.llvm.org/D104609
We make it less than INT_MAX in order not to conflict with the ordering
of zerofill sections, which must always be placed at the end of their
segment.
This is the more structural fix for the issue addressed in {D104596}.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104607
This is run every time around in the main linker loop. Once a match
has been found, stop trying to rematch such a symbol.
Not sure if this has any actual measurable performance impact though
(SymbolTable::findMangle() iterates over the whole symbol table for
each call and does fuzzy matching on top of that) but this makes the
code more reassuring to read at least. (This is in practice run for def
files listing undecorated stdcall functions to be exported.)
Differential Revision: https://reviews.llvm.org/D104529
Pass the original argv[0] to the coff linker, as the coff linker uses
the basename of argv[0] as the log prefix.
This makes error messages to be printed with a "ld.lld:" prefix
instead of "lld-link:". The current "lld-link:" prefix can be confusing
to users, as they're invoking the MinGW linker (and might not even have
a lld-link executable).
Keep the first argument as lld-link when printing the command line, to
make it an actually reproducible standalone command.
Differential Revision: https://reviews.llvm.org/D104526
The exact location doesn't matter, but it should be in front
of __thread_bss. We put it right in front of __thread_data
which is where ld64 seems to put it as well.
Fixes PR50769.
(As mentioned on the bug, there is probably a more structural
fix too, see comment 5. If we don't address this, it's likely
we'll run into this again with other synthetic sections. But
for now, let's fix the immediate breakage.)
Differential Revision: https://reviews.llvm.org/D104596
...instead of S_NON_LAZY_SYMBOL_POINTERS. This matches ld64.
Part of PR50769.
While here, also remove an old TODO that was done in D87178.
Differential Revision: https://reviews.llvm.org/D104594
findLibrary() returned a StringRef while findFramework & other helper
functions returned std::strings. Standardize on std::string.
(I initially tried making the helper functions all return StringRefs,
but I realized we shouldn't return input StringRefs since their
lifetimes would not be obvious from the calling code.)
Previously, we asserted that such a case was invalid, but in fact
`ld -r` can emit such symbols if the input contained a (true) private
extern, or if it contained a symbol starting with "L".
Non-extern symbols marked as private extern are essentially equivalent
to regular TU-scoped symbols, so no new functionality is needed.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104502
The `icf` command-line option is not present in ld64, so it should use the LLD option syntax, which begins with double dashes and separates the primary option from any suboption with an equals sign.
Differential Revision: https://reviews.llvm.org/D104548
This change revisits https://reviews.llvm.org/D79248 which originally
added support for the --unresolved-symbols flag.
At the time I thought it would make sense to add a third option to this
flag called `import-functions`, but it turns out (as some of the
reviewers suspected, IIRC) that this option can be orthogonal.
Instead I've added a new option called `--import-undefined` that only
operates on symbols that can be imported (for example, function symbols
can always be imported, as opposed to data symbols, which can only be
imported when compiling with PIC).
This option gives us the full expressivity that emscripten needs to be
able to allow reporting of undefined data symbols, as well as the option to
disable that.
This change does remove the `--unresolved-symbols=import-functions`
option, which has been in the codebase now for about a year, but I would
be extremely surprised if anyone was using it.
Differential Revision: https://reviews.llvm.org/D103290
ICF = Identical C(ode|OMDAT) Folding
This is the LLD ELF/COFF algorithm, adapted for MachO. So far, only `-icf all` is supported. In order to support `-icf safe`, we will need to port address-significance tables (`.addrsig` directives) to MachO, which will come in later diffs.
`check-{llvm,clang,lld}` have 0 regressions for `lld -icf all` vs. baseline ld64.
We only run ICF on `__TEXT,__text` for reasons explained in the block comment in `ConcatOutputSection.cpp`.
Here is the perf impact for linking `chromium_framework` on a Mac Pro (16-core Xeon W) for the non-ICF case vs. pre-ICF:
```
     N      Min      Max   Median      Avg       Stddev
x   20     4.27     4.44     4.34    4.349  0.043029977
+   20     4.37     4.46    4.405   4.4115  0.025188761
Difference at 95.0% confidence
    0.0625 +/- 0.0225658
    1.43711% +/- 0.518873%
    (Student's t, pooled s = 0.0352566)
```
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D103292
We need to dedup archive loads (similar to what we do for dylib
loads).
I noticed this issue after building some Swift stuff that used
`-force_load_swift_libs`, as it caused some Swift archives to be loaded
many times.
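A sketch of the dedup (names hypothetical, keyed on the resolved archive path):
```
#include <string>
#include <unordered_map>

struct ArchiveFile {
  std::string path;
  // ... member list, lazy symbols, etc.
};

// Load each archive at most once, even if it is named repeatedly on the
// command line (e.g. the same path passed to -force_load twice).
ArchiveFile *getOrLoadArchive(const std::string &resolvedPath) {
  static std::unordered_map<std::string, ArchiveFile *> loaded;
  auto [it, inserted] = loaded.try_emplace(resolvedPath, nullptr);
  if (inserted)
    it->second = new ArchiveFile{resolvedPath};
  return it->second;
}
```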
Reviewed By: #lld-macho, thakis, MaskRay
Differential Revision: https://reviews.llvm.org/D104353
After D77330, the comments are inconsistent with the disassembled code.
As the value of `far` has been changed, a thunk to reach it is now
generated, and target addresses of branch instructions are different
from what was initially expected.
The patch fixes that and makes the test closer to what it was originally.
Differential Revision: https://reviews.llvm.org/D104286
The following class isn't part of the export table; there's a
second correctly placed comment about the things that actually
belong to the export table.
I removed them in rG5de7467e982 but @thakis pointed out that
they were useful to keep, so here they are again. I've also converted
the `!isCoalescedWeak()` asserts into `!shouldOmitFromOutput()` asserts,
since the latter check subsumes the former.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104169
It's a warning in ld64. While having LLD be stricter would be nice, it
makes it harder for it to be a drop-in replacement into existing builds.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104333
During PHDR creation, the case where an output section does not require a
PT_LOAD header but still occupies memory in the current VMA region was not handled.
If such an output section interleaves two output sections that have the same
VMA and LMA regions set, we would previously re-use the existing PT_LOAD header
for the second output section.
However, since the memory region is not contiguous, we need to start a new PT_LOAD
segment.
This fixes https://bugs.llvm.org/show_bug.cgi?id=50558
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D103815