Stub dylibs differ from "real" dylibs in that they lack any content in
their sections. What they do have are export tries and symbol tables,
which means we can still link against them. I am unsure how to
properly create these stub dylibs; Xcode 11.3's `lipo` is able to create
stub dylibs, but those lack LC_ID_DYLIB load commands and are considered
invalid by most tooling. Newer versions of `lipo` aren't able to create
stub dylibs at all. However, recent SDKs in Xcode still come with valid
stub dylibs, so it still seems worthwhile to support them. The YAML in
this diff's test was generated by taking a non-stub dylib and editing
the appropriate fields.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D89012
This covers a few cases that aren't otherwise tested:
1) Non-ASCII symbol names are ordered.
2) Comments, whitespace and blank lines are trimmed.
3) Missing order files result in an error.
Reviewed by: MaskRay, grimar
Differential Revision: https://reviews.llvm.org/D90933
I noticed when running a large link with the --time-trace option that
there were several areas which were missing any specific time trace
categories (aside from the generic link/ExecuteLinker categories). This
patch adds new categories to fill most of the "gaps", or to provide more
detail than was previously provided.
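For illustration (not code from this patch), a time-trace category in LLVM is
just a named RAII scope around a phase; the phase name below is made up:
```
#include "llvm/Support/TimeProfiler.h"

void writeMapFile() {
  // Appears as its own category in the --time-trace JSON output.
  llvm::TimeTraceScope timeScope("Write map file");
  // ... existing phase body ...
}
```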
Reviewed by: MaskRay, grimar, russell.gallop
Differential Revision: https://reviews.llvm.org/D90686
On LP64 platforms, this decreases sizeof(InputSection) from 208 (larger
on Windows) to 184.
For a large executable (7.6GiB, inputSections.size()=5105122,
make<InputSection> called 4835760 times), this decreases cgroup
memory.max_usage_in_bytes by 0.6%.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D91018
Add a calling convention called amdgpu_gfx for real function calls
within graphics shaders. For the moment, this uses the same calling
convention as other calls in amdgpu, with registers excluded for return
address, stack pointer and stack buffer descriptor.
Differential Revision: https://reviews.llvm.org/D88540
This is more or less a port of rL329598 (D45275) to the COFF linker.
Since there were already LTO-related settings under -opt:, I added
them there instead of adding new flags.
Differential Revision: https://reviews.llvm.org/D90624
Make it possible for lld users to provide a custom script that helps
find missing libraries. A possible scenario could be:
% clang /tmp/a.c -fuse-ld=lld -loauth -Wl,--error-handling-script=/tmp/addLibrary.py
unable to find library -loauth
looking for relevant packages to provide that library
liboauth-0.9.7-4.el7.i686
liboauth-devel-0.9.7-4.el7.i686
liboauth-0.9.7-4.el7.x86_64
liboauth-devel-0.9.7-4.el7.x86_64
pix-1.6.1-3.el7.x86_64
Here, addLibrary would be called with the missing library name as its first
argument (in this case `addLibrary.py oauth`).
Differential Revision: https://reviews.llvm.org/D87758
Match MSVC linker output - align all debug directories on four bytes,
while removing debug directory alignment. This would have the same
effect on CETCOMPAT support as D89919.
Chromium bug: https://crbug.com/1136664
Differential Revision: https://reviews.llvm.org/D89921
In the presence of a gap, the st_value field of a STT_SECTION symbol is the
address of the first input section (incorrect if there is a gap). Set it to the
output section address instead.
In -r mode, this bug can cause an incorrect non-zero st_value of a STT_SECTION
symbol (while output sections have zero addresses, input sections may have
non-zero outSecOff). The non-zero st_value can cause the final link to have
incorrect relocation computation (both GNU ld and LLD add st_value of the
STT_SECTION symbol to the output section address).
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D90520
This test was checking behaviour that only exists in the debug
configuration, so it will fail in release builds.
Perhaps there is a way to keep this test around and only run
it in debug builds, but for now I'm removing it to fix the
release builders.
Differential Revision: https://reviews.llvm.org/D90542
I had envisioned the ghash step as a big up front step, but as currently
written, the timers are nested, and we are notionally adding types from
objects, so we might as well arrange the timers this way.
This preprocessor define was meant to be used to conditionally include VCSVersion.inc. However, the define was always set, and it was the content of the header that was conditionally generated. Therefore HAVE_VCS_VERSION_INC should be cleaned up.
Reviewed By: gribozavr2, MaskRay
Differential Revision: https://reviews.llvm.org/D84623
This field represents the amount of static data needed by
a dynamic library or executable. It should not include things
like heap or stack areas, which in the case of `-pie` are
not determined until runtime (e.g. __stack_pointer is imported).
Differential Revision: https://reviews.llvm.org/D90261
While MC did not produce R_X86_64_GOTPCRELX for test/binop instructions
(movl/adcl/addl/andl/...) before the previous commit, this code path has been
exercised with -fno-integrated-as (GNU as) since 2016: -no-pie relaxing
may incorrectly access loc[-3] and produce a corrupted instruction.
Simply handle test/binop R_X86_64_GOTPCRELX like R_X86_64_GOTPCREL.
This partially reverts D85994.
In glibc, elf/dl-sym.c calls the raw `__tls_get_addr` by specifying the
tls_index parameter. Such a call does not have a pairing R_PPC64_TLSGD/R_PPC64_TLSLD.
This is legitimate. Since we cannot distinguish the benign case from cases due
to toolchain issues, we have to be permissive.
Acked by Stefan Pintilie
Add support to LLD for PC Relative Thread Local Storage for Local Dynamic.
This patch adds support for two relocations: R_PPC64_GOT_TLSLD_PCREL34 and
R_PPC64_DTPREL34.
The Local Dynamic code is:
```
pla r3, x@got@tlsld@pcrel R_PPC64_GOT_TLSLD_PCREL34
bl __tls_get_addr@notoc(x@tlsld) R_PPC64_TLSLD
R_PPC64_REL24_NOTOC
...
paddi r9, r3, x@dtprel R_PPC64_DTPREL34
```
After relaxation to Local Exec:
```
paddi r3, r13, 0x1000
nop
...
paddi r9, r3, x@dtprel R_PPC64_DTPREL34
```
Reviewed By: NeHuang, sfertile
Differential Revision: https://reviews.llvm.org/D87504
These are all inspired by existing test coverage we have in an internal
testsuite.
Reviewed by: grimar, MaskRay
Differential Revision: https://reviews.llvm.org/D89775
For a diagnostic `A refers to B` where B refers to a bitcode file, if the
symbol gets optimized out, the user may see `A refers to <internal>`; if the
symbol is retained, the user may see `A refers to lto.tmp`.
Save the referencing InputFile * in the DenseMap so that the original filename is
available in reportBackrefs().
The ELF spec says
> If the sh_flags field for this section header includes the attribute SHF_INFO_LINK, then this member represents a section header table index.
Set SHF_INFO_LINK so that binary manipulation tools know that sh_info is
a section header table index instead of something else (the number of local
symbols in the case of SHT_SYMTAB/SHT_DYNSYM).
We have already added SHF_INFO_LINK for --emit-relocs retained SHT_REL[A].
For example, we can teach llvm-objcopy to preserve the section index of the sh_info referenced section if
SHF_INFO_LINK is set. (GNU objcopy recognizes .rel[a].plt and updates
sh_info even if SHF_INFO_LINK is not set).
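For illustration, the change boils down to setting one flag bit wherever
sh_info holds a section index; a minimal sketch using LLVM's ELF definitions
(toy function, not lld's actual code):
```
#include "llvm/BinaryFormat/ELF.h"
#include <cstdint>

// Mark a section whose sh_info is a section header table index (e.g.
// SHT_REL[A]) with SHF_INFO_LINK, so tools such as llvm-objcopy know that
// sh_info must be updated when section indices change.
void markInfoLink(llvm::ELF::Elf64_Shdr &sec, uint32_t targetSecIndex) {
  sec.sh_info = targetSecIndex;             // an index, not a symbol count
  sec.sh_flags |= llvm::ELF::SHF_INFO_LINK;
}
```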
Reviewed By: grimar, psmith
Differential Revision: https://reviews.llvm.org/D89828
The combination of --gdb-index and ICF has not been tested before. In the case
of ICF, `e.section->getVA(0)` equals the start address of the output section.
This can cause incorrect overlapping with the actual function at the
start of the output section and potentially trigger a GDB internal error
in `dw2_find_pc_sect_compunit_symtab` (presumably because:
if a short address range incorrectly starts at the start address of the
output section, GDB may pick it instead of the correct longer address
range. When mapping an address within the long address range but
out of the scope of the short address range, the routine may find
nothing - while the code asserts that it can find something).
Note that in the case of ICF there may be duplicate address range entries,
but GDB appears to be fine with them.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D89751
This reverts commit 1b589f4d4d and relands D89463
with the fix: update `MappingTraits<FileFilter>::validate()` in ClangTidyOptions.cpp to
match the new signature (change the return type to "std::string" from "StringRef").
Original commit message:
This:
Changes the return type of MappingTraits<T>::validate to std::string
instead of StringRef. This allows creating more complex error messages.
It introduces std::vector<std::pair<StringRef, bool>> getEntries():
a new virtual method of Section, which is the base class for all sections.
It returns the names of special section-specific keys (e.g. "Entries") and flags that say whether they exist in the YAML.
The code in validate() uses this list of entry descriptions to generalize validation.
This approach was discussed in the D89039 thread.
Differential revision: https://reviews.llvm.org/D89463
This:
1) Changes the return type of `MappingTraits<T>::validate` to `std::string`
instead of `StringRef`. This allows creating more complex error messages.
2) It introduces std::vector<std::pair<StringRef, bool>> getEntries():
a new virtual method of Section, which is the base class for all sections.
It returns the names of special section-specific keys (e.g. "Entries") and flags that
say whether they exist in the YAML. The code in validate() uses this list of entry
descriptions to generalize validation.
This approach was discussed in the D89039 thread.
Differential revision: https://reviews.llvm.org/D89463
Add a simple forwarding option in the MinGW frontend, and implement
the private -wrap option in the COFF linker.
The feature in lld-link isn't gated by the -lldmingw option, but
the option is left as a private, undocumented option primarily
used by the MinGW driver.
The implementation is significantly based on the support for --wrap
in the ELF linker, but many small details differ
between the ELF and COFF linkers, resulting in more than a few
implementation differences.
This fixes https://bugs.llvm.org/show_bug.cgi?id=47384.
Differential Revision: https://reviews.llvm.org/D89004
Reapplied with the bitfield member canInline fixed so it doesn't break
builds targeting Windows.
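For context, a sketch of the MSVC bitfield-packing pitfall involved (member
names illustrative, not lld's actual layout):
```
// MSVC only packs adjacent bitfields that share the same underlying type;
// mixing types starts a new storage unit and grows the struct.
struct Padded {
  unsigned kind : 8;
  bool canInline : 1;     // new allocation unit on MSVC -> bigger struct
};
struct Packed {
  unsigned kind : 8;
  unsigned canInline : 1; // packs into the same unit as `kind`
};
```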
This reverts commit a012c704b5.
Breaks Windows builds.
C:\src\llvm-mint\lld\COFF\Symbols.cpp(26,1): error: static_assert failed due to requirement 'sizeof(lld::coff::SymbolUnion) <= 48' "symbols should be optimized for memory usage"
static_assert(sizeof(SymbolUnion) <= 48,
Add a simple forwarding option in the MinGW frontend, and implement
the private -wrap option in the COFF linker.
The feature in lld-link isn't gated by the -lldmingw option, but
the option is left as a private, undocumented option primarily
used by the MinGW driver.
The implementation is significantly based on the support for --wrap
in the ELF linker, but many small details differ
between the ELF and COFF linkers, resulting in more than a few
implementation differences.
This fixes https://bugs.llvm.org/show_bug.cgi?id=47384.
Differential Revision: https://reviews.llvm.org/D89004
This should fix cases where e.g. auto-import is enabled without
MinGW mode as a whole being enabled.
Differential Revision: https://reviews.llvm.org/D89006
ICF was not able to merge equivalent sections because of relocations to
sections ineligible for ICF that use alternative symbols, e.g. symbol
aliases or section relative relocations.
Merging in this scenario has been enabled by giving the sections that
are ineligible for ICF a unique ID, i.e. an equivalence class of their
own. This approach also provides another benefit as it improves the
hashing that is used to perform the initial equivalence grouping for
ICF. This is because the ICF-ineligible sections can now contribute a
unique value towards the hashes instead of the same value of zero. This
has been seen to reduce link time with ICF by ~68% for objects compiled
with -fprofile-instr-generate.
In order to facilitate this use of a unique ID, the existing
inconsistent approach to the setting of the InputSection eqClass in ICF
has been changed so that there is a clear distinction between the
eqClass values of ICF eligible sections and those of the ineligible
sections that have a unique ID. This inconsistency could have caused
incorrect equivalence class equality in the past, although it appears
that no issues were encountered in actual use.
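A toy sketch of the scheme (not lld's actual representation): eligible
sections start in a shared class that later content hashing refines, while
each ineligible section gets a class of its own:
```
#include <cstdint>
#include <vector>

struct Section {
  bool eligibleForICF;
  uint32_t eqClass; // ICF equivalence class
};

void assignInitialClasses(std::vector<Section *> &sections) {
  uint32_t nextUniqueId = 1;
  for (Section *s : sections) {
    if (s->eligibleForICF)
      s->eqClass = 0;              // refined later from content hashes
    else
      s->eqClass = nextUniqueId++; // unique ID: a class of its own
  }
}
```
With this, a relocation to an ineligible section feeds a distinct value into
the referencing section's hash instead of a constant zero.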
Differential Revision: https://reviews.llvm.org/D88830
Fixes https://bugs.llvm.org/show_bug.cgi?id=46473
LLD wasn't previously specifying any specific alignment in the TLS table's Characteristics field, so the loader would just assume the default value (16 bytes). This works most of the time, except if you have thread locals that want specific higher alignments (e.g. 32 as in the bug) *even* if they specify an alignment on the thread local. This change updates LLD to take the max alignment from the .tls section.
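For illustration, the PE Characteristics field encodes alignment as one of
the IMAGE_SCN_ALIGN_* values; a sketch of deriving it from the maximum
alignment (hypothetical helper; constants per the PE/COFF spec):
```
#include <cstdint>

// IMAGE_SCN_ALIGN_1BYTES is 0x00100000, and each doubling of the alignment
// increments the nibble, up to IMAGE_SCN_ALIGN_8192BYTES (0x00E00000).
uint32_t alignmentCharacteristic(uint32_t maxAlign) {
  uint32_t k = 1;
  while ((1u << (k - 1)) < maxAlign && k < 14)
    ++k;
  return k << 20; // e.g. 32-byte alignment -> 0x00600000
}
```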
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D88637
Revert individual wip commits and will instead follow up with a
single commit with all the changes. Makes cherry-picking easier
and will contain all the right tags.
This reverts commit 32a4ad3b6c.
This reverts commit 7fe13af676.
This reverts commit 51fbc1bef6.
This reverts commit f80950a8bb.
This reverts commit 0778cad9f3.
This reverts commit 8b70d527d7.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46473
LLD wasn't previously specifying any specific alignment in the TLS table's Characteristics field, so the loader would just assume the default value (16 bytes). This works most of the time, except if you have thread locals that want specific higher alignments (e.g. 32 as in the bug) *even* if they specify an alignment on the thread local. This change updates LLD to take the max alignment from the .tls section.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D88637
This patch does the following:
1. Make InitTargetOptionsFromCodeGenFlags() accept a Triple as a
parameter, because some options' default values are triple-dependent.
2. DataSections is turned on by default on AIX for llc.
3. Test cases change accordingly because of the default behaviour change.
4. Clang Driver passes in -fdata-sections by default on AIX.
Reviewed By: MaskRay, DiggerLin
Differential Revision: https://reviews.llvm.org/D88737
This reverts 9b5b305023 and fixes the unwanted re-ordering when generating ThinLTO indexes.
The goal of this patch is to better balance thread utilization during ThinLTO in-process linking (in llvm-lto2 or in LLD). Before this patch, large modules would often be scheduled late during execution, taking a long time to complete, thus starving the thread pool.
We now sort modules in descending order, based on each module's bitcode size, so that larger modules are processed first. By doing so, smaller modules have a better chance to keep the thread pool active, and thus avoid starvation when the bitcode compilation is almost complete.
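A sketch of that scheduling idea (toy types, not the actual ThinLTO driver
code):
```
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct ModuleInfo {
  std::string name;
  size_t bitcodeSize;
};

// Queue the largest modules first so they don't become stragglers at the
// tail of the thread pool's work list.
void scheduleLargestFirst(std::vector<ModuleInfo> &modules) {
  std::stable_sort(modules.begin(), modules.end(),
                   [](const ModuleInfo &a, const ModuleInfo &b) {
                     return a.bitcodeSize > b.bitcodeSize;
                   });
}
```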
In our case (on dual Intel Xeon Gold 6140, Windows 10 version 2004, two-stage build), this saves 15 sec when linking `clang.exe` with LLD & -flto=thin, /opt:lldltojobs=all, no ThinLTO cache, -DLLVM_INTEGRATED_CRT_ALLOC=d:\git\rpmalloc.
Before patch: 100 sec
After patch: 85 sec
Inspired by the work done by David Callahan in D60495.
Differential Revision: https://reviews.llvm.org/D87966
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
Adds more testing in basic-assembly.s and a new test tables.s.
Adds support for YAML reading and writing of tables as well.
Differential Revision: https://reviews.llvm.org/D88815
Follow-up on https://reviews.llvm.org/D85062, which ignores
entire library objects when no symbols are used within them.
This shouldn't apply with `--whole-archive`, since it is
specified to treat archive members like direct object inputs.
Differential Revision: https://reviews.llvm.org/D89290
This allows `__wasilibc_populate_libpreopen` to be GC'd in more cases
where it isn't needed, including when linked from Rust's libstd.
Differential Revision: https://reviews.llvm.org/D85062
This flag works in a similar way to the ELF linker in that it
will resolve any defined symbols to their local definition within
a shared library or -pie executable.
This flag has no effect on static linking.
Differential Revision: https://reviews.llvm.org/D89152
In ELF/InputFiles.cpp, getBitcodeMachineKind() is limited to a uint8_t return
type. This works as long as EM_xxx is < 256, which is true for common
architectures, but not for some newly assigned or unofficial EM_* values.
The corresponding ELF field (e_machine) can hold uint16_t.
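To illustrate the bug class (the EM_* value below is made up):
```
#include <cstdint>

// ELF's e_machine is 16 bits wide; a uint8_t return type silently
// truncates any EM_* value >= 256.
constexpr uint16_t EM_HYPOTHETICAL = 0x1F51; // not a real EM_* value
constexpr uint8_t truncated = static_cast<uint8_t>(EM_HYPOTHETICAL);
static_assert(truncated != EM_HYPOTHETICAL, "high byte silently lost");
```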
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D89185
Similar to D66992.
In GNU ld, a -u specified symbol is a STB_DEFAULT undefined.
It cannot be changed to STB_WEAK by a later STB_WEAK undefined in a regular object file.
The behavior is consistent with our model because -u means "we need to fetch a lazy definition".
It should not be altered just because there is also a STB_WEAK undefined.
Note, our -u semantics are still different from GNU ld (https://github.com/ClangBuiltLinux/linux/issues/515):
we don't force the specified symbol to appear in .symtab. This is a deliberate decision.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D88945
If a version is specified both with --{major,minor}-subsystem-version and
with --subsystem <name>:<version>, the one specified last (that actually
sets a version) takes precedence in GNU ld, so do the same here.
Differential Revision: https://reviews.llvm.org/D88804
As they can be set independently after D88802, we can get rid of a bit
of extra code - simplifying the logic here before adding more
complication to it later.
Differential Revision: https://reviews.llvm.org/D88803
The MinGW driver has separate options for OS and subsystem version.
Having this available in lld-link allows the MinGW driver to both match
GNU ld better and simplifies the code for merging two (potentially
mismatching) arguments into one.
Differential Revision: https://reviews.llvm.org/D88802
Parse the components as decimal, instead of deducing the base from
the string. This avoids ambiguity if the second number contains leading
zeros, which previously were parsed as indicating an octal number.
MS link.exe doesn't support hexadecimal numbers in the version numbers,
neither in /version nor in /subsystem.
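A toy parser following that rule (not lld's actual helper):
```
#include <cstdint>
#include <cstdlib>
#include <string>

// Always parse base 10, so "3.08" is major 3, minor 8; hexadecimal and
// other bases are rejected rather than silently accepted.
bool parseVersion(const std::string &arg, uint32_t &major, uint32_t &minor) {
  const char *s = arg.c_str();
  char *end;
  major = std::strtoul(s, &end, 10);
  minor = (*end == '.') ? std::strtoul(end + 1, &end, 10) : 0;
  return end != s && *end == '\0';
}
```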
Differential Revision: https://reviews.llvm.org/D88801
This adds the following two new lines to /summary:
21351 Input OBJ files (expanded from all cmd-line inputs)
61 PDB type server dependencies
38 Precomp OBJ dependencies
1420669231 Input type records <<<<
78665073382 Input type records bytes <<<<
8801393 Merged TPI records
3177158 Merged IPI records
59194 Output PDB strings
71576766 Global symbol records
25416935 Module symbol records
2103431 Public symbol records
Differential Revision: https://reviews.llvm.org/D88703
Before this patch, /summary crashed with some .PCH.OBJ files, because tpiMap[srcIdx++] was reading at the wrong location. When the TpiSource depends on a .PCH.OBJ file, the types should be offset by the previously merged PCH.OBJ set of indices.
Differential Revision: https://reviews.llvm.org/D88678
Add Thread Local Storage support for the 34 bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.
The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd) R_PPC64_TLSGD
R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell, by looking only at the relocation `R_PPC64_TLSGD`, whether it is being used in a TOC instruction sequence or in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way LLD can tell, by looking at the alignment of the offset of `R_PPC64_TLSGD`, whether it is being used as part of a TOC or no-TOC sequence.
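On the linker side, the disambiguation then reduces to an alignment check;
schematically (toy predicate, not lld's actual code):
```
#include <cstdint>

// The TOC sequence keeps R_PPC64_TLSGD at a 4-byte-aligned offset; the
// no-TOC (PC-relative) sequence is emitted with offset + 1, so the low
// bits of r_offset identify the sequence.
bool isNoTocTlsSequence(uint64_t rOffset) { return (rOffset & 3) != 0; }
```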
Reviewed By: NeHuang, sfertile, MaskRay
Differential Revision: https://reviews.llvm.org/D87318
Add Thread Local Storage support for the 34 bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.
The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd) R_PPC64_TLSGD
R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell, by looking only at the relocation `R_PPC64_TLSGD`, whether it is being used in a TOC instruction sequence or in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way LLD can tell, by looking at the alignment of the offset of `R_PPC64_TLSGD`, whether it is being used as part of a TOC or no-TOC sequence.
Reviewed By: NeHuang, sfertile, MaskRay
Differential Revision: https://reviews.llvm.org/D87318
When adding an archive member with a problem, e.g. a new bitcode with an
old archiver, containing an unsupported attribute, or an ELF file with a
malformed symbol table, the archiver would throw away the error and
simply add the member to the archive without any symbol entries. This
meant that the resultant archive could be silently unusable when not
using --whole-archive, and result in unexpected undefined symbols.
This change fixes this issue by addressing two FIXMEs and only throwing
away not-an-object errors. However, this meant that some LLD tests which
didn't need symbol tables and were using invalid members deliberately to
test the linker's malformed input handling no longer worked, so this
patch also stops the archiver from looking for symbols in an object if
it doesn't require a symbol table, and updates the tests accordingly.
Differential Revision: https://reviews.llvm.org/D88288
Reviewed by: grimar, rupprecht, MaskRay
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
The routing rules are:
sym -> __wrap_sym
__real_sym -> sym
__wrap_sym and sym are routing targets, so they need to be exposed to the symbol
table. __real_sym is not, and can be eliminated if not used by a regular object.
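A toy model of those rules on symbol names (lld implements this on Symbol
objects rather than strings):
```
#include <set>
#include <string>

// Route a referenced symbol name according to --wrap:
//   sym        -> __wrap_sym   (for each wrapped sym)
//   __real_sym -> sym
std::string route(const std::string &ref,
                  const std::set<std::string> &wrapped) {
  const std::string real = "__real_";
  if (ref.compare(0, real.size(), real) == 0 &&
      wrapped.count(ref.substr(real.size())))
    return ref.substr(real.size()); // __real_sym -> sym
  if (wrapped.count(ref))
    return "__wrap_" + ref;         // sym -> __wrap_sym
  return ref;                       // everything else is untouched
}
```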
This adds support for new-style commands. In this mode, all exports
are considered command entrypoints, and the linker inserts calls to
`__wasm_call_ctors` and `__wasm_call_dtors` for all such entrypoints.
This enables support for:
- Command entrypoints taking arguments other than strings and return values
other than `int`.
- Multicall executables without requiring the use of string-based
command-line arguments.
This new behavior is disabled when the input has an explicit call to
`__wasm_call_ctors`, indicating code not expecting new-style command
support.
This change does mean that wasm-ld no longer supports DCE-ing the
`__wasm_call_ctors` function when there are no calls to it. If there are no
calls to it, and there are ctors present, we assume it's wasm-ld's job to
insert the calls. This seems ok though, because if there are ctors present,
the program is expecting them to be called. This change affects the
init-fini-gc.ll test.
In particular, allow explicit exporting of `__stack_pointer`, but
exclude this from `--export-all` to avoid requiring the mutable-globals
feature whenever `--export-all` is used.
This uncovered a bug in populateTargetFeatures regarding checking
if the mutable-globals feature is allowed.
See: https://github.com/WebAssembly/binaryen/issues/2934
Differential Revision: https://reviews.llvm.org/D88506
Stored Error objects have to be checked, even if they are success
values.
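For context, this is llvm::Error's checked-error discipline; a minimal
sketch:
```
#include "llvm/Support/Error.h"

llvm::Error mightFail();

void caller() {
  // Testing the Error marks it checked, even when it holds success;
  // destroying an unchecked Error aborts in assertion-enabled builds.
  if (llvm::Error e = mightFail())
    llvm::consumeError(std::move(e)); // failure path: handle or discard
}
```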
This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
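A sketch of that cell update (toy code; the real table also does linear
probing and carries the isItem bit):
```
#include <atomic>
#include <cstdint>

// A cell packs <sourceIndex:32, typeIndex:32>; zero means empty. Comparing
// cells as 64-bit numbers makes the earlier input win deterministically.
using GHashCell = uint64_t;

inline GHashCell makeCell(uint32_t srcIdx, uint32_t typeIdx) {
  return (uint64_t(srcIdx) << 32) | typeIdx;
}

// Store newCell if the slot is empty or holds a lower-priority cell.
bool updateCell(std::atomic<GHashCell> &slot, GHashCell newCell) {
  GHashCell cur = slot.load(std::memory_order_relaxed);
  while (cur == 0 || newCell < cur) {
    // On failure, cur is refreshed and the loop re-evaluates priority.
    if (slot.compare_exchange_weak(cur, newCell, std::memory_order_relaxed))
      return true;
  }
  return false;
}
```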
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
They operate like Defined symbols but with no associated InputSection.
Note that `ld64` seems to treat the weak definition flag like a no-op for
absolute symbols, so I have replicated that behavior.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87909
Apparently this is used in real programs. I've handled this by reusing
the logic we already have for branch (function call) relocations.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87852
Not 100% sure but it appears that bundles are almost identical to
dylibs, aside from the fact that they do not contain `LC_ID_DYLIB`. ld64's code
seems to treat bundles and dylibs identically in most places.
Supporting bundles allows us to run e.g. XCTests, as all test suites are
compiled into bundles which get dynamically loaded by the `xctest` test runner.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87856
* Implement rebase opcodes. Rebase opcodes tell dyld where absolute
addresses have been encoded in the binary. If the binary is not loaded
at its preferred address, dyld has to rebase these addresses by adding
an offset to them.
* Support `-pie` and use it to test rebase opcodes.
This is necessary for absolute address references in dylibs, bundles etc
to work.
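Conceptually, each rebase target then gets a single fixup at load time; a
toy model of what dyld does:
```
#include <cstdint>

// The rebase opcodes identify pointer slots; dyld adds the slide (actual
// load address minus preferred address) to the absolute pointer in each.
void rebaseSlot(uint8_t *imageBase, uint64_t segmentOffset, int64_t slide) {
  auto *slot = reinterpret_cast<uint64_t *>(imageBase + segmentOffset);
  *slot = uint64_t(int64_t(*slot) + slide);
}
```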
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D87199
The R2 save stub will now support offsets up to 64 bits.
There are three cases that will be used.
1) The offset fits in 26 bits.
```
b <26 bit offset>
```
2) The offset does not fit in 26 bits but fits in 34 bits.
```
paddi r12, 0, <34 bit offset>, 1
mtctr r12
bctr
```
3) The offset does not fit in 34 bits. Since this is an R2 save stub we can use
the TOC in R2. We are not loading the offset but the actual address we want to
branch to.
```
addis r12, r2, <address in TOC hi>
ld r12, <address in TOC lo>(r12)
mtctr r12
bctr
```
In case 1) the stub is only 8 bytes, while in cases 2) and 3) the stub will be
20 bytes.
Reviewed By: MaskRay, sfertile, NeHuang
Differential Revision: https://reviews.llvm.org/D87916