Also sync help texts for the option between the ELF and COFF ports.
Decisions:
- Do this even if /lldignoreenv is passed. /reproduce: does not affect
the main output, and this makes the env var more convenient to use.
(On the other hand, it's now possible to set this env var and forget
about it, and all future builds in the same shell will be much slower.
That's true for ld.lld, but posix shells have an easy way to set an
env var for a single command; in cmd.exe this is not possible without
contortions. Then again, lld-link runs in posix shells too.)
Original patch rebased across D68378 and D68381.
Differential Revision: https://reviews.llvm.org/D67707
Without this extra flag we can't distinguish between stub functions and
functions that happen to have address 0 (relative to __table_base).
Adding this flag bit to the base symbol class actually avoids growing the
SymbolUnion struct, which would not be true if we added it to the
FunctionSymbol subclass (due to bit-packing).
The previous approach of setting its table index to zero worked for
normal static relocations but not for `-fPIC` code.
See https://github.com/emscripten-core/emscripten/issues/12819
Differential Revision: https://reviews.llvm.org/D92038
In https://reviews.llvm.org/D89072 I added static const data members
to the debug subsection for globals. It skipped emitting an S_CONSTANT if it
didn't have a value, which meant the subsection could be empty.
This patch fixes the empty subsection issue.
Differential Revision: https://reviews.llvm.org/D92049
Local values are constants or addresses that can't be folded into
the instruction that uses them. FastISel materializes these in a
"local value" area that always dominates the current insertion
point, to try to avoid materializing these values more than once
(per block).
https://reviews.llvm.org/D43093 added code to sink these local
value instructions to their first use, which has two beneficial
effects. One, it is likely to avoid some unnecessary spills and
reloads; two, it allows us to attach the debug location of the
user to the local value instruction. The latter effect can
improve the debugging experience for debuggers with a "set next
statement" feature, such as the Visual Studio debugger and PS4
debugger, because instructions to set up constants for a given
statement will be associated with the appropriate source line.
There are also some constants (primarily addresses) that could be
produced by no-op casts or GEP instructions; the main difference
from "local value" instructions is that these are values from
separate IR instructions, and therefore could have multiple users
across multiple basic blocks. D43093 avoided sinking these, even
though they were emitted to the same "local value" area as the
other instructions. The patch comment for D43093 states:
Local values may also be used by no-op casts, which adds the
register to the RegFixups table. Without reversing the RegFixups
map direction, we don't have enough information to sink these
instructions.
This patch undoes most of D43093, and instead flushes the local
value map after(*) every IR instruction, using that instruction's
debug location. This avoids the sometimes-incorrect locations used
previously, and emits instructions in a more natural order.
This does mean materialized values are not re-used across IR
instruction boundaries; however, only about 5% of those values
were reused in an experimental self-build of clang.
(*) Actually, just prior to the next instruction. It seems like
it would be cleaner the other way, but I was having trouble
getting that to work.
Differential Revision: https://reviews.llvm.org/D91734
With this change, `TargetInfo::adjustRelaxExpr` is only related to TLS
relaxations and a subsequent clean-up can delete the `data` parameter.
Differential Revision: https://reviews.llvm.org/D92079
This commit factors out a WasmTableType definition from WasmTable, as is
the case for WasmGlobal and other data types. Also add support for
extracting the SymbolName for a table from the linking section's symbol
table.
Differential Revision: https://reviews.llvm.org/D91849
Enables overriding earlier --lto-whole-program-visibility.
Variant of D91583 while discussing alternate ways to identify and
handle the --export-dynamic case.
Differential Revision: https://reviews.llvm.org/D92060
Also use "unknown flag 'flag'" instead of "unknown flag: flag" for
consistency with the other ports.
Differential Revision: https://reviews.llvm.org/D91970
This patch:
- adds an ld64.lld.darwinnew symlink for lld, to go with f2710d4b57,
so that `clang -fuse-ld=lld.darwinnew` can be used to test new
Mach-O lld while it's in bring-up. (The expectation is that we'll
remove this again once new Mach-O lld is the default and only Mach-O
lld.)
- lets the clang driver know if the linker is lld (currently
only triggered if `-fuse-ld=lld` or `-fuse-ld=lld.darwinnew` is
passed). Currently only used for the next point, but could be used
to implement other features that need close coordination between
compiler and linker, e.g. having a diag for calling `clang++` instead
of `clang` when link errors are caused by a missing C++ stdlib.
- lets the clang driver pass `-demangle` to Mach-O lld (both old and
new), in addition to ld64
- implements -demangle for new Mach-O lld
- changes demangleItanium() to accept _Z, __Z, ___Z, ____Z prefixes
(and updates one test added in D68014). Mach-O has an extra
underscore for symbols, and the three (or, on Mach-O, four)
underscores are used for block names.
Differential Revision: https://reviews.llvm.org/D91884
llvm-symbolizer used to use the DIA SDK for symbolization on
Windows; this patch switches to using native symbolization, which was
implemented recently.
Users can still make the symbolizer use DIA by adding the `-dia` flag
to the LLVM_SYMBOLIZER_OPTS environment variable.
Differential Revision: https://reviews.llvm.org/D91814
This allows reusing the RelocationResolver from code
that doesn't want to deal with the `RelocationRef` class.
I am going to use it in llvm-readobj. See the description
of D91530 for more details.
Differential revision: https://reviews.llvm.org/D91533
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
This is a more fully featured version of ``--allow-undefined``.
The semantics of the different methods are as follows:
report-all:
Report all unresolved symbols. This is the default. Normally the
linker will generate an error message for each reported unresolved
symbol but the option ``--warn-unresolved-symbols`` can change this
to a warning.
ignore-all:
Resolve all undefined symbols to zero. For data and function
addresses this is trivial. For direct function calls, the linker
will generate a trapping stub function in place of the undefined
function (illustrated in the sketch after this list).
import-functions:
Generate WebAssembly imports for any undefined functions. Undefined
data symbols are resolved to zero as in `ignore-all`. This
corresponds to the legacy ``--allow-undefined`` flag.
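To illustrate the `ignore-all` semantics, a minimal sketch (hypothetical
input; the comments describe the resolution behavior described above):
```
// Linked with: wasm-ld --unresolved-symbols=ignore-all ...
extern "C" int missing_func(); // undefined: direct calls reach a
                               // linker-generated stub that traps
extern "C" int missing_data;   // undefined: its address resolves to zero

int main() {
  return missing_func(); // traps at runtime instead of failing to link
}
```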
The plan is to follow up with a new mode called `import-dynamic` which
allows statically linked binaries to refer to both data and
function symbols from the embedder.
Differential Revision: https://reviews.llvm.org/D79248
As mentioned in https://reviews.llvm.org/D67479#1667256 ,
* `--[no-]allow-shlib-undefined` controls the diagnostic for an unresolved symbol in a shared object
* `-z defs/-z undefs` control the diagnostic for an unresolved symbol in a regular object file
* `--unresolved-symbols=` controls both bits.
In addition, make --warn-unresolved-symbols affect --no-allow-shlib-undefined.
This patch makes the behavior match GNU ld.
Reviewed By: psmith
Differential Revision: https://reviews.llvm.org/D91510
This adds `--[no-]color-diagnostics[=auto,never,always]` to
the MachO port and harmonizes the flag in the other ports:
- Consistently use MetaVarName
- Consistently document the non-eq version as alias of the eq version
- Use B<> in the ports that have it (no-op, shorter)
- Fix oversight in COFF port that made the --no flag have the wrong
prefix
Differential Revision: https://reviews.llvm.org/D91640
`try ... catch` in an inline function produces `.gcc_except_table.*` in a COMDAT
group with GCC or newer Clang (since D83655). For --gc-sections, currently we
scan `.eh_frame` pieces and mark liveness of such a `.gcc_except_table.*` and
then the associated `.text.*` (if a member in a section group is retained, the
others should be retained as well).
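For reference, a minimal example of the kind of input involved (illustrative
only; an inline function is COMDAT, and with function sections its code gets
a named section in the same group):
```
// clang++ -ffunction-sections -c example.cpp
// Newer Clang (since D83655) places .text._Z1fv and
// .gcc_except_table._Z1fv together in one COMDAT group.
inline int f() {
  try {
    throw 0;
  } catch (int e) {
    return e;
  }
}
int (*use)() = f; // force emission of the inline function
```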
Essentially all `.text.*` and `.gcc_except_table.*` compiled from inline
functions with `try ... catch` cannot be discarded by the imprecise
--gc-sections. Compared with the state before D83655, the output
`.gcc_except_table` is smaller (non-prevailing copies in COMDAT groups can now
be discarded) but `.text` may be larger, i.e. size regression.
This patch teaches the .eh_frame piece scanning code to not mark
`.gcc_except_table` in a section group, thus allowing unused `.text.*` and
`.gcc_except_table.*` in a section group to be discarded.
Note, non-group `.gcc_except_table` still cannot be discarded; that is the status quo.
Reviewed By: grimar, echristo
Differential Revision: https://reviews.llvm.org/D91579
`-flavor` is difficult to use through the clang driver since it
must be the first argument.
clang's `-fuse-ld=foo` looks for `ld64.foo` when targeting darwin,
so it's easiest if the new Mach-O port answers to some `ld64.foo` name.
Let's go with `ld64.lld.darwinnew`, so that `clang -fuse-ld=lld.darwinnew`
does the right thing (assuming a symlink with the name
`ld64.lld.darwinnew` exists in the right place).
This is temporary until darwinnew replaces ld64.lld, and it only
exists to make testing the new lld port easier.
These relocations represent offsets from the __tls_base symbol.
Previously we were just using normal MEMORY_ADDR relocations and relying
on the linker to select a segment offset rather than an absolute value in
Symbol::getVirtualAddress(). Using an explicit relocation type allows
us to clearly distinguish absolute from relative relocations based
on the relocation information alone.
One place this is useful is being able to reject absolute relocations in
the PIC case, but still accept TLS relocations.
Differential Revision: https://reviews.llvm.org/D91276
Fixes PR48071
* The Rust compiler produces SHF_ALLOC `.debug_gdb_scripts` (which normally does not have the flag)
* `.debug_gdb_scripts` sections are removed from `inputSections` due to --strip-debug/--strip-all
* When processing --gc-sections, pieces of a SHF_MERGE section can be marked live separately
`=>` segfault when marking liveness of a `.debug_gdb_scripts` which is not split into pieces (because it is not in `inputSections`)
This patch circumvents the problem by not treating SHF_ALLOC ".debug*" sections
as debug sections, so that --strip-debug does not strip them (which is still
useful on its own).
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D91291
Input sections `.ctors/.ctors.N` may go to either the output section `.init_array` or the output section `.ctors`:
* output `.ctors`: currently we sort them by name. This patch changes to sort by priority from high to low. If N in `.ctors.N` is in the form of %05u, there is no semantic difference. Actually GCC and Clang do use %05u. (In the test `ctors_dtors_priority.s` and Gold's test `gold/testsuite/script_test_14.s`, we can see %03u, but they are not really produced by compilers.)
* output `.init_array`: users can provide an input section description `SORT_BY_INIT_PRIORITY(.init_array.* .ctors.*)` to mix `.init_array.*` and `.ctors.*`. This can make .init_array.N and .ctors.(65535-N) interchangeable.
With this change, users can mix `.ctors.N` and `.init_array.N` in `.init_array` (PR44698 and PR48096) with linker scripts. As an example:
```
SECTIONS {
.init_array : {
*(SORT_BY_INIT_PRIORITY(.init_array.* .ctors.*))
*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin?.o *crtend.o *crtend?.o ) .ctors)
}
} INSERT AFTER .fini_array;
SECTIONS {
.fini_array : {
*(SORT_BY_INIT_PRIORITY(.fini_array.* .dtors.*))
*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin?.o *crtend.o *crtend?.o ) .dtors)
}
} INSERT BEFORE .init_array;
```
Reviewed By: psmith
Differential Revision: https://reviews.llvm.org/D91187
According to
https://sourceware.org/binutils/docs/ld/Input-Section-Basics.html#Input-Section-Basics
for `*(.a .b)`, the order should match the input order:
* for `ld 1.o 2.o`, sections from 1.o precede sections from 2.o
* within a file, `.a` and `.b` appear in the section header table order
This patch implements the behavior. The interaction with `SORT*` and --sort-section is:
Matched sections are ordered by radix sort with the keys being `(SORT*, --sort-section, input order)`,
where `SORT*` (if present) is most significant.
> Note, multiple `SORT*` within an input section description has undocumented and
> confusing behaviors in GNU ld:
> https://sourceware.org/pipermail/binutils/2020-November/114083.html
> Therefore multiple `SORT*` is not the focus for this patch but
> this patch still strives to have an explainable behavior.
As an example, we partition `SORT(a.*) b.* c.* SORT(d.*)` into
`SORT(a.*) | b.* c.* | SORT(d.*)` and perform sorting within groups. Sections
matched by patterns between two `SORT*` are sorted by input order. If
--sort-alignment is given, they are sorted by --sort-alignment, breaking ties by
input order.
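In C++ terms, this radix sort is equivalent to a sequence of stable sorts from
the least significant key to the most significant one (a sketch with assumed
per-section keys, not the actual lld code):
```
#include <algorithm>
#include <vector>

struct Sec {
  int sortKey;    // from SORT*; ties within an unsorted group
  int cmdKey;     // from --sort-section; ties if the option is unset
  int inputOrder; // position in the input
};

void orderSections(std::vector<Sec> &v) {
  // Each later stable sort preserves the relative order established by
  // the earlier, less significant ones.
  std::stable_sort(v.begin(), v.end(), [](const Sec &a, const Sec &b) {
    return a.inputOrder < b.inputOrder;
  });
  std::stable_sort(v.begin(), v.end(), [](const Sec &a, const Sec &b) {
    return a.cmdKey < b.cmdKey;
  });
  std::stable_sort(v.begin(), v.end(), [](const Sec &a, const Sec &b) {
    return a.sortKey < b.sortKey;
  });
}
```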
This patch also allows a section to be matched by multiple patterns; previously,
duplicated sections could occupy more space in the output and contain erroneous zero bytes.
The patch is in preparation for support for
`*(SORT_BY_INIT_PRIORITY(.init_array.* .ctors.*)) *(.init_array .ctors)`,
which will allow LLD to mix .ctors*/.init_array* like GNU ld (gold's
--ctors-in-init-array); see PR44698 and PR48096.
Reviewed By: grimar, psmith
Differential Revision: https://reviews.llvm.org/D91127
The second `SORT` in `*(SORT(...) SORT(...))` is incorrectly parsed as a file pattern.
Fix the bug by stopping at `SORT*` in `readInputSectionsList`.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D91180
This is a follow-up for D70378 (Cover usage of LLD as a library).
While debugging an intermittent failure on a bot, I recalled this scenario which
causes the issue:
1. When executing lld/test/ELF/invalid/symtab-sh-info.s L45, we reach
lld::elf::ObjFile::ObjFile(), which goes straight into its base ELFFileBase(),
then ELFFileBase::init().
2. At that point fatal() is thrown in lld/ELF/InputFiles.cpp L381, leaving a
half-initialized ObjFile instance.
3. We then end up in lld::exitLld() and, since we are running with LLD_IN_TEST, we
happily restore the control flow to CrashRecoveryContext::RunSafely(), then back
in lld::safeLldMain().
4. Before this patch, we called errorHandler().reset() just after, and this
attempted to reset the associated SpecificAlloc<ObjFile<ELF64LE>>. That tried
to free the half-initialized ObjFile instance, and more precisely its
ObjFile::dwarf member.
Sometimes that worked, sometimes it failed and was caught by the
CrashRecoveryContext. This scenario was the reason we called
errorHandler().reset() through a CrashRecoveryContext.
But in some rare cases, the above repro somehow corrupted the heap, creating a
stack overflow. When the CrashRecoveryContext's filter (that is,
__except (ExceptionFilter(GetExceptionInformation()))) tried to handle the
exception, it crashed again since the stack was exhausted -- and that took the
whole application down. That is the issue seen on the bot. Locally it happens
about 1 time out of 15.
Now this situation can happen anywhere in LLD. Since catching stack overflows is
not a reliable scenario ATM when using CrashRecoveryContext, we're now
preventing further re-entrance when such failures occur, by signaling
lld::SafeReturn::canRunAgain=false. When running with LLD_IN_TEST=2 (or above),
only one iteration will be executed, instead of two.
Differential Revision: https://reviews.llvm.org/D88348
This broke both Firefox and Chromium (PR47905) due to what seems like dllimport
functions not being handled correctly.
> This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
> Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
>
> Reviewed By: rnk
>
> Differential Revision: https://reviews.llvm.org/D87544
This reverts commit cfd8481da1.
Previously we limited the use of atomics and TLS to programs
linked with `--shared-memory`.
However, as of https://reviews.llvm.org/D79530 we now allow
programs that use atomic to be linked without `--shared-memory`.
For this to be useful we also want to allow TLS usage in such
programs. In this case, since we know we are single-threaded,
we simply include the TLS data as a regular active segment
and create an immutable `__tls_base` global that points to the
start of this segment.
Fixes: https://github.com/emscripten-core/emscripten/issues/12489
Differential Revision: https://reviews.llvm.org/D91115
Just enough to consume some bitcode files and link them. There's more
to be done around the symbol resolution API and the LTO config, but I don't yet
understand what all the various LTO settings do...
Reviewed By: #lld-macho, compnerd, smeenai, MaskRay
Differential Revision: https://reviews.llvm.org/D90663
We should have maxprot == initprot for all non-i386 architectures, which
is what ld64 does.
Reviewed By: #lld-macho, compnerd
Differential Revision: https://reviews.llvm.org/D89420
Apple devtools use this to locate the dSYM files for a given
binary.
The UUID is computed based on an MD5 hash of the binary's contents. In order to
hash the contents, we must first write them, but LC_UUID itself must be part of
the written contents in order for all the offsets to be calculated correctly.
We resolve this circular paradox by first writing an LC_UUID with an all-zero
UUID, then updating the UUID with its real value later.
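A minimal sketch of that two-pass scheme (helper names are hypothetical, not
the actual lld-macho code):
```
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

std::array<uint8_t, 16> md5(const std::vector<uint8_t> &buf); // stand-in hash
void writeLoadCommandsAndSections(std::vector<uint8_t> &buf); // stand-in writer

void writeWithUuid(std::vector<uint8_t> &buf, size_t uuidOffset) {
  std::fill_n(buf.begin() + uuidOffset, 16, 0); // placeholder LC_UUID payload
  writeLoadCommandsAndSections(buf); // offsets already account for LC_UUID
  std::array<uint8_t, 16> digest = md5(buf); // hash the zero-UUID contents
  std::copy(digest.begin(), digest.end(), buf.begin() + uuidOffset);
}
```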
I'm not sure there's a good way to test that the value of the UUID is
"as expected", so I've just checked that it's present.
Reviewed By: #lld-macho, compnerd, smeenai
Differential Revision: https://reviews.llvm.org/D89418
Stub dylibs differ from "real" dylibs in that they lack any content in
their sections. What they do have are export tries and symbol tables,
which means we can still link against them. I am unclear how to
properly create these stub dylibs; Xcode 11.3's `lipo` is able to create
stub dylibs, but those lack LC_ID_DYLIB load commands and are considered
invalid by most tooling. Newer versions of `lipo` aren't able to create
stub dylibs at all. However, recent SDKs in Xcode still come with valid
stub dylibs, so it still seems worthwhile to support them. The YAML in
this diff's test was generated by taking a non-stub dylib and editing
the appropriate fields.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D89012
This covers a few cases that aren't otherwise tested:
1) Non-ASCII symbol names are ordered.
2) Comments, whitespace and blank lines are trimmed.
3) Missing order files result in an error.
Reviewed by: MaskRay, grimar
Differential Revision: https://reviews.llvm.org/D90933
I noticed when running a large link with the --time-trace option that
there were several areas which were missing any specific time trace
categories (aside from the generic link/ExecuteLinker categories). This
patch adds new categories to fill most of the "gaps", or to provide more
detail than was previously provided.
Reviewed by: MaskRay, grimar, russell.gallop
Differential Revision: https://reviews.llvm.org/D90686
On LP64/Windows platforms, this decreases sizeof(InputSection) from 208 (larger
on Windows) to 184.
For a large executable (7.6GiB, inputSections.size()=5105122,
make<InputSection> called 4835760 times), this decreases cgroup
memory.max_usage_in_bytes by 0.6%
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D91018
Add a calling convention called amdgpu_gfx for real function calls
within graphics shaders. For the moment, this uses the same calling
convention as other calls in amdgpu, with registers excluded for return
address, stack pointer and stack buffer descriptor.
Differential Revision: https://reviews.llvm.org/D88540
This is more or less a port of rL329598 (D45275) to the COFF linker.
Since there were already LTO-related settings under -opt:, I added
them there instead of new flags.
Differential Revision: https://reviews.llvm.org/D90624
Make it possible for lld users to provide a custom script that would help to
find missing libraries. A possible scenario could be:
% clang /tmp/a.c -fuse-ld=lld -loauth -Wl,--error-handling-script=/tmp/addLibrary.py
unable to find library -loauth
looking for relevant packages to provide that library
liboauth-0.9.7-4.el7.i686
liboauth-devel-0.9.7-4.el7.i686
liboauth-0.9.7-4.el7.x86_64
liboauth-devel-0.9.7-4.el7.x86_64
pix-1.6.1-3.el7.x86_64
Here addLibrary would be called with the missing library name as its first
argument (in this case `addLibrary.py oauth`).
Differential Revision: https://reviews.llvm.org/D87758
Match MSVC linker output - align all debug directory payloads on four bytes,
while removing the alignment of the debug directory itself. This would have
the same effect on CETCOMPAT support as D89919.
Chromium bug: https://crbug.com/1136664
Differential Revision: https://reviews.llvm.org/D89921
In the presence of a gap, the st_value field of a STT_SECTION symbol is the
address of the first input section (incorrect if there is a gap). Set it to the
output section address instead.
In -r mode, this bug can cause an incorrect non-zero st_value of a STT_SECTION
symbol (while output sections have zero addresses, input sections may have
non-zero outSecOff). The non-zero st_value can cause the final link to have
incorrect relocation computation (both GNU ld and LLD add st_value of the
STT_SECTION symbol to the output section address).
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D90520
This test was checking behaviour that only exists in the debug
configuration so will fail in release builds.
Perhaps there is a way to keep this test around and only run
it in debug builds, but for now I'm removing it to fix the
release builders.
Differential Revision: https://reviews.llvm.org/D90542
I had envisioned the ghash step as a big up front step, but as currently
written, the timers are nested, and we are notionally adding types from
objects, so we might as well arrange the timers this way.
This preprocessor define was meant to be used to conditionally include VCSVersion.inc. However, the define was always set, and it was the content of the header that was conditionally generated. Therefore HAVE_VCS_VERSION_INC should be cleaned up.
Reviewed By: gribozavr2, MaskRay
Differential Revision: https://reviews.llvm.org/D84623
This field represents the amount of static data needed by
a dynamic library or executable. It should not include things
like heap or stack areas, which in the case of `-pie` are
not determined until runtime (e.g. __stack_pointer is imported).
Differential Revision: https://reviews.llvm.org/D90261
While MC did not produce R_X86_64_GOTPCRELX for test/binop instructions
(movl/adcl/addl/andl/...) before the previous commit, this code path has been
exercised by -fno-integrated-as for GNU as since 2016: -no-pie relaxing
may incorrectly access loc[-3] and produce a corrupted instruction.
Simply handle test/binop R_X86_64_GOTPCRELX like R_X86_64_GOTPCREL.
This partially reverts D85994.
In glibc, elf/dl-sym.c calls the raw `__tls_get_addr` by specifying the
tls_index parameter. Such a call does not have a pairing R_PPC64_TLSGD/R_PPC64_TLSLD.
This is legitimate. Since we cannot distinguish the benign case from cases due
to toolchain issues, we have to be permissive.
Acked by Stefan Pintilie
Add support to LLD for PC Relative Thread Local Storage for Local Dynamic.
This patch adds support for two relocations: R_PPC64_GOT_TLSLD_PCREL34 and
R_PPC64_DTPREL34.
The Local Dynamic code is:
```
pla r3, x@got@tlsld@pcrel R_PPC64_GOT_TLSLD_PCREL34
bl __tls_get_addr@notoc(x@tlsld) R_PPC64_TLSLD
R_PPC64_REL24_NOTOC
...
paddi r9, r3, x@dtprel R_PPC64_DTPREL34
```
After relaxation to Local Exec:
```
paddi r3, r13, 0x1000
nop
...
paddi r9, r3, x@dtprel R_PPC64_DTPREL34
```
Reviewed By: NeHuang, sfertile
Differential Revision: https://reviews.llvm.org/D87504
These are all inspired by existing test coverage we have in an internal
testsuite.
Reviewed by: grimar, MaskRay
Differential Revision: https://reviews.llvm.org/D89775
For a diagnostic `A refers to B` where B refers to a bitcode file, if the
symbol gets optimized out, the user may see `A refers to <internal>`; if the
symbol is retained, the user may see `A refers to lto.tmp`.
Save the reference InputFile * in the DenseMap so that the original filename is
available in reportBackrefs().
The ELF spec says
> If the sh_flags field for this section header includes the attribute SHF_INFO_LINK, then this member represents a section header table index.
Set SHF_INFO_LINK so that binary manipulation tools know that sh_info is
a section header table index instead of something else (e.g. the number of
local symbols in the case of SHT_SYMTAB/SHT_DYNSYM).
We have already added SHF_INFO_LINK for --emit-relocs retained SHT_REL[A].
For example, we can teach llvm-objcopy to preserve the section index of the sh_info referenced section if
SHF_INFO_LINK is set. (GNU objcopy recognizes .rel[a].plt and updates
sh_info even if SHF_INFO_LINK is not set).
Reviewed By: grimar, psmith
Differential Revision: https://reviews.llvm.org/D89828
The combination has not been tested before. In the case of ICF,
`e.section->getVA(0)` equals the start address of the output section.
This can cause incorrect overlapping with the actual function at the
start of the output section and potentially trigger a GDB internal error
in `dw2_find_pc_sect_compunit_symtab` (presumably because:
if a short address range incorrectly starts at the start address of the
output section, GDB may pick it instead of the correct longer address
range. When mapping an address within the long address range but
out of the scope of the short address range, the routine may find
nothing - while the code asserts that it can find something).
Note that in the case of ICF there may be duplicate address range entries,
but GDB appears to be fine with them.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D89751
This reverts commit 1b589f4d4d and relands D89463 with a
fix: update `MappingTraits<FileFilter>::validate()` in ClangTidyOptions.cpp to
match the new signature (change the return type to "std::string" from "StringRef").
Original commit message:
This:
Changes the return type of MappingTraits<T>::validate to std::string
instead of StringRef. This allows creating more complex error messages.
It introduces std::vector<std::pair<StringRef, bool>> getEntries():
a new virtual method of Section, which is the base class for all sections.
It returns the names of special section-specific keys (e.g. "Entries") and flags
that say whether they exist in the YAML.
The code in validate() uses this list of entry descriptions to generalize validation.
This approach was discussed in the D89039 thread.
Differential revision: https://reviews.llvm.org/D89463
This:
1) Changes the return type of `MappingTraits<T>::validate` to `std::string`
instead of `StringRef`. This allows creating more complex error messages.
2) It introduces std::vector<std::pair<StringRef, bool>> getEntries():
a new virtual method of Section, which is the base class for all sections.
It returns the names of special section-specific keys (e.g. "Entries") and flags
that say whether they exist in the YAML. The code in validate() uses this list of
entry descriptions to generalize validation.
This approach was discussed in the D89039 thread.
Differential revision: https://reviews.llvm.org/D89463
Add a simple forwarding option in the MinGW frontend, and implement
the private -wrap option in the COFF linker.
The feature in lld-link isn't gated by the -lldmingw option, but
the option is left as a private, undocumented option primarily
used by the MinGW driver.
The implementation is significantly based on the support for --wrap
in the ELF linker, but many small nuance details are different
between the ELF and COFF linkers, ending up with more than a few
implementation differences.
This fixes https://bugs.llvm.org/show_bug.cgi?id=47384.
Differential Revision: https://reviews.llvm.org/D89004
Reapplied with the bitfield member canInline fixed so it doesn't break
builds targeting windows.
This reverts commit a012c704b5.
Breaks Windows builds.
C:\src\llvm-mint\lld\COFF\Symbols.cpp(26,1): error: static_assert failed due to requirement 'sizeof(lld::coff::SymbolUnion) <= 48' "symbols should be optimized for memory usage"
static_assert(sizeof(SymbolUnion) <= 48,
Add a simple forwarding option in the MinGW frontend, and implement
the private -wrap option in the COFF linker.
The feature in lld-link isn't gated by the -lldmingw option, but
the option is left as a private, undocumented option primarily
used by the MinGW driver.
The implementation is significantly based on the support for --wrap
in the ELF linker, but many small nuance details are different
between the ELF and COFF linkers, ending up with more than a few
implementation differences.
This fixes https://bugs.llvm.org/show_bug.cgi?id=47384.
Differential Revision: https://reviews.llvm.org/D89004
This should fix cases where e.g. auto import is enabled without
mingw mode as a whole being enabled.
Differential Revision: https://reviews.llvm.org/D89006
ICF was not able to merge equivalent sections because of relocations to
sections ineligible for ICF that use alternative symbols, e.g. symbol
aliases or section relative relocations.
Merging in this scenario has been enabled by giving the sections that
are ineligible for ICF a unique ID, i.e. an equivalence class of their
own. This approach also provides another benefit as it improves the
hashing that is used to perform the initial equivalence grouping for
ICF. This is because the ICF ineligible sections can now contribute a
unique value towards the hashes instead of the same value of zero. This
has been seen to reduce link time with ICF by ~68% for objects compiled
with -fprofile-instr-generate.
In order to facilitate this use of a unique ID, the existing
inconsistent approach to the setting of the InputSection eqClass in ICF
has been changed so that there is a clear distinction between the
eqClass values of ICF eligible sections and those of the ineligible
sections that have a unique ID. This inconsistency could have caused
incorrect equivalence class equality in the past, although it appears
that no issues were encountered in actual use.
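A simplified sketch of the scheme (the class encoding here is illustrative,
not the actual lld code):
```
#include <cstdint>
#include <vector>

struct Section {
  bool icfEligible;
  uint32_t eqClass;              // class used for the initial grouping
  uint64_t hashContents() const; // stand-in content hash
};

void assignInitialClasses(std::vector<Section *> &sections) {
  uint32_t nextUniqueId = 0;
  for (Section *s : sections) {
    if (s->icfEligible)
      s->eqClass = uint32_t(s->hashContents()) | 0x80000000; // hash-derived
    else
      s->eqClass = nextUniqueId++; // a singleton class; the top bit is clear,
                                   // so it never collides with a hash class
  }
}
```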
Differential Revision: https://reviews.llvm.org/D88830
Fixes https://bugs.llvm.org/show_bug.cgi?id=46473
LLD wasn't previously specifying any specific alignment in the TLS table's Characteristics field, so the loader would just assume the default value (16 bytes). This works most of the time, except if you have thread locals that want higher alignment (e.g. 32, as in the bug), *even* if they specify an alignment on the thread local. This change updates LLD to take the max alignment from the .tls section.
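For reference, a sketch of the encoding involved (the IMAGE_SCN_ALIGN_* values
store log2(alignment) + 1 in bits 20-23 of Characteristics; the helper is
illustrative, not the lld-link code):
```
#include <cstdint>

// IMAGE_SCN_ALIGN_1BYTES is 0x00100000, and each doubling of the
// alignment increments the 4-bit field in bits 20-23 by one.
uint32_t alignCharacteristics(uint32_t align) {
  uint32_t log2 = 0;
  while ((1u << log2) < align)
    ++log2;
  return (log2 + 1) << 20;
}
```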
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D88637
Revert individual wip commits and will instead follow up with a
single commit with all the changes. Makes cherry-picking easier
and will contain all the right tags.
This reverts commit 32a4ad3b6c.
This reverts commit 7fe13af676.
This reverts commit 51fbc1bef6.
This reverts commit f80950a8bb.
This reverts commit 0778cad9f3.
This reverts commit 8b70d527d7.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46473
LLD wasn't previously specifying any specific alignment in the TLS table's Characteristics field, so the loader would just assume the default value (16 bytes). This works most of the time, except if you have thread locals that want higher alignment (e.g. 32, as in the bug), *even* if they specify an alignment on the thread local. This change updates LLD to take the max alignment from the .tls section.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D88637
Summary:
This patch does the following:
1. Make InitTargetOptionsFromCodeGenFlags() accept a Triple as a
parameter, because some options' default value is triple dependent.
2. DataSections is turned on by default on AIX for llc.
3. Test cases change accordingly because of the default behaviour change.
4. Clang Driver passes in -fdata-sections by default on AIX.
Reviewed By: MaskRay, DiggerLin
Differential Revision: https://reviews.llvm.org/D88737
This reverts 9b5b305023 and fixes the unwanted re-ordering when generating ThinLTO indexes.
The goal of this patch is to better balance thread utilization during ThinLTO in-process linking (in llvm-lto2 or in LLD). Before this patch, large modules would often be scheduled late during execution, taking a long time to complete, thus starving the thread pool.
We now sort modules in descending order, based on each module's bitcode size, so that larger modules are processed first. By doing so, smaller modules have a better chance to keep the thread pool active, and thus avoid starvation when the bitcode compilation is almost complete.
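Conceptually the change is a sort of the backend work list by descending
bitcode size (a sketch with an assumed task type, not the actual LTO code):
```
#include <algorithm>
#include <cstddef>
#include <vector>

struct BitcodeTask {
  size_t bitcodeSize;
  // ... module identifier, backend callback, etc.
};

void scheduleLargestFirst(std::vector<BitcodeTask> &tasks) {
  // Start the biggest modules first so a large straggler doesn't begin
  // last and leave the rest of the thread pool idle at the tail.
  std::stable_sort(tasks.begin(), tasks.end(),
                   [](const BitcodeTask &a, const BitcodeTask &b) {
                     return a.bitcodeSize > b.bitcodeSize;
                   });
}
```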
In our case (on dual Intel Xeon Gold 6140, Windows 10 version 2004, two-stage build), this saves 15 sec when linking `clang.exe` with LLD & -flto=thin, /opt:lldltojobs=all, no ThinLTO cache, -DLLVM_INTEGRATED_CRT_ALLOC=d:\git\rpmalloc.
Before patch: 100 sec
After patch: 85 sec
Inspired by the work done by David Callahan in D60495.
Differential Revision: https://reviews.llvm.org/D87966
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
Adds more testing in basic-assembly.s and a new test tables.s.
Adds support for YAML reading and writing of tables as well.
Differential Revision: https://reviews.llvm.org/D88815
Follow-up on https://reviews.llvm.org/D85062, which ignores
entire library objects when no symbols are used within them.
This shouldn't apply with `--whole-archive`, since that option
is specified to treat archive members like direct object inputs.
Differential Revision: https://reviews.llvm.org/D89290
This allows `__wasilibc_populate_libpreopen` to be GC'd in more cases
where it isn't needed, including when linked from Rust's libstd.
Differential Revision: https://reviews.llvm.org/D85062
This flag works in a similar way to the ELF linker in that it
will resolve any defined symbols to their local definition within
a shared library or -pie executable.
This flag has no effect on static linking.
Differential Revision: https://reviews.llvm.org/D89152
In ELF/InputFiles.cpp, getBitcodeMachineKind() is limited to uint8_t return
type. This works as long as EM_xxx is < 256, which is true for common
architectures, but not for some newly assigned or unofficial EM_* values.
The corresponding ELF field (e_machine) can hold uint16_t.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D89185
Similar to D66992.
In GNU ld, a -u specified symbol is a STB_DEFAULT undefined.
It cannot be changed to STB_WEAK by a later STB_WEAK undefined in a regular object file.
The behavior is consistent with our model because -u means "we need to fetch a lazy definition".
It should not be altered just because there is also a STB_WEAK undefined.
Note, our -u semantics are still different from GNU ld (https://github.com/ClangBuiltLinux/linux/issues/515):
we don't force the specified symbol to appear in .symtab. This is a deliberate decision.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D88945
If a version is specified both with --{major,minor}-subsystem-version and
with --subsystem <name>:<version>, the one specified last (that actually
sets a version) takes precedence in GNU ld; thus doing the same here.
Differential Revision: https://reviews.llvm.org/D88804
As they can be set independently after D88802, we can get rid of a bit
of extra code - simplifying the logic here before adding more
complication to it later.
Differential Revision: https://reviews.llvm.org/D88803
The MinGW driver has separate options for OS and subsystem version.
Having this available in lld-link allows the MinGW driver to both match
GNU ld better and simplifies the code for merging two (potentially
mismatching) arguments into one.
Differential Revision: https://reviews.llvm.org/D88802
Parse the components as decimal, instead of deducing the base from
the string. This avoids ambiguity if the second number contains leading
zeros, which previously were parsed as indicating an octal number.
MS link.exe doesn't support hexadecimal numbers in the version numbers,
neither in /version nor in /subsystem.
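A sketch of the parsing rule (an illustrative helper using LLVM's StringRef,
not the exact lld code):
```
#include "llvm/ADT/StringRef.h"
#include <cstdint>
#include <tuple>

static bool parseVersion(llvm::StringRef arg, uint32_t &major,
                         uint32_t &minor) {
  llvm::StringRef majStr, minStr;
  std::tie(majStr, minStr) = arg.split('.');
  // Radix 10 is passed explicitly; a radix of 0 would infer the base, so
  // "6.05" would wrongly parse the minor component as octal.
  if (majStr.getAsInteger(10, major))
    return false;
  minor = 0;
  return minStr.empty() || !minStr.getAsInteger(10, minor);
}
```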
Differential Revision: https://reviews.llvm.org/D88801
This adds the following two new lines to /summary:
21351 Input OBJ files (expanded from all cmd-line inputs)
61 PDB type server dependencies
38 Precomp OBJ dependencies
1420669231 Input type records <<<<
78665073382 Input type records bytes <<<<
8801393 Merged TPI records
3177158 Merged IPI records
59194 Output PDB strings
71576766 Global symbol records
25416935 Module symbol records
2103431 Public symbol records
Differential Revision: https://reviews.llvm.org/D88703
Before this patch /summary was crashing with some .PCH.OBJ files, because tpiMap[srcIdx++] was reading at the wrong location. When the TpiSource depends on a .PCH.OBJ file, the types should be offset by the previously merged PCH.OBJ set of indices.
Differential Revision: https://reviews.llvm.org/D88678
Add Thread Local Storage support for the 34 bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.
The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd) R_PPC64_TLSGD
R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell by looking only at the relocation `R_PPC64_TLSGD` when it is being used in a TOC instruction sequence and when it is being used in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way LLD can tell by looking at the alignment of the offset of `R_PPC64_TLSGD` whether or not it is being used as part of a TOC or no-TOC sequence.
Reviewed By: NeHuang, sfertile, MaskRay
Differential Revision: https://reviews.llvm.org/D87318
Add Thread Local Storage support for the 34 bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.
The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd) R_PPC64_TLSGD
R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell by looking only at the relocation `R_PPC64_TLSGD` when it is being used in a TOC instruction sequence and when it is being used in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way LLD can tell by looking at the alignment of the offset of `R_PPC64_TLSGD` whether or not it is being used as part of a TOC or no-TOC sequence.
Reviewed By: NeHuang, sfertile, MaskRay
Differential Revision: https://reviews.llvm.org/D87318
When adding an archive member with a problem, e.g. a new bitcode with an
old archiver, containing an unsupported attribute, or an ELF file with a
malformed symbol table, the archiver would throw away the error and
simply add the member to the archive without any symbol entries. This
meant that the resultant archive could be silently unusable when not
using --whole-archive, and result in unexpected undefined symbols.
This change fixes this issue by addressing two FIXMEs and only throwing
away not-an-object errors. However, this meant that some LLD tests which
didn't need symbol tables and were using invalid members deliberately to
test the linker's malformed input handling no longer worked, so this
patch also stops the archiver from looking for symbols in an object if
it doesn't require a symbol table, and updates the tests accordingly.
Differential Revision: https://reviews.llvm.org/D88288
Reviewed by: grimar, rupprecht, MaskRay
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D87544
The routing rules are:
sym -> __wrap_sym
__real_sym -> sym
__wrap_sym and sym are routing targets, so they need to be exposed to the symbol
table. __real_sym is not and can be eliminated if not used by regular object.
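For illustration, the classic pattern this routing enables (assuming
--wrap=malloc; the __real_/__wrap_ names follow the rules above):
```
#include <cstddef>
#include <cstdio>

// With --wrap=malloc, calls to malloc reach __wrap_malloc, and
// __real_malloc reaches the original definition.
extern "C" void *__real_malloc(size_t size);

extern "C" void *__wrap_malloc(size_t size) {
  std::fprintf(stderr, "malloc(%zu)\n", size); // instrument the call
  return __real_malloc(size);                  // forward to the real malloc
}
```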
This adds new-style command support. In this mode, all exports
are considered command entrypoints, and the linker inserts calls to
`__wasm_call_ctors` and `__wasm_call_dtors` for all such entrypoints
(sketched after the list below).
This enables support for:
- Command entrypoints taking arguments other than strings and return values
other than `int`.
- Multicall executables without requiring the use of string-based
command-line arguments.
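Conceptually, the inserted calls turn each exported entrypoint into something
like the following (illustrative C++, not the actual synthesized code):
```
extern "C" void __wasm_call_ctors();
extern "C" void __wasm_call_dtors();
extern "C" int user_entry(int arg); // an arbitrary exported command

// The export is rebound so that ctors run before, and dtors after,
// every command invocation.
extern "C" int user_entry_command(int arg) {
  __wasm_call_ctors();
  int ret = user_entry(arg);
  __wasm_call_dtors();
  return ret;
}
```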
This new behavior is disabled when the input has an explicit call to
`__wasm_call_ctors`, indicating code not expecting new-style command
support.
This change does mean that wasm-ld no longer supports DCE-ing the
`__wasm_call_ctors` function when there are no calls to it. If there are no
calls to it, and there are ctors present, we assume it's wasm-ld's job to
insert the calls. This seems ok though, because if there are ctors present,
the program is expecting them to be called. This change affects the
init-fini-gc.ll test.
In particular, allow explicit exporting of `__stack_pointer`, but
exclude it from `--export-all` to avoid requiring the mutable-globals
feature whenever `--export-all` is used.
This uncovered a bug in populateTargetFeatures regarding checking
whether the mutable-globals feature is allowed.
See: https://github.com/WebAssembly/binaryen/issues/2934
Differential Revision: https://reviews.llvm.org/D88506
Stored Error objects have to be checked, even if they are success
values.
This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
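A condensed sketch of the cell and the insertion loop described above (field
and helper names are mine, not the actual lld-link code):
```
#include <atomic>
#include <cstddef>
#include <cstdint>

// <sourceIndex, typeIndex> packed into one uint64_t; zero means empty.
// Comparing packed values implements the priority rule: inputs earlier
// on the command line (smaller sourceIndex) win.
uint64_t makeCell(uint32_t srcIdx, uint32_t typeIdx) {
  return (uint64_t(srcIdx) << 32) | typeIdx;
}

// Stand-in for the side-table lookup:
// tpiSources[cell >> 32]->ghashes[uint32_t(cell)].
uint64_t ghashOfCell(uint64_t cell);

uint32_t insert(std::atomic<uint64_t> *table, size_t mask, uint64_t ghash,
                uint64_t newCell) {
  size_t idx = size_t(ghash) & mask; // assumes a power-of-two table size
  for (;;) {
    uint64_t cur = table[idx].load(std::memory_order_acquire);
    if (cur != 0 && ghashOfCell(cur) != ghash) {
      idx = (idx + 1) & mask; // a different type lives here: keep probing
    } else if (cur == 0 || newCell < cur) {
      // Empty cell or a lower-priority duplicate: try to install our cell;
      // on failure, loop around and re-examine whoever beat us to it.
      if (table[idx].compare_exchange_weak(cur, newCell))
        return uint32_t(idx); // stable position, saved for later remapping
    } else {
      return uint32_t(idx); // a higher-priority duplicate is already there
    }
  }
}
```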
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
They operate like Defined symbols but with no associated InputSection.
Note that `ld64` seems to treat the weak definition flag like a no-op for
absolute symbols, so I have replicated that behavior.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87909
Apparently this is used in real programs. I've handled this by reusing
the logic we already have for branch (function call) relocations.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87852
Not 100% sure but it appears that bundles are almost identical to
dylibs, aside from the fact that they do not contain `LC_ID_DYLIB`. ld64's code
seems to treat bundles and dylibs identically in most places.
Supporting bundles allows us to run e.g. XCTests, as all test suites are
compiled into bundles which get dynamically loaded by the `xctest` test runner.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D87856
* Implement rebase opcodes. Rebase opcodes tell dyld where absolute
addresses have been encoded in the binary. If the binary is not loaded
at its preferred address, dyld has to rebase these addresses by adding
an offset to them.
* Support `-pie` and use it to test rebase opcodes.
This is necessary for absolute address references in dylibs, bundles etc
to work.
Reviewed By: #lld-macho, gkm
Differential Revision: https://reviews.llvm.org/D87199
The R2 save stub will now support offsets up to 64 bits.
There are three cases that will be used.
1) The offset fits in 26 bits.
```
b <26 bit offset>
```
2) The offset does not fit in 26 bits but fits in 34 bits.
```
paddi r12, 0, <34 bit offset>, 1
mtctr r12
bctr
```
3) The offset does not fit in 34 bits. Since this is an R2 save stub we can use
the TOC in R2. We are not loading the offset but the actual address we want to
branch to.
```
addis r12, r2, <address in TOC hi>
ld r12, <address in TOC lo>(r12)
mtctr r12
bctr
```
In case 1) the stub is only 8 bytes while in cases 2) and 3) the stub will be
20 bytes.
Reviewed By: MaskRay, sfertile, NeHuang
Differential Revision: https://reviews.llvm.org/D87916
https://github.com/WebAssembly/threads/issues/144 updated the
WebAssembly threads proposal to make atomic operations on unshared memories
valid. This change updates the feature checking in the linker accordingly.
Production WebAssembly engines have recently been updated to allow this
behvaior, but after this change users who accidentally use atomics with unshared
memories on older versions of the engines will get validation errors at runtime
rather than link errors.
Differential Revision: https://reviews.llvm.org/D79530
Library users should not need to call errorHandler().reset() explicitly.
google/iree calls lld::elf::link and without the patch some global
variables are not cleaned up in the next invocation.
".text.split." holds symbols which are split out from functions in
other input sections. For example, with -fsplit-machine-functions,
placing the cold parts in .text.split instead of .text.unlikely mitigates
against poor profile inaccuracy. Techniques such as hugepage remapping can
make conservative decisions at the section granularity.
Differential Revision: https://reviews.llvm.org/D87840
In lit tests, we run each LLD invocation twice (LLD_IN_TEST=2), without shutting down the process in-between. This ensures a full cleanup is properly done between runs.
Only active for the COFF driver for now. Other drivers still use LLD_IN_TEST=1 which executes just one iteration with full cleanup, like before.
When the environment variable LLD_IN_TEST is unset, a shortcut is taken, only one iteration is executed, no cleanup for faster exit, like before.
A public API, lld::safeLldMain(), is also available when using LLD as a library.
Differential Revision: https://reviews.llvm.org/D70378