{D123302} got me looking deeper at `includeInSymtab`. I thought it was a
little odd that there were excluded (live) symbols for which
`includeInSymtab` was false; we shouldn't have so many different ways to
exclude a symbol. As such, this diff makes the `L`-prefixed-symbol
exclusion code use `includeInSymtab` too. (Note that as part of our
support for `__eh_frame`, we will also be excluding all `__eh_frame`
symbols from the symtab in a future diff.)
Another thing I noticed is that the `emitStabs` code never has to deal
with excluded symbols because `SymtabSection::finalize()` already
filters them out. As such, I've updated the comments and asserts from
{D123302} to reflect this.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D123433
The previous implementation of UnwindInfoSection materialized
all the compact unwind entries & applied their relocations, then parsed
the resulting data to generate the final unwind info. This design had
some unfortunate consequences: since relocations can only be applied
after their referents have had addresses assigned, operations that need
to happen before address assignment must contort themselves. (See
{D113582} and observe how this diff greatly simplifies it.)
Moreover, it made synthesizing new compact unwind entries awkward.
Handling PR50956 will require us to do this synthesis, and is the main
motivation behind this diff.
Previously, instead of generating a new CompactUnwindEntry directly, we
would have had to generate a ConcatInputSection with a number of
`Reloc`s that would then get "flattened" into a CompactUnwindEntry.
This diff introduces an internal representation of `CompactUnwindEntry`
(the former `CompactUnwindEntry` has been renamed to
`CompactUnwindLayout`). The new CompactUnwindEntry stores references to
its personality symbol and LSDA section directly, without the use of
`Reloc` structs.
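Roughly, the new representation looks like the following sketch (simplified;
the exact field list and names in lld/MachO may differ):

  #include <cstdint>

  // Illustrative stand-ins; not the actual lld::macho class definitions.
  struct Symbol;
  struct InputSection;

  // The relocation-free internal form: the personality and LSDA are held as
  // direct references instead of being recovered later by "flattening" Relocs.
  struct CompactUnwindEntry {
    uint64_t functionAddress;
    uint32_t functionLength;
    uint32_t encoding;     // compact_unwind_encoding_t
    Symbol *personality;   // nullptr if the entry has no personality routine
    InputSection *lsda;    // nullptr if the entry has no LSDA
  };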
In addition to being easier to work with, the new representation also allows
us to handle unwind info whose personality symbols are located in sections
placed after `__unwind_info`.
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D123276
Using a portable format specifier avoids a "format specifies type
'unsigned long long' but the argument has type 'uint64_t' (aka 'unsigned
long') [-Werror,-Wformat]" error depending on the exact definition of
`uint64_t`.
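For illustration, a minimal example of the portable specifier (a generic
sketch, not the exact call site that was changed):

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t value = 42;
    // PRIu64 expands to the right length modifier for this platform's
    // uint64_t, so this compiles cleanly under -Wformat everywhere.
    std::printf("value = %" PRIu64 "\n", value);
    // By contrast, "%llu" warns on platforms where uint64_t is unsigned long.
    return 0;
  }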
This diff is motivated by my work to add proper DWARF unwind support. As
detailed in PR50956, functions that need DWARF unwind must have
compact unwind entries synthesized for them. These CU entries encode an
offset within `__eh_frame` that points to the corresponding DWARF FDE.
In order to encode this offset during
`UnwindInfoSectionImpl::finalize()`, we need to first assign values to
`InputSection::outSecOff` for each `__eh_frame` subsection. But
`__eh_frame` is ordered after `__unwind_info` (according to ld64 at
least), which puts us in a bit of a bind: `outSecOff` gets assigned
during finalization, but `__eh_frame` is being finalized after
`__unwind_info`.
But it occurred to me that there's no real need for most
ConcatOutputSections to be finalized sequentially. It's only necessary
for text-containing ConcatOutputSections that may contain branch relocs
which may need thunks. ConcatOutputSections containing other types of
data can be finalized in any order.
This diff moves the finalization logic for non-text sections into a
separate `finalizeContents()` method. This method is called before
section address assignment & unwind info finalization takes place. In
theory we could call these `finalizeContents()` methods in parallel, but
in practice it seems to be faster to do it all on the main thread.
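Roughly, the ordering looks like the sketch below (the types and names here
are simplified stand-ins, not the actual lld/MachO API):

  #include <vector>

  // Simplified stand-in for the real class.
  struct ConcatOutputSection {
    bool isText = false;
    void finalizeContents() { /* assign outSecOff to each subsection */ }
  };

  // Non-text sections contain no branch relocs and thus never need thunks, so
  // their contents (and their subsections' outSecOff values) can be fixed
  // before address assignment and before __unwind_info is finalized.
  void finalizeNonTextSections(std::vector<ConcatOutputSection *> &sections) {
    for (ConcatOutputSection *osec : sections)
      if (!osec->isText)
        osec->finalizeContents();
  }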
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D123279
I was wondering if SymtabSection::emitStabs() should check
defined->includeInSymtab. Add asserts and comments explaining why that's not
necessary.
No behavior change.
Differential Revision: https://reviews.llvm.org/D123302
{D118797} means that we can now check the name/segname of a given
section directly, instead of having to look those properties up on one
of its subsections. This allows us to simplify our code.
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D123275
Our compact unwind handling code currently has some logic to locate a
symbol at a given offset in an InputSection. The EH frame code will need
to do something similar, so let's factor out the code.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D123301
This matches ld64, and makes dsymutil work better with lld's output.
Fixes PR54783, see there for details.
Reduces time needed to run dsymutil on Chromium Framework from 8m30s
(which is already down from 26 min with D123218) to 6m30s and removes
many lines of "could not find object file symbol for symbol" from dsymutil output
(previously: several MB of those messages, now dsymutil is completely silent).
Differential Revision: https://reviews.llvm.org/D123252
This removes options for performing LTO with the legacy pass
manager in LLD. Options that explicitly enable the new pass manager
are retained as no-ops.
Differential Revision: https://reviews.llvm.org/D123219
Or rather, error out if it is set to something other than ON. This
removes the ability to enable the legacy pass manager by default,
but does not remove the ability to explicitly enable it through
various flags like -flegacy-pass-manager or -enable-new-pm=0.
I checked, and our test suite definitely doesn't pass with
LLVM_ENABLE_NEW_PASS_MANAGER=OFF anymore.
Differential Revision: https://reviews.llvm.org/D123126
Returning `std::array<uint8_t, N>` instead of a `StringRef` gives better
ergonomics for users of the hashing functions (see the sketch below):
* When returning a `StringRef`, client code has to jump through hoops to do string manipulations instead of dealing directly with a fixed array of bytes, which is more natural
* Returning `std::array<uint8_t, N>` avoids the need for the hasher classes to keep a field just for the purpose of wrapping it and returning it as a `StringRef`
As part of this patch also:
* Introduce `TruncatedBLAKE3`, which is useful for using BLAKE3 as the hasher type for `HashBuilder` with non-default hash sizes.
* Make `MD5Result` inherit from `std::array<uint8_t, 16>`, which improves & simplifies its API.
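For illustration, here's a sketch of the ergonomics on the caller side (the
hasher below is a dummy stand-in, not one of the real hash classes):

  #include <array>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <string_view>

  // Dummy stand-in "hasher" -- not a real hash function.
  std::array<uint8_t, 16> computeDigest(std::string_view data) {
    std::array<uint8_t, 16> digest{};
    for (size_t i = 0; i < data.size(); ++i)
      digest[i % digest.size()] ^= static_cast<uint8_t>(data[i]);
    return digest;
  }

  int main() {
    // The caller gets a fixed-size byte array: the size is part of the type,
    // the bytes are directly indexable, and the hasher needs no member field
    // just to back a StringRef's storage.
    std::array<uint8_t, 16> digest = computeDigest("hello");
    for (uint8_t byte : digest)
      std::printf("%02x", byte);
    std::printf("\n");
    return 0;
  }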
Differential Revision: https://reviews.llvm.org/D123100
When two local symbols (think: file-scope static functions, or functions in
unnamed namespaces) with the same name in two different translation units
both needed thunks, ld64.lld previously created external thunks for both
of them. These thunks ended up with the same name, leading to a duplicate
symbol error for the thunk symbols.
Instead, give thunks for local symbols local visibility.
(Hitting this requires a jump to a local symbol from over 128 MiB away.
It's unlikely that a single .o file is 128 MiB large, but with ICF
you can end up with a situation where the local symbol is ICF'd with
a symbol in a separate translation unit. And that can introduce a
large enough jump to require a thunk.)
Fixes PR54599.
Differential Revision: https://reviews.llvm.org/D122624
`config->priorities` has been used to hold the intermediate state during the
construction of the order in which sections should be laid out. This is not a
good place to hold this state since the intermediate state is not a
"configuration" for LLD. It should be encapsulated in a class for building a
mapping from section to priority (which I created in this diff as the
`PriorityBuilder` class).
The same thing is being done for `config->callGraphProfile`.
Reviewed By: #lld-macho, int3
Differential Revision: https://reviews.llvm.org/D122156
Update DataInCode's calculation of `endAddr` to use `getSize()` instead
of `getFileSize()` -- while in practice they're the same for
non-zerofill sections (which code sections are), we still should treat
address sizes / offsets as distinct from file sizes / offsets.
Since Mach-O has a two-level namespace (unlike ELF), we can usually set
this property to true.
(I believe this setting is only available in the new LTO backend, so I
can't really use ld64 / libLTO's behavior as a reference here... I'm
just doing what I think is correct.)
See {D119294} for the work done to calculate the `interposable` used in
this diff.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D119506
All references to interposable symbols can be redirected at runtime to
point to a different symbol definition (with the same name). For
example, if both dylib A and B define symbol _foo, and we load A before
B at runtime, then all references to _foo within dylib B will point to
the definition in dylib A.
ld64 makes all extern symbols interposable when linking with
`-flat_namespace`.
TODO 1: Support `-interposable` and `-interposable_list`, which should
just be a matter of parsing those CLI flags and setting the
`Defined::interposable` bit.
TODO 2: Set Reloc::FinalDefinitionInLinkageUnit correctly with this info
(we are currently not setting it at all, so we're erring on the
conservative side, but we should help the LTO backend generate more
optimal code).
Reviewed By: modimo, MaskRay
Differential Revision: https://reviews.llvm.org/D119294
Previously, we only allowed this for DylibSymbols. However, in order to
properly support `-flat_namespace` as well as `-interposable`, we need
to allow this for Defined symbols too. Therefore we hoist the
`lazyBindOffset` and the `stubsHelperIndex` into the parent Symbol
class.
The actual change to support interposition under `-flat_namespace` is in
{D119294}; the NFC changes here have been split out for easier review.
Perf regression isn't stat sig on my 3.2 GHz 16-Core Intel Xeon W linking
chromium_framework:
             base            diff            difference (95% CI)
sys_time     1.227 ± 0.021   1.234 ± 0.031   [ -0.3% .. +1.5%]
user_time    3.665 ± 0.036   3.674 ± 0.035   [ -0.2% .. +0.7%]
wall_time    4.596 ± 0.055   4.609 ± 0.064   [ -0.3% .. +0.9%]
samples      34              47
Max RSS regression is barely stat sig:
             base                             diff                             difference (95% CI)
time         1003664356.324 ± 15404053.912    1010380403.613 ± 10578309.455    [ +0.0% .. +1.3%]
samples      37                               31
Reviewed By: modimo
Differential Revision: https://reviews.llvm.org/D121351
This reverts commit e049a87f04.
That commit breaks the build with errors of the form:
/usr/local/google/home/saugustine/llvm/llvm-project/lld/MachO/ExportTrie.cpp:148:11: error: definition of implicitly declared destructor
TrieNode::~TrieNode() {
The code can be used from multiple threads, and the allocator is not thread-safe.
Fixes PR54378
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D121638
Previously, we aligned every cstring to 16 bytes as a temporary hack to
deal with https://github.com/llvm/llvm-project/issues/50135. However, it
was highly wasteful in terms of binary size.
To recap, in contrast to ELF, which puts strings that need different
alignments into different sections, `clang`'s Mach-O backend puts them
all in one section. Strings that need to be aligned have the .p2align
directive emitted before them, which simply translates into zero padding
in the object file. In other words, we have to infer the alignment of
the cstrings from their addresses.
We differ slightly from ld64 in how we've chosen to align these
cstrings. Both LLD and ld64 preserve the number of trailing zeros in
each cstring's address in the input object files. When deduplicating
identical cstrings, both linkers pick the cstring whose address has more
trailing zeros, and preserve the alignment of that address in the final
binary. However, ld64 goes a step further and also preserves the offset
of the cstring from the last section-aligned address. I.e. if a cstring
is at offset 18 in the input, with a section alignment of 16, then both
LLD and ld64 will ensure the final address is 2-byte aligned (since
`18 == 16 + 2`). But ld64 will also ensure that the final address is of
the form 16 * k + 2 for some k (which implies 2-byte alignment).
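To make the shared part of the rule concrete, here is a small sketch (not the
actual lld code; it assumes the section alignment is a nonzero power of two):

  #include <cstdint>

  // The alignment preserved for a cstring is derived from the trailing zero
  // bits of its input address, i.e. of (section alignment | offset within the
  // section). For example, an offset of 18 in a 16-byte-aligned section gives
  // 2-byte alignment, since 16 | 18 == 0b10010 has one trailing zero bit.
  // ld64 additionally preserves the offset modulo the section alignment
  // (18 % 16 == 2); lld does not.
  uint64_t preservedAlignment(uint64_t sectionAlign, uint64_t offsetInSection) {
    uint64_t combined = sectionAlign | offsetInSection;
    uint64_t align = 1;
    while ((combined & 1) == 0) {
      combined >>= 1;
      align <<= 1;
    }
    return align;
  }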
Note that ld64's heuristic means that a dedup'ed cstring's final address is
dependent on the order of the input object files. E.g. if in addition to the
cstring at offset 18 above, we have a duplicate one in another file with a
`.cstring` section alignment of 2 and an offset of zero, then ld64 will pick
the cstring from the object file earlier on the command line (since both have
the same number of trailing zeros in their address). So the final cstring may
either be at some address `16 * k + 2` or at some address `2 * k`.
I've opted not to follow this behavior primarily for implementation
simplicity, and secondarily to save a few more bytes. It's not clear to me
that preserving the section alignment + offset is ever necessary, and there
are many cases that are clearly redundant. In particular, if an x86_64 object
file contains some strings that are accessed via SIMD instructions, then the
.cstring section in the object file will be 16-byte-aligned (since SIMD
requires its operand addresses to be 16-byte aligned). However, there will
typically also be other cstrings in the same file that aren't used via SIMD
and don't need this alignment. They will be emitted at some arbitrary address
`A`, but ld64 will treat them as being 16-byte aligned with an offset of
`A % 16`.
I have verified that the two repros in https://github.com/llvm/llvm-project/issues/50135
work well with the new alignment behavior.
Fixes https://github.com/llvm/llvm-project/issues/54036.
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D121342
ld64 breaks down `__objc_classrefs` on a per-word level and deduplicates
them. This greatly reduces the number of bind entries emitted (and
therefore the amount of work `dyld` has to do at runtime). For
chromium_framework, this change to LLD cuts the number of (non-lazy)
binds from 912 to 190, getting us to parity with ld64 in this aspect.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D121053
`__cfstring` has embedded addends that foil ICF's hashing / equality
checks. (We can ignore embedded addends when doing ICF because the same
information gets recorded in our Reloc structs.) Therefore, in order to
properly dedup CFStrings, we create a mutable copy of the CFString and
zero out the embedded addends before performing any hashing / equality
checks.
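The idea is roughly the following (a simplified sketch with hypothetical
types, not the actual lld implementation):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Hypothetical reloc stand-in: where a relocated field lives and how wide
  // it is.
  struct RelocStub {
    uint64_t offset;  // offset of the relocated field within the section data
    size_t numBytes;  // width of that field
  };

  // Copy the section bytes and zero each relocated field so that embedded
  // addends cannot perturb hashing / equality checks; the addend values are
  // still available from the Reloc entries themselves.
  std::vector<uint8_t> copyWithZeroedAddends(const uint8_t *data, size_t size,
                                             const std::vector<RelocStub> &relocs) {
    std::vector<uint8_t> copy(data, data + size);
    for (const RelocStub &r : relocs)
      std::memset(copy.data() + r.offset, 0, r.numBytes);
    return copy;
  }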
(We did in fact have a partial implementation of CFString deduplication
already. However, it only worked when the cstrings they point to are at
identical offsets in their object files.)
I anticipate this approach can be extended to other similar
statically-allocated struct sections in the future.
In addition, we previously treated all references with differing addends
as unequal. That's incorrect when the references are to literals:
different addends may point to the same literal in the output binary. In
particular, `__cfstring` has such references to `__cstring`. I've
adjusted ICF's `equalsConstant` logic accordingly, and I've added a few
more tests to make sure the addend-comparison code path is adequately
covered.
Fixes https://github.com/llvm/llvm-project/issues/51281.
Reviewed By: #lld-macho, Roger
Differential Revision: https://reviews.llvm.org/D120137
... from a `uint64_t` to a `uint32_t`. (LLD-ELF uses a `uint32_t` too.)
About a 1.7% reduction in peak RSS when linking chromium_framework on my
3.2 GHz 16-Core Intel Xeon W Mac Pro, and no stat sig change in wall
time.
             test2.sh [before]                test2.sh [after]                 difference (95% CI)
RSS          1003036672.000 ± 9891065.259     985539505.231 ± 10272748.749     [ -2.3% .. -1.2%]
samples      27                               26

             base            diff            difference (95% CI)
sys_time     1.277 ± 0.023   1.277 ± 0.024   [ -0.9% .. +0.9%]
user_time    6.682 ± 0.046   6.598 ± 0.043   [ -1.6% .. -0.9%]
wall_time    5.904 ± 0.062   5.895 ± 0.063   [ -0.7% .. +0.4%]
samples      46              28
No appreciable change (~0.01%) in number of `equals` comparisons either:
Before:
ld64.lld: ICF needed 8 iterations
ld64.lld: equalsConstant() called 701643 times
ld64.lld: equalsVariable() called 3438526 times
After:
ld64.lld: ICF needed 8 iterations
ld64.lld: equalsConstant() called 701729 times
ld64.lld: equalsVariable() called 3438526 times
Reviewed By: #lld-macho, MaskRay, thakis
Differential Revision: https://reviews.llvm.org/D121052
The existing hashing of stubsHelperIndex has mostly been a no-op* for
some time now (ever since we made ICF run before dylib symbols get their
stubs indices assigned). I guess we could consider hashing the name +
filename of the DylibSymbol instead, but I'm not sure the overhead's
worth it... moreover, LLD/ELF likewise only hashes its Defined symbols.
*: Technically it does change the hash value since stubsHelperIndex is
initialized to `UINT32_MAX` by default. But since all stubsHelperIndex
values are the same when ICF runs, they don't add any useful
information to the hash.
This is debug code that is disabled by default. It'll provide an easy way
to figure out the impact (if any) of tweaking ICF's hashing algorithm
(since a poor quality hash will result in many more `equals*` calls).
Reviewed By: #lld-macho, oontvoo
Differential Revision: https://reviews.llvm.org/D121051
This was based off @thakis' draft in {D103517}. I employed templates to ensure
the support for `-why_live` wouldn't slow down the regular non-why-live code
path.
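The templating looks roughly like this (a sketch with placeholder names, not
the actual MarkLive code):

  // Placeholder declarations for the sketch.
  struct Symbol;
  void enqueueForMarking(Symbol *sym);
  void recordWhyLive(Symbol *sym, Symbol *reason);

  // The why-live bookkeeping is compiled out entirely unless -why_live is in
  // effect, so the common path pays nothing for it.
  template <bool RecordWhyLive>
  void markLive(Symbol *sym, Symbol *reason) {
    if constexpr (RecordWhyLive)
      recordWhyLive(sym, reason);
    enqueueForMarking(sym);
  }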
No stat sig perf difference on my 3.2 GHz 16-Core Intel Xeon W:
             base            diff            difference (95% CI)
sys_time     1.195 ± 0.015   1.199 ± 0.022   [ -0.4% .. +1.0%]
user_time    3.716 ± 0.022   3.701 ± 0.025   [ -0.7% .. -0.1%]
wall_time    4.606 ± 0.034   4.597 ± 0.046   [ -0.6% .. +0.2%]
samples      44              37
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D120377
This mirrors the code structure in `lld/ELF`. It also paves the way for
an upcoming diff where I templatize things.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D120376
Symbols for which `canBeOmittedFromSymbolTable()` is true should be
treated as private externs. This diff tries to do that by unsetting the
ExportDynamic bit. It seems to mostly work with the FullLTO backend, but
with the ThinLTO backend, the `local_unnamed_addr` symbols still fail to
be properly hidden. Nonetheless, this is a step in the right direction.
I've documented all the remaining differences between our behavior and
LD64's in the lto-internalized-unnamed-addr.ll test.
See also https://discourse.llvm.org/t/mach-o-lto-handling-of-linkonce-odr-unnamed-addr/60015
Reviewed By: #lld-macho, thevinster
Differential Revision: https://reviews.llvm.org/D119767
If both an order file and a call graph profile are present, the edges of the
call graph which use symbols present in the order file are not used. All of
the symbols in the order file will appear at the beginning of the section just
as they do currently. In other words, the highest priority derived from the
call graph will be below the lowest priority derived from the order file.
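Roughly, the combined lookup behaves like the sketch below (illustrative
names, not the actual `PriorityBuilder` API; smaller values sort earlier):

  #include <algorithm>
  #include <cstdint>
  #include <optional>
  #include <string>
  #include <unordered_map>

  struct PrioritySketch {
    std::unordered_map<std::string, int64_t> orderFile; // from the order file
    std::unordered_map<std::string, int64_t> callGraph; // from the profile (>= 0)

    std::optional<int64_t> lookup(const std::string &sym) const {
      // An order-file entry always wins; call-graph edges touching such
      // symbols are ignored when deriving call-graph priorities.
      if (auto it = orderFile.find(sym); it != orderFile.end())
        return it->second;
      // Call-graph priorities are shifted past the order file's worst
      // priority, so even the hottest profiled symbol sorts after every
      // order-file symbol.
      if (auto it = callGraph.find(sym); it != callGraph.end())
        return worstOrderFilePriority() + 1 + it->second;
      return std::nullopt;
    }

    int64_t worstOrderFilePriority() const {
      int64_t worst = 0;
      for (const auto &entry : orderFile)
        worst = std::max(worst, entry.second);
      return worst;
    }
  };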
Practically, this change renames CallGraphSort.{h,cpp} to SectionPriorities.{h,cpp}
and moves most of the code related to order files and call graph profiles into
the new file to reduce duplication.
Differential Revision: https://reviews.llvm.org/D117354
Main motivation: including `llvm/CodeGen/CommandFlags.h` in
`CommonLinkerContext.h` means that the declaration of `llvm::Reloc` is
visible in any file that includes `CommonLinkerContext.h`. Since our
cpp files have both `using namespace llvm` and `using namespace
lld::macho`, this results in conflicts with `lld::macho::Reloc`.
I suppose we could put `llvm::Reloc` into a nested namespace, but in general,
I think we should avoid transitively including too many header files in
a very widely used header like `CommonLinkerContext.h`.
RegisterCodeGenFlags' ctor initializes a bunch of function-`static`
structures and does nothing else, so it should be fine to "initialize"
it as a temporary stack variable rather than as a file static.
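In other words, something along these lines suffices (a sketch; the
surrounding function is hypothetical):

  #include "llvm/CodeGen/CommandFlags.h"

  // The ctor's only job is to initialize function-static cl::opt storage, so
  // a short-lived local works just as well as a file-scope static.
  void registerCodeGenFlags() {
    llvm::codegen::RegisterCodeGenFlags cgf;
    (void)cgf; // side effects only; nothing needs to outlive this call
  }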
Reviewed By: aganea
Differential Revision: https://reviews.llvm.org/D119913
`parseSections()` is getting a bit large and unwieldy; let's factor out
logic where we can.
Other minor changes in this diff:
* `"__cg_profile"` is now a global constexpr
* We now use `checkError()` instead of `fatal()`-ing without handling
the Error
* Check for `callGraphProfileSort` before checking the section name,
since the boolean comparison is likely cheaper
Reviewed By: #lld-macho, lgrey, oontvoo
Differential Revision: https://reviews.llvm.org/D119892
By unsetting this property, we are now able to internalize more symbols
during LTO. I compared the output of `-save-temps` for both LLD and
ld64, and we now match ld64's behavior as far as `lto-internalize.ll` is
concerned.
(Thanks @smeenai for working on an initial version of this diff!)
Fixes https://github.com/llvm/llvm-project/issues/50574.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D119372