//===- ConcatOutputSection.cpp --------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "ConcatOutputSection.h"
#include "Config.h"
#include "OutputSegment.h"
#include "SymbolTable.h"
#include "Symbols.h"
#include "SyntheticSections.h"
#include "Target.h"
#include "lld/Common/CommonLinkerContext.h"
#include "llvm/BinaryFormat/MachO.h"
#include "llvm/Support/ScopedPrinter.h"
#include "llvm/Support/TimeProfiler.h"

using namespace llvm;
using namespace llvm::MachO;
using namespace lld;
using namespace lld::macho;

MapVector<NamePair, ConcatOutputSection *> macho::concatOutputSections;
void ConcatOutputSection::addInput(ConcatInputSection *input) {
  assert(input->parent == this);
  if (inputs.empty()) {
    align = input->align;
    flags = input->getFlags();
  } else {
    align = std::max(align, input->align);
    finalizeFlags(input);
  }
  inputs.push_back(input);
}

// Branch-range extension can be implemented in two ways, either through ...
//
// (1) Branch islands: Single branch instructions (also of limited range),
//     that might be chained in multiple hops to reach the desired
//     destination. On ARM64, as many as 16 branch islands are needed to hop
//     between opposite ends of a 2 GiB program. LD64 uses branch islands
//     exclusively, even when it needs excessive hops.
//
// (2) Thunks: Instruction(s) to load the destination address into a scratch
//     register, followed by a register-indirect branch. Thunks are
//     constructed to reach any arbitrary address, so they need not be
//     chained. Even so, a large program might need multiple thunks to the
//     same destination, distributed throughout the program so that every
//     call site has one within range.
//
// The optimal approach is to mix islands for destinations within two hops,
// and use thunks for destinations at greater distance. For now, we only
// implement thunks. TODO: Add support for branch islands.
//
// Internally -- as expressed in LLD's data structures -- a
// branch-range-extension thunk comprises ...
//
// (1) new Defined privateExtern symbol for the thunk named
//     <FUNCTION>.thunk.<SEQUENCE>, which references ...
// (2) new InputSection, which contains ...
// (3.1) new data for the instructions to load & branch to the far address +
// (3.2) new Relocs on instructions to load the far address, which reference ...
// (4.1) existing Defined extern symbol for the real function in __text, or
// (4.2) existing DylibSymbol for the real function in a dylib
//
// Nearly-optimal thunk-placement algorithm features:
//
// * Single pass: O(n) on the number of call sites.
//
// * Accounts for the exact space overhead of thunks -- no heuristics.
//
// * Exploits the full range of call instructions -- forward & backward.
//
// Data:
//
// * DenseMap<Symbol *, ThunkInfo> thunkMap: Maps the function symbol
//   to its thunk bookkeeper.
//
// * struct ThunkInfo (bookkeeper): Call instructions have limited range, and
//   distant call sites might be unable to reach the same thunk, so multiple
//   thunks are necessary to serve all call sites in a very large program. A
//   ThunkInfo stores state for all thunks associated with a particular
//   function: (a) thunk symbol, (b) input section containing stub code, and
//   (c) sequence number for the active thunk incarnation. When an old thunk
//   goes out of range, we increment the sequence number and create a new
//   thunk named <FUNCTION>.thunk.<SEQUENCE>.
//
// * A thunk incarnation comprises (a) private-extern Defined symbol pointing
//   to (b) an InputSection holding machine instructions (similar to a MachO
//   stub), and (c) Reloc(s) that reference the real function for fixing-up
//   the stub code.
//
// * std::vector<ConcatInputSection *> ConcatOutputSection::thunks: A vector
//   parallel to the inputs vector. We store new thunks via cheap vector
//   append, rather than costly insertion into the inputs vector.
//
// Control Flow:
//
// * During address assignment, ConcatOutputSection::finalize() examines call
//   sites by ascending address and creates thunks. When a function is beyond
//   the range of a call site, we need a thunk. Place it at the largest
//   available forward address from the call site. Call sites increase
//   monotonically and thunks are always placed as far forward as possible;
//   thus, we place thunks at monotonically increasing addresses. Once a thunk
//   is placed, it and all previous input-section addresses are final.
//
// * ConcatOutputSection::finalize() and ConcatOutputSection::writeTo() merge
//   the inputs and thunks vectors (both ordered by ascending address), which
//   is simple and cheap.
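//
// For a concrete picture (an illustrative sketch, not a specification of
// the exact encoding): on arm64, target->populateThunk() fills a thunk
// with a three-instruction, 12-byte body that loads the far address into
// the x16 scratch register and branches through it, roughly:
//
//   adrp x16, _func@PAGE
//   add  x16, x16, _func@PAGEOFF
//   br   x16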
DenseMap<Symbol *, ThunkInfo> lld::macho::thunkMap;

// Determine whether we need thunks, which depends on the target arch -- RISC
// (e.g., ARM) generally does because it has limited-range branch/call
// instructions, whereas CISC (e.g., x86) generally doesn't. RISC only needs
// thunks for programs so large that branch source & destination addresses
// might differ more than the range of branch instruction(s).
bool ConcatOutputSection::needsThunks() const {
  if (!target->usesThunks())
    return false;
  uint64_t isecAddr = addr;
  for (ConcatInputSection *isec : inputs)
    isecAddr = alignTo(isecAddr, isec->align) + isec->getSize();
  if (isecAddr - addr + in.stubs->getSize() <=
      std::min(target->backwardBranchRange, target->forwardBranchRange))
    return false;
  // Yes, this program is large enough to need thunks.
  for (ConcatInputSection *isec : inputs) {
    for (Reloc &r : isec->relocs) {
      if (!target->hasAttr(r.type, RelocAttrBits::BRANCH))
        continue;
      auto *sym = r.referent.get<Symbol *>();
      // Pre-populate the thunkMap and memoize call site counts for every
      // InputSection and ThunkInfo. We do this for the benefit of
      // ConcatOutputSection::estimateStubsInRangeVA().
      ThunkInfo &thunkInfo = thunkMap[sym];
      // Knowing each ThunkInfo's call site count tells us whether we might
      // still need to create more thunks for this referent when we estimate
      // the distance to __stubs in estimateStubsInRangeVA().
      ++thunkInfo.callSiteCount;
      // Tracking hasCallSites lets us skip InputSections that have no
      // BRANCH relocs.
      isec->hasCallSites = true;
    }
  }
  return true;
}
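
// An illustrative sanity check of the early exit above: on arm64 both
// branch ranges are roughly 128 MiB, so if this section's inputs plus the
// whole of __stubs span, say, 100 MiB, every branch can reach its target
// directly and no thunks are needed.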
// Since __stubs is placed after __text, we must estimate the address
// beyond which stubs are within range of a simple forward branch.
// This is called exactly once, when the last input section has been finalized.
uint64_t ConcatOutputSection::estimateStubsInRangeVA(size_t callIdx) const {
  // Tally the functions which still have call sites remaining to process,
  // which yields the maximum number of thunks we might yet place.
  size_t maxPotentialThunks = 0;
  for (auto &tp : thunkMap) {
    ThunkInfo &ti = tp.second;
    // This overcounts: only sections that are in forward jump range from the
    // currently-active section get finalized, and all input sections are
    // finalized when estimateStubsInRangeVA() is called. So only backward
    // jumps will need thunks, but we count all jumps.
    if (ti.callSitesUsed < ti.callSiteCount)
      maxPotentialThunks += 1;
  }
  // Tally the total size of input sections remaining to process.
  uint64_t isecVA = inputs[callIdx]->getVA();
  uint64_t isecEnd = isecVA;
  for (size_t i = callIdx; i < inputs.size(); i++) {
    InputSection *isec = inputs[i];
    isecEnd = alignTo(isecEnd, isec->align) + isec->getSize();
  }
  // Estimate the address after which call sites can safely call stubs
  // directly rather than through intermediary thunks.
  uint64_t forwardBranchRange = target->forwardBranchRange;
  assert(isecEnd > forwardBranchRange &&
         "should not run thunk insertion if all code fits in jump range");
  assert(isecEnd - isecVA <= forwardBranchRange &&
         "should only finalize sections in jump range");
  uint64_t stubsInRangeVA = isecEnd + maxPotentialThunks * target->thunkSize +
                            in.stubs->getSize() - forwardBranchRange;
  log("thunks = " + std::to_string(thunkMap.size()) +
      ", potential = " + std::to_string(maxPotentialThunks) +
      ", stubs = " + std::to_string(in.stubs->getSize()) + ", isecVA = " +
      to_hexString(isecVA) + ", threshold = " + to_hexString(stubsInRangeVA) +
      ", isecEnd = " + to_hexString(isecEnd) +
      ", tail = " + to_hexString(isecEnd - isecVA) +
      ", slop = " + to_hexString(forwardBranchRange - (isecEnd - isecVA)));
  return stubsInRangeVA;
}
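
// A worked example of the formula above, with illustrative numbers: if the
// final input section ends at isecEnd = 200 MiB, at most 1000 potential
// thunks of 12 bytes each remain to be placed, and __stubs occupies 1 MiB,
// then the worst-case end of __stubs lies just past 201 MiB. With a
// 128 MiB forward branch range, stubsInRangeVA comes out near 73 MiB: any
// call site at or beyond that address can reach every stub directly.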
void ConcatOutputSection::finalize() {
  uint64_t isecAddr = addr;
  uint64_t isecFileOff = fileOff;
  auto finalizeOne = [&](ConcatInputSection *isec) {
    isecAddr = alignTo(isecAddr, isec->align);
    isecFileOff = alignTo(isecFileOff, isec->align);
    isec->outSecOff = isecAddr - addr;
    isec->isFinal = true;
    isecAddr += isec->getSize();
    isecFileOff += isec->getFileSize();
  };
if (!needsThunks()) {
    for (ConcatInputSection *isec : inputs)
      finalizeOne(isec);
    size = isecAddr - addr;
    fileSize = isecFileOff - fileOff;
    return;
  }
uint64_t forwardBranchRange = target->forwardBranchRange;
  uint64_t backwardBranchRange = target->backwardBranchRange;
  uint64_t stubsInRangeVA = TargetInfo::outOfRangeVA;
  size_t thunkSize = target->thunkSize;
  size_t relocCount = 0;
  size_t callSiteCount = 0;
  size_t thunkCallCount = 0;
  size_t thunkCount = 0;

// Walk all sections in order. Finalize all sections that are less than
  // forwardBranchRange in front of the current section.
  // isecVA is the address of the current section.
  // isecAddr is the start address of the first non-finalized section.
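  //
  // A rough sketch of the walk: finalizeOne() advances finalIdx ahead of
  // callIdx, so every section between the two indices already has its
  // final address:
  //
  //   inputs:  [0][1][2][3][4][5][6] ...
  //                 ^callIdx        ^finalIdx
  //            |<-- finalized ---->|<-- not yet finalized -->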
// inputs[finalIdx] is for finalization (address-assignment)
  size_t finalIdx = 0;
  // Kick off by ensuring that the first input section has an address.
  for (size_t callIdx = 0, endIdx = inputs.size(); callIdx < endIdx;
       ++callIdx) {
    if (finalIdx == callIdx)
      finalizeOne(inputs[finalIdx++]);
ConcatInputSection *isec = inputs[callIdx];
    assert(isec->isFinal);
    uint64_t isecVA = isec->getVA();
    // Assign addresses up to the forward branch-range limit.
    // Every call instruction needs a small number of bytes (on Arm64: 4),
    // and each inserted thunk needs a slightly larger number of bytes
    // (on Arm64: 12). If a section starts with a branch instruction and
    // contains several branch instructions in succession, then the distance
    // from the current position to the position where the thunks are inserted
    // grows. So leave room for a bunch of thunks.
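    // As an illustration: on arm64, where thunkSize is 12 bytes, the slop
    // below reserves 256 * 12 = 3 KiB -- negligible against a branch range
    // of roughly 128 MiB, yet ample headroom for long runs of back-to-back
    // branches (see PR51578).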
unsigned slop = 256 * thunkSize;
    while (finalIdx < endIdx && isecAddr + inputs[finalIdx]->getSize() <
                                    isecVA + forwardBranchRange - slop)
      finalizeOne(inputs[finalIdx++]);
if (!isec->hasCallSites)
      continue;

if (finalIdx == endIdx && stubsInRangeVA == TargetInfo::outOfRangeVA) {
      // When we have finalized all input sections, __stubs (destined
      // to follow __text) comes within range of forward branches and
      // we can estimate the threshold address after which we can
      // reach any stub with a forward branch. Note that although it
      // sits in the middle of a loop, this code executes only once.
      // It is in the loop because we need to call it at the proper
      // time: the earliest call site from which the end of __text
      // (and start of __stubs) comes within range of a forward branch.
      stubsInRangeVA = estimateStubsInRangeVA(callIdx);
    }
    // Process relocs by ascending address, i.e., ascending offset within isec
    std::vector<Reloc> &relocs = isec->relocs;
    // FIXME: This property does not hold for object files produced by ld64's
    // `-r` mode.
    assert(is_sorted(relocs,
                     [](Reloc &a, Reloc &b) { return a.offset > b.offset; }));
    for (Reloc &r : reverse(relocs)) {
      ++relocCount;
      if (!target->hasAttr(r.type, RelocAttrBits::BRANCH))
        continue;
      ++callSiteCount;
      // Calculate branch reachability boundaries
      uint64_t callVA = isecVA + r.offset;
      uint64_t lowVA =
          backwardBranchRange < callVA ? callVA - backwardBranchRange : 0;
      uint64_t highVA = callVA + forwardBranchRange;
      // Calculate our call referent address
      auto *funcSym = r.referent.get<Symbol *>();
      ThunkInfo &thunkInfo = thunkMap[funcSym];
      // The referent is not reachable, so we need to use a thunk ...
      if (funcSym->isInStubs() && callVA >= stubsInRangeVA) {
        assert(callVA != TargetInfo::outOfRangeVA);
        // ... Oh, wait! We are close enough to the end that __stubs
        // are now within range of a simple forward branch.
        continue;
      }
      uint64_t funcVA = funcSym->resolveBranchVA();
      ++thunkInfo.callSitesUsed;
      if (lowVA <= funcVA && funcVA <= highVA) {
        // The referent is reachable with a simple call instruction.
        continue;
      }
      ++thunkInfo.thunkCallCount;
      ++thunkCallCount;
      // If an existing thunk is reachable, use it ...
      if (thunkInfo.sym) {
        uint64_t thunkVA = thunkInfo.isec->getVA();
        if (lowVA <= thunkVA && thunkVA <= highVA) {
          r.referent = thunkInfo.sym;
          continue;
        }
      }
      // ... otherwise, create a new thunk.
      if (isecAddr > highVA) {
        // There were too many consecutive branch instructions for `slop`
        // above. If you hit this: For the current algorithm, just bumping up
        // slop above and trying again is probably simplest. (See also PR51578
        // comment 5).
        fatal(Twine(__FUNCTION__) + ": FIXME: thunk range overrun");
      }
      thunkInfo.isec =
          make<ConcatInputSection>(isec->getSegName(), isec->getName());
      thunkInfo.isec->parent = this;

      // This code runs after dead code removal, so we must explicitly set
      // the thunk isec's `live` bit to satisfy the asserts that verify that
      // only live sections get written.
      thunkInfo.isec->live = true;
StringRef thunkName = saver().save(funcSym->getName() + ".thunk." +
                                         std::to_string(thunkInfo.sequence++));
      r.referent = thunkInfo.sym = symtab->addDefined(
          thunkName, /*file=*/nullptr, thunkInfo.isec, /*value=*/0,
          /*size=*/thunkSize, /*isWeakDef=*/false, /*isPrivateExtern=*/true,
          /*isThumb=*/false, /*isReferencedDynamically=*/false,
          /*noDeadStrip=*/false, /*isWeakDefCanBeHidden=*/false);
      thunkInfo.sym->used = true;
      target->populateThunk(thunkInfo.isec, funcSym);
      finalizeOne(thunkInfo.isec);
      thunks.push_back(thunkInfo.isec);
      ++thunkCount;
    }
  }
  size = isecAddr - addr;
  fileSize = isecFileOff - fileOff;

log("thunks for " + parent->name + "," + name +
      ": funcs = " + std::to_string(thunkMap.size()) +
      ", relocs = " + std::to_string(relocCount) +
      ", all calls = " + std::to_string(callSiteCount) +
      ", thunk calls = " + std::to_string(thunkCallCount) +
      ", thunks = " + std::to_string(thunkCount));
}
void ConcatOutputSection::writeTo(uint8_t *buf) const {
  // Merge input sections from thunk & ordinary vectors
  size_t i = 0, ie = inputs.size();
  size_t t = 0, te = thunks.size();
  while (i < ie || t < te) {
    while (i < ie && (t == te || inputs[i]->empty() ||
                      inputs[i]->outSecOff < thunks[t]->outSecOff)) {
      inputs[i]->writeTo(buf + inputs[i]->outSecOff);
      ++i;
    }
    while (t < te && (i == ie || thunks[t]->outSecOff < inputs[i]->outSecOff)) {
      thunks[t]->writeTo(buf + thunks[t]->outSecOff);
      ++t;
    }
  }
}
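
// For instance (illustrative offsets): if inputs sit at outSecOffs 0,
// 0x100, and 0x200, and a thunk was placed at outSecOff 0x180, the two
// inner loops above emit them in ascending address order:
// 0, 0x100, 0x180 (thunk), 0x200.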
void ConcatOutputSection::finalizeFlags(InputSection *input) {
  switch (sectionType(input->getFlags())) {
  default /*type-unspec'ed*/:
    // FIXME: Add additional logic here when supporting emitting obj files.
    break;
  case S_4BYTE_LITERALS:
  case S_8BYTE_LITERALS:
  case S_16BYTE_LITERALS:
  case S_CSTRING_LITERALS:
  case S_ZEROFILL:
  case S_LAZY_SYMBOL_POINTERS:
  case S_MOD_TERM_FUNC_POINTERS:
  case S_THREAD_LOCAL_REGULAR:
  case S_THREAD_LOCAL_ZEROFILL:
  case S_THREAD_LOCAL_VARIABLES:
  case S_THREAD_LOCAL_INIT_FUNCTION_POINTERS:
  case S_THREAD_LOCAL_VARIABLE_POINTERS:
  case S_NON_LAZY_SYMBOL_POINTERS:
  case S_SYMBOL_STUBS:
    flags |= input->getFlags();
    break;
  }
}
ConcatOutputSection *
ConcatOutputSection::getOrCreateForInput(const InputSection *isec) {
  NamePair names = maybeRenameSection({isec->getSegName(), isec->getName()});
  ConcatOutputSection *&osec = concatOutputSections[names];
  if (!osec)
    osec = make<ConcatOutputSection>(names.second);
  return osec;
}
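
// For example: ld64-style `-rename_section __TEXT __foo __TEXT __bar`
// populates config->sectionRenameMap so that the lookup below maps
// {"__TEXT", "__foo"} to {"__TEXT", "__bar"}, sending inputs from
// __TEXT,__foo into the __TEXT,__bar output section.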
NamePair macho::maybeRenameSection(NamePair key) {
  auto newNames = config->sectionRenameMap.find(key);
  if (newNames != config->sectionRenameMap.end())
    return newNames->second;
  return key;
}