llvm-project/lld/MachO/ConcatOutputSection.cpp

//===- ConcatOutputSection.cpp --------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "ConcatOutputSection.h"
#include "Config.h"
#include "OutputSegment.h"
#include "SymbolTable.h"
#include "Symbols.h"
#include "SyntheticSections.h"
#include "Target.h"
#include "lld/Common/ErrorHandler.h"
#include "lld/Common/Memory.h"
#include "llvm/BinaryFormat/MachO.h"
#include "llvm/Support/ScopedPrinter.h"
#include "llvm/Support/TimeProfiler.h"
using namespace llvm;
using namespace llvm::MachO;
using namespace lld;
using namespace lld::macho;
void ConcatOutputSection::addInput(ConcatInputSection *input) {
if (inputs.empty()) {
align = input->align;
flags = input->flags;
} else {
align = std::max(align, input->align);
mergeFlags(input);
}
inputs.push_back(input);
input->parent = this;
}
// Branch-range extension can be implemented in two ways, either through ...
//
// (1) Branch islands: Single branch instructions (also of limited range),
// that might be chained in multiple hops to reach the desired
// destination. On ARM64, as many as 16 branch islands may be needed to hop
// between opposite ends of a 2 GiB program. LD64 uses branch islands
// exclusively, even when it needs excessive hops.
//
// (2) Thunks: Instruction(s) to load the destination address into a scratch
// register, followed by a register-indirect branch. Thunks are
// constructed to reach any arbitrary address, so they need not be
// chained. However, a large program might still need multiple thunks to
// the same destination, distributed throughout the program so that every
// call site has one within range.
//
// The optimal approach is to mix islands for destinations within two hops,
// and use thunks for destinations at greater distance. For now, we only
// implement thunks. TODO: Add support for branch islands.
//
// Internally -- as expressed in LLD's data structures -- a
// branch-range-extension thunk comprises ...
//
// (1) new Defined privateExtern symbol for the thunk named
// <FUNCTION>.thunk.<SEQUENCE>, which references ...
// (2) new InputSection, which contains ...
// (3.1) new data for the instructions to load & branch to the far address +
// (3.2) new Relocs on instructions to load the far address, which reference ...
// (4.1) existing Defined extern symbol for the real function in __text, or
// (4.2) existing DylibSymbol for the real function in a dylib
//
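// As a concrete sketch (assuming arm64; the exact instruction bytes come from
// TargetInfo::populateThunk()), the thunk for a call to a function _foo would
// be a private-extern Defined symbol _foo.thunk.0 covering three instructions:
//
//   adrp x16, _foo@page
//   add  x16, x16, _foo@pageoff
//   br   x16
//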
// Nearly-optimal thunk-placement algorithm features:
//
// * Single pass: O(n) on the number of call sites.
//
// * Accounts for the exact space overhead of thunks - no heuristics
//
// * Exploits the full range of call instructions - forward & backward
//
// Data:
//
// * DenseMap<Symbol *, ThunkInfo> thunkMap: Maps the function symbol
// to its thunk bookkeeper.
//
// * struct ThunkInfo (bookkeeper): Call instructions have limited range, and
// distant call sites might be unable to reach the same thunk, so multiple
// thunks are necessary to serve all call sites in a very large program. A
// thunkInfo stores state for all thunks associated with a particular
// function: (a) thunk symbol, (b) input section containing stub code, and
// (c) sequence number for the active thunk incarnation. When an old thunk
// goes out of range, we increment the sequence number and create a new
// thunk named <FUNCTION>.thunk.<SEQUENCE>.
//
// * A thunk incarnation comprises (a) private-extern Defined symbol pointing
// to (b) an InputSection holding machine instructions (similar to a MachO
// stub), and (c) Reloc(s) that reference the real function for fixing up
// the stub code.
//
// * std::vector<InputSection *> ConcatOutputSection::thunks: A vector parallel
// to the inputs vector. We store new thunks via cheap vector append, rather
// than costly insertion into the inputs vector.
//
// Control Flow:
//
// * During address assignment, ConcatOutputSection::finalize() examines call
// sites by ascending address and creates thunks. When a function is beyond
// the range of a call site, we need a thunk. Place it at the largest
// available forward address from the call site. Call sites increase
// monotonically and thunks are always placed as far forward as possible;
// thus, we place thunks at monotonically increasing addresses. Once a thunk
// is placed, it and all previous input-section addresses are final.
//
// * ConcatOutputSection::finalize() and ConcatOutputSection::writeTo() merge
// the inputs and thunks vectors (both ordered by ascending address), which
// is simple and cheap.
DenseMap<Symbol *, ThunkInfo> lld::macho::thunkMap;
// Determine whether we need thunks, which depends on the target arch -- RISC
// (e.g., ARM) generally does because it has limited-range branch/call
// instructions, whereas CISC (e.g., x86) generally doesn't. RISC only needs
// thunks for programs so large that branch source & destination addresses
// might differ by more than the range of a branch instruction.
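// (For example, an arm64 b/bl instruction encodes a signed 26-bit word offset,
// so it can only reach targets within about +/-128 MiB of the call site.)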
bool ConcatOutputSection::needsThunks() const {
if (!target->usesThunks())
return false;
uint64_t isecAddr = addr;
for (InputSection *isec : inputs)
isecAddr = alignTo(isecAddr, isec->align) + isec->getSize();
if (isecAddr - addr + in.stubs->getSize() <= target->branchRange)
return false;
// Yes, this program is large enough to need thunks.
for (InputSection *isec : inputs) {
for (Reloc &r : isec->relocs) {
if (!target->hasAttr(r.type, RelocAttrBits::BRANCH))
continue;
auto *sym = r.referent.get<Symbol *>();
// Pre-populate the thunkMap and memoize call site counts for every
// InputSection and ThunkInfo. We do this for the benefit of
// ConcatOutputSection::estimateStubsInRangeVA()
ThunkInfo &thunkInfo = thunkMap[sym];
// Knowing ThunkInfo call site count will help us know whether or not we
// might need to create more for this referent at the time we are
// estimating distance to __stubs in estimateStubsInRangeVA().
++thunkInfo.callSiteCount;
// Knowing InputSection call site count will help us avoid work on those
// that have no BRANCH relocs.
++isec->callSiteCount;
}
}
return true;
}
// Since __stubs is placed after __text, we must estimate the address
// beyond which stubs are within range of a simple forward branch.
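// A sketch of the estimate, using the quantities computed below: the
// worst-case end of __stubs is
//   isecEnd + maxPotentialThunks * thunkSize + in.stubs->getSize()
// so any call site whose address is at least that value minus branchRange can
// reach every stub with a plain forward branch and needs no thunk for it.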
uint64_t ConcatOutputSection::estimateStubsInRangeVA(size_t callIdx) const {
uint64_t branchRange = target->branchRange;
size_t endIdx = inputs.size();
ConcatInputSection *isec = inputs[callIdx];
uint64_t isecVA = isec->getVA();
// Tally the non-stub functions which still have call sites
// remaining to process, which yields the maximum number
// of thunks we might yet place.
size_t maxPotentialThunks = 0;
for (auto &tp : thunkMap) {
ThunkInfo &ti = tp.second;
maxPotentialThunks +=
!tp.first->isInStubs() && ti.callSitesUsed < ti.callSiteCount;
}
// Tally the total size of input sections remaining to process.
uint64_t isecEnd = isec->getVA();
for (size_t i = callIdx; i < endIdx; i++) {
InputSection *isec = inputs[i];
isecEnd = alignTo(isecEnd, isec->align) + isec->getSize();
}
// Estimate the address after which call sites can safely call stubs
// directly rather than through intermediary thunks.
uint64_t stubsInRangeVA = isecEnd + maxPotentialThunks * target->thunkSize +
in.stubs->getSize() - branchRange;
log("thunks = " + std::to_string(thunkMap.size()) +
", potential = " + std::to_string(maxPotentialThunks) +
", stubs = " + std::to_string(in.stubs->getSize()) + ", isecVA = " +
to_hexString(isecVA) + ", threshold = " + to_hexString(stubsInRangeVA) +
", isecEnd = " + to_hexString(isecEnd) +
", tail = " + to_hexString(isecEnd - isecVA) +
", slop = " + to_hexString(branchRange - (isecEnd - isecVA)));
return stubsInRangeVA;
}
void ConcatOutputSection::finalize() {
uint64_t isecAddr = addr;
uint64_t isecFileOff = fileOff;
auto finalizeOne = [&](ConcatInputSection *isec) {
isecAddr = alignTo(isecAddr, isec->align);
isecFileOff = alignTo(isecFileOff, isec->align);
isec->outSecOff = isecAddr - addr;
isec->isFinal = true;
isecAddr += isec->getSize();
isecFileOff += isec->getFileSize();
};
if (!needsThunks()) {
for (ConcatInputSection *isec : inputs)
finalizeOne(isec);
size = isecAddr - addr;
fileSize = isecFileOff - fileOff;
return;
}
uint64_t branchRange = target->branchRange;
uint64_t stubsInRangeVA = TargetInfo::outOfRangeVA;
size_t thunkSize = target->thunkSize;
size_t relocCount = 0;
size_t callSiteCount = 0;
size_t thunkCallCount = 0;
size_t thunkCount = 0;
// inputs[finalIdx] is for finalization (address-assignment)
size_t finalIdx = 0;
// Kick off by ensuring that the first input section has an address
for (size_t callIdx = 0, endIdx = inputs.size(); callIdx < endIdx;
++callIdx) {
if (finalIdx == callIdx)
finalizeOne(inputs[finalIdx++]);
ConcatInputSection *isec = inputs[callIdx];
assert(isec->isFinal);
uint64_t isecVA = isec->getVA();
// Assign addresses up to the forward branch-range limit
while (finalIdx < endIdx &&
isecAddr + inputs[finalIdx]->getSize() < isecVA + branchRange)
finalizeOne(inputs[finalIdx++]);
if (isec->callSiteCount == 0)
continue;
if (finalIdx == endIdx && stubsInRangeVA == TargetInfo::outOfRangeVA) {
// When we have finalized all input sections, __stubs (destined
// to follow __text) comes within range of forward branches and
// we can estimate the threshold address after which we can
// reach any stub with a forward branch. Note that although it
// sits in the middle of a loop, this code executes only once.
// It is in the loop because we need to call it at the proper
// time: the earliest call site from which the end of __text
// (and start of __stubs) comes within range of a forward branch.
stubsInRangeVA = estimateStubsInRangeVA(callIdx);
}
// Process relocs by ascending address, i.e., ascending offset within isec
std::vector<Reloc> &relocs = isec->relocs;
assert(is_sorted(relocs,
[](Reloc &a, Reloc &b) { return a.offset > b.offset; }));
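// The relocs are sorted by descending offset, so iterating in reverse visits
// call sites in ascending address order, as required above.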
for (Reloc &r : reverse(relocs)) {
++relocCount;
if (!target->hasAttr(r.type, RelocAttrBits::BRANCH))
continue;
++callSiteCount;
// Calculate branch reachability boundaries
uint64_t callVA = isecVA + r.offset;
uint64_t lowVA = branchRange < callVA ? callVA - branchRange : 0;
uint64_t highVA = callVA + branchRange;
// Calculate our call referent address
auto *funcSym = r.referent.get<Symbol *>();
ThunkInfo &thunkInfo = thunkMap[funcSym];
// The referent is not reachable, so we need to use a thunk ...
if (funcSym->isInStubs() && callVA >= stubsInRangeVA) {
// ... Oh, wait! We are close enough to the end that __stubs
// are now within range of a simple forward branch.
continue;
}
uint64_t funcVA = funcSym->resolveBranchVA();
++thunkInfo.callSitesUsed;
if (lowVA < funcVA && funcVA < highVA) {
// The referent is reachable with a simple call instruction.
continue;
}
++thunkInfo.thunkCallCount;
++thunkCallCount;
// If an existing thunk is reachable, use it ...
if (thunkInfo.sym) {
uint64_t thunkVA = thunkInfo.isec->getVA();
if (lowVA < thunkVA && thunkVA < highVA) {
r.referent = thunkInfo.sym;
continue;
}
}
// ... otherwise, create a new thunk
if (isecAddr > highVA) {
// When there is small-to-no margin between highVA and
// isecAddr and the distance between subsequent call sites is
// smaller than thunkSize, then a new thunk can go out of
// range. Fix by unfinalizing inputs[finalIdx] to reduce the
// distance between callVA and highVA, then shift some thunks
// to occupy address-space formerly occupied by the
// unfinalized inputs[finalIdx].
fatal(Twine(__FUNCTION__) + ": FIXME: thunk range overrun");
}
thunkInfo.isec = make<ConcatInputSection>(isec->segname, isec->name);
thunkInfo.isec->parent = this;
StringRef thunkName = saver.save(funcSym->getName() + ".thunk." +
std::to_string(thunkInfo.sequence++));
r.referent = thunkInfo.sym = symtab->addDefined(
thunkName, /*file=*/nullptr, thunkInfo.isec, /*value=*/0,
/*size=*/thunkSize, /*isWeakDef=*/false, /*isPrivateExtern=*/true,
/*isThumb=*/false, /*isReferencedDynamically=*/false,
/*noDeadStrip=*/false);
target->populateThunk(thunkInfo.isec, funcSym);
finalizeOne(thunkInfo.isec);
thunks.push_back(thunkInfo.isec);
++thunkCount;
}
}
size = isecAddr - addr;
fileSize = isecFileOff - fileOff;
log("thunks for " + parent->name + "," + name +
": funcs = " + std::to_string(thunkMap.size()) +
", relocs = " + std::to_string(relocCount) +
", all calls = " + std::to_string(callSiteCount) +
", thunk calls = " + std::to_string(thunkCallCount) +
", thunks = " + std::to_string(thunkCount));
}
void ConcatOutputSection::writeTo(uint8_t *buf) const {
// Merge input sections from thunk & ordinary vectors
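// Both vectors are already sorted by outSecOff, so a simple two-pointer merge
// emits every section exactly once, in ascending address order.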
size_t i = 0, ie = inputs.size();
size_t t = 0, te = thunks.size();
while (i < ie || t < te) {
while (i < ie && (t == te || inputs[i]->getSize() == 0 ||
inputs[i]->outSecOff < thunks[t]->outSecOff)) {
inputs[i]->writeTo(buf + inputs[i]->outSecOff);
++i;
}
while (t < te && (i == ie || thunks[t]->outSecOff < inputs[i]->outSecOff)) {
thunks[t]->writeTo(buf + thunks[t]->outSecOff);
++t;
}
}
}
// TODO: this is most likely wrong; reconsider how section flags
// are actually merged. The logic presented here was written without
// any form of informed research.
void ConcatOutputSection::mergeFlags(InputSection *input) {
uint8_t baseType = sectionType(flags);
uint8_t inputType = sectionType(input->flags);
if (baseType != inputType)
error("Cannot merge section " + input->name + " (type=0x" +
to_hexString(inputType) + ") into " + name + " (type=0x" +
to_hexString(baseType) + "): inconsistent types");
constexpr uint32_t strictFlags = S_ATTR_DEBUG | S_ATTR_STRIP_STATIC_SYMS |
S_ATTR_NO_DEAD_STRIP | S_ATTR_LIVE_SUPPORT;
if ((input->flags ^ flags) & strictFlags)
error("Cannot merge section " + input->name + " (flags=0x" +
to_hexString(input->flags) + ") into " + name + " (flags=0x" +
to_hexString(flags) + "): strict flags differ");
// Negate pure instruction presence if any section isn't pure.
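// (pureMask has every bit set except S_ATTR_PURE_INSTRUCTIONS, which stays set
// only when both flag words carry it; ANDing `flags` with pureMask below thus
// clears the pure-instructions bit unless both sections are pure, and leaves
// every other bit untouched.)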
uint32_t pureMask = ~S_ATTR_PURE_INSTRUCTIONS | (input->flags & flags);
// Merge the rest
flags |= input->flags;
flags &= pureMask;
}
void ConcatOutputSection::eraseOmittedInputSections() {
// Remove the input sections that should be omitted from the output (e.g.,
// dead-stripped or coalesced-weak sections).
inputs.erase(std::remove_if(inputs.begin(), inputs.end(),
[](const ConcatInputSection *isec) -> bool {
return isec->shouldOmitFromOutput();
}),
inputs.end());
}