llvm-project/lld/MachO/SyntheticSections.h


//===- SyntheticSections.h -------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#ifndef LLD_MACHO_SYNTHETIC_SECTIONS_H
#define LLD_MACHO_SYNTHETIC_SECTIONS_H

#include "Config.h"
#include "ExportTrie.h"
#include "InputSection.h"
#include "OutputSection.h"
#include "OutputSegment.h"
#include "Target.h"
#include "Writer.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/Hashing.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/MC/StringTableBuilder.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include <unordered_map>

namespace llvm {
class DWARFUnit;
} // namespace llvm

namespace lld {
namespace macho {

class Defined;
class DylibSymbol;
class LoadCommand;
class ObjFile;
class UnwindInfoSection;

class SyntheticSection : public OutputSection {
public:
  SyntheticSection(const char *segname, const char *name);
  virtual ~SyntheticSection() = default;

  static bool classof(const OutputSection *sec) {
    return sec->kind() == SyntheticKind;
  }

  StringRef segname;
  // This fake InputSection makes it easier for us to write code that applies
  // generically to both user inputs and synthetics.
  InputSection *isec;
};

// All sections in __LINKEDIT should inherit from this.
class LinkEditSection : public SyntheticSection {
public:
  LinkEditSection(const char *segname, const char *name)
      : SyntheticSection(segname, name) {
    align = target->wordSize;
  }

  // Implementations of this method can assume that the regular
  // (non-__LINKEDIT) sections already have their addresses assigned.
  virtual void finalizeContents() {}

  // Sections in __LINKEDIT are special: their offsets are recorded in the
  // load commands like LC_DYLD_INFO_ONLY and LC_SYMTAB, instead of in section
  // headers.
  bool isHidden() const override final { return true; }

  virtual uint64_t getRawSize() const = 0;

  // codesign (or more specifically libstuff) checks that each section in
  // __LINKEDIT ends where the next one starts -- no gaps are permitted. We
  // therefore align every section's start and end points to WordSize.
  //
  // NOTE: This assumes that the extra bytes required for alignment can be
  // zero-valued bytes.
  uint64_t getSize() const override final {
    return llvm::alignTo(getRawSize(), align);
  }
};

// The header of the Mach-O file, which must have a file offset of zero.
class MachHeaderSection final : public SyntheticSection {
public:
  MachHeaderSection();
  bool isHidden() const override { return true; }
  uint64_t getSize() const override;
  void writeTo(uint8_t *buf) const override;
  void addLoadCommand(LoadCommand *);

protected:
  std::vector<LoadCommand *> loadCommands;
  uint32_t sizeOfCmds = 0;
};

// A hidden section that exists solely for the purpose of creating the
// __PAGEZERO segment, which is used to catch null pointer dereferences.
class PageZeroSection final : public SyntheticSection {
public:
  PageZeroSection();
  bool isHidden() const override { return true; }
  uint64_t getSize() const override { return target->pageZeroSize; }
  uint64_t getFileSize() const override { return 0; }
  void writeTo(uint8_t *buf) const override {}
};

// This is the base class for the GOT and TLVPointer sections, which are nearly
// functionally identical -- they will both be populated by dyld with addresses
// to non-lazily-loaded dylib symbols. The main difference is that the
// TLVPointerSection stores references to thread-local variables.
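// For example, a reference to a global `extern int x` defined in a dylib is
// typically compiled as an indirect load through a GOT slot; dyld binds that
// slot to the address of x when the image is loaded, not at first use.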
class NonLazyPointerSectionBase : public SyntheticSection {
public:
  NonLazyPointerSectionBase(const char *segname, const char *name);

  const llvm::SetVector<const Symbol *> &getEntries() const { return entries; }

  bool isNeeded() const override { return !entries.empty(); }

  uint64_t getSize() const override {
    return entries.size() * target->wordSize;
  }

  void writeTo(uint8_t *buf) const override;

  void addEntry(Symbol *sym);

  uint64_t getVA(uint32_t gotIndex) const {
    return addr + gotIndex * target->wordSize;
  }

private:
  llvm::SetVector<const Symbol *> entries;
};

class GotSection final : public NonLazyPointerSectionBase {
public:
  GotSection();
};

class TlvPointerSection final : public NonLazyPointerSectionBase {
public:
  TlvPointerSection();
};

struct Location {
  const InputSection *isec;
  uint64_t offset;

  Location(const InputSection *isec, uint64_t offset)
      : isec(isec), offset(offset) {}

  uint64_t getVA() const { return isec->getVA(offset); }
};

// Stores rebase opcodes, which tell dyld where absolute addresses have been
// encoded in the binary. If the binary is not loaded at its preferred address,
// dyld has to rebase these addresses by adding an offset to them.
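// For example, if an image with preferred base 0x100000000 is loaded at
// 0x100008000, dyld adds the 0x8000 slide to every location recorded here.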
class RebaseSection final : public LinkEditSection {
public:
  RebaseSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return contents.size(); }
  bool isNeeded() const override { return !locations.empty(); }
  void writeTo(uint8_t *buf) const override;

  void addEntry(const InputSection *isec, uint64_t offset) {
    if (config->isPic)
      locations.push_back({isec, offset});
  }

private:
  std::vector<Location> locations;
  SmallVector<char, 128> contents;
};

struct BindingEntry {
  int64_t addend;
  Location target;
  BindingEntry(int64_t addend, Location target)
      : addend(addend), target(std::move(target)) {}
};

template <class Sym>
using BindingsMap = llvm::DenseMap<Sym, std::vector<BindingEntry>>;

// Stores bind opcodes for telling dyld which symbols to load non-lazily.
class BindingSection final : public LinkEditSection {
public:
  BindingSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return contents.size(); }
  bool isNeeded() const override { return !bindingsMap.empty(); }
  void writeTo(uint8_t *buf) const override;

  void addEntry(const DylibSymbol *dysym, const InputSection *isec,
                uint64_t offset, int64_t addend = 0) {
    bindingsMap[dysym].emplace_back(addend, Location(isec, offset));
  }

private:
  BindingsMap<const DylibSymbol *> bindingsMap;
  SmallVector<char, 128> contents;
};

// Stores bind opcodes for telling dyld which weak symbols need coalescing.
// There are two types of entries in this section:
//
// 1) Non-weak definitions: This is a symbol definition that weak symbols in
//    other dylibs should coalesce to.
//
// 2) Weak bindings: These tell dyld that a given symbol reference should
//    coalesce to a non-weak definition if one is found. Note that unlike the
//    entries in the BindingSection, the bindings here only refer to these
//    symbols by name, but do not specify which dylib to load them from.
class WeakBindingSection final : public LinkEditSection {
public:
  WeakBindingSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return contents.size(); }
  bool isNeeded() const override {
    return !bindingsMap.empty() || !definitions.empty();
  }
  void writeTo(uint8_t *buf) const override;

  void addEntry(const Symbol *symbol, const InputSection *isec,
                uint64_t offset, int64_t addend = 0) {
    bindingsMap[symbol].emplace_back(addend, Location(isec, offset));
  }

  bool hasEntry() const { return !bindingsMap.empty(); }

  void addNonWeakDefinition(const Defined *defined) {
    definitions.emplace_back(defined);
  }

  bool hasNonWeakDefinition() const { return !definitions.empty(); }

private:
  BindingsMap<const Symbol *> bindingsMap;
  std::vector<const Defined *> definitions;
  SmallVector<char, 128> contents;
};

// The following sections implement lazy symbol binding -- very similar to the
// PLT mechanism in ELF.
//
// ELF's .plt section is broken up into two sections in Mach-O: StubsSection
// and StubHelperSection. Calls to functions in dylibs will end up calling into
// StubsSection, which contains indirect jumps to addresses stored in the
// LazyPointerSection (the counterpart to ELF's .plt.got).
//
// We will first describe how non-weak symbols are handled.
//
// At program start, the LazyPointerSection contains addresses that point into
// one of the entry points in the middle of the StubHelperSection. The code in
// StubHelperSection will push on the stack an offset into the
// LazyBindingSection. The push is followed by a jump to the beginning of the
// StubHelperSection (similar to PLT0), which then calls into dyld_stub_binder.
// dyld_stub_binder is a non-lazily-bound symbol, so this call looks it up in
// the GOT.
//
// The stub binder will look up the bind opcodes in the LazyBindingSection at
// the given offset. The bind opcodes will tell the binder to update the
// address in the LazyPointerSection to point to the symbol, so that subsequent
// calls don't have to redo the symbol resolution. The binder will then jump to
// the resolved symbol.
//
// With weak symbols, the situation is slightly different. Since there is no
// "weak lazy" lookup, function calls to weak symbols are always non-lazily
// bound. We emit both regular non-lazy bindings as well as weak bindings, in
// order that the weak bindings may overwrite the non-lazy bindings if an
// appropriate symbol is found at runtime. However, the bound addresses will
// still be written (non-lazily) into the LazyPointerSection.
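//
// Roughly, the first call to a lazily-bound function _foo then proceeds as
// follows (an illustrative x86_64 sketch; the $-suffixed names are made up
// for exposition, not real emitted symbols):
//
//   callq _foo$stub                  ; user code, lands in __stubs
//   jmpq *_foo$lazy_ptr(%rip)        ; __stubs: jump via LazyPointerSection
//   pushq $_foo$lazy_bind_offset     ; __stub_helper: push the offset of
//   jmp <stub helper header>         ;   _foo's opcodes in LazyBindingSection,
//                                    ;   then jump to the PLT0-like header,
//                                    ;   which calls dyld_stub_binder
//
// dyld_stub_binder resolves _foo, rewrites _foo$lazy_ptr to its address, and
// jumps there; subsequent calls go straight through the updated pointer.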
class StubsSection final : public SyntheticSection {
public:
  StubsSection();
  uint64_t getSize() const override;
  bool isNeeded() const override { return !entries.empty(); }
  void finalize() override;
  void writeTo(uint8_t *buf) const override;
  const llvm::SetVector<Symbol *> &getEntries() const { return entries; }

  // Returns whether the symbol was added. Note that every stubs entry will
  // have a corresponding entry in the LazyPointerSection.
  bool addEntry(Symbol *);

  uint64_t getVA(uint32_t stubsIndex) const {
    assert(isFinal || target->usesThunks());
    // ConcatOutputSection::finalize() can seek the address of a stub before
    // its address is assigned. Before __stubs is finalized, return a
    // contrived out-of-range address.
    return isFinal ? addr + stubsIndex * target->stubSize
                   : TargetInfo::outOfRangeVA;
  }

  bool isFinal = false; // is address assigned?

private:
  llvm::SetVector<Symbol *> entries;
};

class StubHelperSection final : public SyntheticSection {
public:
  StubHelperSection();
  uint64_t getSize() const override;
  bool isNeeded() const override;
  void writeTo(uint8_t *buf) const override;

  void setup();

  DylibSymbol *stubBinder = nullptr;
  Defined *dyldPrivate = nullptr;
};

// Note that this section may also be targeted by non-lazy bindings. In
// particular, this happens when branch relocations target weak symbols.
class LazyPointerSection final : public SyntheticSection {
public:
  LazyPointerSection();
  uint64_t getSize() const override;
  bool isNeeded() const override;
  void writeTo(uint8_t *buf) const override;
};

class LazyBindingSection final : public LinkEditSection {
public:
  LazyBindingSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return contents.size(); }
  bool isNeeded() const override { return !entries.empty(); }
  void writeTo(uint8_t *buf) const override;

  // Note that every entry here will be referenced by a corresponding entry in
  // the StubHelperSection.
  void addEntry(DylibSymbol *dysym);
  const llvm::SetVector<DylibSymbol *> &getEntries() const { return entries; }

private:
  uint32_t encode(const DylibSymbol &);

  llvm::SetVector<DylibSymbol *> entries;
  SmallVector<char, 128> contents;
  llvm::raw_svector_ostream os{contents};
};

// Stores a trie that describes the set of exported symbols.
class ExportSection final : public LinkEditSection {
public:
  ExportSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return size; }
  bool isNeeded() const override { return size; }
  void writeTo(uint8_t *buf) const override;

  bool hasWeakSymbol = false;

private:
  TrieBuilder trieBuilder;
  size_t size = 0;
};

// Stores 'data in code' entries that describe the locations of data regions
// inside code sections.
class DataInCodeSection final : public LinkEditSection {
public:
  DataInCodeSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override {
    return sizeof(llvm::MachO::data_in_code_entry) * entries.size();
  }
  void writeTo(uint8_t *buf) const override;

private:
  std::vector<llvm::MachO::data_in_code_entry> entries;
};

// Stores ULEB128 delta-encoded addresses of functions.
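// For example, functions starting at 0x1000, 0x1010, and 0x1030 would be
// encoded as the ULEB128 deltas 0x1000, 0x10, 0x20 -- the first delta being
// taken relative to the start of the __TEXT segment.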
class FunctionStartsSection final : public LinkEditSection {
public:
  FunctionStartsSection();
  void finalizeContents() override;
  uint64_t getRawSize() const override { return contents.size(); }
  void writeTo(uint8_t *buf) const override;

private:
  SmallVector<char, 128> contents;
};

// Stores the strings referenced by the symbol table.
class StringTableSection final : public LinkEditSection {
public:
  StringTableSection();
  // Returns the start offset of the added string.
  uint32_t addString(StringRef);
  uint64_t getRawSize() const override { return size; }
  void writeTo(uint8_t *buf) const override;
  static constexpr size_t emptyStringIndex = 1;
private:
  // ld64 emits string tables which start with a space and a zero byte. We
  // match its behavior here since some tools depend on it. Consequently, the
  // empty string will be at index 1, not zero.
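  // For example, after addString("foo") the table's bytes are
  // { ' ', '\0', 'f', 'o', 'o', '\0' }: "foo" starts at offset 2, and the
  // empty string can be referenced at offset 1.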
  std::vector<StringRef> strings{" "};
  size_t size = 2;
};

struct SymtabEntry {
  Symbol *sym;
  size_t strx;
};

struct StabsEntry {
  uint8_t type = 0;
  uint32_t strx = StringTableSection::emptyStringIndex;
  uint8_t sect = 0;
  uint16_t desc = 0;
  uint64_t value = 0;

  StabsEntry() = default;
  explicit StabsEntry(uint8_t type) : type(type) {}
};

// Symbols of the same type must be laid out contiguously: we choose to emit
// all local symbols first, then external symbols, and finally undefined
// symbols. For each symbol type, the LC_DYSYMTAB load command will record the
// range (start index and total number) of those symbols in the symbol table.
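// For example, with 3 locals, 2 externals, and 1 undefined, LC_DYSYMTAB would
// record (ilocalsym = 0, nlocalsym = 3), (iextdefsym = 3, nextdefsym = 2),
// and (iundefsym = 5, nundefsym = 1).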
class SymtabSection : public LinkEditSection {
public:
  void finalizeContents() override;
  uint32_t getNumSymbols() const;
  uint32_t getNumLocalSymbols() const {
    return stabs.size() + localSymbols.size();
  }
  uint32_t getNumExternalSymbols() const { return externalSymbols.size(); }
  uint32_t getNumUndefinedSymbols() const { return undefinedSymbols.size(); }
private:
void emitBeginSourceStab(llvm::DWARFUnit *compileUnit);
void emitEndSourceStab();
void emitObjectFileStab(ObjFile *);
void emitEndFunStab(Defined *);
void emitStabs();
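
  // A sketch (not a spec) of the order in which these helpers run: an N_SO
  // stab naming the source file, an N_OSO stab per object file, paired N_FUN
  // stabs marking each function's start and end, and a final empty N_SO stab
  // closing the compilation unit.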

protected:
  SymtabSection(StringTableSection &);

  StringTableSection &stringTableSection;
  // STABS symbols are always local symbols, but we represent them with special
  // entries because they may use fields like n_sect and n_desc differently.
  std::vector<StabsEntry> stabs;
  std::vector<SymtabEntry> localSymbols;
  std::vector<SymtabEntry> externalSymbols;
  std::vector<SymtabEntry> undefinedSymbols;
};
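
// Note: `LP` is expected to be one of the pointer-size layout tags (LP64 or
// ILP32, from Target.h); it determines the width of the nlist records the
// symbol table emits.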
template <class LP> SymtabSection *makeSymtabSection(StringTableSection &);

// The indirect symbol table is a list of 32-bit integers that serve as indices
// into the (actual) symbol table. The indirect symbol table is a
// concatenation of several sub-arrays of indices, each sub-array belonging to
// a separate section. The starting offset of each sub-array is stored in the
// reserved1 header field of the respective section.
//
// These sub-arrays provide symbol information for sections that store
// contiguous sequences of symbol references. These references can be pointers
// (e.g. those in the GOT and TLVP sections) or assembly sequences (e.g.
// function stubs).
class IndirectSymtabSection final : public LinkEditSection {
public:
  IndirectSymtabSection();
  void finalizeContents() override;
  uint32_t getNumSymbols() const;
  uint64_t getRawSize() const override {
    return getNumSymbols() * sizeof(uint32_t);
  }
  bool isNeeded() const override;
  void writeTo(uint8_t *buf) const override;
};
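
// Illustrative example (hypothetical counts): if the GOT has 3 entries and the
// stubs section has 2, the indirect symbol table is the 5-element array
//
//   [got0, got1, got2, stub0, stub1]
//
// and each section's header records where its sub-array begins:
//
//   __got.reserved1   == 0
//   __stubs.reserved1 == 3
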
// The code signature comes at the very end of the linked output file.
class CodeSignatureSection final : public LinkEditSection {
public:
  // NOTE: These values are duplicated in llvm-objcopy's MachO/Object.h file,
  // and any changes here should be repeated there.
  static constexpr uint8_t blockSizeShift = 12;
  static constexpr size_t blockSize = (1 << blockSizeShift); // 4 KiB
  static constexpr size_t hashSize = 256 / 8;
  static constexpr size_t blobHeadersSize = llvm::alignTo<8>(
      sizeof(llvm::MachO::CS_SuperBlob) + sizeof(llvm::MachO::CS_BlobIndex));
  static constexpr uint32_t fixedHeadersSize =
      blobHeadersSize + sizeof(llvm::MachO::CS_CodeDirectory);
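
  // Sketch of the overall layout, as implied by the fields above (not a
  // normative description): the blob headers come first, then the code
  // directory, then the identifier string (fileName plus fileNamePad bytes of
  // padding), and finally one hashSize-byte hash (256 bits, i.e. SHA-256) for
  // each blockSize-sized page of the output file.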

  uint32_t fileNamePad = 0;
  uint32_t allHeadersSize = 0;
  StringRef fileName;

  CodeSignatureSection();
  uint64_t getRawSize() const override;
  bool isNeeded() const override { return true; }
  void writeTo(uint8_t *buf) const override;
  uint32_t getBlockCount() const;
  void writeHashes(uint8_t *buf) const;
};

class BitcodeBundleSection final : public SyntheticSection {
public:
  BitcodeBundleSection();
  uint64_t getSize() const override { return xarSize; }
  void finalize() override;
  void writeTo(uint8_t *buf) const override;

private:
  llvm::SmallString<261> xarPath;
  uint64_t xarSize;
};

class CStringSection : public SyntheticSection {
public:
  CStringSection();
  void addInput(CStringInputSection *);
  uint64_t getSize() const override { return size; }
  virtual void finalizeContents();
  bool isNeeded() const override { return !inputs.empty(); }
  void writeTo(uint8_t *buf) const override;

  std::vector<CStringInputSection *> inputs;

private:
  uint64_t size;
};

class DeduplicatedCStringSection final : public CStringSection {
public:
  DeduplicatedCStringSection();
  uint64_t getSize() const override { return builder.getSize(); }
  void finalizeContents() override;
  void writeTo(uint8_t *buf) const override { builder.write(buf); }

private:
  llvm::StringTableBuilder builder;
};
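
// A minimal sketch (assumed names, not the actual implementation) of how
// deduplication via llvm::StringTableBuilder works; `pieces` stands in for
// the strings gathered from all input sections, and 16-byte alignment
// reflects the x86_64 SIMD requirement:
//
//   llvm::StringTableBuilder b(llvm::StringTableBuilder::RAW,
//                              /*Alignment=*/16);
//   for (llvm::StringRef s : pieces)
//     b.add(s);                      // identical strings are stored once
//   b.finalize();                    // lays out the table; offsets are fixed
//   uint64_t off = b.getOffset(str); // post-finalize offset of any added str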

/*
 * This section contains deduplicated literal values. The 16-byte values are
 * laid out first, followed by the 8- and then the 4-byte ones.
 */
class WordLiteralSection final : public SyntheticSection {
public:
  using UInt128 = std::pair<uint64_t, uint64_t>;
  // The standard doesn't guarantee the size of a pair, so make sure it's
  // exactly 16 bytes -- that way we can construct it via `mmap`.
  static_assert(sizeof(UInt128) == 16, "");

  WordLiteralSection();
  void addInput(WordLiteralInputSection *);
  void finalizeContents();
  void writeTo(uint8_t *buf) const override;

  uint64_t getSize() const override {
    return literal16Map.size() * 16 + literal8Map.size() * 8 +
           literal4Map.size() * 4;
  }

  bool isNeeded() const override {
    return !literal16Map.empty() || !literal4Map.empty() ||
           !literal8Map.empty();
  }
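
  // Worked example of the offset arithmetic below (hypothetical counts): with
  // two 16-byte and three 8-byte literals, the 8-byte literal at map index 1
  // lands at 2 * 16 + 1 * 8 == 40, and a 4-byte literal at map index 0 lands
  // at 2 * 16 + 3 * 8 == 56.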

  uint64_t getLiteral16Offset(uintptr_t buf) const {
    return literal16Map.at(*reinterpret_cast<const UInt128 *>(buf)) * 16;
  }

  uint64_t getLiteral8Offset(uintptr_t buf) const {
    return literal16Map.size() * 16 +
           literal8Map.at(*reinterpret_cast<const uint64_t *>(buf)) * 8;
  }

  uint64_t getLiteral4Offset(uintptr_t buf) const {
    return literal16Map.size() * 16 + literal8Map.size() * 8 +
           literal4Map.at(*reinterpret_cast<const uint32_t *>(buf)) * 4;
  }

private:
  std::vector<WordLiteralInputSection *> inputs;

  template <class T> struct Hasher {
    llvm::hash_code operator()(T v) const { return llvm::hash_value(v); }
  };
  // We're using unordered_map instead of DenseMap here because we need to
  // support all possible integer values -- there are no suitable tombstone
  // values for DenseMap.
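  // (Concretely: DenseMapInfo for integers reserves ~0 as the empty key and
  // ~0 - 1 as the tombstone, so literals with exactly those bit patterns
  // could not be used as keys.)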
  std::unordered_map<UInt128, uint64_t, Hasher<UInt128>> literal16Map;
  std::unordered_map<uint64_t, uint64_t> literal8Map;
  std::unordered_map<uint32_t, uint64_t> literal4Map;
};

struct InStruct {
  MachHeaderSection *header = nullptr;
  CStringSection *cStringSection = nullptr;
  WordLiteralSection *wordLiteralSection = nullptr;
  RebaseSection *rebase = nullptr;
  BindingSection *binding = nullptr;
  WeakBindingSection *weakBinding = nullptr;
  LazyBindingSection *lazyBinding = nullptr;
  ExportSection *exports = nullptr;
  GotSection *got = nullptr;
  TlvPointerSection *tlvPointers = nullptr;
  LazyPointerSection *lazyPointers = nullptr;
  StubsSection *stubs = nullptr;
  StubHelperSection *stubHelper = nullptr;
  UnwindInfoSection *unwindInfo = nullptr;
  ConcatInputSection *imageLoaderCache = nullptr;
};
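
// Hypothetical call site illustrating how other phases reach these singleton
// sections through `in`:
//
//   if (in.stubs && in.stubs->isNeeded())
//     in.stubs->finalize();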

extern InStruct in;
extern std::vector<SyntheticSection *> syntheticSections;

void createSyntheticSymbols();

} // namespace macho
} // namespace lld

#endif