//===- PDB.cpp ------------------------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "PDB.h"
#include "COFFLinkerContext.h"
#include "Chunks.h"
#include "Config.h"
#include "DebugTypes.h"
#include "Driver.h"
#include "SymbolTable.h"
#include "Symbols.h"
#include "TypeMerger.h"
#include "Writer.h"
#include "lld/Common/Timer.h"
#include "llvm/DebugInfo/CodeView/DebugFrameDataSubsection.h"
#include "llvm/DebugInfo/CodeView/DebugSubsectionRecord.h"
#include "llvm/DebugInfo/CodeView/GlobalTypeTableBuilder.h"
#include "llvm/DebugInfo/CodeView/LazyRandomTypeCollection.h"
#include "llvm/DebugInfo/CodeView/MergingTypeTableBuilder.h"
#include "llvm/DebugInfo/CodeView/RecordName.h"
#include "llvm/DebugInfo/CodeView/SymbolDeserializer.h"
#include "llvm/DebugInfo/CodeView/SymbolRecordHelpers.h"
#include "llvm/DebugInfo/CodeView/SymbolSerializer.h"
#include "llvm/DebugInfo/CodeView/TypeIndexDiscovery.h"
#include "llvm/DebugInfo/MSF/MSFBuilder.h"
#include "llvm/DebugInfo/MSF/MSFCommon.h"
#include "llvm/DebugInfo/PDB/GenericError.h"
#include "llvm/DebugInfo/PDB/Native/DbiModuleDescriptorBuilder.h"
#include "llvm/DebugInfo/PDB/Native/DbiStream.h"
#include "llvm/DebugInfo/PDB/Native/DbiStreamBuilder.h"
#include "llvm/DebugInfo/PDB/Native/GSIStreamBuilder.h"
#include "llvm/DebugInfo/PDB/Native/InfoStream.h"
#include "llvm/DebugInfo/PDB/Native/InfoStreamBuilder.h"
#include "llvm/DebugInfo/PDB/Native/NativeSession.h"
#include "llvm/DebugInfo/PDB/Native/PDBFile.h"
#include "llvm/DebugInfo/PDB/Native/PDBFileBuilder.h"
#include "llvm/DebugInfo/PDB/Native/PDBStringTableBuilder.h"
#include "llvm/DebugInfo/PDB/Native/TpiHashing.h"
#include "llvm/DebugInfo/PDB/Native/TpiStream.h"
#include "llvm/DebugInfo/PDB/Native/TpiStreamBuilder.h"
#include "llvm/DebugInfo/PDB/PDB.h"
#include "llvm/Object/COFF.h"
#include "llvm/Object/CVDebugRecord.h"
#include "llvm/Support/BinaryByteStream.h"
#include "llvm/Support/CRC.h"
#include "llvm/Support/Endian.h"
#include "llvm/Support/Errc.h"
#include "llvm/Support/FormatAdapters.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/ScopedPrinter.h"
#include <memory>

using namespace llvm;
using namespace llvm::codeview;
using namespace lld;
using namespace lld::coff;

using llvm::object::coff_section;
using llvm::pdb::StringTableFixup;

static ExitOnError exitOnErr;

namespace {
class DebugSHandler;

class PDBLinker {
  friend DebugSHandler;

public:
  PDBLinker(COFFLinkerContext &ctx)
      : builder(bAlloc()), tMerger(ctx, bAlloc()), ctx(ctx) {
    // This isn't strictly necessary, but link.exe usually puts an empty string
    // as the first "valid" string in the string table, so we do the same in
    // order to maintain as much byte-for-byte compatibility as possible.
    pdbStrTab.insert("");
  }

  /// Emit the basic PDB structure: initial streams, headers, etc.
  void initialize(llvm::codeview::DebugInfo *buildId);

  /// Add natvis files specified on the command line.
  void addNatvisFiles();

  /// Add named streams specified on the command line.
  void addNamedStreams();

  /// Link CodeView from each object file in the symbol table into the PDB.
  void addObjectsToPDB();

  /// Add every live, defined public symbol to the PDB.
  void addPublicsToPDB();

  /// Link info for each import file in the symbol table into the PDB.
  void addImportFilesToPDB();

  void createModuleDBI(ObjFile *file);

  /// Link CodeView from a single object file into the target (output) PDB.
  /// When a precompiled headers object is linked, its TPI map might be
  /// provided externally.
  void addDebug(TpiSource *source);

  void addDebugSymbols(TpiSource *source);

  // Analyze the symbol records to separate module symbols from global symbols,
  // find string references, and calculate how large the symbol stream will be
  // in the PDB.
  void analyzeSymbolSubsection(SectionChunk *debugChunk,
                               uint32_t &moduleSymOffset,
                               uint32_t &nextRelocIndex,
                               std::vector<StringTableFixup> &stringTableFixups,
                               BinaryStreamRef symData);

  // Write all module symbols from all live debug symbol subsections of the
  // given object file into the given stream writer.
  Error writeAllModuleSymbolRecords(ObjFile *file, BinaryStreamWriter &writer);

  // Callback to copy and relocate debug symbols during PDB file writing.
  static Error commitSymbolsForObject(void *ctx, void *obj,
                                      BinaryStreamWriter &writer);

  // Copy the symbol record, relocate it, and fix the alignment if necessary.
  // Rewrite type indices in the record. Replace unrecognized symbol records
  // with S_SKIP records.
  void writeSymbolRecord(SectionChunk *debugChunk,
                         ArrayRef<uint8_t> sectionContents, CVSymbol sym,
                         size_t alignedSize, uint32_t &nextRelocIndex,
                         std::vector<uint8_t> &storage);

  /// Add the section map and section contributions to the PDB.
  void addSections(ArrayRef<uint8_t> sectionTable);

  /// Write the PDB to disk and store the Guid generated for it in *Guid.
  void commit(codeview::GUID *guid);

  // Print statistics regarding the final PDB
  void printStats();

private:
  pdb::PDBFileBuilder builder;

  TypeMerger tMerger;

  COFFLinkerContext &ctx;

  /// PDBs use a single global string table for filenames in the file checksum
  /// table.
  DebugStringTableSubsection pdbStrTab;

  llvm::SmallString<128> nativePath;

  // For statistics
  uint64_t globalSymbols = 0;
  uint64_t moduleSymbols = 0;
  uint64_t publicSymbols = 0;
  uint64_t nbTypeRecords = 0;
  uint64_t nbTypeRecordsBytes = 0;
};
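
// Note: the PDB driver entry point defined later in this file is expected to
// invoke these steps in roughly the declared order: initialize(),
// addObjectsToPDB(), addImportFilesToPDB(), addPublicsToPDB(),
// addNatvisFiles(), addNamedStreams(), addSections(), and finally commit().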

/// Represents an unrelocated DEBUG_S_FRAMEDATA subsection.
struct UnrelocatedFpoData {
  SectionChunk *debugChunk = nullptr;
  ArrayRef<uint8_t> subsecData;
  uint32_t relocIndex = 0;
};

/// The size of the magic bytes at the beginning of a symbol section or stream.
enum : uint32_t { kSymbolStreamMagicSize = 4 };
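// (The four magic bytes are presumably the CodeView version signature word;
// the modern C13 signature, CV_SIGNATURE_C13, itself has the value 4.)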

class DebugSHandler {
  PDBLinker &linker;

  /// The object file whose .debug$S sections we're processing.
  ObjFile &file;

  /// The result of merging type indices.
  TpiSource *source;

  /// The DEBUG_S_STRINGTABLE subsection. These strings are referred to by
  /// index from other records in the .debug$S section. All of these strings
  /// need to be added to the global PDB string table, and all references to
  /// these strings need to have their indices re-written to refer to the
  /// global PDB string table.
  DebugStringTableSubsectionRef cvStrTab;

  /// The DEBUG_S_FILECHKSMS subsection. As above, these are referred to
  /// by other records in the .debug$S section and need to be merged into the
  /// PDB.
  DebugChecksumsSubsectionRef checksums;

  /// The DEBUG_S_FRAMEDATA subsection(s). There can be more than one of
  /// these and they need not appear in any specific order. However, they
  /// contain string table references which need to be re-written, so we
  /// collect them all here and re-write them after all subsections have been
  /// discovered and processed.
  std::vector<UnrelocatedFpoData> frameDataSubsecs;

  /// List of string table references in symbol records. Later they will be
  /// applied to the symbols during PDB writing.
  std::vector<StringTableFixup> stringTableFixups;

  /// Sum of the size of all module symbol records across all .debug$S
  /// sections. Includes record realignment and the size of the symbol stream
  /// magic prefix.
  uint32_t moduleStreamSize = kSymbolStreamMagicSize;

  /// Next relocation index in the current .debug$S section. Resets every
  /// handleDebugS call.
  uint32_t nextRelocIndex = 0;

  void advanceRelocIndex(SectionChunk *debugChunk, ArrayRef<uint8_t> subsec);

  void addUnrelocatedSubsection(SectionChunk *debugChunk,
                                const DebugSubsectionRecord &ss);

  void addFrameDataSubsection(SectionChunk *debugChunk,
                              const DebugSubsectionRecord &ss);

  void recordStringTableReferences(CVSymbol sym, uint32_t symOffset);

public:
  DebugSHandler(PDBLinker &linker, ObjFile &file, TpiSource *source)
      : linker(linker), file(file), source(source) {}

  void handleDebugS(SectionChunk *debugChunk);

  void finish();
};
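
// Note: each object file gets its own DebugSHandler. Sketching the intended
// flow from the members above: handleDebugS() is called once per .debug$S
// chunk to accumulate subsections, and finish() then merges the collected
// checksums, frame data, and string table references into the PDB.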

} // namespace

// Visual Studio's debugger requires absolute paths in various places in the
// PDB to work without additional configuration:
// https://docs.microsoft.com/en-us/visualstudio/debugger/debug-source-files-common-properties-solution-property-pages-dialog-box
static void pdbMakeAbsolute(SmallVectorImpl<char> &fileName) {
  // The default behavior is to produce paths that are valid within the context
  // of the machine that you perform the link on. If the linker is running on
  // a POSIX system, we will output absolute POSIX paths. If the linker is
  // running on a Windows system, we will output absolute Windows paths. If the
  // user desires any other kind of behavior, they should explicitly pass
  // /pdbsourcepath, in which case we will treat the exact string the user
  // passed in as the gospel and not normalize or canonicalize it.
  if (sys::path::is_absolute(fileName, sys::path::Style::windows) ||
      sys::path::is_absolute(fileName, sys::path::Style::posix))
    return;

  // It's not absolute in any path syntax. Relative paths necessarily refer to
  // the local file system, so we can make it native without ending up with a
  // nonsensical path.
  if (config->pdbSourcePath.empty()) {
    sys::path::native(fileName);
    sys::fs::make_absolute(fileName);
    sys::path::remove_dots(fileName, true);
    return;
  }

  // Try to guess whether /PDBSOURCEPATH is a unix path or a windows path.
  // Since PDBs are more of a Windows thing, we make this conservative and only
  // decide that it's a unix path if we're fairly certain. Specifically, if
  // it starts with a forward slash.
  SmallString<128> absoluteFileName = config->pdbSourcePath;
  sys::path::Style guessedStyle = absoluteFileName.startswith("/")
                                      ? sys::path::Style::posix
                                      : sys::path::Style::windows;
  sys::path::append(absoluteFileName, guessedStyle, fileName);
  sys::path::native(absoluteFileName, guessedStyle);
  sys::path::remove_dots(absoluteFileName, true, guessedStyle);

  fileName = std::move(absoluteFileName);
}
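
// For example (an illustrative sketch, not code from this file): given
// /pdbsourcepath:c:\src and a relative object path "foo/bar.obj", the prefix
// does not start with "/", so the style is guessed as Windows, and the result
// is "c:\src\foo\bar.obj" once sys::path::native converts the separators.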

static void addTypeInfo(pdb::TpiStreamBuilder &tpiBuilder,
                        TypeCollection &typeTable) {
  // Start the TPI or IPI stream header.
  tpiBuilder.setVersionHeader(pdb::PdbTpiV80);

  // Flatten the in memory type table and hash each type.
  typeTable.ForEachRecord([&](TypeIndex ti, const CVType &type) {
    auto hash = pdb::hashTypeRecord(type);
    if (auto e = hash.takeError())
      fatal("type hashing error");
    tpiBuilder.addTypeRecord(type.RecordData, *hash);
  });
}
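
// Note: this is the serial (non-ghash) path, where every record is hashed as
// it is added. When global type hashing is enabled (/debug:ghash), records
// arrive pre-merged per TpiSource and are added via addGHashTypeInfo() below.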

static void addGHashTypeInfo(COFFLinkerContext &ctx,
                             pdb::PDBFileBuilder &builder) {
  // Start the TPI or IPI stream header.
  builder.getTpiBuilder().setVersionHeader(pdb::PdbTpiV80);
  builder.getIpiBuilder().setVersionHeader(pdb::PdbTpiV80);
  for_each(ctx.tpiSourceList, [&](TpiSource *source) {
    builder.getTpiBuilder().addTypeRecords(source->mergedTpi.recs,
                                           source->mergedTpi.recSizes,
                                           source->mergedTpi.recHashes);
    builder.getIpiBuilder().addTypeRecords(source->mergedIpi.recs,
                                           source->mergedIpi.recSizes,
                                           source->mergedIpi.recHashes);
  });
}
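
// With ghashing, type merging has already happened in parallel, producing one
// mergedTpi/mergedIpi batch per TpiSource; only this final TPI/IPI stream
// layout step has to run serially.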

static void
recordStringTableReferences(CVSymbol sym, uint32_t symOffset,
                            std::vector<StringTableFixup> &stringTableFixups) {
  // For now we only handle S_FILESTATIC, but we may need the same logic for
  // S_DEFRANGE and S_DEFRANGE_SUBFIELD. However, I cannot seem to generate any
  // PDBs that contain these types of records, so because of the uncertainty
  // they are omitted here until we can prove that it's necessary.
  switch (sym.kind()) {
  case SymbolKind::S_FILESTATIC: {
    // FileStaticSym::ModFileOffset
    uint32_t ref = *reinterpret_cast<const ulittle32_t *>(&sym.data()[8]);
    stringTableFixups.push_back({ref, symOffset + 8});
    break;
  }
  case SymbolKind::S_DEFRANGE:
  case SymbolKind::S_DEFRANGE_SUBFIELD:
    log("Not fixing up string table reference in S_DEFRANGE / "
        "S_DEFRANGE_SUBFIELD record");
    break;
  default:
    break;
  }
}
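
// A sketch of why offset 8 is used above: sym.data() begins with the 4-byte
// record prefix (length and kind), followed by a 4-byte type index, so the
// ModFileOffset string table reference starts at byte 8 of the record.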

static SymbolKind symbolKind(ArrayRef<uint8_t> recordData) {
  const RecordPrefix *prefix =
      reinterpret_cast<const RecordPrefix *>(recordData.data());
  return static_cast<SymbolKind>(uint16_t(prefix->RecordKind));
}
|
|
|
|
|
|
|
|
/// MSVC translates S_PROC_ID_END to S_END, and S_[LG]PROC32_ID to S_[LG]PROC32
|
|
|
|
static void translateIdSymbols(MutableArrayRef<uint8_t> &recordData,
|
Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.
This reverts commit 8d250ac3cd48d0f17f9314685a85e77895c05351.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f..
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
2020-10-01 05:55:51 +08:00
|
|
|
TypeMerger &tMerger, TpiSource *source) {
|
2017-08-09 02:34:44 +08:00
|
|
|
RecordPrefix *prefix = reinterpret_cast<RecordPrefix *>(recordData.data());
|
|
|
|
|
|
|
|
SymbolKind kind = symbolKind(recordData);
|
|
|
|
|
|
|
|
if (kind == SymbolKind::S_PROC_ID_END) {
|
|
|
|
prefix->RecordKind = SymbolKind::S_END;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
// In an object file, GPROC32_ID has an embedded reference which refers to the
|
|
|
|
// single object file type index namespace. This has already been translated
|
|
|
|
// to the PDB file's ID stream index space, but we need to convert this to a
|
|
|
|
// symbol that refers to the type stream index space. So we remap again from
|
|
|
|
// ID index space to type index space.
|
|
|
|
if (kind == SymbolKind::S_GPROC32_ID || kind == SymbolKind::S_LPROC32_ID) {
|
|
|
|
SmallVector<TiReference, 1> refs;
|
|
|
|
auto content = recordData.drop_front(sizeof(RecordPrefix));
|
2019-04-04 08:28:48 +08:00
|
|
|
CVSymbol sym(recordData);
|
2017-08-09 02:34:44 +08:00
|
|
|
discoverTypeIndicesInSymbol(sym, refs);
|
|
|
|
assert(refs.size() == 1);
|
|
|
|
assert(refs.front().Count == 1);
|
2019-07-11 13:40:30 +08:00
|
|
|
|
2017-08-09 02:34:44 +08:00
|
|
|
TypeIndex *ti =
|
|
|
|
reinterpret_cast<TypeIndex *>(content.data() + refs[0].Offset);
|
2019-07-16 16:26:38 +08:00
|
|
|
// `ti` is the index of a FuncIdRecord or MemberFuncIdRecord which lives in
|
2017-08-09 02:34:44 +08:00
|
|
|
// the IPI stream, whose `FunctionType` member refers to the TPI stream.
|
    // Note that LF_FUNC_ID and LF_MFUNC_ID have the same record layout, and
    // in both cases we just need the second type index.
    if (!ti->isSimple() && !ti->isNoneType()) {
      TypeIndex newType = TypeIndex(SimpleTypeKind::NotTranslated);
      if (config->debugGHashes) {
        auto idToType = tMerger.funcIdToType.find(*ti);
        if (idToType != tMerger.funcIdToType.end())
          newType = idToType->second;
      } else {
        if (tMerger.getIDTable().contains(*ti)) {
          CVType funcIdData = tMerger.getIDTable().getType(*ti);
          if (funcIdData.length() >= 8 && (funcIdData.kind() == LF_FUNC_ID ||
                                           funcIdData.kind() == LF_MFUNC_ID)) {
            newType =
                *reinterpret_cast<const TypeIndex *>(&funcIdData.data()[8]);
          }
        }
      }
      if (newType == TypeIndex(SimpleTypeKind::NotTranslated)) {
        warn(formatv("procedure symbol record for `{0}` in {1} refers to PDB "
                     "item index {2:X} which is not a valid function ID record",
                     getSymbolName(CVSymbol(recordData)),
                     source->file->getName(), ti->getIndex()));
      }
      *ti = newType;
    }

    kind = (kind == SymbolKind::S_GPROC32_ID) ? SymbolKind::S_GPROC32
                                              : SymbolKind::S_LPROC32;
    prefix->RecordKind = uint16_t(kind);
  }
}
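
// Illustrative layout sketch (assumed from the CodeView record definitions,
// not part of the build): the hardcoded offset 8 used above works because
// both ID records place FunctionType 8 bytes into the record:
//
//   LF_FUNC_ID:  RecordPrefix (4) | ParentScope (4) | FunctionType (4) | Name
//   LF_MFUNC_ID: RecordPrefix (4) | ClassType (4)   | FunctionType (4) | Name
//
// so `&funcIdData.data()[8]` points at FunctionType in either case.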

namespace {
struct ScopeRecord {
  ulittle32_t ptrParent;
  ulittle32_t ptrEnd;
};
} // namespace

/// Given a pointer to a symbol record that opens a scope, return a pointer to
/// the scope fields.
static ScopeRecord *getSymbolScopeFields(void *sym) {
  return reinterpret_cast<ScopeRecord *>(reinterpret_cast<char *>(sym) +
                                         sizeof(RecordPrefix));
}

// To open a scope, push the offset of the current symbol record onto the
// stack.
static void scopeStackOpen(SmallVectorImpl<uint32_t> &stack,
                           std::vector<uint8_t> &storage) {
  stack.push_back(storage.size());
}

// To close a scope, update the record that opened the scope.
static void scopeStackClose(SmallVectorImpl<uint32_t> &stack,
                            std::vector<uint8_t> &storage,
                            uint32_t storageBaseOffset, ObjFile *file) {
  if (stack.empty()) {
    warn("symbol scopes are not balanced in " + file->getName());
    return;
  }

  // Update ptrEnd of the record that opened the scope to point to the
  // current record, if we are writing into the module symbol stream.
  uint32_t offOpen = stack.pop_back_val();
  uint32_t offEnd = storageBaseOffset + storage.size();
  uint32_t offParent = stack.empty() ? 0 : (stack.back() + storageBaseOffset);
  ScopeRecord *scopeRec = getSymbolScopeFields(&(storage)[offOpen]);
  scopeRec->ptrParent = offParent;
  scopeRec->ptrEnd = offEnd;
}
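
// Worked example (illustrative only): given a module symbol stream containing
//
//   [0] S_GPROC32   (opens a scope)
//   [1] S_BLOCK32   (opens a nested scope)
//   [2] S_END       (closes the block)
//   [3] S_END       (closes the procedure)
//
// the S_END at [2] patches the S_BLOCK32 so that ptrParent holds the offset
// of the S_GPROC32 and ptrEnd holds the offset of [2]; the S_END at [3] then
// patches the S_GPROC32 with ptrParent = 0 (no enclosing scope) and ptrEnd =
// the offset of [3]. All offsets are rebased by storageBaseOffset so they are
// relative to the final module stream, not this chunk's local storage.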

static bool symbolGoesInModuleStream(const CVSymbol &sym,
                                     unsigned symbolScopeDepth) {
  switch (sym.kind()) {
  case SymbolKind::S_GDATA32:
  case SymbolKind::S_CONSTANT:
  case SymbolKind::S_GTHREAD32:
  // We really should not be seeing S_PROCREF and S_LPROCREF in the first place
  // since they are synthesized by the linker in response to S_GPROC32 and
  // S_LPROC32, but if we do see them, don't put them in the module stream.
  case SymbolKind::S_PROCREF:
  case SymbolKind::S_LPROCREF:
    return false;
  // An S_UDT record goes in the module stream unless it is a global S_UDT.
  case SymbolKind::S_UDT:
    return symbolScopeDepth > 0;
  // S_GDATA32 does not go in the module stream, but S_LDATA32 does.
  case SymbolKind::S_LDATA32:
  case SymbolKind::S_LTHREAD32:
  default:
    return true;
  }
}

static bool symbolGoesInGlobalsStream(const CVSymbol &sym,
                                      unsigned symbolScopeDepth) {
  switch (sym.kind()) {
  case SymbolKind::S_CONSTANT:
  case SymbolKind::S_GDATA32:
  case SymbolKind::S_GTHREAD32:
  case SymbolKind::S_GPROC32:
  case SymbolKind::S_LPROC32:
  case SymbolKind::S_GPROC32_ID:
  case SymbolKind::S_LPROC32_ID:
  // We really should not be seeing S_PROCREF and S_LPROCREF in the first place
  // since they are synthesized by the linker in response to S_GPROC32 and
  // S_LPROC32, but if we do see them, copy them straight through.
  case SymbolKind::S_PROCREF:
  case SymbolKind::S_LPROCREF:
    return true;
  // Records that go in the globals stream, unless they are function-local.
  case SymbolKind::S_UDT:
  case SymbolKind::S_LDATA32:
  case SymbolKind::S_LTHREAD32:
    return symbolScopeDepth == 0;
  default:
    return false;
  }
}
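
// Summary of the two predicates above (derived from the switches, for
// reference): a record can land in both streams, one of them, or neither.
//
//   S_GPROC32 / S_LPROC32      -> module stream, plus a synthesized
//                                 S_[L]PROCREF in the globals stream
//   S_GDATA32 / S_GTHREAD32 /
//   S_CONSTANT                 -> globals stream only
//   S_UDT                      -> globals at file scope, module stream when
//                                 inside a function scope
//   S_LDATA32 / S_LTHREAD32    -> module stream always, plus globals when at
//                                 file scope
//   anything else              -> module stream only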

static void addGlobalSymbol(pdb::GSIStreamBuilder &builder, uint16_t modIndex,
                            unsigned symOffset,
                            std::vector<uint8_t> &symStorage) {
  CVSymbol sym(makeArrayRef(symStorage));
  switch (sym.kind()) {
  case SymbolKind::S_CONSTANT:
  case SymbolKind::S_UDT:
  case SymbolKind::S_GDATA32:
  case SymbolKind::S_GTHREAD32:
  case SymbolKind::S_LTHREAD32:
  case SymbolKind::S_LDATA32:
  case SymbolKind::S_PROCREF:
  case SymbolKind::S_LPROCREF: {
    // sym is a temporary object, so we have to copy and reallocate the record
    // to stabilize it.
    uint8_t *mem = bAlloc().Allocate<uint8_t>(sym.length());
    memcpy(mem, sym.data().data(), sym.length());
    builder.addGlobalSymbol(CVSymbol(makeArrayRef(mem, sym.length())));
    break;
  }
  case SymbolKind::S_GPROC32:
  case SymbolKind::S_LPROC32: {
    SymbolRecordKind k = SymbolRecordKind::ProcRefSym;
    if (sym.kind() == SymbolKind::S_LPROC32)
      k = SymbolRecordKind::LocalProcRef;
    ProcRefSym ps(k);
    ps.Module = modIndex;
    // For some reason, MSVC seems to add one to this value.
    ++ps.Module;
    ps.Name = getSymbolName(sym);
    ps.SumName = 0;
    ps.SymOffset = symOffset;
    builder.addGlobalSymbol(ps);
    break;
  }
  default:
    llvm_unreachable("Invalid symbol kind!");
  }
}
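
// Illustrative sketch (not part of the build): for a function `foo` whose
// S_GPROC32 record lands at offset 0x120 of module 5's symbol stream, the
// synthesized global record would look roughly like:
//
//   S_PROCREF { SumName = 0, SymOffset = 0x120, Module = 6, Name = "foo" }
//
// (Module is 6 rather than 5 because of the MSVC off-by-one noted above.)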

// Check if the given symbol record was padded for alignment. If so, zero out
// the padding bytes and update the record prefix with the new size.
static void fixRecordAlignment(MutableArrayRef<uint8_t> recordBytes,
                               size_t oldSize) {
  size_t alignedSize = recordBytes.size();
  if (oldSize == alignedSize)
    return;
  reinterpret_cast<RecordPrefix *>(recordBytes.data())->RecordLen =
      alignedSize - 2;
  memset(recordBytes.data() + oldSize, 0, alignedSize - oldSize);
}
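
// Worked example (illustrative only): symbol records in a PDB are padded to
// 4-byte boundaries, so a 14-byte record copied into a 16-byte aligned slot
// gets bytes [14, 16) zeroed and RecordLen rewritten to 14: the padded total
// of 16 bytes minus the 2-byte RecordLen field itself.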

// Replace any record with a skip record of the same size. This is useful when
// we have reserved size for a symbol record, but type index remapping fails.
static void replaceWithSkipRecord(MutableArrayRef<uint8_t> recordBytes) {
  memset(recordBytes.data(), 0, recordBytes.size());
  auto *prefix = reinterpret_cast<RecordPrefix *>(recordBytes.data());
  prefix->RecordKind = SymbolKind::S_SKIP;
  prefix->RecordLen = recordBytes.size() - 2;
}

// Copy the symbol record, relocate it, and fix the alignment if necessary.
// Rewrite type indices in the record. Replace unrecognized symbol records with
// S_SKIP records.
void PDBLinker::writeSymbolRecord(SectionChunk *debugChunk,
                                  ArrayRef<uint8_t> sectionContents,
                                  CVSymbol sym, size_t alignedSize,
                                  uint32_t &nextRelocIndex,
                                  std::vector<uint8_t> &storage) {
  // Allocate space for the new record at the end of the storage.
  storage.resize(storage.size() + alignedSize);
  auto recordBytes = MutableArrayRef<uint8_t>(storage).take_back(alignedSize);

  // Copy the symbol record and relocate it.
  debugChunk->writeAndRelocateSubsection(sectionContents, sym.data(),
                                         nextRelocIndex, recordBytes.data());
  fixRecordAlignment(recordBytes, sym.length());

  // Re-map all the type index references.
  TpiSource *source = debugChunk->file->debugTypesObj;
  if (!source->remapTypesInSymbolRecord(recordBytes)) {
    log("ignoring unknown symbol record with kind 0x" + utohexstr(sym.kind()));
    replaceWithSkipRecord(recordBytes);
  }

  // An object file may have S_xxx_ID symbols, but these get converted to
  // "real" symbols in a PDB.
  translateIdSymbols(recordBytes, tMerger, source);
}

void PDBLinker::analyzeSymbolSubsection(
    SectionChunk *debugChunk, uint32_t &moduleSymOffset,
    uint32_t &nextRelocIndex, std::vector<StringTableFixup> &stringTableFixups,
    BinaryStreamRef symData) {
  ObjFile *file = debugChunk->file;
  uint32_t moduleSymStart = moduleSymOffset;

  uint32_t scopeLevel = 0;
  std::vector<uint8_t> storage;
  ArrayRef<uint8_t> sectionContents = debugChunk->getContents();

  ArrayRef<uint8_t> symsBuffer;
  cantFail(symData.readBytes(0, symData.getLength(), symsBuffer));

  if (symsBuffer.empty())
    warn("empty symbols subsection in " + file->getName());

  Error ec = forEachCodeViewRecord<CVSymbol>(
      symsBuffer, [&](CVSymbol sym) -> llvm::Error {
        // Track the current scope.
        if (symbolOpensScope(sym.kind()))
          ++scopeLevel;
        else if (symbolEndsScope(sym.kind()))
          --scopeLevel;

        uint32_t alignedSize =
            alignTo(sym.length(), alignOf(CodeViewContainer::Pdb));

        // Copy global records. Some global records (mainly procedures)
        // reference the current offset into the module stream.
        if (symbolGoesInGlobalsStream(sym, scopeLevel)) {
          storage.clear();
          writeSymbolRecord(debugChunk, sectionContents, sym, alignedSize,
                            nextRelocIndex, storage);
          addGlobalSymbol(builder.getGsiBuilder(),
                          file->moduleDBI->getModuleIndex(), moduleSymOffset,
                          storage);
          ++globalSymbols;
        }

        // Update the module stream offset and record any string table index
        // references. There are very few of these and they will be rewritten
        // later during PDB writing.
        if (symbolGoesInModuleStream(sym, scopeLevel)) {
          recordStringTableReferences(sym, moduleSymOffset, stringTableFixups);
          moduleSymOffset += alignedSize;
          ++moduleSymbols;
        }
        return Error::success();
      });

  // If we encountered corrupt records, ignore the whole subsection. If we wrote
  // any partial records, undo that. For globals, we just keep what we have and
  // continue.
  if (ec) {
    warn("corrupt symbol records in " + file->getName());
    moduleSymOffset = moduleSymStart;
    consumeError(std::move(ec));
  }
}
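
// analyzeSymbolSubsection above is the first of two passes over each .debug$S
// symbol subsection: it computes the final module stream offset of every
// record and emits the globals (which embed those offsets), but does not
// write the module stream itself. writeAllModuleSymbolRecords below is the
// second pass; it copies, relocates, rewrites, and writes the module symbols.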

Error PDBLinker::writeAllModuleSymbolRecords(ObjFile *file,
                                             BinaryStreamWriter &writer) {
  std::vector<uint8_t> storage;
  SmallVector<uint32_t, 4> scopes;

  // Visit all live .debug$S sections a second time, and write them to the PDB.
  for (SectionChunk *debugChunk : file->getDebugChunks()) {
    if (!debugChunk->live || debugChunk->getSize() == 0 ||
        debugChunk->getSectionName() != ".debug$S")
      continue;

    ArrayRef<uint8_t> sectionContents = debugChunk->getContents();
    auto contents =
        SectionChunk::consumeDebugMagic(sectionContents, ".debug$S");
    DebugSubsectionArray subsections;
    BinaryStreamReader reader(contents, support::little);
    exitOnErr(reader.readArray(subsections, contents.size()));

    uint32_t nextRelocIndex = 0;
    for (const DebugSubsectionRecord &ss : subsections) {
      if (ss.kind() != DebugSubsectionKind::Symbols)
        continue;

      uint32_t moduleSymStart = writer.getOffset();
      scopes.clear();
      storage.clear();
      ArrayRef<uint8_t> symsBuffer;
      BinaryStreamRef sr = ss.getRecordData();
      cantFail(sr.readBytes(0, sr.getLength(), symsBuffer));
      auto ec = forEachCodeViewRecord<CVSymbol>(
          symsBuffer, [&](CVSymbol sym) -> llvm::Error {
            // Track the current scope. Only update records in the postmerge
            // pass.
            if (symbolOpensScope(sym.kind()))
              scopeStackOpen(scopes, storage);
            else if (symbolEndsScope(sym.kind()))
              scopeStackClose(scopes, storage, moduleSymStart, file);

            // Copy, relocate, and rewrite each module symbol.
            if (symbolGoesInModuleStream(sym, scopes.size())) {
              uint32_t alignedSize =
                  alignTo(sym.length(), alignOf(CodeViewContainer::Pdb));
              writeSymbolRecord(debugChunk, sectionContents, sym, alignedSize,
                                nextRelocIndex, storage);
            }
            return Error::success();
          });

      // If we encounter corrupt records in the second pass, ignore them. We
      // already warned about them in the first analysis pass.
      if (ec) {
        consumeError(std::move(ec));
        storage.clear();
      }

      // Writing bytes has a very high overhead, so write the entire subsection
      // at once.
      // TODO: Consider buffering symbols for the entire object file to reduce
      // overhead even further.
      if (Error e = writer.writeBytes(storage))
        return e;
    }
  }

  return Error::success();
}
[PDB] Add symbol records in bulk
2018-11-28 03:00:23 +08:00
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
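// Callback registered via setMergeSymbolsCallback: at commit time, the DBI
// module builder calls back into the linker to write this object's merged
// symbol records into the module stream.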
Error PDBLinker::commitSymbolsForObject(void *ctx, void *obj,
|
|
|
|
BinaryStreamWriter &writer) {
|
|
|
|
return static_cast<PDBLinker *>(ctx)->writeAllModuleSymbolRecords(
|
|
|
|
static_cast<ObjFile *>(obj), writer);
|
2017-06-22 01:25:56 +08:00
|
|
|
}
|
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
static pdb::SectionContrib createSectionContrib(COFFLinkerContext &ctx,
|
|
|
|
const Chunk *c, uint32_t modi) {
|
|
|
|
OutputSection *os = c ? ctx.getOutputSection(c) : nullptr;
|
2018-04-21 02:00:46 +08:00
|
|
|
pdb::SectionContrib sc;
|
|
|
|
memset(&sc, 0, sizeof(sc));
|
2019-03-19 03:13:23 +08:00
|
|
|
sc.ISect = os ? os->sectionIndex : llvm::pdb::kInvalidStreamIndex;
|
|
|
|
sc.Off = c && os ? c->getRVA() - os->getRVA() : 0;
|
|
|
|
sc.Size = c ? c->getSize() : -1;
|
|
|
|
if (auto *secChunk = dyn_cast_or_null<SectionChunk>(c)) {
|
2018-04-21 02:00:46 +08:00
|
|
|
sc.Characteristics = secChunk->header->Characteristics;
|
|
|
|
sc.Imod = secChunk->file->moduleDBI->getModuleIndex();
|
|
|
|
ArrayRef<uint8_t> contents = secChunk->getContents();
|
|
|
|
JamCRC crc(0);
|
2019-10-09 17:06:30 +08:00
|
|
|
crc.update(contents);
|
2018-04-21 02:00:46 +08:00
|
|
|
sc.DataCrc = crc.getCRC();
|
|
|
|
} else {
|
2019-03-19 03:13:23 +08:00
|
|
|
sc.Characteristics = os ? os->header.Characteristics : 0;
|
2018-04-21 02:00:46 +08:00
|
|
|
sc.Imod = modi;
|
|
|
|
}
|
|
|
|
sc.RelocCrc = 0; // FIXME
|
|
|
|
|
|
|
|
return sc;
|
|
|
|
}
|
|
|
|
|
2018-09-12 06:35:01 +08:00
|
|
|
static uint32_t
|
|
|
|
translateStringTableIndex(uint32_t objIndex,
|
|
|
|
const DebugStringTableSubsectionRef &objStrTable,
|
|
|
|
DebugStringTableSubsection &pdbStrTable) {
|
|
|
|
auto expectedString = objStrTable.getString(objIndex);
|
|
|
|
if (!expectedString) {
|
|
|
|
warn("Invalid string table reference");
|
|
|
|
consumeError(expectedString.takeError());
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return pdbStrTable.insert(*expectedString);
|
|
|
|
}
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
void DebugSHandler::handleDebugS(SectionChunk *debugChunk) {
|
|
|
|
// Note that we are processing the *unrelocated* section contents. They will
|
|
|
|
// be relocated later during PDB writing.
|
|
|
|
ArrayRef<uint8_t> contents = debugChunk->getContents();
|
|
|
|
contents = SectionChunk::consumeDebugMagic(contents, ".debug$S");
|
2021-01-29 05:17:27 +08:00
|
|
|
DebugSubsectionArray subsections;
|
2021-03-11 06:51:52 +08:00
|
|
|
BinaryStreamReader reader(contents, support::little);
|
|
|
|
exitOnErr(reader.readArray(subsections, contents.size()));
|
|
|
|
debugChunk->sortRelocations();
|
|
|
|
|
|
|
|
// Reset the relocation index, since this is a new section.
|
|
|
|
nextRelocIndex = 0;
|
2018-09-13 05:02:01 +08:00
|
|
|
|
|
|
|
for (const DebugSubsectionRecord &ss : subsections) {
|
2019-06-19 03:41:25 +08:00
|
|
|
// Ignore subsections with the 'ignore' bit. Some versions of the Visual C++
|
|
|
|
// runtime have subsections with this bit set.
|
|
|
|
if (uint32_t(ss.kind()) & codeview::SubsectionIgnoreFlag)
|
|
|
|
continue;
|
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
switch (ss.kind()) {
|
|
|
|
case DebugSubsectionKind::StringTable: {
|
2020-05-15 02:21:53 +08:00
|
|
|
assert(!cvStrTab.valid() &&
|
2018-09-13 05:02:01 +08:00
|
|
|
"Encountered multiple string table subsections!");
|
2020-05-15 02:21:53 +08:00
|
|
|
exitOnErr(cvStrTab.initialize(ss.getRecordData()));
|
2018-09-13 05:02:01 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case DebugSubsectionKind::FileChecksums:
|
|
|
|
assert(!checksums.valid() &&
|
|
|
|
"Encountered multiple checksum subsections!");
|
|
|
|
exitOnErr(checksums.initialize(ss.getRecordData()));
|
|
|
|
break;
|
|
|
|
case DebugSubsectionKind::Lines:
|
2019-06-04 02:15:38 +08:00
|
|
|
case DebugSubsectionKind::InlineeLines:
|
2021-03-11 06:51:52 +08:00
|
|
|
addUnrelocatedSubsection(debugChunk, ss);
|
2019-06-04 02:15:38 +08:00
|
|
|
break;
|
2021-03-11 06:51:52 +08:00
|
|
|
case DebugSubsectionKind::FrameData:
|
|
|
|
addFrameDataSubsection(debugChunk, ss);
|
2018-09-13 05:02:01 +08:00
|
|
|
break;
|
2021-03-11 06:51:52 +08:00
|
|
|
case DebugSubsectionKind::Symbols:
|
|
|
|
linker.analyzeSymbolSubsection(debugChunk, moduleStreamSize,
|
|
|
|
nextRelocIndex, stringTableFixups,
|
|
|
|
ss.getRecordData());
|
2018-09-13 05:02:01 +08:00
|
|
|
break;
|
2019-06-04 02:15:38 +08:00
|
|
|
|
|
|
|
case DebugSubsectionKind::CrossScopeImports:
|
|
|
|
case DebugSubsectionKind::CrossScopeExports:
|
|
|
|
// These appear to relate to cross-module optimization, so we might use
|
|
|
|
// these for ThinLTO.
|
|
|
|
break;
|
|
|
|
|
|
|
|
case DebugSubsectionKind::ILLines:
|
|
|
|
case DebugSubsectionKind::FuncMDTokenMap:
|
|
|
|
case DebugSubsectionKind::TypeMDTokenMap:
|
|
|
|
case DebugSubsectionKind::MergedAssemblyInput:
|
|
|
|
// These appear to relate to .Net assembly info.
|
|
|
|
break;
|
|
|
|
|
|
|
|
case DebugSubsectionKind::CoffSymbolRVA:
|
|
|
|
// Unclear what this is for.
|
|
|
|
break;
|
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
default:
|
2019-06-04 02:15:38 +08:00
|
|
|
warn("ignoring unknown debug$S subsection kind 0x" +
|
2019-06-15 06:03:23 +08:00
|
|
|
utohexstr(uint32_t(ss.kind())) + " in file " + toString(&file));
|
2018-09-13 05:02:01 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
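// Advance nextRelocIndex to the first relocation at or after the start of
// the given subsection. This assumes the relocations were sorted by
// sortRelocations() and that subsections are visited in order of appearance.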
void DebugSHandler::advanceRelocIndex(SectionChunk *sc,
|
|
|
|
ArrayRef<uint8_t> subsec) {
|
|
|
|
ptrdiff_t vaBegin = subsec.data() - sc->getContents().data();
|
|
|
|
assert(vaBegin > 0);
|
|
|
|
auto relocs = sc->getRelocs();
|
|
|
|
for (; nextRelocIndex < relocs.size(); ++nextRelocIndex) {
|
|
|
|
if (relocs[nextRelocIndex].VirtualAddress >= vaBegin)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
namespace {
|
|
|
|
/// Wrapper class for unrelocated line and inlinee line subsections, which
|
|
|
|
/// require only relocation and type index remapping to add to the PDB.
|
|
|
|
class UnrelocatedDebugSubsection : public DebugSubsection {
|
|
|
|
public:
|
|
|
|
UnrelocatedDebugSubsection(DebugSubsectionKind k, SectionChunk *debugChunk,
|
|
|
|
ArrayRef<uint8_t> subsec, uint32_t relocIndex)
|
|
|
|
: DebugSubsection(k), debugChunk(debugChunk), subsec(subsec),
|
|
|
|
relocIndex(relocIndex) {}
|
|
|
|
|
|
|
|
Error commit(BinaryStreamWriter &writer) const override;
|
|
|
|
uint32_t calculateSerializedSize() const override { return subsec.size(); }
|
|
|
|
|
|
|
|
SectionChunk *debugChunk;
|
|
|
|
ArrayRef<uint8_t> subsec;
|
|
|
|
uint32_t relocIndex;
|
|
|
|
};
|
|
|
|
} // namespace
|
|
|
|
|
|
|
|
Error UnrelocatedDebugSubsection::commit(BinaryStreamWriter &writer) const {
|
|
|
|
std::vector<uint8_t> relocatedBytes(subsec.size());
|
|
|
|
uint32_t tmpRelocIndex = relocIndex;
|
|
|
|
debugChunk->writeAndRelocateSubsection(debugChunk->getContents(), subsec,
|
|
|
|
tmpRelocIndex, relocatedBytes.data());
|
|
|
|
|
|
|
|
// Remap type indices in inlinee line records in place. Skip the remapping if
|
|
|
|
// there is no type source info.
|
|
|
|
if (kind() == DebugSubsectionKind::InlineeLines &&
|
|
|
|
debugChunk->file->debugTypesObj) {
|
|
|
|
TpiSource *source = debugChunk->file->debugTypesObj;
|
|
|
|
DebugInlineeLinesSubsectionRef inlineeLines;
|
|
|
|
BinaryStreamReader storageReader(relocatedBytes, support::little);
|
|
|
|
exitOnErr(inlineeLines.initialize(storageReader));
|
|
|
|
for (const InlineeSourceLine &line : inlineeLines) {
|
|
|
|
TypeIndex &inlinee = *const_cast<TypeIndex *>(&line.Header->Inlinee);
|
|
|
|
if (!source->remapTypeIndex(inlinee, TiRefKind::IndexRef)) {
|
|
|
|
log("bad inlinee line record in " + debugChunk->file->getName() +
|
|
|
|
" with bad inlinee index 0x" + utohexstr(inlinee.getIndex()));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return writer.writeBytes(relocatedBytes);
|
|
|
|
}
|
|
|
|
|
|
|
|
void DebugSHandler::addUnrelocatedSubsection(SectionChunk *debugChunk,
|
|
|
|
const DebugSubsectionRecord &ss) {
|
|
|
|
ArrayRef<uint8_t> subsec;
|
|
|
|
BinaryStreamRef sr = ss.getRecordData();
|
|
|
|
cantFail(sr.readBytes(0, sr.getLength(), subsec));
|
|
|
|
advanceRelocIndex(debugChunk, subsec);
|
|
|
|
file.moduleDBI->addDebugSubsection(
|
|
|
|
std::make_shared<UnrelocatedDebugSubsection>(ss.kind(), debugChunk,
|
|
|
|
subsec, nextRelocIndex));
|
|
|
|
}
|
|
|
|
|
|
|
|
void DebugSHandler::addFrameDataSubsection(SectionChunk *debugChunk,
|
|
|
|
const DebugSubsectionRecord &ss) {
|
|
|
|
// We need to re-write string table indices here, so save off all
|
|
|
|
// frame data subsections until we've processed the entire list of
|
|
|
|
// subsections so that we can be sure we have the string table.
|
|
|
|
ArrayRef<uint8_t> subsec;
|
|
|
|
BinaryStreamRef sr = ss.getRecordData();
|
|
|
|
cantFail(sr.readBytes(0, sr.getLength(), subsec));
|
|
|
|
advanceRelocIndex(debugChunk, subsec);
|
|
|
|
frameDataSubsecs.push_back({debugChunk, subsec, nextRelocIndex});
|
|
|
|
}
|
|
|
|
|
2019-06-04 02:15:38 +08:00
|
|
|
static Expected<StringRef>
|
|
|
|
getFileName(const DebugStringTableSubsectionRef &strings,
|
|
|
|
const DebugChecksumsSubsectionRef &checksums, uint32_t fileID) {
|
|
|
|
auto iter = checksums.getArray().at(fileID);
|
|
|
|
if (iter == checksums.getArray().end())
|
|
|
|
return make_error<CodeViewError>(cv_error_code::no_records);
|
|
|
|
uint32_t offset = iter->FileNameOffset;
|
|
|
|
return strings.getString(offset);
|
|
|
|
}
|
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
void DebugSHandler::finish() {
|
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = linker.builder.getDbiBuilder();
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
// If we found any symbol records for the module symbol stream, defer them.
|
|
|
|
if (moduleStreamSize > kSymbolStreamMagicSize)
|
|
|
|
file.moduleDBI->addUnmergedSymbols(&file, moduleStreamSize -
|
|
|
|
kSymbolStreamMagicSize);
|
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
// We should have seen all debug subsections across the entire object file now,
|
|
|
|
// which means that if a StringTable subsection and Checksums subsection were
|
|
|
|
// present, now is the time to handle them.
|
2020-05-15 02:21:53 +08:00
|
|
|
if (!cvStrTab.valid()) {
|
2018-09-13 05:02:01 +08:00
|
|
|
if (checksums.valid())
|
|
|
|
fatal(".debug$S sections with a checksums subsection must also contain a "
|
|
|
|
"string table subsection");
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
if (!stringTableFixups.empty())
|
2018-09-13 05:02:01 +08:00
|
|
|
warn("No StringTable subsection was encountered, but there are string "
|
|
|
|
"table references");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
// Handle FPO data. Each subsection begins with a single image base
|
|
|
|
// relocation, which is then added to the RvaStart of each frame data record
|
|
|
|
// when it is added to the PDB. The string table indices for the FPO program
|
|
|
|
// must also be rewritten to use the PDB string table.
|
|
|
|
for (const UnrelocatedFpoData &subsec : frameDataSubsecs) {
|
|
|
|
// Relocate the first four bytes of the subsection and reinterpret them as a
|
|
|
|
// 32-bit integer.
|
|
|
|
SectionChunk *debugChunk = subsec.debugChunk;
|
|
|
|
ArrayRef<uint8_t> subsecData = subsec.subsecData;
|
|
|
|
uint32_t relocIndex = subsec.relocIndex;
|
|
|
|
auto unrelocatedRvaStart = subsecData.take_front(sizeof(uint32_t));
|
|
|
|
uint8_t relocatedRvaStart[sizeof(uint32_t)];
|
|
|
|
debugChunk->writeAndRelocateSubsection(debugChunk->getContents(),
|
|
|
|
unrelocatedRvaStart, relocIndex,
|
|
|
|
&relocatedRvaStart[0]);
|
|
|
|
uint32_t rvaStart;
|
|
|
|
memcpy(&rvaStart, &relocatedRvaStart[0], sizeof(uint32_t));
|
|
|
|
|
|
|
|
// Copy each frame data record, add in rvaStart, translate string table
|
|
|
|
// indices, and add the record to the PDB.
|
|
|
|
DebugFrameDataSubsectionRef fds;
|
|
|
|
BinaryStreamReader reader(subsecData, support::little);
|
|
|
|
exitOnErr(fds.initialize(reader));
|
2018-09-13 05:02:01 +08:00
|
|
|
for (codeview::FrameData fd : fds) {
|
2021-03-11 06:51:52 +08:00
|
|
|
fd.RvaStart += rvaStart;
|
2018-09-13 05:02:01 +08:00
|
|
|
fd.FrameFunc =
|
2020-05-15 02:21:53 +08:00
|
|
|
translateStringTableIndex(fd.FrameFunc, cvStrTab, linker.pdbStrTab);
|
2018-09-13 05:02:01 +08:00
|
|
|
dbiBuilder.addNewFpoData(fd);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-03-11 06:51:52 +08:00
|
|
|
// Translate the fixups and pass them off to the module builder so they will
|
|
|
|
// be applied during writing.
|
|
|
|
for (StringTableFixup &ref : stringTableFixups) {
|
|
|
|
ref.StrTabOffset =
|
|
|
|
translateStringTableIndex(ref.StrTabOffset, cvStrTab, linker.pdbStrTab);
|
|
|
|
}
|
|
|
|
file.moduleDBI->setStringTableFixups(std::move(stringTableFixups));
|
2018-09-13 05:02:01 +08:00
|
|
|
|
|
|
|
// Make a new file checksum table that refers to offsets in the PDB-wide
|
|
|
|
// string table. Generally the string table subsection appears after the
|
|
|
|
// checksum table, so we have to do this after looping over all the
|
2020-06-02 02:34:09 +08:00
|
|
|
// subsections. The new checksum table must have the exact same layout and
|
|
|
|
// size as the original. Otherwise, the file references in the line and
|
|
|
|
// inlinee line tables will be incorrect.
|
2019-08-15 06:28:17 +08:00
|
|
|
auto newChecksums = std::make_unique<DebugChecksumsSubsection>(linker.pdbStrTab);
|
2021-11-13 06:22:00 +08:00
|
|
|
for (const FileChecksumEntry &fc : checksums) {
|
2019-06-04 02:15:38 +08:00
|
|
|
SmallString<128> filename =
|
2020-05-15 02:21:53 +08:00
|
|
|
exitOnErr(cvStrTab.getString(fc.FileNameOffset));
|
2019-06-04 02:15:38 +08:00
|
|
|
pdbMakeAbsolute(filename);
|
|
|
|
exitOnErr(dbiBuilder.addModuleSourceFile(*file.moduleDBI, filename));
|
|
|
|
newChecksums->addChecksum(filename, fc.Kind, fc.Checksum);
|
2018-09-13 05:02:01 +08:00
|
|
|
}
|
2020-06-02 02:34:09 +08:00
|
|
|
assert(checksums.getArray().getUnderlyingStream().getLength() ==
|
|
|
|
newChecksums->calculateSerializedSize() &&
|
|
|
|
"file checksum table must have same layout");
|
2019-06-04 02:15:38 +08:00
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
file.moduleDBI->addDebugSubsection(std::move(newChecksums));
|
|
|
|
}
|
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
static void warnUnusable(InputFile *f, Error e) {
|
|
|
|
if (!config->warnDebugInfoUnusable) {
|
|
|
|
consumeError(std::move(e));
|
2018-11-06 03:20:47 +08:00
|
|
|
return;
|
2020-05-09 21:58:15 +08:00
|
|
|
}
|
|
|
|
auto msg = "Cannot use debug info for '" + toString(f) + "' [LNK4099]";
|
|
|
|
if (e)
|
|
|
|
warn(msg + "\n>>> failed to load reference " + toString(std::move(e)));
|
|
|
|
else
|
|
|
|
warn(msg);
|
|
|
|
}
|
2018-04-21 02:00:46 +08:00
|
|
|
|
2020-06-02 04:12:06 +08:00
|
|
|
// Allocate memory for a .debug$S / .debug$F section and relocate it.
|
|
|
|
static ArrayRef<uint8_t> relocateDebugChunk(SectionChunk &debugChunk) {
|
2022-01-21 03:53:18 +08:00
|
|
|
uint8_t *buffer = bAlloc().Allocate<uint8_t>(debugChunk.getSize());
|
2020-06-02 04:12:06 +08:00
|
|
|
assert(debugChunk.getOutputSectionIdx() == 0 &&
|
|
|
|
"debug sections should not be in output sections");
|
|
|
|
debugChunk.writeTo(buffer);
|
|
|
|
return makeArrayRef(buffer, debugChunk.getSize());
|
|
|
|
}
|
|
|
|
|
2020-06-04 09:08:55 +08:00
|
|
|
void PDBLinker::addDebugSymbols(TpiSource *source) {
|
|
|
|
// If this TpiSource doesn't have an object file, it must be from a type
|
|
|
|
// server PDB. Type server PDBs do not contain symbols, so stop here.
|
|
|
|
if (!source->file)
|
|
|
|
return;
|
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
ScopedTimer t(ctx.symbolMergingTimer);
|
2019-03-23 06:07:27 +08:00
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = builder.getDbiBuilder();
|
2020-06-04 09:08:55 +08:00
|
|
|
DebugSHandler dsh(*this, *source->file, source);
|
2018-09-13 05:02:01 +08:00
|
|
|
// Now do all live .debug$S and .debug$F sections.
|
2020-06-04 09:08:55 +08:00
|
|
|
for (SectionChunk *debugChunk : source->file->getDebugChunks()) {
|
2018-09-13 05:02:01 +08:00
|
|
|
if (!debugChunk->live || debugChunk->getSize() == 0)
|
2017-07-14 08:14:58 +08:00
|
|
|
continue;
|
2017-06-20 01:21:45 +08:00
|
|
|
|
2020-06-02 04:12:06 +08:00
|
|
|
bool isDebugS = debugChunk->getSectionName() == ".debug$S";
|
|
|
|
bool isDebugF = debugChunk->getSectionName() == ".debug$F";
|
|
|
|
if (!isDebugS && !isDebugF)
|
2017-07-14 08:14:58 +08:00
|
|
|
continue;
|
2017-06-20 01:21:45 +08:00
|
|
|
|
2020-06-02 04:12:06 +08:00
|
|
|
if (isDebugS) {
|
2021-03-11 06:51:52 +08:00
|
|
|
dsh.handleDebugS(debugChunk);
|
2020-06-02 04:12:06 +08:00
|
|
|
} else if (isDebugF) {
|
2021-03-11 06:51:52 +08:00
|
|
|
// Handle old FPO data .debug$F sections. These are relatively rare.
|
|
|
|
ArrayRef<uint8_t> relocatedDebugContents =
|
|
|
|
relocateDebugChunk(*debugChunk);
|
2018-09-13 05:02:01 +08:00
|
|
|
FixedStreamArray<object::FpoData> fpoRecords;
|
|
|
|
BinaryStreamReader reader(relocatedDebugContents, support::little);
|
|
|
|
uint32_t count = relocatedDebugContents.size() / sizeof(object::FpoData);
|
|
|
|
exitOnErr(reader.readArray(fpoRecords, count));
|
2018-01-06 03:12:40 +08:00
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
// These are already relocated and don't refer to the string table, so we
|
|
|
|
// can just copy them.
|
|
|
|
for (const object::FpoData &fd : fpoRecords)
|
|
|
|
dbiBuilder.addOldFpoData(fd);
|
2017-06-20 01:21:45 +08:00
|
|
|
}
|
2018-01-06 03:12:40 +08:00
|
|
|
}
|
|
|
|
|
2018-09-13 05:02:01 +08:00
|
|
|
// Do any post-processing now that all .debug$S sections have been processed.
|
|
|
|
dsh.finish();
|
2017-07-14 08:14:58 +08:00
|
|
|
}
|
2017-01-12 11:09:25 +08:00
|
|
|
|
2019-03-23 06:07:27 +08:00
|
|
|
// Add a module descriptor for every object file. We need to put an absolute
|
|
|
|
// path to the object into the PDB. If this is a plain object, we make its
|
|
|
|
// path absolute. If it's an object in an archive, we make the archive path
|
|
|
|
// absolute.
|
2021-03-11 06:51:52 +08:00
|
|
|
void PDBLinker::createModuleDBI(ObjFile *file) {
|
2019-03-23 06:07:27 +08:00
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = builder.getDbiBuilder();
|
|
|
|
SmallString<128> objName;
|
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
bool inArchive = !file->parentName.empty();
|
|
|
|
objName = inArchive ? file->parentName : file->getName();
|
|
|
|
pdbMakeAbsolute(objName);
|
2021-07-09 04:30:14 +08:00
|
|
|
StringRef modName = inArchive ? file->getName() : objName.str();
|
2019-03-23 06:07:27 +08:00
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
file->moduleDBI = &exitOnErr(dbiBuilder.addModuleInfo(modName));
|
|
|
|
file->moduleDBI->setObjFileName(objName);
|
2021-03-11 06:51:52 +08:00
|
|
|
file->moduleDBI->setMergeSymbolsCallback(this, &commitSymbolsForObject);
|
2019-03-23 06:07:27 +08:00
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
ArrayRef<Chunk *> chunks = file->getChunks();
|
|
|
|
uint32_t modi = file->moduleDBI->getModuleIndex();
|
2019-03-23 06:07:27 +08:00
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
for (Chunk *c : chunks) {
|
|
|
|
auto *secChunk = dyn_cast<SectionChunk>(c);
|
|
|
|
if (!secChunk || !secChunk->live)
|
|
|
|
continue;
|
2021-09-17 07:48:26 +08:00
|
|
|
pdb::SectionContrib sc = createSectionContrib(ctx, secChunk, modi);
|
2020-05-09 21:58:15 +08:00
|
|
|
file->moduleDBI->setFirstSectionContrib(sc);
|
|
|
|
break;
|
2019-03-23 06:07:27 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
void PDBLinker::addDebug(TpiSource *source) {
|
Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.
This reverts commit 8d250ac3cd48d0f17f9314685a85e77895c05351.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms | 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to be, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
2020-10-01 05:55:51 +08:00
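A minimal sketch of the cell-update scheme described above (not the actual
LLD implementation; the names and the sameKey callback are simplified
stand-ins for LLD's ghash side-table lookups):

#include <atomic>
#include <cstdint>
#include <vector>

// Packed cell: (sourceIndex, typeIndex) in one 64-bit word; 0 means empty.
// sourceIndex is biased by 1 so a real cell is never all zero.
using GHashCell = uint64_t;

inline GHashCell makeCell(uint32_t sourceIndex, uint32_t typeIndex) {
  return (uint64_t(sourceIndex) + 1) << 32 | typeIndex;
}

// Deterministic priority: numerically smaller cells come from earlier
// inputs on the command line, so they win when two inputs share a type.
inline bool higherPriority(GHashCell a, GHashCell b) { return a < b; }

// Insert a cell with linear probing and a 64-bit CAS. sameKey() stands in
// for comparing the ghashes that the two cells refer to. The returned
// position is stable because the table is never rehashed.
size_t insertCell(std::vector<std::atomic<GHashCell>> &table, uint64_t hash,
                  GHashCell newCell, bool (*sameKey)(GHashCell, GHashCell)) {
  size_t idx = size_t(hash % table.size());
  for (;;) {
    GHashCell cur = table[idx].load(std::memory_order_relaxed);
    if (cur != 0 && !sameKey(cur, newCell)) {
      idx = (idx + 1) % table.size(); // different type: keep probing
      continue;
    }
    if (cur != 0 && !higherPriority(newCell, cur))
      return idx; // an equal- or higher-priority duplicate already won
    if (table[idx].compare_exchange_weak(cur, newCell))
      return idx; // claimed the cell
    // Lost a race; re-examine the updated cell contents on the next pass.
  }
}

After insertion, the non-empty cells are copied out and sorted by priority
to fix the final type stream order, exactly as the message above describes.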
|
|
|
// Before we can process symbol substreams from .debug$S, we need to process
|
|
|
|
// type information, file checksums, and the string table. Add type info to
|
|
|
|
// the PDB first, so that we can get the map from object file type and item
|
|
|
|
// indices to PDB type and item indices. If we are using ghashes, types have
|
|
|
|
// already been merged.
|
|
|
|
if (!config->debugGHashes) {
|
2021-09-17 07:48:26 +08:00
|
|
|
ScopedTimer t(ctx.typeMergingTimer);
|
Re-land "[PDB] Merge types in parallel when using ghashing"
2020-10-01 05:55:51 +08:00
|
|
|
if (Error e = source->mergeDebugT(&tMerger)) {
|
|
|
|
// If type merging failed, ignore the symbols.
|
|
|
|
warnUnusable(source->file, std::move(e));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-10-01 05:55:32 +08:00
|
|
|
// If type merging failed, ignore the symbols.
|
Re-land "[PDB] Merge types in parallel when using ghashing"
2020-10-01 05:55:51 +08:00
|
|
|
Error typeError = std::move(source->typeMergingError);
|
|
|
|
if (typeError) {
|
|
|
|
warnUnusable(source->file, std::move(typeError));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
addDebugSymbols(source);
|
2020-05-09 21:58:15 +08:00
|
|
|
}
|
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
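// Build a BulkPublic, the deferred-serialization form of a public symbol,
// from a linker-defined symbol.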
static pdb::BulkPublic createPublic(COFFLinkerContext &ctx, Defined *def) {
|
2020-05-04 00:29:03 +08:00
|
|
|
pdb::BulkPublic pub;
|
|
|
|
pub.Name = def->getName().data();
|
|
|
|
pub.NameLen = def->getName().size();
|
|
|
|
|
|
|
|
PublicSymFlags flags = PublicSymFlags::None;
|
2017-07-28 02:25:59 +08:00
|
|
|
if (auto *d = dyn_cast<DefinedCOFF>(def)) {
|
|
|
|
if (d->getCOFFSymbol().isFunctionDefinition())
|
2020-05-04 00:29:03 +08:00
|
|
|
flags = PublicSymFlags::Function;
|
2017-07-28 02:25:59 +08:00
|
|
|
} else if (isa<DefinedImportThunk>(def)) {
|
2020-05-04 00:29:03 +08:00
|
|
|
flags = PublicSymFlags::Function;
|
2017-07-28 02:25:59 +08:00
|
|
|
}
|
[PDB] Defer public serialization until PDB writing
This reduces peak memory on my test case from 1960.14MB to 1700.63MB
(-260MB, -13.2%) with no measurable impact on CPU time. I'm currently
working with a publics stream that is about 277MB. Before this change,
we would allocate 277MB of heap memory, serialize publics into them,
hold onto that heap memory, open the PDB, and commit into it. After
this change, we defer the serialization until commit time.
In the last change I made to public writing, I re-sorted the list of
publics multiple times in place to avoid allocating new temporary data
structures. Deferring serialization until later requires that we don't
reorder the publics. Instead of sorting the publics, I partially
construct the hash table data structures, store a publics index in them,
and then sort the hash table data structures. Later, I replace the index
with the symbol record offset.
This change also addresses a FIXME and moves the list of global and
public records from GSIHashStreamBuilder to GSIStreamBuilder. Now that
publics aren't being serialized, it makes even less sense to store them
as a list of CVSymbol records. The hash table used to deduplicate
globals is moved as well, since that is specific to globals, and not
publics.
Reviewed By: aganea, hans
Differential Revision: https://reviews.llvm.org/D81296
2020-06-05 09:57:24 +08:00
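A minimal sketch of the index-to-offset fixup described above (illustrative
names; HashRecord and layoutPublics are stand-ins, not the real
GSIStreamBuilder API):

#include <algorithm>
#include <cstdint>
#include <vector>

// Hash-table cell built before serialization: it temporarily stores an
// index into the publics array instead of a symbol-record offset.
struct HashRecord {
  uint32_t bucket;
  uint32_t pubIndexThenOffset; // publics index now, record offset later
};

void layoutPublics(std::vector<HashRecord> &records,
                   const std::vector<uint32_t> &recordSizes) {
  // Sort the hash records, not the publics themselves, so the
  // symbol-record order (and therefore the output bytes) stays
  // deterministic regardless of hash layout.
  std::stable_sort(records.begin(), records.end(),
                   [](const HashRecord &a, const HashRecord &b) {
                     return a.bucket < b.bucket;
                   });

  // Compute each record's offset in the symbol stream from its size.
  std::vector<uint32_t> offsets(recordSizes.size());
  uint32_t off = 0;
  for (size_t i = 0; i < recordSizes.size(); ++i) {
    offsets[i] = off;
    off += recordSizes[i];
  }

  // Replace the stored publics index with the final symbol-record offset.
  for (HashRecord &hr : records)
    hr.pubIndexThenOffset = offsets[hr.pubIndexThenOffset];
}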
|
|
|
pub.setFlags(flags);
|
2017-07-28 02:25:59 +08:00
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
OutputSection *os = ctx.getOutputSection(def->getChunk());
|
2017-07-28 02:25:59 +08:00
|
|
|
assert(os && "all publics should be in final image");
|
|
|
|
pub.Offset = def->getRVA() - os->getRVA();
|
[PDB] Defer public serialization until PDB writing
2020-06-05 09:57:24 +08:00
|
|
|
pub.Segment = os->sectionIndex;
|
2017-07-28 02:25:59 +08:00
|
|
|
return pub;
|
|
|
|
}
|
|
|
|
|
2017-07-14 08:14:58 +08:00
|
|
|
// Add all object files to the PDB. Merge .debug$T sections into IpiData and
|
|
|
|
// TpiData.
|
|
|
|
void PDBLinker::addObjectsToPDB() {
|
2021-09-17 07:48:26 +08:00
|
|
|
ScopedTimer t1(ctx.addObjectsTimer);
|
2019-03-23 06:07:27 +08:00
|
|
|
|
2020-05-09 21:58:15 +08:00
|
|
|
// Create module descriptors
|
2021-09-17 07:48:26 +08:00
|
|
|
for_each(ctx.objFileInstances, [&](ObjFile *obj) { createModuleDBI(obj); });
|
2019-03-23 06:07:27 +08:00
|
|
|
|
Re-land "[PDB] Merge types in parallel when using ghashing"
2020-10-01 05:55:51 +08:00
|
|
|
// Reorder dependency type sources to come first.
|
2021-09-17 07:48:26 +08:00
|
|
|
tMerger.sortDependencies();
|
[PDB] Merge types in parallel when using ghashing
2020-05-15 05:02:36 +08:00
|
|
|
|
Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.
This reverts commit 8d250ac3cd48d0f17f9314685a85e77895c05351.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f..
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
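To make that concrete, here is a minimal sketch of the old sequential
scheme, using std::unordered_map and plain integers in place of
llvm::DenseMap and TypeIndex (illustrative, not the actual LLD code):

  #include <cstdint>
  #include <unordered_map>

  // Returns the destination type index for a record with the given ghash,
  // allocating the next free index the first time the ghash is seen.
  uint32_t mergeOneType(std::unordered_map<uint64_t, uint32_t> &seen,
                        uint64_t ghash, uint32_t &nextTypeIndex) {
    auto [it, inserted] = seen.try_emplace(ghash, nextTypeIndex);
    if (inserted)
      ++nextTypeIndex; // first occurrence: append to the destination stream
    return it->second; // duplicates map to the index of the first copy
  }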
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
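Here is a minimal sketch of the cell encoding and the CAS insertion loop
described above. It is illustrative only: the real GHashCell also carries
the isItem bit, a real table must reserve an empty-cell encoding that
cannot collide with <0, 0>, and the helper names are invented.

  #include <atomic>
  #include <cstdint>
  #include <vector>

  using GloballyHashedType = uint64_t; // stand-in for the real 8-byte ghash

  // Side tables of precomputed hashes, mirroring
  // tpiSources[tpiSrcIndex]->ghashes[typeIndex]:
  extern std::vector<std::vector<GloballyHashedType>> ghashes;

  constexpr uint64_t kEmptyCell = 0; // simplification, see note above

  uint64_t makeCell(uint32_t srcIdx, uint32_t typeIdx) {
    return (uint64_t(srcIdx) << 32) | typeIdx; // smaller = higher priority
  }

  // Returns the slot this record landed on. Saving that position per record
  // is what later allows the final PDB indices to be written back.
  size_t insertGHash(std::vector<std::atomic<uint64_t>> &table,
                     uint32_t srcIdx, uint32_t typeIdx) {
    GloballyHashedType h = ghashes[srcIdx][typeIdx];
    uint64_t newCell = makeCell(srcIdx, typeIdx);
    size_t idx = size_t(h % table.size()); // start of the linear probe
    while (true) {
      uint64_t cur = table[idx].load(std::memory_order_acquire);
      if (cur != kEmptyCell &&
          ghashes[uint32_t(cur >> 32)][uint32_t(cur)] != h) {
        idx = (idx + 1) % table.size(); // different type: keep probing
        continue;
      }
      if (cur != kEmptyCell && cur <= newCell)
        return idx; // an equal- or higher-priority duplicate already won
      // Empty cell, or we have higher priority: try to install our cell.
      if (table[idx].compare_exchange_weak(cur, newCell,
                                           std::memory_order_acq_rel))
        return idx;
      // Lost a race; loop around and re-examine the updated cell.
    }
  }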
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
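Continuing the sketch, the copy-and-sort step can reuse the cell encoding
with the isItem bit in the topmost position, so that a single sort both
orders cells by priority and groups type records apart from item records
(illustrative; the source index is narrowed to 31 bits to make room):

  #include <algorithm>

  uint64_t makeCellWithItemBit(uint32_t srcIdx, uint32_t typeIdx,
                               bool isItem) {
    return (uint64_t(isItem) << 63) | (uint64_t(srcIdx & 0x7fffffffu) << 32) |
           typeIdx;
  }

  std::vector<uint64_t> collectSortedCells(
      std::vector<std::atomic<uint64_t>> &table) {
    std::vector<uint64_t> cells;
    for (auto &slot : table)
      if (uint64_t c = slot.load(std::memory_order_relaxed))
        cells.push_back(c); // keep only the winning, non-empty cells
    std::sort(cells.begin(), cells.end()); // priority order = stream order
    return cells;
  }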
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
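A rough sketch of those two steps, continuing the example above
(insertPositions holds, per source and per record, the slot returned by
insertGHash; all names are invented):

  void assignAndMapIndices(
      std::vector<std::atomic<uint64_t>> &table,
      const std::vector<uint64_t> &sortedCells,
      const std::vector<std::vector<size_t>> &insertPositions,
      std::vector<std::vector<uint32_t>> &indexMaps) {
    // Serial: a winning record's saved insertion position is exactly the
    // slot it occupies, so its PDB type index can be stored back into it.
    for (size_t pdbIdx = 0; pdbIdx != sortedCells.size(); ++pdbIdx) {
      uint32_t srcIdx = uint32_t(sortedCells[pdbIdx] >> 32) & 0x7fffffffu;
      uint32_t typeIdx = uint32_t(sortedCells[pdbIdx]);
      table[insertPositions[srcIdx][typeIdx]].store(
          uint64_t(pdbIdx), std::memory_order_relaxed);
    }
    // Parallelizable per source: every record, duplicate or not, reads its
    // final index from the slot it probed to during insertion.
    for (size_t src = 0; src != insertPositions.size(); ++src) // parallel
      for (size_t ti = 0; ti != insertPositions[src].size(); ++ti)
        indexMaps[src][ti] = uint32_t(
            table[insertPositions[src][ti]].load(std::memory_order_relaxed));
  }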
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
2020-10-01 05:55:51 +08:00

  // Merge type information from input files using global type hashing.
  if (config->debugGHashes)
    tMerger.mergeTypesWithGHash();

  // Merge dependencies and then regular objects.
  for_each(tMerger.dependencySources,
           [&](TpiSource *source) { addDebug(source); });
  for_each(tMerger.objectSources, [&](TpiSource *source) { addDebug(source); });

  builder.getStringTableBuilder().setStrings(pdbStrTab);
  t1.stop();

  // Construct TPI and IPI stream contents.
  ScopedTimer t2(ctx.tpiStreamLayoutTimer);
  // Collect all the merged types.
  if (config->debugGHashes) {
    addGHashTypeInfo(ctx, builder);
  } else {
    addTypeInfo(builder.getTpiBuilder(), tMerger.getTypeTable());
    addTypeInfo(builder.getIpiBuilder(), tMerger.getIDTable());
  }
  t2.stop();

  if (config->showSummary) {
    for_each(ctx.tpiSourceList, [&](TpiSource *source) {
      nbTypeRecords += source->nbTypeRecords;
      nbTypeRecordsBytes += source->nbTypeRecordsBytes;
    });
  }
}

void PDBLinker::addPublicsToPDB() {
  ScopedTimer t3(ctx.publicsLayoutTimer);
  // Compute the public symbols.
  auto &gsiBuilder = builder.getGsiBuilder();
  std::vector<pdb::BulkPublic> publics;
  ctx.symtab.forEachSymbol([&publics, this](Symbol *s) {
    // Only emit external, defined, live symbols that have a chunk. Static,
    // non-external symbols do not appear in the symbol table.
    auto *def = dyn_cast<Defined>(s);
    if (def && def->isLive() && def->getChunk()) {
      // Don't emit a public symbol for coverage data symbols. LLVM code
      // coverage (and PGO) create a __profd_ and __profc_ symbol for every
      // function. C++ mangled names are long, and tend to dominate symbol size.
      // Including these names triples the size of the public stream, which
      // results in bloated PDB files. These symbols generally are not helpful
      // for debugging, so suppress them.
      StringRef name = def->getName();
      if (name.data()[0] == '_' && name.data()[1] == '_') {
        // Drop the '_' prefix for x86.
        if (config->machine == I386)
          name = name.drop_front(1);
        if (name.startswith("__profd_") || name.startswith("__profc_") ||
            name.startswith("__covrec_")) {
          return;
        }
      }
      publics.push_back(createPublic(ctx, def));
    }
  });

  if (!publics.empty()) {
    publicSymbols = publics.size();
    gsiBuilder.addPublicSymbols(std::move(publics));
  }
}

void PDBLinker::printStats() {
  if (!config->showSummary)
    return;

  SmallString<256> buffer;
  raw_svector_ostream stream(buffer);

  stream << center_justify("Summary", 80) << '\n'
         << std::string(80, '-') << '\n';

  auto print = [&](uint64_t v, StringRef s) {
    stream << format_decimal(v, 15) << " " << s << '\n';
  };

  print(ctx.objFileInstances.size(),
        "Input OBJ files (expanded from all cmd-line inputs)");
  print(ctx.typeServerSourceMappings.size(), "PDB type server dependencies");
  print(ctx.precompSourceMappings.size(), "Precomp OBJ dependencies");
  print(nbTypeRecords, "Input type records");
  print(nbTypeRecordsBytes, "Input type records bytes");
  print(builder.getTpiBuilder().getRecordCount(), "Merged TPI records");
  print(builder.getIpiBuilder().getRecordCount(), "Merged IPI records");
  print(pdbStrTab.size(), "Output PDB strings");
  print(globalSymbols, "Global symbol records");
  print(moduleSymbols, "Module symbol records");
  print(publicSymbols, "Public symbol records");

  auto printLargeInputTypeRecs = [&](StringRef name,
                                     ArrayRef<uint32_t> recCounts,
                                     TypeCollection &records) {
    // Figure out which type indices were responsible for the most duplicate
    // bytes in the input files. These should be frequently emitted LF_CLASS and
    // LF_FIELDLIST records.
    struct TypeSizeInfo {
      uint32_t typeSize;
      uint32_t dupCount;
      TypeIndex typeIndex;
      uint64_t totalInputSize() const { return uint64_t(dupCount) * typeSize; }
      bool operator<(const TypeSizeInfo &rhs) const {
        if (totalInputSize() == rhs.totalInputSize())
          return typeIndex < rhs.typeIndex;
        return totalInputSize() < rhs.totalInputSize();
      }
    };
    SmallVector<TypeSizeInfo, 0> tsis;
    for (auto e : enumerate(recCounts)) {
      TypeIndex typeIndex = TypeIndex::fromArrayIndex(e.index());
      uint32_t typeSize = records.getType(typeIndex).length();
      uint32_t dupCount = e.value();
      tsis.push_back({typeSize, dupCount, typeIndex});
    }

    if (!tsis.empty()) {
      stream << "\nTop 10 types responsible for the most " << name
             << " input:\n";
      stream << "       index     total bytes   count  size\n";
      llvm::sort(tsis);
      unsigned i = 0;
      for (const auto &tsi : reverse(tsis)) {
        stream << formatv("  {0,10:X}: {1,14:N} = {2,5:N} * {3,6:N}\n",
                          tsi.typeIndex.getIndex(), tsi.totalInputSize(),
                          tsi.dupCount, tsi.typeSize);
        if (++i >= 10)
          break;
      }
      stream
          << "Run llvm-pdbutil to print details about a particular record:\n";
      stream << formatv("llvm-pdbutil dump -{0}s -{0}-index {1:X} {2}\n",
                        (name == "TPI" ? "type" : "id"),
                        tsis.back().typeIndex.getIndex(), config->pdbPath);
    }
  };

  if (!config->debugGHashes) {
    // FIXME: Reimplement for ghash.
    printLargeInputTypeRecs("TPI", tMerger.tpiCounts, tMerger.getTypeTable());
    printLargeInputTypeRecs("IPI", tMerger.ipiCounts, tMerger.getIDTable());
  }

  message(buffer);
}

void PDBLinker::addNatvisFiles() {
  for (StringRef file : config->natvisFiles) {
    ErrorOr<std::unique_ptr<MemoryBuffer>> dataOrErr =
        MemoryBuffer::getFile(file);
    if (!dataOrErr) {
      warn("Cannot open input file: " + file);
      continue;
    }
    std::unique_ptr<MemoryBuffer> data = std::move(*dataOrErr);

    // Can't use takeBuffer() here since addInjectedSource() takes ownership.
    if (driver->tar)
      driver->tar->append(relativeToRoot(data->getBufferIdentifier()),
                          data->getBuffer());

    builder.addInjectedSource(file, std::move(data));
  }
}

void PDBLinker::addNamedStreams() {
  for (const auto &streamFile : config->namedStreams) {
    const StringRef stream = streamFile.getKey(), file = streamFile.getValue();
    ErrorOr<std::unique_ptr<MemoryBuffer>> dataOrErr =
        MemoryBuffer::getFile(file);
    if (!dataOrErr) {
      warn("Cannot open input file: " + file);
      continue;
    }
    std::unique_ptr<MemoryBuffer> data = std::move(*dataOrErr);
    exitOnErr(builder.addNamedStream(stream, data->getBuffer()));
    driver->takeBuffer(std::move(data));
  }
}

static codeview::CPUType toCodeViewMachine(COFF::MachineTypes machine) {
  switch (machine) {
  case COFF::IMAGE_FILE_MACHINE_AMD64:
    return codeview::CPUType::X64;
  case COFF::IMAGE_FILE_MACHINE_ARM:
    return codeview::CPUType::ARM7;
  case COFF::IMAGE_FILE_MACHINE_ARM64:
    return codeview::CPUType::ARM64;
  case COFF::IMAGE_FILE_MACHINE_ARMNT:
    return codeview::CPUType::ARMNT;
  case COFF::IMAGE_FILE_MACHINE_I386:
    return codeview::CPUType::Intel80386;
  default:
    llvm_unreachable("Unsupported CPU Type");
  }
}

// Mimic MSVC which surrounds arguments containing whitespace with quotes.
// Double double-quotes are handled, so that the resulting string can be
// executed again on the cmd-line.
static std::string quote(ArrayRef<StringRef> args) {
  std::string r;
  r.reserve(256);
  for (StringRef a : args) {
    if (!r.empty())
      r.push_back(' ');
    bool hasWS = a.contains(' ');
    bool hasQ = a.contains('"');
    if (hasWS || hasQ)
      r.push_back('"');
    if (hasQ) {
      SmallVector<StringRef, 4> s;
      a.split(s, '"');
      r.append(join(s, "\"\""));
    } else {
      r.append(std::string(a));
    }
    if (hasWS || hasQ)
      r.push_back('"');
  }
  return r;
}
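// Illustrative example (not from the source): quote({"/out:a b.exe", "x.obj"})
// yields `"/out:a b.exe" x.obj`, and an argument containing a quote, such as
// a"b, becomes "a""b" so the resulting string re-parses to the original.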

static void fillLinkerVerRecord(Compile3Sym &cs) {
  cs.Machine = toCodeViewMachine(config->machine);
  // Interestingly, if we set the string to 0.0.0.0, then when trying to view
  // local variables WinDbg emits an error that private symbols are not present.
  // By setting this to a valid MSVC linker version string, local variables are
  // displayed properly. As such, even though it is not representative of
  // LLVM's version information, we need this for compatibility.
  cs.Flags = CompileSym3Flags::None;
  cs.VersionBackendBuild = 25019;
  cs.VersionBackendMajor = 14;
  cs.VersionBackendMinor = 10;
  cs.VersionBackendQFE = 0;

  // MSVC also sets the frontend to 0.0.0.0 since this is specifically for the
  // linker module (which is by definition a backend), so we don't need to do
  // anything here. Also, it seems we can use "LLVM Linker" for the linker name
  // without any problems. Only the backend version has to be hardcoded to a
  // magic number.
  cs.VersionFrontendBuild = 0;
  cs.VersionFrontendMajor = 0;
  cs.VersionFrontendMinor = 0;
  cs.VersionFrontendQFE = 0;
  cs.Version = "LLVM Linker";
  cs.setLanguage(SourceLanguage::Link);
}

static void addCommonLinkerModuleSymbols(StringRef path,
                                         pdb::DbiModuleDescriptorBuilder &mod) {
  ObjNameSym ons(SymbolRecordKind::ObjNameSym);
  EnvBlockSym ebs(SymbolRecordKind::EnvBlockSym);
  Compile3Sym cs(SymbolRecordKind::Compile3Sym);
  fillLinkerVerRecord(cs);

  ons.Name = "* Linker *";
  ons.Signature = 0;

  ArrayRef<StringRef> args = makeArrayRef(config->argv).drop_front();
  std::string argStr = quote(args);
  ebs.Fields.push_back("cwd");
  SmallString<64> cwd;
  if (config->pdbSourcePath.empty())
lld-link: Use /pdbsourcepath: for more places when present.
/pdbsourcepath: was added in https://reviews.llvm.org/D48882 to make it
possible to have relative paths in the debug info that clang-cl writes.
lld-link then makes the paths absolute at link time, which debuggers require.
This way, clang-cl's output is independent of the absolute path of the build
directory, which is useful for cacheability in distcc-like systems.
This patch extends /pdbsourcepath: (if passed) to also be used for:
1. The "cwd" stored in the env block in the pdb is /pdbsourcepath: if present
2. The "exe" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
3. The "pdb" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
4. For making absolute paths to .obj files referenced from the pdb
/pdbsourcepath: is now useful in three scenarios (the first one already working
before this change):
1. When building with full debug info, passing the real build dir to
/pdbsourcepath: allows clang-cl's output to be independent
of the build directory path. This patch effectively doesn't change
behavior for this use case (assuming the cwd is the build dir).
2. When building without compile-time debug info but linking with /debug,
a fake fixed /pdbsourcepath: can be passed to get symbolized stacks
while making the pdb and exe independent of the current build dir.
For this to work, lld-link needs to be invoked with relative paths for
the lld-link invocation itself (for "exe"), for the pdb output name, the exe
output name (for "pdb"), and the obj input files, and no absolute path
must appear on the link command (for "cmd" in the pdb's env block).
Since no full debug info is present, it doesn't matter that the absolute
path doesn't exist on disk -- we only get symbols in stacks.
3. When building production builds with full debug info that don't have
local changes, and that get source indexed and their pdbs get uploaded
to a symbol server. /pdbsourcepath: again makes the build output independent
of the current directory, and the fixed path passed to /pdbsourcepath: can
be given the source indexing transform so that it gets mapped to a
repository path. This has the same requirements as 2.
This patch also makes it possible to create PDB files containing Windows-style
absolute paths when cross-compiling on a POSIX system.
Differential Revision: https://reviews.llvm.org/D53021
llvm-svn: 344061
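As an illustrative sketch of scenario 2 (the file names and the source path
here are hypothetical), such an invocation could look like:
  lld-link /debug /pdbsourcepath:c:\src\project main.obj /out:main.exe /pdb:main.pdb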
2018-10-10 01:52:25 +08:00

    sys::fs::current_path(cwd);
  else
    cwd = config->pdbSourcePath;
  ebs.Fields.push_back(cwd);
  ebs.Fields.push_back("exe");
  SmallString<64> exe = config->argv[0];
|
lld-link: Use /pdbsourcepath: for more places when present.
/pdbsourcepath: was added in https://reviews.llvm.org/D48882 to make it
possible to have relative paths in the debug info that clang-cl writes.
lld-link then makes the paths absolute at link time, which debuggers require.
This way, clang-cl's output is independent of the absolute path of the build
directory, which is useful for cacheability in distcc-like systems.
This patch extends /pdbsourcepath: (if passed) to also be used for:
1. The "cwd" stored in the env block in the pdb is /pdbsourcepath: if present
2. The "exe" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
3. The "pdb" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
4. For making absolute paths to .obj files referenced from the pdb
/pdbsourcepath: is now useful in three scenarios (the first one already working
before this change):
1. When building with full debug info, passing the real build dir to
/pdbsourcepath: allows having clang-cl's output to be independent
of the build directory path. This patch effectively doesn't change
behavior for this use case (assuming the cwd is the build dir).
2. When building without compile-time debug info but linking with /debug,
a fake fixed /pdbsourcepath: can be passed to get symbolized stacks
while making the pdb and exe independent of the current build dir.
For this to work, lld-link needs to be invoked with relative paths for
the lld-link invocation itself (for "exe"), for the pdb output name, the exe
output name (for "pdb"), and the obj input files, and no absolute path
must appear on the link command (for "cmd" in the pdb's env block).
Since no full debug info is present, it doesn't matter that the absolute
path doesn't exist on disk -- we only get symbols in stacks.
3. When building production builds with full debug info that don't have
local changes, and that get source indexed and their pdbs get uploaded
to a symbol server. /pdbsourcepath: again makes the build output independent
of the current directory, and the fixed path passed to /pdbsourcepath: can
be given the source indexing transform so that it gets mapped to a
repository path. This has the same requirements as 2.
This patch also makes it possible to create PDB files containing Windows-style
absolute paths when cross-compiling on a POSIX system.
Differential Revision: https://reviews.llvm.org/D53021
llvm-svn: 344061
2018-10-10 01:52:25 +08:00
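As a concrete illustration of scenario 2 (the file and project names below are invented for this sketch, not taken from a real build), every path on the link command line stays relative and only the /pdbsourcepath: value is absolute:
$ clang-cl /c main.cc /Fomain.obj
$ lld-link main.obj /debug /out:app.exe /pdb:app.pdb /pdbsourcepath:c:\fake\build
Since no compile-time debug info is present, c:\fake\build never has to exist on disk; it only anchors the paths recorded in app.pdb, keeping the outputs independent of the real build directory.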
|
|
|
pdbMakeAbsolute(exe);
|
2017-08-12 04:46:47 +08:00
|
|
|
ebs.Fields.push_back(exe);
|
2017-07-11 05:01:37 +08:00
|
|
|
ebs.Fields.push_back("pdb");
|
|
|
|
ebs.Fields.push_back(path);
|
|
|
|
ebs.Fields.push_back("cmd");
|
|
|
|
ebs.Fields.push_back(argStr);
|
2022-01-21 03:53:18 +08:00
|
|
|
llvm::BumpPtrAllocator &bAlloc = lld::bAlloc();
|
2017-07-11 05:01:37 +08:00
|
|
|
mod.addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
ons, bAlloc, CodeViewContainer::Pdb));
|
2017-07-11 05:01:37 +08:00
|
|
|
mod.addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
cs, bAlloc, CodeViewContainer::Pdb));
|
2017-07-11 05:01:37 +08:00
|
|
|
mod.addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
ebs, bAlloc, CodeViewContainer::Pdb));
|
2017-07-11 05:01:37 +08:00
|
|
|
}
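For orientation, the env block assembled above is a flat list of alternating field names and values; a sketch of what it might contain for a typical link (all values here are invented examples, not real lld output):
  cwd  C:\src\build
  exe  C:\src\build\bin\lld-link.exe
  pdb  C:\src\build\app.pdb
  cmd  /out:app.exe /debug main.obj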
|
|
|
|
|
2019-03-30 04:25:34 +08:00
|
|
|
static void addLinkerModuleCoffGroup(PartialSection *sec,
|
|
|
|
pdb::DbiModuleDescriptorBuilder &mod,
|
2020-05-05 07:05:12 +08:00
|
|
|
OutputSection &os) {
|
2019-03-30 04:25:34 +08:00
|
|
|
// If there's a section, there's at least one chunk
|
|
|
|
assert(!sec->chunks.empty());
|
|
|
|
const Chunk *firstChunk = *sec->chunks.begin();
|
|
|
|
const Chunk *lastChunk = *sec->chunks.rbegin();
|
|
|
|
|
|
|
|
// Emit COFF group
|
|
|
|
CoffGroupSym cgs(SymbolRecordKind::CoffGroupSym);
|
|
|
|
cgs.Name = sec->name;
|
|
|
|
cgs.Segment = os.sectionIndex;
|
|
|
|
cgs.Offset = firstChunk->getRVA() - os.getRVA();
|
|
|
|
cgs.Size = lastChunk->getRVA() + lastChunk->getSize() - firstChunk->getRVA();
|
|
|
|
cgs.Characteristics = sec->characteristics;
|
|
|
|
|
|
|
|
// Somehow .idata sections & section groups in the debug symbol stream have
|
|
|
|
// the "write" flag set. However the section header for the corresponding
|
|
|
|
// .idata section doesn't have it.
|
|
|
|
if (cgs.Name.startswith(".idata"))
|
|
|
|
cgs.Characteristics |= llvm::COFF::IMAGE_SCN_MEM_WRITE;
|
|
|
|
|
|
|
|
mod.addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2022-01-21 03:53:18 +08:00
|
|
|
cgs, bAlloc(), CodeViewContainer::Pdb));
|
2019-03-30 04:25:34 +08:00
|
|
|
}
|
|
|
|
|
2017-08-12 04:46:28 +08:00
|
|
|
static void addLinkerModuleSectionSymbol(pdb::DbiModuleDescriptorBuilder &mod,
|
2020-05-05 07:05:12 +08:00
|
|
|
OutputSection &os) {
|
2017-08-12 04:46:28 +08:00
|
|
|
SectionSym sym(SymbolRecordKind::SectionSym);
|
2017-08-12 04:46:47 +08:00
|
|
|
sym.Alignment = 12; // 2^12 = 4KB
|
2018-04-20 05:48:37 +08:00
|
|
|
sym.Characteristics = os.header.Characteristics;
|
2017-08-12 04:46:28 +08:00
|
|
|
sym.Length = os.getVirtualSize();
|
2018-03-16 05:13:46 +08:00
|
|
|
sym.Name = os.name;
|
2017-08-12 04:46:28 +08:00
|
|
|
sym.Rva = os.getRVA();
|
|
|
|
sym.SectionNumber = os.sectionIndex;
|
|
|
|
mod.addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2022-01-21 03:53:18 +08:00
|
|
|
sym, bAlloc(), CodeViewContainer::Pdb));
|
2019-03-30 04:25:34 +08:00
|
|
|
|
|
|
|
// Skip COFF groups in MinGW mode because they add a significant footprint to the
|
|
|
|
// PDB, due to each function being in its own section
|
|
|
|
if (config->mingw)
|
|
|
|
return;
|
|
|
|
|
|
|
|
// Output COFF groups for individual chunks of this section.
|
|
|
|
for (PartialSection *sec : os.contribSections) {
|
2020-05-05 07:05:12 +08:00
|
|
|
addLinkerModuleCoffGroup(sec, mod, os);
|
2019-03-30 04:25:34 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Add all import files as modules to the PDB.
|
2021-09-17 07:48:26 +08:00
|
|
|
void PDBLinker::addImportFilesToPDB() {
|
|
|
|
if (ctx.importFileInstances.empty())
|
2019-03-30 04:25:34 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
std::map<std::string, llvm::pdb::DbiModuleDescriptorBuilder *> dllToModuleDbi;
|
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
for (ImportFile *file : ctx.importFileInstances) {
|
2019-03-30 04:25:34 +08:00
|
|
|
if (!file->live)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (!file->thunkSym)
|
|
|
|
continue;
|
|
|
|
|
2019-03-30 05:24:19 +08:00
|
|
|
if (!file->thunkLive)
|
|
|
|
continue;
|
|
|
|
|
2019-03-30 04:25:34 +08:00
|
|
|
std::string dll = StringRef(file->dllName).lower();
|
|
|
|
llvm::pdb::DbiModuleDescriptorBuilder *&mod = dllToModuleDbi[dll];
|
|
|
|
if (!mod) {
|
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = builder.getDbiBuilder();
|
|
|
|
SmallString<128> libPath = file->parentName;
|
|
|
|
pdbMakeAbsolute(libPath);
|
|
|
|
sys::path::native(libPath);
|
|
|
|
|
|
|
|
// Name modules similarly to MSVC's link.exe.
|
|
|
|
// The first module is the simple dll filename
|
|
|
|
llvm::pdb::DbiModuleDescriptorBuilder &firstMod =
|
|
|
|
exitOnErr(dbiBuilder.addModuleInfo(file->dllName));
|
|
|
|
firstMod.setObjFileName(libPath);
|
|
|
|
pdb::SectionContrib sc =
|
2021-09-17 07:48:26 +08:00
|
|
|
createSectionContrib(ctx, nullptr, llvm::pdb::kInvalidStreamIndex);
|
2019-03-30 04:25:34 +08:00
|
|
|
firstMod.setFirstSectionContrib(sc);
|
|
|
|
|
|
|
|
// The second module is where the import stream goes.
|
|
|
|
mod = &exitOnErr(dbiBuilder.addModuleInfo("Import:" + file->dllName));
|
|
|
|
mod->setObjFileName(libPath);
|
|
|
|
}
|
|
|
|
|
|
|
|
DefinedImportThunk *thunk = cast<DefinedImportThunk>(file->thunkSym);
|
2019-05-10 05:21:22 +08:00
|
|
|
Chunk *thunkChunk = thunk->getChunk();
|
2021-09-17 07:48:26 +08:00
|
|
|
OutputSection *thunkOS = ctx.getOutputSection(thunkChunk);
|
2019-03-30 04:25:34 +08:00
|
|
|
|
|
|
|
ObjNameSym ons(SymbolRecordKind::ObjNameSym);
|
|
|
|
Compile3Sym cs(SymbolRecordKind::Compile3Sym);
|
|
|
|
Thunk32Sym ts(SymbolRecordKind::Thunk32Sym);
|
|
|
|
ScopeEndSym es(SymbolRecordKind::ScopeEndSym);
|
|
|
|
|
|
|
|
ons.Name = file->dllName;
|
|
|
|
ons.Signature = 0;
|
|
|
|
|
|
|
|
fillLinkerVerRecord(cs);
|
|
|
|
|
|
|
|
ts.Name = thunk->getName();
|
|
|
|
ts.Parent = 0;
|
|
|
|
ts.End = 0;
|
|
|
|
ts.Next = 0;
|
|
|
|
ts.Thunk = ThunkOrdinal::Standard;
|
2019-05-10 05:21:22 +08:00
|
|
|
ts.Length = thunkChunk->getSize();
|
|
|
|
ts.Segment = thunkOS->sectionIndex;
|
|
|
|
ts.Offset = thunkChunk->getRVA() - thunkOS->getRVA();
|
2019-03-30 04:25:34 +08:00
|
|
|
|
2022-01-21 03:53:18 +08:00
|
|
|
llvm::BumpPtrAllocator &bAlloc = lld::bAlloc();
|
2019-03-30 04:25:34 +08:00
|
|
|
mod->addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
ons, bAlloc, CodeViewContainer::Pdb));
|
2019-03-30 04:25:34 +08:00
|
|
|
mod->addSymbol(codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
cs, bAlloc, CodeViewContainer::Pdb));
|
2019-03-30 04:25:34 +08:00
|
|
|
|
|
|
|
CVSymbol newSym = codeview::SymbolSerializer::writeOneSymbol(
|
2020-05-05 07:05:12 +08:00
|
|
|
ts, bAlloc, CodeViewContainer::Pdb);
|
2021-03-11 06:51:52 +08:00
|
|
|
|
|
|
|
// Write ptrEnd for the S_THUNK32.
|
|
|
|
ScopeRecord *thunkSymScope =
|
|
|
|
getSymbolScopeFields(const_cast<uint8_t *>(newSym.data().data()));
|
2019-03-30 04:25:34 +08:00
|
|
|
|
|
|
|
mod->addSymbol(newSym);
|
|
|
|
|
2020-05-05 07:05:12 +08:00
|
|
|
newSym = codeview::SymbolSerializer::writeOneSymbol(es, bAlloc,
|
2019-03-30 04:25:34 +08:00
|
|
|
CodeViewContainer::Pdb);
|
2021-03-11 06:51:52 +08:00
|
|
|
thunkSymScope->ptrEnd = mod->getNextSymbolOffset();
|
2019-03-30 04:25:34 +08:00
|
|
|
|
|
|
|
mod->addSymbol(newSym);
|
|
|
|
|
|
|
|
pdb::SectionContrib sc =
|
2021-09-17 07:48:26 +08:00
|
|
|
createSectionContrib(ctx, thunk->getChunk(), mod->getModuleIndex());
|
2019-03-30 04:25:34 +08:00
|
|
|
mod->setFirstSectionContrib(sc);
|
|
|
|
}
|
2017-08-12 04:46:28 +08:00
|
|
|
}
|
|
|
|
|
2016-11-12 08:00:51 +08:00
|
|
|
// Creates a PDB file.
|
2021-09-17 07:48:26 +08:00
|
|
|
void lld::coff::createPDB(COFFLinkerContext &ctx,
|
2020-02-20 09:05:42 +08:00
|
|
|
ArrayRef<uint8_t> sectionTable,
|
|
|
|
llvm::codeview::DebugInfo *buildId) {
|
2021-09-17 07:48:26 +08:00
|
|
|
ScopedTimer t1(ctx.totalPdbLinkTimer);
|
|
|
|
PDBLinker pdb(ctx);
|
2019-07-11 13:40:30 +08:00
|
|
|
|
[LLD COFF/PDB] Incrementally update the build id.
Previously, our algorithm to compute a build id involved hashing the
executable and storing that as the GUID in the CV Debug Record chunk,
and setting the age to 1.
This breaks down in one very obvious case: a user adds some newlines to
a file, rebuilds, but changes nothing else. This causes new line
information and new file checksums to get written to the PDB, meaning
that the debug info is different, but the generated code would be the
same, so we would write the same build id over again with an age of 1.
Anyone using a symbol cache would have a problem now, because the
debugger would open the executable, look at the age and guid, find a
matching PDB in the symbol cache and then load it. It would never copy
the new PDB to the symbol cache.
This patch implements the canonical Windows algorithm for updating
a build id, which is to check the existing executable first, and
re-use an existing GUID while bumping the age if it already
exists.
Differential Revision: https://reviews.llvm.org/D36758
llvm-svn: 310961
2017-08-16 05:31:41 +08:00
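A minimal sketch of the algorithm this message describes, under the assumption of a hypothetical helper that has already read the previous executable's debug record; note that the code below has since moved on to hashing the PDB contents into the GUID instead:
  // Sketch: reuse the previous GUID and bump the age so debuggers and symbol
  // caches treat the new PDB as a newer revision of the same build.
  static void updateBuildId(llvm::codeview::DebugInfo *buildId,
                            llvm::Optional<llvm::codeview::DebugInfo> prev) {
    if (prev && prev->Signature.CVSignature == OMF::Signature::PDB70) {
      memcpy(buildId->PDB70.Signature, prev->PDB70.Signature, 16);
      buildId->PDB70.Age = prev->PDB70.Age + 1; // same GUID, newer age
    } else {
      buildId->PDB70.Age = 1; // no previous output: fresh GUID, start at age 1
    }
  }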
|
|
|
pdb.initialize(buildId);
|
2017-07-14 08:14:58 +08:00
|
|
|
pdb.addObjectsToPDB();
|
2021-09-17 07:48:26 +08:00
|
|
|
pdb.addImportFilesToPDB();
|
|
|
|
pdb.addSections(sectionTable);
|
2018-03-24 03:57:25 +08:00
|
|
|
pdb.addNatvisFiles();
|
2020-04-08 04:16:22 +08:00
|
|
|
pdb.addNamedStreams();
|
2020-05-04 00:29:03 +08:00
|
|
|
pdb.addPublicsToPDB();
|
2019-07-11 13:40:30 +08:00
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
ScopedTimer t2(ctx.diskCommitTimer);
|
2018-09-16 02:37:22 +08:00
|
|
|
codeview::GUID guid;
|
|
|
|
pdb.commit(&guid);
|
|
|
|
memcpy(&buildId->PDB70.Signature, &guid, 16);
|
2019-07-11 13:40:30 +08:00
|
|
|
|
2019-03-15 02:45:08 +08:00
|
|
|
t2.stop();
|
|
|
|
t1.stop();
|
|
|
|
pdb.printStats();
|
2017-07-14 08:14:58 +08:00
|
|
|
}
|
|
|
|
|
2018-09-16 02:37:22 +08:00
|
|
|
void PDBLinker::initialize(llvm::codeview::DebugInfo *buildId) {
|
2021-10-30 23:22:55 +08:00
|
|
|
exitOnErr(builder.initialize(config->pdbPageSize));
|
2016-09-16 02:55:18 +08:00
|
|
|
|
2018-09-16 02:37:22 +08:00
|
|
|
buildId->Signature.CVSignature = OMF::Signature::PDB70;
|
|
|
|
// Signature is set to a hash of the PDB contents when the PDB is done.
|
|
|
|
memset(buildId->PDB70.Signature, 0, 16);
|
|
|
|
buildId->PDB70.Age = 1;
|
|
|
|
|
2016-10-06 06:08:58 +08:00
|
|
|
// Create streams in MSF for predefined streams, namely
|
|
|
|
// PDB, TPI, DBI and IPI.
|
|
|
|
for (int i = 0; i < (int)pdb::kSpecialStreamCount; ++i)
|
|
|
|
exitOnErr(builder.getMsfBuilder().addStream(0));
|
2016-09-16 02:55:18 +08:00
|
|
|
|
2016-09-17 06:51:17 +08:00
|
|
|
// Add an Info stream.
|
|
|
|
auto &infoBuilder = builder.getInfoBuilder();
|
|
|
|
infoBuilder.setVersion(pdb::PdbRaw_ImplVer::PdbImplVC70);
|
2018-09-16 02:37:22 +08:00
|
|
|
infoBuilder.setHashPDBContentsToGUID(true);
|
2016-09-16 02:55:18 +08:00
|
|
|
|
2017-07-07 13:04:36 +08:00
|
|
|
// Add an empty DBI stream.
|
2017-06-13 23:49:13 +08:00
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = builder.getDbiBuilder();
|
2018-09-16 02:37:22 +08:00
|
|
|
dbiBuilder.setAge(buildId->PDB70.Age);
|
Fix some differences between lld and MSVC generated PDBs.
A couple of things were different about our generated PDBs.
1) We were outputting the wrong Version on the PDB Stream.
The version we were setting was newer than what MSVC is setting.
It's not clear what the implications are, but we change LLD
to use PdbImplVC70, as MSVC does.
2) For the optional debug stream indices in the DBI Stream, we
were outputting 0 to mean "the stream is not present". MSVC
outputs uint16_t(-1), which is the "correct" way to specify
that a stream is not present. So we fix that as well.
3) We were setting the PDB Stream signature to 0. This is supposed
to be the result of calling time(nullptr). Although this leads
to non-deterministic builds, a better way to solve that is by
having a command line option explicitly for generating a
reproducible build, and have the default behavior of lld-link
match the default behavior of link.
To test this, I'm making use of the new and improved `pdb diff`
sub command. To make it suitable for writing tests against, I had
to modify the diff subcommand slightly to print less verbose output.
Previously it would always print | <column> | <value1> | <value2> |
which is quite verbose, and the values are fragile. All we really
want to know is "did we produce the same value as link?" So I added
command line options to print a single character representing the
result status (different, identical, equivalent), and another to
hide the value display. Note that, just by inspecting the diff output
used to write the test, you can see some things that are obviously
wrong. That is just reflective of the fact that this is the state
of affairs today, not that we're asserting that this is "correct".
We can use this as a starting point to discover differences, fix
them, and update the test.
Differential Revision: https://reviews.llvm.org/D35086
llvm-svn: 307422
2017-07-08 02:45:56 +08:00
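For point 2 above, the "not present" marker is simply the all-ones 16-bit stream index; a small sketch of the convention (llvm::pdb::kInvalidStreamIndex is the same constant used elsewhere in this file):
  // An optional debug stream slot holding uint16_t(-1), i.e. 0xFFFF, means
  // "no such stream", matching MSVC's output.
  static bool isStreamPresent(uint16_t streamIdx) {
    return streamIdx != llvm::pdb::kInvalidStreamIndex;
  }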
|
|
|
dbiBuilder.setVersionHeader(pdb::PdbDbiV70);
|
2018-04-17 04:42:06 +08:00
|
|
|
dbiBuilder.setMachineType(config->machine);
|
2018-04-17 02:17:13 +08:00
|
|
|
// Technically we are not link.exe 14.11, but there are known cases where
|
|
|
|
// debugging tools on Windows expect Microsoft-specific version numbers or
|
|
|
|
// they fail to work at all. Since we know we produce PDBs that are
|
|
|
|
// compatible with LINK 14.11, we set that version number here.
|
|
|
|
dbiBuilder.setBuildNumber(14, 11);
|
2017-07-14 08:14:58 +08:00
|
|
|
}
|
2016-10-07 06:52:01 +08:00
|
|
|
|
2021-09-17 07:48:26 +08:00
|
|
|
void PDBLinker::addSections(ArrayRef<uint8_t> sectionTable) {
|
2017-07-14 08:14:58 +08:00
|
|
|
// It's not entirely clear what this is, but the * Linker * module uses it.
|
2017-08-04 05:15:09 +08:00
|
|
|
pdb::DbiStreamBuilder &dbiBuilder = builder.getDbiBuilder();
|
2017-07-14 08:14:58 +08:00
|
|
|
nativePath = config->pdbPath;
|
lld-link: Use /pdbsourcepath: for more places when present.
/pdbsourcepath: was added in https://reviews.llvm.org/D48882 to make it
possible to have relative paths in the debug info that clang-cl writes.
lld-link then makes the paths absolute at link time, which debuggers require.
This way, clang-cl's output is independent of the absolute path of the build
directory, which is useful for cacheability in distcc-like systems.
This patch extends /pdbsourcepath: (if passed) to also be used for:
1. The "cwd" stored in the env block in the pdb is /pdbsourcepath: if present
2. The "exe" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
3. The "pdb" stored in the env block in the pdb is made absolute relative
to /pdbsourcepath: instead of the cwd
4. For making absolute paths to .obj files referenced from the pdb
/pdbsourcepath: is now useful in three scenarios (the first one already working
before this change):
1. When building with full debug info, passing the real build dir to
/pdbsourcepath: allows having clang-cl's output to be independent
of the build directory path. This patch effectively doesn't change
behavior for this use case (assuming the cwd is the build dir).
2. When building without compile-time debug info but linking with /debug,
a fake fixed /pdbsourcepath: can be passed to get symbolized stacks
while making the pdb and exe independent of the current build dir.
For this to work, lld-link needs to be invoked with relative paths for
the lld-link invocation itself (for "exe"), for the pdb output name, the exe
output name (for "pdb"), and the obj input files, and no absolute path
must appear on the link command (for "cmd" in the pdb's env block).
Since no full debug info is present, it doesn't matter that the absolute
path doesn't exist on disk -- we only get symbols in stacks.
3. When building production builds with full debug info that don't have
local changes, and that get source indexed and their pdbs get uploaded
to a symbol server. /pdbsourcepath: again makes the build output independent
of the current directory, and the fixed path passed to /pdbsourcepath: can
be given the source indexing transform so that it gets mapped to a
repository path. This has the same requirements as 2.
This patch also makes it possible to create PDB files containing Windows-style
absolute paths when cross-compiling on a POSIX system.
Differential Revision: https://reviews.llvm.org/D53021
llvm-svn: 344061
2018-10-10 01:52:25 +08:00
|
|
|
pdbMakeAbsolute(nativePath);
|
2017-07-14 08:14:58 +08:00
|
|
|
uint32_t pdbFilePathNI = dbiBuilder.addECName(nativePath);
|
2017-07-07 13:04:36 +08:00
|
|
|
auto &linkerModule = exitOnErr(dbiBuilder.addModuleInfo("* Linker *"));
|
|
|
|
linkerModule.setPdbFilePathNI(pdbFilePathNI);
|
2020-05-05 07:05:12 +08:00
|
|
|
addCommonLinkerModuleSymbols(nativePath, linkerModule);
|
2016-11-16 09:10:46 +08:00
|
|
|
|
2017-08-04 05:15:09 +08:00
|
|
|
// Add section contributions. They must be ordered by ascending RVA.
|
2021-09-17 07:48:26 +08:00
|
|
|
for (OutputSection *os : ctx.outputSections) {
|
2020-05-05 07:05:12 +08:00
|
|
|
addLinkerModuleSectionSymbol(linkerModule, *os);
|
2018-09-25 18:59:29 +08:00
|
|
|
for (Chunk *c : os->chunks) {
|
2018-04-21 02:00:46 +08:00
|
|
|
pdb::SectionContrib sc =
|
2021-09-17 07:48:26 +08:00
|
|
|
createSectionContrib(ctx, c, linkerModule.getModuleIndex());
|
2018-04-21 02:00:46 +08:00
|
|
|
builder.getDbiBuilder().addSectionContrib(sc);
|
|
|
|
}
|
2017-08-12 04:46:28 +08:00
|
|
|
}
|
2017-08-04 05:15:09 +08:00
|
|
|
|
2019-03-19 03:13:23 +08:00
|
|
|
// The * Linker * first section contrib is only used along with /INCREMENTAL,
|
|
|
|
// to provide trampoline thunks for incremental function patching. Set this
|
|
|
|
// as "unused" because LLD doesn't support /INCREMENTAL link.
|
|
|
|
pdb::SectionContrib sc =
|
2021-09-17 07:48:26 +08:00
|
|
|
createSectionContrib(ctx, nullptr, llvm::pdb::kInvalidStreamIndex);
|
2019-03-19 03:13:23 +08:00
|
|
|
linkerModule.setFirstSectionContrib(sc);
|
|
|
|
|
2017-08-04 05:15:09 +08:00
|
|
|
// Add Section Map stream.
|
|
|
|
ArrayRef<object::coff_section> sections = {
|
|
|
|
(const object::coff_section *)sectionTable.data(),
|
|
|
|
sectionTable.size() / sizeof(object::coff_section)};
|
2020-01-24 04:11:50 +08:00
|
|
|
dbiBuilder.createSectionMap(sections);
|
2017-08-04 05:15:09 +08:00
|
|
|
|
2016-10-12 03:45:07 +08:00
|
|
|
// Add COFF section header stream.
|
|
|
|
exitOnErr(
|
|
|
|
dbiBuilder.addDbgStream(pdb::DbgHeaderType::SectionHdr, sectionTable));
|
2017-07-14 08:14:58 +08:00
|
|
|
}
|
2016-10-12 03:45:07 +08:00
|
|
|
|
2018-09-16 02:37:22 +08:00
|
|
|
void PDBLinker::commit(codeview::GUID *guid) {
|
2021-05-19 03:34:02 +08:00
|
|
|
// Print an error and continue if PDB writing fails. This is done mainly so
|
|
|
|
// the user can see the output of /time and /summary, which is very helpful
|
|
|
|
// when trying to figure out why a PDB file is too large.
|
|
|
|
if (Error e = builder.commit(config->pdbPath, guid)) {
|
|
|
|
checkError(std::move(e));
|
|
|
|
error("failed to write PDB file " + Twine(config->pdbPath));
|
|
|
|
}
|
2015-12-05 07:11:05 +08:00
|
|
|
}
|
2018-04-18 07:32:33 +08:00
|
|
|
|
|
|
|
static uint32_t getSecrelReloc() {
|
|
|
|
switch (config->machine) {
|
|
|
|
case AMD64:
|
|
|
|
return COFF::IMAGE_REL_AMD64_SECREL;
|
|
|
|
case I386:
|
|
|
|
return COFF::IMAGE_REL_I386_SECREL;
|
|
|
|
case ARMNT:
|
|
|
|
return COFF::IMAGE_REL_ARM_SECREL;
|
|
|
|
case ARM64:
|
|
|
|
return COFF::IMAGE_REL_ARM64_SECREL;
|
|
|
|
default:
|
|
|
|
llvm_unreachable("unknown machine type");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Try to find a line table for the given offset addr into the given chunk c.
|
|
|
|
// If a line table was found, the line table, the string and checksum tables
|
|
|
|
// that are used to interpret the line table, and the offset of addr in the line
|
|
|
|
// table are stored in the output arguments. Returns whether a line table was
|
|
|
|
// found.
|
|
|
|
static bool findLineTable(const SectionChunk *c, uint32_t addr,
|
2020-05-15 02:21:53 +08:00
|
|
|
DebugStringTableSubsectionRef &cvStrTab,
|
2018-04-18 07:32:33 +08:00
|
|
|
DebugChecksumsSubsectionRef &checksums,
|
|
|
|
DebugLinesSubsectionRef &lines,
|
|
|
|
uint32_t &offsetInLinetable) {
|
|
|
|
ExitOnError exitOnErr;
|
|
|
|
uint32_t secrelReloc = getSecrelReloc();
|
2019-07-11 13:40:30 +08:00
|
|
|
|
2018-04-18 07:32:33 +08:00
|
|
|
for (SectionChunk *dbgC : c->file->getDebugChunks()) {
|
|
|
|
if (dbgC->getSectionName() != ".debug$S")
|
|
|
|
continue;
|
|
|
|
|
2019-07-16 16:26:38 +08:00
|
|
|
// Build a mapping of SECREL relocations in dbgC that refer to `c`.
|
2018-04-18 07:32:33 +08:00
|
|
|
DenseMap<uint32_t, uint32_t> secrels;
|
2019-05-04 04:17:14 +08:00
|
|
|
for (const coff_relocation &r : dbgC->getRelocs()) {
|
2018-04-18 07:32:33 +08:00
|
|
|
if (r.Type != secrelReloc)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (auto *s = dyn_cast_or_null<DefinedRegular>(
|
|
|
|
c->file->getSymbols()[r.SymbolTableIndex]))
|
|
|
|
if (s->getChunk() == c)
|
|
|
|
secrels[r.VirtualAddress] = s->getValue();
|
|
|
|
}
|
|
|
|
|
|
|
|
ArrayRef<uint8_t> contents =
|
2019-02-23 09:46:18 +08:00
|
|
|
SectionChunk::consumeDebugMagic(dbgC->getContents(), ".debug$S");
|
2018-04-18 07:32:33 +08:00
|
|
|
DebugSubsectionArray subsections;
|
|
|
|
BinaryStreamReader reader(contents, support::little);
|
|
|
|
exitOnErr(reader.readArray(subsections, contents.size()));
|
|
|
|
|
|
|
|
for (const DebugSubsectionRecord &ss : subsections) {
|
|
|
|
switch (ss.kind()) {
|
|
|
|
case DebugSubsectionKind::StringTable: {
|
2020-05-15 02:21:53 +08:00
|
|
|
assert(!cvStrTab.valid() &&
|
2018-04-18 07:32:33 +08:00
|
|
|
"Encountered multiple string table subsections!");
|
2020-05-15 02:21:53 +08:00
|
|
|
exitOnErr(cvStrTab.initialize(ss.getRecordData()));
|
2018-04-18 07:32:33 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case DebugSubsectionKind::FileChecksums:
|
|
|
|
assert(!checksums.valid() &&
|
|
|
|
"Encountered multiple checksum subsections!");
|
|
|
|
exitOnErr(checksums.initialize(ss.getRecordData()));
|
|
|
|
break;
|
|
|
|
case DebugSubsectionKind::Lines: {
|
|
|
|
ArrayRef<uint8_t> bytes;
|
|
|
|
auto ref = ss.getRecordData();
|
|
|
|
exitOnErr(ref.readLongestContiguousChunk(0, bytes));
|
|
|
|
size_t offsetInDbgC = bytes.data() - dbgC->getContents().data();
|
|
|
|
|
|
|
|
// Check whether this line table refers to c.
|
|
|
|
auto i = secrels.find(offsetInDbgC);
|
|
|
|
if (i == secrels.end())
|
|
|
|
break;
|
|
|
|
|
|
|
|
// Check whether this line table covers addr in c.
|
|
|
|
DebugLinesSubsectionRef linesTmp;
|
|
|
|
exitOnErr(linesTmp.initialize(BinaryStreamReader(ref)));
|
|
|
|
uint32_t offsetInC = i->second + linesTmp.header()->RelocOffset;
|
|
|
|
if (addr < offsetInC || addr >= offsetInC + linesTmp.header()->CodeSize)
|
|
|
|
break;
|
|
|
|
|
|
|
|
assert(!lines.header() &&
|
|
|
|
"Encountered multiple line tables for function!");
|
|
|
|
exitOnErr(lines.initialize(BinaryStreamReader(ref)));
|
|
|
|
offsetInLinetable = addr - offsetInC;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2020-05-15 02:21:53 +08:00
|
|
|
if (cvStrTab.valid() && checksums.valid() && lines.header())
|
2018-04-18 07:32:33 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Use CodeView line tables to resolve a file and line number for the given
|
2019-10-15 17:46:33 +08:00
|
|
|
// offset into the given chunk and return them, or None if a line table was
|
2018-04-18 07:32:33 +08:00
|
|
|
// not found.
|
2019-10-15 17:18:18 +08:00
|
|
|
Optional<std::pair<StringRef, uint32_t>>
|
2020-02-20 09:05:42 +08:00
|
|
|
lld::coff::getFileLineCodeView(const SectionChunk *c, uint32_t addr) {
|
2018-04-18 07:32:33 +08:00
|
|
|
ExitOnError exitOnErr;
|
|
|
|
|
2020-05-15 02:21:53 +08:00
|
|
|
DebugStringTableSubsectionRef cvStrTab;
|
2018-04-18 07:32:33 +08:00
|
|
|
DebugChecksumsSubsectionRef checksums;
|
|
|
|
DebugLinesSubsectionRef lines;
|
|
|
|
uint32_t offsetInLinetable;
|
|
|
|
|
2020-05-15 02:21:53 +08:00
|
|
|
if (!findLineTable(c, addr, cvStrTab, checksums, lines, offsetInLinetable))
|
2019-10-15 17:18:18 +08:00
|
|
|
return None;
|
2018-04-18 07:32:33 +08:00
|
|
|
|
2019-01-05 05:49:22 +08:00
|
|
|
Optional<uint32_t> nameIndex;
|
|
|
|
Optional<uint32_t> lineNumber;
|
2021-11-13 06:22:00 +08:00
|
|
|
for (const LineColumnEntry &entry : lines) {
|
2018-04-18 07:32:33 +08:00
|
|
|
for (const LineNumberEntry &ln : entry.LineNumbers) {
|
2019-01-05 05:49:22 +08:00
|
|
|
LineInfo li(ln.Flags);
|
2018-04-18 07:32:33 +08:00
|
|
|
if (ln.Offset > offsetInLinetable) {
|
2019-01-05 05:49:22 +08:00
|
|
|
if (!nameIndex) {
|
|
|
|
nameIndex = entry.NameIndex;
|
|
|
|
lineNumber = li.getStartLine();
|
|
|
|
}
|
2018-04-18 07:32:33 +08:00
|
|
|
StringRef filename =
|
2020-05-15 02:21:53 +08:00
|
|
|
exitOnErr(getFileName(cvStrTab, checksums, *nameIndex));
|
2019-10-15 17:18:18 +08:00
|
|
|
return std::make_pair(filename, *lineNumber);
|
2018-04-18 07:32:33 +08:00
|
|
|
}
|
|
|
|
nameIndex = entry.NameIndex;
|
|
|
|
lineNumber = li.getStartLine();
|
|
|
|
}
|
|
|
|
}
|
2019-01-05 05:49:22 +08:00
|
|
|
if (!nameIndex)
|
2019-10-15 17:18:18 +08:00
|
|
|
return None;
|
2020-05-15 02:21:53 +08:00
|
|
|
StringRef filename = exitOnErr(getFileName(cvStrTab, checksums, *nameIndex));
|
2019-10-15 17:18:18 +08:00
|
|
|
return std::make_pair(filename, *lineNumber);
|
2018-04-18 07:32:33 +08:00
|
|
|
}
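A hypothetical caller of the public entry point might look like the following sketch (formatLocation and its use of llvm::utohexstr are illustrative assumptions, not lld's actual diagnostics code):
  // Sketch: format a source location for an address inside a section chunk,
  // falling back to a raw hex offset when no CodeView line table covers it.
  static std::string formatLocation(const SectionChunk *c, uint32_t addr) {
    if (Optional<std::pair<StringRef, uint32_t>> loc =
            lld::coff::getFileLineCodeView(c, addr))
      return (loc->first + ":" + Twine(loc->second)).str();
    return "0x" + llvm::utohexstr(addr);
  }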
|