Commit Graph

253 Commits

Author SHA1 Message Date
Reid Kleckner 12dd8df38b [PDB] Do not record PGO or coverage public symbols
These symbols are long, and they tend to cause the PDB file size to
overflow. They are generally not necessary when debugging problems in
user code.

This change reduces the size of chrome.dll.pdb with coverage from
6,937,108,480 bytes to 4,690,210,816 bytes.
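
As a sketch of the filter involved (the prefix list below is an assumption
for illustration, not quoted from the patch):

  // Illustrative only: exclude instrumentation symbols from the publics.
  static bool includeInPublics(llvm::StringRef name) {
    return !name.startswith("__profc_") &&   // PGO counters (assumed prefix)
           !name.startswith("__profd_") &&   // PGO data (assumed prefix)
           !name.startswith("__covrec_");    // coverage records (assumed prefix)
  }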

Differential Revision: https://reviews.llvm.org/D102719
2021-05-19 12:41:31 -07:00
Reid Kleckner ac2226b0f5 [PDB] Improve error handling when writes fail
Handle PDB writing errors like any other error in LLD: emit an error and
continue. This allows the linker to print timing data and summary data
after linking, which can be helpful for finding PDB size problems. Also
report how large the file would have been.

Example output:

lld-link: error: Output data is larger than 4 GiB. File size would have been 6,937,108,480
lld-link: error: failed to write PDB file ./chrome.dll.pdb
                                    Summary
--------------------------------------------------------------------------------
          33282 Input OBJ files (expanded from all cmd-line inputs)
              4 PDB type server dependencies
              0 Precomp OBJ dependencies
       33396931 Input type records
... snip ...
  Input File Reading:           59756 ms ( 45.5%)
  GC:                            7500 ms (  5.7%)
  ICF:                           3336 ms (  2.5%)
  Code Layout:                   6329 ms (  4.8%)
  PDB Emission (Cumulative):    46192 ms ( 35.2%)
    Add Objects:                27609 ms ( 21.0%)
      Type Merging:             16740 ms ( 12.8%)
      Symbol Merging:           10761 ms (  8.2%)
    Publics Stream Layout:       9383 ms (  7.1%)
    TPI Stream Layout:           1678 ms (  1.3%)
    Commit to Disk:              3461 ms (  2.6%)
--------------------------------------------------
Total Link Time:               131244 ms (100.0%)
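
A rough sketch of the error-handling idiom described above, using LLVM's
Error API (the builder and variable names are assumed, not the patch's code):

  // Report a failed PDB commit as an ordinary LLD error instead of exiting,
  // so the /time and /summary reports above are still printed.
  std::string pdbPath = "./chrome.dll.pdb";               // illustrative
  if (llvm::Error e = builder.commit(pdbPath, &guid)) {   // names assumed
    llvm::handleAllErrors(std::move(e), [&](const llvm::ErrorInfoBase &ei) {
      error("failed to write PDB file " + pdbPath + ": " + ei.message());
    });
  }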

Differential Revision: https://reviews.llvm.org/D102713
2021-05-18 13:17:17 -07:00
Reid Kleckner b552adf8b3 [PDB] Improve warning for corrupt debug info
The S_[GL]PROC32_ID symbol records are supposed to point to function ID
records. If they don't, they are corrupt. The warning message here was
very technical, but a user has encountered it in the wild. Add some more
information and some more testing.
2021-03-11 14:28:09 -08:00
Reid Kleckner b69db4a7ab Re-land "[PDB] Defer relocating .debug$S until commit time and parallelize it"
This reverts commit bacf9cf2c5 and
reinstates commit 1a9bd5b813.

Reverting this commit did not appear to make the problem go away, so we
can go ahead and reland it.
2021-03-10 15:14:09 -08:00
Nico Weber e6d1f261a5 [lld-link] Add /reproduce: support for several flags
/reproduce: now works correctly with:
- /call-graph-ordering-file:
- /def:
- /natvis:
- /order:
- /pdbstream:

I went through all instances of MemoryBuffer::getFile() and made sure
everything that didn't already do so called takeBuffer().

For natvis, that wasn't possible since DebugInfo/PDB wants to take
ownership of the natvis buffer. For that case, I'm manually adding the
tar file entry.

/natvis: and /pdbstream: are slightly awkward, since createResponseFile()
always adds these flags to the response file but createPDB() (which
ultimately adds the files referenced by the flags) is only called if
/debug is also passed. So when using /natvis: without /debug with
/reproduce:, lld won't warn, but when linking using the response
file from the archive, it won't find the natvis file since it's not
in the tar. This isn't a new issue though, and after this patch things
at least work when using /natvis: _with_ /debug and /reproduce:.
(Same for /pdbstream:)

Differential Revision: https://reviews.llvm.org/D97212
2021-02-22 16:52:49 -05:00
Reid Kleckner bacf9cf2c5 Revert "[PDB] Defer relocating .debug$S until commit time and parallelize it"
This reverts commit 1a9bd5b813.

I suspect that this patch may have caused https://crbug.com/1171438.
2021-01-28 13:17:27 -08:00
Reid Kleckner 1a9bd5b813 Reland "[PDB] Defer relocating .debug$S until commit time and parallelize it"
This reverts commit 5b7aef6eb4 and relands
6529d7c5a4.

The ASan error was debugged and determined to be the fault of an invalid
object file input in our test suite, which was fixed by my last change.
LLD's project policy is that it assumes input objects are valid, so I
have added a comment about this assumption to the relocation bounds
check.
2021-01-20 11:53:43 -08:00
Mitch Phillips 5b7aef6eb4 Revert "[PDB] Defer relocating .debug$S until commit time and parallelize it"
This reverts commit 6529d7c5a4.

Reason: Broke the ASan buildbots.
http://lab.llvm.org:8011/#/builders/99/builds/1567
2021-01-19 11:45:48 -08:00
Reid Kleckner 6529d7c5a4 [PDB] Defer relocating .debug$S until commit time and parallelize it
This is a pretty classic optimization. Instead of processing symbol
records and copying them to temporary storage, do a first pass to
measure how large the module symbol stream will be, and then copy the
data into place in the PDB file. This requires deferring relocation until
much later, which accounts for most of the complexity in this patch.

This patch avoids copying the contents of all live .debug$S sections
into heap memory, which is worth about 20% of private memory usage when
making PDBs. However, this is not an unmitigated performance win,
because it can be faster to read dense, temporary, heap data than it is
to iterate symbol records in object file backed memory a second time.

Results on release chrome.dll:
peak mem: 5164.89MB -> 4072.19MB (-1,092.7MB, -21.2%)
wall-j1:  0m30.844s -> 0m32.094s (slightly slower)
wall-j3:  0m20.968s -> 0m20.312s (slightly faster)
wall-j8:  0m19.062s -> 0m17.672s (meaningfully faster)

I gathered similar numbers for a debug, component build of content.dll
in Chrome, and the performance impact of this change was in the noise.
The memory usage reduction was visible and similar.

Because of the new parallelism in the PDB commit phase, more cores makes
the new approach faster. I'm assuming that most C++ developer machines
these days are at least quad core, so I think this is a win.
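
The measure-then-write pattern, sketched with assumed helper names:

  // Pass 1: compute the total module symbol stream size without copying.
  uint64_t streamSize = 0;
  for (SectionChunk *sec : debugChunks)
    streamSize += calcSymbolStreamSize(*sec);          // assumed helper
  // Reserve bytes in the output PDB, then fill them in at commit time.
  llvm::MutableArrayRef<uint8_t> storage = reserveBytes(streamSize);
  uint64_t offset = 0;
  // Pass 2: copy and relocate in place; chunks write disjoint slices, so
  // this loop can run in parallel.
  for (SectionChunk *sec : debugChunks)
    offset += writeAndRelocateSymbols(*sec, storage.slice(offset));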

Differential Revision: https://reviews.llvm.org/D94267
2021-01-12 17:46:29 -08:00
Amy Huang 1363dfaf31 [CodeView] Avoid emitting empty debug globals subsection.
In https://reviews.llvm.org/D89072 I added static const data members
to the debug subsection for globals. It skipped emitting an S_CONSTANT if it
didn't have a value, which meant the subsection could be empty.

This patch fixes the empty subsection issue.
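
The guard amounts to something like the following (names assumed):

  // Only emit the globals subsection if at least one record was added.
  if (!globalsSubsection->empty())                     // assumed accessor
    builder.addDebugSubsection(globalsSubsection);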

Differential Revision: https://reviews.llvm.org/D92049
2020-11-25 16:13:32 -08:00
Reid Kleckner 32cc962ef3 [COFF] Move ghash timers under the "add objects" timer
I had envisioned the ghash step as a big up-front step, but as currently
written, the timers are nested, and we are notionally adding types from
objects, so we might as well arrange the timers this way.
2020-10-31 11:08:59 -07:00
Alexandre Ganea 55b97a6d2a [LLD][COFF] Add more type record information to /summary
This adds the following two new lines to /summary:

      21351 Input OBJ files (expanded from all cmd-line inputs)
         61 PDB type server dependencies
         38 Precomp OBJ dependencies
 1420669231 Input type records         <<<<
78665073382 Input type records bytes   <<<<
    8801393 Merged TPI records
    3177158 Merged IPI records
      59194 Output PDB strings
   71576766 Global symbol records
   25416935 Module symbol records
    2103431 Public symbol records

Differential Revision: https://reviews.llvm.org/D88703
2020-10-02 09:36:11 -04:00
Reid Kleckner 5d46d7e8b2 [PDB] Use one func id DenseMap instead of per-source maps, NFC
This avoids some DenseMap copies when /Zi is in use, and results in
fewer data structures.

Differential Revision: https://reviews.llvm.org/D88617
2020-10-01 12:22:27 -07:00
Reid Kleckner 5519e4da83 Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.

This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.

Original commit message:
-----------------------------------------

This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
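
That serial scheme, condensed (helper names assumed):

  // Old approach: one global map, serial check-then-insert per record.
  llvm::DenseMap<GloballyHashedType, TypeIndex> seen;
  llvm::SmallVector<TypeIndex, 0> remap;   // source -> destination index
  uint32_t nextIndex = 0x1000;             // first non-simple type index
  for (const auto &[ghash, record] : inputTypeRecords) {  // names assumed
    auto [it, inserted] = seen.try_emplace(ghash, TypeIndex(nextIndex));
    if (inserted) {
      appendToDestinationStream(record);                  // assumed helper
      ++nextIndex;
    }
    remap.push_back(it->second);
  }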

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
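
A condensed sketch of the deterministic insertion (linear probing and the
ghash equality check are omitted; the exact bit layout is assumed):

  #include <atomic>
  // One cell: <sourceIndex, typeIndex> packed into 64 bits, 0 = empty.
  // Lower values win, so earlier inputs take priority deterministically.
  void insertCell(std::atomic<uint64_t> &cell, uint64_t newCell) {
    uint64_t oldCell = cell.load(std::memory_order_relaxed);
    while (oldCell == 0 || newCell < oldCell) {
      // compare_exchange_weak refreshes oldCell on failure, so the loop
      // re-evaluates priority against whatever another thread stored.
      if (cell.compare_exchange_weak(oldCell, newCell))
        return;
    }
    // An equal- or higher-priority occupant is present; keep it.
  }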

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 15:44:38 -07:00
Reid Kleckner 8d250ac3cd Revert "[PDB] Merge types in parallel when using ghashing"
This reverts commit 49b3459930.
2020-09-30 14:55:32 -07:00
Reid Kleckner 49b3459930 [PDB] Merge types in parallel when using ghashing
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
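
Sketch of that copy-and-sort step (cell layout and accessors assumed):

  // Copy out live cells and sort by priority to fix the final ordering.
  std::vector<GHashCell> entries;
  for (const GHashCell &cell : table)
    if (!cell.isEmpty())                   // assumed accessor
      entries.push_back(cell);
  llvm::parallelSort(entries, [](const GHashCell &l, const GHashCell &r) {
    return l.data < r.data;  // lower <source, type> pair = higher priority
  });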

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 14:22:48 -07:00
Reid Kleckner 1e5b7e91aa [PDB] Split TypeServerSource and extend type index map lifetime
Extending the lifetime of these type index mappings does increase memory
usage (+2% in my case), but it decouples type merging from symbol
merging. This is a pre-requisite for two changes that I have in mind:
- parallel type merging: speeds up slow type merging
- deferred symbol merging: avoid heap allocating (relocating) all symbols

This eliminates CVIndexMap and moves its data into TpiSource. The maps
are also split into a SmallVector and ArrayRef component, so that the
ipiMap can alias the tpiMap for /Z7 object files, and so that both maps
can simply alias the PDB type server maps for /Zi files.

Splitting TypeServerSource establishes that all input types to be merged
can be identified with two 32-bit indices:
- The index of the TpiSource object
- The type index of the record
This is useful, because this information can be stored in a single
64-bit atomic word to enable concurrent hashtable insertion.

One last change is that now all object files with debugChunks get a
TpiSource, even if they have no type info. This avoids some null checks
and special cases.

Differential Revision: https://reviews.llvm.org/D87736
2020-09-17 11:53:10 -07:00
Alexandre Ganea 98e01f56b0 Revert "Re-Re-land: [CodeView] Add full repro to LF_BUILDINFO record"
This reverts commit a3036b3863.

As requested in: https://reviews.llvm.org/D80833#2221866
Bug report: https://crbug.com/1117026
2020-08-17 15:49:18 -04:00
Alexandre Ganea a3036b3863 Re-Re-land: [CodeView] Add full repro to LF_BUILDINFO record
This patch adds the missing information to the LF_BUILDINFO record, which allows for rebuilding a .CPP without any external dependency but the .OBJ itself (other than the compiler).

Some external tools that we are using (Recode, Live++) are extracting the information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO stores a full path to the compiler, the PWD (CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variables). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-08-10 13:36:30 -04:00
Alexandre Ganea b71499ac9e Revert "Re-land [CodeView] Add full repro to LF_BUILDINFO record"
This reverts commits add59ecb34 and 41d2813a5f.
2020-07-10 19:46:16 -04:00
Alexandre Ganea add59ecb34 Re-land [CodeView] Add full repro to LF_BUILDINFO record
This patch adds some missing information to the LF_BUILDINFO which allows for rebuilding an .OBJ without any external dependency but the .OBJ itself (other than the compiler executable).

Some tools need this information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO therefore stores a full path to the compiler, the PWD (which is the CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variable). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-07-10 13:59:28 -04:00
Reid Kleckner b7402edce3 [PDB] Defer public serialization until PDB writing
This reduces peak memory on my test case from 1960.14MB to 1700.63MB
(-260MB, -13.2%) with no measurable impact on CPU time. I'm currently
working with a publics stream that is about 277MB. Before this change,
we would allocate 277MB of heap memory, serialize publics into it,
hold onto that heap memory, open the PDB, and commit into it.  After
this change, we defer the serialization until commit time.

In the last change I made to public writing, I re-sorted the list of
publics multiple times in place to avoid allocating new temporary data
structures. Deferring serialization until later requires that we don't
reorder the publics. Instead of sorting the publics, I partially
construct the hash table data structures, store a publics index in them,
and then sort the hash table data structures. Later, I replace the index
with the symbol record offset.
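
Sketched with assumed names, the two-phase bucket construction looks like:

  // Pass 1: hash bucket entries refer to publics by index, not by offset.
  std::vector<std::vector<uint32_t>> buckets(4096);    // GSI bucket count
  for (uint32_t i = 0, e = publics.size(); i != e; ++i)
    buckets[hashPublicName(publics[i])].push_back(i);  // assumed helper
  // ... sort each bucket's contents deterministically ...
  // Pass 2: once record offsets are known at commit time, patch them in.
  for (std::vector<uint32_t> &bucket : buckets)
    for (uint32_t &entry : bucket)
      entry = symbolRecordOffset(publics[entry]);      // assumed helper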

This change also addresses a FIXME and moves the list of global and
public records from GSIHashStreamBuilder to GSIStreamBuilder. Now that
publics aren't being serialized, it makes even less sense to store them
as a list of CVSymbol records. The hash table used to deduplicate
globals is moved as well, since that is specific to globals, and not
publics.

Reviewed By: aganea, hans

Differential Revision: https://reviews.llvm.org/D81296
2020-06-30 11:28:04 -07:00
Alexandre Ganea 2ae0df5be7 [CodeView] Revert 8374bf4363 and 403f953792
This reverts:
8374bf4363 [CodeView] Fix generated command-line expansion in LF_BUILDINFO. Fix the 'pdb' entry which was previously a null reference, now an empty string.
403f953792 [CodeView] Add full repro to LF_BUILDINFO record

This is causing the lld/test/COFF/pdb-relative-source-lines.test to fail: http://lab.llvm.org:8011/builders/lld-x86_64-win/builds/1096/steps/test-check-all/logs/FAIL%3A%20lld%3A%3Apdb-relative-source-lines.test
And clang/test/CodeGen/debug-info-codeview-buildinfo.c fails as well: http://lab.llvm.org:8011/builders/clang-s390x-linux/builds/33346/steps/ninja%20check%201/logs/FAIL%3A%20Clang%3A%3Adebug-info-codeview-buildinfo.c
2020-06-18 16:18:46 -04:00
Alexandre Ganea 403f953792 [CodeView] Add full repro to LF_BUILDINFO record
This patch adds some missing information to the LF_BUILDINFO which allows for rebuilding an .OBJ without any external dependency but the .OBJ itself (other than the compiler executable).

Some tools need this information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO therefore stores a full path to the compiler, the PWD (which is the CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variable). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-06-18 09:17:15 -04:00
Reid Kleckner 45fd3e4688 [PDB] Share code to relocate .debug$[SF] sections, NFC
Sink relocateDebugChunk near the only call site.
2020-06-01 13:16:57 -07:00
Reid Kleckner 8f0a660030 [PDB] Use inlinee file checksum offsets directly
The inlinees section contains references to the file checksum table. The
file checksum table in the PDB must have the same layout as the file
checksum table in the object file, so all the existing file id
references should stay valid.

Previously, we would do this:
  for all inlined functions:
    - lookup filename from checksum and string table
    - make that filename absolute
    - look up the new file id for that filename up in the new checksum
      table

This led to pdbMakeAbsolute and remove_dots ending up in the hot path.
We should only need to absolutify the source path once, not once every
time we process an inline function from that source file.
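
One way to sketch the hoist (names assumed; the patch gets the same effect
by keeping the checksum table layout identical so file ids stay valid):

  // Absolutify each source path once, not once per inlined function.
  llvm::StringMap<uint32_t> absolutized;  // source name -> new file id
  uint32_t fileIdFor(llvm::StringRef name) {
    auto it = absolutized.find(name);
    if (it != absolutized.end())
      return it->second;
    llvm::SmallString<128> path(name);
    pdbMakeAbsolute(path);                     // now runs once per file
    uint32_t id = lookupChecksumOffset(path);  // assumed helper
    absolutized[name] = id;
    return id;
  }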

This speeds up linking chrome PGO stage 1 net_unittests.exe from 9.203s
to 8.500s (-7.6%). Looking just at time to process symbol records, it
goes from ~2000ms to ~1300ms, which is consistent with the overall
speedup of about 700ms. This will be less noticeable in debug builds,
which have fewer inlined function records.
2020-06-01 12:28:32 -07:00
Reid Kleckner 3774bcf9f8 [COFF] Fix var names cVStrTab->cvStrTab sXDataChunks->sxDataChunks
NFC
2020-05-14 11:23:07 -07:00
Reid Kleckner 54a335a2f6 [COFF] Move type merging to TpiSource::mergeDebugT virtual method
This paves the way to doing more things in parallel, and allows us to
order type sources in dependency order. PDBs and PCH objects have to be
loaded before object files which use them.

This is a rebase of the unapplied remaining changes in
https://reviews.llvm.org/D59226. I found it very challenging to rebase
this across the LLD variable name style change. I recall there was a
tool for that, but I didn't take the time to use it.

Reviewers: aganea, akhuang

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79672
2020-05-14 09:47:00 -07:00
Reid Kleckner 6da5672962 [LLD] Rename iDTable -> idTable, NFC
The variable renaming change did not handle this variable well.
2020-05-12 06:37:39 -07:00
Reid Kleckner 3b3e28a07c [PDB] Optimize public symbol processing
Reduces time to link PGO instrumented net_unittests.exe by 11% (9.766s ->
8.672s, best of three). Reduces peak memory by 65.7MB (2142.71MB ->
2076.95MB).

Use a more compact struct, BulkPublic, for faster sorting. Sort in
parallel. Construct the hash buckets in parallel. Try to use one vector
to hold all the publics instead of copying them from one to another.
Allocate all the memory needed to serialize publics up front, and then
serialize them in place in parallel.
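
A sketch of the compact record plus parallel sort (the field set and
comparator here are simplified assumptions):

  // Small, trivially copyable records sort faster than full CVSymbols.
  struct BulkPublic {
    const char *name;   // deduplicated name pointer
    uint32_t offset;    // section-relative address
    uint16_t segment;   // output section index
    uint16_t flags;
  };
  std::vector<BulkPublic> publics = gatherPublics();  // assumed helper
  llvm::parallelSort(publics.begin(), publics.end(),
                     [](const BulkPublic &l, const BulkPublic &r) {
                       return std::tie(l.segment, l.offset) <
                              std::tie(r.segment, r.offset);
                     });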

Reviewed By: aganea, hans

Differential Revision: https://reviews.llvm.org/D79467
2020-05-08 10:23:27 -07:00
Alexandre Ganea 6adc45d3fd [LLD][COFF] Move debug info for thread-local variables into PDB global stream
Before this patch, the debug record S_GTHREAD32, which represents global thread_local symbols, was emitted by LLD into the respective module stream. This makes Visual Studio unable to display thread_local symbols in the debugger.

After this patch, S_GTHREAD32 is moved into the globals stream. This matches MSVC behavior.

Differential Revision: https://reviews.llvm.org/D79005
2020-05-06 15:23:58 -04:00
Reid Kleckner 932f0276ea [Support] Move LLD's parallel algorithm wrappers to support
Essentially takes the lld/Common/Threads.h wrappers and moves them to
the llvm/Support/Parallel.h algorithm header.

The changes are:
- Remove policy parameter, since all clients use `par`.
- Rename the methods to `parallelSort` etc to match LLVM style, since
  they are no longer C++17 pstl compatible.
- Move algorithms from llvm::parallel:: to llvm::, since they have
  "parallel" in the name and are no longer overloads of the regular
  algorithms.
- Add range overloads
- Use the sequential algorithm directly when 1 thread is requested
  (skips task grouping)
- Fix the index type of parallelForEachN to size_t. Nobody in LLVM was
  using any other parameter, and it made overload resolution hard for
  for_each_n(par, 0, foo.size(), ...) because 0 is int, not size_t.

Remove Threads.h and update LLD for that.

This is a prerequisite for parallel public symbol processing in the PDB
library, which is in LLVM.
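
Hedged usage sketch of the renamed wrappers (call sites assumed):

  #include "llvm/Support/Parallel.h"
  std::vector<int> values = {3, 1, 2};
  llvm::parallelSort(values.begin(), values.end());
  llvm::parallelForEachN(0, values.size(),     // index type is size_t now
                         [&](size_t i) { values[i] *= 2; });
  llvm::parallelForEach(values, [](int &v) { ++v; });  // new range overload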

Reviewed By: MaskRay, aganea

Differential Revision: https://reviews.llvm.org/D79390
2020-05-05 15:21:05 -07:00
Martin Storsjö 5a1c30177f [LLD] [COFF] Fix a typo in an assert message. NFC.
2020-05-05 11:46:50 +03:00
Reid Kleckner 2868ee5b32 [PDB] Use the global BumpPtrAllocator
Profiling shows that time is spent destroying the allocator member of
PDBLinker, and that is unneeded.
2020-05-04 16:15:36 -07:00
Reid Kleckner 1e5793345b Re-land "[PDB] Avoid calling discoverTypeIndices for a known record kind"
Fixed bad usage of slice API causing assertion failures.

Reverts 810c8e9b49
Reinstates bd7ea8641e
2020-05-02 18:39:33 -07:00
Nico Weber 810c8e9b49 Revert "[PDB] Avoid calling discoverTypeIndices for a known record kind"
This reverts commit bd7ea8641e.
Breaks check-lld everywhere.
2020-05-02 21:06:06 -04:00
Reid Kleckner bd7ea8641e [PDB] Avoid calling discoverTypeIndices for a known record kind
This particular overload allocates memory, and we do this for every
S_[GL]PROC32_ID record. Instead, hardcode the offset of the type index
that we are looking for in the LF_[MEM]FUNC_ID record. We already
assume that looking up the item index finds a record of this kind.
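
As a sketch of the direct read (the offset is an assumption for
illustration):

  // Read the one type index we need straight from the record payload,
  // instead of running generic type index discovery, which allocates.
  uint32_t funcIdIndex = llvm::support::endian::read32le(
      record.content().data() + 4);  // assumed offset within LF_[MEM]FUNC_ID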
2020-05-02 15:51:08 -07:00
Eric Astor a39b14f0b4 [ms] Add new /PDBSTREAM option to lld-link allowing injection of streams into PDB files.
Summary:
/PDBSTREAM:<name>=<file> adds the contents of <file> to stream <name> in the resulting PDB.

This allows native uses with workflows that (for example) add srcsrv streams to PDB files to provide a location for the build's source files.

Results should be equivalent to linking with lld-link, then running Microsoft's pdbstr tool with the command line:
pdbstr.exe -w -p:<PDB LOCATION> -s:<name> -i:<file>
except in cases where the named stream overlaps with a default named stream, such as "/names". In those cases, the added stream will be overridden, making the /pdbstream option a no-op.
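
For example, a hypothetical invocation embedding a source-server stream:

  lld-link main.obj /debug /out:app.exe /pdb:app.pdb /pdbstream:srcsrv=srcsrv.txt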

Reviewers: thakis, rnk

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D77310
2020-04-07 16:19:38 -04:00
Benjamin Kramer b578f130a7 [COFF] Stabilize sort
Found by llvm::sort's expensive checks.
2020-03-28 21:38:50 +01:00
Reid Kleckner 8a310f40d0 Remove namespace lld { namespace coff { from COFF LLD cpp files
Instead, use `using namespace lld(::coff)`, and fully qualify the names
of free functions where they are defined in cpp files.

This effectively reverts d79c3be618 to follow the new style guide added
in 236fcbc21a.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D74882
2020-02-25 17:30:53 -08:00
Benjamin Kramer adcd026838 Make llvm::StringRef to std::string conversions explicit.
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.

This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.

This doesn't actually modify StringRef yet, I'll do that in a follow-up.
2020-01-28 23:25:25 +01:00
Reid Kleckner e5caa156b4 [PDB] Simplify API for making section map, NFC
Prevents API misuse described in PR44495
2020-01-23 12:15:21 -08:00
Reid Kleckner 783db78835 [PDB] Print the most redundant type record indices with /summary
Summary:
I used this information to motivate splitting up the Intrinsic::ID enum
(5d986953c8) and adding a key method to
clang::Sema (586f65d31f), which saved a
fair amount of object file size.

Example output for clang.pdb:

  Top 10 types responsible for the most TPI input bytes:
         index     total bytes   count     size
        0x3890:      8,671,220 = 1,805 *  4,804
       0xE13BE:      5,634,720 =   252 * 22,360
       0x6874C:      5,181,600 =   408 * 12,700
        0x2A1F:      4,520,528 = 1,574 *  2,872
       0x64BFF:      4,024,020 =   469 *  8,580
        0x1123:      4,012,020 = 2,157 *  1,860
        0x6952:      3,753,792 =   912 *  4,116
        0xC16F:      3,630,888 =   633 *  5,736
        0x69DD:      3,601,160 =   985 *  3,656
        0x678D:      3,577,904 =   319 * 11,216

In this case, we can see that record 0x3890 is responsible for ~8MB of
total object file size for objects in clang.

The user can then use llvm-pdbutil to find out what the record is:

  $ llvm-pdbutil dump -types -type-index 0x3890
                       Types (TPI Stream)
  ============================================================
    Showing 1 records.
       0x3890 | LF_FIELDLIST [size = 4804]
                - LF_STMEMBER [name = `WORDTYPE_MAX`, type = 0x1001, attrs = public]
                - LF_MEMBER [name = `U`, Type = 0x37F0, offset = 0, attrs = private]
                - LF_MEMBER [name = `BitWidth`, Type = 0x0075 (unsigned), offset = 8, attrs = private]
                - LF_METHOD [name = `APInt`, # overloads = 8, overload list = 0x3805]
  ...

In this case, we can see that these are members of the APInt class,
which is emitted in 1805 object files.

The next largest type is ASTContext:

  $ llvm-pdbutil dump -types -type-index 0xE13BE bin/clang.pdb
      0xE13BE | LF_FIELDLIST [size = 22360]
                - LF_BCLASS
                  type = 0x653EA, offset = 0, attrs = public
                - LF_MEMBER [name = `Types`, Type = 0x653EB, offset = 8, attrs = private]
                - LF_MEMBER [name = `ExtQualNodes`, Type = 0x653EC, offset = 24, attrs = private]
                - LF_MEMBER [name = `ComplexTypes`, Type = 0x653ED, offset = 48, attrs = private]
                - LF_MEMBER [name = `PointerTypes`, Type = 0x653EE, offset = 72, attrs = private]
  ...

ASTContext only appears 252 times, but the list of members is long, and
must be repeated everywhere it is used.

This was the output before I split Intrinsic::ID:

  Top 10 types responsible for the most TPI input:
        0x686C:     69,823,920 = 1,070 * 65,256
        0x686D:     69,819,640 = 1,070 * 65,252
        0x686E:     69,819,640 = 1,070 * 65,252
        0x686B:     16,371,000 = 1,070 * 15,300
        ...

These records were all lists of intrinsic enums.

Reviewers: MaskRay, ruiu

Subscribers: mgrang, zturner, thakis, hans, akhuang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71437
2020-01-02 16:10:36 -08:00
Reid Kleckner de3fb1ec05 [COFF] Avoid CodeView include in header
Most LLD/COFF files don't care about CodeView. Avoid using CodeView
types in InputFiles.h.
2019-11-14 14:27:48 -08:00
Martin Storsjo e0916f4fbe [LLD] [COFF] Update a leftover comment after SVN r374869. NFC.
llvm-svn: 374874
2019-10-15 09:46:33 +00:00
Martin Storsjo 9318c94ebb [LLD] [COFF] Wrap file location pair<StringRef,int> in Optional<>. NFC.
This makes use of it slightly clearer, and makes it match the
same construct in the lld ELF linker.

Differential Revision: https://reviews.llvm.org/D68935

llvm-svn: 374869
2019-10-15 09:18:18 +00:00
Zachary Turner 02c5386811 [PDB] Fix bug when using multiple PCH header objects with the same name.
A common pattern in Windows is to have all your precompiled headers
use an object named stdafx.obj.  If you've got a project with many
different static libs, you might use a separate PCH for each one of
these.

During the final link step, a file from A might reference the PCH
object from A, but it will have the same name (stdafx.obj) as any
other PCH from another project.  The only difference will be the
path.  For example, A's might be A/stdafx.obj while B's is B/stdafx.obj.

The existing algorithm checks only the filename that was passed on
the command line (or stored in archive), but this is insufficient in
the case where relative paths are used, because depending on the
command line object file / library order, it might find the wrong
PCH object first resulting in a signature mismatch.

The fix here is to simply check whether the absolute path of the
PCH object (which is stored in the input obj file for the file that
references the PCH) *ends with* the full relative path of whatever
is specified on the command line (or is in the archive).
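
An illustrative form of the check (separator and case normalization are
omitted for brevity):

  bool matchesPCHSource(llvm::StringRef absoluteObjPath,
                        llvm::StringRef pathFromCmdLine) {
    // A/stdafx.obj given "A/stdafx.obj" matches; B/stdafx.obj does not.
    return absoluteObjPath.endswith(pathFromCmdLine);
  }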

Differential Revision: https://reviews.llvm.org/D66431

llvm-svn: 374442
2019-10-10 20:25:51 +00:00
Fangrui Song d79c3be618 [COFF] Wrap definitions in namespace lld { namespace coff {. NFC
Similar to D67323, but for COFF. Many lld/COFF/ files already use
`namespace lld { namespace coff {`. Only a few need changing.

Reviewed By: ruiu

Differential Revision: https://reviews.llvm.org/D68772

llvm-svn: 374314
2019-10-10 11:27:58 +00:00
Hans Wennborg 1e1e3ba252 Unify the two CRC implementations
David added the JamCRC implementation in r246590. More recently, Eugene
added a CRC-32 implementation in r357901, which falls back to zlib's
crc32 function if present.

These checksums are essentially the same, so having multiple
implementations seems unnecessary. This replaces the CRC-32
implementation with the simpler one from JamCRC, and implements the
JamCRC interface in terms of CRC-32 since this means it can use zlib's
implementation when available, saving a few bytes and potentially making
it faster.
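
The relationship that makes the unification work, sketched against the
post-patch llvm::crc32 (a note added here, not text from the patch):

  // JamCRC is CRC-32 without the final bit inversion.
  uint32_t jamCRC(llvm::ArrayRef<uint8_t> data) {
    return ~llvm::crc32(data);  // llvm/Support/CRC.h
  }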

JamCRC took an ArrayRef<char> argument, and CRC-32 took a StringRef.
This patch changes it to ArrayRef<uint8_t> which I think is the best
choice, and simplifies a few of the callers nicely.

Differential revision: https://reviews.llvm.org/D68570

llvm-svn: 374148
2019-10-09 09:06:30 +00:00
Martin Storsjo 1d06d48bb3 [LLD] [COFF] Resolve source locations for undefined references using dwarf
This fixes PR42407.

Differential Revision: https://reviews.llvm.org/D67053

llvm-svn: 372843
2019-09-25 11:03:48 +00:00