llvm-project/lld/test/COFF/precomp-link.test

RUN: lld-link %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj %S/Inputs/precomp.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf /summary | FileCheck %s -check-prefix SUMMARY
RUN: llvm-pdbutil dump -types %t.pdb | FileCheck %s
RUN: lld-link %S/Inputs/precomp.obj %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf
RUN: llvm-pdbutil dump -types %t.pdb | FileCheck %s
RUN: lld-link %S/Inputs/precomp-a.obj %S/Inputs/precomp-invalid.obj %S/Inputs/precomp.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf 2>&1 | FileCheck %s -check-prefix FAILURE
RUN: lld-link %S/Inputs/precomp-a.obj %S/Inputs/precomp-invalid.obj %S/Inputs/precomp.obj /nodefaultlib /entry:main /debug:ghash /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf 2>&1 | FileCheck %s -check-prefix FAILURE
FIXME: The following RUN line should fail, regardless of whether debug info is
enabled. Normally this would result in an error due to missing _PchSym_
references, but SymbolTable.cpp suppresses such errors. MSVC seems to
special-case those symbols and emits error LNK2011.
RUN: lld-link %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf 2>&1 | FileCheck %s -check-prefix FAILURE-MISSING-PRECOMPOBJ
FAILURE: warning: Cannot use debug info for '{{.*}}precomp-invalid.obj' [LNK4099]
FAILURE-NEXT: failed to load reference '{{.*}}precomp.obj': No matching precompiled header could be located.
FAILURE-MISSING-PRECOMPOBJ: warning: Cannot use debug info for '{{.*}}precomp-a.obj' [LNK4099]
FAILURE-MISSING-PRECOMPOBJ-NEXT: failed to load reference '{{.*}}precomp.obj': No matching precompiled header could be located.
Check that a PCH object file with a missing S_OBJNAME record results in an
error. Edit out this record from the yaml-ified object:
  - Kind:            S_OBJNAME
    ObjNameSym:
      Signature:       545589255
      ObjectName:      'F:\svn\lld\test\COFF\precomp\precomp.obj'
RUN: obj2yaml %S/Inputs/precomp.obj | grep -v 'SectionData: *04000000' > %t.precomp.yaml
RUN: sed '/S_OBJNAME/,/ObjectName:/d' < %t.precomp.yaml > precomp-no-objname.yaml
RUN: sed 's/Signature: *545589255/Signature: 0/' < %t.precomp.yaml > precomp-zero-sig.yaml
RUN: yaml2obj precomp-no-objname.yaml -o %t.precomp-no-objname.obj
RUN: yaml2obj precomp-zero-sig.yaml -o %t.precomp-zero-sig.obj
RUN: env LLD_IN_TEST=1 not lld-link %t.precomp-no-objname.obj %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe 2>&1 | FileCheck %s -check-prefix FAILURE-NO-SIGNATURE
RUN: env LLD_IN_TEST=1 not lld-link %t.precomp-zero-sig.obj %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe 2>&1 | FileCheck %s -check-prefix FAILURE-NO-SIGNATURE
FAILURE-NO-SIGNATURE: error: {{.*}}.obj claims to be a PCH object, but does not have a valid signature
Check that two PCH objects with duplicate signatures result in an error.
RUN: cp %S/Inputs/precomp.obj %t.precomp-dup.obj
RUN: env LLD_IN_TEST=1 not lld-link %S/Inputs/precomp.obj %t.precomp-dup.obj %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj /nodefaultlib /entry:main /debug /pdb:%t.pdb /out:%t.exe 2>&1 | FileCheck %s -check-prefix FAILURE-DUP-SIGNATURE
FAILURE-DUP-SIGNATURE: error: a PCH object with the same signature has already been provided ({{.*precomp.obj and .*precomp-dup.obj.*}})
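
For context, the two failures above exercise the signature pairing that PCH
linking requires: the PCH object publishes a 32-bit signature in its S_OBJNAME
record (see the yaml above), and each object built against it carries an
LF_PRECOMP record naming that signature. A minimal sketch of the pairing logic
(illustrative C++ only, not LLD's actual implementation; Registry and its
members are hypothetical names):

  #include <cstdint>
  #include <map>
  #include <optional>
  #include <string>

  struct Registry {
    // signature -> name of the PCH object that provided it
    std::map<uint32_t, std::string> pchBySignature;

    // Called when a PCH object (compiled with /Yc) is loaded; returns an
    // error message on failure, mirroring the diagnostics checked above.
    std::optional<std::string> addPCH(uint32_t sig, const std::string &name) {
      if (sig == 0)
        return name + " claims to be a PCH object, but does not have a valid signature";
      auto [it, inserted] = pchBySignature.try_emplace(sig, name);
      if (!inserted)
        return "a PCH object with the same signature has already been provided (" +
               it->second + " and " + name + ")";
      return std::nullopt; // success
    }

    // Called for each LF_PRECOMP reference in an object compiled with /Yu.
    bool canResolve(uint32_t sig) const { return pchBySignature.count(sig) != 0; }
  };

When no loaded PCH object matches an LF_PRECOMP signature, the "No matching
precompiled header could be located" diagnostic checked earlier is the result.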
CHECK: Types (TPI Stream)
CHECK-NOT: LF_PRECOMP
CHECK-NOT: LF_ENDPRECOMP
Re-run with /debug:ghash (global type hashing, which lets types be merged in
parallel). Eventually, perhaps this will be the default.
RUN: lld-link %S/Inputs/precomp-a.obj %S/Inputs/precomp-b.obj %S/Inputs/precomp.obj /nodefaultlib /entry:main /debug /debug:ghash /pdb:%t.pdb /out:%t.exe /opt:ref /opt:icf /summary | FileCheck %s -check-prefix SUMMARY
RUN: llvm-pdbutil dump -types %t.pdb | FileCheck %s
SUMMARY: Summary
SUMMARY-NEXT: --------------------------------------------------------------------------------
SUMMARY-NEXT: 3 Input OBJ files (expanded from all cmd-line inputs)
SUMMARY-NEXT: 0 PDB type server dependencies
SUMMARY-NEXT: 1 Precomp OBJ dependencies
SUMMARY-NEXT: 1066 Input type records
SUMMARY-NEXT: 55968 Input type records bytes
SUMMARY-NEXT: 874 Merged TPI records
SUMMARY-NEXT: 170 Merged IPI records
SUMMARY-NEXT: 5 Output PDB strings
SUMMARY-NEXT: 167 Global symbol records
SUMMARY-NEXT: 20 Module symbol records
SUMMARY-NEXT: 3 Public symbol records
// precomp.h
#pragma once
int Function(char A);

// precomp.cpp
#include "precomp.h"

// a.cpp
#include "precomp.h"
int main(void) {
  Function('a');
  return 0;
}

// b.cpp
#include "precomp.h"
int Function(char a) {
  return (int)a;
}

// cl.exe precomp.cpp /Z7 /Ycprecomp.h /c
// cl.exe a.cpp b.cpp /Z7 /Yuprecomp.h /c
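// (/Yc creates the precompiled header and the PCH object from precomp.cpp;
// /Yu makes a.cpp and b.cpp consume it; /Z7 embeds CodeView debug info in
// the objects so the links above can merge types into the PDB.)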