Commit Graph

13401 Commits

Author SHA1 Message Date
Alexandre Ganea fe1f0a1a19 [LLD] Fix /time formatting for very long runs. NFC. 2020-10-02 09:53:43 -04:00
Alexandre Ganea 55b97a6d2a [LLD][COFF] Add more type record information to /summary
This adds the following two new lines to /summary:

      21351 Input OBJ files (expanded from all cmd-line inputs)
         61 PDB type server dependencies
         38 Precomp OBJ dependencies
 1420669231 Input type records         <<<<
78665073382 Input type records bytes   <<<<
    8801393 Merged TPI records
    3177158 Merged IPI records
      59194 Output PDB strings
   71576766 Global symbol records
   25416935 Module symbol records
    2103431 Public symbol records

Differential Revision: https://reviews.llvm.org/D88703
2020-10-02 09:36:11 -04:00
Alexandre Ganea 4140f0744f [LLD][COFF] Fix crash with /summary and PCH input files
Before this patch, /summary would crash with some .PCH.OBJ files because tpiMap[srcIdx++] was reading from the wrong location. When a TpiSource depends on a .PCH.OBJ file, its types should be offset by the set of indices previously merged from that PCH.OBJ.

Differential Revision: https://reviews.llvm.org/D88678
2020-10-01 17:08:35 -04:00
Fangrui Song 88f2fe5cad Reland D87318 [LLD][PowerPC] Add support for R_PPC64_GOT_TLSGD_PCREL34 used in TLS General Dynamic
Add Thread Local Storage support for the 34-bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.

The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel            R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd)     R_PPC64_TLSGD
                                     R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for  R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell, by looking only at the relocation `R_PPC64_TLSGD`, whether it is being used in a TOC instruction sequence or in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way, LLD can tell by looking at the alignment of the offset of `R_PPC64_TLSGD` whether it is being used as part of a TOC or a no-TOC sequence.
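
A minimal C sketch of the check this enables (the helper name is made up; this is not LLD's actual code):
```
#include <stdbool.h>
#include <stdint.h>

// The TOC sequence keeps the R_PPC64_TLSGD offset 4-byte aligned, while the
// no-TOC (PC-relative) sequence has 1 added to the offset, so a misaligned
// offset identifies the no-TOC form.
bool isNoTocTlsGdSequence(uint64_t relocOffset) {
  return (relocOffset & 3) != 0;
}
```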

Reviewed By: NeHuang, sfertile, MaskRay

Differential Revision: https://reviews.llvm.org/D87318
2020-10-01 12:36:33 -07:00
Reid Kleckner 5d46d7e8b2 [PDB] Use one func id DenseMap instead of per-source maps, NFC
This avoids some DenseMap copies when /Zi is in use, and results in
fewer data structures.

Differential Revision: https://reviews.llvm.org/D88617
2020-10-01 12:22:27 -07:00
Arthur Eubanks 499260c03b Revert "[CFGuard] Add address-taken IAT tables and delay-load support"
This reverts commit ef4e971e5e.
2020-10-01 11:29:54 -07:00
Stefan Pintilie 5f3e565f59 Revert "[LLD][PowerPC] Add support for R_PPC64_GOT_TLSGD_PCREL34 used in TLS General Dynamic"
This reverts commit 79122868f9.
2020-10-01 13:28:35 -05:00
Stefan Pintilie 79122868f9 [LLD][PowerPC] Add support for R_PPC64_GOT_TLSGD_PCREL34 used in TLS General Dynamic
Add Thread Local Storage support for the 34-bit relocation R_PPC64_GOT_TLSGD_PCREL34 used in General Dynamic.

The compiler will produce code that looks like:
```
pla r3, x@got@tlsgd@pcrel            R_PPC64_GOT_TLSGD_PCREL34
bl __tls_get_addr@notoc(x@tlsgd)     R_PPC64_TLSGD
                                     R_PPC64_REL24_NOTOC
```
LLD should be able to correctly compute the relocation for  R_PPC64_GOT_TLSGD_PCREL34 as well as do the following two relaxations where possible:
General Dynamic to Local Exec:
```
paddi r3, r13, x@tprel
nop
```
and General Dynamic to Initial Exec:
```
pld r3, x@got@tprel@pcrel
add r3, r3, r13
```
Note:
This patch adds support for the PC Relative (no TOC) version of General Dynamic on top of the existing support for the TOC version of General Dynamic.
The ABI does not provide any way to tell, by looking only at the relocation `R_PPC64_TLSGD`, whether it is being used in a TOC instruction sequence or in a no-TOC sequence. The TOC sequence should always be 4-byte aligned. This patch adds one to the offset of the relocation when it is being used in a no-TOC sequence. In this way, LLD can tell by looking at the alignment of the offset of `R_PPC64_TLSGD` whether it is being used as part of a TOC or a no-TOC sequence.

Reviewed By: NeHuang, sfertile, MaskRay

Differential Revision: https://reviews.llvm.org/D87318
2020-10-01 13:00:37 -05:00
James Henderson a20168d030 [Archive] Don't throw away errors for malformed archive members
When adding an archive member with a problem (e.g. a newer bitcode file
containing an attribute unsupported by an older archiver, or an ELF file
with a malformed symbol table), the archiver would throw away the error and
simply add the member to the archive without any symbol entries. This meant
that the resultant archive could be silently unusable when not using
--whole-archive, resulting in unexpected undefined symbols.

This change fixes the issue by addressing two FIXMEs and throwing away only
not-an-object errors. However, some LLD tests that didn't need symbol tables
deliberately used invalid members to exercise the linker's malformed-input
handling and no longer worked, so this patch also stops the archiver from
looking for symbols in an object if it doesn't require a symbol table, and
updates the tests accordingly.

Differential Revision: https://reviews.llvm.org/D88288

Reviewed by: grimar, rupprecht, MaskRay
2020-10-01 14:03:34 +01:00
Andrew Paverd ef4e971e5e [CFGuard] Add address-taken IAT tables and delay-load support
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.
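
For illustration, this is the kind of C source that produces an address-taken import (the function names are made up):
```
// Compiled with MSVC or clang-cl and /guard:cf, the import's IAT entry ends
// up listed in a .giats$y section because its address is taken here.
__declspec(dllimport) int ImportedFunc(int);

int (*g_funcPtr)(int) = &ImportedFunc; /* address-taken import */
```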

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D87544
2020-10-01 12:45:07 +01:00
Fangrui Song 4e9277eda1 [ELF] --wrap: don't unnecessarily expose __real_
The routing rules are:

sym -> __wrap_sym
__real_sym -> sym

__wrap_sym and sym are routing targets, so they need to be exposed to the symbol
table. __real_sym is not a routing target and can be eliminated if it is not
referenced by a regular object file.
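
For reference, the conventional use of --wrap looks like this sketch (malloc is just an example symbol):
```
// Link with -Wl,--wrap=malloc: calls to malloc are routed to __wrap_malloc,
// and __real_malloc is routed back to the original malloc definition.
#include <stddef.h>
#include <stdio.h>

void *__real_malloc(size_t size);

void *__wrap_malloc(size_t size) {
  fprintf(stderr, "malloc(%zu)\n", size);
  return __real_malloc(size);
}
```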
2020-09-30 20:09:25 -07:00
Dan Gohman 6cd8511e59 [WebAssembly] New-style command support
This adds new-style command support. In this mode, all exports
are considered command entrypoints, and the linker inserts calls to
`__wasm_call_ctors` and `__wasm_call_dtors` for all such entrypoints.

This enables support for:

 - Command entrypoints taking arguments other than strings and return values
   other than `int`.
 - Multicall executables without requiring the use of string-based
   command-line arguments.

This new behavior is disabled when the input has an explicit call to
`__wasm_call_ctors`, indicating code not expecting new-style command
support.
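
A sketch of such an entrypoint, assuming a clang WebAssembly target (the export name and function are made up):
```
// A command entrypoint taking non-string arguments. With new-style command
// support, the linker arranges for __wasm_call_ctors to run before this
// export and __wasm_call_dtors after it.
__attribute__((export_name("scale"))) int scale(int value, int factor) {
  return value * factor;
}
```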

This change does mean that wasm-ld no longer supports DCE-ing the
`__wasm_call_ctors` function when there are no calls to it. If there are no
calls to it, and there are ctors present, we assume it's wasm-ld's job to
insert the calls. This seems ok though, because if there are ctors present,
the program is expecting them to be called. This change affects the
init-fini-gc.ll test.
2020-09-30 19:02:40 -07:00
Sam Clegg 3c45a06f26 [lld][WebAssembly] Allow exporting of mutable globals
In particular, allow explicit exporting of `__stack_pointer`, but
exclude it from `--export-all` to avoid requiring the mutable-globals
feature whenever `--export-all` is used.

This uncovered a bug in populateTargetFeatures regarding checking
if the mutable-globals feature is allowed.

See: https://github.com/WebAssembly/binaryen/issues/2934

Differential Revision: https://reviews.llvm.org/D88506
2020-09-30 17:53:27 -07:00
Reid Kleckner 5519e4da83 Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.

This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.

Original commit message:
-----------------------------------------

This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to be, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
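
A minimal C sketch of that priority rule applied with a CAS loop (the cell layout and names here are illustrative, not the actual GHashCell code):
```
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

// Illustrative cell: source index in the high 32 bits, type index in the low
// 32 bits, with 0 reserved to mean "empty". A smaller combined value comes
// from an earlier input, so it wins ties deterministically.
typedef _Atomic uint64_t GHashCell;

bool updateCell(GHashCell *cell, uint32_t srcIdx, uint32_t typeIdx) {
  uint64_t desired = ((uint64_t)srcIdx << 32) | typeIdx;
  uint64_t current = atomic_load(cell);
  // Keep trying while the cell is empty or holds a lower-priority entry.
  while (current == 0 || desired < current) {
    if (atomic_compare_exchange_weak(cell, &current, desired))
      return true; // installed or replaced the entry
  }
  return false; // an equal- or higher-priority entry is already present
}
```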

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 15:44:38 -07:00
Reid Kleckner 8d250ac3cd Revert "[PDB] Merge types in parallel when using ghashing"
This reverts commit 49b3459930.
2020-09-30 14:55:32 -07:00
Reid Kleckner 49b3459930 [PDB] Merge types in parallel when using ghashing
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to be, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 14:22:48 -07:00
Fangrui Song 259bb61c11 [ELF] Fix multiple -mllvm after D70378
Fixes https://reviews.llvm.org/D70378#2299569. Multiple -mllvm options are intended to be supported.

We don't have a proper test for `-plugin-opt=-`. This patch adds the test as well.

Differential Revision: https://reviews.llvm.org/D88461
2020-09-29 10:26:58 -07:00
Benjamin Kramer b59dff4b16 [wasm] Move WasmTraits.h to BinaryFormat
There's no dependency on Object in there and this avoids a cyclic
dependency between libMC and libObject.
2020-09-28 22:07:28 +02:00
Fangrui Song 20e9c36c01 Internalize functions from various tools. NFC
And internalize some classes if I noticed them:)
2020-09-26 15:57:13 -07:00
Jez Ng 2c2a749448 [lld-macho] Ignore a few more undocumented flags
Reviewed By: #lld-macho, compnerd

Differential Revision: https://reviews.llvm.org/D88268
2020-09-25 11:28:37 -07:00
Jez Ng 643ec67a64 [lld-macho] Always include custom syslibroot when running tests
This greatly reduces the amount of boilerplate in our tests.

Reviewed By: #lld-macho, compnerd

Differential Revision: https://reviews.llvm.org/D87960
2020-09-25 11:28:36 -07:00
Jez Ng 62a3f0c984 [lld-macho] Support absolute symbols
They operate like Defined symbols but with no associated InputSection.

Note that `ld64` seems to treat the weak definition flag as a no-op for
absolute symbols, so I have replicated that behavior.

Reviewed By: #lld-macho, smeenai

Differential Revision: https://reviews.llvm.org/D87909
2020-09-25 11:28:35 -07:00
Jez Ng c7c9776f77 [lld-macho] Allow the entry symbol to be dynamically bound
Apparently this is used in real programs. I've handled this by reusing
the logic we already have for branch (function call) relocations.

Reviewed By: #lld-macho, smeenai

Differential Revision: https://reviews.llvm.org/D87852
2020-09-25 11:28:33 -07:00
Jez Ng f23f512691 [lld-macho] Support -bundle
Not 100% sure but it appears that bundles are almost identical to
dylibs, aside from the fact that they do not contain `LC_ID_DYLIB`. ld64's code
seems to treat bundles and dylibs identically in most places.

Supporting bundles allows us to run e.g. XCTests, as all test suites are
compiled into bundles which get dynamically loaded by the `xctest` test runner.

Reviewed By: #lld-macho, smeenai

Differential Revision: https://reviews.llvm.org/D87856
2020-09-25 11:28:32 -07:00
Jez Ng e4e673e75a [lld-macho] Implement support for PIC
* Implement rebase opcodes. Rebase opcodes tell dyld where absolute
  addresses have been encoded in the binary. If the binary is not loaded
  at its preferred address, dyld has to rebase these addresses by adding
  an offset to them.
* Support `-pie` and use it to test rebase opcodes.

This is necessary for absolute address references in dylibs, bundles, etc. to
work.
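
Conceptually, applying one rebase entry amounts to something like this sketch (not dyld's actual code; the names are made up):
```
#include <stdint.h>

// A rebase entry identifies a slot in the image holding an absolute address
// computed against the preferred load address. At load time, dyld adds the
// slide (actual base minus preferred base) to that slot.
void applyRebase(uint8_t *imageBase, uint64_t offsetInImage, int64_t slide) {
  uint64_t *slot = (uint64_t *)(imageBase + offsetInImage);
  *slot += (uint64_t)slide;
}
```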

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D87199
2020-09-25 11:28:31 -07:00
Stefan Pintilie 8c53282d64 [PowerPC][NFC] Merged two switch entries.
Two switch entries did exactly the same thing. This patch merges them.
2020-09-25 09:49:13 -05:00
Stefan Pintilie d224175230 [PowerPC][LLD] Extend R2 save stub to support offsets of more than 26 bits
The R2 save stub will now support offsets up to 64 bits.

There are three cases that will be used.
1) The offset fits in 26 bits.
```
b <26 bit offset>
```
2) The offset does not fit in 26 bits but fits in 34 bits.
```
paddi r12, 0, <34 bit offset>, 1
mtctr r12
bctr
```
3) The offset does not fit in 34 bits. Since this is an R2 save stub we can use
the TOC in R2. We are not loading the offset but the actual address we want to
branch to.
```
addis r12, r2, <address in TOC hi>
ld r12, <address in TOC lo>(r12)
mtctr r12
bctr
```

In case 1) the stub is only 8 bytes while in cases 2) and 3) the stub will be
20 bytes.
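
A small C sketch of how the stub variant might be picked from the offset (the helper is made up; the 8- and 20-byte sizes are the ones given above):
```
#include <stdint.h>

// Returns the R2 save stub size in bytes: 8 when the offset fits in a signed
// 26-bit immediate (case 1), 20 otherwise (cases 2 and 3).
int r2SaveStubSize(int64_t offset) {
  int64_t limit = (int64_t)1 << 25; // signed 26-bit range is [-2^25, 2^25)
  return (offset >= -limit && offset < limit) ? 8 : 20;
}
```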

Reviewed By: MaskRay, sfertile, NeHuang

Differential Revision: https://reviews.llvm.org/D87916
2020-09-25 06:39:14 -05:00
Thomas Lively 15a5e86fb3 [lld][WebAssembly] Allow `atomics` feature with unshared memory
https://github.com/WebAssembly/threads/issues/144 updated the
WebAssembly threads proposal to make atomic operations on unshared memories
valid. This change updates the feature checking in the linker accordingly.
Production WebAssembly engines have recently been updated to allow this
behavior, but after this change, users who accidentally use atomics with
unshared memories on older engine versions will get validation errors at
runtime rather than link errors.

Differential Revision: https://reviews.llvm.org/D79530
2020-09-24 20:35:29 -07:00
Fangrui Song 1ca6bd261e [lld] Clean up in lld::{coff,elf}::link after D70378
Library users should not need to call errorHandler().reset() explicitly.

google/iree calls lld::elf::link and without the patch some global
variables are not cleaned up in the next invocation.
2020-09-24 18:02:45 -07:00
Snehasish Kumar 070555c6c0 [lld] Make -z keep-text-section-prefix recognize .text.split. as a prefix.
".text.split." holds symbols which are split out from functions in
other input sections. For example, with -fsplit-machine-functions,
placing the cold parts in .text.split instead of .text.unlikely mitigates
the impact of profile inaccuracy. Techniques such as hugepage remapping can
make conservative decisions at the section granularity.

Differential Revision: https://reviews.llvm.org/D87840
2020-09-24 15:02:48 -07:00
Jez Ng 5213576fa2 [lld-macho][re-land] Implement and test resolution of common symbols
Earlier build break fixed in c32e69b2ce.

This reverts commit c367f93e85.
2020-09-24 15:00:56 -07:00
Jez Ng c32e69b2ce [lld-macho][re-land] Initial support for common symbols
Fix earlier build break via a static_cast.

This reverts commit 8112d494d3.

Differential Revision: https://reviews.llvm.org/D86909
2020-09-24 15:00:20 -07:00
Alexandre Ganea f2efb5742c [LLD][COFF] Cover usage of LLD-as-a-library in tests
In lit tests, we run each LLD invocation twice (LLD_IN_TEST=2), without shutting down the process in-between. This ensures a full cleanup is properly done between runs.
Only active for the COFF driver for now. Other drivers still use LLD_IN_TEST=1 which executes just one iteration with full cleanup, like before.
When the environment variable LLD_IN_TEST is unset, a shortcut is taken: only one iteration is executed, with no cleanup, for a faster exit, like before.
A public API, lld::safeLldMain(), is also available when using LLD as a library.

Differential Revision: https://reviews.llvm.org/D70378
2020-09-24 15:07:50 -04:00
Alexandre Ganea 55624237be [LLD][COFF] Avoid overwriting inputs in tests
Before this patch, these two tests were emitting both a .DLL and .LIB. The output .LIB file name also happens to be an input .LIB file name. This prevented the test from executing a second time when LLD is re-entrant (LLD_IN_TEST=2).

This is a support patch for https://reviews.llvm.org/D70378.
2020-09-24 15:01:25 -04:00
Nico Weber 0389eff404 lld: Try to fix check-lld on incremental builds after 8f2c31f22b 2020-09-24 09:33:57 -04:00
James Henderson a4e42601d4 [lld][ELF][test] Add a couple of test cases for LTO behaviour
This patch expands two LTO test cases to check other aspects.

1) weak.ll has been expanded to show that it doesn't matter whether the
   first appearance of a weak symbol appears in a bitcode file or native
   object - that one is picked.
2) reproduce-lto.ll has been expanded to show that the bitcode files are
   stored in the reproduce package and that intermediate files (such as
   the LTO-compiled object) are not.

Differential Revision: https://reviews.llvm.org/D88094

Reviewed by: grimar, MaskRay
2020-09-24 11:49:20 +01:00
Muhammad Omair Javaid 8112d494d3 Revert "[lld-macho] Initial support for common symbols"
This reverts commit 63ace77962.

Breaks LLDB Arm build:
http://lab.llvm.org:8011/builders/lldb-arm-ubuntu/builds/4409
2020-09-24 12:26:40 +05:00
Muhammad Omair Javaid c367f93e85 Revert "[lld-macho] Implement and test resolution of common symbols"
This reverts commit cd7cb0c303.
Breaks lldb Arm build:
http://lab.llvm.org:8011/builders/lldb-arm-ubuntu/builds/4409
2020-09-24 12:25:47 +05:00
Jez Ng 9c70281497 [lld-macho][NFC] Make `!= nullptr` implicit 2020-09-23 20:09:49 -07:00
Jez Ng ca8752a793 [lld-macho][NFC] Refactor syslibroot / library path lookup
* Move computation of systemLibraryRoots into a separate function, so we
  can add more functionality to it without things becoming unwieldy
* Have `getSearchPaths` and related functions return by value instead of
  by output parameter. NRVO should ensure that performance is unaffected.

Reviewed By: #lld-macho, smeenai

Differential Revision: https://reviews.llvm.org/D87959
2020-09-23 19:26:41 -07:00
Jez Ng 98f03908d0 [lld-macho] Support -weak_lx, -weak_library, -weak_framework
They cause their corresponding libraries / frameworks to be loaded via
`LC_LOAD_WEAK_DYLIB` instead of `LC_LOAD_DYLIB`.

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D87929
2020-09-23 19:26:41 -07:00
Jez Ng 79412d6ca7 [lld-macho] Ignore `-mllvm` and its argument

Reviewed By: #lld-macho, compnerd, MaskRay

Differential Revision: https://reviews.llvm.org/D87803
2020-09-23 19:26:40 -07:00
Jez Ng 5d26bd3b75 [lld-macho] Emit indirect symbol table
Makes it a little easier to read objdump's disassembly.

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D87178
2020-09-23 19:26:40 -07:00
Jez Ng cd7cb0c303 [lld-macho] Implement and test resolution of common symbols
Handle the case where there are both common and non-common definitions
of the same symbol. Add a bunch of tests to ensure compatibility with ld64.

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D86910
2020-09-23 19:26:40 -07:00
Jez Ng 63ace77962 [lld-macho] Initial support for common symbols
On Unix, it is traditionally allowed to write variable definitions without
initialization expressions (such as "int foo;") to header files. These are
called tentative definitions.

The compiler creates common symbols when it sees tentative definitions. When
linking the final binary, if there are remaining common symbols after name
resolution is complete, the linker converts them to regular defined symbols in
a `__common` section.
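
For example, in plain C (built with -fcommon, the traditional default):
```
/* Tentative definition: no initializer and no "extern". Each object file
 * that contains one of these gets a common symbol for foo; if no real
 * definition shows up, the linker turns it into a regular defined symbol
 * in the __common section. */
int foo;

int main(void) { return foo; }
```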

This diff implements most of that functionality, though we do not yet handle
the case where there are both common and non-common definitions of the same
symbol.

Reviewed By: #lld-macho, gkm

Differential Revision: https://reviews.llvm.org/D86909
2020-09-23 19:26:40 -07:00
Greg McGary 8f2c31f22b [lld-macho] handle options -search_paths_first, -search_dylibs_first
Differential Revision: https://reviews.llvm.org/D88054
2020-09-23 14:56:33 -07:00
Greg McGary fa5f945212 [lld-macho] cleanup unimplemented-option warnings
Remove all spurious `HelpHidden` flags from  `lld/MachO/Options.td`. Add test for `HelpHidden` to `warnIfUnimplementedOption()` so that the empty `// handled elsewhere` case is unnecessary.

Reviewed By: #lld-macho, int3, smeenai

Differential Revision: https://reviews.llvm.org/D88160
2020-09-23 14:38:23 -07:00
Greg McGary ab903560a4 [lld-macho] fix build breakage 2020-09-22 20:42:23 -07:00
Greg McGary 1a3ef0417c [lld-macho] In the context of relocs, s/target/referent/ for sections & symbols
The word "target" is overloaded, so lighten its load by using another word to denote the symbol or section to which a reloc points. While more stilted than "target", "referent" is rather less pompous than "designatum" or "denotatum". :P

Along the way, make a few neighboring variable names more descriptive.

Reviewed By: #lld-macho, int3

Differential Revision: https://reviews.llvm.org/D87584
2020-09-22 20:31:01 -07:00
Greg McGary 145ce86dba [lld-macho] handle option -headerpad_max_install_names
Differential Revision: https://reviews.llvm.org/D88064
2020-09-22 17:24:19 -07:00