Commit Graph

607 Commits

Author SHA1 Message Date
Igor Kudrin 68f5a8b204 [DebugInfo] Do not hang when parsing a malformed .debug_pub* section.
The parsing method did not check reading errors and, because of that, could
easily fall into an infinite loop on invalid input.

Differential Revision: https://reviews.llvm.org/D83049
2020-07-09 19:15:11 +07:00
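As a rough sketch of the fix's idea (check read errors and guarantee forward progress), the following self-contained example parses a hypothetical length-prefixed record format; it is illustrative only, not LLVM's .debug_pub* parser:

```
#include <cstdint>
#include <cstdio>
#include <vector>

// Parse a buffer of little-endian length-prefixed records, reporting
// truncation and overrun instead of silently continuing.
static bool parseRecords(const std::vector<uint8_t> &data) {
  size_t offset = 0;
  while (offset < data.size()) {
    if (data.size() - offset < 4) {
      fprintf(stderr, "truncated record header at offset %zu\n", offset);
      return false; // surface the read error rather than looping forever
    }
    uint32_t len = uint32_t(data[offset]) | (uint32_t(data[offset + 1]) << 8) |
                   (uint32_t(data[offset + 2]) << 16) |
                   (uint32_t(data[offset + 3]) << 24);
    offset += 4;
    if (len > data.size() - offset) {
      fprintf(stderr, "record length %u overruns the section\n", len);
      return false; // a bogus length must stop the walk
    }
    offset += len; // guaranteed forward progress: at least 4 bytes per record
  }
  return true;
}

int main() {
  std::vector<uint8_t> good = {2, 0, 0, 0, 'h', 'i'};
  std::vector<uint8_t> bad = {0xff, 0xff, 0xff, 0xff, 'x'};
  printf("good: %s\n", parseRecords(good) ? "ok" : "malformed");
  printf("bad:  %s\n", parseRecords(bad) ? "ok" : "malformed");
}
```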
Fangrui Song ee9a251caf [ELF] Set DF_1_PIE for -pie
DF_1_PIE originated from Solaris (https://docs.oracle.com/cd/E36784_01/html/E36857/chapter6-42444.html ).
GNU ld since
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=5fe2850dd96483f176858fd75c098313d5b20bc2
sets the flag on non-Solaris platforms.

It can help distinguish PIE from ET_DYN.
eu-classify from elfutils uses this to recognize PIE (https://sourceware.org/git/?p=elfutils.git;a=commit;h=3f489b5c7c78df6d52f8982f79c36e9a220e8951 )

glibc uses this flag to reject dlopen'ing a PIE (https://sourceware.org/bugzilla/show_bug.cgi?id=24323 )

Reviewed By: psmith

Differential Revision: https://reviews.llvm.org/D80872
2020-06-01 10:19:41 -07:00
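For illustration, the check performed by consumers such as eu-classify and glibc boils down to testing DF_1_PIE in DT_FLAGS_1. The sketch below uses the standard ELF constant values over a pre-parsed dynamic table; DynEntry and isPie are hypothetical names, not part of any library:

```
#include <cstdint>
#include <cstdio>
#include <vector>

// An ET_DYN object is a PIE if its dynamic table carries DT_FLAGS_1 with the
// DF_1_PIE bit set. Constant values follow the ELF/Solaris definitions.
constexpr uint64_t DT_FLAGS_1 = 0x6ffffffb;
constexpr uint64_t DF_1_PIE = 0x08000000;

struct DynEntry { uint64_t tag, val; };

static bool isPie(const std::vector<DynEntry> &dyn) {
  for (const DynEntry &e : dyn)
    if (e.tag == DT_FLAGS_1)
      return (e.val & DF_1_PIE) != 0;
  return false;
}

int main() {
  std::vector<DynEntry> dyn = {{DT_FLAGS_1, DF_1_PIE}};
  printf("PIE: %s\n", isPie(dyn) ? "yes" : "no");
}
```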
Fangrui Song 07837b8f49 [ELF] Use namespace qualifiers (lld:: or elf::) instead of `namespace lld { namespace elf {`
Similar to D74882. This reverts much code from commit
bd8cfe65f5 (D68323) and fixes some
problems that existed before D68323.

Sorry for the churn but D68323 was a mistake. Namespace qualifiers avoid
bugs where the definition does not match the declaration from the
header. See
https://llvm.org/docs/CodingStandards.html#use-namespace-qualifiers-to-implement-previously-declared-functions (D74515)

Differential Revision: https://reviews.llvm.org/D79982
2020-05-15 08:49:53 -07:00
Peter Smith 0ae7990b60 [ELF][ARM] Support /DISCARD/ of subset of .ARM.exidx sections
Both the .ARM.exidx and .eh_frame sections have a custom SyntheticSection
that acts as a container for the InputSections. The InputSections are added
to the SyntheticSection prior to /DISCARD/, which limits the effect a
/DISCARD/ can have to the whole SyntheticSection. In the majority of cases
this is sufficient as it is not common to discard subsets of the
InputSections. The Linux kernel has one of these scripts which has something
like:
/DISCARD/ : { *(.ARM.exidx.exit.text) *(.ARM.extab.exit.text) ... }
The .ARM.exidx.exit.text sections are not discarded because the InputSections have been
transferred to the SyntheticSection. The *(.ARM.extab.exit.text) sections
have not, so they are discarded. When we come to write out the .ARM.exidx
sections the dangling references from .ARM.exidx.exit.text to
.ARM.extab.exit.text currently cause relocation out of range errors, but
could as easily cause a fatal error message if we check for dangling
references at relocation time.

This patch attempts to respect the /DISCARD/ command by running it on the
.ARM.exidx InputSections stored in the SyntheticSection.

The .eh_frame is in theory affected by this problem, but I don't think that
there is a dangling reference problem that can happen with these sections.

Fixes remaining part of pr44824

Differential Revision: https://reviews.llvm.org/D79687
2020-05-11 14:27:13 +01:00
Reid Kleckner 932f0276ea [Support] Move LLD's parallel algorithm wrappers to support
Essentially takes the lld/Common/Threads.h wrappers and moves them to
the llvm/Support/Parallel.h algorithm header.

The changes are:
- Remove policy parameter, since all clients use `par`.
- Rename the methods to `parallelSort` etc to match LLVM style, since
  they are no longer C++17 pstl compatible.
- Move algorithms from llvm::parallel:: to llvm::, since they have
  "parallel" in the name and are no longer overloads of the regular
  algorithms.
- Add range overloads
- Use the sequential algorithm directly when 1 thread is requested
  (skips task grouping)
- Fix the index type of parallelForEachN to size_t. Nobody in LLVM was
  using any other parameter, and it made overload resolution hard for
  for_each_n(par, 0, foo.size(), ...) because 0 is int, not size_t.

Remove Threads.h and update LLD for that.

This is a prerequisite for parallel public symbol processing in the PDB
library, which is in LLVM.

Reviewed By: MaskRay, aganea

Differential Revision: https://reviews.llvm.org/D79390
2020-05-05 15:21:05 -07:00
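A hand-rolled sketch of the "use the sequential algorithm directly when 1 thread is requested" behaviour described above; the function name mirrors the new parallelForEachN, but the body is illustrative only, not llvm/Support/Parallel.h:

```
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Run fn(i) for i in [begin, end) across `threads` workers; with one thread
// (or a trivially small range) fall straight through to a plain loop so no
// task-grouping overhead is paid. Indices are size_t, as in the commit.
static void parallelForEachN(size_t begin, size_t end,
                             const std::function<void(size_t)> &fn,
                             unsigned threads) {
  if (threads <= 1 || end - begin < 2) {
    for (size_t i = begin; i < end; ++i) // sequential fast path
      fn(i);
    return;
  }
  size_t chunk = (end - begin + threads - 1) / threads;
  std::vector<std::thread> workers;
  for (size_t lo = begin; lo < end; lo += chunk) {
    size_t hi = std::min(lo + chunk, end);
    workers.emplace_back([=] {
      for (size_t i = lo; i < hi; ++i)
        fn(i);
    });
  }
  for (std::thread &t : workers)
    t.join();
}

int main() {
  std::vector<int> v(100);
  parallelForEachN(0, v.size(), [&](size_t i) { v[i] = int(i) * 2; },
                   std::thread::hardware_concurrency());
  printf("v[99] = %d\n", v[99]);
}
```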
Peter Smith 48aebfc908 [ELF][ARM] Do not create .ARM.exidx sections for out of range inputs
A linker will create .ARM.exidx sections for InputSections that don't
have them. This can cause a relocation out of range error if the
InputSection happens to be extremely far away from the other sections.
This is often the case for the vector table on older ARM CPUs as the only
two places that the table can be placed is 0 or 0xffff0000. We fix this
by removing InputSections that need a linker generated .ARM.exidx
section if that would cause an error.

Differential Revision: https://reviews.llvm.org/D79289
2020-05-05 09:59:45 +01:00
Peter Smith 3b1622d63a [LLD][ELF][ARM] recommit Fix ARM Exidx order for non monotonic section order
Fixed an error detected by msan. The size field of the .ARM.exidx synthetic
section needs to be initialized, at least to an estimate, before
calling assignAddresses, as that uses the size field.

This was previously reverted in 1ca16fc4f5.

Differential Revision: https://reviews.llvm.org/D78422
2020-04-24 13:47:28 +01:00
Peter Smith 1ca16fc4f5 Revert "[LLD][ELF][ARM] Fix ARM Exidx order for non monotonic section order"
This reverts commit f969c2aa65.

There are some msan buildbot failures on sanitizer-x86_64-linux-fast that
I need to investigate.

Differential Revision: https://reviews.llvm.org/D78422
2020-04-23 16:58:50 +01:00
Peter Smith f969c2aa65 [LLD][ELF][ARM] Fix ARM Exidx order for non monotonic section order
The contents of the .ARM.exidx section must be ordered by SHF_LINK_ORDER
rules. We don't need to know the precise address for this order, but we
do need to know the relative order of sections. We have been using the
sectionIndex for this purpose; this works when the OutputSection order
has monotonically increasing virtual addresses, but it is possible to
write a linker script with non-monotonically increasing virtual addresses.
For these cases we need to evaluate the base address of the OutputSection
so that we can order the .ARM.exidx sections properly.

This change moves the finalisation of .ARM.exidx till after the first
call to AssignAddresses. This permits us to sort on virtual address which
is linker script safe. It also permits a fix for part of pr44824 where
we generate .ARM.exidx section for the vector table when that table is so
far away it is out of range of the .ARM.exidx section. This fix will come
in a follow up patch.

Differential Revision: https://reviews.llvm.org/D78422
2020-04-23 15:46:44 +01:00
Kazuaki Ishizaki 7c5fcb3591 [lld] NFC: fix trivial typos in comments
Differential Revision: https://reviews.llvm.org/D72339
2020-04-02 01:21:36 +09:00
Fangrui Song eb4663d8c6 [lld][COFF][ELF][WebAssembly] Replace --[no-]threads /threads[:no] with --threads={1,2,...} /threads:{1,2,...}
--no-threads is a name copied from gold.
gold has --no-threads, --thread-count and several other --thread-count-* options.

There is a need to customize the number of threads (running several lld
processes concurrently or customizing the number of LTO threads).
Having a single --threads=N is a straightforward replacement of gold's
--no-threads + --thread-count.

--no-threads is used rarely. So just delete --no-threads instead of
keeping it for compatibility for a while.

If --threads= is specified (ELF,wasm; COFF /threads: is similar),
--thinlto-jobs= defaults to --threads=,
otherwise all available hardware threads are used.

There is currently no way to override a --threads={1,2,...} setting. It is
still under debate whether we should use --threads=all.

Reviewed By: rnk, aganea

Differential Revision: https://reviews.llvm.org/D76885
2020-03-31 08:46:12 -07:00
Alexandre Ganea 42dc667db2 [LLD][ELF] Put back rounding which was lost in 8404aeb56a 2020-03-29 21:52:01 -04:00
Fangrui Song 0bb362c164 [ELF] --gdb-index: fix memory usage regression after D74773
On an internal target,

* Before D74773: time -f '%M' => 18275680
* After D74773:  time -f '%M' => 22088964

This patch restores to the status before D74773.
2020-03-12 16:55:30 -07:00
Alexey Lapshin dcf6494abe LLD already has a mechanism for caching creation of DWARFContext:
llvm::call_once(initDwarfLine, [this]() { initializeDwarf(); });

However, it is not used in all places.

I need that patch for implementing the "Remove obsolete debug info" feature
(D74169). But this caching mechanism is useful by itself, and I think it
would be good to use it independently of the "Remove obsolete debug info"
feature. So this patch replaces in-place creation of DWARFContext with
its cached version.

Depends on D74308

Reviewed By: ruiu

Differential Revision: https://reviews.llvm.org/D74773
2020-03-06 21:17:07 +03:00
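The caching mechanism quoted above is the usual call-once lazy-initialization pattern. Below is a self-contained sketch using std::call_once (llvm::call_once wraps the same idea); DebugContext and ObjFile are stand-ins, not LLD's classes:

```
#include <cstdint>
#include <cstdio>
#include <memory>
#include <mutex>

struct DebugContext { // stand-in for a DWARF context; hypothetical
  DebugContext() { puts("building debug context (expensive)"); }
  const char *lookupLine(uint64_t) const { return "file.cpp:42"; }
};

class ObjFile {
  mutable std::once_flag initFlag;
  mutable std::unique_ptr<DebugContext> dwarf;

public:
  // The expensive context is built at most once per file and reused.
  const DebugContext &getDwarf() const {
    std::call_once(initFlag,
                   [this] { dwarf = std::make_unique<DebugContext>(); });
    return *dwarf;
  }
};

int main() {
  ObjFile f;
  printf("%s\n", f.getDwarf().lookupLine(0x1000)); // builds once
  printf("%s\n", f.getDwarf().lookupLine(0x2000)); // reuses the cache
}
```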
Alexey Lapshin a130be6ac5 [LLD][NFC] Remove getOffsetInFile() workaround.
Summary:
LLD has a workaround from the time when SectionIndex was not passed properly:

LT->getFileLineInfoForAddress(
      S->getOffsetInFile() + Offset, nullptr,
      DILineInfoSpecifier::FileLineInfoKind::AbsoluteFilePath, Info));

S->getOffsetInFile() was added to differentiate offsets between
various sections. Now SectionIndex is properly specified.
Thus it is no longer necessary to use the getOffsetInFile() workaround.
See https://reviews.llvm.org/D58194, https://reviews.llvm.org/D58357.

This patch removes getOffsetInFile() workaround.

Reviewers: ruiu, grimar, MaskRay, espindola

Reviewed By: grimar, MaskRay

Subscribers: emaste, arichardson, llvm-commits

Tags: #llvm, #lld

Differential Revision: https://reviews.llvm.org/D75636
2020-03-05 15:52:46 +03:00
Fangrui Song 00925aadb3 [ELF][PPC32] Fix canonical PLTs when the order does not match the PLT order
Reviewed By: Bdragon28

Differential Revision: https://reviews.llvm.org/D75394
2020-02-28 22:23:14 -08:00
Alexey Lapshin 0a2d415bd0 [LLD] Report errors occurred while parsing debug info as warnings.
Summary:
Extracted from D74773. Currently, errors that happen while parsing
debug info are reported as errors. The DebugInfoDWARF library treats such
errors as "recoverable errors". This patch makes debug info errors
be reported as warnings, matching the DebugInfoDWARF approach.

Reviewers: ruiu, grimar, MaskRay, jhenderson, espindola

Reviewed By: MaskRay, jhenderson

Subscribers: emaste, aprantl, arichardson, arphaman, llvm-commits

Tags: #llvm, #debug-info, #lld

Differential Revision: https://reviews.llvm.org/D75234
2020-02-29 00:03:18 +03:00
Daniel Kiss b6162622c0 [LLD][ELF][AArch64] Change the semantics of -z pac-plt.
Summary:
Generate a PAC-protected PLT only when "-z pac-plt" is passed to the
linker; the GNU toolchain likewise generates it only when explicitly requested[1].
When pac-plt is requested, set the GNU_PROPERTY_AARCH64_FEATURE_1_PAC
note even when not all functions are compiled with PAC, but issue a warning.
Harmonize the warning style for BTI/PAC/IBT.
Generate a BTI-protected PLT in case of "-z force-bti".

[1] https://www.sourceware.org/ml/binutils/2019-03/msg00021.html

Reviewers: peter.smith, espindola, MaskRay, grimar

Reviewed By: peter.smith, MaskRay

Subscribers: tatyana-krasnukha, emaste, arichardson, kristof.beyls, MaskRay, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74537
2020-02-18 09:56:57 +01:00
Alexandre Ganea 8404aeb56a [Support] On Windows, ensure hardware_concurrency() extends to all CPU sockets and all NUMA groups
The goal of this patch is to maximize CPU utilization on multi-socket or high core count systems, so that parallel computations such as LLD/ThinLTO can use all hardware threads in the system. Before this patch, on Windows, at most 64 hardware threads could be used, in some cases dispatched only on one CPU socket.

== Background ==
Windows doesn't have a flat cpu_set_t like Linux. Instead, it projects hardware CPUs (or NUMA nodes) to applications through a concept of "processor groups". A "processor" is the smallest unit of execution on a CPU, that is, a hyper-thread if SMT is active, or a core otherwise. There's a limit of 32 processors on older 32-bit versions of Windows, which was later raised to 64 processors with 64-bit versions of Windows. This limit comes from the affinity mask, whose width historically matches sizeof(void*). Consequently, the concept of "processor groups" was introduced for dealing with systems with more than 64 hyper-threads.

By default, the Windows OS assigns only one "processor group" to each starting application, in a round-robin manner. If the application wants to use more processors, it needs to enable this programmatically, by assigning threads to other "processor groups". This also means that affinity cannot cross "processor group" boundaries; one can only specify a "preferred" group on start-up, but the application is free to allocate more groups if it wants to.

This creates a peculiar situation, where newer CPUs like the AMD EPYC 7702P (64-cores, 128-hyperthreads) are projected by the OS as two (2) "processor groups". This means that by default, an application can only use half of the cores. This situation could only get worse in the years to come, as dies with more cores will appear on the market.

== The problem ==
The heavyweight_hardware_concurrency() API was introduced so that only *one hardware thread per core* was used. Once that API returns, that original intention is lost; only the number of threads is retained. Consider a situation, on Windows, where the system has 2 CPU sockets, 18 cores each, each core having 2 hyper-threads, for a total of 72 hyper-threads. Both heavyweight_hardware_concurrency() and hardware_concurrency() currently return 36, because on Windows they are simply wrappers over std::thread::hardware_concurrency() -- which can only return processors from the current "processor group".

== The changes in this patch ==
To solve this situation, we capture (and retain) the initial intention until the point of usage, through a new ThreadPoolStrategy class. The decision of how many threads to use is deferred as late as possible, until the moment the std::threads are created (ThreadPool in the case of ThinLTO).

When using hardware_concurrency(), setting ThreadCount to 0 now means using all possible hardware CPU (SMT) threads. Providing a ThreadCount above the maximum number of threads has no effect; the maximum is used instead.
The heavyweight_hardware_concurrency() is similar to hardware_concurrency(), except that only one thread per hardware *core* will be used.

When LLVM_ENABLE_THREADS is OFF, the threading APIs will always return 1, to ensure any caller loops will be exercised at least once.

Differential Revision: https://reviews.llvm.org/D71775
2020-02-14 10:24:22 -05:00
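A hedged, Windows-only sketch of counting hardware threads across all processor groups, which is what a strategy that wants more than 64 threads must do instead of relying on std::thread::hardware_concurrency() alone (these Win32 APIs exist since Windows 7):

```
#include <windows.h>
#include <cstdio>

int main() {
  WORD groups = GetActiveProcessorGroupCount();
  DWORD total = 0;
  for (WORD g = 0; g < groups; ++g)
    total += GetActiveProcessorCount(g); // logical processors in this group only
  printf("processor groups: %u, total hardware threads: %lu\n",
         (unsigned)groups, (unsigned long)total);
}
```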
Russell Gallop e7cb374433 [LLD][ELF] Add time-trace to ELF LLD
This adds some LLD-specific scopes and picks up optimisation scopes
via LTO/ThinLTO. It makes use of the TimeProfiler multi-thread support added in
77e6bb3c.

Differential Revision: https://reviews.llvm.org/D71060
2020-02-06 12:14:13 +00:00
Fangrui Song 837e8a9c0c [ELF][PPC32] Support canonical PLT
-fno-pie produces a pair of non-GOT-non-PLT relocations R_PPC_ADDR16_{HA,LO} (R_ABS) referencing external
functions.

```
lis 3, func@ha
la 3, func@l(3)
```

In a -no-pie/-pie link, if func is not defined in the executable, a canonical PLT entry (st_value>0, st_shndx=0) will be needed.
References to func in shared objects will be resolved to this address.
-fno-pie -pie should fail with "can't create dynamic relocation ... against ...", so we just need to think about -no-pie.

On x86, the PLT entry passes the JMP_SLOT offset to the rtld PLT resolver.
On x86-64: the PLT entry passes the JUMP_SLOT index to the rtld PLT resolver.
On ARM/AArch64: the PLT entry passes &.got.plt[n]. The PLT header passes &.got.plt[fixed-index]. The rtld PLT resolver can compute the JUMP_SLOT index from the two addresses.

For these targets, the canonical PLT entry can just reuse the regular PLT entry (in PltSection).

On PPC32: PltSection (.glink) consists of `b PLTresolve` instructions and `PLTresolve`. The rtld PLT resolver depends on r11 having been set up to the .plt (GotPltSection) entry.
On PPC64 ELFv2: PltSection (.glink) consists of `__glink_PLTresolve` and `bl __glink_PLTresolve`. The rtld PLT resolver depends on r12 having been set up to the .plt (GotPltSection) entry.

We cannot reuse a `b PLTresolve`/`bl __glink_PLTresolve` in PltSection as a canonical PLT entry. PPC64 ELFv2 avoids the problem by using TOC for any external reference, even in non-pic code, so the canonical PLT entry scenario should not happen in the first place.
For PPC32, we have to create a PLT call stub as the canonical PLT entry. The code sequence sets up r11.

Reviewed By: Bdragon28

Differential Revision: https://reviews.llvm.org/D73399
2020-01-25 17:56:37 -08:00
Fangrui Song deb5819d62 [ELF] Rename relocateOne() to relocate() and pass `Relocation` to it
Symbol information can be used to improve out-of-range/misalignment diagnostics.
It also helps R_ARM_CALL/R_ARM_THM_CALL which has different behaviors with different symbol types.

There are many (67) relocateOne() call sites used in thunks, {Arm,AArch64}errata, PLT, etc.
Rename them to `relocateNoSym()` to make it clearer that there is no symbol information.

Reviewed By: grimar, peter.smith

Differential Revision: https://reviews.llvm.org/D73254
2020-01-25 12:00:18 -08:00
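An illustrative sketch (hypothetical types, not LLD's Relocation/Symbol) of why passing the whole relocation helps: the out-of-range diagnostic can name the referenced symbol, while a no-symbol variant keeps the old shape for thunks and PLT code:

```
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

struct Symbol { std::string name; };
struct Relocation { uint32_t type; const Symbol *sym; int64_t addend; };

// With the whole Relocation at hand, a range error can name the symbol.
static void relocate(uint8_t *loc, const Relocation &rel, int64_t val) {
  if (val < INT32_MIN || val > INT32_MAX) {
    fprintf(stderr, "relocation out of range against symbol '%s'\n",
            rel.sym->name.c_str());
    return;
  }
  int32_t v = int32_t(val);
  memcpy(loc, &v, sizeof(v)); // patch a 32-bit field in place
}

// The no-symbol variant used by thunks/PLT keeps the old parameter shape.
static void relocateNoSym(uint8_t *loc, uint32_t type, int64_t val) {
  static const Symbol anon{"<no symbol>"};
  relocate(loc, Relocation{type, &anon, 0}, val);
}

int main() {
  uint8_t buf[4];
  Symbol foo{"foo"};
  relocate(buf, Relocation{1, &foo, 0}, 0x1000); // in range, patched
  relocateNoSym(buf, 1, int64_t(1) << 40);       // reports a range error
}
```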
Peter Smith 01ad4c8384 [LLD][ELF][ARM][AArch64] Only round up ThunkSection Size when large OS.
In D71281 a fix was put in to round up the size of a ThunkSection to the
nearest 4 KiB when performing errata patching. This fixed a problem with a
very large instrumented program whose thunks and patches mutually
triggered each other. Unfortunately it triggers an assertion failure in an
AArch64 allyesconfig build of the kernel. There is a specific assertion
preventing an InputSectionDescription from being larger than 4 KiB. This will
always trigger if there is at least one Thunk needed in that
InputSectionDescription, which is possible for an allyesconfig build.

Abstractly the problem case is:
.text : {
          *(.text) ;
          ...
          . = ALIGN(SZ_4K);
          __idmap_text_start = .;
          *(.idmap.text)
          __idmap_text_end = .;
          ...
        }
The assertion checks that __idmap_text_end - __idmap_text_start is < 4 KiB.
Note that there is more than one InputSectionDescription in the
OutputSection so we can't just restrict the fix to OutputSections smaller
than 4 KiB.

The fix presented here limits the D71281 fix to InputSectionDescriptions that
meet the following conditions:
1.) The OutputSection is bigger than the thunkSectionSpacing, so adding
thunks will affect the addresses of following code.
2.) The InputSectionDescription is larger than 4 KiB. This prevents
assertion failures for InputSectionDescriptions that are < 4 KiB
in size.

We do this at ThunkSection creation time because at this point we know that
the addresses are stable and up to date prior to adding the thunks, since
assignAddresses() will have been called immediately prior to thunk
generation.

The fix reverts the two tests affected by D71281 to their original state
as they no longer need the 4KiB size roundup. I've added simpler tests to
check for D71281 when the OutputSection size is larger than the ThunkSection
spacing.

Fixes https://github.com/ClangBuiltLinux/linux/issues/812

Differential Revision: https://reviews.llvm.org/D72344
2020-01-17 10:47:21 +00:00
Fangrui Song 870094decf [ELF] Decrease alignment of ThunkSection on 64-bit targets from 8 to 4
ThunkSection contains 4-byte instructions on all targets that use
thunks. Thunks should not be used in any performance-sensitive places,
and locality/cache line/instruction fetching arguments should not apply.

We use 16 bytes as preferred function alignments for modern PowerPC cores.
In any case, 8 is not optimal.

Differential Revision: https://reviews.llvm.org/D72819
2020-01-16 10:36:33 -08:00
Fangrui Song 7cd429f27d [ELF] Add -z force-ibt and -z shstk for Intel Control-flow Enforcement Technology
This patch is a joint work by Rui Ueyama and me based on D58102 by Xiang Zhang.

It adds Intel CET (Control-flow Enforcement Technology) support to lld.
The implementation follows the draft version of psABI which you can
download from https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI.

CET introduces a new restriction on indirect jump instructions so that
you can limit the places to which you can jump using indirect jumps.

In order to use the feature, you need to compile source files with
-fcf-protection=full.

* IBT is enabled if all input files are compiled with the flag. To force enabling ibt, pass -z force-ibt.
* SHSTK is enabled if all input files are compiled with the flag, or if -z shstk is specified.

IBT-enabled executables/shared objects have two PLT sections, ".plt" and
".plt.sec".  For the details as to why we have two sections, please read
the comments.

Reviewed By: xiangzhangllvm

Differential Revision: https://reviews.llvm.org/D59780
2020-01-13 23:39:28 -08:00
Fangrui Song 681b1be774 [lld] Fix -Wrange-loop-analysis warnings
One instance looks like a false positive:

lld/ELF/Relocations.cpp:1622:14: note: use reference type 'const std::pair<ThunkSection *, uint32_t> &' (aka 'const pair<lld::elf::ThunkSection *, unsigned int> &') to prevent copying
        for (const std::pair<ThunkSection *, uint32_t> ts : isd->thunkSections)

It is not changed in this commit.
2020-01-01 15:41:20 -08:00
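For context, a minimal reproduction of the warning class that -Wrange-loop-analysis emits (unrelated to LLD's code): a by-value loop variable silently copies each element, and binding a reference avoids it.

```
#include <cstdio>
#include <map>
#include <string>

int main() {
  std::map<std::string, int> m = {{"a", 1}, {"b", 2}};
  int sum = 0;
  // for (const std::pair<std::string, int> p : m)  // warns: each iteration
  //                                                // copies the pair
  for (const auto &p : m) // reference binding: no copy, no warning
    sum += p.second;
  printf("%d\n", sum);
}
```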
Fangrui Song 1edd965130 [ELF] Support input section description .gnu.version* in /DISCARD/
Linux powerpc discards `*(.gnu.version*)` (arch/powerpc/kernel/vmlinux.lds.S)
to suppress --orphan-handling=warn warnings in the -pie output `.tmp_vmlinux1`

The support is simple. Just add isLive() to:

1) Fix an assertion in SectionBase::getPartition() called by VersionTableSection::isNeeded().
2) Suppress DT_VERSYM, DT_VERDEF, DT_VERNEED and DT_VERNEEDNUM, if the relevant section is discarded.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D71819
2019-12-26 09:54:22 -08:00
Fangrui Song 37b2808059 [ELF] writePlt, writeIplt: replace parameters gotPltEntryAddr and index with `const Symbol &`. NFC
PPC::writeIplt (IPLT code sequence, D71621) needs to access `Symbol`.

Reviewed By: grimar, ruiu

Differential Revision: https://reviews.llvm.org/D71631
2019-12-18 00:14:03 -08:00
Fangrui Song 345f59667d [ELF] Rename .plt to .iplt and decrease EM_PPC{,64} alignment of .glink to 4
GNU ld creates the synthetic section .iplt, and has a built-in linker
script that assigns .iplt to the output section .plt . There is no
output section named .iplt .

Making .iplt an output section actually has a benefit that makes the
tricky toolchain feature stand out. Symbolizers don't have to deal with
mixed PLT entries (e.g. llvm-objdump -d incorrectly annotates such jump
targets).

On EM_PPC{,64}, .glink contains a PLT resolver and a series of jump
instructions. The 4-byte entry size makes it unnecessary to have an
alignment of 16.

Mark ppc32-gnu-ifunc.s and ppc32-gnu-ifunc-nonpreemptable.s as `XFAIL: *`.
They test IPLT on EM_PPC, which never works.

Reviewed By: peter.smith

Differential Revision: https://reviews.llvm.org/D71520
2019-12-17 00:15:59 -08:00
Fangrui Song 891a8655ab [ELF] Add IpltSection
PltSection is used by both PLT and IPLT. The PLT section may have a
header while the IPLT section does not. Split off IpltSection from
PltSection to be clearer.

Unlike other targets, PPC64 cannot use the same code sequence for PLT
and IPLT. This helps make a future PPC64 patch (D71509) more isolated.

On EM_386 and EM_X86_64, when PLT is empty while IPLT is not, currently
we are inconsistent about whether the PLT header is conceptually attached to
in.plt or in.iplt . Consistently attaching the header to in.plt makes
the -z retpolineplt logic simpler. It also makes `jmp` point to an
aesthetically better place for non-retpolineplt cases.

Reviewed By: grimar, ruiu

Differential Revision: https://reviews.llvm.org/D71519
2019-12-17 00:06:04 -08:00
Fangrui Song 90d195d026 [ELF] Delete relOff from TargetInfo::writePLT
This change only affects EM_386. relOff can be computed from `index`
easily, so it is unnecessarily passed as a parameter.

Both in.plt and in.iplt entries are written by writePLT. For in.iplt,
the instruction `push reloc_offset` will change because `index` is now
different. Fortunately, this does not matter because `push; jmp` is only
used by PLT. IPLT does not need the code sequence.

Reviewed By: grimar, ruiu

Differential Revision: https://reviews.llvm.org/D71518
2019-12-16 11:10:02 -08:00
Fangrui Song 98afa2c1f1 [ELF] De-template PltSection::addEntry. NFC 2019-12-16 11:03:20 -08:00
Rui Ueyama 69da7e29de Revert an accidental commit af5ca40b47 2019-12-13 15:17:40 +09:00
Rui Ueyama af5ca40b47 temporary 2019-12-13 14:35:03 +09:00
Peter Smith 86d24193a9 [LLD][ELF][AArch64][ARM] When errata patching, round thunk size to 4KiB.
On some edge cases such as Chromium compiled with full instrumentation we
have a .text section over twice the size of the maximum branch range and
the instrumented code generation containing many examples of the erratum
sequence. The combination of Thunks and many erratum sequences causes
finalizeAddressDependentContent() to not converge. We end up with:
start
- Thunk Creation (disturbs addresses after thunks, creating more patches)
- Patch Creation (disturbs addresses after patches, creating more thunks)
- goto start

In most images with few thunks and patches the mutual disturbance does not
cause convergence problems. As the .text size and number of patches go up
the risk increases.

A way to prevent thunk creation from interfering with patch creation is
to round up the size of the thunks to a 4 KiB boundary when the
erratum patch is enabled. As the erratum sequence only triggers when an
instruction sequence starts at 0xff8 or 0xffc modulo 4 KiB, making the
thunks not affect addresses modulo 4 KiB prevents thunks from
interfering with the patches.

The patches themselves could be aggregated in the same way that Thunks are
within ThunkSections, and we could round up the size in the same way. This
would reduce the number of patches created in a .text section of size >
128 MiB but would not be likely to help convergence problems.

Differential Revision: https://reviews.llvm.org/D71281

fixes (remaining part of) pr44071, other part in D71242
2019-12-11 14:09:15 +00:00
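The rounding itself is a power-of-two align-up: if a thunk island always occupies a whole multiple of 4 KiB, the code after it keeps the same address modulo 4 KiB, so adding thunks cannot create or destroy 0xff8/0xffc erratum positions. A small sketch of the arithmetic, where alignTo is a local helper and the 12-byte thunk size is only an example:

```
#include <cstdint>
#include <cstdio>

// Round value up to the next multiple of a power-of-two alignment.
static uint64_t alignTo(uint64_t value, uint64_t align) {
  return (value + align - 1) & ~(align - 1);
}

int main() {
  uint64_t thunkBytes = 3 * 12; // e.g. three 12-byte thunks
  printf("reserved size: %llu\n",
         (unsigned long long)alignTo(thunkBytes, 4096)); // prints 4096
}
```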
Fangrui Song c8f0d3e130 [ELF][PPC64] Support long branch thunks with addends
Fixes PPC64 part of PR40438

  // clang -target ppc64le -c a.cc
  // .text.unlikely may be placed in a separate output section (via -z keep-text-section-prefix)
  // The distance between bar in .text.unlikely and foo in .text may be larger than 32MiB.
  static void foo() {}
  __attribute__((section(".text.unlikely"))) static int bar() { foo(); return 0; }
  __attribute__((used)) static int dummy = bar();

This patch makes such thunks with addends work for PPC64.

AArch64: .text -> `__AArch64ADRPThunk_ (adrp x16, ...; add x16, x16, ...; br x16)` -> target
PPC64: .text -> `__long_branch_ (addis 12, 2, ...; ld 12, ...(12); mtctr 12; bctr)` -> target

AArch64 can leverage ADRP to jump to the target directly, but PPC64
needs to load an address from .branch_lt . Before Power ISA v3.0, the
PC-relative ADDPCIS was not available. .branch_lt was invented to work
around the limitation.

Symbol::ppc64BranchltIndex is replaced by
PPC64LongBranchTargetSection::entry_index, which takes addends into
consideration.

The tests are rewritten: ppc64-long-branch.s tests -no-pie and
ppc64-long-branch-pi.s tests -pie and -shared.

Reviewed By: sfertile

Differential Revision: https://reviews.llvm.org/D70937
2019-12-05 10:17:45 -08:00
Peter Smith 784f57584f [LLD][ELF][AArch64] .note.gnu.property sections should have alignment 8
The .note.gnu.property SHT_NOTE sections on AArch64 (a 64-bit target)
should have alignment 8 to more closely match the binutils implementation
where alignment is 4-bytes on 32-bit machines and 8-bytes on 64-bit
machines.

Previously LLD was using 4 for both 32-bit and 64-bit machines.

Differential Revision: https://reviews.llvm.org/D70962
2019-12-05 10:11:31 +00:00
Peter Collingbourne 2c6fae179e ELF: Discard .ARM.exidx sections for empty functions instead of misordering them.
The logic added in r372781 caused ARMExidxSyntheticSection::addSection()
to return false for exidx sections without a link order dep that passed
isValidExidxSectionDep(). This included exidx sections for empty functions. As
a result, such exidx sections would end up treated like ordinary sections and
would end up being laid out before the ARMExidxSyntheticSection, most likely in
the wrong order relative to the exidx entries in the ARMExidxSyntheticSection,
breaking the orderedness invariant relied upon by unwinders. Fix this by
simply discarding such sections.

Differential Revision: https://reviews.llvm.org/D69744
2019-11-04 09:11:14 -08:00
Nico Weber 5976a3f5aa Fix a few typos in lld/ELF to cycle bots 2019-10-28 21:41:47 -04:00
Fangrui Song bd8cfe65f5 [ELF] Wrap things in `namespace lld { namespace elf {`, NFC
This makes it clear `ELF/**/*.cpp` files define things in the `lld::elf`
namespace and simplifies `elf::foo` to `foo`.

Reviewed By: atanasyan, grimar, ruiu

Differential Revision: https://reviews.llvm.org/D68323

llvm-svn: 373885
2019-10-07 08:31:18 +00:00
Peter Collingbourne 0bb825d208 ELF: Add .interp synthetic sections first in createSyntheticSections().
Our .interp section is not a SyntheticSection. As a result, it terminates the
loop in removeUnusedSyntheticSections(). This has at least two consequences:

- The synthetic .bss and .bss.rel.ro sections are always present in
  dynamically linked executables, even when they are not needed.
- The synthetic .ARM.exidx (and possibly other) sections are always present
  in partitions other than the last one, even when not needed.
  .ARM.exidx in particular is problematic because it assumes that its
  list of code sections is non-empty in getLinkOrderDep(), which can
  lead to a crash if the partition does not have any code sections.

Fix these problems by moving the creation of the .interp sections to the
top of createSyntheticSections(). While here, make the code a little less
error-prone by changing the add() lambdas to take a SyntheticSection instead
of an InputSectionBase.

Differential Revision: https://reviews.llvm.org/D68256

llvm-svn: 373347
2019-10-01 16:10:13 +00:00
Peter Smith 06b3e3421a [ELF][ARM] Fix crash when discarding InputSections that have .ARM.exidx
When /DISCARD/ is used on an input section, that input section may have
a .ARM.exidx metadata section that depends on it. As the discard handling
comes after the .ARM.exidx synthetic section is created we need to make
sure that we account for the case where the .ARM.exidx output section
should be removed because there are no more live input sections.

Differential Revision: https://reviews.llvm.org/D67848

llvm-svn: 372781
2019-09-24 21:44:14 +00:00
Fangrui Song e47bbd28f8 [ELF] Make MergeInputSection merging aware of output sections
Fixes PR38748

mergeSections() calls getOutputSectionName() to get output section
names. Two MergeInputSections may be merged even if they are made
different by SECTIONS commands.

This patch moves mergeSections() after processSectionCommands() and
addOrphanSections() to fix the issue. The new pass is renamed to
OutputSection::finalizeInputSections().

processSectionCommands() and addOrphanSections() are changed to add
sections to InputSectionDescription::sectionBases.

finalizeInputSections() merges MergeInputSections and migrates
`sectionBases` to `sections`.

For the -r case, we drop an optimization that tries keeping sh_entsize
non-zero. This is for the simplicity of addOrphanSections(). The
updated merge-entsize2.s reflects the change.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D67504

llvm-svn: 372734
2019-09-24 11:48:31 +00:00
Fangrui Song 7afffb54ea [ELF] Don't shrink RelrSection
Fixes PR43214.

The size of SHT_RELR may oscillate between 2 numbers (see D53003 for a
similar --pack-dyn-relocs=android issue). This can happen if the shrink
of SHT_RELR causes it to take more words to encode relocation offsets
(this can happen with thunks or segments with overlapping p_offset
ranges), and the expansion of SHT_RELR causes it to take fewer words to
encode relocation offsets.

To avoid the issue, add padding 1s to the end of the relocation section
if its size would decrease. Trailing 1s do not decode to more relocations.

Reviewed By: peter.smith

Differential Revision: https://reviews.llvm.org/D67164

llvm-svn: 370923
2019-09-04 16:27:35 +00:00
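A sketch of the padding idea, assuming the RELR words for the current pass have already been encoded: an odd word is a bitmap whose low bit is only a marker, so a word equal to 1 decodes to zero extra relocations and can safely pad the section back up to its previous size.

```
#include <cstdint>
#include <cstdio>
#include <vector>

// If this pass produced fewer words than the previous pass, pad with 1s so
// the section size never shrinks between address-assignment passes.
static void padRelr(std::vector<uint64_t> &relrWords, size_t oldSize) {
  while (relrWords.size() < oldSize)
    relrWords.push_back(1); // bitmap word with no relocation bits set
}

int main() {
  std::vector<uint64_t> words = {0x1000, 0b101 /* bitmap word */};
  padRelr(words, 4); // pretend the previous pass needed 4 words
  printf("words: %zu (last = %llu)\n", words.size(),
         (unsigned long long)words.back());
}
```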
Fangrui Song 8fbe81fb29 [ELF][RISCV] Assign st_shndx of __global_pointer$ to 1 if .sdata does not exist
This essentially reverts the code change of D63132 and switches to a simpler approach.

In an executable/shared object, st_shndx of a symbol can be:

1) SHN_UNDEF: undefined symbol (or canonical PLT)
2) SHN_ABS: absolute symbol
3) any other value (usually a regular section index) represents a relative symbol.
  The actual value does not matter.

Many ld.so (musl, all archs except MIPS of FreeBSD rtld-elf) even treat 2) and 3)
the same. If .sdata does not exist, it does not matter what value/section
__global_pointer$ has, as long as it is relative (otherwise there will be a pedantic
lld error. See D63132). Just set the st_shndx arbitrarily to 1.

Dummy st_shndx=1 may be used by __rela_iplt_start, linker-script-defined symbols outside a section, __dso_handle, etc.

Reviewed By: ruiu

Differential Revision: https://reviews.llvm.org/D66798

llvm-svn: 370172
2019-08-28 09:01:03 +00:00
Fangrui Song 1681ceb2c4 [ELF] EhFrameSection: postpone FDE liveness check to finalizeSections
EhFrameSection::addSection checks liveness of FDE early. This makes it
infeasible to move combineEhSections() before ICF.

Postpone the check to EhFrameSection::finalizeContents(). This is what
ARMExidxSyntheticSection does and it will make a subsequent patch D66717
simpler.

Reviewed By: ruiu

Differential Revision: https://reviews.llvm.org/D66727

llvm-svn: 369890
2019-08-26 10:32:12 +00:00
Fangrui Song 2d337fdc95 Reland D65242 "[ELF] More dynamic relocation packing""
This fixed a bug in r369488. When config->isRela is false, i->r_addend
is not initialized (see encodeDynamicReloc). So we should check
config->isRela before accessing r_addend:

- if (j - i < 3 || i->r_addend)
+ if (j - i < 3 || (config->isRela && i->r_addend != 0))

Original description:

Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.

The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.

Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in runtime linker for several executables
that I tested.

As a simple correctness check, I've also built x86_64 Android and booted
successfully.

Differential Revision: https://reviews.llvm.org/D65242
Patch by Vic Yang

llvm-svn: 369507
2019-08-21 09:21:37 +00:00
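A hedged sketch of the grouping rule (not the actual Android packer): scan sorted non-relative relocations and form a group whenever at least three consecutive entries share r_info, and r_addend too when RELA is used. The Rel type and function names here are made up for the example.

```
#include <cstdint>
#include <cstdio>
#include <vector>

struct Rel { uint64_t offset, info; int64_t addend; };

// Emit a group whenever >= 3 consecutive relocations share their fields, so
// the shared fields are stored once per group rather than once per entry.
static void groupRelocs(const std::vector<Rel> &rels, bool isRela) {
  size_t i = 0;
  while (i < rels.size()) {
    size_t j = i + 1;
    while (j < rels.size() && rels[j].info == rels[i].info &&
           (!isRela || rels[j].addend == rels[i].addend))
      ++j;
    if (j - i >= 3)
      printf("group of %zu relocs sharing info=%llu\n", j - i,
             (unsigned long long)rels[i].info);
    else
      printf("%zu ungrouped reloc(s)\n", j - i);
    i = j;
  }
}

int main() {
  std::vector<Rel> rels = {{0x10, 7, 0}, {0x18, 7, 0}, {0x20, 7, 0}, {0x28, 9, 4}};
  groupRelocs(rels, /*isRela=*/true);
}
```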
Fangrui Song b2895a8cdc Revert D65242 "[ELF] More dynamic relocation packing"
This reverts r369488 and r369489. The change broke build bots:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap-ubsan/builds/14511
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/34407

llvm-svn: 369497
2019-08-21 06:50:08 +00:00
Fangrui Song 35f9a84a15 [ELF] More dynamic relocation packing
Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.

The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.

Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in runtime linker for several executables
that I tested.

As a simple correctness check, I've also built x86_64 Android and booted
successfully.

Differential Revision: https://reviews.llvm.org/D66491
Patch by Vic Yang!

llvm-svn: 369488
2019-08-21 03:02:08 +00:00
Jonas Devlieghere 6ba7992031 [LLD] Migrate llvm::make_unique to std::make_unique
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.

Differential revision: https://reviews.llvm.org/D66259

llvm-svn: 368936
2019-08-14 22:28:17 +00:00
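A minimal before/after of the mechanical change (Sym here is a made-up type; llvm::make_unique had the same signature as the C++14 standard helper):

```
#include <memory>
#include <string>

struct Sym {
  std::string name;
  int value;
  Sym(std::string n, int v) : name(std::move(n)), value(v) {}
};

int main() {
  // auto s = llvm::make_unique<Sym>("foo", 1); // pre-C++14 helper (removed)
  auto s = std::make_unique<Sym>("foo", 1);     // C++14 standard replacement
  return s->value;
}
```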