Both the .ARM.exidx and .eh_frame sections have a custom SyntheticSection
that acts as a container for the InputSections. The InputSections are added
to the SyntheticSection before /DISCARD/ is processed, which limits the effect
a /DISCARD/ can have to the whole SyntheticSection. In the majority of cases
this is sufficient as it is not common to discard subsets of the
InputSections. The Linux kernel has one of these scripts which has something
like:
/DISCARD/ : { *(.ARM.exidx.exit.text) *(.ARM.extab.exit.text) ... }
The .ARM.exidx.exit.text sections are not discarded because the InputSections
have already been transferred to the SyntheticSection. The *(.ARM.extab.exit.text)
sections have not, so they are discarded. When we come to write out the .ARM.exidx
sections the dangling references from .ARM.exidx.exit.text to
.ARM.extab.exit.text currently cause relocation out of range errors, but
could as easily cause a fatal error message if we check for dangling
references at relocation time.
This patch attempts to respect the /DISCARD/ command by running it on the
.ARM.exidx InputSections stored in the SyntheticSection.
The .eh_frame is in theory affected by this problem, but I don't think
that a dangling reference problem can happen with these sections.
Fixes remaining part of pr44824
Differential Revision: https://reviews.llvm.org/D79687
Essentially takes the lld/Common/Threads.h wrappers and moves them to
the llvm/Support/Parallel.h algorithm header.
The changes are:
- Remove policy parameter, since all clients use `par`.
- Rename the methods to `parallelSort` etc to match LLVM style, since
they are no longer C++17 pstl compatible.
- Move algorithms from llvm::parallel:: to llvm::, since they have
"parallel" in the name and are no longer overloads of the regular
algorithms.
- Add range overloads
- Use the sequential algorithm directly when 1 thread is requested
(skips task grouping)
- Fix the index type of parallelForEachN to size_t. Nobody in LLVM was
using any other parameter, and it made overload resolution hard for
for_each_n(par, 0, foo.size(), ...) because 0 is int, not size_t.
Remove Threads.h and update LLD for that.
This is a prerequisite for parallel public symbol processing in the PDB
library, which is in LLVM.
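For illustration, a rough sketch of what call sites look like after the move
(assuming the parallelSort/parallelForEach/parallelForEachN spellings described
above; details may differ):

#include "llvm/Support/Parallel.h"
#include <vector>

void example(std::vector<int> &v) {
  // Sort in parallel; with only one thread requested this degrades to the
  // plain sequential sort, skipping task grouping.
  llvm::parallelSort(v.begin(), v.end());

  // Range overload: visit every element.
  llvm::parallelForEach(v, [](int &x) { x += 1; });

  // Index-based loop; the index type is now size_t.
  llvm::parallelForEachN(0, v.size(), [&](size_t i) { v[i] *= 2; });
}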
Reviewed By: MaskRay, aganea
Differential Revision: https://reviews.llvm.org/D79390
A linker will create .ARM.exidx sections for InputSections that don't
have them. This can cause a relocation out of range error if the
InputSection happens to be extremely far away from the other sections.
This is often the case for the vector table on older ARM CPUs as the only
two places that the table can be placed are 0 or 0xffff0000. We fix this
by removing InputSections that need a linker generated .ARM.exidx
section if that would cause an error.
Differential Revision: https://reviews.llvm.org/D79289
Fixed an error detected by msan. The size field of the .ARM.exidx synthetic
section needs to be initialized to at least an estimate before
calling assignAddresses, as that will use the size field.
This was previously reverted in 1ca16fc4f5.
Differential Revision: https://reviews.llvm.org/D78422
This reverts commit f969c2aa65.
There are some msan buildbot failures on sanitizer-x86_64-linux-fast that
I need to investigate.
Differential Revision: https://reviews.llvm.org/D78422
The contents of the .ARM.exidx section must be ordered by SHF_LINK_ORDER
rules. We don't need to know the precise address for this order, but we
do need to know the relative order of sections. We have been using the
sectionIndex for this purpose, this works when the OutputSection order
has a monotonically increasing virtual address, but it is possible to
write a linker script with non-monotonically increasing virtual addresses.
For these cases we need to evaluate the base address of the OutputSection
so that we can order the .ARM.exidx sections properly.
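For illustration, a minimal sketch of the idea (the names here are
illustrative, not lld's actual ones): once addresses are assigned, the table
can be ordered by the virtual address of the executable section each entry
describes, which stays correct even for non-monotonic scripts.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ExecSection {
  uint64_t va; // virtual address after assignAddresses()
  // ... data needed to emit the matching .ARM.exidx entry
};

// Order .ARM.exidx entries by the address of the code they unwind, not by
// output section index.
void sortExidxInputs(std::vector<ExecSection *> &sections) {
  std::stable_sort(sections.begin(), sections.end(),
                   [](const ExecSection *a, const ExecSection *b) {
                     return a->va < b->va;
                   });
}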
This change moves the finalisation of .ARM.exidx till after the first
call to AssignAddresses. This permits us to sort on virtual address which
is linker script safe. It also permits a fix for part of pr44824 where
we generate .ARM.exidx section for the vector table when that table is so
far away it is out of range of the .ARM.exidx section. This fix will come
in a follow up patch.
Differential Revision: https://reviews.llvm.org/D78422
--no-threads is a name copied from gold.
gold has --no-threads, --thread-count and several other --thread-count-*.
There are needs to customize the number of threads (running several lld
processes concurrently or customizing the number of LTO threads).
Having a single --threads=N is a straightforward replacement of gold's
--no-threads + --thread-count.
--no-threads is used rarely. So just delete --no-threads instead of
keeping it for compatibility for a while.
If --threads= is specified (ELF,wasm; COFF /threads: is similar),
--thinlto-jobs= defaults to --threads=,
otherwise all available hardware threads are used.
There is currently no way to override a --threads={1,2,...}. It is still
a debate whether we should use --threads=all.
Reviewed By: rnk, aganea
Differential Revision: https://reviews.llvm.org/D76885
On an internal target,
* Before D74773: time -f '%M' => 18275680
* After D74773: time -f '%M' => 22088964
This patch restores memory usage to the status before D74773.
The lazy initialization pattern
llvm::call_once(initDwarfLine, [this]() { initializeDwarf(); });
is used, though not in all places.
I need that patch for implementing the "Remove obsolete debug info" feature
(D74169). But this caching mechanism is useful by itself, and I think it
would be good to use it independently of the "Remove obsolete debug info"
feature. So this patch replaces in-place creation of DWARFContext with
its cached version.
Depends on D74308
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D74773
Summary:
LLD has a workaround for the times when SectionIndex was not passed properly:
LT->getFileLineInfoForAddress(
S->getOffsetInFile() + Offset, nullptr,
DILineInfoSpecifier::FileLineInfoKind::AbsoluteFilePath, Info));
S->getOffsetInFile() was added to differentiate offsets between
various sections. Now SectionIndex is properly specified.
Thus it is not necessary to use the getOffsetInFile() workaround.
See https://reviews.llvm.org/D58194, https://reviews.llvm.org/D58357.
This patch removes getOffsetInFile() workaround.
Reviewers: ruiu, grimar, MaskRay, espindola
Reviewed By: grimar, MaskRay
Subscribers: emaste, arichardson, llvm-commits
Tags: #llvm, #lld
Differential Revision: https://reviews.llvm.org/D75636
Summary:
Extracted from D74773. Currently, errors that happen while parsing
debug info are reported as errors. The DebugInfoDWARF library treats such
errors as "Recoverable errors". This patch makes debug info errors
be reported as warnings, to match the DebugInfoDWARF approach.
Reviewers: ruiu, grimar, MaskRay, jhenderson, espindola
Reviewed By: MaskRay, jhenderson
Subscribers: emaste, aprantl, arichardson, arphaman, llvm-commits
Tags: #llvm, #debug-info, #lld
Differential Revision: https://reviews.llvm.org/D75234
Summary:
Generate a PAC protected PLT only when "-z pac-plt" is passed to the
linker. The GNU toolchain generates it only when explicitly requested[1].
When pac-plt is requested, set the GNU_PROPERTY_AARCH64_FEATURE_1_PAC
note even when not all functions are compiled with PAC, but issue a warning.
Harmonize the warning style for BTI/PAC/IBT.
Generate a BTI protected PLT in case of "-z force-bti".
[1] https://www.sourceware.org/ml/binutils/2019-03/msg00021.html
Reviewers: peter.smith, espindola, MaskRay, grimar
Reviewed By: peter.smith, MaskRay
Subscribers: tatyana-krasnukha, emaste, arichardson, kristof.beyls, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74537
The goal of this patch is to maximize CPU utilization on multi-socket or high core count systems, so that parallel computations such as LLD/ThinLTO can use all hardware threads in the system. Before this patch, on Windows, a maximum of 64 hardware threads could be used at most, in some cases dispatched only on one CPU socket.
== Background ==
Windows doesn't have a flat cpu_set_t like Linux. Instead, it projects hardware CPUs (or NUMA nodes) to applications through a concept of "processor groups". A "processor" is the smallest unit of execution on a CPU, that is, a hyper-thread if SMT is active; a core otherwise. There's a limit of 32 processors on older 32-bit versions of Windows, which was later raised to 64 processors with 64-bit versions of Windows. This limit comes from the affinity mask, which historically is represented by the sizeof(void*). Consequently, the concept of "processor groups" was introduced for dealing with systems with more than 64 hyper-threads.
By default, the Windows OS assigns only one "processor group" to each starting application, in a round-robin manner. If the application wants to use more processors, it needs to programmatically enable it, by assigning threads to other "processor groups". This also means that affinity cannot cross "processor group" boundaries; one can only specify a "preferred" group on start-up, but the application is free to allocate more groups if it wants to.
This creates a peculiar situation, where newer CPUs like the AMD EPYC 7702P (64-cores, 128-hyperthreads) are projected by the OS as two (2) "processor groups". This means that by default, an application can only use half of the cores. This situation could only get worse in the years to come, as dies with more cores will appear on the market.
== The problem ==
The heavyweight_hardware_concurrency() API was introduced so that only *one hardware thread per core* was used. Once that API returns, that original intention is lost, only the number of threads is retained. Consider a situation, on Windows, where the system has 2 CPU sockets, 18 cores each, each core having 2 hyper-threads, for a total of 72 hyper-threads. Both heavyweight_hardware_concurrency() and hardware_concurrency() currently return 36, because on Windows they are simply wrappers over std::thread::hardware_concurrency() -- which can only return processors from the current "processor group".
== The changes in this patch ==
To solve this situation, we capture (and retain) the initial intention until the point of usage, through a new ThreadPoolStrategy class. The number of threads to use is deferred as late as possible, until the moment where the std::threads are created (ThreadPool in the case of ThinLTO).
When using hardware_concurrency(), setting ThreadCount to 0 now means to use all the possible hardware CPU (SMT) threads. Providing a ThreadCount above the maximum number of threads will have no effect; the maximum will be used instead.
The heavyweight_hardware_concurrency() is similar to hardware_concurrency(), except that only one thread per hardware *core* will be used.
When LLVM_ENABLE_THREADS is OFF, the threading APIs will always return 1, to ensure any caller loops will be exercised at least once.
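For illustration, a minimal sketch of how a client carries the intent through
to thread creation (assuming the post-patch ThreadPoolStrategy API described
above; the task body is a placeholder):

#include "llvm/Support/ThreadPool.h"
#include "llvm/Support/Threading.h"

void runTasks() {
  // One thread per physical core, across all processor groups on Windows.
  llvm::ThreadPool pool(llvm::heavyweight_hardware_concurrency());
  for (int i = 0; i < 100; ++i)
    pool.async([i] { (void)i; /* per-task work using i */ });
  pool.wait();

  // ThreadCount = 0 means "use every hardware (SMT) thread".
  llvm::ThreadPoolStrategy everything = llvm::hardware_concurrency(0);
  (void)everything;
}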
Differential Revision: https://reviews.llvm.org/D71775
This adds some LLD-specific scopes and picks up optimisation scopes
via LTO/ThinLTO. Makes use of TimeProfiler multi-thread support added in
77e6bb3c.
Differential Revision: https://reviews.llvm.org/D71060
-fno-pie produces a pair of non-GOT-non-PLT relocations R_PPC_ADDR16_{HA,LO} (R_ABS) referencing external
functions.
```
lis 3, func@ha
la 3, func@l(3)
```
In a -no-pie/-pie link, if func is not defined in the executable, a canonical PLT entry (st_value>0, st_shndx=0) will be needed.
References to func in shared objects will be resolved to this address.
-fno-pie -pie should fail with "can't create dynamic relocation ... against ...", so we just need to think about -no-pie.
On x86, the PLT entry passes the JMP_SLOT offset to the rtld PLT resolver.
On x86-64: the PLT entry passes the JUMP_SLOT index to the rtld PLT resolver.
On ARM/AArch64: the PLT entry passes &.got.plt[n]. The PLT header passes &.got.plt[fixed-index]. The rtld PLT resolver can compute the JUMP_SLOT index from the two addresses.
For these targets, the canonical PLT entry can just reuse the regular PLT entry (in PltSection).
On PPC32: PltSection (.glink) consists of `b PLTresolve` instructions and `PLTresolve`. The rtld PLT resolver depends on r11 having been set up to the .plt (GotPltSection) entry.
On PPC64 ELFv2: PltSection (.glink) consists of `__glink_PLTresolve` and `bl __glink_PLTresolve`. The rtld PLT resolver depends on r12 having been set up to the .plt (GotPltSection) entry.
We cannot reuse a `b PLTresolve`/`bl __glink_PLTresolve` in PltSection as a canonical PLT entry. PPC64 ELFv2 avoids the problem by using TOC for any external reference, even in non-pic code, so the canonical PLT entry scenario should not happen in the first place.
For PPC32, we have to create a PLT call stub as the canonical PLT entry. The code sequence sets up r11.
Reviewed By: Bdragon28
Differential Revision: https://reviews.llvm.org/D73399
Symbol information can be used to improve out-of-range/misalignment diagnostics.
It also helps R_ARM_CALL/R_ARM_THM_CALL which have different behaviors with different symbol types.
There are many (67) relocateOne() call sites used in thunks, {Arm,AArch64}errata, PLT, etc.
Rename them to `relocateNoSym()` to be clearer that there is no symbol information.
Reviewed By: grimar, peter.smith
Differential Revision: https://reviews.llvm.org/D73254
In D71281 a fix was put in to round up the size of a ThunkSection to the
nearest 4KiB when performing errata patching. This fixed a problem with a
very large instrumented program that had thunks and patches mutually
trigger each other. Unfortunately it triggers an assertion failure in an
AArch64 allyesconfig build of the kernel. There is a specific assertion
preventing an InputSectionDescription being larger than 4KiB. This will
always trigger if there is at least one Thunk needed in that
InputSectionDescription, which is possible for an allyesconfig build.
Abstractly the problem case is:
.text : {
*(.text) ;
...
. = ALIGN(SZ_4K);
__idmap_text_start = .;
*(.idmap.text)
__idmap_text_end = .;
...
}
The assertion checks that __idmap_text_end - __idmap_text_start is < 4 KiB.
Note that there is more than one InputSectionDescription in the
OutputSection so we can't just restrict the fix to OutputSections smaller
than 4 KiB.
The fix presented here limits D71281 to InputSectionDescriptions that
meet the following conditions:
1.) The OutputSection is bigger than the thunkSectionSpacing so adding
thunks will affect the addresses of following code.
2.) The InputSectionDescription is larger than 4 KiB. This will prevent
any assertion failures that an InputSectionDescription is < 4 KiB
in size.
We do this at ThunkSection creation time, since at this point we know that
the addresses are stable and up to date prior to adding the thunks:
assignAddresses() will have been called immediately prior to thunk
generation.
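For illustration, a minimal sketch of the two conditions above (the names are
hypothetical, not lld's actual fields):

#include <cstdint>

// Only pad thunks to a 4 KiB boundary when doing so can matter (the output
// section is large enough to need thunk spacing) and cannot trip the
// "< 4 KiB InputSectionDescription" assertion.
bool shouldRoundThunksTo4K(uint64_t outputSectionSize,
                           uint64_t thunkSectionSpacing,
                           uint64_t inputSectionDescriptionSize) {
  constexpr uint64_t kPage = 0x1000;
  return outputSectionSize > thunkSectionSpacing && // condition 1.)
         inputSectionDescriptionSize > kPage;       // condition 2.)
}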
The fix reverts the two tests affected by D71281 to their original state
as they no longer need the 4KiB size roundup. I've added simpler tests to
check for D71281 when the OutputSection size is larger than the ThunkSection
spacing.
Fixes https://github.com/ClangBuiltLinux/linux/issues/812
Differential Revision: https://reviews.llvm.org/D72344
ThunkSection contains 4-byte instructions on all targets that use
thunks. Thunks should not be used in any performance sensitive places,
and locality/cache line/instruction fetching arguments should not apply.
We use 16 bytes as the preferred function alignment for modern PowerPC cores.
In any case, 8 is not optimal.
Differential Revision: https://reviews.llvm.org/D72819
This patch is a joint work by Rui Ueyama and me based on D58102 by Xiang Zhang.
It adds Intel CET (Control-flow Enforcement Technology) support to lld.
The implementation follows the draft version of psABI which you can
download from https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI.
CET introduces a new restriction on indirect jump instructions so that
you can limit the places to which you can jump using indirect jumps.
In order to use the feature, you need to compile source files with
-fcf-protection=full.
* IBT is enabled if all input files are compiled with the flag. To force enabling IBT, pass -z force-ibt.
* SHSTK is enabled if all input files are compiled with the flag, or if -z shstk is specified.
IBT-enabled executables/shared objects have two PLT sections, ".plt" and
".plt.sec". For the details as to why we have two sections, please read
the comments.
Reviewed By: xiangzhangllvm
Differential Revision: https://reviews.llvm.org/D59780
One instance looks like a false positive:
lld/ELF/Relocations.cpp:1622:14: note: use reference type 'const std::pair<ThunkSection *, uint32_t> &' (aka 'const pair<lld::elf::ThunkSection *, unsigned int> &') to prevent copying
for (const std::pair<ThunkSection *, uint32_t> ts : isd->thunkSections)
It is not changed in this commit.
Linux powerpc discards `*(.gnu.version*)` (arch/powerpc/kernel/vmlinux.lds.S)
to suppress --orphan-handling=warn warnings in the -pie output `.tmp_vmlinux1`.
The support is simple. Just add isLive() checks to:
1) Fix an assertion in SectionBase::getPartition() called by VersionTableSection::isNeeded().
2) Suppress DT_VERSYM, DT_VERDEF, DT_VERNEED and DT_VERNEEDNUM, if the relevant section is discarded.
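For illustration, a heavily simplified sketch of the shape of those checks
(the types and members here are illustrative, not lld's actual ones):

struct Section {
  bool live = true;
  bool isLive() const { return live; }
};

struct VersionTableSection : Section {
  // Checking isLive() first means a /DISCARD/'ed .gnu.version is never
  // asked for its partition and is never emitted.
  bool isNeeded() const { return isLive() /* && other conditions */; }
};

// Likewise, the DT_VERSYM/DT_VERDEF/DT_VERNEED* tags would only be added
// when the corresponding section survived /DISCARD/.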
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D71819
GNU ld creates the synthetic section .iplt, and has a built-in linker
script that assigns .iplt to the output section .plt . There is no
output section named .iplt .
Making .iplt an output section has the benefit of making the
tricky toolchain feature stand out. Symbolizers don't have to deal with
mixed PLT entries (e.g. llvm-objdump -d incorrectly annotates such jump
targets).
On EM_PPC{,64}, .glink contains a PLT resolver and a series of jump
instructions. The 4-byte entry size makes it unnecessary to have an
alignment of 16.
Mark ppc32-gnu-ifunc.s and ppc32-gnu-ifunc-nonpreemptable.s as `XFAIL: *`.
They test IPLT on EM_PPC, which never works.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D71520
PltSection is used by both PLT and IPLT. The PLT section may have a
header while the IPLT section does not. Split off IpltSection from
PltSection to be clearer.
Unlike other targets, PPC64 cannot use the same code sequence for PLT
and IPLT. This helps make a future PPC64 patch (D71509) more isolated.
On EM_386 and EM_X86_64, when PLT is empty while IPLT is not, currently
we are inconsistent whether the PLT header is conceptually attached to
in.plt or in.iplt . Consistently attaching the header to in.plt makes
the -z retpolineplt logic simpler. It also makes `jmp` point to an
aesthetically better place for non-retpolineplt cases.
Reviewed By: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D71519
This change only affects EM_386. relOff can be computed from `index`
easily, so it is unnecessarily passed as a parameter.
Both in.plt and in.iplt entries are written by writePLT. For in.iplt,
the instruction `push reloc_offset` will change because `index` is now
different. Fortunately, this does not matter because `push; jmp` is only
used by PLT. IPLT does not need the code sequence.
Reviewed By: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D71518
In some edge cases, such as Chromium compiled with full instrumentation, we
have a .text section over twice the size of the maximum branch range and
the instrumented code generation containing many examples of the erratum
sequence. The combination of Thunks and many erratum sequences causes
finalizeAddressDependentContent() to not converge. We end up with:
start
- Thunk Creation (disturbs addresses after thunks, creating more patches)
- Patch Creation (disturbs addresses after patches, creating more thunks)
- goto start
In most images with few thunks and patches the mutual disturbance does not
cause convergence problems. As the .text size and number of patches go up
the risk increases.
A way to prevent the thunk creation from interfering with patch creation is
to round up the size of the thunks to a 4KiB boundary when the
erratum patch is enabled. As the erratum sequence only triggers when an
instruction sequence starts at 0xff8 or 0xffc modulo (4 KiB) by making the
thunks not affect addresses modulo (4 KiB) we prevent thunks from
interfering with the patch.
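For illustration, a minimal sketch of the rounding (names are illustrative;
the real change is in the AArch64 errata patching path):

#include <cstdint>

// While errata patching is enabled, report thunk section sizes rounded up to
// 4 KiB so that inserting thunks never changes any address modulo 4 KiB, and
// therefore never creates or removes a 0xff8/0xffc (mod 4 KiB) trigger.
uint64_t thunkSectionSize(uint64_t rawSize, bool errataPatchingEnabled) {
  constexpr uint64_t kPage = 0x1000;
  if (errataPatchingEnabled)
    return (rawSize + kPage - 1) & ~(kPage - 1);
  return rawSize;
}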
The patches themselves could be aggregated in the same way that Thunks are
within ThunkSections and we could round up the size in the same way. This
would reduce the number of patches created in a .text section size >
128 MiB but would not likely help convergence problems.
Differential Revision: https://reviews.llvm.org/D71281
fixes (remaining part of) pr44071, other part in D71242
Fixes PPC64 part of PR40438
// clang -target ppc64le -c a.cc
// .text.unlikely may be placed in a separate output section (via -z keep-text-section-prefix)
// The distance between bar in .text.unlikely and foo in .text may be larger than 32MiB.
static void foo() {}
__attribute__((section(".text.unlikely"))) static int bar() { foo(); return 0; }
__attribute__((used)) static int dummy = bar();
This patch makes such thunks with addends work for PPC64.
AArch64: .text -> `__AArch64ADRPThunk_ (adrp x16, ...; add x16, x16, ...; br x16)` -> target
PPC64: .text -> `__long_branch_ (addis 12, 2, ...; ld 12, ...(12); mtctr 12; bctr)` -> target
AArch64 can leverage ADRP to jump to the target directly, but PPC64
needs to load an address from .branch_lt . Before Power ISA v3.0, the
PC-relative ADDPCIS was not available. .branch_lt was invented to work
around the limitation.
Symbol::ppc64BranchltIndex is replaced by
PPC64LongBranchTargetSection::entry_index, which takes addends into
consideration.
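For illustration, a minimal sketch of the bookkeeping (simplified types;
lld's real container and naming differ):

#include <cstdint>
#include <map>
#include <utility>

struct Symbol; // stand-in for lld's Symbol

// .branch_lt entries are identified by (symbol, addend) instead of a
// per-symbol index, so branches to `foo` and `foo+addend` get distinct
// long-branch slots.
struct LongBranchTargets {
  std::map<std::pair<const Symbol *, int64_t>, uint32_t> entryIndex;

  uint32_t getOrCreate(const Symbol *sym, int64_t addend) {
    uint32_t next = (uint32_t)entryIndex.size();
    std::pair<const Symbol *, int64_t> key{sym, addend};
    return entryIndex.try_emplace(key, next).first->second;
  }
};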
The tests are rewritten: ppc64-long-branch.s tests -no-pie and
ppc64-long-branch-pi.s tests -pie and -shared.
Reviewed By: sfertile
Differential Revision: https://reviews.llvm.org/D70937
The .note.gnu.property SHT_NOTE sections on AArch64 (a 64-bit target)
should have alignment 8 to more closely match the binutils implementation
where alignment is 4-bytes on 32-bit machines and 8-bytes on 64-bit
machines.
Previously LLD was using 4 for both 32-bit and 64-bit machines.
Differential Revision: https://reviews.llvm.org/D70962
The logic added in r372781 caused ARMExidxSyntheticSection::addSection()
to return false for exidx sections without a link order dep that passed
isValidExidxSectionDep(). This included exidx sections for empty functions. As
a result, such exidx sections would be treated like ordinary sections and
would end up being laid out before the ARMExidxSyntheticSection, most likely in
the wrong order relative to the exidx entries in the ARMExidxSyntheticSection,
breaking the orderedness invariant relied upon by unwinders. Fix this by
simply discarding such sections.
Differential Revision: https://reviews.llvm.org/D69744
This makes it clear `ELF/**/*.cpp` files define things in the `lld::elf`
namespace and simplifies `elf::foo` to `foo`.
Reviewed By: atanasyan, grimar, ruiu
Differential Revision: https://reviews.llvm.org/D68323
llvm-svn: 373885
Our .interp section is not a SyntheticSection. As a result, it terminates the
loop in removeUnusedSyntheticSections(). This has at least two consequences:
- The synthetic .bss and .bss.rel.ro sections are always present in
dynamically linked executables, even when they are not needed.
- The synthetic .ARM.exidx (and possibly other) sections are always present
in partitions other than the last one, even when not needed.
.ARM.exidx in particular is problematic because it assumes that its
list of code sections is non-empty in getLinkOrderDep(), which can
lead to a crash if the partition does not have any code sections.
Fix these problems by moving the creation of the .interp sections to the
top of createSyntheticSections(). While here, make the code a little less
error-prone by changing the add() lambdas to take a SyntheticSection instead
of an InputSectionBase.
Differential Revision: https://reviews.llvm.org/D68256
llvm-svn: 373347
When /DISCARD/ is used on an input section, that input section may have
a .ARM.exidx metadata section that depends on it. As the discard handling
comes after the .ARM.exidx synthetic section is created we need to make
sure that we account for the case where the .ARM.exidx output section
should be removed because there are no more live input sections.
Differential Revision: https://reviews.llvm.org/D67848
llvm-svn: 372781
Fixes PR38748
mergeSections() calls getOutputSectionName() to get output section
names. Two MergeInputSections may be merged even if they are made
different by SECTIONS commands.
This patch moves mergeSections() after processSectionCommands() and
addOrphanSections() to fix the issue. The new pass is renamed to
OutputSection::finalizeInputSections().
processSectionCommands() and addOrphanSections() are changed to add
sections to InputSectionDescription::sectionBases.
finalizeInputSections() merges MergeInputSections and migrates
`sectionBases` to `sections`.
For the -r case, we drop an optimization that tries keeping sh_entsize
non-zero. This is for the simplicity of addOrphanSections(). The
updated merge-entsize2.s reflects the change.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D67504
llvm-svn: 372734
Fixes PR43214.
The size of SHT_RELR may oscillate between 2 numbers (see D53003 for a
similar --pack-dyn-relocs=android issue). This can happen if the shrink
of SHT_RELR causes it to take more words to encode relocation offsets
(this can happen with thunks or segments with overlapping p_offset
ranges), and the expansion of SHT_RELR causes it to take fewer words to
encode relocation offsets.
To avoid the issue, add padding 1s to the end of the relocation section
if its size would decrease. Trailing 1s do not decode to more relocations.
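For illustration, a minimal sketch of the padding trick (container and names
are illustrative): in SHT_RELR an odd word is a bitmap whose set bits (above
the low marker bit) each encode one relocation, so a word equal to 1 encodes
nothing and is safe padding.

#include <cstddef>
#include <cstdint>
#include <vector>

// Keep the section at its previous size by appending 1s; trailing 1s do not
// decode to any additional relocations.
void padRelrToOldSize(std::vector<uint64_t> &relrWords, std::size_t oldWordCount) {
  while (relrWords.size() < oldWordCount)
    relrWords.push_back(1);
}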
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D67164
llvm-svn: 370923
This essentially reverts the code change of D63132 and switches to a simpler approach.
In an executable/shared object, st_shndx of a symbol can be:
1) SHN_UNDEF: undefined symbol (or canonical PLT)
2) SHN_ABS: absolute symbol
3) any other value (usually a regular section index) represents a relative symbol.
The actual value does not matter.
Many ld.so (musl; FreeBSD rtld-elf on all archs except MIPS) even treat 2) and 3)
the same. If .sdata does not exist, it does not matter what value/section
__global_pointer$ has, as long as it is relative (otherwise there will be a pedantic
lld error. See D63132). Just set the st_shndx arbitrarily to 1.
Dummy st_shndx=1 may be used by __rela_iplt_start, linker-script-defined symbols outside a section, __dso_handle, etc.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D66798
llvm-svn: 370172
EhFrameSection::addSection checks liveness of FDE early. This makes it
infeasible to move combineEhSections() before ICF.
Postpone the check to EhFrameSection::finalizeContents(). This is what
ARMExidxSyntheticSection does and it will make a subsequent patch D66717
simpler.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D66727
llvm-svn: 369890
This fixed a bug in r369488. When config->isRela is false, i->r_addend
is not initialized (see encodeDynamicReloc). So we should check
config->isRela before accessing r_addend:
- if (j - i < 3 || i->r_addend)
+ if (j - i < 3 || (config->isRela && i->r_addend != 0))
Original description:
Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.
The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.
Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in runtime linker for several executables
that I tested.
As a simple correctness check, I've also built x86_64 Android and booted
successfully.
Differential Revision: https://reviews.llvm.org/D65242
Patch by Vic Yang
llvm-svn: 369507
Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.
The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.
Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in runtime linker for several executables
that I tested.
As a simple correctness check, I've also built x86_64 Android and booted
successfully.
Differential Revision: https://reviews.llvm.org/D66491
Patch by Vic Yang!
llvm-svn: 369488
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
Differential revision: https://reviews.llvm.org/D66259
llvm-svn: 368936
This ensures these errors produce a non-zero exit and improves the
context (providing the name of the input object and section being
parsed).
llvm-svn: 368378
The combineEhSections runs, by design, before processSectionCommands so
that input exception sections like .ARM.exidx and .eh_frame are not assigned
to OutputSections. Unfortunately if /DISCARD/ removes InputSections that
have associated .ARM.exidx sections without discarding the .ARM.exidx
synthetic section then we will end up crashing when trying to sort the
InputSections in ascending address order.
We fix this by filtering out the sections that have been discarded prior
to processing the InputSections in finalizeContents().
fixes pr42890
Differential Revision: https://reviews.llvm.org/D65759
llvm-svn: 368041
We prioritize non-* wildcards over VER_NDX_LOCAL/VER_NDX_GLOBAL "*".
This patch generalizes the rule to "*" of other versions and thus fixes PR40176.
I don't feel strongly about this GNU linkers' behavior but the
generalization simplifies code.
Delete `config->defaultSymbolVersion` which was used to special case
VER_NDX_LOCAL/VER_NDX_GLOBAL "*".
In `SymbolTable::scanVersionScript`, custom versions are handled the same
way as VER_NDX_LOCAL/VER_NDX_GLOBAL. So merge
`config->versionScript{Locals,Globals}` into `config->versionDefinitions`.
Overall this seems to simplify the code.
In `SymbolTable::assign{Exact,Wildcard}Versions`,
`sym->verdefIndex == config->defaultSymbolVersion` is changed to
`verdefIndex == UINT32_C(-1)`.
This allows us to give duplicate assignment diagnostics for
`{ global: foo; };` `V1 { global: foo; };`
In test/linkerscript/version-script.s:
vs_index of an undefined symbol changes from 0 to 1. This doesn't matter (arguably 1 is better because the binding is STB_GLOBAL) because vs_index of an undefined symbol is ignored.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D65716
llvm-svn: 367869
An R_*_IRELATIVE represents the address of a STT_GNU_IFUNC symbol
(redirected at runtime) which is non-preemptable and is not associated
with a canonical PLT (associated with a symbol with a section index of
SHN_UNDEF but a non-zero st_value).
.rel[a].plt [DT_JMPREL, DT_JMPREL+DT_JMPRELSZ) contains relocations that
can be lazily resolved. R_*_IRELATIVE are always eagerly resolved, so
conceptually they do not belong to .rela.plt. "iplt" is mostly a misnomer.
glibc powerpc and powerpc64 do not resolve R_*_IRELATIVE if they are in .rela.plt.
// a.o - synthesized PLT call stub has an R_*_IRELATIVE
void ifunc(); int main() { ifunc(); }
// b.o
static void real() {}
asm (".type ifunc, %gnu_indirect_function");
void *ifunc() { return &real; }
The lld-linked executable crashes. ld.bfd places R_*_IRELATIVE in
.rela.dyn and the executable works.
glibc i386, x86_64, and aarch64 have logic
(glibc/sysdeps/*/dl-machine.h:elf_machine_lazy_rel) to eagerly resolve
R_*_IRELATIVE in .rel[a].plt so the lld-linked executable works.
Move R_*_IRELATIVE from .rel[a].plt to .rel[a].dyn to fix the crashes on
glibc powerpc/powerpc64. This also helps simplify the ifunc
implementation in FreeBSD rtld-elf powerpc64.
If --pack-dyn-relocs=android[+relr] is specified, the Android packed
dynamic relocation format is used for .rela.dyn. We cannot name
in.relaIplt ".rela.dyn" because the output section will have mixed
formats. This can be improved in the future.
Reviewed By: pcc
Differential Revision: https://reviews.llvm.org/D65651
llvm-svn: 367745
This patch does the same thing as r365595 to other subdirectories,
which completes the naming style change for the entire lld directory.
Differential Revision: https://reviews.llvm.org/D64473
llvm-svn: 365730
This patch is mechanically generated by clang-llvm-rename tool that I wrote
using Clang Refactoring Engine just for creating this patch. You can see the
source code of the tool at https://reviews.llvm.org/D64123. There's no manual
post-processing; you can generate the same patch by re-running the tool against
lld's code base.
Here is the main discussion thread to change the LLVM coding style:
https://lists.llvm.org/pipermail/llvm-dev/2019-February/130083.html
In the discussion thread, I proposed we use lld as a testbed for variable
naming scheme change, and this patch does that.
I chose to rename variables so that they are in camelCase, just because that
is a minimal change to make variables to start with a lowercase letter.
Note to downstream patch maintainers: if you are maintaining a downstream lld
repo, just rebasing ahead of this commit would cause massive merge conflicts
because this patch essentially changes every line in the lld subdirectory. But
there's a remedy.
clang-llvm-rename tool is a batch tool, so you can rename variables in your
downstream repo with the tool. Given that, here is how to rebase your repo to
a commit after the mass renaming:
1. rebase to the commit just before the mass variable renaming,
2. apply the tool to your downstream repo to mass-rename variables locally, and
3. rebase again to the head.
Most changes made by the tool should be identical for a downstream repo and
for the head, so at the step 3, almost all changes should be merged and
disappear. I'd expect that there would be some lines that you need to merge by
hand, but that shouldn't be too many.
Differential Revision: https://reviews.llvm.org/D64121
llvm-svn: 365595
The difference from D63432/r365015 is that this patch does not place
SHF_STRINGS sections with different alignments into the same
MergeSyntheticSection. Doing that would:
(1) create unnecessary padding and thus waste space.
Add a test tail-merge-string-align2.s to check no extra padding is created.
(2) make some input sections unaligned when tail merge (-O2) is enabled.
The alignment of MergeTailAlignment::Builder was out of sync in D63432.
MOVAPS on such unaligned strings can raise SIGSEGV.
This should fix PR42289: the Linux kernel has a use case that input
files have .rodata.cst32 sections with different alignments. The
expectation (and what ld.bfd and gold do) is that in the -r link, there
is only one .rodata.cst32 (SHF_MERGE sections with different alignments
can be combined), but lld currently creates one for each different
alignment.
The current merging strategy:
1) Group SHF_MERGE sections by (name, sh_flags, sh_entsize and
sh_addralign). Merging is performed among a group, even if -O0 is specified.
2) Create one output section for each group. This is a special case in
addInputSec().
This patch changes 1) to:
1) Group SHF_MERGE sections by (name, sh_flags, sh_entsize).
Merging is performed among a group, even if -O0 is specified.
We will thus create just one .rodata.cst32 . This also improves merging
efficiency when sections with the same name but different alignments are
combined.
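For illustration, a minimal sketch of the new grouping key (simplified,
hypothetical types; lld's real data structures differ):

#include <cstdint>
#include <map>
#include <string>
#include <tuple>
#include <vector>

struct MergeInput {
  std::string outputName;
  uint64_t flags;
  uint32_t entsize;
  uint32_t addralign; // intentionally NOT part of the key any more
};

// Bucket sections by (output name, sh_flags, sh_entsize) only, so
// .rodata.cst32 pieces with different sh_addralign land in one group and
// therefore one output section.
using MergeKey = std::tuple<std::string, uint64_t, uint32_t>;

std::map<MergeKey, std::vector<const MergeInput *>>
groupMergeSections(const std::vector<MergeInput> &inputs) {
  std::map<MergeKey, std::vector<const MergeInput *>> groups;
  for (const MergeInput &s : inputs) {
    MergeKey key{s.outputName, s.flags, s.entsize};
    groups[key].push_back(&s);
  }
  return groups;
}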
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D64200
llvm-svn: 365139
This reverts r365015.
David Zarzycki reported this change broke stage2 and stage3 tests. The
root cause is still not very clear, but I guess some SHF_MERGE sections
with the same name have different alignments. They were not merged
before but were merged after r365015.
Something that assumes address uniqueness of such mergeable data caused
the bug.
llvm-svn: 365048
This should fix PR42289: the Linux kernel has a use case that input
files have .rodata.cst32 sections with different alignments. The
expectation (and what ld.bfd and gold do) is that in the -r link, there
is only one .rodata.cst32 (SHF_MERGE sections with different alignments
can be combined), but lld currently creates one for each different
alignment.
The current merging strategy:
1) Group SHF_MERGE sections by (name, sh_flags, sh_entsize and
sh_addralign). String merging is performed among a group, even if -O0 is specified.
2) Create one output section for each group. This is a special case in
addInputSec().
This patch changes 1) to:
1) Group SHF_MERGE sections by (name, sh_flags, sh_entsize).
String merging is performed among a group, even if -O0 is specified.
We will thus create just one .rodata.cst32 . This also improves merging
efficiency when sections with the same name but different alignments are
combined.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D63432
llvm-svn: 365015
If .rela.plt is mentioned in a linker script, it might be preserved
even if it is empty. In that case, LLD created DT_JMPREL and DT_PLTGOT
dynamic tags. When the tags exist, a dynamic loader writes values into
reserved slots in .got.plt to support lazy symbol resolution.
The problem is that, in fact, the linker has not reserved that space,
and the writing may occur into the memory allocated for something else.
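For illustration, a minimal sketch of the idea behind the fix (hypothetical
names; the DT_* values are the standard ELF ones):

#include <cstdint>
#include <vector>

struct DynTag { uint64_t tag, value; };
constexpr uint64_t DT_PLTRELSZ = 2, DT_PLTGOT = 3, DT_JMPREL = 23;

// Only emit the lazy-binding tags when .rela.plt actually contains
// relocations; an empty, script-preserved .rela.plt gets no tags, so the
// loader never writes to .got.plt slots the linker did not reserve.
void addPltTags(std::vector<DynTag> &dyn, uint64_t relaPltSize,
                uint64_t relaPltAddr, uint64_t gotPltAddr) {
  if (relaPltSize == 0)
    return;
  dyn.push_back({DT_JMPREL, relaPltAddr});
  dyn.push_back({DT_PLTRELSZ, relaPltSize});
  dyn.push_back({DT_PLTGOT, gotPltAddr});
}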
Differential Revision: https://reviews.llvm.org/D63869
llvm-svn: 364639
If .sdata is absent, the linker-synthesized __global_pointer$ gets a section index of SHN_ABS.
(ld.bfd has a similar issue: binutils PR24678)
Scrt1.o may use `lla gp, __global_pointer$` to reference the symbol PC
relatively. In -pie/-shared mode, lld complains if a PC relative
relocation references an absolute symbol (SHN_ABS) but ld.bfd doesn't:
ld.lld: error: relocation R_RISCV_PCREL_HI20 cannot refer to absolute symbol: __global_pointer$
Let the reference of __global_pointer$ force creation of .sdata to
fix the problem. This is similar to _GLOBAL_OFFSET_TABLE_, which forces
creation of .got or .got.plt .
Also, change the visibility from STV_HIDDEN to STV_DEFAULT and don't
define the symbol for -shared. This matches ld.bfd, though I don't
understand why it uses STV_DEFAULT.
Reviewed By: ruiu, jrtc27
Differential Revision: https://reviews.llvm.org/D63132
llvm-svn: 363351
We create several types of synthetic sections for loadable partitions, including:
- The dynamic symbol table. This allows code outside of the loadable partitions
to find entry points with dlsym.
- Creating a dynamic symbol table also requires the creation of several other
synthetic sections for the partition, such as the dynamic table and hash table
sections.
- The partition's ELF header is represented as a synthetic section in the
combined output file, and will be used by llvm-objcopy to extract partitions.
Differential Revision: https://reviews.llvm.org/D62350
llvm-svn: 362819
Branch Target Identification (BTI) and Pointer Authentication (PAC) are
architecture features introduced in v8.5a and 8.3a respectively. The new
instructions have been added in the hint space so that binaries take
advantage of support where it exists yet still run on older hardware. The
impact of each feature is:
BTI: For executable pages that have been guarded, all indirect branches
must have a destination that is a BTI instruction of the appropriate type.
For the static linker, this means that PLT entries must have a "BTI c" as
the first instruction in the sequence. BTI is an all or nothing
property for a link unit, any indirect branch not landing on a valid
destination will cause a Branch Target Exception.
PAC: The dynamic loader encodes with PACIA the address of the destination
that the PLT entry will load from the .plt.got, placing the result in a
subset of the top-bits that are not valid virtual addresses. The PLT entry
may authenticate these top-bits using the AUTIA instruction before
branching to the destination. Use of PAC in PLT sequences is a contract
between the dynamic loader and the static linker, it is independent of
whether the relocatable objects use PAC.
BTI and PAC are independent features that can be combined. So we can have
several combinations of PLT:
- Standard with no BTI or PAC
- BTI PLT with "BTI c" as first instruction.
- PAC PLT with "AUTIA1716" before the indirect branch to X17.
- BTIPAC PLT with "BTI c" as first instruction and "AUTIA1716" before the
first indirect branch to X17.
The use of BTI and PAC in relocatable object files is encoded by feature
bits in the .note.gnu.property section in a similar way to Intel CET. There
is one AArch64 specific program property GNU_PROPERTY_AARCH64_FEATURE_1_AND
and two target feature bits defined:
- GNU_PROPERTY_AARCH64_FEATURE_1_BTI
-- All executable sections are compatible with BTI.
- GNU_PROPERTY_AARCH64_FEATURE_1_PAC
-- All executable sections have return address signing enabled.
Due to the properties of FEATURE_1_AND the static linker can tell when all
input relocatable objects have the BTI and PAC feature bits set. The static
linker uses this to enable the appropriate PLT sequence.
Neither -> standard PLT
GNU_PROPERTY_AARCH64_FEATURE_1_BTI -> BTI PLT
GNU_PROPERTY_AARCH64_FEATURE_1_PAC -> PAC PLT
Both properties -> BTIPAC PLT
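For illustration, a minimal sketch of the selection implied by the table
above (the FEATURE_1 constants are from the AArch64 ELF ABI; the surrounding
structure is illustrative only):

#include <cstdint>
#include <vector>

constexpr uint32_t GNU_PROPERTY_AARCH64_FEATURE_1_BTI = 1u << 0;
constexpr uint32_t GNU_PROPERTY_AARCH64_FEATURE_1_PAC = 1u << 1;

enum class PltKind { Standard, Bti, Pac, BtiPac };

PltKind choosePlt(const std::vector<uint32_t> &inputFeature1And,
                  bool forceBti, bool pacPlt) {
  // FEATURE_1_AND semantics: a bit survives only if every relocatable input
  // sets it.
  uint32_t andBits = inputFeature1And.empty() ? 0 : ~0u;
  for (uint32_t f : inputFeature1And)
    andBits &= f;
  bool bti = forceBti || (andBits & GNU_PROPERTY_AARCH64_FEATURE_1_BTI);
  bool pac = pacPlt || (andBits & GNU_PROPERTY_AARCH64_FEATURE_1_PAC);
  if (bti && pac) return PltKind::BtiPac;
  if (bti) return PltKind::Bti;
  if (pac) return PltKind::Pac;
  return PltKind::Standard;
}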
In addition to the .note.gnu.properties there are two new command line
options:
--force-bti : Act as if all relocatable inputs had
GNU_PROPERTY_AARCH64_FEATURE_1_BTI and warn for every relocatable object
that does not.
--pac-plt : Act as if all relocatable inputs had
GNU_PROPERTY_AARCH64_FEATURE_1_PAC. As PAC is a contract between the loader
and static linker no warning is given if it is not present in an input.
Two processor specific dynamic tags are used to communicate that a
non-standard PLT sequence is being used:
DT_AARCH64_BTI_PLT and DT_AARCH64_PAC_PLT.
Differential Revision: https://reviews.llvm.org/D62609
llvm-svn: 362793
Many -static/-no-pie/-shared/-pie applications linked against glibc or musl
should work with this patch. This also helps FreeBSD PowerPC64 to migrate
their lib32 (PR40888).
* Fix default image base and max page size.
* Support new-style Secure PLT (see below). Old-style BSS PLT is not
implemented, so it is not suitable for FreeBSD rtld now because it doesn't
support Secure PLT yet.
* Support more initial relocation types:
R_PPC_ADDR32, R_PPC_REL16*, R_PPC_LOCAL24PC, R_PPC_PLTREL24, and R_PPC_GOT16.
The addend of R_PPC_PLTREL24 is special: it decides the call stub PLT type
but it should be ignored for the computation of target symbol VA.
* Support GNU ifunc
* Support .glink used for lazy PLT resolution in glibc
* Add a new thunk type: PPC32PltCallStub that is similar to PPC64PltCallStub.
It is used by R_PPC_REL24 and R_PPC_PLTREL24.
A PLT stub used in -fPIE/-fPIC usually loads an address relative to
.got2+0x8000 (-fpie/-fpic code uses _GLOBAL_OFFSET_TABLE_ relative
addresses).
Two .got2 sections in two object files have different addresses, thus a PLT stub
can't be shared by two object files. To handle this incompatibility,
change the parameters of Thunk::isCompatibleWith to
`const InputSection &, const Relocation &`.
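For illustration, a minimal sketch of why the new signature is needed
(simplified, hypothetical types):

struct InputSection { const void *got2; }; // the caller's .got2
struct Relocation { unsigned type; };

struct Ppc32PltCallStub {
  const void *got2; // .got2 of the object file this stub was created for

  // A -fPIC call stub loads relative to .got2+0x8000, so it can only be
  // reused by a relocation whose containing section shares the same .got2.
  bool isCompatibleWith(const InputSection &caller, const Relocation &) const {
    return caller.got2 == got2;
  }
};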
PowerPC psABI specified an old-style .plt (BSS PLT) that is both
writable and executable. Linkers don't make separate RW- and RWE segments,
which makes all initially writable memory (think .data) executable.
This is a big security concern so a new PLT scheme (secure PLT) was developed to
address the security issue.
TLS will be implemented in D62940.
glibc older than ~2012 requires .rela.dyn to include .rela.plt; it cannot
handle the DT_RELA+DT_RELASZ == DT_JMPREL case correctly. A hack
(not included in this patch) in LinkerScript.cpp addOrphanSections() to
work around the issue:
if (Config->EMachine == EM_PPC) {
// Older glibc assumes .rela.dyn includes .rela.plt
Add(In.RelaDyn);
if (In.RelaPlt->isLive() && !In.RelaPlt->Parent)
In.RelaDyn->getParent()->addSection(In.RelaPlt);
}
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D62464
llvm-svn: 362721
GotEntrySize and GotPltEntrySize were added in D22288. Later, with
the introduction of wordsize() (then Config->Wordsize), they became
redundant, because there is no target that sets GotEntrySize or
GotPltEntrySize to a number different from Config->Wordsize.
Reviewed By: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D62727
llvm-svn: 362220
This change causes us to read partition specifications from partition
specification sections and split output sections into partitions according
to their reachability from partition entry points.
This is only the first step towards a full implementation of partitions. Later
changes will add additional synthetic sections to each partition so that
they can be loaded independently.
Differential Revision: https://reviews.llvm.org/D60353
llvm-svn: 361925
We currently sort dynamic relocations by (!is_relative,symbol_index).
Add r_offset as the third key. This makes `readelf -r` debugging easier
(relocations to the same symbol are ordered by r_offset).
Refactor the test combreloc.s (renamed from combrelocs.s) to check
R_X86_64_RELATIVE, and delete --expand-relocs.
The difference from the reverted D61477 is that we keep !is_relative as
the first key. In local dynamic TLS model, DTPMOD (e.g.
R_ARM_TLS_DTPMOD32 R_X86_64_DTPMOD and R_PPC{,64}_DTPMOD) may use 0 as
the symbol index.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D62141
llvm-svn: 361164
This reverts commit r361125. This linker change breaks shared libraries
in some subtle way on x86_64. (Specifically, gold segfaults when
loading the LLVMgold.so plugin linked with lldb with this patch.)
llvm-svn: 361150
Fixes PR41692.
We currently sort dynamic relocations by (!is_relative,symbol_index).
Change it to (symbol_index,r_offset). We still place relative
relocations first because R_*_RELATIVE are the only dynamic relocations
with 0 symbol index (except on MIPS, which doesn't use DT_REL[A]COUNT
anyway).
This makes `readelf -r` debugging easier (relocations to the same symbol
are ordered by r_offset).
Refactor the test combreloc.s (renamed from combrelocs.s) to check
R_X86_64_RELATIVE, and delete --expand-relocs.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D61477
llvm-svn: 361125
See D61891: llvm had a bug that might create invalid (DW_AT_low_pc,DW_AT_high_pc) pairs or range list entries due to missing DW_AT_addr_base.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D61889
llvm-svn: 360679
Make some small adjustments while touching the code: make parameters
const, use less_first(), etc.
Differential Revision: https://reviews.llvm.org/D60989
llvm-svn: 358943
Summary:
We access Live and OutputOff (which may share the same memory location)
concurrently in 2 parallelForEachN loops. Separating them avoids subtle
data races like D41884/PR35788. This patch places Live and Hash
together.
2 reasons this is appealing:
1) Hash is immutable. Live is almost read-only - only written once in MarkLive.cpp where
Hash is not accessed
2) we already discard low bits of Hash to decide ShardID. It doesn't
matter much if we reduce the 32-bit Hash to 31 bits.
For a huge internal clang -O3 executable (1.6GiB),
`Strings` in StringTableBuilder::finalizeStringTable contains at most 310253 elements.
The expected number of pair-wise collisions 2^(-31) * C(310253,2) ~= 22.41 is too small to have a negative impact on performance.
Actually, my benchmark shows a minor performance improvement.
Differential Revision: https://reviews.llvm.org/D60765
llvm-svn: 358645
With partitions, each partition should have the same build id. This means
that the build id needs to be only computed once, otherwise we will end up
with different build ids in each partition as a result of the file contents
changing. This change moves the computation of the build id into Writer so
that it only happens once.
Differential Revision: https://reviews.llvm.org/D60342
llvm-svn: 358536
The typo was introduced to llvm MC in rL204769 (fixed in rL358247) and then to lld.
Also, for relocatable-many-sections.s, the size of .symtab changed at some point and the formula needs an update.
llvm-svn: 358248
For partitions I intend to use the same set of version indexes in
each partition for simplicity. Since each partition will need its own
VersionNeedSection this will require moving the verneed tracking out of
VersionNeedSection. The way I've done this is to move most of the tracking
into SharedFile. What will eventually become the per-partition tracking
still lives in VersionNeedSection.
As a bonus the code gets a little simpler and more consistent with how we
handle verdef.
Differential Revision: https://reviews.llvm.org/D60307
llvm-svn: 357926
And rename the function to combineEhSections(). This makes the processing
of .ARM.exidx even more similar to .eh_frame and means that we can avoid an
additional loop over InputSections.
Differential Revision: https://reviews.llvm.org/D60026
llvm-svn: 357417
Summary:
Some synthetic sections can be empty while still being needed, thus they
can't be removed by removeUnusedSyntheticSections(). Rename this member
function to the more appropriate isNeeded(), which has the opposite meaning.
No functional change intended.
Reviewers: ruiu, espindola
Reviewed By: ruiu
Subscribers: jhenderson, grimar, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59982
llvm-svn: 357377
Recommit r356666 with fixes for a buildbot failure, as well as handling for
--emit-relocs, for which we decide not to emit any relocation sections as the
table is already position independent and an offline tool can deduce the
relocations.
Instead of creating extra Synthetic .ARM.exidx sections to account for
gaps in the table, create a single .ARM.exidx SyntheticSection that can
derive the contents of the gaps from a sorted list of the executable
InputSections. This has the benefit of moving the ARM specific code for
SyntheticSections in SHF_LINK_ORDER processing and the table merging code
into the ARM specific SyntheticSection. This also makes it easier to create
EXIDX_CANTUNWIND table entries for executable InputSections that don't
have an associated .ARM.exidx section.
Fixes pr40277
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 357160
Summary:
This should address remaining issues discussed in PR36555.
Currently R_GOT*_FROM_END are exclusively used by x86 and x86_64 to
express relocations types relative to the GOT base. We have
_GLOBAL_OFFSET_TABLE_ (GOT base) = start(.got.plt) but end(.got) !=
start(.got.plt)
This can have problems when _GLOBAL_OFFSET_TABLE_ is used as a symbol, e.g.
glibc dl_machine_dynamic assumes _GLOBAL_OFFSET_TABLE_ is start(.got.plt),
which is not true.
extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
return _GLOBAL_OFFSET_TABLE_[0]; // R_X86_64_GOTPC32
In this patch, we
* Change all GOT*_FROM_END to GOTPLT* to fix the problem.
* Add HasGotPltOffRel to denote whether .got.plt should be kept even if
the section is empty.
* Simplify GotSection::empty and GotPltSection::empty by setting
HasGotOffRel and HasGotPltOffRel according to GlobalOffsetTable early.
The change of R_386_GOTPC makes X86::writePltHeader simpler as we don't
have to compute the offset start(.got.plt) - Ebx (it is constant 0).
We still diverge from ld.bfd (at least in most cases) and gold in that
.got.plt and .got are not adjacent, but the advantage doing that is
unclear.
Reviewers: ruiu, sivachandra, espindola
Subscribers: emaste, mehdi_amini, arichardson, dexonsmith, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59594
llvm-svn: 356968
Previously, `Entries` contains pairs of symbols and their indices.
The indices are always 0, x, 2x, 3x, ..., where x is the size of
a relocation entry. We don't have to store those values because we can
compute them when we consume them.
llvm-svn: 356812
There is a reproducible buildbot failure (segfault) on the 2 stage
clang-cmake-armv8-lld bot. Reverting while I investigate.
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 356684
Instead of creating extra Synthetic .ARM.exidx sections to account for
gaps in the table, create a single .ARM.exidx SyntheticSection that can
derive the contents of the gaps from a sorted list of the executable
InputSections. This has the benefit of moving the ARM specific code for
SyntheticSections in SHF_LINK_ORDER processing and the table merging code
into the ARM specific SyntheticSection. This also makes it easier to create
EXIDX_CANTUNWIND table entries for executable InputSections that don't
have an associated .ARM.exidx section.
Fixes pr40277
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 356666
Summary:
This implements Rui Ueyama's idea in PR39044.
I've checked that ld.bfd and gold do not have the power-of-2 requirement
and do not require sh_entsize to be a multiple of sh_align.
Now on the updated test merge-entsize.s, all the 3 linkers happily
create .rodata that is not 3-byte aligned.
This has a use case in Linux arch/x86/crypto/sha512-avx2-asm.S
It uses sh_entsize of 640, which is not a power of 2.
See https://github.com/ClangBuiltLinux/linux/issues/417
Reviewers: ruiu, espindola
Reviewed By: ruiu
Subscribers: nickdesaulniers, E5ten, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59478
llvm-svn: 356428
This does not appear to be necessary because StringTableSection does not
need to be finalized, which also means that we can remove the call to
finalizeSynthetic on .dynstr.
Differential Revision: https://reviews.llvm.org/D59240
llvm-svn: 355977
We're going to need a separate VersionNeedSection for each partition, and
the partition data structure won't be templated.
With this the VersionTableSection class no longer needs ELFT, so detemplate it.
Differential Revision: https://reviews.llvm.org/D58808
llvm-svn: 355478
This lets us remove the special case from Writer::writeSections(), and also
fixes a bug where .eh_frame_hdr isn't necessarily written in the correct
order if a linker script moves .eh_frame and .eh_frame_hdr into the same
output section.
Differential Revision: https://reviews.llvm.org/D58795
llvm-svn: 355153
RelocationBaseSection is not used in -r links, so Config->Relocatable will
always be false.
Differential Revision: https://reviews.llvm.org/D58489
llvm-svn: 354607
The patch solves two tasks:
1. MIPS ABI allows to mix regular and microMIPS code and perform
cross-mode jumps. The linker needs to detect such cases and replace
jump/branch instructions with their cross-mode equivalents.
2. Other tools like dynamic linkers need to recognize cases when dynamic
table entries, the e_entry field of an ELF header etc. point to a microMIPS
symbol. The linker should provide such information.
The first task is implemented in the `MIPS<ELFT>::relocateOne()` method.
New routine `fixupCrossModeJump` detects ISA mode change, checks and
replaces an instruction.
The main problem is how to recognize that a relocation target is a microMIPS
symbol. For absolute and section symbols the compiler or assembler sets the
least-significant bit of the symbol's value or of the sum of the symbol's
value and addend, and this bit tells the linker that the target is microMIPS
code. For global symbols the compiler cannot do the same trick because other
tools like, for example, a disassembler want to know the actual position of
the symbol. So the compiler sets the STO_MIPS_MICROMIPS flag in the
`st_other` field.
In the `MIPS<ELFT>::relocateOne()` method we have only a symbol's value and
cannot access any of the symbol's attributes. To pass the type of the symbol
(regular/microMIPS) to that routine, as well as to other places where we
write a symbol value as-is (.dynamic section, `Elf_Ehdr::e_entry` field
etc.), we set, when necessary, the least-significant bit in the `getSymVA`
function.
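For illustration, a minimal sketch of that idea (hypothetical helper, not the
real getSymVA):

#include <cstdint>

// Places that emit a symbol value as-is (dynamic table entries, e_entry and
// so on) receive the microMIPS marker in the least-significant bit.
uint64_t symbolVAForOutput(uint64_t rawVA, bool isMicroMips) {
  // isMicroMips would come from STO_MIPS_MICROMIPS in st_other (or from the
  // low bit for absolute/section symbols).
  return isMicroMips ? (rawVA | 1) : rawVA;
}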
Differential revision: https://reviews.llvm.org/D40147
llvm-svn: 354311
On PowerPC64, it is necessary to keep the LocalEntry bits in st_other,
especially when -r is used. Otherwise, when the resulting object is used
in a subsequent link, LocalEntry info will be unavailable and
functions may be called through the wrong entrypoint.
Patch by Leandro Lupori.
Differential Revision: https://reviews.llvm.org/D56782
llvm-svn: 354184
Non-GOT non-PLT relocations to non-preemptible ifuncs result in the
creation of a canonical PLT, which now takes the identity of the IFUNC
in the symbol table. This (a) ensures address consistency inside and
outside the module, and (b) fixes a bug where some of these relocations
end up pointing to the resolver.
Fixes (at least) PR40474 and PR40501.
Differential Revision: https://reviews.llvm.org/D57371
llvm-svn: 353981
With the following changes:
1) Compilation fix:
std::atomic<bool> HasStaticTlsModel = false; ->
std::atomic<bool> HasStaticTlsModel{false};
2) Adjusted the comment in code.
Initial commit message:
DF_STATIC_TLS flag indicates that the shared object or executable
contains code using a static thread-local storage scheme.
Patch checks if IE/LE relocations were used to check if the code uses
a static model. If so it sets the DF_STATIC_TLS flag.
Differential revision: https://reviews.llvm.org/D57749
----
Modified : /lld/trunk/ELF/Arch/X86.cpp
Modified : /lld/trunk/ELF/Config.h
Modified : /lld/trunk/ELF/SyntheticSections.cpp
Added : /lld/trunk/test/ELF/Inputs/i386-static-tls-model1.s
Added : /lld/trunk/test/ELF/Inputs/i386-static-tls-model2.s
Added : /lld/trunk/test/ELF/Inputs/i386-static-tls-model3.s
Added : /lld/trunk/test/ELF/Inputs/i386-static-tls-model4.s
Added : /lld/trunk/test/ELF/i386-static-tls-model.s
Modified : /lld/trunk/test/ELF/i386-tls-ie-shared.s
Modified : /lld/trunk/test/ELF/tls-dynamic-i686.s
Modified : /lld/trunk/test/ELF/tls-opt-iele-i686-nopic.s
llvm-svn: 353299
DF_STATIC_TLS flag indicates that the shared object or executable
contains code using a static thread-local storage scheme.
Patch checks if IE/LE relocations were used to check if the code uses
a static model. If so it sets the DF_STATIC_TLS flag.
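For illustration, a minimal sketch of the idea (hypothetical names; the
DF_STATIC_TLS value is the standard ELF one):

#include <cstdint>

constexpr uint64_t DF_STATIC_TLS = 0x10;

// sawStaticTlsReloc would be set while scanning relocations when an IE/LE
// TLS access is seen; the dynamic section then advertises DF_STATIC_TLS.
uint64_t computeDtFlags(uint64_t flags, bool sawStaticTlsReloc) {
  if (sawStaticTlsReloc)
    flags |= DF_STATIC_TLS;
  return flags;
}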
Differential revision: https://reviews.llvm.org/D57749
llvm-svn: 353293
Previously we were setting it to the GotPlt output section, which is
incorrect on ARM where this section is in .got. In static binaries
this can lead to sh_info being set to -1 (because there is no .got.plt)
which results in various tools rejecting the output file.
Differential Revision: https://reviews.llvm.org/D57274
llvm-svn: 352413