The parsing method did not check for reading errors and, because of that,
could easily fall into an infinite loop on invalid input.
Differential Revision: https://reviews.llvm.org/D83049
Both the .ARM.exidx and .eh_frame sections have a custom SyntheticSection
that acts as a container for the InputSections. The InputSections are added
to the SyntheticSection prior to /DISCARD/, which limits the effect a
/DISCARD/ can have to the whole SyntheticSection. In the majority of cases
this is sufficient as it is not common to discard subsets of the
InputSections. The Linux kernel has one such script, which contains
something like:
/DISCARD/ : { *(.ARM.exidx.exit.text) *(.ARM.extab.exit.text) ... }
The .ARM.exidx.exit.text sections are not discarded because the InputSections
have already been transferred to the SyntheticSection. The
*(.ARM.extab.exit.text) sections have not been transferred, so they are
discarded. When we come to write out the .ARM.exidx
sections, the dangling references from .ARM.exidx.exit.text to
.ARM.extab.exit.text currently cause relocation out of range errors, but
could as easily cause a fatal error message if we check for dangling
references at relocation time.
This patch attempts to respect the /DISCARD/ command by running it on the
.ARM.exidx InputSections stored in the SyntheticSection.
The .eh_frame section is in theory affected by the same problem, but I don't
think a dangling reference problem can occur with these sections.
Fixes the remaining part of pr44824.
Differential Revision: https://reviews.llvm.org/D79687
Essentially takes the lld/Common/Threads.h wrappers and moves them to
the llvm/Support/Parallel.h algorithm header.
The changes are:
- Remove policy parameter, since all clients use `par`.
- Rename the methods to `parallelSort` etc to match LLVM style, since
they are no longer C++17 pstl compatible.
- Move algorithms from llvm::parallel:: to llvm::, since they have
"parallel" in the name and are no longer overloads of the regular
algorithms.
- Add range overloads
- Use the sequential algorithm directly when 1 thread is requested
(skips task grouping)
- Fix the index type of parallelForEachN to size_t. Nobody in LLVM was
  using any other index type, and it made overload resolution hard for
for_each_n(par, 0, foo.size(), ...) because 0 is int, not size_t.
Remove Threads.h and update LLD for that.
This is a prerequisite for parallel public symbol processing in the PDB
library, which is in LLVM.
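A minimal sketch of the resulting API (a hedged illustration assuming the
post-patch names in llvm/Support/Parallel.h, not an excerpt from the patch):
```
#include "llvm/Support/Parallel.h"

#include <vector>

void demo(std::vector<int> &v) {
  // Implicitly parallel: the policy parameter is gone since all callers
  // used `par`. With one thread requested, the sequential algorithm runs
  // directly, skipping task grouping.
  llvm::parallelSort(v.begin(), v.end());
  // parallelForEachN now takes size_t indices, so passing 0 and v.size()
  // no longer confuses overload resolution.
  llvm::parallelForEachN(0, v.size(), [&](size_t i) { v[i] *= 2; });
}
```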
Reviewed By: MaskRay, aganea
Differential Revision: https://reviews.llvm.org/D79390
A linker will create .ARM.exidx sections for InputSections that don't
have them. This can cause a relocation out of range error if the
InputSection happens to be extremely far away from the other sections.
This is often the case for the vector table on older ARM CPUs, as the only
two places the table can be placed are 0 and 0xffff0000. We fix this
by removing InputSections that need a linker generated .ARM.exidx
section if that would cause an error.
Differential Revision: https://reviews.llvm.org/D79289
Fixed an error detected by msan. The size field of the .ARM.exidx synthetic
section needs to be initialized, at least to an estimate, before calling
assignAddresses, as assignAddresses uses the size field.
This was previously reverted in 1ca16fc4f5.
Differential Revision: https://reviews.llvm.org/D78422
This reverts commit f969c2aa65.
There are some msan buildbot failures on sanitizer-x86_64-linux-fast that
I need to investigate.
Differential Revision: https://reviews.llvm.org/D78422
The contents of the .ARM.exidx section must be ordered by SHF_LINK_ORDER
rules. We don't need to know the precise address for this order, but we
do need to know the relative order of sections. We have been using the
sectionIndex for this purpose; this works when the OutputSection order
has monotonically increasing virtual addresses, but it is possible to
write a linker script with non-monotonically increasing virtual addresses.
For these cases we need to evaluate the base address of the OutputSection
so that we can order the .ARM.exidx sections properly.
This change moves the finalisation of .ARM.exidx until after the first
call to assignAddresses. This permits us to sort on virtual address, which
is linker script safe. It also permits a fix for part of pr44824, where
we generate a .ARM.exidx section for the vector table when that table is so
far away that it is out of range of the .ARM.exidx section. This fix will come
in a follow up patch.
Differential Revision: https://reviews.llvm.org/D78422
--no-threads is a name copied from gold.
gold has --no-thread, --thread-count and several other --thread-count-*.
There is a need to customize the number of threads (e.g. running several
lld processes concurrently, or customizing the number of LTO threads).
Having a single --threads=N is a straightforward replacement of gold's
--no-threads + --thread-count.
--no-threads is used rarely. So just delete --no-threads instead of
keeping it for compatibility for a while.
If --threads= is specified (ELF,wasm; COFF /threads: is similar),
--thinlto-jobs= defaults to --threads=,
otherwise all available hardware threads are used.
There is currently no way to override --threads={1,2,...}. Whether we
should use --threads=all is still under debate.
Reviewed By: rnk, aganea
Differential Revision: https://reviews.llvm.org/D76885
On an internal target, maximum resident set size (time -f '%M', in KB):
* Before D74773: 18275680
* After D74773: 22088964
This patch restores the status before D74773.
LLD already implements a caching mechanism for parsed debug info:
llvm::call_once(initDwarfLine, [this]() { initializeDwarf(); });
though it is not used in all places.
I need that patch for implementing the "Remove obsolete debug info" feature
(D74169). But this caching mechanism is useful by itself, and I think it
would be good to use it independently of the "Remove obsolete debug info"
feature. So this patch replaces in-place creation of DWARFContext with
its cached version.
Depends on D74308
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D74773
Summary:
LLD has a workaround for the times when SectionIndex was not passed properly:
LT->getFileLineInfoForAddress(
    S->getOffsetInFile() + Offset, nullptr,
    DILineInfoSpecifier::FileLineInfoKind::AbsoluteFilePath, Info));
S->getOffsetInFile() was added to differentiate offsets between
various sections. Now that SectionIndex is properly specified, it is
no longer necessary to use the getOffsetInFile() workaround.
See https://reviews.llvm.org/D58194, https://reviews.llvm.org/D58357.
This patch removes the getOffsetInFile() workaround.
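A hedged sketch of the call after the cleanup (local names like `lt`,
`offset`, and `sectionIndex` are illustrative, not lld's exact code): the
section index now travels inside object::SectionedAddress, so no
getOffsetInFile() bias is added to the address.
```
#include "llvm/DebugInfo/DIContext.h"
#include "llvm/DebugInfo/DWARF/DWARFDebugLine.h"
#include "llvm/Object/ObjectFile.h"

using namespace llvm;

// Illustrative wrapper: SectionIndex disambiguates sections by itself.
bool lookupLine(const DWARFDebugLine::LineTable *lt, uint64_t offset,
                uint64_t sectionIndex, DILineInfo &info) {
  return lt->getFileLineInfoForAddress(
      {offset, sectionIndex}, /*CompDir=*/nullptr,
      DILineInfoSpecifier::FileLineInfoKind::AbsoluteFilePath, info);
}
```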
Reviewers: ruiu, grimar, MaskRay, espindola
Reviewed By: grimar, MaskRay
Subscribers: emaste, arichardson, llvm-commits
Tags: #llvm, #lld
Differential Revision: https://reviews.llvm.org/D75636
Summary:
Extracted from D74773. Currently, errors that happen while parsing
debug info are reported as errors. The DebugInfoDWARF library treats such
errors as "recoverable errors". This patch makes debug info errors be
reported as warnings, to match the DebugInfoDWARF approach.
Reviewers: ruiu, grimar, MaskRay, jhenderson, espindola
Reviewed By: MaskRay, jhenderson
Subscribers: emaste, aprantl, arichardson, arphaman, llvm-commits
Tags: #llvm, #debug-info, #lld
Differential Revision: https://reviews.llvm.org/D75234
Summary:
Generate a PAC-protected PLT only when "-z pac-plt" is passed to the
linker. The GNU toolchain also generates it only when explicitly requested[1].
When pac-plt is requested, set the GNU_PROPERTY_AARCH64_FEATURE_1_PAC
note even when not all functions are compiled with PAC, but issue a warning.
Harmonize the warning style for BTI/PAC/IBT.
Generate a BTI-protected PLT in case of "-z force-bti".
[1] https://www.sourceware.org/ml/binutils/2019-03/msg00021.html
Reviewers: peter.smith, espindola, MaskRay, grimar
Reviewed By: peter.smith, MaskRay
Subscribers: tatyana-krasnukha, emaste, arichardson, kristof.beyls, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D74537
The goal of this patch is to maximize CPU utilization on multi-socket or high core count systems, so that parallel computations such as LLD/ThinLTO can use all hardware threads in the system. Before this patch, on Windows, a maximum of 64 hardware threads could be used at most, in some cases dispatched only on one CPU socket.
== Background ==
Windows doesn't have a flat cpu_set_t like Linux. Instead, it projects hardware CPUs (or NUMA nodes) to applications through a concept of "processor groups". A "processor" is the smallest unit of execution on a CPU: a hyper-thread if SMT is active, a core otherwise. There's a limit of 32 processors on older 32-bit versions of Windows, which was later raised to 64 processors with 64-bit versions of Windows. This limit comes from the affinity mask, which historically is the size of a pointer (sizeof(void*)). Consequently, the concept of "processor groups" was introduced for dealing with systems with more than 64 hyper-threads.
By default, the Windows OS assigns only one "processor group" to each starting application, in a round-robin manner. If the application wants to use more processors, it needs to programmatically enable it, by assigning threads to other "processor groups". This also means that affinity cannot cross "processor group" boundaries; one can only specify a "preferred" group on start-up, but the application is free to allocate more groups if it wants to.
This creates a peculiar situation, where newer CPUs like the AMD EPYC 7702P (64-cores, 128-hyperthreads) are projected by the OS as two (2) "processor groups". This means that by default, an application can only use half of the cores. This situation could only get worse in the years to come, as dies with more cores will appear on the market.
== The problem ==
The heavyweight_hardware_concurrency() API was introduced so that only *one hardware thread per core* was used. Once that API returns, that original intention is lost; only the number of threads is retained. Consider a situation, on Windows, where the system has 2 CPU sockets, 18 cores each, each core having 2 hyper-threads, for a total of 72 hyper-threads. Both heavyweight_hardware_concurrency() and hardware_concurrency() currently return 36, because on Windows they are simply wrappers over std::thread::hardware_concurrency() -- which can only return processors from the current "processor group".
== The changes in this patch ==
To solve this situation, we capture (and retain) the initial intention until the point of usage, through a new ThreadPoolStrategy class. The decision about the number of threads to use is deferred as late as possible, to the moment when the std::threads are created (ThreadPool in the case of ThinLTO).
When using hardware_concurrency(), setting ThreadCount to 0 now means using all possible hardware CPU (SMT) threads. Providing a ThreadCount above the maximum number of threads has no effect; the maximum is used instead.
heavyweight_hardware_concurrency() is similar to hardware_concurrency(), except that only one thread per hardware *core* will be used.
When LLVM_ENABLE_THREADS is OFF, the threading APIs will always return 1, to ensure any caller loops will be exercised at least once.
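A minimal sketch of the resulting API (a hedged illustration assuming the
post-patch names in llvm/Support/Threading.h and llvm/Support/ThreadPool.h):
```
#include "llvm/Support/ThreadPool.h"
#include "llvm/Support/Threading.h"

int main() {
  // One thread per physical core, spanning all Windows processor groups.
  llvm::ThreadPool heavyPool(llvm::heavyweight_hardware_concurrency());

  // ThreadCount 0 means "use every hardware (SMT) thread"; a value above
  // the maximum is clamped to the maximum.
  llvm::ThreadPool fullPool(llvm::hardware_concurrency(0));

  fullPool.async([] { /* work */ });
  fullPool.wait();
  return 0;
}
```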
Differential Revision: https://reviews.llvm.org/D71775
This adds some LLD-specific scopes and picks up optimisation scopes
via LTO/ThinLTO. Makes use of TimeProfiler multi-thread support added in
77e6bb3c.
Differential Revision: https://reviews.llvm.org/D71060
-fno-pie produces a pair of non-GOT-non-PLT relocations R_PPC_ADDR16_{HA,LO} (R_ABS) referencing external
functions.
```
lis 3, func@ha
la 3, func@l(3)
```
In a -no-pie/-pie link, if func is not defined in the executable, a canonical PLT entry (st_value>0, st_shndx=0) will be needed.
References to func in shared objects will be resolved to this address.
-fno-pie -pie should fail with "can't create dynamic relocation ... against ...", so we just need to think about -no-pie.
On x86: the PLT entry passes the JMP_SLOT offset to the rtld PLT resolver.
On x86-64: the PLT entry passes the JUMP_SLOT index to the rtld PLT resolver.
On ARM/AArch64: the PLT entry passes &.got.plt[n]. The PLT header passes &.got.plt[fixed-index]. The rtld PLT resolver can compute the JUMP_SLOT index from the two addresses.
For these targets, the canonical PLT entry can just reuse the regular PLT entry (in PltSection).
On PPC32: PltSection (.glink) consists of `b PLTresolve` instructions and `PLTresolve`. The rtld PLT resolver depends on r11 having been set up to the .plt (GotPltSection) entry.
On PPC64 ELFv2: PltSection (.glink) consists of `__glink_PLTresolve` and `bl __glink_PLTresolve`. The rtld PLT resolver depends on r12 having been set up to the .plt (GotPltSection) entry.
We cannot reuse a `b PLTresolve`/`bl __glink_PLTresolve` in PltSection as a canonical PLT entry. PPC64 ELFv2 avoids the problem by using TOC for any external reference, even in non-pic code, so the canonical PLT entry scenario should not happen in the first place.
For PPC32, we have to create a PLT call stub as the canonical PLT entry. The code sequence sets up r11.
Reviewed By: Bdragon28
Differential Revision: https://reviews.llvm.org/D73399
Symbol information can be used to improve out-of-range/misalignment diagnostics.
It also helps R_ARM_CALL/R_ARM_THM_CALL, which have different behaviors with different symbol types.
There are many (67) relocateOne() call sites used in thunks, {Arm,AArch64}errata, PLT, etc.
Rename them to `relocateNoSym()` to be clearer that there is no symbol information.
Reviewed By: grimar, peter.smith
Differential Revision: https://reviews.llvm.org/D73254
In D71281 a fix was put in to round up the size of a ThunkSection to the
nearest 4KiB when performing errata patching. This fixed a problem with a
very large instrumented program that had thunks and patches mutually
trigger each other. Unfortunately it triggers an assertion failure in an
AArch64 allyesconfig build of the kernel. There is a specific assertion
preventing an InputSectionDescription being larger than 4KiB. This will
always trigger if there is at least one Thunk needed in that
InputSectionDescription, which is possible for an allyesconfig build.
Abstractly the problem case is:
.text : {
  *(.text) ;
  ...
  . = ALIGN(SZ_4K);
  __idmap_text_start = .;
  *(.idmap.text)
  __idmap_text_end = .;
  ...
}
The assertion checks that __idmap_text_end - __idmap_text_start is < 4 KiB.
Note that there is more than one InputSectionDescription in the
OutputSection so we can't just restrict the fix to OutputSections smaller
than 4 KiB.
The fix presented here limits the D71281 change to InputSectionDescriptions that
meet the following conditions:
1.) The OutputSection is bigger than the thunkSectionSpacing so adding
thunks will affect the addresses of following code.
2.) The InputSectionDescription is larger than 4 KiB. This will prevent
any assertion failures that an InputSectionDescription is < 4 KiB
in size.
We do this at ThunkSection creation time: at that point the addresses are
stable and up to date prior to adding the thunks, because assignAddresses()
will have been called immediately before thunk generation.
The fix reverts the two tests affected by D71281 to their original state
as they no longer need the 4KiB size roundup. I've added simpler tests to
check for D71281 when the OutputSection size is larger than the ThunkSection
spacing.
Fixes https://github.com/ClangBuiltLinux/linux/issues/812
Differential Revision: https://reviews.llvm.org/D72344
ThunkSection contains 4-byte instructions on all targets that use
thunks. Thunks should not be used in any performance-sensitive places,
and locality/cache-line/instruction-fetch arguments should not apply.
We use 16 bytes as preferred function alignments for modern PowerPC cores.
In any case, 8 is not optimal.
Differential Revision: https://reviews.llvm.org/D72819
This patch is a joint work by Rui Ueyama and me based on D58102 by Xiang Zhang.
It adds Intel CET (Control-flow Enforcement Technology) support to lld.
The implementation follows the draft version of psABI which you can
download from https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI.
CET introduces a new restriction on indirect jump instructions so that
you can limit the places to which you can jump using indirect jumps.
In order to use the feature, you need to compile source files with
-fcf-protection=full.
* IBT is enabled if all input files are compiled with the flag. To force-enable IBT, pass -z force-ibt.
* SHSTK is enabled if all input files are compiled with the flag, or if -z shstk is specified.
IBT-enabled executables/shared objects have two PLT sections, ".plt" and
".plt.sec". For the details as to why we have two sections, please read
the comments.
Reviewed By: xiangzhangllvm
Differential Revision: https://reviews.llvm.org/D59780
One instance looks like a false positive:
lld/ELF/Relocations.cpp:1622:14: note: use reference type 'const std::pair<ThunkSection *, uint32_t> &' (aka 'const pair<lld::elf::ThunkSection *, unsigned int> &') to prevent copying
  for (const std::pair<ThunkSection *, uint32_t> ts : isd->thunkSections)
It is not changed in this commit.
Linux powerpc discards `*(.gnu.version*)` (arch/powerpc/kernel/vmlinux.lds.S)
to suppress --orphan-handling=warn warnings in the -pie output `.tmp_vmlinux1`.
The support is simple. Just add isLive() checks to:
1) fix an assertion in SectionBase::getPartition(), called by VersionTableSection::isNeeded();
2) suppress DT_VERSYM, DT_VERDEF, DT_VERNEED and DT_VERNEEDNUM if the relevant section is discarded.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D71819
GNU ld creates the synthetic section .iplt, and has a built-in linker
script that assigns .iplt to the output section .plt . There is no
output section named .iplt .
Making .iplt an output section actually has a benefit that makes the
tricky toolchain feature stand out: symbolizers don't have to deal with
mixed PLT entries (e.g. llvm-objdump -d incorrectly annotates such jump
targets).
On EM_PPC{,64}, .glink contains a PLT resolver and a series of jump
instructions. The 4-byte entry size makes it unnecessary to have an
alignment of 16.
Mark ppc32-gnu-ifunc.s and ppc32-gnu-ifunc-nonpreemptable.s as `XFAIL: *`.
They test IPLT on EM_PPC, which never works.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D71520
PltSection is used by both PLT and IPLT. The PLT section may have a
header while the IPLT section does not. Split off IpltSection from
PltSection to be clearer.
Unlike other targets, PPC64 cannot use the same code sequence for PLT
and IPLT. This helps make a future PPC64 patch (D71509) more isolated.
On EM_386 and EM_X86_64, when PLT is empty while IPLT is not, currently
we are inconsistent about whether the PLT header is conceptually attached to
in.plt or in.iplt . Consistently attaching the header to in.plt makes
the -z retpolineplt logic simpler. It also makes `jmp` point to an
aesthetically better place for non-retpolineplt cases.
Reviewed By: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D71519
This change only affects EM_386. relOff can be computed from `index`
easily, so it is unnecessarily passed as a parameter.
Both in.plt and in.iplt entries are written by writePLT. For in.iplt,
the instruction `push reloc_offset` will change because `index` is now
different. Fortunately, this does not matter because `push; jmp` is only
used by PLT. IPLT does not need the code sequence.
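A hedged sketch of the computation (illustrative, not lld's exact code):
```
#include "llvm/BinaryFormat/ELF.h"

#include <cstdint>

// On EM_386, each .rel.plt entry is an Elf32_Rel (8 bytes), so the
// `push <reloc_offset>` immediate follows directly from the PLT entry
// index and no longer needs to be threaded through as a parameter.
uint32_t pltRelOff(uint32_t index) {
  return index * sizeof(llvm::ELF::Elf32_Rel);
}
```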
Reviewed By: grimar, ruiu
Differential Revision: https://reviews.llvm.org/D71518
On some edge cases such as Chromium compiled with full instrumentation we
have a .text section over twice the size of the maximum branch range and
the instrumented code generation containing many examples of the erratum
sequence. The combination of Thunks and many erratum sequences causes
finalizeAddressDependentContent() to not converge. We end up with:
start
- Thunk Creation (disturbs addresses after thunks, creating more patches)
- Patch Creation (disturbs addresses after patches, creating more thunks)
- goto start
In most images with few thunks and patches the mutual disturbance does not
cause convergence problems. As the .text size and number of patches go up
the risk increases.
A way to prevent the thunk creation from interfering with patch creation is
to round up the size of the ThunkSection to a 4 KiB boundary when the
erratum patch is enabled. As the erratum sequence only triggers when an
instruction sequence starts at 0xff8 or 0xffc modulo 4 KiB, making the
thunks not affect addresses modulo 4 KiB prevents thunks from
interfering with the patches.
The patches themselves could be aggregated in the same way that Thunks are
within ThunkSections, and we could round up the size in the same way. This
would reduce the number of patches created in a .text section larger than
128 MiB, but would likely not help the convergence problems.
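A hedged sketch of this rounding approach (illustrative names, not lld's
exact code):
```
#include "llvm/Support/MathExtras.h"

#include <cstdint>

// While erratum patching is enabled, report the ThunkSection size rounded
// up to 4 KiB so that inserting thunks never changes the addresses of the
// following code modulo 4 KiB.
uint64_t thunkSectionSize(uint64_t rawSize, bool erratumPatchingEnabled) {
  return erratumPatchingEnabled ? llvm::alignTo(rawSize, 4096) : rawSize;
}
```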
Differential Revision: https://reviews.llvm.org/D71281
Fixes the remaining part of pr44071; the other part is in D71242.
Fixes PPC64 part of PR40438
// clang -target ppc64le -c a.cc
// .text.unlikely may be placed in a separate output section (via -z keep-text-section-prefix)
// The distance between bar in .text.unlikely and foo in .text may be larger than 32MiB.
static void foo() {}
__attribute__((section(".text.unlikely"))) static int bar() { foo(); return 0; }
__attribute__((used)) static int dummy = bar();
This patch makes such thunks with addends work for PPC64.
AArch64: .text -> `__AArch64ADRPThunk_ (adrp x16, ...; add x16, x16, ...; br x16)` -> target
PPC64: .text -> `__long_branch_ (addis 12, 2, ...; ld 12, ...(12); mtctr 12; bctr)` -> target
AArch64 can leverage ADRP to jump to the target directly, but PPC64
needs to load an address from .branch_lt . Before Power ISA v3.0, the
PC-relative ADDPCIS was not available. .branch_lt was invented to work
around the limitation.
Symbol::ppc64BranchltIndex is replaced by
PPC64LongBranchTargetSection::entry_index, which takes addends into
consideration.
The tests are rewritten: ppc64-long-branch.s tests -no-pie and
ppc64-long-branch-pi.s tests -pie and -shared.
Reviewed By: sfertile
Differential Revision: https://reviews.llvm.org/D70937
The .note.gnu.property SHT_NOTE sections on AArch64 (a 64-bit target)
should have alignment 8, to more closely match the binutils implementation,
where the alignment is 4 bytes on 32-bit machines and 8 bytes on 64-bit
machines.
Previously LLD used 4 for both 32-bit and 64-bit machines.
Differential Revision: https://reviews.llvm.org/D70962
The logic added in r372781 caused ARMExidxSyntheticSection::addSection()
to return false for exidx sections without a link order dep that passed
isValidExidxSectionDep(). This included exidx sections for empty functions. As
a result, such exidx sections would end up being treated like ordinary
sections and laid out before the ARMExidxSyntheticSection, most likely in
the wrong order relative to the exidx entries in the ARMExidxSyntheticSection,
breaking the orderedness invariant relied upon by unwinders. Fix this by
simply discarding such sections.
Differential Revision: https://reviews.llvm.org/D69744
This makes it clear `ELF/**/*.cpp` files define things in the `lld::elf`
namespace and simplifies `elf::foo` to `foo`.
Reviewed By: atanasyan, grimar, ruiu
Differential Revision: https://reviews.llvm.org/D68323
llvm-svn: 373885
Our .interp section is not a SyntheticSection. As a result, it terminates the
loop in removeUnusedSyntheticSections(). This has at least two consequences:
- The synthetic .bss and .bss.rel.ro sections are always present in
dynamically linked executables, even when they are not needed.
- The synthetic .ARM.exidx (and possibly other) sections are always present
in partitions other than the last one, even when not needed.
.ARM.exidx in particular is problematic because it assumes that its
list of code sections is non-empty in getLinkOrderDep(), which can
lead to a crash if the partition does not have any code sections.
Fix these problems by moving the creation of the .interp sections to the
top of createSyntheticSections(). While here, make the code a little less
error-prone by changing the add() lambdas to take a SyntheticSection instead
of an InputSectionBase.
Differential Revision: https://reviews.llvm.org/D68256
llvm-svn: 373347
When /DISCARD/ is used on an input section, that input section may have
a .ARM.exidx metadata section that depends on it. As the discard handling
comes after the .ARM.exidx synthetic section is created, we need to make
sure that we account for the case where the .ARM.exidx output section
should be removed because there are no more live input sections.
Differential Revision: https://reviews.llvm.org/D67848
llvm-svn: 372781
Fixes PR38748
mergeSections() calls getOutputSectionName() to get output section
names. Two MergeInputSections may be merged even if they are made
different by SECTIONS commands.
This patch moves mergeSections() after processSectionCommands() and
addOrphanSections() to fix the issue. The new pass is renamed to
OutputSection::finalizeInputSections().
processSectionCommands() and addOrphanSections() are changed to add
sections to InputSectionDescription::sectionBases.
finalizeInputSections() merges MergeInputSections and migrates
`sectionBases` to `sections`.
For the -r case, we drop an optimization that tried to keep sh_entsize
non-zero; this is for the simplicity of addOrphanSections(). The
updated merge-entsize2.s reflects the change.
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D67504
llvm-svn: 372734
Fixes PR43214.
The size of SHT_RELR may oscillate between 2 numbers (see D53003 for a
similar --pack-dyn-relocs=android issue). This can happen if the shrink
of SHT_RELR causes it to take more words to encode relocation offsets
(this can happen with thunks or segments with overlapping p_offset
ranges), and the expansion of SHT_RELR causes it to take fewer words to
encode relocation offsets.
To avoid the issue, add padding 1s to the end of the relocation section
if its size would decrease. Trailing 1s do not decode to more relocations.
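A hedged sketch of the padding idea (illustrative, not lld's exact code):
in the RELR encoding, a word with the low bit set is a bitmap, and a bitmap
of value 1 has no offset bits set, so it encodes no relocations.
```
#include <cstdint>
#include <vector>

// If re-encoding produced fewer words than the previous iteration, pad
// with 1s so the section size never shrinks between address-assignment
// passes.
void padRelr(std::vector<uint64_t> &relrWords, size_t oldSize) {
  if (relrWords.size() < oldSize)
    relrWords.resize(oldSize, 1);
}
```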
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D67164
llvm-svn: 370923
This essentially reverts the code change of D63132 and switches to a simpler approach.
In an executable/shared object, st_shndx of a symbol can be:
1) SHN_UNDEF: undefined symbol (or canonical PLT)
2) SHN_ABS: absolute symbol
3) any other value (usually a regular section index) represents a relative symbol.
The actual value does not matter.
Many ld.so implementations (musl; FreeBSD rtld-elf on all archs except MIPS) even treat 2) and 3)
the same. If .sdata does not exist, it does not matter what value/section
__global_pointer$ has, as long as it is relative (otherwise there will be a pedantic
lld error. See D63132). Just set the st_shndx arbitrarily to 1.
Dummy st_shndx=1 may be used by __rela_iplt_start, linker-script-defined symbols outside a section, __dso_handle, etc.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D66798
llvm-svn: 370172
EhFrameSection::addSection checks the liveness of FDEs early. This makes it
infeasible to move combineEhSections() before ICF.
Postpone the check to EhFrameSection::finalizeContents(). This is what
ARMExidxSyntheticSection does and it will make a subsequent patch D66717
simpler.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D66727
llvm-svn: 369890
This fixed a bug in r369488. When config->isRela is false, i->r_addend
is not initialized (see encodeDynamicReloc). So we should check
config->isRela before accessing r_addend:
- if (j - i < 3 || i->r_addend)
+ if (j - i < 3 || (config->isRela && i->r_addend != 0))
Original description:
Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.
The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.
Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such a 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in the runtime linker for several executables
that I tested.
As a simple correctness check, I've also built x86_64 Android and booted
successfully.
Differential Revision: https://reviews.llvm.org/D65242
Patch by Vic Yang
llvm-svn: 369507
Currently, with Android dynamic relocation packing, only relative
relocations are grouped together. This patch implements similar
packing for non-relative relocations.
The implementation groups non-relative relocations with the same
r_info and r_addend, if using RELA. By requiring a minimum group
size of 3, this achieves smaller relocation sections. Building Android
for an ARM32 device, I see the total size of /system/lib decrease by
392 KB.
Grouping by r_info also allows the runtime dynamic linker to implement
a 1-entry cache to reduce the number of symbol lookups required. With
such a 1-entry cache implemented on Android, I'm seeing 10% to 20%
reduction in total time spent in the runtime linker for several executables
that I tested.
As a simple correctness check, I've also built x86_64 Android and booted
successfully.
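A hedged sketch of the 1-entry cache the grouping enables on the rtld side
(illustrative types and names, shown for ELF64; not Android's actual code):
```
#include <cstdint>
#include <vector>

struct Rela { uint64_t r_offset, r_info; int64_t r_addend; };

// Stand-in for the dynamic linker's expensive symbol lookup.
uint64_t resolveSymbol(uint64_t symIndex);

void applyRelas(uint8_t *base, const std::vector<Rela> &relas) {
  uint64_t cachedInfo = ~0ULL, cachedAddr = 0;
  for (const Rela &r : relas) {
    // Grouped entries share r_info, so the lookup runs once per group.
    if (r.r_info != cachedInfo) {
      cachedAddr = resolveSymbol(r.r_info >> 32); // ELF64 symbol index
      cachedInfo = r.r_info;
    }
    *reinterpret_cast<uint64_t *>(base + r.r_offset) =
        cachedAddr + r.r_addend;
  }
}
```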
Differential Revision: https://reviews.llvm.org/D66491
Patch by Vic Yang!
llvm-svn: 369488
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
Differential revision: https://reviews.llvm.org/D66259
llvm-svn: 368936