Ported the D64906 technique to AArch64. It deletes alignment padding at 3
PT_LOAD boundaries in the default case: the size of an AArch64 binary
decreases by at most 192KiB (3 times the 64KiB default max-page-size).
If `sh_addralign(.tdata) < sh_addralign(.tbss)`,
we can potentially make `p_vaddr(PT_TLS)%p_align(PT_TLS) != 0`.
ld.so implementations known to have problems if p_vaddr%p_align != 0:
* musl<=1.1.22
* FreeBSD 13.0-CURRENT (and before) rtld-elf arm64
The new test aarch64-tls-vaddr-align.s checks that our workaround makes p_vaddr%p_align == 0.
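For reference, here is a minimal sketch (not lld's code; the helper name and
the use of <elf.h> types are illustrative) of the invariant the test verifies
over the program headers:

  #include <cassert>
  #include <cstddef>
  #include <elf.h>

  // The PT_TLS segment must start at an address that is a multiple of its
  // alignment; otherwise the loaders listed above compute wrong TLS offsets.
  void checkTlsVaddrAlignment(const Elf64_Phdr *phdrs, size_t n) {
    for (size_t i = 0; i < n; ++i)
      if (phdrs[i].p_type == PT_TLS && phdrs[i].p_align > 1)
        assert(phdrs[i].p_vaddr % phdrs[i].p_align == 0);
  }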
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64930
llvm-svn: 369344
The current rule is loose: `!Sym.IsPreemptible || Expr == R_GOT`.
When the symbol is non-preemptible, this allows absolute relocation
types with smaller numbers of bits, e.g. R_X86_64_{8,16,32}. They are
disallowed by ld.bfd and gold, e.g.
  ld.bfd: a.o: relocation R_X86_64_8 against `.text' can not be used when making a shared object; recompile with -fPIC
This patch:
a) Add TargetInfo::SymbolicRel to represent relocation types that resolve to a
symbol value (e.g. R_AARCH64_ABS64, R_386_32, R_X86_64_64).
As a side benefit, we currently (ab)use GotRel (R_*_GLOB_DAT) to resolve
GOT slots that are link-time constants. Since we now use Target->SymbolicRel
to do the job, we can remove R_*_GLOB_DAT from relocateOne() for all targets.
R_*_GLOB_DAT cannot be used as static relocation types.
b) Change the condition to `(!Sym.IsPreemptible && Type == Target->SymbolicRel) || Expr == R_GOT`.
Some tests are caught by the improved error checking (ld.bfd/gold also
issue errors on them). Many misuse .long where .quad should be used
instead.
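As a boiled-down illustration (names simplified; this is not lld's actual
API), the new rule only lets the full word-size relocation through for a
non-preemptible symbol:

  // Illustrative model on x86-64, where Target->SymbolicRel is R_X86_64_64.
  enum RelType { R_X86_64_8, R_X86_64_16, R_X86_64_32, R_X86_64_64 };
  constexpr RelType SymbolicRel = R_X86_64_64;

  bool allowed(bool isPreemptible, RelType type, bool exprIsGot) {
    // Old, loose rule: return !isPreemptible || exprIsGot;
    // New rule: R_X86_64_{8,16,32} against non-preemptible symbols are now
    // rejected; only the full-width symbolic relocation qualifies.
    return (!isPreemptible && type == SymbolicRel) || exprIsGot;
  }

A typical test fix is to change `.long sym` (R_X86_64_32) to `.quad sym`
(R_X86_64_64).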
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D63121
llvm-svn: 363059
Also change some options whose semantics differ (and thus cause confusion) in llvm-readelf mode:
-s => -S
-t => --symbols
-sd => --section-data
llvm-svn: 359651
arm-plt-reloc.s, arm-thumb-plt-reloc.s: update offset calculations.
pack-dyn-relocs-loop.s: this test is very sensitive to exact section
offsets and sizes. If we comment out the following two lines in
SyntheticSections.cpp, we can reproduce the `ld.lld: error: thunk
creation not converged` failure caused by oscillation of the section size:
  if (RelocData.size() < OldSize)
    RelocData.append(OldSize - RelocData.size(), 0);
Use -z norelro to counteract the layout change (to be more specific, we
have to place .dynamic below .foo so that offset(.foo) remains 0x10004).
llvm-svn: 356229
Summary:
Based on Peter Collingbourne's suggestion in D56828.
Before D56828: PT_LOAD(.data PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .bss)
Old: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .data .bss)
New: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro)) PT_LOAD(.data .bss)
The new layout reflects the runtime memory mappings.
By having two PT_LOAD segments, we can utilize the NOBITS part of the
first PT_LOAD and save bytes for .bss.rel.ro.
.bss.rel.ro is currently small and only used by copy relocations of
symbols in read-only segments, but it can be used for other purposes in
the future, e.g. if a relro section's statically relocated data is all
zeros, we can move it to .bss.rel.ro.
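A rough sketch (invented sizes; a simplified model, not lld code) of where
the saving comes from:

  #include <cstdint>
  #include <cstdio>

  int main() {
    // Illustrative sizes, not from a real binary.
    uint64_t dataRelRo = 0x100; // SHT_PROGBITS
    uint64_t bssRelRo = 0x2000; // SHT_NOBITS
    // First PT_LOAD of the new layout: the NOBITS tail is memory-only.
    uint64_t fileSz = dataRelRo;
    uint64_t memSz = dataRelRo + bssRelRo;
    // In the old layout, .bss.rel.ro preceded PROGBITS .data within one
    // PT_LOAD, so its 0x2000 zero bytes had to be written to the file.
    std::printf("PT_LOAD(relro): p_filesz=%#llx p_memsz=%#llx (saves %#llx)\n",
                (unsigned long long)fileSz, (unsigned long long)memSz,
                (unsigned long long)(memSz - fileSz));
  }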
Reviewers: espindola, ruiu, pcc
Reviewed By: ruiu
Subscribers: nemanjai, jvesely, nhaehnle, javed.absar, kbarton, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58892
llvm-svn: 356226
Old: PT_LOAD(.data | PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) | .bss)
New: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) | .data .bss)
The placement of | indicates page alignment caused by PT_GNU_RELRO. The
new layout has simpler rules and saves space for many cases.
Old size: roundup(.data) + roundup(.data.rel.ro)
New size: roundup(.data.rel.ro + .bss.rel.ro) + .data
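Plugging in illustrative numbers (4096-byte pages; the section sizes are
made up) makes the saving concrete:

  #include <cstdint>
  #include <cstdio>

  constexpr uint64_t page = 4096;
  constexpr uint64_t roundup(uint64_t v) { return (v + page - 1) / page * page; }

  int main() {
    uint64_t data = 0x300, dataRelRo = 0x100, bssRelRo = 0x20;
    uint64_t oldSize = roundup(data) + roundup(dataRelRo);   // 0x1000 + 0x1000
    uint64_t newSize = roundup(dataRelRo + bssRelRo) + data; // 0x1000 + 0x300
    std::printf("old=%#llx new=%#llx\n", (unsigned long long)oldSize,
                (unsigned long long)newSize);
  }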
Other advantages:
* At runtime the 3 memory mappings decrease to 2.
* start(PT_TLS) = start(PT_GNU_RELRO) = start(RW PT_LOAD). This
simplifies binary manipulation tools.
  GNU strip before 2.31 discards PT_GNU_RELRO if its address is not equal
  to the start of its associated PT_LOAD. That has been fixed by
  https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=f2731e0c374e5323ce4cdae2bcc7b7fe22da1a6f,
  but with this change we stay compatible with GNU strip before 2.31 anyway.
* Before, .got.plt (non-relro by default) was placed before .got (relro
by default), which made it impossible to have _GLOBAL_OFFSET_TABLE_
(start of .got.plt on x86-64) equal to the end of .got (R_GOT*_FROM_END)
(https://bugs.llvm.org/show_bug.cgi?id=36555). With the new ordering, we
can improve in this regard if we'd like to.
Reviewers: ruiu, espindola, pcc
Subscribers: emaste, arichardson, llvm-commits, joerg, jdoerfert
Differential Revision: https://reviews.llvm.org/D56828
llvm-svn: 356117
Android uses a compressed relocation format, which means the size of the
relocation section isn't predictable based on the number of relocations,
and can vary if the layout changes in any way. To deal with this, the
linker normally runs multiple passes until the layout converges.
The layout should converge if the size of the compressed
relocation section increases monotonically: if the size of an encoded
offset increases by one byte, the largest value which can be encoded is
multiplied by 128, so the representable offsets grow much faster than
the size of the section itself.
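To see why, note that each LEB128 byte carries 7 payload bits (a sketch,
assuming the LEB128 delta encoding the format uses):

  #include <cstdint>

  // Bytes needed to encode v as ULEB128. One extra byte multiplies the
  // representable range by 128, so the encoded size grows logarithmically
  // while the offsets it encodes grow at most linearly with section size.
  int ulebSize(uint64_t v) {
    int n = 1;
    while (v >>= 7)
      ++n;
    return n;
  }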
The problem here is that there is no code to ensure the size of the
section doesn't decrease. If the size of the relocation section
decreases, the relative offsets can increase due to alignment
restrictions, so that can force the size of the relocation section to
increase again. The end result is an infinite loop; the loop gets cut
off after 10 iterations with the message "thunk creation not
converged".
To avoid this issue, this patch adds padding to the end of the
relocation section if its size would decrease. The extra
padding is harmless because of the way the format is defined:
decoding stops after it reaches the number of relocations specified
in the section's header.
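A minimal sketch of the fix (container and names simplified relative to the
code in lld's SyntheticSections.cpp):

  #include <cstddef>
  #include <cstdint>
  #include <utility>
  #include <vector>

  // Re-encode the packed relocations for the current layout, but never let
  // the section shrink: pad with zeros back to the previous size. The
  // padding is harmless because decoders stop once they have read the
  // relocation count recorded in the section header.
  void updateRelocData(std::vector<uint8_t> &relocData,
                       std::vector<uint8_t> newEncoding) {
    size_t oldSize = relocData.size();
    relocData = std::move(newEncoding);
    if (relocData.size() < oldSize)
      relocData.resize(oldSize, 0);
  }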
Differential Revision: https://reviews.llvm.org/D53003
llvm-svn: 344300