Follow-up for D74433
The names the function returns are almost the standard BFD names, except that
"ELF" is uppercase instead of lowercase.
This patch changes "ELF" to "elf" and switches ARM/AArch64 to their BFD names.
MIPS and PPC64 have endianness differences as well, but this patch does not intend to address them.
Advantages:
* llvm-objdump: the "file format " line matches GNU objdump on ARM/AArch64 objects
* the "file format " line can be extracted and fed to llvm-objcopy -O verbatim.
  (https://github.com/ClangBuiltLinux/linux/issues/779 has such a use case)
Affected tools: llvm-readobj, llvm-objdump, llvm-dwarfdump, MCJIT (internal implementation detail, not exposed)
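For illustration, a minimal sketch of the naming this change converges on. The helper name, its parameters, and the dispatch are hypothetical; the real logic lives in the per-format ObjectFile implementations:

  #include <string>

  // Hypothetical helper: lowercase "elf" plus the BFD-style architecture
  // component, so the result matches GNU objdump's "file format" line and
  // can be passed to llvm-objcopy -O directly.
  std::string getFileFormatName(const std::string &arch, bool isLittleEndian) {
    if (arch == "x86_64")
      return "elf64-x86-64";
    if (arch == "i386")
      return "elf32-i386";
    if (arch == "aarch64")
      return isLittleEndian ? "elf64-littleaarch64" : "elf64-bigaarch64";
    if (arch == "arm")
      return isLittleEndian ? "elf32-littlearm" : "elf32-bigarm";
    return "elf-unknown"; // fallback; the real code covers many more targets
  }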
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D76046
This change affects the non-linker-script case (precisely, when the
`SECTIONS` command is not used). It deletes 3 alignments at PT_LOAD
boundaries in the default case: the size of a powerpc64 binary can be
decreased by up to 192 KiB. The technique can be ported to other
targets.
Let me demonstrate the idea with a maxPageSize=65536 example:
When assigning the address to the first output section of a new PT_LOAD,
if the end p_vaddr of the previous PT_LOAD is 0x10020, we advance to
the next multiple of maxPageSize: 0x20000. The new PT_LOAD will thus
have p_vaddr=0x20000. Because p_offset and p_vaddr are congruent modulo
maxPageSize, p_offset will be 0x20000, leaving a p_offset gap [0x10020,
0x20000) in the output.
Alternatively, if we advance to 0x20020, the new PT_LOAD will have
p_vaddr=0x20020. We can pick either 0x10020 or 0x20020 for p_offset!
Obviously 0x10020 is the better choice because it leaves no gap. At runtime,
p_vaddr will be rounded down by pagesize (65536 if
pagesize=maxPageSize). This PT_LOAD will load additional initial
contents from the p_offset range [0x10000,0x10020), which will also be
loaded by the previous PT_LOAD. This is fine if -z noseparate-code is in
effect or if we are not transitioning between executable and non-executable
segments.
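A worked sketch of the offset computation described above, assuming maxPageSize=65536 (the helper names are illustrative, not lld's actual code):

  #include <cstdint>
  #include <cstdio>

  // Smallest offset >= off that is congruent to addr modulo align
  // (align must be a power of 2). This is what lets the new PT_LOAD
  // reuse p_offset=0x10020 instead of skipping ahead to 0x20000.
  uint64_t alignToCongruent(uint64_t off, uint64_t align, uint64_t addr) {
    return off + ((addr - off) & (align - 1));
  }

  int main() {
    uint64_t maxPageSize = 0x10000; // 65536
    uint64_t prevEnd = 0x10020;     // end p_vaddr/p_offset of previous PT_LOAD

    // Old behaviour: p_vaddr advances to the next multiple of maxPageSize,
    // so the congruent p_offset jumps to 0x20000 and leaves a file gap.
    uint64_t vaddrOld = (prevEnd + maxPageSize - 1) & ~(maxPageSize - 1);
    uint64_t offOld = alignToCongruent(prevEnd, maxPageSize, vaddrOld);

    // New behaviour: keep p_vaddr congruent to prevEnd (0x20020), so
    // p_offset can stay at 0x10020 and no gap is created.
    uint64_t vaddrNew = vaddrOld + (prevEnd & (maxPageSize - 1));
    uint64_t offNew = alignToCongruent(prevEnd, maxPageSize, vaddrNew);

    std::printf("old: p_vaddr=%#llx p_offset=%#llx\n",
                (unsigned long long)vaddrOld, (unsigned long long)offOld);
    std::printf("new: p_vaddr=%#llx p_offset=%#llx\n",
                (unsigned long long)vaddrNew, (unsigned long long)offNew);
  }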
ld.bfd -z noseparate-code leverages this technique to keep output small.
This patch implements the technique in lld, which is mostly effective on
targets with large defaultMaxPageSize (AArch64/MIPS/PPC: 65536). The 3
removed alignments can save almost 3*65536 bytes.
Two places that rely on p_vaddr%pagesize = 0 have to be updated.
1) We used to round p_memsz(PT_GNU_RELRO) up to commonPageSize (defaults
to 4096 on all targets). Now p_vaddr%commonPageSize may be non-zero, so
the updated formula has to take that into account; a sketch follows item 2).
2) Our TP offsets formulae are only correct if p_vaddr%p_align = 0.
Fix them. See the updated comments in InputSection.cpp for details.
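A minimal sketch of the PT_GNU_RELRO rounding mentioned in 1); the helper is illustrative, not lld's actual code:

  #include <cstdint>

  // Round x up to the next multiple of align (align is a power of 2).
  uint64_t alignTo(uint64_t x, uint64_t align) {
    return (x + align - 1) & ~(align - 1);
  }

  // Old (only correct when p_vaddr % commonPageSize == 0):
  //   p_memsz = alignTo(p_memsz, commonPageSize);
  // New: round the end address up, then convert back to a size, so a
  // non-zero p_vaddr % commonPageSize is handled correctly.
  void extendRelro(uint64_t &p_memsz, uint64_t p_vaddr, uint64_t commonPageSize) {
    p_memsz = alignTo(p_vaddr + p_memsz, commonPageSize) - p_vaddr;
  }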
On targets where we enable the technique (only PPC64 for now),
we can end up with `p_vaddr(PT_TLS)%p_align(PT_TLS) != 0`
if `sh_addralign(.tdata) < sh_addralign(.tbss)`.
This exposes problems in many ld.so implementations, especially in how they
compute the offsets of dynamic TLS blocks. Known issues:
FreeBSD 13.0-CURRENT rtld-elf (i386/amd64/powerpc/arm64)
glibc (HEAD) i386 and x86_64 https://sourceware.org/bugzilla/show_bug.cgi?id=24606
musl<=1.1.22 on TLS Variant I architectures (aarch64/powerpc64/...)
So, force p_vaddr%p_align = 0 by rounding dot up to p_align(PT_TLS), as sketched below.
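A minimal sketch of that workaround, with illustrative names (in lld this happens while assigning addresses):

  #include <algorithm>
  #include <cstdint>

  uint64_t alignTo(uint64_t x, uint64_t align) {
    return (x + align - 1) & ~(align - 1);
  }

  // p_align(PT_TLS) is the maximum sh_addralign over the TLS sections.
  // If .tdata's alignment is smaller than .tbss's, placing .tdata at its
  // natural alignment can give p_vaddr(PT_TLS) % p_align(PT_TLS) != 0,
  // which trips the buggy TLS offset computations listed above. Rounding
  // "dot" up to p_align(PT_TLS) before the first TLS section avoids that.
  uint64_t placeFirstTlsSection(uint64_t dot, uint64_t tdataAlign,
                                uint64_t tbssAlign) {
    uint64_t tlsAlign = std::max(tdataAlign, tbssAlign); // p_align(PT_TLS)
    return alignTo(dot, tlsAlign);
  }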
The technique will be enabled (with updated tests) for other targets in
subsequent patches.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64906
llvm-svn: 369343
basic64be.s is a big-endian powerpc64 test that just duplicates what
basic-ppc64.s does. Extend basic-ppc64.s to add big-endian tests.
Delete basic64be.s
Rename basic32.s to basic-i386.s
llvm-svn: 366401
Also change some options that have different semantics (and thus cause confusion) in llvm-readelf mode:
-s => -S
-t => --symbols
-sd => --section-data
llvm-svn: 359651
Summary:
Based on Peter Collingbourne's suggestion in D56828.
Before D56828: PT_LOAD(.data PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .bss)
Old: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .data .bss)
New: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro)) PT_LOAD(.data .bss)
The new layout reflects the runtime memory mappings.
By having two PT_LOAD segments, we can utilize the NOBITS part of the
first PT_LOAD and save bytes for .bss.rel.ro.
.bss.rel.ro is currently small and only used by copy relocations of
symbols in read-only segments, but it can be used for other purposes in
the future, e.g. if a relro section's statically relocated data is all
zeros, we can move it to .bss.rel.ro.
Reviewers: espindola, ruiu, pcc
Reviewed By: ruiu
Subscribers: nemanjai, jvesely, nhaehnle, javed.absar, kbarton, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58892
llvm-svn: 356226
On PowerPC64, when a function call offset is too large to encode in a call
instruction, the address is stored in a table in the data segment. A thunk is
used to load the branch target address from the table relative to the
TOC pointer and indirectly branch to the callee. When linking position-dependent
code the addresses are stored directly in the table; for position-independent
code the table is allocated and filled in at load time by the dynamic linker.
For position-independent code the branch targets could have gone in the .got.plt,
but using the .branch_lt section for both position-dependent and
position-independent binaries keeps things consistent and helps keep this
PPC64-specific logic separated from the target-independent code handling the .got.plt.
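For illustration, a sketch of the kind of thunk described above. The helper, its parameters, and the byte-order handling are stand-ins, not lld's actual code:

  #include <cstdint>
  #include <cstring>

  // Long-branch thunk body: load the callee's address from the .branch_lt
  // table at a TOC-relative offset, then branch through CTR.
  void writeLongBranchThunk(uint8_t *buf, int64_t tocOffset) {
    auto writeInsn = [&](int idx, uint32_t insn) {
      std::memcpy(buf + idx * 4, &insn, 4); // host byte order; illustrative
    };
    uint16_t ha = static_cast<uint16_t>((tocOffset + 0x8000) >> 16); // @ha
    uint16_t lo = static_cast<uint16_t>(tocOffset);                  // @l
    writeInsn(0, 0x3D820000 | ha);             // addis r12, r2, tocOffset@ha
    writeInsn(1, 0xE98C0000 | (lo & 0xFFFCu)); // ld    r12, tocOffset@l(r12)
    writeInsn(2, 0x7D8903A6);                  // mtctr r12
    writeInsn(3, 0x4E800420);                  // bctr
  }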
Differential Revision: https://reviews.llvm.org/D53408
llvm-svn: 346877
This generalizes the old heuristic of placing SHT_DYNSYM and SHT_DYNSTR first in the read-only SHF_ALLOC segment.
Reviewers: espindola
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D48406
llvm-svn: 335674
This CL places .dynsym and .dynstr at the beginning of SHF_ALLOC
sections. We do this to mitigate the possibility that huge .dynsym and
.dynstr sections placed between ro-data and text sections cause
relocation overflow.
Differential Revision: https://reviews.llvm.org/D45788
llvm-svn: 332374
This CL mitigates R_X86_64_PC32 relocation overflow problems for huge binaries that have nearly 4 GiB of allocated sections.
Examining those binaries shows 2 issues that contribute to the problem:
1) huge ".dynsym" and ".dynstr" stand in the way between .rodata and .text
2) _init_array_start/end are placed at 0 if no ".init_array" is present, which makes .text relocations against them more prone to overflow.
This CL addresses the 1st problem (the 2nd will be addressed in another CL) by assigning a smaller sort rank to .dynsym and .dynstr so that they no longer stand in between.
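A minimal sketch of the sort-rank idea, with made-up rank values (the real ranking lives in lld's section-sorting code):

  #include <string>

  // Within the read-only SHF_ALLOC group, rank .dynsym/.dynstr ahead of
  // the other sections (such as .rodata) so they no longer sit between
  // .rodata and .text and widen R_X86_64_PC32 distances.
  int rankWithinReadOnlyGroup(const std::string &name) {
    if (name == ".dynsym")
      return 0;
    if (name == ".dynstr")
      return 1;
    return 2; // .rodata and the rest follow
  }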
llvm-svn: 332038
This patch adds changes to start supporting the Power 64-Bit ELF V2 ABI.
This includes:
- Changing the ElfSym::GlobalOffsetTable to be named .TOC.
- Creating a GotHeader so the first entry in the .got is .TOC.
- Setting e_flags to 1 for ELF V1 and 2 for ELF V2 (a sketch follows this list)
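A minimal sketch of two of the points above, with illustrative names (not lld's actual code):

  #include <cstdint>

  // e_flags selects the PowerPC64 ABI version: 1 for ELF V1, 2 for ELF V2.
  uint32_t computePpc64EFlags(bool isElfV2) { return isElfV2 ? 2 : 1; }

  // .TOC. is conventionally the TOC base, 0x8000 past the start of .got;
  // the .got header entry holds this value.
  uint64_t computeTocBase(uint64_t gotVA) { return gotVA + 0x8000; }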
Differential Revision: https://reviews.llvm.org/D44483
llvm-svn: 327871