Also a couple of minor cleanups in merge-string.s:
- fix inconsistent use of tabs
- use `.p2align` rather than `.align`, since `.p2align` works the
same on all platforms (the meaning of `.align` differs between
platforms according to `AlignmentIsInBytes`); see the sketch below
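A minimal illustration of the difference (the per-target behavior cited here follows `AlignmentIsInBytes` and is given as an example, not an exhaustive rule):
```
.p2align 2    # a power of two on every target: align to 1<<2 = 4 bytes
.align 4      # 4-byte alignment on targets where .align is in bytes (e.g. x86 ELF)
.align 2      # 4-byte alignment on targets where .align is a power of two (e.g. ARM)
```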
I noticed these potential cleanups while porting SHF_STRINGS support to
wasm-ld.
Differential Revision: https://reviews.llvm.org/D97647
In GCC emitted .debug_info sections, R_386_GOTOFF may be used to
relocate DW_AT_GNU_call_site_value values
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98946).
R_386_GOTOFF (`S + A - GOT`) is one of the `isStaticLinkTimeConstant` relocation
types that are not PC-relative, so it can be used from non-SHF_ALLOC sections. We
currently allow new relocation types as needs arise. The diagnostic has caught some
bugs in the past.
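A minimal sketch of the kind of value this allows (the symbol name is illustrative):
```
	.section .debug_info,"",@progbits
	.long callee@GOTOFF    # R_386_GOTOFF: S + A - GOT, a link-time constant usable in a non-SHF_ALLOC section
```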
Differential Revision: https://reviews.llvm.org/D95994
The scope of R_TLS (the TP-offset relocation types TPREL/TPOFF used for the
local-exec TLS model) is narrower than its name may imply. R_TLS_NEG
is only used by Solaris R_386_TLS_LE_32.
Rename them so that they are less confusing.
Reviewed By: grimar, psmith, rprichard
Differential Revision: https://reviews.llvm.org/D93467
`ELFFile<ELFT>` has many methods that take pointers,
though they assume the arguments are never null and
hence could take references instead.
This patch performs that clean-up.
Differential revision: https://reviews.llvm.org/D87385
This patch implements the handling for the R_PPC64_PCREL_OPT relocation as well
as the GOT relocation for the associated R_PPC64_GOT_PCREL34 relocation.
On Power10 targets with PC-Relative addressing, the linker can relax
GOT-relative accesses to PC-Relative under some conditions. Since the sequence
consists of a prefixed load, followed by a non-prefixed access (load or store),
the linker needs to replace the first instruction (as the replacement
instruction will be prefixed). The compiler communicates to the linker that
this optimization is safe by placing the two aforementioned relocations on the
GOT load (of the address).
The linker then does two things:
- Convert the load from the got into a PC-Relative add to compute the address
relative to the PC
- Find the instruction referred to by the second relocation (R_PPC64_PCREL_OPT)
and replace the first with the PC-Relative version of it
It is important to synchronize the mapping from legacy memory instructions to
their PC-Relative form. Hence, this patch adds a file to be included by both
the compiler and the linker so they're always in agreement.
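A rough sketch of the paired relocations and the rewrite (operand syntax and the choice of replacement instruction are illustrative; the exact form depends on the dependent access):
```
	pld  3, x@got@pcrel(0), 1   # R_PPC64_GOT_PCREL34 + R_PPC64_PCREL_OPT on the GOT load
	lwz  3, 0(3)                # the dependent access the PCREL_OPT relocation points at
# When the optimization is applied, the linker rewrites the pair roughly as:
#	plwz 3, x@pcrel(0), 1       # prefixed PC-relative form of the lwz
#	nop
```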
Differential revision: https://reviews.llvm.org/D84360
For an InputSection, the `buf` argument of `InputSectionBase::relocate` points
to the content of the containing OutputSection, instead of the content of the
InputSection itself, so `outSecOff` needs to be added in its callees. This is
counter-intuitive and leads to many `- outSecOff` and `+ outSecOff`.
This patch makes `InputSection::writeTo` call `InputSectionBase::relocate` with
`outSecOff` added. relocAlloc/relocNonAlloc/relocateNonAllocForRelocatable can
thus be simplified now.
Updated test:
* non-abs-reloc.s: A minor offset bug is fixed for a diagnostic in `relocateNonAlloc`
Reviewed By: grimar
Differential Revision: https://reviews.llvm.org/D85618
tl;dr See D81784 for the 'tombstone value' concept. This patch changes our behavior to be almost the same as GNU ld (except that we also use 1 for .debug_loc):
* .debug_ranges & .debug_loc: 1 (LLD<11: 0+addend; GNU ld uses 1 for .debug_ranges)
* .debug_*: 0 (LLD<11: 0+addend; GNU ld uses 0; future LLD: -1)
We make the tweaks because:
1) The new tombstone is novel and needs more time to be adopted by consumers before it's the default.
2) The old (gold) strategy had problems with zero-length functions, so rather than going back to that, we're going to the GNU ld strategy, which doesn't have that problem.
3) One slight tweak to (2) is to apply the .debug_ranges workaround to .debug_loc for the same reasons it applies to debug_ranges - to avoid terminating lists early.
-----
http://lists.llvm.org/pipermail/llvm-dev/2020-July/143482.html
The tombstone value -1 in .debug_line caused problems for lldb (fixed by D83957;
will be included in 11.0.0) and breakpad (fixed by
https://crrev.com/c/2321300). It may potentially affect other DWARF consumers.
For .debug_ranges & .debug_loc, an argument for preferring 1 (GNU ld's value for .debug_ranges) over -2 is that:
```
{-1, -2} <<< base address selection entry
{0, length} <<< address range
```
may create a situation where low_pc is greater than high_pc. So we use
1, matching GNU ld's behavior for .debug_ranges.
For other .debug_* sections, there haven't been many reports. One issue is that
bloaty (src/dwarf.cc) can incorrectly count address ranges in .debug_ranges. To
reduce similar disruption, this patch changes the tombstone values to be similar to GNU ld's.
This does mean another change to the default trunk behavior. Sorry
about that. The default trunk behavior will be similar to release/11.x while we work on a transition plan for LLD users.
Reviewed By: dblaikie, echristo
Differential Revision: https://reviews.llvm.org/D84825
Part of https://bugs.llvm.org/show_bug.cgi?id=41734
The semantics of SHF_LINK_ORDER have been extended to represent metadata
sections associated with some other sections (usually text).
The associated text section may be discarded (e.g. LTO) and we want the
metadata section to have sh_link=0 (D72899, D76802).
Normally the metadata section is only referenced by the associated text
section. sh_link=0 means the associated text section is discarded, and
the metadata section will be garbage collected. If there is another
section (.gc_root) referencing the metadata section, the metadata
section will be retained. It's the .gc_root consumer's job to validate
the metadata sections.
# This creates a SHF_LINK_ORDER .meta with sh_link=0
.section .meta,"awo",@progbits,0
1:
.section .meta,"awo",@progbits,foo
2:
.section .gc_root,"a",@progbits
.quad 1b
.quad 2b
Reviewed By: pcc, jhenderson
Differential Revision: https://reviews.llvm.org/D72904
* If two group members are combined, we should leave just one index in the SHT_GROUP content.
* If a group member is discarded (/DISCARD/ or the upcoming -r --gc-sections combination),
we should drop its index from the SHT_GROUP content; LLD currently crashes (`getOutputSection()` is null). See the sketch of this case below.
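A sketch of the second case (section and signature names are hypothetical): a group whose non-text member is dropped, e.g. by a /DISCARD/ rule, in a -r link.
```
.section .text.foo,"axG",@progbits,foo,comdat
	ret
# If this member is discarded, its index must be removed from the SHT_GROUP content:
.section .debug_foo,"G",@progbits,foo,comdat
	.byte 0
```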
Reviewed By: psmith
Differential Revision: https://reviews.llvm.org/D84129
... to customize the tombstone value we use for an absolute relocation
referencing a discarded symbol. This can be used as a workaround when
some debug processing tool has trouble with the current -1 tombstone value
(https://bugs.chromium.org/p/chromium/issues/detail?id=1102223#c11).
For example, to get the current built-in rules (not considering the .debug_line special case for ICF):
```
-z dead-reloc-in-nonalloc='.debug_*=0xffffffffffffffff'
-z dead-reloc-in-nonalloc=.debug_loc=0xfffffffffffffffe
-z dead-reloc-in-nonalloc=.debug_ranges=0xfffffffffffffffe
```
To get GNU ld (as of binutils 2.35)'s behavior:
```
-z dead-reloc-in-nonalloc='*=0'
-z dead-reloc-in-nonalloc=.debug_ranges=1
```
This option has other use cases. For example, to check whether a
non-SHF_ALLOC section has dead relocations, we can run a regular LLD
link and another with a special -z dead-reloc-in-nonalloc= value,
then compare the two outputs.
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D83264
The location of a TLS variable is encoded as a DW_OP_const4u/DW_OP_const8u
followed by a DW_OP_push_tls_address (or DW_OP_GNU_push_tls_address, https://sourceware.org/bugzilla/show_bug.cgi?id=11616).
This change follows up on D81784 and makes relocation types generalized as
R_DTPREL (e.g. R_X86_64_DTPOFF{32,64}, R_PPC64_DTPREL64) use -1 as the
tombstone value as well. This works for both TLS Variant I and Variant II
architectures.
* arm: .long tls(tlsldo) # not working currently (R_ARM_TLS_LDO32 is R_ABS)
* mips64: .dtpreldword tls+32768
* ppc64: .quad tls@DTPREL+0x8000
* riscv: neither GCC nor clang has implemented DW_AT_location. It is likely .long/.quad tls@dtprel+0x800
* x86-32: .long tls@DTPOFF
* x86-64: .long tls@DTPOFF; .quad tls@DTPOFF
tls has a non-negative st_value, so such relocations (st_value+addend)
never resolve to -1 in a normal (not discarded) case.
```
// clang -fuse-ld=lld -g -ffunction-sections a.c -Wl,--gc-sections
// foo and tls will be discarded by --gc-sections.
// DW_AT_location [DW_FORM_exprloc] (DW_OP_const8u 0xffffffffffffffff, DW_OP_GNU_push_tls_address)
thread_local int tls;
int foo() { return ++tls; }
int main() {}
```
Also, drop logic added in D26201 intended to address PR30793. It added a test
(gc-debuginfo-tls.s) using a non-SHF_ALLOC section and a local symbol, which
does not reflect the intended scenario: a relocation in a SHF_ALLOC section
referencing a discarded non-local symbol. For such a non .debug_* section, just
emit an error.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D82899
After D81784, we resolve a relocation in .debug_* referencing an ICF-folded
section symbol to a tombstone value.
Doing this for .debug_line has a problem (https://reviews.llvm.org/D81784#2116925):
.debug_line may describe folded lines as having addresses UINT64_MAX or
some wraparound small addresses.
```
int foo(int x) {
return x; // line 2
}
int bar(int x) {
return x; // line 6
}
```
```
Address Line Column File ISA Discriminator Flags
------------------ ------ ------ ------ --- ------------- -------------
0x00000000002016c0 1 0 1 0 0 is_stmt
0x00000000002016c7 2 9 1 0 0 is_stmt
prologue_end
0x00000000002016ca 2 2 1 0 0
0x00000000002016cc 2 2 1 0 0 end_sequence
// UINT64_MAX and wraparound small addresses
0xffffffffffffffff 5 0 1 0 0 is_stmt
0x0000000000000006 6 9 1 0 0 is_stmt
prologue_end
0x0000000000000009 6 2 1 0 0
0x000000000000000b 6 2 1 0 0 end_sequence
0x00000000002016d0 9 0 1 0 0 is_stmt
0x00000000002016df 10 6 1 0 0 is_stmt prologue_end
0x00000000002016e6 11 11 1 0 0 is_stmt
...
```
These entries can confuse debuggers:
gdb before 2020-07-01 (binutils-gdb a8caed5d7faa639a1e6769eba551d15d8ddd9510 "Recognize -1 as a tombstone value in .debug_line")
(can't continue due to a breakpoint in an invalid region of memory):
```
Warning:
Cannot insert breakpoint 1.
Cannot access memory at address 0x6
```
lldb (breakpoint has no effect):
```
(lldb) b 6
Breakpoint 1: no locations (pending).
WARNING: Unable to resolve breakpoint to any actual locations.
```
This patch special cases .debug_line to not use the tombstone value,
restoring the previous behavior: .debug_line will have entries with the
same addresses (ICF) but different line numbers. A breakpoint on line 2
or 6 will trigger on both functions.
Reviewed By: dblaikie, jhenderson
Differential Revision: https://reviews.llvm.org/D82828
This is the follow-up to D77647; it implements handling for the new
R_AARCH64_PLT32 relocation type in lld. This relocation would benefit the
PIC-friendly vtables feature described in D72959.
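A minimal sketch of the kind of reference this enables (the exact assembler spelling here is an assumption):
```
	.word foo@plt - .    # R_AARCH64_PLT32: 32-bit PC-relative, may bind to a PLT entry
```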
Differential Revision: https://reviews.llvm.org/D81184
See D59553, https://lists.llvm.org/pipermail/llvm-dev/2020-May/141885.html and
https://sourceware.org/pipermail/binutils/2020-May/111357.html for
extensive discussions on a tombstone value.
See http://www.dwarfstd.org/ShowIssue.php?issue=200609.1
(Reserve an address value for "not present") for a DWARF enhancement proposal.
We now resolve relocations in .debug_* sections that reference symbols defined in discarded or ICF-folded sections to a tombstone value, indicating that the address is invalid.
This solves several problems (the normal behavior is to resolve the relocation to the addend):
* For an empty function in a collected section, a pair of (0,0) can
terminate .debug_loc and .debug_ranges (as of binutils 2.34, GNU ld
resolves such a relocation to 1 to avoid the .debug_ranges issue)
* If DW_AT_high_pc is sufficiently large, the address range can collide
with a regular code range of low address (https://bugs.llvm.org/show_bug.cgi?id=41124 )
* If a text section is folded into another by ICF, we may leave entries
in multiple CUs claiming ownership of the same range of code, which can
confuse consumers.
* Debug information associated with COMDAT sections can have problems
similar to ICF, but is more complex - thus not addressed by this patch.
For pre-DWARF-v5 .debug_loc and .debug_ranges, a pair of 0 can terminate
entries (invalidating subsequent ranges).
-1 is a reserved value with special meaning (base address selection entry) which can't be used either.
Use -2 instead.
For all other .debug_*, use UINT32_MAX for 32-bit targets and UINT64_MAX
for 64-bit targets. In the code, we intentionally use
`uint64_t tombstone = UINT64_MAX` for 32-bit targets as well: this matches
SignExtend64 as used in `relocateAlloc`. (Actually UINT32_MAX does not work for R_386_32)
Note 0, we only special case `target->symbolicRel` (R_X86_64_64, R_AARCH64_ABS64, R_PPC64_ADDR64), not
short-range absolute relocations (e.g. R_X86_64_32). Only forms like DW_FORM_addr need to be special cased.
They can hold an arbitrary address (must be 64-bit on a 64-bit target). (In theory,
producers can make use of small code model to emit 32-bit relocations. This doesn't seem to be leveraged.)
Note 1, we have to ignore the addend, because we don't want to resolve
DW_AT_low_pc (which may have a non-zero addend) to -1+addend (wrap
around to a low address):
__attribute__((section(".text.x"))) void f1() { }
__attribute__((section(".text.x"))) void f2() { } // DW_AT_low_pc has a non-zero addend
Note 2, if the prevailing copy does not have debugging information while
a non-prevailing copy does (a partial debug build), we don't do extra work
to attach debugging information to the prevailing definition. (clang
has a lot of debug info optimizations that are on by default and assume
the whole program is built with debug info.)
clang -c -ffunction-sections a.cc # prevailing copy has no debug info
clang -c -ffunction-sections -g b.cc
Reviewed By: dblaikie, avl, jhenderson
Differential Revision: https://reviews.llvm.org/D81784
The current implementation assumes that R_PPC64_TOC16_HA is always followed
by R_PPC64_TOC16_LO_DS. This can break with R_PPC64_TOC16_LO:
// Load the address of the TOC entry, instead of the value stored at that address
addis 3, 2, .LC0@toc@ha # R_PPC64_TOC16_HA
addi 3, 3, .LC0@toc@l # R_PPC64_TOC16_LO
blr
which is used by boringssl's util/fipstools/delocate/delocate.go
https://github.com/google/boringssl/blob/master/crypto/fipsmodule/FIPS.md has some documentation.
In short, this tool converts an assembly file to avoid any potential relocations.
The distance to an input .toc is not a constant after linking, so it cannot use an `addis;ld` pair.
Instead, it jumps to a stub which loads the TOC entry address with `addis;addi`.
This patch checks the presence of R_PPC64_TOC16_LO and suppresses
toc-indirect to toc-relative relaxation if R_PPC64_TOC16_LO is seen.
This approach is conservative and loses some relaxation opportunities but is easy to implement.
addis 3, 2, .LC0@toc@ha # no relaxation
addi 3, 3, .LC0@toc@l # no relaxation
li 9, 0
addis 4, 2, .LC0@toc@ha # can relax but suppressed
ld 4, .LC0@toc@l(4) # can relax but suppressed
Also note that interleaved R_PPC64_TOC16_HA and R_PPC64_TOC16_LO_DS are
possible and this patch accounts for that.
addis 3, 2, .LC1@toc@ha # can relax
addis 4, 2, .LC2@toc@ha # can relax
ld 3, .LC1@toc@l(3) # can relax
ld 4, .LC2@toc@l(4) # can relax
Reviewed By: #powerpc, sfertile
Differential Revision: https://reviews.llvm.org/D78431
This reverts commit 03ffe58605.
The full title of the reverted commit is:
[ELF][PPC64] Don't perform toc-indirect to toc-relative relaxation for
R_PPC64_TOC16_HA not followed by R_PPC64_TOC16_LO_DS
Breaks the multistage lld PowerPC buildbot.
The current implementation assumes that R_PPC64_TOC16_HA is always followed
by R_PPC64_TOC16_LO_DS. This can break with:
// Load the address of the TOC entry, instead of the value stored at that address
addis 3, 2, .LC0@toc@ha # R_PPC64_TOC16_HA
addi 3, 3, .LC0@toc@l # R_PPC64_TOC16_LO
blr
which is used by boringssl's util/fipstools/delocate/delocate.go
https://github.com/google/boringssl/blob/master/crypto/fipsmodule/FIPS.md has some documentation.
In short, this tool converts an assembly file to avoid any potential relocations.
The distance to an input .toc is not a constant after linking, so the assembly cannot use an `addis;ld` pair.
Instead, delocate changes the code to jump to a stub (`addis;addi`) which loads the TOC entry address.
Reviewed By: sfertile
Differential Revision: https://reviews.llvm.org/D78431
If there is no SHF_TLS section, there will be no PT_TLS and Out::tlsPhdr may be a nullptr.
If the symbol referenced by an R_TLS is lazy, we should treat the symbol as undefined.
Also reorganize tls-in-archive.s and tls-weak-undef.s; they did not test what they intended to test.
Implemented a bunch of relocations found in binaries with medium/large code model and the Local-Exec TLS model. The binaries link and run fine in Qemu.
In addition, the emulation `elf64_sparc` is now recognized.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D77672
This is part of the Propeller framework to do post link code layout
optimizations. Please see the RFC here:
https://groups.google.com/forum/#!msg/llvm-dev/ef3mKzAdJ7U/1shV64BYBAAJ and the
detailed RFC doc here:
https://github.com/google/llvm-propeller/blob/plo-dev/Propeller_RFC.pdf
This patch adds lld support for basic block sections and performs relaxations
after the basic blocks have been reordered.
After the linker has reordered the basic block sections according to the
desired sequence, it runs a relaxation pass to optimize jump instructions.
Currently, the compiler emits the long form of all jump instructions. The AMD64
ISA supports jump instruction variants with a one-byte or a four-byte offset.
The compiler generates jump instructions with R_X86_64 32-bit PC-relative
relocations. We would like to use a new relocation type for these jump
instructions, as it makes relaxing them easier and more accurate.
The relaxation pass does two things:
First, it deletes all explicit fall-through direct jump instructions between
adjacent basic blocks. This is done by discarding the tail of the basic block
section.
Second, if there are consecutive jump instructions, it checks whether the first
conditional jump can be inverted to convert the second into a fall-through, and
deletes the second.
The jump instructions are relaxed by using jump instruction mods, something
like relocations. These are used to modify the opcode of the jump instruction.
Jump instruction mods contain three values, instruction offset, jump type and
size. While writing this jump instruction out to the final binary, the linker
uses the jump instruction mod to determine the opcode and the size of the
modified jump instruction. These mods are required because the input object
files are memory-mapped without write permissions and directly modifying the
object files requires copying these sections. Copying a large number of basic
block sections significantly bloats memory.
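A hypothetical x86-64 sketch of the two relaxations (labels and section names are illustrative):
```
.section .text.foo,"ax",@progbits
	cmpl $0, %edi
	jne  .LBB0_2        # emitted in the 32-bit-offset form (0f 85 rel32)
	jmp  .LBB0_1        # emitted in the 32-bit-offset form (e9 rel32)
.section .text.foo.1,"ax",@progbits
.LBB0_1:
	retq
.section .text.foo.2,"ax",@progbits
.LBB0_2:
	retq
# If the layout places .LBB0_1 next, "jmp .LBB0_1" is an explicit fall-through
# and is deleted by dropping the tail of its section. If .LBB0_2 is next
# instead, the pair becomes "je .LBB0_1" (condition inverted) and the jmp is
# deleted. Surviving jumps are shrunk to the one-byte-offset forms when the
# target is in range.
```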
Differential Revision: https://reviews.llvm.org/D68065
Similar to D63182 [ELF][PPC64] Don't report "relocation refers to a discarded section" for .toc
Reviewed By: Bdragon28
Differential Revision: https://reviews.llvm.org/D75419
MC will now output the R_ARM_THM_PC8, R_ARM_THM_PC12 and
R_ARM_THM_PREL_11_0 relocations. These are short-ranged relocations that
are used to implement the adr rd, literal and ldr rd, literal pseudo
instructions.
The instructions use a new RelExpr called R_ARM_PCA in order to calculate
the required S + A - Pa expression, where Pa is AlignDown(P, 4) as the
instructions add their immediate to AlignDown(PC, 4). We also do not want
these relocations to generate or resolve against a PLT entry, as their range
is so short that they could never reach one.
R_ARM_THM_PC8 has a special encoding convention for the relocation
addend: the immediate field is unsigned, yet the addend must be -4 to
account for the Thumb PC bias. The ABI (not the architecture) uses the
convention that an 8-bit immediate of 0xff represents -4.
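A minimal Thumb sketch of the pseudo instructions involved (names are illustrative; dst is placed in another section so that MC emits relocations rather than resolving the literal references itself):
```
	.syntax unified
	.thumb
	adr r0, dst    @ adr rd, literal
	ldr r1, dst    @ ldr rd, literal
	.data
dst:
	.word 0
```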
Differential Revision: https://reviews.llvm.org/D75042
Fixes https://bugs.llvm.org//show_bug.cgi?id=44878
When --strip-debug is specified, .debug* are removed from inputSections
while .rel[a].debug* (incorrectly) remain.
LinkerScript::addOrphanSections() requires the output section of a relocated
InputSectionBase to be created first.
.debug* are not in inputSections ->
output sections .debug* are not created ->
getOutputSectionName(.rel[a].debug*) dereferences a null pointer.
Fix the null pointer dereference by deleting .rel[a].debug* from inputSections as well.
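A sketch of the sort of input that could trigger this (file name and command are hypothetical; a relocatable link is assumed so that the .rel[a] sections themselves stay in inputSections):
```
# ld.lld -r --strip-debug a.o -o out.o
	.text
	.globl _start
_start:
	ret
	.section .debug_info,"",@progbits
	.long _start    # produces .rela.debug_info, which used to outlive the stripped .debug_info
```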
Reviewed By: grimar, nickdesaulniers
Differential Revision: https://reviews.llvm.org/D74510
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.
This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.
This doesn't actually modify StringRef yet, I'll do that in a follow-up.
Similar to R_MIPS_GPREL16 and R_MIPS_GPREL32 (D45972).
If the addend of an R_PPC_PLTREL24 is >= 0x8000, it indicates that r30
is relative to the input section .got2.
```
addis 30, 30, .got2+0x8000-.L1$pb@ha
addi 30, 30, .got2+0x8000-.L1$pb@l
...
bl foo+0x8000@PLT
```
After linking, the relocation will be relative to the output section .got2.
To compensate for the shift `address(input section .got2) - address(output section .got2) = ppc32Got2OutSecOff`, adjust by `ppc32Got2OutSecOff`:
```
addis 30, 30, .got2+0x8000-.L1+ppc32Got2OutSecOff$pb@ha
addi 30, 30, .got2+0x8000-.L1+ppc32Got2OutSecOff$pb@l
...
bl foo+0x8000+ppc32Got2OutSecOff@PLT
```
This rule applies to a relocatable link and to a non-relocatable link with --emit-relocs.
Reviewed By: Bdragon28
Differential Revision: https://reviews.llvm.org/D73532
Symbol information can be used to improve out-of-range/misalignment diagnostics.
It also helps R_ARM_CALL/R_ARM_THM_CALL, which have different behaviors for different symbol types.
There are many (67) relocateOne() call sites used in thunks, {Arm,AArch64}errata, PLT, etc.
Rename them to `relocateNoSym()` to be clearer that there is no symbol information.
Reviewed By: grimar, peter.smith
Differential Revision: https://reviews.llvm.org/D73254
These functions call relocateOne(). This patch is a prerequisite for
making relocateOne() aware of `Symbol` (D73254).
Reviewed By: grimar, nickdesaulniers
Differential Revision: https://reviews.llvm.org/D73250
GCC before r245813 (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79439)
did not emit nop after b/bl. This can happen with recursive calls.
r245813 was back ported to GCC 5.5 and GCC 6.4.
This is common; for example, libstdc++.a(locale.o) shipped with GCC 4.9
and many objects in netlib LAPACK can cause lld to error. gold allows
such calls to the same section. Our __plt_foo symbol's `section` field
is used for the ThunkSection, so we can't easily implement a similarly
relaxed rule. But we can make use of its `file` field, which is currently NULL.
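A rough sketch of the pattern (a minimal recursive function; register usage is illustrative):
```
	.globl f
f:
	cmpldi 3, 0
	beqlr
	addi   3, 3, -1
	bl     f       # call to a symbol defined in the same section; old GCC emits no trailing nop
	blr
```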
Differential Revision: https://reviews.llvm.org/D71639
This makes it clear `ELF/**/*.cpp` files define things in the `lld::elf`
namespace and simplifies `elf::foo` to `foo`.
Reviewed By: atanasyan, grimar, ruiu
Differential Revision: https://reviews.llvm.org/D68323
llvm-svn: 373885
This change affects the non-linker script case (precisely, when the
`SECTIONS` command is not used). It deletes 3 alignments at PT_LOAD
boundaries for the default case: the size of a powerpc64 binary can be
decreased by at most 192kb. The technique can be ported to other
targets.
Let me demonstrate the idea with a maxPageSize=65536 example:
When assigning the address to the first output section of a new PT_LOAD,
if the end p_vaddr of the previous PT_LOAD is 0x10020, we advance to
the next multiple of maxPageSize: 0x20000. The new PT_LOAD will thus
have p_vaddr=0x20000. Because p_offset and p_vaddr are congruent modulo
maxPageSize, p_offset will be 0x20000, leaving a p_offset gap [0x10020,
0x20000) in the output.
Alternatively, if we advance to 0x20020, the new PT_LOAD will have
p_vaddr=0x20020. We can pick either 0x10020 or 0x20020 for p_offset!
Obviously 0x10020 is the choice because it leaves no gap. At runtime,
p_vaddr will be rounded down by pagesize (65536 if
pagesize=maxPageSize). This PT_LOAD will load additional initial
contents from p_offset ranges [0x10000,0x10020), which will also be
loaded by the previous PT_LOAD. This is fine if -z noseparate-code is in
effect or if we are not transitioning between executable and non-executable
segments.
ld.bfd -z noseparate-code leverages this technique to keep output small.
This patch implements the technique in lld, which is mostly effective on
targets with large defaultMaxPageSize (AArch64/MIPS/PPC: 65536). The 3
removed alignments can save almost 3*65536 bytes.
Two places that rely on p_vaddr%pagesize = 0 have to be updated.
1) We used to round p_memsz(PT_GNU_RELRO) up to commonPageSize (defaults
to 4096 on all targets). Now p_vaddr%commonPageSize may be non-zero.
The updated formula takes account of that factor.
2) Our TP offsets formulae are only correct if p_vaddr%p_align = 0.
Fix them. See the updated comments in InputSection.cpp for details.
On targets where we enable the technique (only PPC64 for now),
we can end up with `p_vaddr(PT_TLS)%p_align(PT_TLS) != 0`
if `sh_addralign(.tdata) < sh_addralign(.tbss)`.
This exposes many problems in ld.so implementations, especially the
offsets of dynamic TLS blocks. Known issues:
FreeBSD 13.0-CURRENT rtld-elf (i386/amd64/powerpc/arm64)
glibc (HEAD) i386 and x86_64 https://sourceware.org/bugzilla/show_bug.cgi?id=24606
musl<=1.1.22 on TLS Variant I architectures (aarch64/powerpc64/...)
So, force p_vaddr%p_align = 0 by rounding dot up to p_align(PT_TLS).
The technique will be enabled (with updated tests) for other targets in
subsequent patches.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64906
llvm-svn: 369343
R_GOTPLT is relative to .got.plt since D59594. Since R_HEXAGON_GOT
relocations always have 0 r_addend, they can use R_GOTPLT instead.
Reviewed By: sidneym
Differential Revision: https://reviews.llvm.org/D66274
llvm-svn: 369128
This allows removing duplicated code that subtracts 0x7000 from the
R_MIPS_TLS_TPREL_XXX relocation values in the `MIPS::relocateOne`
function.
llvm-svn: 366888
This patch does the same thing as r365595 to other subdirectories.
With this, the naming style conversion is complete for the entire lld directory.
Differential Revision: https://reviews.llvm.org/D64473
llvm-svn: 365730
This patch is mechanically generated by clang-llvm-rename tool that I wrote
using Clang Refactoring Engine just for creating this patch. You can see the
source code of the tool at https://reviews.llvm.org/D64123. There's no manual
post-processing; you can generate the same patch by re-running the tool against
lld's code base.
Here is the main discussion thread to change the LLVM coding style:
https://lists.llvm.org/pipermail/llvm-dev/2019-February/130083.html
In the discussion thread, I proposed we use lld as a testbed for variable
naming scheme change, and this patch does that.
I chose to rename variables so that they are in camelCase, just because that
is a minimal change to make variables to start with a lowercase letter.
Note to downstream patch maintainers: if you are maintaining a downstream lld
repo, just rebasing ahead of this commit would cause massive merge conflicts
because this patch essentially changes every line in the lld subdirectory. But
there's a remedy.
clang-llvm-rename tool is a batch tool, so you can rename variables in your
downstream repo with the tool. Given that, here is how to rebase your repo to
a commit after the mass renaming:
1. rebase to the commit just before the mass variable renaming,
2. apply the tool to your downstream repo to mass-rename variables locally, and
3. rebase again to the head.
Most changes made by the tool should be identical for a downstream repo and
for the head, so at step 3, almost all changes should merge and
disappear. I'd expect there to be some lines that you need to merge by
hand, but that shouldn't be too many.
Differential Revision: https://reviews.llvm.org/D64121
llvm-svn: 365595
The referenced symbol is expected to point to an R_RISCV_*_HI20
relocation. An absolute symbol has no associated section, therefore
there cannot be a matching R_RISCV_*_HI20.
This fixes the crash reported by PR42038. For reference, ld.bfd errors:
(.init+0x4): dangerous relocation: %pcrel_lo missing matching %pcrel_hi
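For reference, a valid pairing looks like this (a minimal sketch; `sym` is illustrative):
```
.Lpcrel_hi0:
	auipc a0, %pcrel_hi(sym)              # R_RISCV_PCREL_HI20
	addi  a0, a0, %pcrel_lo(.Lpcrel_hi0)  # R_RISCV_PCREL_LO12_I, must point at .Lpcrel_hi0
```
The crash occurred when the symbol passed to %pcrel_lo was absolute and therefore had no section in which to search for a matching _HI20 relocation.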
Differential Revision: https://reviews.llvm.org/D63273
llvm-svn: 365049
gcc may generate .debug_info/.debug_aranges/.debug_line/etc that are
relocated by R_RISCV_ADD*/R_RISCV_SUB* pairs.
Allow R_RISCV_ADD in non-SHF_ALLOC sections to fix link errors like:
ld.lld: error: print.c:(.debug_frame+0x60): has non-ABS relocation R_RISCV_ADD64 against symbol '.L0 '
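A minimal sketch of how such pairs arise (with linker relaxation enabled, the assembler cannot fold a label difference, even in a non-SHF_ALLOC section):
```
	.text
.Lfunc_begin0:
	nop
.Lfunc_end0:
	.section .debug_frame,"",@progbits
	.quad .Lfunc_end0-.Lfunc_begin0   # R_RISCV_ADD64 + R_RISCV_SUB64
```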
Differential Revision: https://reviews.llvm.org/D63259
llvm-svn: 365035
RISC-V psABI doesn't specify TLS relaxation. It can be handled the same
way as we handle ARM TLS. RISC-V TLS is even simpler because GD/LD use
the same relocation type.
Reviewed By: jrtc27, ruiu
Differential Revision: https://reviews.llvm.org/D63220
llvm-svn: 364813
* Handle initial relocation types: R_RISCV_CALL_PLT and R_RISCV_GOT_HI20.
* Produce dynamic relocation types: R_RISCV_COPY, R_RISCV_RELATIVE, R_RISCV_JUMP_SLOT.
* Define `symbolicRel` as R_RISCV_{32,64}.
* Generate the PLT header: it is used by the lazy-binding PLT in glibc.
* R_RISCV_CALL is changed from R_PC to R_PLT_PC. If the target symbol is preemptible, this suppresses an unnecessary "canonical PLT".
This behavior is different from ld.bfd, but it is agreed that the current lld behavior is preferable.
I have received positive responses from the binutils maintainer that the ABI/binutils implementation can be improved; see:
https://github.com/riscv/riscv-elf-psabi-doc/issues/98 and https://sourceware.org/bugzilla/show_bug.cgi?id=24685
Many -no-pie/-pie/-shared programs linked against musl or glibc should work with this patch.
Reviewed By: jrtc27
Differential Revision: https://reviews.llvm.org/D63076
llvm-svn: 364812