This generalizes the old heuristic of placing SHT_DYNSYM and SHT_DYNSTR first in the read-only SHF_ALLOC segment.
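Roughly, the generalized rule boils down to a rank check on section type and flags. A minimal sketch with hypothetical names (they do not match lld's real getSectionRank()), assuming a Linux <elf.h>:

  #include <elf.h>
  #include <cstdint>
  #include <string>

  struct Section {
    std::string name;
    uint32_t type;   // SHT_*
    uint64_t flags;  // SHF_*
  };

  // Lower rank sorts earlier within the read-only SHF_ALLOC segment.
  static int getRank(const Section &sec) {
    bool roAlloc = (sec.flags & SHF_ALLOC) && !(sec.flags & SHF_WRITE) &&
                   !(sec.flags & SHF_EXECINSTR);
    // .dynstr's sh_type is SHT_STRTAB, so the name disambiguates it from
    // other string tables.
    bool isDynSymOrStr = sec.type == SHT_DYNSYM ||
                         (sec.type == SHT_STRTAB && sec.name == ".dynstr");
    if (roAlloc && isDynSymOrStr)
      return 0;  // .dynsym and .dynstr sort first.
    return 1;    // Everything else follows.
  }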
Reviewers: espindola
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D48406
llvm-svn: 335674
This CL places .dynsym and .dynstr at the beginning of the SHF_ALLOC
sections. We do this to mitigate the risk that huge .dynsym and
.dynstr sections placed between the read-only data and text sections
cause relocation overflows.
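For a rough sense of the numbers (made-up sizes, not measured from a real binary): a 32-bit PC-relative relocation reaches only about ±2 GiB, so a couple of gigabytes of symbol and string table sitting between .rodata and .text can push references between those two sections out of range.

  #include <cstdint>
  #include <cstdio>

  int main() {
    const uint64_t GiB = 1ULL << 30;
    const uint64_t limit = (1ULL << 31) - 1;  // reach of a signed 32-bit field
    uint64_t rodata = 1 * GiB;                // hypothetical .rodata size
    uint64_t dynsymDynstr = 2 * GiB;          // hypothetical .dynsym + .dynstr
    // Layout A: .rodata, .dynsym/.dynstr, .text -- a reference from the start
    // of .text back to the start of .rodata must span both.
    uint64_t spanA = rodata + dynsymDynstr;
    // Layout B: .dynsym/.dynstr, .rodata, .text -- the same reference only
    // spans .rodata.
    uint64_t spanB = rodata;
    printf("layout A overflows: %d\n", spanA > limit);  // prints 1
    printf("layout B overflows: %d\n", spanB > limit);  // prints 0
  }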
Differential Revision: https://reviews.llvm.org/D45788
llvm-svn: 332374
This CL mitigates R_X86_64_PC32 relocation overflow problems in huge binaries that have close to 4 GiB of allocated sections.
Examining those binaries shows two issues contributing to the problem:
1) Huge ".dynsym" and ".dynstr" sections stand in the way between .rodata and .text.
2) __init_array_start/end are placed at 0 when no ".init_array" section is present, which makes .text relocations against them more prone to overflow.
This CL addresses the first problem (the second will be addressed in another CL) by assigning a smaller sort rank to .dynsym and .dynstr so that they no longer stand in between.
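For reference, the relocation being overflowed is computed as S + A - P (symbol address plus addend minus the place being patched), and the result must fit in a signed 32-bit field. A minimal sketch of that range check:

  #include <cstdint>

  // S = symbol address, A = addend, P = address of the relocated field.
  bool fitsPC32(uint64_t S, int64_t A, uint64_t P) {
    int64_t value = (int64_t)(S + A - P);
    return value >= INT32_MIN && value <= INT32_MAX;  // false => overflow
  }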
llvm-svn: 332038
The only difference in practice is that sh_entsize is now set in the output.
This should simplify the patch for handling SHF_MERGE in -r.
Based on a patch by George Rimar.
llvm-svn: 318306
String merging is one of the most time-consuming steps in lld.
This patch parallelizes it to speed it up. On my 2-socket, 20-core,
40-thread Xeon E5-2680 @ 2.8 GHz machine, this patch shortens the
clang debug build link time from 7.11s to 5.16s. That is a 27%
improvement and actually pretty noticeable. Under this test condition,
lld is now 4x faster than gold.
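The general idea (a hedged sketch, not lld's actual code) is to shard the merged strings by hash so that each shard can be deduplicated independently on its own thread, with no locking:

  #include <string>
  #include <thread>
  #include <unordered_set>
  #include <vector>

  std::vector<std::unordered_set<std::string>>
  dedupParallel(const std::vector<std::string> &strings, unsigned numShards) {
    std::vector<std::unordered_set<std::string>> shards(numShards);
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < numShards; ++i)
      threads.emplace_back([&, i] {
        // Each thread owns one shard, selected by hash, so no locking is
        // needed: a given string always lands in the same shard.
        for (const std::string &s : strings)
          if (std::hash<std::string>()(s) % numShards == i)
            shards[i].insert(s);
      });
    for (std::thread &t : threads)
      t.join();
    return shards;  // Offsets within each output shard would be assigned next.
  }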
Differential Revision: https://reviews.llvm.org/D38266
llvm-svn: 314588
When -O0 is specified, we do not do section merging.
Before this patch, however, several output sections were generated
instead of a single one, which is useless.
Differential revision: https://reviews.llvm.org/D27041
llvm-svn: 288151
This patch redefines the default optimization level as 1 and adds a
new level, 0. On the command line, it is -O0. The flag disables
costly but optional features so that the linker produces semantically
correct but larger output quickly. Currently it only disables
section merging.
This flag is not intended to be used for final production linking.
It is intended to be used in the compile-link-test cycle.
Linking clang with debug info is about 2x faster with the flag.
Head:
  Link time:   13.24 seconds
  Output size: 1227189664 bytes
With this patch:
  Link time:    7.41 seconds
  Output size: 2490281784 bytes
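A minimal sketch of the kind of gate this adds, with hypothetical names rather than lld's actual option handling:

  struct Config {
    int optimize = 1;  // default level is now 1; -O0 sets it to 0
  };

  void maybeMergeSections(const Config &config) {
    if (config.optimize == 0)
      return;  // -O0: skip costly section merging, emit larger output fast
    // ... perform string/section merging here ...
  }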
Differential Revision: http://reviews.llvm.org/D19705
llvm-svn: 268056
Both partial (-z relro) and full (-z relro -z now) RELRO are implemented.
Partial RELRO:
The ELF sections are reordered so that the ELF internal data sections (.got, .dtors, etc.) precede the program's data sections (.data and .bss).
.got is read-only; .got.plt is still writable.
Full RELRO:
Supports all the features of partial RELRO; .got.plt is also read-only.
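The core of both modes (a hedged sketch, not lld's actual section list) is a predicate deciding which output sections fall inside the PT_GNU_RELRO region, so they can be grouped ahead of the writable data:

  #include <string>

  bool isRelroSection(const std::string &name, bool zNow) {
    if (name == ".got" || name == ".dtors" || name == ".ctors" ||
        name == ".data.rel.ro" || name == ".dynamic")
      return true;
    // With full RELRO (-z now), lazy PLT resolution is disabled, so
    // .got.plt can be made read-only after startup as well.
    if (name == ".got.plt")
      return zNow;
    return false;
  }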
Differential revision: http://reviews.llvm.org/D14218
llvm-svn: 253967
With this patch, lld creates PT_GNU_STACK segments only when all input
files have .note.GNU-stack sections. This is in line with other linkers,
with a minor difference: we do not honor the .note.GNU-stack rwx bits,
since you can always remove the .note.GNU-stack section instead of
setting the x bit.
At least the NetBSD loader does not understand PT_GNU_STACK segments and
rejects any executable that has the segment. This patch makes lld
compatible with such operating systems.
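In pseudocode terms (a hedged sketch with hypothetical types, not lld's actual classes), the decision looks like this:

  #include <string>
  #include <vector>

  struct InputFile {
    std::vector<std::string> sectionNames;
    bool hasGnuStackNote() const {
      for (const std::string &s : sectionNames)
        if (s == ".note.GNU-stack")
          return true;
      return false;
    }
  };

  // Emit PT_GNU_STACK (PF_R | PF_W, no PF_X) only when every input file
  // carries a .note.GNU-stack section; otherwise emit no segment at all.
  bool needsGnuStackSegment(const std::vector<InputFile> &files) {
    for (const InputFile &f : files)
      if (!f.hasGnuStackNote())
        return false;
    return true;
  }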
llvm-svn: 253797