The typo was introduced to llvm MC in rL204769 (fixed in rL358247) and then to lld.
Also, for relocatable-many-sections.s, the size of .symtab changed at some point and the formula needs updating.
llvm-svn: 358248
Patch by Robert O'Callahan.
Rust projects tend to link in all object files from all dependent
libraries and rely on --gc-sections to strip unused code and data.
Unfortunately --gc-sections doesn't currently strip any debuginfo
associated with GC'ed sections, so lld links in the full debuginfo from
all dependencies even if almost all that code has been discarded. See
https://github.com/rust-lang/rust/issues/56068 for some details.
Properly stripping debuginfo for discarded sections would be difficult,
but a simple approach that helps significantly is to mark debuginfo
sections as live only if their associated object file has at least one
live code/data section. This patch does that. In a (contrived but not
totally artificial) Rust testcase linked above, it reduces the final
binary size from 46MB to 5.1MB.
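A minimal sketch of the idea, with hypothetical types and names (not lld's
actual code):

  // Sketch only: keep an object file's debuginfo sections only if at least
  // one of its code/data sections survived --gc-sections.
  #include <vector>

  struct Section { bool isDebug = false; bool live = false; };
  struct ObjectFile { std::vector<Section *> sections; };

  void markDebugInfoLive(std::vector<ObjectFile> &files) {
    for (ObjectFile &f : files) {
      bool hasLiveNonDebug = false;
      for (Section *s : f.sections)
        if (s && !s->isDebug && s->live)
          hasLiveNonDebug = true;
      if (!hasLiveNonDebug)
        continue; // every code/data section was GC'ed; skip the debuginfo too
      for (Section *s : f.sections)
        if (s && s->isDebug)
          s->live = true;
    }
  }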
Differential Revision: https://reviews.llvm.org/D54747
llvm-svn: 358069
The code previously specified a 32-bit range for R_RISCV_HI20 and
R_RISCV_LO12_[IS]. However, this is incorrect: the range of offsets on
RV64 that can be formed from the immediate of lui and the displacement
of an I-type or S-type instruction is -0x80000800 to 0x7ffff7ff. The
same issue exists for a c.lui and LO12 pair, whose actual addressable
range is -0x20800 to 0x1f7ff.
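The bounds follow from the instruction encodings. A small self-contained
check of the arithmetic (illustration only, not part of the patch):

  #include <cstdint>

  // Sketch only. lui materializes a signed 20-bit immediate into bits
  // [31:12]; the paired I-/S-type displacement (LO12) is a signed 12-bit
  // value.
  constexpr int64_t HiMin = -(INT64_C(1) << 31);    // -0x80000000
  constexpr int64_t HiMax = INT64_C(0x7ffff) << 12; //  0x7ffff000
  constexpr int64_t LoMin = -0x800, LoMax = 0x7ff;
  static_assert(HiMin + LoMin == -0x80000800, "");
  static_assert(HiMax + LoMax == 0x7ffff7ff, "");
  // c.lui materializes a signed (non-zero) 6-bit immediate into bits [17:12].
  static_assert(-(INT64_C(1) << 17) + LoMin == -0x20800, "");
  static_assert((INT64_C(0x1f) << 12) + LoMax == 0x1f7ff, "");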
The tests will be included in the next patch, which converts all RISC-V
tests to use llvm-mc instead of yaml2obj, as assembler support has
matured enough to write the tests in assembly.
Differential Revision: https://reviews.llvm.org/D60414
llvm-svn: 357995
For partitions I intend to use the same set of version indexes in
each partition for simplicity. Since each partition will need its own
VersionNeedSection, this will require moving the verneed tracking out of
VersionNeedSection. The way I've done this is to move most of the tracking
into SharedFile. What will eventually become the per-partition tracking
still lives in VersionNeedSection.
As a bonus the code gets a little simpler and more consistent with how we
handle verdef.
Differential Revision: https://reviews.llvm.org/D60307
llvm-svn: 357926
Previously, we dropped symbols starting with .L from the symbol table, so
if a relocation referred to a .L symbol, it ended up referencing the null
symbol entry -- which happened to be interpreted as an absolute symbol.
This patch copies all symbols, including local ones, if --emit-relocs is
given.
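The failure mode can be pictured as a symbol-index lookup that falls back to
the null entry (hypothetical helper, not lld's actual code):

  #include <cstdint>
  #include <map>

  struct Symbol;

  // Sketch only: building r_info for an emitted relocation needs the output
  // symbol table index of the referenced symbol. If .L symbols were dropped
  // from the table, the lookup yields 0 -- the null symbol entry -- which
  // consumers then treat as an absolute symbol with value 0.
  uint32_t symIndexFor(const std::map<const Symbol *, uint32_t> &outIndex,
                       const Symbol *s) {
    auto it = outIndex.find(s);
    return it == outIndex.end() ? 0 : it->second;
  }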
Fixes https://bugs.llvm.org/show_bug.cgi?id=41385
Differential Revision: https://reviews.llvm.org/D60306
llvm-svn: 357885
And rename the function to combineEhSections(). This makes the processing
of .ARM.exidx even more similar to .eh_frame and means that we can avoid an
additional loop over InputSections.
Differential Revision: https://reviews.llvm.org/D60026
llvm-svn: 357417
Summary:
Some synthetic sections can be empty while still being needed, so they
can't be removed by removeUnusedSyntheticSections(). Rename this member
function to the more appropriate isNeeded(), with the opposite meaning.
No functional change intended.
Reviewers: ruiu, espindola
Reviewed By: ruiu
Subscribers: jhenderson, grimar, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59982
llvm-svn: 357377
This change is a no-op by itself, but it helps D59780 because, in that
patch, we don't know whether we need to create a CET-aware PLT until we
have read all input files.
llvm-svn: 357194
Recommit r356666 with fixes for the buildbot failure, as well as handling
for --emit-relocs, for which we decide not to emit any relocation sections,
as the table is already position independent and an offline tool can deduce
the relocations.
Instead of creating extra Synthetic .ARM.exidx sections to account for
gaps in the table, create a single .ARM.exidx SyntheticSection that can
derive the contents of the gaps from a sorted list of the executable
InputSections. This has the benefit of moving the ARM specific code for
SyntheticSections in SHF_LINK_ORDER processing and the table merging code
into the ARM specific SyntheticSection. This also makes it easier to create
EXIDX_CANTUNWIND table entries for executable InputSections that don't
have an associated .ARM.exidx section.
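For the gaps, the section synthesizes terminating entries itself. As a rough
sketch (hypothetical helper, little-endian, not lld's actual code) of what
such an EXIDX_CANTUNWIND entry looks like:

  #include <cstdint>
  #include <cstring>

  // Sketch only: each .ARM.exidx entry is two 32-bit words -- a prel31
  // offset from the entry to the function it describes, and
  // EXIDX_CANTUNWIND (0x1), meaning the function has no unwind information.
  void writeCantUnwindEntry(uint8_t *buf, uint64_t entryVA, uint64_t funcVA) {
    uint32_t prel31 = static_cast<uint32_t>(funcVA - entryVA) & 0x7fffffff;
    uint32_t words[2] = {prel31, 0x1};
    std::memcpy(buf, words, sizeof(words));
  }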
Fixes PR40277
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 357160
Patch by Tiancong Wang.
In D36351, the Call-Chain Clustering (C3) heuristic was implemented with
the option --call-graph-ordering-file <file>.
This patch adds a flag, --print-symbol-order=<file>, to LLD; when
specified, it prints out the symbols ordered by the heuristic to the
file. The symbol printout is helpful to those who want to understand
the heuristic and reproduce the ordering with --symbol-ordering-file in
a later pass.
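For example (file names are hypothetical):
`ld.lld --call-graph-ordering-file=profile.txt --print-symbol-order=order.txt a.o b.o -o out`
writes the computed order to order.txt, which can later be fed back with
`--symbol-ordering-file=order.txt`.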
Differential Revision: https://reviews.llvm.org/D59311
llvm-svn: 357133
Summary:
This should address remaining issues discussed in PR36555.
Currently R_GOT*_FROM_END are exclusively used by x86 and x86_64 to
express relocation types relative to the GOT base. We have
_GLOBAL_OFFSET_TABLE_ (GOT base) = start(.got.plt), but end(.got) !=
start(.got.plt).
This can cause problems when _GLOBAL_OFFSET_TABLE_ is used as a symbol, e.g.
glibc dl_machine_dynamic assumes _GLOBAL_OFFSET_TABLE_ is start(.got.plt),
which is not true.
extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
return _GLOBAL_OFFSET_TABLE_[0]; // R_X86_64_GOTPC32
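As a simplified illustration of the computation the renamed GOTPLT-relative
relocation kinds express (hypothetical helper, not lld's actual code):

  #include <cstdint>

  // Sketch only: a GOT-base-relative relocation resolves to S + A - GOTBASE,
  // where GOTBASE is the value of _GLOBAL_OFFSET_TABLE_, i.e. start(.got.plt)
  // in lld, rather than end(.got).
  uint64_t resolveGotPltRel(uint64_t symVA, int64_t addend, uint64_t gotPltVA) {
    return symVA + addend - gotPltVA;
  }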
In this patch, we
* Change all GOT*_FROM_END to GOTPLT* to fix the problem.
* Add HasGotPltOffRel to denote whether .got.plt should be kept even if
the section is empty.
* Simplify GotSection::empty and GotPltSection::empty by setting
HasGotOffRel and HasGotPltOffRel according to GlobalOffsetTable early.
The change of R_386_GOTPC makes X86::writePltHeader simpler as we don't
have to compute the offset start(.got.plt) - Ebx (it is constant 0).
We still diverge from ld.bfd (at least in most cases) and gold in that
.got.plt and .got are not adjacent, but the advantage of doing that is
unclear.
Reviewers: ruiu, sivachandra, espindola
Subscribers: emaste, mehdi_amini, arichardson, dexonsmith, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59594
llvm-svn: 356968
lld's mark-sweep garbage collector was written in the visitor pattern:
there are functions that traverse a given graph, and they call callback
functions to dispatch according to node type.
The code was originally pretty simple, and lambdas worked pretty
well. However, as we added more features to the garbage collector, it became
more like callback hell. We now have a callback function that wraps
another callback function, for example. It is not easy to follow the flow of
control.
This patch rewrites it as a regular class. What was once a lambda is now a
regular class member function. I think this change fixes the readability
issue.
No functionality change intended.
Differential Revision: https://reviews.llvm.org/D59800
llvm-svn: 356966
Previously, `Entries` contained pairs of symbols and their indices.
The indices are always 0, x, 2x, 3x, ..., where x is the size of a
relocation entry. We don't have to store those values because we can
compute them when we consume them.
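A minimal sketch of the simplification (hypothetical names, not lld's actual
classes):

  #include <cstdint>
  #include <vector>

  struct Symbol;

  // Sketch only. Before: store (symbol, index) pairs, where the indices were
  // always 0, x, 2x, 3x, ... for an entry size x. After: store only the
  // symbols and recompute the index when the entry is consumed.
  struct EntriesSketch {
    std::vector<Symbol *> syms;
    uint64_t entsize;
    uint64_t indexOf(size_t i) const { return i * entsize; }
  };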
llvm-svn: 356812
There is a reproducible buildbot failure (segfault) on the 2 stage
clang-cmake-armv8-lld bot. Reverting while I investigate.
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 356684
Instead of creating extra Synthetic .ARM.exidx sections to account for
gaps in the table, create a single .ARM.exidx SyntheticSection that can
derive the contents of the gaps from a sorted list of the executable
InputSections. This has the benefit of moving the ARM specific code for
SyntheticSections in SHF_LINK_ORDER processing and the table merging code
into the ARM specific SyntheticSection. This also makes it easier to create
EXIDX_CANTUNWIND table entries for executable InputSections that don't
have an associated .ARM.exidx section.
Fixes PR40277
Differential Revision: https://reviews.llvm.org/D59216
llvm-svn: 356666
Summary:
This implements Rui Ueyama's idea in PR39044.
I've checked that ld.bfd and gold do not have the power-of-2 requirement
and do not require sh_entsize to be a multiple of sh_addralign.
Now, on the updated test merge-entsize.s, all three linkers happily
create .rodata that is not 3-byte aligned.
This has a use case in Linux arch/x86/crypto/sha512-avx2-asm.S: it uses
an sh_entsize of 640, which is not a power of 2.
See https://github.com/ClangBuiltLinux/linux/issues/417
Reviewers: ruiu, espindola
Reviewed By: ruiu
Subscribers: nickdesaulniers, E5ten, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59478
llvm-svn: 356428
Summary:
Based on Peter Collingbourne's suggestion in D56828.
Before D56828: PT_LOAD(.data PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .bss)
Old: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) .data .bss)
New: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro)) PT_LOAD(.data .bss)
The new layout reflects the runtime memory mappings.
By having two PT_LOAD segments, we can utilize the NOBITS part of the
first PT_LOAD and save bytes for .bss.rel.ro.
.bss.rel.ro is currently small and only used by copy relocations of
symbols in read-only segments, but it can be used for other purposes in
the future, e.g. if a relro section's statically relocated data is all
zeros, we can move it to .bss.rel.ro.
Reviewers: espindola, ruiu, pcc
Reviewed By: ruiu
Subscribers: nemanjai, jvesely, nhaehnle, javed.absar, kbarton, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58892
llvm-svn: 356226
We allow an archive file without a symbol table as a linker input, as a
workaround for a very common error in LTO builds. But that logic applied
even to an archive file containing non-bitcode files, which is not
expected. This patch limits the workaround to archives that contain only
bitcode files.
Differential Revision: https://reviews.llvm.org/D59373
llvm-svn: 356186
Old: PT_LOAD(.data | PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) | .bss)
New: PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro) | .data .bss)
The placement of | indicates the page alignment caused by PT_GNU_RELRO. The
new layout has simpler rules and saves space in many cases.
Old size: roundup(.data) + roundup(.data.rel.ro)
New size: roundup(.data.rel.ro + .bss.rel.ro) + .data
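A worked example with hypothetical sizes and a 0x1000-byte page, where
.data = 0x100, .data.rel.ro = 0x30 and .bss.rel.ro = 0x20:
Old size: roundup(0x100) + roundup(0x30) = 0x1000 + 0x1000 = 0x2000
New size: roundup(0x30 + 0x20) + 0x100 = 0x1000 + 0x100 = 0x1100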
Other advantages:
* At runtime the 3 memory mappings decrease to 2.
* start(PT_TLS) = start(PT_GNU_RELRO) = start(RW PT_LOAD). This
simplifies binary manipulation tools. GNU strip before 2.31 discards
PT_GNU_RELRO if its address is not equal to the start of its associated
PT_LOAD. This has been fixed by
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=f2731e0c374e5323ce4cdae2bcc7b7fe22da1a6f,
but with this change, we will be compatible with GNU strip before 2.31.
* Before, .got.plt (non-relro by default) was placed before .got (relro
by default), which made it impossible to have _GLOBAL_OFFSET_TABLE_
(start of .got.plt on x86-64) equal to the end of .got (R_GOT*_FROM_END)
(https://bugs.llvm.org/show_bug.cgi?id=36555). With the new ordering, we
can improve in this regard if we'd like to.
Reviewers: ruiu, espindola, pcc
Subscribers: emaste, arichardson, llvm-commits, joerg, jdoerfert
Differential Revision: https://reviews.llvm.org/D56828
llvm-svn: 356117
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
Original llvm-svn: 355964
llvm-svn: 355984
This does not appear to be necessary because StringTableSection does not
need to be finalized, which also means that we can remove the call to
finalizeSynthetic on .dynstr.
Differential Revision: https://reviews.llvm.org/D59240
llvm-svn: 355977
This shaves another word off SectionBase and makes it possible to clone a
section using the implicit copy constructor.
This basically reverts r311056, which removed the mutex in order to
make the code easier to understand. On balance I think it's probably more
straightforward to have a mutex here than to have an unusual copy constructor
in SectionBase.
Differential Revision: https://reviews.llvm.org/D59269
llvm-svn: 355966
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
llvm-svn: 355964
We don't need to take a slice of SectionCommands in addOrphanSections()
because it is not modified until the end of the function.
Differential Revision: https://reviews.llvm.org/D59239
llvm-svn: 355954