When parsing CU ranges for gdb-index, handle the error, which is now
propagated up through the API lld calls here (previously the error was
printed within the libDebugInfo API, not allowing lld to format or
handle the message at all): include the object and archive name in the
message, and fail the link.
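As a rough, hypothetical sketch of the caller-side pattern this enables (names are illustrative, not lld's actual code): the library now hands back an llvm::Error, so the caller can prepend context before reporting it and failing the link.
  #include "llvm/Support/Error.h"
  #include <string>
  #include <utility>

  // Illustrative only: format a propagated parse error with the object
  // (and, where applicable, archive) name prepended.
  std::string formatCuRangeError(llvm::Error E, const std::string &ObjName) {
    return ObjName + ": error parsing CU ranges: " + llvm::toString(std::move(E));
  }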
llvm-svn: 349979
Summary:
For the 2-bit Bloom filter, we currently pick the bits Hash%64 and (Hash>>6)%64 (Shift2=6), but bits [6:...] of the hash are also used to select a mask word, causing a loss of precision.
In this patch, we choose Shift2=26, which was suggested by Ambrose Feinstein.
Note that Shift2 is computed as maskbitslog2 in bfd/elflink.c and gold/dynobj.cc.
It varies with the number of dynamic symbols, but we don't
necessarily need to copy that rule.
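A standalone sketch of the relationship (hypothetical helper; ELF64 layout with 64-bit mask words and a power-of-two number of words):
  #include <cstdint>
  #include <vector>

  // The word index comes from Hash / 64, i.e. hash bits [6:6+log2(MaskWords)),
  // so with Shift2 = 6 the second filter bit reuses bits that already chose
  // the word; Shift2 = 26 draws it from an independent part of the hash.
  void setBloomBits(std::vector<uint64_t> &Filter, uint32_t Hash, unsigned Shift2) {
    const unsigned C = 64;
    size_t Word = (Hash / C) & (Filter.size() - 1);
    Filter[Word] |= uint64_t(1) << (Hash % C);
    Filter[Word] |= uint64_t(1) << ((Hash >> Shift2) % C);
  }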
Reviewers: ruiu, espindola
Reviewed By: ruiu
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D55971
llvm-svn: 349966
Summary:
This reinstates what I originally intended to do in D54361.
It removes the assumption that .debug_gnu_pubnames has increasing CuOffset.
Now we do better than gold here: when .debug_gnu_pubnames contains
multiple sets, gold incorrectly assumes every set has the same CU index
as the first set.
Reviewed By: ruiu
Reviewers: ruiu, dblaikie, espindola
Subscribers: emaste, arichardson, arphaman, llvm-commits
Differential Revision: https://reviews.llvm.org/D54483
llvm-svn: 347820
The DT_PLTRELSZ dynamic tag is calculated using the size of the
OutputSection containing the In.RelaPlt InputSection. This works for the
default no-linker-script case and the majority of linker scripts.
Unfortunately it doesn't work for some 'almost' sensible linker scripts.
ELF permits a single OutputSection to contain In.RelaDyn, In.RelaPlt,
and In.RelaIplt. It also permits the range of memory
[DT_RELA, DT_RELA + DT_RELASZ) and the range
[DT_JMPREL, DT_JMPREL + DT_JMPRELSZ) to overlap, as long as the latter
range is at the end.
To support this type of linker script, use the specific InputSection sizes.
Fixes PR39678
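A hedged, standalone illustration of that rule (hypothetical helper, not lld code): the JMPREL range must either be disjoint from the RELA range or form its tail, which is why sizing DT_PLTRELSZ from the .rela.plt/.rela.iplt input sections is sufficient.
  #include <cstdint>

  // Returns true if [JmpRel, JmpRel+JmpRelSz) is either disjoint from
  // [Rela, Rela+RelaSz) or is its trailing part.
  bool jmpRelRangeIsValid(uint64_t Rela, uint64_t RelaSz,
                          uint64_t JmpRel, uint64_t JmpRelSz) {
    uint64_t RelaEnd = Rela + RelaSz;
    uint64_t JmpRelEnd = JmpRel + JmpRelSz;
    bool Disjoint = JmpRelEnd <= Rela || JmpRel >= RelaEnd;
    bool IsTail = JmpRel >= Rela && JmpRelEnd == RelaEnd;
    return Disjoint || IsTail;
  }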
Differential Revision: https://reviews.llvm.org/D54759
llvm-svn: 347736
This is https://bugs.llvm.org/show_bug.cgi?id=38978
Spec says that:
"Objects may be built with the -z nodefaultlib option to
suppress any search of the default locations at runtime.
Use of this option implies that all the dependencies of an
object can be located using its runpaths.
Without this option, which is the most common case, no
matter how you augment the runtime linker's library
search path, its last element is always /usr/lib for 32-bit
objects and /usr/lib/64 for 64-bit objects."
The patch implements this option.
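A minimal sketch of what implementing the option amounts to (the helper is hypothetical; DF_1_NODEFLIB is the standard DT_FLAGS_1 bit the runtime linker checks):
  #include <cstdint>

  constexpr uint64_t DF_1_NODEFLIB = 0x0800; // generic ELF DT_FLAGS_1 bit

  // Hypothetical helper: -z nodefaultlib marks the object so the runtime
  // linker skips the default locations (/usr/lib, /usr/lib/64).
  uint64_t addNodefaultlib(uint64_t DtFlags1, bool ZNodefaultlib) {
    return ZNodefaultlib ? (DtFlags1 | DF_1_NODEFLIB) : DtFlags1;
  }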
Differential revision: https://reviews.llvm.org/D54577
llvm-svn: 347647
We explicitly call finalizeContents() only once for
DynamicSection, so the code checking that we do not call it
twice is excessive.
It could be an assert instead, but we don't do
that for other sections, so it does not seem we
should do it here either.
llvm-svn: 347543
Summary:
This fixes PR39711: -static -z retpolineplt does not produce retpoline PLT header.
-z now is not relevant.
A statically linked executable does not have a PLT, but it may have an IPLT with no header. When -z retpolineplt is specified, however, the retpoline PLT header should still be emitted.
I've checked that this fixes the FreeBSD reproducer in PR39711 and a Linux program statically linked against glibc. The programs now print "Hi" rather than dying with SIGILL/SIGSEGV.
getPltEntryOffset may look dirty after this patch, but it can be cleaned up later.
Another possible improvement is that when there are non-preemptible IFUNC symbols (rare case, e.g. -Bsymbolic), both In.Plt and In.Iplt can be non-empty and we'll emit the retpoline PLT header twice.
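A hedged sketch of the header-emission condition described above (hypothetical predicate, not the actual lld change):
  // The retpoline PLT header must be written whenever -z retpolineplt is in
  // effect and there is any PLT-like table at all, including the IPLT-only
  // case of a static link.
  bool needsRetpolinePltHeader(bool ZRetpolineplt, bool PltIsEmpty, bool IpltIsEmpty) {
    return ZRetpolineplt && !(PltIsEmpty && IpltIsEmpty);
  }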
Reviewers: espindola, emaste, chandlerc, ruiu
Reviewed By: emaste
Subscribers: emaste, arichardson, krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D54782
llvm-svn: 347404
On PowerPC64, when a function call offset is too large to encode in a call
instruction, the address is stored in a table in the data segment. A thunk is
used to load the branch target address from the table relative to the
TOC-pointer and indirectly branch to the callee. When linking position-dependent
code the addresses are stored directly in the table; for position-independent
code the table is allocated and filled in at load time by the dynamic linker.
For position-independent code the branch targets could have gone in the .got.plt,
but using the .branch_lt section for both position-dependent and
position-independent binaries keeps things consistent and helps keep this PPC64
specific logic separated from the target-independent code handling the .got.plt.
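Expressed in C++ rather than PPC64 assembly, a hedged sketch of what such a thunk does (names are illustrative; the real thunk uses TOC-relative load and mtctr/bctr instructions):
  #include <cstdint>

  typedef void (*Callee)();

  // Load the callee's address from its .branch_lt slot at a fixed offset from
  // the TOC pointer, then branch to it indirectly.
  void callViaBranchLt(char *TocPointer, int64_t SlotOffset) {
    Callee Target = *reinterpret_cast<Callee *>(TocPointer + SlotOffset);
    Target();
  }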
Differential Revision: https://reviews.llvm.org/D53408
llvm-svn: 346877
Summary:
NameTypeEntry::Type is a bit-packed value of CU index+attributes (https://sourceware.org/gdb/onlinedocs/gdb/Index-Section-Format.html), named cu_index_and_attrs in a local variable in gdb/dwarf2read.c:dw2_symtab_iter_next.
The new name CuIndexAndAttrs is more meaningful.
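A rough sketch of the packing (bit layout per the gdb index format linked above; the helper itself is hypothetical):
  #include <cstdint>

  // Bits 0-23 hold the CU index, bits 28-30 the symbol kind, bit 31 the
  // static flag; bits 24-27 are reserved.
  uint32_t packCuIndexAndAttrs(uint32_t CuIdx, uint32_t Kind, bool IsStatic) {
    return (CuIdx & 0xffffff) | (Kind << 28) | (uint32_t(IsStatic) << 31);
  }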
Reviewers: ruiu, dblaikie, espindola
Reviewed By: dblaikie
Subscribers: emaste, aprantl, arichardson, JDevlieghere, arphaman, llvm-commits
Differential Revision: https://reviews.llvm.org/D54481
llvm-svn: 346794
Summary:
Idx passed to readPubNamesAndTypes was an index into Chunks, not an
index into the CU list. This would be incorrect if some .debug_info
section contained more than 1 DW_TAG_compile_unit.
In the real world, glibc's Scrt1.o is a partial link of start.os, abi-note.o, and init.o, and contains 2 CUs in debug builds.
Without this patch, any application linking such an Scrt1.o would have an invalid .gdb_index.
The issue could be demonstrated by:
(gdb) py print(gdb.lookup_global_symbol('main'))
None
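A hedged, standalone sketch of the bookkeeping implied above (hypothetical types): the CU index written into .gdb_index must be the global CU number, so each object needs the running total of CUs in preceding objects rather than its position in the chunk list.
  #include <cstddef>
  #include <vector>

  // Prefix-sum the per-object CU counts to get each object's base CU index.
  std::vector<size_t> computeCuIdxBase(const std::vector<size_t> &CuCount) {
    std::vector<size_t> Base(CuCount.size());
    size_t Sum = 0;
    for (size_t I = 0; I < CuCount.size(); ++I) {
      Base[I] = Sum;
      Sum += CuCount[I];
    }
    return Base;
  }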
Reviewers: espindola, ruiu
Reviewed By: ruiu
Subscribers: Higuoxing, grimar, dblaikie, emaste, aprantl, arichardson, JDevlieghere, arphaman, llvm-commits
Differential Revision: https://reviews.llvm.org/D54361
llvm-svn: 346747
Summary:
The debug_info_offset value may be relocated.
This is lld side change of D54375.
Reviewers: ruiu, dblaikie, grimar, espindola
Subscribers: emaste, arichardson, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D54376
llvm-svn: 346616
Summary:
D52830 set sh_link to .symtab for static links, which breaks executables stripped by GNU strip.
It may also be odd that .rela.plt (SHF_ALLOC) points to .symtab (non-SHF_ALLOC).
Change the logic per pcc's suggestion.
Before:
% clang -fuse-ld=lld -static -xc =(printf 'int main(){}') # or gcc
% strip a.out; ./a.out
unexpected reloc type in static binary
[1] 61634 segmentation fault ./a.out
Reviewers: ruiu, grimar, emaste, espindola
Reviewed By: ruiu
Subscribers: pcc, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D53993
llvm-svn: 345899
Summary: .rela.plt may contain only R_*_{,I}RELATIVE relocations and thus not need a symbol table link. bfd/gold fall back to sh_link=0 in this case. Without this patch, ld.lld --strip-all caused lld to dereference a null pointer.
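A hedged sketch of the fallback (hypothetical types, not the actual patch):
  #include <cstdint>

  struct SymbolTableSection { uint32_t SectionIndex; };

  // Link a relocation section against the dynamic symbol table when one
  // exists; otherwise fall back to sh_link = 0, as bfd and gold do.
  uint32_t relocSectionLink(const SymbolTableSection *DynSymTab) {
    return DynSymTab ? DynSymTab->SectionIndex : 0;
  }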
Reviewers: ruiu, grimar, espindola
Reviewed By: ruiu
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D53881
llvm-svn: 345648
Out::DebugInfo was used only by GdbIndex class to determine if
we need to create a .gdb_index section, but we can do the same
check without it.
Added a test to verify that this patch doesn't change the existing behavior.
llvm-svn: 345058
Android uses a compressed relocation format, which means the size of the
relocation section isn't predictable based on the number of relocations,
and can vary if the layout changes in any way. To deal with this, the
linker normally runs multiple passes until the layout converges.
The layout should converge if the size of the compressed
relocation section increases monotonically: if the size of an encoded
offset increases by one byte, the largest value which can be encoded is
multiplied by 128, so the representable offsets grow much faster than
the size of the section itself.
The problem here is that there is no code to ensure the size of the
section doesn't decrease. If the size of the relocation section
decreases, the relative offsets can increase due to alignment
restrictions, so that can force the size of the relocation section to
increase again. The end result is an infinite loop; the loop gets cut
off after 10 iterations with the message "thunk creation not
converged".
To avoid this issue, this patch adds padding to the end of the
relocation section if its size would decrease. The extra
padding is harmless because of the way the format is defined:
decoding stops after it reaches the number of relocations specified
in the section's header.
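A hedged sketch of the padding step (hypothetical buffer handling):
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Never let the encoded section shrink between passes: pad it back to its
  // previous size so the layout grows monotonically and converges. Decoders
  // ignore the padding because the header states the relocation count.
  void padToPreviousSize(std::vector<uint8_t> &Encoded, size_t PrevSize) {
    if (Encoded.size() < PrevSize)
      Encoded.resize(PrevSize, 0);
  }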
Differential Revision: https://reviews.llvm.org/D53003
llvm-svn: 344300
This is https://bugs.llvm.org/show_bug.cgi?id=37538.
Currently, LLD may set both sh_link and sh_info of the
.rela.plt section to zero when only the .rela.iplt part is used.
ELF spec (https://docs.oracle.com/cd/E19683-01/816-1386/chapter6-94076/index.html)
says that for SHT_REL and SHT_RELA, sh_link references the associated symbol table
and sh_info the "section to which the relocation applies."
When we set the sh_link field, for the regular case we use the .dynsym index.
For .rela.iplt sections, it is unclear what the associated symbol table is,
because R_*_RELATIVE relocations do not use symbol names and we might have no
.dynsym section at all, so this patch uses the .symtab section index.
Differential revision: https://reviews.llvm.org/D52830
llvm-svn: 344226
Previously, we uncompress all compressed sections before doing anything.
That works, and it is conceptually simple, but it could result in
a waste of CPU time and memory if uncompressed sections are then
discarded or just copied to the output buffer.
In particular, if .debug_gnu_pub{names,types} are compressed and if no
-gdb-index option is given, we waste CPU and memory because we
uncompress them into newly allocated buffers and then memcpy the buffers
to the output buffer. The temporary buffers are redundant.
This patch changes how to uncompress sections. Now, compressed sections
are uncompressed lazily. To do that, `Data` member of `InputSectionBase`
is now hidden from the outside, and the `data()` accessor automatically expands
a compressed buffer if necessary.
If no one calls `data()`, then `writeTo()` directly uncompresses
compressed data into the output buffer. That eliminates the redundant
memory allocation and redundant memcpy.
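A hedged sketch of the idea with hypothetical member and function names (the declared decompress() is a stand-in for whatever decompressor is actually used):
  #include <cstdint>
  #include <vector>

  std::vector<uint8_t> decompress(const std::vector<uint8_t> &In); // stand-in

  struct LazySection {
    std::vector<uint8_t> Raw;                  // raw, possibly compressed bytes
    bool Compressed = false;
    mutable std::vector<uint8_t> Uncompressed; // filled on first access

    // Expand compressed bytes only when somebody actually asks for them.
    const std::vector<uint8_t> &data() const {
      if (Compressed && Uncompressed.empty())
        Uncompressed = decompress(Raw);
      return Compressed ? Uncompressed : Raw;
    }
  };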
This patch significantly reduces memory consumption (20 GiB max RSS to
15 GiB) for an executable whose .debug_gnu_pub{names,types} are in total
5 GiB in an uncompressed form.
Differential Revision: https://reviews.llvm.org/D52917
llvm-svn: 343979
Summary: The convenience wrapper in STLExtras is available since rL342102.
Reviewers: ruiu, espindola
Subscribers: emaste, arichardson, mgrang, llvm-commits
Differential Revision: https://reviews.llvm.org/D52569
llvm-svn: 343146
When we write a struct to a mmap'ed buffer, we usually use
write16/32/64, but we didn't for VersionDefinitionSection, so
we needed to template that class.
llvm-svn: 343024
Previously, if you invoke lld's `main` more than once in the same process,
the second invocation could fail or produce a wrong result due to stale
pointer values left over from the previous run.
Differential Revision: https://reviews.llvm.org/D52506
llvm-svn: 343009
tolower() has some overhead because the current locale is considered (though lld uses the default "C" locale, so this does not matter too much). llvm::toLower is more efficient, as it compiles to a compare and a conditional jump instead of a libc call.
Disregarding locale also matches gdb's behavior (gdb/minsyms.h):
#define SYMBOL_HASH_NEXT(hash, c) \
((hash) * 67 + TOLOWER ((unsigned char) (c)) - 113)
where TOLOWER (include/safe-ctype.h) is a macro that uses a lookup table under the hood which is similar to llvm::toLower.
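For reference, a hedged sketch of the same hash written with llvm::toLower (standalone helper; the real lld function may differ in detail):
  #include "llvm/ADT/StringExtras.h"
  #include "llvm/ADT/StringRef.h"
  #include <cstdint>

  // Locale-independent .gdb_index symbol hash, mirroring the gdb macro above.
  uint32_t gdbHash(llvm::StringRef Name) {
    uint32_t H = 0;
    for (uint8_t C : Name)
      H = H * 67 + llvm::toLower(C) - 113;
    return H;
  }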
Reviewers: ruiu, espindola
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D52128
llvm-svn: 342342
Once we create .gdb_index contents, .zdebug_gnu_pub{names,types}
are useless, so there's no need to keep their uncompressed data
in memory.
I observed that for a test case in which lld creates a 3 GB .gdb_index
section, the maximum resident set size was reduced from 43 GB to 29 GB
after this patch.
Differential Revision: https://reviews.llvm.org/D52126
llvm-svn: 342297
-z interpose sets the DF_1_INTERPOSE flag, marking the object as an
interposer.
This was reported in FreeBSD PR 230604, where linking Valgrind with lld failed.
Differential Revision: https://reviews.llvm.org/D52094
llvm-svn: 342239
This fixes the following warning when compiling with gcc version 8.0.1 20180319 (experimental) (GCC):
/home/umb/LLVM/llvm/tools/lld/ELF/SyntheticSections.cpp:1951:46: warning: enumeral and non-enumeral type in conditional expression [-Wextra]
return OS->SectionIndex >= SHN_LORESERVE ? SHN_XINDEX : OS->SectionIndex;
llvm-svn: 340164
Summary: This makes it conform to what the comment says. Otherwise, when getErrPlace() is called afterwards, cast<InputSection>(D) will cause an incompatible cast, as MergeInputSection is not a subclass of InputSection.
Reviewers: ruiu, grimar, espindola, pcc
Reviewed By: grimar
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D50742
llvm-svn: 339904
It turns out that postThunkContents() is only used for
sorting symbols in .symtab.
Thus we can instead move the logic to SymbolTableBaseSection::finalizeContents(),
postpone calling it, and then get rid of postThunkContents completely.
Differential revision: https://reviews.llvm.org/D49547
llvm-svn: 339413
If we fail to merge a secondary GOT with the primary GOT but so far only
one merged GOT has been created (the primary one), the final element in
MergedGots is the primary GOT. Thus we should not try to merge with this
final element passing IsPrimary=false, since this will ignore the fact
that the destination GOT does in fact need a header, and those extra two
entries can be enough to allow the merge to incorrectly occur. Instead
we should check for this case before attempting the second merge.
Patch by James Clarke.
Differential revision: https://reviews.llvm.org/D49422
llvm-svn: 337810
Signed values for the FDE PC addr were not correctly handled in
readFdeAddr(). If the value is negative and the type of the value is
smaller than 64 bits, the FDE PC addr overflow error would be
incorrectly triggered.
Fixed readFdeAddr() to properly handle signed values by sign extending
where appropriate.
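A hedged one-liner of the widening step (using LLVM's SignExtend64; the surrounding function is illustrative):
  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // Sign extend a value read with a narrower signed FDE encoding (here
  // DW_EH_PE_sdata4) to 64 bits before any range check.
  int64_t widenSdata4(uint32_t V) { return llvm::SignExtend64<32>(V); }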
Differential Revision: https://reviews.llvm.org/D49557
llvm-svn: 337683
Currently, getFdePC() returns uint64_t. That is because the following
encodings might use 8 bytes: DW_EH_PE_absptr and DW_EH_PE_udata8.
But the caller assigns the returned value to a uint32_t field:
https://github.com/llvm-mirror/lld/blob/master/ELF/SyntheticSections.cpp#L508
Value is used for building .eh_frame_hdr section.
We use DW_EH_PE_sdata4 encoding for building it at this moment:
https://github.com/llvm-mirror/lld/blob/master/ELF/SyntheticSections.cpp#L2545
That means an overflow issue might happen if
DW_EH_PE_absptr/DW_EH_PE_udata8 address encodings are present
in .eh_frame. In that case, before this patch, we would silently
truncate the address and produce a broken .eh_frame_hdr section.
It would not be hard to support real 64-bit values for the
DW_EH_PE_absptr/DW_EH_PE_udata8 encodings, but it is
unclear whether it is useful and whether we should do it.
Since nobody has faced or reported it, in this patch I only implement
a check to stop silently producing broken output for now.
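A hedged sketch of the kind of guard meant here (illustrative; the actual check in the patch may differ):
  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // Refuse to silently truncate a 64-bit FDE address into the 32-bit
  // DW_EH_PE_sdata4 slot used when building .eh_frame_hdr.
  bool fitsInEhFrameHdrEntry(uint64_t FdePc) { return llvm::isUInt<32>(FdePc); }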
llvm-svn: 337382
The code was dead because at the moment of BssSection creation
it can never have a parent. Also, the code simply does not
make sense, as the alignment adjustment happens when the
BssSection is added to its parent later.
llvm-svn: 337276
Since .gdb_index sections contain all known symbols, they can be very large.
One of my executables has a .gdb_index section of 1350 MiB. Uniquifying
symbols by name takes 3.77 seconds on my machine. This patch parallelizes it.
Time to call createSymbols() with 8.4 million unique symbols:
Without this patch: 3773 ms
Parallelism = 1: 4374 ms
Parallelism = 2: 2628 ms
Parallelism = 16: 837 ms
As you can see above, this algorithm is a bit less efficient
than the non-parallelized version, but even with two cores it is
faster, so I think it is an overall win.
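A hedged, standalone sketch of the sharding approach (hypothetical helper using plain std::thread rather than lld's own parallel utilities): symbols are routed to shards by hash, each shard is deduplicated by its own thread with no locking, and the shards are concatenated at the end.
  #include <cstddef>
  #include <functional>
  #include <string>
  #include <thread>
  #include <unordered_set>
  #include <vector>

  std::vector<std::string> uniquifyParallel(const std::vector<std::string> &Names,
                                            unsigned NumShards) {
    // Route each name to a shard by hash.
    std::vector<std::vector<std::string>> Shards(NumShards);
    for (const std::string &N : Names)
      Shards[std::hash<std::string>()(N) % NumShards].push_back(N);
    // Deduplicate every shard in its own thread; threads never share a shard.
    std::vector<std::vector<std::string>> Uniqued(NumShards);
    std::vector<std::thread> Threads;
    for (unsigned I = 0; I < NumShards; ++I)
      Threads.emplace_back([&, I] {
        std::unordered_set<std::string> Seen;
        for (const std::string &N : Shards[I])
          if (Seen.insert(N).second)
            Uniqued[I].push_back(N);
      });
    for (std::thread &T : Threads)
      T.join();
    // Concatenate the per-shard results.
    std::vector<std::string> Result;
    for (std::vector<std::string> &V : Uniqued)
      Result.insert(Result.end(), V.begin(), V.end());
    return Result;
  }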
Differential Revision: https://reviews.llvm.org/D49164
llvm-svn: 336790
This workaround is for GCC 5.4.1. Without this workaround, lld will
produce larger .gdb_index sections for object files compiled with the
buggy version of the compiler.
Since it is not for correctness, and it affects only debug builds (since
you are generating .gdb_index sections), perhaps the hack shouldn't have been
added in the first place. At least, I think it is time to remove this hack.
Differential Revision: https://reviews.llvm.org/D49149
llvm-svn: 336788
This patch merges the createGdbIndex function and GdbIndexSection's
constructor into a single static member function of the class.
This patch also changes how we keep CU vectors. Previously, CuVector
and GdbSymbols were parallel arrays, but there's no reason to choose that
design. Now, CuVector is a member of GdbSymbol class.
A lot of members are removed from GdbIndexSection. Previously, it had
members that needed to be kept in sync over several phases. I believe the new
design is less error-prone, and the new code is much easier to read
than before.
llvm-svn: 336743