Instead of storing a pointer, store the members we need.
The reason for doing this is that it makes it far easier to create
synthetic sections. It also avoids reading data from files multiple
times, which might help with cross-endian linking and host
architectures with slow unaligned access.
There are obvious compacting opportunities, but this already has mixed
results even on native x86_64 linking.
There is also the possibility of better refactoring the code for
handling common symbols, but this already shows that a custom class is
not necessary.
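A minimal sketch of the idea, with illustrative names rather than the real
InputSectionBase members:

  #include <cstdint>

  // Illustrative only; the real class carries different members.
  struct Elf_Shdr_Like {
    uint64_t sh_flags;
    uint64_t sh_addralign;
    uint64_t sh_entsize;
    uint32_t sh_type;
  };

  // Before: every query chases a pointer back into the mapped object file.
  struct SectionWithPointer {
    const Elf_Shdr_Like *Hdr;
    uint64_t getFlags() const { return Hdr->sh_flags; }
  };

  // After: the needed members are copied once at construction time, which
  // also makes it easy to build synthetic sections that never had a header.
  struct SectionWithMembers {
    uint64_t Flags;
    uint64_t Alignment;
    uint64_t Entsize;
    uint32_t Type;
    uint64_t getFlags() const { return Flags; }
  };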
llvm-svn: 285148
We were fairly inconsistent as to what information should be accessed
with getSectionHdr and what information (like alignment) was stored
elsewhere.
Now all section info has a dedicated getter. The code is also a bit
more compact.
llvm-svn: 285079
Builds were failing with:
InputSection.h(139): error C2338: SectionPiece is too big
because MSVC does record layout differently, probably not packing the
'OutputOff' and 'Live' bitfields because their types are of different
size. Using size_t for 'Live' seems to fix it.
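A minimal sketch of the layout issue, assuming a 64-bit host; the names
mirror the commit but the widths are illustrative:

  #include <cstddef>
  #include <cstdint>

  // MSVC starts a new allocation unit for a bit-field whose underlying type
  // differs from its predecessor's, so mixing uint64_t and bool doubles the
  // struct there even though GCC and Clang pack it into 8 bytes.
  struct PieceUnpackedOnMSVC {
    uint64_t OutputOff : 63;
    bool Live : 1; // different underlying type: separate allocation unit
  };

  // Giving both bit-fields the same underlying type restores the packing.
  struct PiecePacked {
    size_t OutputOff : 63;
    size_t Live : 1;
  };

  static_assert(sizeof(PiecePacked) == 8, "SectionPiece is too big");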
llvm-svn: 284740
Previously, we supported only SHF_COMPRESSED sections because that
format is new and is the ELF standard. But there are object files
compressed in the GNU style out there, so we need to support them as well.
Sections compressed in the GNU style have names starting with ".zdebug_"
and contain a different header than the ELF standard one. In this
patch, getRawCompressedData is responsible for handling it.
A tricky thing about GNU-style compressed sections is that we have
to rename them when creating output sections. The ".zdebug_" prefix
implies the section is compressed. We need to rename ".zdebug_" sections
to ".debug_" ones because our output sections are not compressed.
We do that in this patch.
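A rough sketch of the renaming step (the helper is hypothetical, not the
actual LLD function):

  #include <string>

  // Output sections are written uncompressed, so a GNU-style ".zdebug_"
  // input name must map to the corresponding ".debug_" output name,
  // e.g. ".zdebug_info" becomes ".debug_info".
  static std::string getOutputSectionName(const std::string &Name) {
    const std::string Prefix = ".zdebug_";
    if (Name.compare(0, Prefix.size(), Prefix) == 0)
      return ".debug_" + Name.substr(Prefix.size());
    return Name;
  }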
llvm-svn: 284068
.ARM.exidx sections have a reverse dependency on the section they have
a SHF_LINK_ORDER dependency on. In other words, a .ARM.exidx section is
live only if the executable section it describes is live. We implement
this with a reverse dependency field in InputSection.
Adding the dependency to InputSection is the simplest implementation,
but it could be moved out to a separate map if it were found to decrease
performance for non-ARM targets.
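A minimal sketch of the reverse dependency, with an illustrative member name
rather than the actual InputSection field:

  // Skeleton only; the real class holds much more state.
  struct InputSectionSketch {
    bool Live = false;
    // Reverse of the SHF_LINK_ORDER edge: the executable section points back
    // at the .ARM.exidx section that describes it.
    InputSectionSketch *DependentSection = nullptr;
  };

  // During garbage collection, marking a code section live also keeps its
  // unwind table section alive.
  static void markLive(InputSectionSketch &Sec) {
    Sec.Live = true;
    if (Sec.DependentSection)
      Sec.DependentSection->Live = true;
  }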
Differential revision: https://reviews.llvm.org/D25234
llvm-svn: 283734
The .ARM.exidx sections contain a table. Each entry has two fields:
- PREL31 offset to the function the table entry describes
- Action to take, either cantunwind, inline unwind, or PREL31 offset to
.ARM.extab section
The table entries must be sorted in ascending order of the virtual
addresses of the functions they describe. Traditionally this is implemented
via the SHF_LINK_ORDER dependency. Instead of implementing this directly, we
sort the table entries after relocation.
The .ARM.exidx OutputSection is described by the PT_ARM_EXIDX program
header.
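A rough sketch of the post-relocation sort (struct layout and helpers are
illustrative, not the actual LLD code):

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <numeric>
  #include <vector>

  // Illustrative layout of one 8-byte .ARM.exidx table entry.
  struct ExidxEntry {
    uint32_t FnOffset; // PREL31 offset to the function this entry describes
    uint32_t Unwind;   // cantunwind, inline unwind data, or .ARM.extab offset
  };

  // Sign-extend the low 31 bits of a PREL31 field.
  static int64_t prel31(uint32_t V) {
    return static_cast<int32_t>(V << 1) >> 1;
  }

  // Return a permutation of entry indices ordered by the virtual address of
  // the function each entry describes. The PREL31 fields are relative to the
  // entry's own address, so after permuting the entries their offsets must be
  // re-encoded for the new slots; that rewrite is omitted here.
  static std::vector<size_t> exidxOrder(const std::vector<ExidxEntry> &Entries,
                                        uint64_t TableVA) {
    std::vector<size_t> Order(Entries.size());
    std::iota(Order.begin(), Order.end(), 0);
    auto Key = [&](size_t I) {
      return TableVA + I * sizeof(ExidxEntry) + prel31(Entries[I].FnOffset);
    };
    std::stable_sort(Order.begin(), Order.end(),
                     [&](size_t A, size_t B) { return Key(A) < Key(B); });
    return Order;
  }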
Differential revision: https://reviews.llvm.org/D25127
llvm-svn: 283730
This spreads out computing the hash and using it in a hash table. The
speedups are:
firefox
master 6.811232891
patch 6.559280249 1.03841162939x faster
chromium
master 4.369323666
patch 4.33171853 1.00868134338x faster
chromium fast
master 1.856679971
patch 1.850617741 1.00327578725x faster
the gold plugin
master 0.32917962
patch 0.325711944 1.01064645023x faster
clang
master 0.558015452
patch 0.550284165 1.01404962652x faster
llvm-as
master 0.032563515
patch 0.032152077 1.01279662275x faster
the gold plugin fsds
master 0.356221362
patch 0.352772162 1.00977741549x faster
clang fsds
master 0.635096494
patch 0.627249229 1.01251060127x faster
llvm-as fsds
master 0.030183188
patch 0.029889544 1.00982430511x faster
scylla
master 3.071448906
patch 2.938484138 1.04524944215x faster
This seems to be because we don't stall as much. When linking firefox
stalled-cycles-frontend goes from 57.56% to 55.55%.
With -O2 the difference is even more significant since we avoid
recomputing the hash. For firefox we go from 9.990295265 to
9.149627521 seconds (1.09x faster).
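A minimal sketch of the two-pass scheme, using standard containers instead of
LLD's own (the key type and map are illustrative):

  #include <cstddef>
  #include <functional>
  #include <string>
  #include <unordered_map>
  #include <vector>

  // A key that carries its precomputed hash, so the hash table never has to
  // rehash the section contents at insertion time.
  struct HashedKey {
    std::string Data;
    size_t Hash;
    bool operator==(const HashedKey &O) const { return Data == O.Data; }
  };

  struct UseCachedHash {
    size_t operator()(const HashedKey &K) const { return K.Hash; }
  };

  int main() {
    std::vector<HashedKey> Pieces = {{"alpha", 0}, {"beta", 0}, {"alpha", 0}};

    // Pass 1: hash the contents up front, away from the table insertion.
    for (HashedKey &K : Pieces)
      K.Hash = std::hash<std::string>()(K.Data);

    // Pass 2: deduplicate using the cached hashes; no rehashing happens here.
    std::unordered_map<HashedKey, size_t, UseCachedHash> OffsetMap;
    size_t Off = 0;
    for (const HashedKey &K : Pieces)
      if (OffsetMap.emplace(K, Off).second)
        Off += K.Data.size() + 1;
    return 0;
  }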
llvm-svn: 283367
It is pretty easy to get the data from the InputSection, so we don't
have to store it.
This opens the way for storing the hash instead.
llvm-svn: 283357
This simplifies error handling as there is now only one place in the
code that needs to consider the possibility that the name is
corrupted. Before, we would do that check on every access.
llvm-svn: 280937
Previously, we used the LayoutInputSection class to correctly assign
symbols defined in a linker script. This patch removes it and uses a
pointer to the preceding input section in the SymbolAssignment class instead.
Differential revision: https://reviews.llvm.org/D23661
llvm-svn: 280348
This section supersedes the .reginfo and .MIPS.options sections, but for now
we have to support all three sections for the ABI transition period.
llvm-svn: 278482
All other singleton instances are accessible globally.
CommonInputSection shouldn't be an exception.
Differential Revision: https://reviews.llvm.org/D22935
llvm-svn: 277034
Not all relocations from a .eh_frame that point to an executable
section should be ignored. In particular, the relocation finding the
personality function should not.
This is a reduction from trying to bootstrap a static lld on linux.
llvm-svn: 276329
We no longer need it for relocations in .eh_frame.
The only relocations that point to .eh_frame are the ones trying to
find the output .eh_frame.
This actually fixes a bug in the symbol value code. It was not
handling -1 as an indicator for a piece not being included in the
output.
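A minimal sketch of the missing check, with illustrative names:

  #include <cstdint>

  // Simplified piece record; the real SectionPiece uses a bit-field.
  struct Piece {
    uint64_t InputOff;
    int64_t OutputOff; // -1 means the piece is not included in the output
  };

  // Hypothetical helper: map a piece to an output offset, treating
  // OutputOff == -1 as "dropped" instead of adding it blindly.
  static bool getOutputOffset(const Piece &P, uint64_t SecAddr,
                              uint64_t &Result) {
    if (P.OutputOff == -1)
      return false; // the piece was removed; callers must handle this
    Result = SecAddr + static_cast<uint64_t>(P.OutputOff);
    return true;
  }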
llvm-svn: 276175
Creating sections on the linker script side requires some methods
that can be reused if they are exported from the writer.
Patch implements that change.
Differential revision: http://reviews.llvm.org/D20104
llvm-svn: 275162
The TinyPtrVector of const Thunk<ELFT>* in InputSections.h can cause
build failures on certain compiler/library combinations when Thunk<ELFT>
is not a complete type or is an abstract class. Fixed by making Thunk<ELFT>
non-abstract.
llvm-svn: 274863
Generalise the Mips LA25 Thunk code and implement ARM and Thumb
interworking Thunks.
- Introduce a new module Thunks.cpp to store the Target Specific Thunk
implementations.
- DefinedRegular and Shared have a ThunkData field to record Thunk.
- A Target can have more than one type of Thunk.
- Support PC-relative calls to Thunks.
- Support Thunks to PLT entries.
- Existing Mips LA25 Thunk code integrated.
- Support for ARMv7A interworking Thunks.
Limitations:
- Only one Thunk per SymbolBody; this is sufficient for all currently
implemented Thunks.
- ARM thunks assume the presence of V6T2 MOVT and MOVW instructions.
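Roughly what one of the new interworking thunks looks like; the class and
helpers are illustrative, not the actual Thunks.cpp interface, but the
encodings are the usual ARM movw/movt/bx sequence implied by the V6T2
requirement above:

  #include <cstdint>

  // Approximate shape of a target-specific thunk.
  class ThunkSketch {
  public:
    virtual ~ThunkSketch() = default;
    virtual uint32_t size() const = 0;
    virtual void writeTo(uint8_t *Buf) const = 0;
  };

  // Hypothetical ARM-to-Thumb interworking thunk: materialize the destination
  // in ip (r12) with movw/movt, then bx to it (bit 0 of Dest selects Thumb).
  class ARMV7InterworkingThunk final : public ThunkSketch {
  public:
    explicit ARMV7InterworkingThunk(uint32_t Dest) : Dest(Dest) {}
    uint32_t size() const override { return 12; }
    void writeTo(uint8_t *Buf) const override {
      write32(Buf + 0, 0xe300c000 | encMovImm16(Dest & 0xffff)); // movw ip, #:lower16:Dest
      write32(Buf + 4, 0xe340c000 | encMovImm16(Dest >> 16));    // movt ip, #:upper16:Dest
      write32(Buf + 8, 0xe12fff1c);                              // bx ip
    }

  private:
    // Split a 16-bit immediate into the imm4:imm12 fields of movw/movt.
    static uint32_t encMovImm16(uint32_t V) {
      return ((V & 0xf000) << 4) | (V & 0x0fff);
    }
    // Instructions are stored little-endian here for simplicity.
    static void write32(uint8_t *Buf, uint32_t V) {
      for (int I = 0; I < 4; ++I)
        Buf[I] = (V >> (8 * I)) & 0xff;
    }
    uint32_t Dest;
  };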
Differential revision: http://reviews.llvm.org/D21891
llvm-svn: 274836
Previously, ch_size was read in host byte order, so if the host and
the target differ in byte order, we would produce corrupted output.
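A minimal sketch of an endian-aware read (the helper is hypothetical; the
real code goes through LLVM's ELF type wrappers):

  #include <cstdint>

  // Decode a 64-bit field such as ch_size using the object file's byte
  // order, so a little-endian host linking a big-endian target still sees
  // the right value.
  static uint64_t read64(const uint8_t *P, bool TargetIsLittleEndian) {
    uint64_t V = 0;
    for (int I = 0; I < 8; ++I) {
      int Byte = TargetIsLittleEndian ? I : 7 - I;
      V |= static_cast<uint64_t>(P[Byte]) << (8 * I);
    }
    return V;
  }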
llvm-svn: 274729
This patch implements support for zlib-style compressed sections.
The SHF_COMPRESSED flag is used to recognize that decompression is required.
After that, decompression is performed and the flag is removed from the output.
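A rough sketch of the decompression step, assuming zlib's uncompress() and a
little-endian 64-bit target; the real code uses LLVM's ELF types and zlib
wrapper and handles both ELF classes:

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>
  #include <zlib.h>

  // Illustrative Elf64_Chdr for a little-endian target.
  struct Chdr64 {
    uint32_t ch_type;      // ELFCOMPRESS_ZLIB == 1
    uint32_t ch_reserved;
    uint64_t ch_size;      // size of the uncompressed data
    uint64_t ch_addralign; // alignment of the uncompressed data
  };

  // Hypothetical helper: strip the compression header and inflate the rest.
  static bool decompressSection(const uint8_t *Data, size_t Size,
                                std::vector<uint8_t> &Out) {
    if (Size < sizeof(Chdr64))
      return false;
    Chdr64 Hdr;
    std::memcpy(&Hdr, Data, sizeof(Hdr));
    if (Hdr.ch_type != 1) // ELFCOMPRESS_ZLIB
      return false;
    Out.resize(Hdr.ch_size);
    uLongf DestLen = Out.size();
    return uncompress(Out.data(), &DestLen, Data + sizeof(Hdr),
                      Size - sizeof(Hdr)) == Z_OK &&
           DestLen == Hdr.ch_size;
  }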
Differential revision: http://reviews.llvm.org/D20272
llvm-svn: 273661
Peter Smith found while trying to support thunk creation for ARM that
LLD sometimes creates broken thunks for MIPS. The cause of the bug is
that we assign file offsets to input sections too early. We need to
create all sections and then assign section offsets because appending
thunks changes file offsets for all following sections.
This patch separates the pass that assigns file offsets from the thunk
creation pass. This effectively reverts r265673.
Differential Revision: http://reviews.llvm.org/D21598
llvm-svn: 273532
I think it was me who named these variables, but I always find
them slightly confusing because "align" is a verb.
Adding four letters is worth it.
llvm-svn: 272984
MergedInputSection::getOffset is the busiest function in LLD if string
merging is enabled and input files have lots of mergeable sections.
That is usually the case when creating an executable with debug info,
so it is pretty common.
The reason it is slow is that it has to do fairly complex
computations. For non-mergeable sections, section contents are
contiguous in output, so in order to compute an output offset,
we only have to add the output section's base address to an input
offset. But for mergeable strings, section contents are split for
merging, so they are not contiguous. We've got to do some lookups.
We used to do binary search on the list of section pieces.
It is slow because I think it's hostile to branch prediction.
This patch replaces it with a hash table lookup. It seems to be working
pretty well. Below is "perf stat -r10" output when linking clang
with debug info. In this case, this patch speeds up the link by about 4%.
Before:
6584.153205 task-clock (msec) # 1.001 CPUs utilized ( +- 0.09% )
238 context-switches # 0.036 K/sec ( +- 6.59% )
0 cpu-migrations # 0.000 K/sec ( +- 50.92% )
1,067,675 page-faults # 0.162 M/sec ( +- 0.15% )
18,369,931,470 cycles # 2.790 GHz ( +- 0.09% )
9,640,680,143 stalled-cycles-frontend # 52.48% frontend cycles idle ( +- 0.18% )
<not supported> stalled-cycles-backend
21,206,747,787 instructions # 1.15 insns per cycle
# 0.45 stalled cycles per insn ( +- 0.04% )
3,817,398,032 branches # 579.786 M/sec ( +- 0.04% )
132,787,249 branch-misses # 3.48% of all branches ( +- 0.02% )
6.579106511 seconds time elapsed ( +- 0.09% )
After:
6312.317533 task-clock (msec) # 1.001 CPUs utilized ( +- 0.19% )
221 context-switches # 0.035 K/sec ( +- 4.11% )
1 cpu-migrations # 0.000 K/sec ( +- 45.21% )
1,280,775 page-faults # 0.203 M/sec ( +- 0.37% )
17,611,539,150 cycles # 2.790 GHz ( +- 0.19% )
10,285,148,569 stalled-cycles-frontend # 58.40% frontend cycles idle ( +- 0.30% )
<not supported> stalled-cycles-backend
18,794,779,900 instructions # 1.07 insns per cycle
# 0.55 stalled cycles per insn ( +- 0.03% )
3,287,450,865 branches # 520.799 M/sec ( +- 0.03% )
72,259,605 branch-misses # 2.20% of all branches ( +- 0.01% )
6.307411828 seconds time elapsed ( +- 0.19% )
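A minimal sketch of the lookup scheme described above, using standard
containers instead of LLD's own:

  #include <algorithm>
  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  // Illustrative piece record and merge section; layouts differ in LLD.
  struct Piece {
    uint64_t InputOff;
    uint64_t OutputOff;
  };

  struct MergeSectionSketch {
    std::vector<Piece> Pieces;                        // sorted by InputOff
    std::unordered_map<uint64_t, uint64_t> OffsetMap; // InputOff -> OutputOff

    void buildOffsetMap() {
      for (const Piece &P : Pieces)
        OffsetMap.emplace(P.InputOff, P.OutputOff);
    }

    // Most relocations point at the start of a piece, so the common case is
    // a single hash lookup; offsets into the middle of a piece fall back to
    // the old binary search. Assumes Off falls inside the section.
    uint64_t getOffset(uint64_t Off) const {
      auto It = OffsetMap.find(Off);
      if (It != OffsetMap.end())
        return It->second;
      auto I = std::upper_bound(
          Pieces.begin(), Pieces.end(), Off,
          [](uint64_t V, const Piece &P) { return V < P.InputOff; });
      const Piece &P = *--I;
      return P.OutputOff + (Off - P.InputOff);
    }
  };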
Differential Revision: http://reviews.llvm.org/D20645
llvm-svn: 270999
This patch makes the SectionPiece class 8 bytes smaller on platforms
where the pointer size is 8 bytes. Sean suggested in a post-commit
review for r270340 that this could make a difference, and it
actually does. Time to link clang (with debug info) improved from
6.725 seconds to 6.589 seconds, or by about 2%.
Differential Revision: http://reviews.llvm.org/D20613
llvm-svn: 270717
scanReloc and the functions on which it depends are in total
more than 600 lines of code. Since scanReloc does not depend on the Writer,
it is better to move it into a separate file.
Differential Revision: http://reviews.llvm.org/D20554
llvm-svn: 270606