Previously, an empty CPU string was passed to the LTO engine, which
resulted in a generic CPU for which certain features, such as NOPL, were
disabled. This fixes that.
Patch by Pratik Bhatu!
llvm-svn: 323801
Currently, symbols assigned or created by a linker script are not processed early
enough. As a result, it is not possible to version them or assign any other flags/properties to them.
This patch creates Defined symbols for -defsym and linker script symbols early,
so that the issue above can be addressed.
It is based on Rafael Espindola's version of the D38239 patch.
Fixes PR34121.
Differential revision: https://reviews.llvm.org/D41987
llvm-svn: 323729
r323476 added support for DW_FORM_line_strp, and incorrectly made that
depend on having a DWARFUnit available. We shouldn't be tracking
.debug_line_str in DWARFUnit after all. After this patch, I can do an
NFC follow-up and undo a bunch of the "plumbing" part of r323476.
Differential Revision: https://reviews.llvm.org/D42609
llvm-svn: 323691
This should fix PR36017.
The root problem is that we were creating a PT_LOAD just for the
header. That was technically valid, but inconvenient: we should not be
making the ELF file discontinuous.
The solution is to allow a section with an LMAExpr to be added to a
PT_LOAD if that PT_LOAD doesn't already have an LMAExpr.
llvm-svn: 323625
If two sections are in the same PT_LOAD, their relative offsets in the
file, in virtual memory and in physical memory are all the same.
I initially wanted to have a single global LMAOffset, on the
assumption that every ELF file was in practice loaded contiguously in
both physical and virtual memory.
Unfortunately that is not the case. The Linux kernel has (columns:
Offset, VirtAddr, PhysAddr, FileSiz, MemSiz, Flags, Align):
LOAD 0x200000 0xffffffff81000000 0x0000000001000000 0xced000 0xced000 R E 0x200000
LOAD 0x1000000 0xffffffff81e00000 0x0000000001e00000 0x15f000 0x15f000 RW 0x200000
LOAD 0x1200000 0x0000000000000000 0x0000000001f5f000 0x01b198 0x01b198 RW 0x200000
LOAD 0x137b000 0xffffffff81f7b000 0x0000000001f7b000 0x116000 0x1ec000 RWE 0x200000
The delta between virtual and physical addresses is the same for all but
the third PT_LOAD: 0xffffffff80000000. I think the third one is a hack for
implementing per-CPU data, but we can't break that.
llvm-svn: 323456
This fixes the crash reported at PR36083.
The issue is that we were trying to put all the sections in the same
PT_LOAD and crashing trying to write past the end of the file.
This also adds accounting for used space in LMARegion; without it, all
three PT_LOADs would have the same physical address.
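Below is a minimal sketch, with made-up names, of the kind of accounting this
refers to: a load-memory region tracks how much of it has already been used,
so that successive sections placed in it get distinct physical addresses.
```
// Illustrative sketch only; not LLD's actual MemoryRegion/LMARegion code.
#include <cstdint>

struct LMARegionSketch {
  uint64_t Origin;  // start of the region (CurPos starts here)
  uint64_t CurPos;  // advances as sections are placed in the region
};

uint64_t placeSection(LMARegionSketch &R, uint64_t Size, uint64_t Align) {
  uint64_t Addr = (R.CurPos + Align - 1) & ~(Align - 1);
  // Without recording the used space here, every section would be
  // assigned the same physical address (the region's current position).
  R.CurPos = Addr + Size;
  return Addr;
}
```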
llvm-svn: 323449
Since SyntheticSection::getParent() may return null, dereferencing
this pointer in the ARMExidxSentinelSection::empty() call from
removeUnusedSyntheticSections() results in crashes when linking ARM
binaries.
Patch by vit9696!
llvm-svn: 323366
The reinterpret_cast to uint32_t used to read the little-endian instructions
only works on a little-endian host. Use ulittle32_t to always read
little-endian (AArch64 instructions are always little-endian).
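As a hedged illustration (not the exact LLD change), reading the instruction
word through llvm::support::ulittle32_t from llvm/Support/Endian.h yields the
same value on any host:
```
// Illustrative sketch: endian-safe read of a 32-bit AArch64 instruction.
#include "llvm/Support/Endian.h"
#include <cstdint>

uint32_t readInstr(const uint8_t *Loc) {
  // Host-endian read; wrong on big-endian hosts:
  //   return *reinterpret_cast<const uint32_t *>(Loc);
  // Always little-endian, matching the AArch64 instruction encoding:
  return *reinterpret_cast<const llvm::support::ulittle32_t *>(Loc);
}
```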
Fixes PR36056
Differential Revision: https://reviews.llvm.org/D42421
llvm-svn: 323243
Summary:
First, we need to explain the core of the vulnerability. Note that this
is a very incomplete description, please see the Project Zero blog post
for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative execution
of the processor to some "gadget" of executable code by poisoning the
prediction of indirect branches with the address of that gadget. The
gadget in turn contains an operation that provides a side channel for
reading data. Most commonly, this will look like a load of secret data
followed by a branch on the loaded value and then a load of some
predictable cache line. The attacker then uses timing of the processor's
cache to determine which direction the branch took *in the speculative
execution*, and in turn what one bit of the loaded value was. Due to the
nature of these timing side channels and the branch predictor on Intel
processors, this allows an attacker to leak data only accessible to
a privileged domain (like the kernel) back into an unprivileged domain.
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In many
cases, the compiler can simply use directed conditional branches and
a small search tree. LLVM already has support for lowering switches in
this way and the first step of this patch is to disable jump-table
lowering of switches and introduce a pass to rewrite explicit indirectbr
sequences into a switch over integers.
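As a rough illustration (not the pass's actual output), the effect of that
rewrite is to turn an indirect jump through a table of code addresses into a
switch over small integers, which the backend can then lower with conditional
branches only:
```
// Illustrative sketch of the rewrite's effect, with made-up names.
int handleA() { return 1; }
int handleB() { return 2; }
int handleDefault() { return 0; }

int dispatch(int State) {
  // Before: an indirect branch through a table of code addresses
  // (e.g. a computed goto), which lowers to an LLVM 'indirectbr'.
  // After: a switch over an integer, which can be lowered as a tree of
  // direct conditional branches instead of an indirect jump.
  switch (State) {
  case 0: return handleA();
  case 1: return handleB();
  default: return handleDefault();
  }
}
```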
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as
a trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures the
processor predicts the return to go to a controlled, known location. The
retpoline then "smashes" the return address pushed onto the stack by the
call with the desired target of the original indirect call. The result
is a predicted return to the next instruction after a call (which can be
used to trap speculative execution within an infinite loop) and an
actual indirect branch to an arbitrary address.
On 64-bit x86 ABIs, this is especially easy to do in the compiler by
using a guaranteed scratch register to pass the target into this device.
For 32-bit ABIs there isn't a guaranteed scratch register and so several
different retpoline variants are introduced to use a scratch register if
one is available in the calling convention and to otherwise use direct
stack push/pop sequences to pass the target address.
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
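For concreteness, here is a minimal sketch of the thunk shape that blog post
describes, written as GNU toplevel asm inside a C++ translation unit. The
symbol name is made up, and the thunk LLVM actually emits may differ in
details; this only illustrates the call->ret pattern described above.
```
// Illustrative only: the published retpoline pattern for a target held in
// %r11. "example_retpoline_r11" is a made-up name, not an LLVM symbol.
asm(R"(
    .text
    .globl example_retpoline_r11
example_retpoline_r11:
    callq 1f           # push a return address pointing at the capture loop
2:  pause              # speculative execution is trapped in this loop
    lfence
    jmp 2b
1:  movq %r11, (%rsp)  # smash the pushed return address with the real target
    retq               # predicted to return to 2:, actually branches to %r11
)");
```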
We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them.
These are particularly useful in environments like kernels that
routinely do hot-patching on boot and want to hot-patch their thunk to
different code sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this
case, on x86-64 the thunk name must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
The target of the retpoline is passed in the named register or, in the
case of the `push` suffix, on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have
found are not really attackable, and so we have not focused on them
here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z retpolineplt`
(or use similar functionality from some other linker). We strongly
recommend also using `-z now` as non-lazy binding allows the
retpoline-mitigated PLT to be substantially smaller.
When manually applying transformations similar to `-mretpoline` to the
Linux kernel, we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately 2%)
even for extremely syscall-heavy applications. This is largely due to
the small number of indirect branches that occur in performance
sensitive paths of the kernel.
When using these patches on statically linked applications, especially
C++ applications, you should expect to see a much more dramatic
performance hit. For microbenchmarks that are switch-, indirect-call-, or
virtual-call-heavy, we have seen overheads ranging from 10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically reduce
the impact of hot indirect calls (by speculatively promoting them to
direct calls) and allow optimized search trees to be used to lower
switches. If you need to deploy these techniques in C++ applications, we
*strongly* recommend that you ensure all hot call targets are statically
linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well-tuned
servers using all of these techniques saw 5% - 10% overhead from
the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality available
as soon as possible. Happy for more code review, but we'd really like to
get these patches landed and backported ASAP for obvious reasons. We're
planning to backport this to both 6.0 and 5.0 release streams and get
a 5.0 release with just this cherry picked ASAP for distros and vendors.
This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.
Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer
Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D41723
llvm-svn: 323155
The compiler doesn't know that Config->WordSize * 8 is always a
power of two, so it had to use a div instruction to divide by that
value instead of a cheap shift.
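A hedged sketch of the general issue (illustrative, not the LLD change
itself): when the divisor is a compile-time constant power of two, the
compiler turns the division into a shift; with a runtime value it has to emit
a div.
```
#include <cstdint>

// Divisor only known at run time: the compiler must emit a div instruction.
uint64_t indexRuntime(uint64_t V, uint64_t WordSize) {
  return V / (WordSize * 8);
}

// Divisor is a constant power of two: the division becomes a right shift.
uint64_t indexConst(uint64_t V) {
  constexpr uint64_t C = 8 * 8; // e.g. 64-bit words
  return V / C;                 // compiled as V >> 6
}
```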
llvm-svn: 323014
I created https://reviews.llvm.org/D42202 to see how large the bloom
filter should be. With that patch, I tested various bloom filter sizes
with the following commands:
$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug -DLLVM_ENABLE_LLD=true \
-DLLVM_ENABLE_PROJECTS='clang;lld' -DBUILD_SHARED_LIBS=ON \
-DCMAKE_SHARED_LINKER_FLAGS=-Wl,-bloom-filter-bits=<some integer> \
../llvm-project/llvm
$ rm -f $(find . -name \*.so.7.0.0svn)
$ ninja lld
$ LD_BIND_NOW=1 perf stat bin/ld.lld
Here is the result:
-bloom-filter-bits=8 0.220351609 seconds
-bloom-filter-bits=10 0.217146597 seconds
-bloom-filter-bits=12 0.206870826 seconds
-bloom-filter-bits=16 0.209456312 seconds
-bloom-filter-bits=32 0.195092075 seconds
Currently we allocate 8 bits per symbol, but according to the above
results, that number is not optimal. Even though the numbers show
diminishing returns, the point where a marginal improvement becomes
too small is not -bloom-filter-bits=8 but 12, so this patch sets it to 12.
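For reference, here is a minimal sketch (with made-up names, not LLD's code)
of how a bits-per-symbol parameter translates into the number of .gnu.hash
bloom filter words:
```
#include <cstddef>

// Illustrative: one filter word covers WordSize * 8 bits; round the word
// count up to a power of two so indexing can use a mask instead of a modulo.
size_t bloomFilterWords(size_t NumSymbols, size_t BitsPerSymbol,
                        size_t WordSize /* bytes, e.g. 8 on 64-bit */) {
  size_t Bits = NumSymbols * BitsPerSymbol;
  size_t Words = (Bits + WordSize * 8 - 1) / (WordSize * 8);
  size_t P = 1;
  while (P < Words)
    P <<= 1;
  return P;
}
```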
Differential Revision: https://reviews.llvm.org/D42204
llvm-svn: 323010
We need to decompose the relocation type for the N32 / N64 ABIs. Let's do
it before any other manipulation of the relocation type in the
`relocateOne` routine.
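As a hedged illustration (names and exact packing are illustrative, not LLD's
code), the N64 ABI packs up to three relocation types into one r_type value,
one per byte, and they have to be pulled apart before being applied:
```
#include <cstdint>
#include <tuple>

// Illustrative: split a packed MIPS N64-style relocation type into its
// first, second and third components (one byte each).
std::tuple<uint8_t, uint8_t, uint8_t> decomposeRelType(uint32_t Type) {
  uint8_t First  = Type & 0xff;
  uint8_t Second = (Type >> 8) & 0xff;
  uint8_t Third  = (Type >> 16) & 0xff;
  return std::make_tuple(First, Second, Third);
}
```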
llvm-svn: 322860
The problem we had with it is that anything inside an AT() is an
expression, so we failed to parse the section name because of the '-'
in it.
llvm-svn: 322801
Previously we always handled -defsym after other commands on the command line.
That made it impossible to override values set by -defsym from a linker script:
test.script:
foo = 0x22;
-defsym=foo=0x11 -script t.script
would always set foo to 0x11.
That is inconsistent with the common convention that later command line
options override earlier ones. It is also inconsistent with bfd's behavior and
breaks the assumption that -defsym is the same as a linker script assignment,
because -defsym was always handled out of command line order.
This patch fixes the handling order.
Differential revision: https://reviews.llvm.org/D42054
llvm-svn: 322625
Symbol had both Visibility and getVisibility() and they had different
meanings. That is just too easy to get wrong.
getVisibility() computes the visibility of a particular symbol
(foo in bar.o), while Visibility stores the computed value we will put
in the output.
There is only one case when we want what getVisibility() provides, so
inline it.
llvm-svn: 322590
We track both the combined visibility that will be used for the output
symbol and the original input visibility of the selected symbol.
Almost everything should use the computed visibility.
I will make the names less confusing in a follow-up patch.
llvm-svn: 322576
parseInt assumed that it could take a negative number literal (e.g.
"-123"). However, such a number is in reality already handled as a
unary operator '-' followed by a number literal, so the number
literal is always non-negative. Thus, this code is dead.
llvm-svn: 322453
When a section placement (AT) command references the section itself,
the physical address of the section in the ELF header was calculated
incorrectly due to alignment happening right after the location
pointer's value was captured.
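A minimal sketch of the ordering issue being described (values made up): if
the location pointer is captured before the section's alignment is applied,
the recorded physical address is stale.
```
#include <cstdint>

uint64_t alignTo(uint64_t V, uint64_t Align) {
  return (V + Align - 1) & ~(Align - 1);
}

void example() {
  uint64_t Dot = 0x1003;        // current location pointer
  uint64_t SectionAlign = 0x10;

  uint64_t BuggyLMA = Dot;      // captured before alignment -> 0x1003
  Dot = alignTo(Dot, SectionAlign);
  uint64_t FixedLMA = Dot;      // captured after alignment -> 0x1010
  (void)BuggyLMA;
  (void)FixedLMA;
}
```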
The problem was diagnosed and the first version of the patch written
by Erick Reyes.
llvm-svn: 322421
Previously we checked (HeaderSize == 0) to find out whether a
PltSection is an IPLT or a PLT. Some targets do not set
HeaderSize, though. For example, PPC64 has no lazy binding implemented
and does not set the PltHeaderSize constant.
Because of that, using both IPLT and PLT relocations worked
incorrectly there (a test case is provided).
This patch fixes the issue.
Differential revision: https://reviews.llvm.org/D41613
llvm-svn: 322362
The AT> lma_region expression allows specifying the memory region
used for a section's load address.
Should fix PR35684.
Differential revision: https://reviews.llvm.org/D41397
llvm-svn: 322359
Summary:
As reported in bug 35788, rL316280 reintroduces a race between two
members of SectionPiece, which share the same 64 bit memory location.
To fix the race, check the hash before checking the Live member, as
suggested by Rafael.
Reviewers: ruiu, rafael
Reviewed By: ruiu
Subscribers: smeenai, emaste, llvm-commits
Differential Revision: https://reviews.llvm.org/D41884
llvm-svn: 322264
When setting up the chain, we copy over the bucket's previous symbol
index, assuming that this index will be 0 (STN_UNDEF) for an unused
bucket (marking the end of the chain). When linking with --no-rosegment,
however, unused buckets will in fact contain the padding value, and so
the hash table will end up containing invalid chains. Zero out the hash
table section explicitly to avoid this, similar to what's already done
for GNU hash sections.
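A hedged sketch of the chain setup this refers to (illustrative, not LLD's
code): each new symbol's chain entry takes over the bucket's previous value,
so the buckets must start out as 0 (STN_UNDEF) for the chains to terminate
correctly.
```
#include <cstdint>
#include <vector>

// Illustrative SysV .hash construction: Buckets must be zero-initialized;
// if the section memory still contains padding bytes, the value copied into
// Chains below is garbage and the chain never terminates at STN_UNDEF.
void addSymbol(std::vector<uint32_t> &Buckets, std::vector<uint32_t> &Chains,
               uint32_t SymIndex, uint32_t Hash) {
  uint32_t &Bucket = Buckets[Hash % Buckets.size()];
  Chains[SymIndex] = Bucket; // previous head of the chain (or 0 = end)
  Bucket = SymIndex;         // new head of the chain
}
```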
Differential Revision: https://reviews.llvm.org/D41928
llvm-svn: 322259
When we have --icf=safe we should be able to define --icf=all as a
shorthand for --icf=safe --ignore-function-address-equality.
For now, --ignore-function-address-equality is used only to control
access to non-preemptible symbols in shared libraries.
llvm-svn: 322152
Summary:
All other templated methods have explicit instantiations but this one is
missing. Discovered while building with a clang with inliner
modifications.
Reviewers: espindola
Subscribers: emaste, llvm-commits, davidxl
Differential Revision: https://reviews.llvm.org/D41847
llvm-svn: 322057
This splits relocation processing into two steps.
First, analyze what needs to be done at the relocation spot. This can
be a constant (non-preemptible symbol, relative GOT reference, etc.) or
require a dynamic relocation. At this step we also consider creating
copy relocations.
Once that is done, we decide if we need a GOT or a PLT entry.
The code is simpler IMHO. For example:
- There is a single call to isPicRel since the logic is not split
among adjustExpr and the caller.
- R_MIPS_GOTREL is simple to handle now.
- The tracking of what is preemptible or not is much simpler now.
This also fixes a regression with symbols both being in a GOT and being
copy relocated. This had regressed in r268668 and r268149.
The other test changes are because of error messages changes or the
order of two relocations in the output.
llvm-svn: 322047
Currently LLVM's parallelForEach has a problem with reentrancy.
That caused https://bugs.llvm.org/show_bug.cgi?id=35788 (lld sometimes
hangs while linking Ruby 2.4) because maybeCompress calls writeTo, which
uses parallelForEach.
This patch avoids using parallelForEach to call maybeCompress,
to work around the issue.
llvm-svn: 322041
The body of the loop in scanRelocs is fairly big. This moves it to its own
function.
It is not a big readability win by itself, but should help further
refactoring.
llvm-svn: 322035
Previously, in r320472, I moved the calculation of section offsets and sizes
for compressed debug sections into maybeCompress, which happens before
assignAddresses, so that the compression had the required information. However,
I failed to take account of relocations that patch such sections. This had two
effects:
1. A race condition existed when a debug section referred to a different debug
section (see PR35788).
2. References to symbols in non-debug sections would be patched incorrectly.
This is because the addresses of such symbols are not calculated until after
assignAddresses (this was a partial regression caused by r320472, but they
could still have been broken before, in the event that a custom layout was used
in a linker script).
assignAddresses does not need to know about the output section size of
non-allocatable sections, because they do not affect the value of Dot. This
means that there is no longer a reason not to support custom layout of
compressed debug sections, as far as I'm aware. These two points allow for
delaying when maybeCompress can be called, removing the need for the loop I
previously added to calculate the section size, and therefore the race
condition. Furthermore, by delaying, we fix the issues of relocations getting
incorrect symbol values, because they have now all been finalized.
llvm-svn: 321986