This patch adds the safe stack instrumentation pass to LLVM, which separates
the program stack into a safe stack, which stores return addresses, register
spills, and local variables that are statically verified to be accessed
in a safe way, and an unsafe stack, which stores everything else. Such
separation makes it much harder for an attacker to corrupt objects on the
safe stack, including function pointers stored in spilled registers and
return addresses. You can find more information about the safe stack, as
well as other parts of our control-flow hijack protection technique, in our
OSDI paper on code-pointer integrity (http://dslab.epfl.ch/pubs/cpi.pdf)
and our project website (http://levee.epfl.ch).
The overhead of our implementation of the safe stack is very close to zero
(0.01% on the Phoronix benchmarks). This is lower than the overhead of
stack cookies, which are supported by LLVM and are commonly used today,
yet the security guarantees of the safe stack are strictly stronger than
those of stack cookies. In some cases, the safe stack improves performance due to
better cache locality.
Our current implementation of the safe stack is stable and robust; we
used it to recompile multiple projects on Linux including Chromium, and
we also recompiled the entire FreeBSD user-space system and more than 100
packages. We ran unit tests on the FreeBSD system and many of the packages
and observed no errors caused by the safe stack. The safe stack is also fully
binary compatible with non-instrumented code and can be applied to parts of
a program selectively.
This patch is our implementation of the safe stack on top of LLVM. The
patches make the following changes:
- Add the safestack function attribute, similar to the ssp, sspstrong and
sspreq attributes.
- Add the SafeStack instrumentation pass that applies the safe stack to all
functions that have the safestack attribute (see the sketch after this list).
This pass moves all unsafe local
variables to the unsafe stack with a separate stack pointer, whereas all
safe variables remain on the regular stack that is managed by LLVM as usual.
- Invoke the pass as the last stage before code generation (at the same time
the existing cookie-based stack protector pass is invoked).
- Add unit tests for the safe stack.
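For illustration, the pass only touches functions that carry the new attribute;
a minimal sketch of that check (illustrative helper name, not the actual pass
code):

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"

  static bool shouldInstrument(const llvm::Function &F) {
    // Only functions explicitly marked "safestack" in the IR, e.g.
    //   define void @f() safestack { ... }
    // are split across the safe/unsafe stacks; everything else is untouched.
    return F.hasFnAttribute(llvm::Attribute::SafeStack);
  }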
Original patch by Volodymyr Kuznetsov and others at the Dependable Systems
Lab at EPFL; updates and upstreaming by myself.
Differential Revision: http://reviews.llvm.org/D6094
llvm-svn: 239761
Switch to using BalancedDelimiterTracker to get better diagnostics for
unbalanced delimiters. This still does not handle any of the attributes; it
simply improves the parsing.
llvm-svn: 239758
LLVM does not and has not ever supported a soft-float ABI mode on
Sparc, so don't pretend that it does.
Also switch the default from "soft-float" -- which was actually
hard-float because soft-float is unimplemented -- to hard-float.
Differential Revision: http://reviews.llvm.org/D10457
llvm-svn: 239755
Summary:
This commit adds symbolize_vs_style=false to every instance of
ASAN_OPTIONS in the asan tests and sets
ASAN_OPTIONS=symbolize_vs_style=false in lit, for tests which don't set
it.
This way we don't need to make the tests be able to deal with both
symbolize styles.
This is the first patch in the series. I will eventually submit for the
other sanitizers too.
We need this change (or another way to deal with the different outputs) in
order to be able to default to symbolize_vs_style=true on some platforms.
As part of this change, I'm also adding "env " before any command line
that sets environment variables. That way the tests work with other host
shells, for example when the host is running Windows.
Reviewers: samsonov, kcc, rnk
Subscribers: tberghammer, llvm-commits
Differential Revision: http://reviews.llvm.org/D10294
llvm-svn: 239754
This commit connects the machine function analysis pass (which creates machine
functions) to the MIR parser, which will initialize the machine functions
with the state from the MIR file and reconstruct the machine IR.
This commit introduces a new interface called 'MachineFunctionInitializer',
which can be used to provide custom initialization for the machine functions.
This commit also introduces a new diagnostic class called
'DiagnosticInfoMIRParser' which is used for MIR parsing errors.
This commit modifies the default diagnostic handling in LLVMContext: now the
diagnostics are printed directly to llvm::errs() so that the MIR parsing
errors can be printed with colours.
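The initializer interface is intentionally small; conceptually it looks
roughly like this (a sketch, the exact declaration is in the patch):

  #include "llvm/CodeGen/MachineFunction.h"

  // Conceptual sketch of the new hook: an implementation such as the MIR
  // parser populates MF with the deserialized machine IR state and returns
  // true on error.
  class MachineFunctionInitializer {
  public:
    virtual ~MachineFunctionInitializer() = default;
    virtual bool initializeMachineFunction(llvm::MachineFunction &MF) = 0;
  };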
Reviewers: Justin Bogner
Differential Revision: http://reviews.llvm.org/D9928
llvm-svn: 239753
The problem affects lldb_private::Type instances that have encoding types (pointer/reference/const/volatile/restrict/typedef to a type with user ID 0x123). If they started out with m_flags.clang_type_resolve_state set to eResolveStateUnresolved (0), then a call to Type::ResolveClangType(eResolveStateForward) would complete the full type due to logic errors in the code.
We now only complete the type if clang_type_resolve_state is eResolveStateLayout or eResolveStateFull, and we correctly upgrade the type's current completion state to eResolveStateForward after we make a forward declaration to the pointer/reference/const/volatile/restrict/typedef type, instead of leaving it set to eResolveStateUnresolved.
llvm-svn: 239752
We are currently handling all combinations of SymbolBody types directly.
This patch flips `this` and `Other` when `Other->kind() < this->kind()`,
to reduce the number of combinations. No functionality change intended.
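The pattern, roughly (a hedged sketch; `compare` and the member names are
assumptions for illustration, not the exact lld classes):

  // Canonicalize so that the lower-kind symbol is always `this`, which halves
  // the number of (kind, kind) combinations that need explicit handling.
  struct SymbolBody {
    int Kind;
    int kind() const { return Kind; }
    int compare(SymbolBody *Other) {
      if (Other->kind() < kind())
        return -Other->compare(this); // delegate to the flipped pair, negate result
      // ... only combinations with kind() <= Other->kind() remain here ...
      return 0;
    }
  };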
llvm-svn: 239745
Summary:
Add TargetInstrInfo::AnalyzeBranchPredicate and implement it for x86. This
is NFC: no one uses AnalyzeBranchPredicate yet. A later change adding
support for page-fault based implicit null checks depends on this.
Reviewers: reames, ab, atrick
Reviewed By: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10200
llvm-svn: 239742
Summary:
Rename TargetInstrInfo::getLdStBaseRegImmOfs to
TargetInstrInfo::getMemOpBaseRegImmOfs and implement it for x86. The
implementation only handles a few easy cases now and will be made more
sophisticated in the future.
This is NFCI: the only user of `getLdStBaseRegImmOfs` (now
`getMemOpBaseRegImmOfs`) is `LoadClusterMotion`, and `LoadClusterMotion`
is disabled for x86.
Reviewers: reames, ab, MatzeB, atrick
Reviewed By: MatzeB, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10199
llvm-svn: 239741
Summary:
This instruction encodes a loading operation that may fault, and a label
to branch to if the load page-faults. The locations of potentially
faulting loads and their "handler" destinations are recorded in a
FaultMap section, meant to be consumed by LLVM's clients.
Nothing generates FAULTING_LOAD_OP instructions yet, but they will be
used in a future change.
The documentation (FaultMaps.rst) needs improvement, and I will update
this diff with an expanded version shortly.
Depends on D10196
Reviewers: rnk, reames, AndyAyers, ab, atrick, pgavlin
Reviewed By: atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10197
llvm-svn: 239740
This patch implements the ACLE special register intrinsics from section 10.1,
__arm_{w,r}sr{,p,64}.
It includes arm_acle.h definitions with builtins and codegen to support
these; the intrinsics are implemented by generating read/write_register calls,
which get appropriately lowered in the backend based on the register string
provided. SemaChecking is also implemented to reject invalid parameters.
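Usage looks like this (illustrative only; which register names are accepted
depends on the target, and the new Sema checks reject invalid ones):

  #include <arm_acle.h>
  #include <stdint.h>

  uint64_t read_tp(void) {
    // The special register is given as a string and lowered to a
    // read_register call; "tpidr_el0" is just an example register name.
    return __arm_rsr64("tpidr_el0");
  }

  void write_tp(uint64_t v) {
    __arm_wsr64("tpidr_el0", v); // lowered to a write_register call
  }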
Differential Revision: http://reviews.llvm.org/D9697
llvm-svn: 239737
LLVM targeting AArch64 doesn't correctly produce aligned accesses for unaligned
data at -O0/fast-isel (-mno-unaligned-access).
The root cause seems to be that fast-isel does not handle unaligned accesses
correctly when -mno-unaligned-access is in effect.
The patch just aborts fast-isel for loads and stores when -mno-unaligned-access is
present.
The regression test is updated to check this new test case (-mno-unaligned-access
together with fast-isel).
Differential Revision: http://reviews.llvm.org/D10360
llvm-svn: 239732
Summary:
This affects other tools, so the previous C++ API has been retained as a
deprecated function for the moment. Clang has been updated with a trivial
patch (not covered by the pre-commit review) to avoid breaking -Werror builds.
Other in-tree tools will be fixed with similar trivial patches.
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rengolin
Reviewed By: rengolin
Subscribers: llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10366
llvm-svn: 239721
This patch fixes a compilation time issue, when MachineSink faces PHIs
with a huge number of operands. This can happen, for example, in goto-table
based interpreters, where some basic blocks can have several of those PHIs,
each one with several hundred operands. MachineSink was spending a
significant time re-building and re-sorting the list of successors of
the current MachineBasicBlock. The computed and sorted successor list of the
current MachineBasicBlock is now cached.
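The caching pattern, roughly (a sketch under assumed names and types, not the
actual MachineSink change):

  #include <algorithm>
  #include <map>
  #include <vector>

  // Compute and sort the successor list once per block and reuse it for
  // every candidate instruction in that block.
  struct Block; // stand-in for MachineBasicBlock

  using SuccList = std::vector<const Block *>;
  static std::map<const Block *, SuccList> AllSuccsCache;

  static const SuccList &getSortedSuccessors(const Block *MBB,
                                             const SuccList &Succs) {
    auto It = AllSuccsCache.find(MBB);
    if (It != AllSuccsCache.end())
      return It->second;                     // reuse the cached, sorted list
    SuccList Sorted = Succs;
    std::sort(Sorted.begin(), Sorted.end()); // placeholder ordering criterion
    return AllSuccsCache[MBB] = Sorted;
  }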
llvm-svn: 239720
Add a method to query segments for a specified output section name.
Return an error if the section is assigned to an unknown segment.
Check during layout that sections are matched to segments correctly.
NOTE: no actual functionality for using custom segments is implemented.
Differential Revision: http://reviews.llvm.org/D10359
llvm-svn: 239719
Summary:
ValueTracking used to overwrite the analysis results computed from
assumes and dominating conditions. This patch fixes this issue.
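Conceptually (a hedged sketch with made-up names, not the actual ValueTracking
code): facts derived from assumes and dominating conditions must be merged into
the already-computed known bits, not assigned over them.

  #include <cstdint>

  struct Known { uint64_t Zero = 0, One = 0; }; // stand-in for known-bits state

  void applyOverwrite(Known &K, const Known &FromAssume) {
    K = FromAssume;            // buggy pattern: discards bits proven elsewhere
  }

  void applyMerge(Known &K, const Known &FromAssume) {
    K.Zero |= FromAssume.Zero; // fixed pattern: accumulate what each source proves
    K.One  |= FromAssume.One;
  }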
Test Plan: test/Analysis/ValueTracking/assume.ll
Reviewers: hfinkel, majnemer
Reviewed By: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10283
llvm-svn: 239718
Re-commit after adding "-aarch64-neon-syntax=generic" to fix the failure on OS X.
This patch was first committed in r239514, then reverted in r239544 because of a syntax-incompatibility failure on OS X.
llvm-svn: 239711
PE/COFF executables/DLLs usually contain data called base relocations.
Base relocations are a list of addresses that need to be fixed up by the
loader if load-time relocation is needed. Base relocations live in the
.reloc section.
We emit one base relocation entry for each IMAGE_REL_AMD64_ADDR64
relocation.
In order to save disk space, base relocations are grouped by page.
Each group is called a block. A block starts with a 32-bit page
address followed by 16-bit offsets into the page. That is a more
efficient representation than just an array of 32-bit addresses.
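For reference, one block can be modeled like this (a sketch of the layout as
defined by the PE/COFF specification, not code from this patch):

  #include <cstdint>
  #include <vector>

  // One .reloc block: all entries share the 4 KiB page identified by PageRVA,
  // so each fixup needs only a 16-bit entry instead of a full 32-bit address.
  struct BaseRelocBlock {
    uint32_t PageRVA;              // RVA of the 4 KiB page covered by this block
    uint32_t BlockSize;            // total block size in bytes, header included
    std::vector<uint16_t> Entries; // high 4 bits: type (e.g. IMAGE_REL_BASED_DIR64);
                                   // low 12 bits: offset within the page
  };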
llvm-svn: 239710