This is a child diff of D92261.
It extends TLS arg/ret to work with aggregate types.
For a function
t foo(t1 a1, t2 a2, ... tn an)
Its arguments' shadows are saved in the TLS args area as
a1_s, a2_s, ..., an_s
TLS ret simply holds r_s. By calculating the type size of each shadow
value, we can compute the offset of each.
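A minimal IR sketch of this layout (using current opaque-pointer syntax; the i16 shadow width and the byte offsets here are illustrative, not the exact ABI):
```
@__dfsan_arg_tls = external thread_local global [64 x i64]
@__dfsan_retval_tls = external thread_local global i64

define void @foo.dfsan(i32 %a1, i64 %a2) {
  ; a1_s at offset 0, a2_s at offset sizeof(a1_s)
  %a1_s = load i16, ptr @__dfsan_arg_tls
  %a2_p = getelementptr i8, ptr @__dfsan_arg_tls, i64 2
  %a2_s = load i16, ptr %a2_p
  ret void
}
```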
This is similar to what MSan does. See __msan_retval_tls and __msan_param_tls
from llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp.
Note that this change does not add test cases for overflowed TLS
arg/ret because this is hard to test without supporting aggregate shadow
types. We will add them once that is supported.
Reviewed-by: morehouse
Differential Revision: https://reviews.llvm.org/D92440
Before this change, DFSan always performed the propagation. Without
origin tracking, such flows are harder to understand. After
this change, the flag is off by default.
Reviewed-by: morehouse
Differential Revision: https://reviews.llvm.org/D91234
Similar to -fprofile-generate=, add -fmemory-profile= which takes a
directory path. This is passed down to LLVM via a new module flag
metadata. LLVM in turn provides this name to the runtime via the new
__memprof_profile_filename variable.
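A sketch of the plumbing; the module flag key shown is an assumption, only the mechanism (module flag lowered to a runtime-visible variable) is the point:
```
; the pass turns this into the @__memprof_profile_filename string
!llvm.module.flags = !{!0}
!0 = !{i32 1, !"MemProfProfileFilename", !"memprof.profraw"}
```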
Additionally, always pass a default filename (in $cwd if a directory
name is not specified via the = form of the option). This is also
consistent with the behavior of the PGO instrumentation. Since the
memory profiles will generally be fairly large, it doesn't make sense to
dump them to stderr. Also, importantly, the memory profiles will
eventually be dumped in a compact binary format, which is another reason
why it does not make sense to send these to stderr by default.
Change the existing memprof tests to specify log_path=stderr when that
was being relied on.
Depends on D89086.
Differential Revision: https://reviews.llvm.org/D89087
Add support for match-all tags and GOT-free runtime calls, which
are both required for the kernel to be able to support outlined
checks. This requires extending the access info to let the backend
know when to enable these features. To make the code easier to maintain,
introduce an enum with the bit field positions for the access info.
Allow outlined checks to be enabled with -mllvm
-hwasan-inline-all-checks=0. Kernels that contain runtime support for
outlined checks may pass this flag. Kernels lacking runtime support
will continue to link because they do not pass the flag. Old versions
of LLVM will ignore the flag and continue to use inline checks.
With a separate kernel patch [1] I measured the code size of defconfig
+ tag-based KASAN, as well as boot time (i.e. time to init launch)
on a DragonBoard 845c with an Android arm64 GKI kernel. The results
are below:
           code size   boot time
before     92824064    6.18s
after      38822400    6.65s
[1] https://linux-review.googlesource.com/id/I1a30036c70ab3c3ee78d75ed9b87ef7cdc3fdb76
Depends on D90425
Differential Revision: https://reviews.llvm.org/D90426
This is a workaround for poor heuristics in the backend where we can
end up materializing the constant multiple times. This is particularly
bad when using outlined checks because we materialize it for every call
(because the backend considers it trivial to materialize).
As a result, the field containing the shadow base value will always
be set, so simplify the code to take that into account.
Differential Revision: https://reviews.llvm.org/D90425
CallInst::updateProfWeight() creates branch_weights with i64 instead of i32.
To be more consistent everywhere and remove lots of casts from uint64_t
to uint32_t, use i64 for branch_weights.
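For illustration, a branch annotated with !prof metadata in the i64 form this change standardizes on (the weights are made up):
```
define void @f(i1 %c) {
  br i1 %c, label %hot, label %cold, !prof !0
hot:
  ret void
cold:
  ret void
}

!0 = !{!"branch_weights", i64 2000, i64 16}
```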
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D88609
D70365 allows us to make attributes default. This is a follow up to
actually make nosync, nofree and willreturn default. The approach we
chose, for now, is to opt-in to default attributes to avoid introducing
problems to target specific intrinsics. Intrinsics with default
attributes can be created using `DefaultAttrsIntrinsic` class.
Do not instrument user-defined ELF sections (whose names resemble valid
C identifiers). They may have special use semantics and modifying them
may break programs. This is e.g. the case with NetBSD __link_set API
that expects these sections to store consecutive array elements.
Differential Revision: https://reviews.llvm.org/D76665
When ASan and e.g. Dead Virtual Function Elimination are enabled, the
latter will rely on type metadata to determine if certain virtual calls can be
removed. However, ASan currently does not copy type metadata, which can cause
virtual function calls to be incorrectly removed.
Differential Revision: https://reviews.llvm.org/D88368
When address sanitizing a function, stack unpoisoning code is inserted before each ret instruction. However, if the ret instruction is preceded by a musttail call, such a transformation breaks the musttail call contract and generates invalid IR.
This patch fixes the issue by moving the insertion point prior to the musttail call if there is one.
Differential Revision: https://reviews.llvm.org/D87777
Under NPM, the TSan passes are split into a module and function pass. A
couple tests were testing for inserted module constructors, which is
only part of the module pass.
Call instructions with the musttail tag must be optimized as tail calls; anything else can lead to incorrect program behavior.
When TSan instruments a function, it breaks this contract by adding a call to the tsan exit function between the musttail call and the return instruction, and also by inserting exception handling code.
This happens through EscapeEnumerator, which adds exception handling code and returns ret instructions as the places to insert instrumentation calls.
This is especially problematic for coroutines, because coroutines rely on tail calls to do symmetric transfers properly.
To fix this, this patch moves the insertion point before the musttail call for ret instructions that follow musttail calls, and no longer inserts exception handling code for musttail calls.
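A minimal sketch of the required placement (function names illustrative; __tsan_func_exit stands in for the instrumented exit call):
```
declare i32 @callee(i32)
declare void @__tsan_func_exit()

define i32 @caller(i32 %x) sanitize_thread {
  ; instrumentation must be inserted here, before the musttail call;
  ; nothing may appear between the musttail call and the ret
  call void @__tsan_func_exit()
  %r = musttail call i32 @callee(i32 %x)
  ret i32 %r
}
```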
Differential Revision: https://reviews.llvm.org/D87620
This is consistent with the clang option added in
7ed8124d46, and the comments on the
runtime patch in D87120.
Differential Revision: https://reviews.llvm.org/D87622
See RFC for background:
http://lists.llvm.org/pipermail/llvm-dev/2020-June/142744.html
Note that the runtime changes will be sent separately (hopefully this
week, need to add some tests).
This patch includes the LLVM pass to instrument memory accesses with
either inline sequences to increment the access count in the shadow
location, or alternatively to call into the runtime. It also changes
calls to memset/memcpy/memmove to the equivalent runtime version.
The pass is modeled on the address sanitizer pass.
The clang changes add the driver option to invoke the new pass, and to
link with the upcoming heap profiling runtime libraries.
Currently there is no attempt to optimize the instrumentation, e.g. to
aggregate updates to the same memory allocation. That will be
implemented as follow-on work.
Differential Revision: https://reviews.llvm.org/D85948
This would be a problem if the entire instrumented function was a call
to, e.g., memcpy.
Use FnPrologueEnd Instruction* instead of ActualFnStart BB*
Differential Revision: https://reviews.llvm.org/D86001
This allows us to add additional instrumentation before the function start,
without splitting the first BB.
Differential Revision: https://reviews.llvm.org/D85985
Have the front-end use the `nounwind` attribute on atomic libcalls.
This prevents us from seeing `invoke __atomic_load` in MSAN, which
is problematic as it has no successor for instrumentation to be added.
This lets us support the scenario where a binary is linked from a mix
of object files with both instrumented and non-instrumented globals.
This is likely to occur on Android where the decision of whether to use
instrumented globals is based on the API level, which is user-facing.
Previously, in this scenario, it was possible for the comdat from
one of the object files with non-instrumented globals to be selected,
and since this comdat did not contain the note it would mean that the
note would be missing in the linked binary and the globals' shadow
memory would be left uninitialized, leading to a tag mismatch failure
at runtime when accessing one of the instrumented globals.
It is harmless to include the note when targeting a runtime that does
not support instrumenting globals because it will just be ignored.
Differential Revision: https://reviews.llvm.org/D85871
Commit 9385aaa848 ("[sancov] Fix PR33732") added zeroext to
__sanitizer_cov_trace(_const)?_cmp[1248] parameters for x86_64 only,
however, it is useful on other targets, in particular, on SystemZ: it
fixes swap-cmp.test.
Therefore, use it on all targets. This is safe: if target ABI does not
require zero extension for a particular parameter, zeroext is simply
ignored. A similar change has been implemented as part of commit
3bc439bdff ("[MSan] Add instrumentation for SystemZ"), and there were
no problems with it.
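The resulting declarations then look roughly like this (one size shown; the other widths follow the same pattern):
```
declare void @__sanitizer_cov_trace_cmp4(i32 zeroext, i32 zeroext)
declare void @__sanitizer_cov_trace_const_cmp4(i32 zeroext, i32 zeroext)
```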
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D85689
MSan removes readnone/readonly and similar attributes from callees,
because after MSan instrumentation those attributes no longer apply.
This change removes the attributes from call sites, as well.
Failing to do this may cause DSE of paramTLS stores before calls to
readonly/readnone functions.
Differential Revision: https://reviews.llvm.org/D85259
If a section is supposed to hold elements of type T, then the
corresponding CreateSecStartEnd()'s Ty parameter represents T*.
Forwarding it to GlobalVariable constructor causes the resulting
GlobalVariable's type to be T*, and its SSA value type to be T**, which
is one indirection too many. This issue is mostly masked by pointer
casts, however, the global variable still gets an incorrect alignment,
which causes SystemZ to choose wrong instructions to access the
section.
D68041 placed `__profc_`, `__profd_` and (if present) `__profvp_` in different comdat groups.
There are some issues:
* Cost: one or two additional section headers (`.group` section(s)): 64 or 128 bytes on ELF64.
* `__profc_`, `__profd_` and (if present) `__profvp_` should be retained or
discarded as a unit. Placing them into separate comdat groups is conceptually inferior.
* If the prevailing group does not include `__profvp_` (value profiling not
used) but a non-prevailing group from another translation unit has `__profvp_`
(the function is inlined into another and triggers value profiling), there
will be a stray `__profvp_` if --gc-sections is not enabled.
This has been fixed by 3d6f53018f.
Actually, we can reuse an existing symbol (we choose `__profd_`) as the group
signature to avoid a string in the string table (the sole reason that D68041
could improve code size is that `__profv_` was an otherwise unused symbol which
wasted string table space). This saves one or two section headers.
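A sketch of the resulting grouping, with `__profd_` reused as the group signature (linkage and types illustrative):
```
$__profd_foo = comdat any
@__profc_foo = private global [1 x i64] zeroinitializer, comdat($__profd_foo)
@__profd_foo = private global [4 x i64] zeroinitializer, comdat
```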
For a -DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_INSTRUMENTED=IR build, `ninja
clang lld`, the patch has saved 10.5MiB (2.2%) for the total .o size.
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D84723
Freeze always returns a defined value. This also prevents msan from
checking the input shadow, which happened because freeze wasn't
explicitly visited.
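For instance, a minimal sketch:
```
define i32 @f(i32 %x) sanitize_memory {
  ; %y is a fully defined value even if %x is undef or poison, so there
  ; is no input shadow to check here
  %y = freeze i32 %x
  ret i32 %y
}
```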
Differential Revision: https://reviews.llvm.org/D85040
Adds the -fast-16-labels flag, which enables efficient instrumentation
for DFSan when the user needs <=16 labels. The instrumentation
eliminates most branches and most calls to __dfsan_union or
__dfsan_union_load.
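With labels encoded as a 16-bit vector of bits, a union becomes a single OR; a minimal sketch:
```
define i16 @union_labels(i16 %l1, i16 %l2) {
  ; with bit-vector labels, union is a plain OR instead of a call to
  ; __dfsan_union
  %u = or i16 %l1, %l2
  ret i16 %u
}
```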
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D84371
Since the NPM pass is named sancov-module, not sancov.
This makes all tests under Instrumentation/SanitizerCoverage pass when
-enable-new-pm is on by default.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D84687
These calls are not intercepted by compiler-rt, nor is libatomic.a
naturally instrumented.
This patch uses the existing libcall mechanism to detect a call
to atomic_load or atomic_store, and instruments them much like
the preexisting instrumentation for atomics.
Calls to _load are modified to have at least Acquire ordering, and
calls to _store at least Release ordering. Because this needs to be
converted at runtime, msan injects a LUT (implemented as a vector
with extractelement).
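A sketch of such a LUT; the concrete index-to-ordering mapping below is made up for illustration:
```
define i32 @at_least_acquire(i32 %order) {
  ; map a dynamic memory-order value to one that is at least Acquire
  %ord = extractelement <6 x i32> <i32 2, i32 2, i32 2, i32 3, i32 4, i32 5>, i32 %order
  ret i32 %ord
}
```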
Differential Revision: https://reviews.llvm.org/D83337
This allows tracking the in-memory type of a pointer argument to a
function for ABI purposes. This is essentially a stripped down version
of byval to remove some of the stack-copy implications in its
definition.
This includes the base IR changes, and some tests for places where it
should be treated similarly to byval. Codegen support will be in a
future patch.
My original attempt at solving some of these problems was to repurpose
byval with a different address space from the stack. However, it is
technically permitted for the callee to introduce a write to the
argument, although nothing does this in reality. There is also talk of
removing and replacing the byval attribute, so a new attribute would
need to take its place anyway.
This is intended to avoid some optimization issues with the current
handling of aggregate arguments, as well as to fix inflexibility in how
frontends can specify the kernel ABI. The most honest representation
of the amdgpu_kernel convention is to expose all kernel arguments as
loads from constant memory. Today, these are raw, SSA Argument values
and codegen is responsible for turning these into loads.
Background:
There currently isn't a satisfactory way to represent how arguments
for the amdgpu_kernel calling convention are passed. In reality,
arguments are passed in a single, flat, constant memory buffer
implicitly passed to the function. It is also illegal to call this
function in the IR, and this is only ever invoked by a driver of some
kind.
It does not make sense to have a stack passed parameter in this
context as is implied by byval. It is never valid to write to the
kernel arguments, as this would corrupt the inputs seen by other
dispatches of the kernel. These arguments are also not in the same
address space as the stack, so a copy is needed to an alloca. From a
source C-like language, the kernel parameters are invisible.
Semantically, a copy is always required from the constant argument
memory to a mutable variable.
The current clang calling convention lowering emits raw values,
including aggregates into the function argument list, since using
byval would not make sense. This has some unfortunate consequences for
the optimizer. In the aggregate case, we end up with an aggregate
store to alloca, which both SROA and instcombine turn into a store of
each aggregate field. The optimizer never pieces this back together to
see that this is really just a copy from constant memory, so we end up
stuck with expensive stack usage.
This also means the backend dictates the alignment of arguments, and
arbitrarily picks the LLVM IR ABI type alignment. By allowing an
explicit alignment, frontends can make better decisions. For example,
there's no real advantage to an alignment higher than 4, so a frontend
could choose to compact the argument layout. Similarly, there is a
high penalty to using an alignment lower than 4, so a frontend could
opt into more padding for small arguments.
Another design consideration is when it is appropriate to expose the
fact that these arguments are all really passed in adjacent
memory. Currently we have a late IR optimization pass in codegen to
rewrite the kernel argument values into explicit loads to enable
vectorization. In most programs, unrelated argument loads can be
merged together. However, exposing this property directly from the
frontend has some disadvantages. We still need a way to track the
original argument sizes and alignments to report to the driver. I find
using some side-channel, metadata mechanism to track this
unappealing. If the kernel arguments were exposed as a single buffer
to begin with, alias analysis would be unaware that the padding bits
between arguments are meaningless. Another family of problems is that there
are still some gaps in replacing all of the available parameter
attributes with metadata equivalents once lowered to loads.
The immediate plan is to start using this new attribute to handle all
aggregate arguments for kernels. Long term, it makes sense to migrate
all kernel arguments, including scalars, to be passed indirectly in
the same manner.
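A sketch of what a kernel signature could look like with the byref attribute this introduces (assuming that name; the struct type and alignment are illustrative):
```
define amdgpu_kernel void @kern(ptr addrspace(4) byref({ i32, float }) align 4 %arg) {
  ; the argument is a pointer into the implicit constant kernarg buffer;
  ; its in-memory type and alignment are now explicit in the signature
  %v = load { i32, float }, ptr addrspace(4) %arg, align 4
  ret void
}
```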
Additional context is in D79744.
Under NPM, the asan-globals-md analysis is required but cannot be run
within the asan function pass due to module analyses not being able to
run from a function pass. So this pins all tests using "-asan" to the
legacy PM and adds a corresponding RUN line with
-passes='require<asan-globals-md>,function(asan)'.
Now all tests in Instrumentation/AddressSanitizer pass when
-enable-new-pm is on by default.
Tests were automatically converted using the following python script and
failures were manually fixed up.
```
import sys

# argv[0] is the script itself, so skip it
for i in sys.argv[1:]:
    with open(i, 'r') as f:
        s = f.read()
    with open(i, 'w') as f:
        for l in s.splitlines():
            if "RUN:" in l and ' -asan -asan-module ' in l and '\\' not in l:
                f.write(l.replace(' -asan -asan-module ', ' -asan -asan-module -enable-new-pm=0 '))
                f.write('\n')
                f.write(l.replace(' -asan -asan-module ', " -passes='require<asan-globals-md>,function(asan),module(asan-module)' "))
                f.write('\n')
            elif "RUN:" in l and ' -asan ' in l and '\\' not in l:
                f.write(l.replace(' -asan ', ' -asan -enable-new-pm=0 '))
                f.write('\n')
                f.write(l.replace(' -asan ', " -passes='require<asan-globals-md>,function(asan)' "))
                f.write('\n')
            else:
                f.write(l)
                f.write('\n')
```
See https://bugs.llvm.org/show_bug.cgi?id=46611.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D83921
This is needed because macOS on Apple Silicon has some reserved pages inside the "regular" shadow memory location, and mapping over that location fails.
Differential Revision: https://reviews.llvm.org/D82912
This reverts commit d76e62fdb7.
Reverting since this can lead to linker errors:
```
ld.lld: error: undefined hidden symbol: __start_asan_globals
```
when using --gc-sections. The linker can discard __start_asan_globals
once there are no more `asan_globals` sections left, which can lead to
this error if we have external linkages to them.
This adds option -tsan-compound-read-before-write to emit different
instrumentation for the write if the read before that write is omitted
from instrumentation. The default TSan runtime currently does not
support the different instrumentation, and the option is disabled by
default.
Alternative runtimes, such as the Kernel Concurrency Sanitizer (KCSAN)
can make use of the feature. Indeed, the initial motivation is for use
in KCSAN as it was determined that due to the Linux kernel having a
large number of unaddressed data races, it makes sense to improve
performance and reporting by distinguishing compounded operations. E.g.
the compounded instrumentation is typically emitted for compound
operations such as ++, +=, |=, etc. By emitting different reports, such
data races can easily be noticed, and also automatically bucketed
differently by CI systems.
Reviewed By: dvyukov, glider
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D83867
Implement llvm.experimental.vector.{add,mul,or,and,...}.
An IR test is included, but no C test, for lack of a good way to
get the compiler to emit these.
Differential Revision: https://reviews.llvm.org/D82920
Adds LLVM option to control eager checking under -msan-eager-checks.
This change depends on the noundef keyword to determine cases where it
is sound to check these shadows, and falls back to passing shadow
values by TLS.
Checking at call boundaries enforces the undefined behavior rules around
passing uninitialized arguments by value.
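A minimal sketch of the idea:
```
declare void @callee(i32 noundef)

define void @caller(i32 %v) sanitize_memory {
  ; passing an uninitialized %v here is immediate UB, so the shadow of %v
  ; can be checked eagerly at this call site instead of being sent via TLS
  call void @callee(i32 noundef %v)
  ret void
}
```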
Differential Revision: https://reviews.llvm.org/D81699
Summary: The `nomerge` attribute was added in D78659, so we can remove the EmptyAsm workaround in ASan and MSan and use this attribute.
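A minimal sketch of the effect (report callee and context are illustrative):
```
declare void @__asan_report_load4(i64)

define void @f(i64 %a, i1 %c) sanitize_address {
  br i1 %c, label %l1, label %l2
l1:
  ; nomerge keeps the two report calls distinct, so the backend does not
  ; tail-merge them and each report stays attributable to its access site
  call void @__asan_report_load4(i64 %a) #0
  unreachable
l2:
  call void @__asan_report_load4(i64 %a) #0
  unreachable
}

attributes #0 = { nomerge }
```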
Reviewers: vitalybuka
Reviewed By: vitalybuka
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82322
Summary:
Normally, the Origin is passed over TLS, which seems like it introduces unnecessary overhead. It's in the (extremely) cold path though, so the only overhead is in code size.
But with eager-checks, calls to __msan_warning functions are extremely common, so this becomes a useful optimization.
This can save ~5% code size.
Reviewers: eugenis, vitalybuka
Reviewed By: eugenis, vitalybuka
Subscribers: hiraditya, #sanitizers, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D81700
The !associated metadata may be attached to a global object declaration
with a single argument that references another global object. This
metadata prevents discarding of the global object in linker GC unless
the referenced object is also discarded.
Furthermore, when a function symbol is discarded by the linker, setting
up !associated metadata allows the linker to discard counters, data and
values associated with that function symbol. This is not possible today
because there's no metadata to guide the linker. This approach is also used
by other instrumentations like sanitizers.
Note that !associated metadata is only supported by ELF, it does not have
any effect on non-ELF targets.
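A minimal sketch (names illustrative):
```
@counter = global i64 0, section "cnts", !associated !0

define void @foo() {
  ret void
}

; linker GC may now discard @counter exactly when @foo is discarded
!0 = !{ptr @foo}
```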
Differential Revision: https://reviews.llvm.org/D76802
Summary:
This matches ELF.
This makes the number of ASan failures under the new pass manager on
Windows go from 18 to 1.
Under the old pass manager, the ASan module pass was one of the very
last things run, so these globals didn't get removed due to GlobalOpt.
But with the NPM, the ASan module pass that adds these globals is run
much earlier in the pipeline, and GlobalOpt ends up removing them.
Reviewers: vitalybuka, hans
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D81175
It should not be necessary to use weak linkage for these. Doing so
implies interposability, and thus PIC generates indirections and
dynamic relocations, which are unnecessary and suboptimal. Aside
from this, ASan instrumentation never introduces GOT indirection
relocations where there were none before--only new absolute relocs
in RELRO sections for metadata, which are less problematic for
special linkage situations that take pains to avoid GOT generation.
Patch By: mcgrathr
Differential Revision: https://reviews.llvm.org/D80605
Follow the model used on Linux, where the clang driver passes the
linker a -u switch to force the profile runtime to be linked in,
rather than having every TU emit a dead function with a reference.
Differential Revision: https://reviews.llvm.org/D79835
Follow the model used on Linux, where the clang driver passes the
linker a -u switch to force the profile runtime to be linked in,
rather than having every TU emit a dead function with a reference.
Patch By: mcgrathr
Differential Revision: https://reviews.llvm.org/D79835
This is D77454, except for stores. All the infrastructure work was done
for loads, so the remaining changes necessary are relatively small.
Differential Revision: https://reviews.llvm.org/D79968
Add -tsan-instrument-read-before-write, which allows instrumenting the
reads of reads-before-writes.
This is required for KCSAN [1], where under certain configurations plain
writes behave differently (e.g. aligned writes up to word size may be
treated as atomic). In order to avoid missing potential data races due
to plain RMW operations ("x++" etc.), we will require instrumenting
reads of reads-before-writes.
[1] https://github.com/google/ktsan/wiki/KCSAN
Author: melver (Marco Elver)
Reviewed-in: https://reviews.llvm.org/D79983
I couldn't make arc land the changes properly; for some reason they all got
squashed. Reverting them now to land cleanly.
Summary: This reverts commit cfb5f89b62.
Reviewers: kcc, thejh
Subscribers:
Summary:
A following commit will split the loop over ToInstrument into two.
To avoid having to duplicate the condition for suppressing instrumentation
sites based on ClDebug{Min,Max}, refactor it out into a new function.
While we're at it, we can also avoid the indirection through
NumInstrumented for setting FunctionModified.
This is patch 1/4 of a patch series:
https://reviews.llvm.org/D77616 [PATCH 1/4] [AddressSanitizer] Refactor ClDebug{Min,Max} handling
https://reviews.llvm.org/D77617 [PATCH 2/4] [AddressSanitizer] Split out memory intrinsic handling
https://reviews.llvm.org/D77618 [PATCH 3/4] [AddressSanitizer] Refactor: Permit >1 interesting operands per instruction
https://reviews.llvm.org/D77619 [PATCH 4/4] [AddressSanitizer] Instrument byval call arguments
Reviewers: kcc, glider
Reviewed By: glider
Subscribers: jfb, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77616
Add support to optionally emit different instrumentation for accesses to
volatile variables. While the default TSAN runtime likely will never
require this feature, other runtimes for different environments that
have subtly different memory models or assumptions may require
distinguishing volatiles.
One such environment are OS kernels, where volatile is still used in
various places for various reasons, and often declare volatile to be
"safe enough" even in multi-threaded contexts. One such example is the
Linux kernel, which implements various synchronization primitives using
volatile (READ_ONCE(), WRITE_ONCE()). Here the Kernel Concurrency
Sanitizer (KCSAN) [1], is a runtime that uses TSAN instrumentation but
otherwise implements a very different approach to race detection from
TSAN.
While in the Linux kernel it is generally discouraged to use volatiles
explicitly, the topic will likely come up again, and we will eventually
need to distinguish volatile accesses [2]. The other use-case is
ignoring data races on specially marked variables in the kernel, for
example bit-flags (here we may hide 'volatile' behind a different name
such as 'no_data_race').
[1] https://github.com/google/ktsan/wiki/KCSAN
[2] https://lkml.kernel.org/r/CANpmjNOfXNE-Zh3MNP=-gmnhvKbsfUfTtWkyg_=VqTxS4nnptQ@mail.gmail.com
Author: melver (Marco Elver)
Reviewed-in: https://reviews.llvm.org/D78554
Summary:
Following up on the comments on D77638.
Not undoing rGd6525eff5ebfa0ef1d6cd75cb9b40b1881e7a707 here at the moment, since I don't know how to test mac builds. Please let me know if I should include that here too.
Reviewers: vitalybuka
Reviewed By: vitalybuka
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77889
This is equivalent in terms of LLVM IR semantics, but we want to
transition away from using MaybeAlign to represent the alignment of
these instructions.
Differential Revision: https://reviews.llvm.org/D77984
Summary:
This patch establishes memory layout and adds instrumentation. It does
not add runtime support and does not enable MSan, which will be done
separately.
Memory layout is based on PPC64, with the exception that XorMask
is not used - low and high memory addresses are chosen in a way that
applying AndMask to low and high memory produces non-overlapping
results.
VarArgHelper is based on AMD64. It might be tempting to share some
code between the two implementations, but we need to keep in mind that
all the ABI similarities are coincidental, and therefore any such
sharing might backfire.
copyRegSaveArea() indiscriminately copies the entire register save area
shadow; however, fragments thereof not filled by the corresponding
visitCallSite() invocation contain irrelevant data. Whether or not this
can lead to practical problems is unclear, hence a simple TODO comment.
Note that the behavior of the related copyOverflowArea() is correct: it
copies only the vararg-related fragment of the overflow area shadow.
VarArgHelper test is based on the AArch64 one.
s390x ABI requires that arguments are zero-extended to 64 bits. This is
particularly important for __msan_maybe_warning_*() and
__msan_maybe_store_origin_*() shadow and origin arguments, since
non-zeroed upper parts thereof confuse these functions. Therefore, add the ZExt
attribute to the corresponding parameters.
Add ZExt attribute checks to msan-basic.ll. Since with
-msan-instrumentation-with-call-threshold=0 instrumentation looks quite
different, introduce the new CHECK-CALLS check prefix.
Reviewers: eugenis, vitalybuka, uweigand, jonpa
Reviewed By: eugenis
Subscribers: kristof.beyls, hiraditya, danielkiss, llvm-commits, stefansf, Andreas-Krebbel
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76624
Summary:
New SanitizerCoverage feature `inline-bool-flag`, which inserts an
atomic store of `1` to a boolean flag (an 8-bit integer in
practice) on every instrumented edge.
Implementation-wise it's very similar to the `inline-8bit-counters`
feature, so much of the wiring and the tests follow the same pattern.
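A sketch of the per-edge instrumentation (global name and section are illustrative):
```
@flag = private global i8 0, section "__sancov_bools"

define void @edge() {
  store atomic i8 1, ptr @flag monotonic, align 1
  ret void
}
```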
Reviewers: kcc, vitalybuka
Reviewed By: vitalybuka
Subscribers: llvm-commits, hiraditya, jfb, cfe-commits, #sanitizers
Tags: #clang, #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D77244
Summary:
In some cases, ASan may insert instrumentation before function arguments
have been stored into their allocas. This causes two issues:
1) The argument value must be spilled until it can be stored into the
reserved alloca, wasting a stack slot.
2) Until the store occurs in a later basic block, the debug location
will point to the wrong frame offset, and backtraces will show an
uninitialized value.
The proposed solution is to move instructions which initialize allocas
for arguments up into the entry block, before the position where ASan
starts inserting its instrumentation.
For the motivating test case, before the patch we see:
```
| 0033: movq %rdi, 0x68(%rbx) | | DW_TAG_formal_parameter |
| ... | | DW_AT_name ("a") |
| 00d1: movq 0x68(%rbx), %rsi | | DW_AT_location (RBX+0x90) |
| 00d5: movq %rsi, 0x90(%rbx) | | ^ not correct ... |
```
and after the patch we see:
```
| 002f: movq %rdi, 0x70(%rbx) | | DW_TAG_formal_parameter |
| | | DW_AT_name ("a") |
| | | DW_AT_location (RBX+0x70) |
```
rdar://61122691
Reviewers: aprantl, eugenis
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77182
Try again with an up-to-date version of D69471 (99317124 was a stale
revision).
---
Revise the coverage mapping format to reduce binary size by:
1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.
This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).
Rationale for changes to the format:
- With the current format, most coverage function records are discarded.
E.g., more than 97% of the records in llc are *duplicate* placeholders
for functions visible-but-not-used in TUs. Placeholders *are* used to
show under-covered functions, but duplicate placeholders waste space.
- We reached general consensus about giving (1) a try at the 2017 code
coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
duplicates is simpler than alternatives like teaching build systems
about a coverage-aware database/module/etc on the side.
- Revising the format is expensive due to the backwards compatibility
requirement, so we might as well compress filenames while we're at it.
This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).
See CoverageMappingFormat.rst for the details on what exactly has
changed.
Fixes PR34533 [2], hopefully.
[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533
Differential Revision: https://reviews.llvm.org/D69471
Some IRBuilder methods that were originally defined on
IRBuilderBase do not respect custom IRBuilder inserters/folders,
because those were not accessible prior to D73835. Fix this by
making use of existing (and now accessible) IRBuilder methods,
which will handle inserters/folders correctly.
There are some changes in OpenMP and Instrumentation tests, where
bitcasts now get constant folded. I've also highlighted one
InstCombine test which now finishes in two rather than three
iterations, thanks to new instructions being inserted into the
worklist.
Differential Revision: https://reviews.llvm.org/D74787
replaceDbgDeclare is used to update the descriptions of stack variables
when they are moved (e.g. by ASan or SafeStack). A side effect of
replaceDbgDeclare is that it moves dbg.declares around in the
instruction stream (typically by hoisting them into the entry block).
This behavior was introduced in llvm/r227544 to fix an assertion failure
(llvm.org/PR22386), but no longer appears to be necessary.
Hoisting a dbg.declare generally does not create problems. Usually,
dbg.declare either describes an argument or an alloca in the entry
block, and backends have special handling to emit locations for these.
In optimized builds, LowerDbgDeclare places dbg.values in the right
spots regardless of where the dbg.declare is. And no one uses
replaceDbgDeclare to handle things like VLAs.
However, there doesn't seem to be a positive case for moving
dbg.declares around anymore, and this reordering can get in the way of
understanding other bugs. I propose getting rid of it.
Testing: stage2 RelWithDebInfo sanitized build, check-llvm
rdar://59397340
Differential Revision: https://reviews.llvm.org/D74517
Various parts of the LLVM code generator assume that the address
argument of a dbg.declare is not a `ptrtoint`-of-alloca. ASan breaks
this assumption, and this results in local variables sometimes being
unavailable at -O0.
GlobalISel, SelectionDAG, and FastISel all do not appear to expect
dbg.declares to have a `ptrtoint` as an operand. This means that they do
not place entry block allocas in the usual side table reserved for local
variables available in the whole function scope. This isn't always a
problem, as LLVM can try to lower the dbg.declare to a DBG_VALUE, but
those DBG_VALUEs can get dropped for all the usual reasons DBG_VALUEs
get dropped. In the ObjC test case I'm looking at, the cause happens to
be that `replaceDbgDeclare` has hoisted dbg.declares into the entry
block, causing LiveDebugValues to "kill" the DBG_VALUEs because the
lexical dominance check fails.
To address this, I propose:
1) Have ASan (always) pass an alloca to dbg.declares (this patch). This
is a narrow bugfix for -O0 debugging.
2) Make replaceDbgDeclare not move dbg.declares around. This should be a
generic improvement for optimized debug info, as it would prevent the
lexical dominance check in LiveDebugValues from killing as many
variables.
This means reverting llvm/r227544, which fixed an assertion failure
(llvm.org/PR22386) but no longer seems to be necessary. I was able to
complete a stage2 build with the revert in place.
rdar://54688991
Differential Revision: https://reviews.llvm.org/D74369
We have to avoid using a GOT relocation to access the bias variable,
setting the hidden visibility achieves that.
Differential Revision: https://reviews.llvm.org/D73529
Summary:
These instructions ignore parts of the input vectors which makes the
default MSan handling too strict and causes false positive reports.
Reviewers: vitalybuka, RKSimon, thakis
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73374
This is an alternative to the continuous mode that was implemented in
D68351. This mode relies on padding and the ability to mmap a file over
the existing mapping which is generally only available on POSIX systems
and isn't suitable for other platforms.
This change instead introduces the ability to relocate counters at
runtime using a level of indirection. On every counter access, we add a
bias to the counter address. This bias is stored in a symbol that's
provided by the profile runtime and is initially set to zero, meaning no
relocation. The runtime can mmap the profile into memory at an arbitrary
location, and set the bias to the offset between the original and the new
counter location, at which point every subsequent counter access will be
to the new location, which allows updating the profile directly, akin to
continuous mode.
The advantage of this implementation is that it doesn't require any special
OS support. The disadvantage is the extra overhead due to additional
instructions required for each counter access (overhead both in terms of
binary size and performance) plus duplication of counters (i.e. one copy
in the binary itself and another copy that's mmapped).
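A sketch of a biased counter update, assuming the runtime-provided symbol is named __llvm_profile_counter_bias (counter name illustrative):
```
@__llvm_profile_counter_bias = external global i64
@__profc_foo = private global [1 x i64] zeroinitializer

define void @foo() {
  ; counter address = original address + runtime-provided bias
  %bias = load i64, ptr @__llvm_profile_counter_bias
  %base = ptrtoint ptr @__profc_foo to i64
  %addr = add i64 %base, %bias
  %p = inttoptr i64 %addr to ptr
  %cnt = load i64, ptr %p
  %inc = add i64 %cnt, 1
  store i64 %inc, ptr %p
  ret void
}
```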
Differential Revision: https://reviews.llvm.org/D69740
As of D70146 lld GCs comdats as a group and no longer considers notes in
comdats to be GC roots, so we need to move the note to a comdat with a GC root
section (.init_array) in order to prevent lld from discarding the note.
Differential Revision: https://reviews.llvm.org/D72936
Summary:
Support alloca-referencing dbg.value in hwasan instrumentation.
Update AsmPrinter to emit DW_AT_LLVM_tag_offset when location is in
loclist format.
Reviewers: pcc
Subscribers: srhines, aprantl, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70753
Revise the coverage mapping format to reduce binary size by:
1. Naming function records and marking them `linkonce_odr`, and
2. Compressing filenames.
This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB)
and speeds up end-to-end single-threaded report generation by 10%. For
reference the compressed name data in llc is 81MB (__llvm_prf_names).
Rationale for changes to the format:
- With the current format, most coverage function records are discarded.
E.g., more than 97% of the records in llc are *duplicate* placeholders
for functions visible-but-not-used in TUs. Placeholders *are* used to
show under-covered functions, but duplicate placeholders waste space.
- We reached general consensus about giving (1) a try at the 2017 code
coverage BoF [1]. The thinking was that using `linkonce_odr` to merge
duplicates is simpler than alternatives like teaching build systems
about a coverage-aware database/module/etc on the side.
- Revising the format is expensive due to the backwards compatibility
requirement, so we might as well compress filenames while we're at it.
This shrinks the encoded filenames in llc by 86% (12MB -> 1.6MB).
See CoverageMappingFormat.rst for the details on what exactly has
changed.
Fixes PR34533 [2], hopefully.
[1] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118428.html
[2] https://bugs.llvm.org/show_bug.cgi?id=34533
Differential Revision: https://reviews.llvm.org/D69471
Summary:
This fixes https://llvm.org/PR26673
"Wrong debugging information with -fsanitize=address"
where asan instrumentation causes the prologue end to be computed
incorrectly: findPrologueEndLoc, looks for the first instruction
with a debug location to determine the prologue end. Since the asan
instrumentation instructions had debug locations, that prologue end was
at some instruction, where the stack frame is still being set up.
There seems to be no good reason for extra debug locations for the
asan instrumentations that set up the frame; they don't have a natural
source location. In the debugger they are simply located at the start
of the function.
For certain other instrumentations like -fsanitize-coverage=trace-pc-guard
the same problem persists - that might be more work to fix, since it
looks like they rely on locations of the tracee functions.
This partly reverts aaf4bb2394
"[asan] Set debug location in ASan function prologue"
whose motivation was to give debug location info to the coverage callback.
Its test only ensures that the call to @__sanitizer_cov_trace_pc_guard is
given the correct source location; as the debug location is still set in
ModuleSanitizerCoverage::InjectCoverageAtBlock, the test does not break.
So -fsanitize-coverage is hopefully unaffected - I don't think it should
rely on the debug locations of asan-generated allocas.
Related revision: 3c6c14d14b
"ASAN: Provide reliable debug info for local variables at -O0."
Below is how the X86 assembly version of the added test case changes.
We get rid of some .loc lines and put prologue_end where the user code starts.
```diff
--- 2.master.s 2019-12-02 12:32:38.982959053 +0100
+++ 2.patch.s 2019-12-02 12:32:41.106246674 +0100
@@ -45,8 +45,6 @@
.cfi_offset %rbx, -24
xorl %eax, %eax
movl %eax, %ecx
- .Ltmp2:
- .loc 1 3 0 prologue_end # 2.c:3:0
cmpl $0, __asan_option_detect_stack_use_after_return
movl %edi, 92(%rbx) # 4-byte Spill
movq %rsi, 80(%rbx) # 8-byte Spill
@@ -57,9 +55,7 @@
callq __asan_stack_malloc_0
movq %rax, 72(%rbx) # 8-byte Spill
.LBB1_2:
- .loc 1 0 0 is_stmt 0 # 2.c:0:0
movq 72(%rbx), %rax # 8-byte Reload
- .loc 1 3 0 # 2.c:3:0
cmpq $0, %rax
movq %rax, %rcx
movq %rax, 64(%rbx) # 8-byte Spill
@@ -72,9 +68,7 @@
movq %rax, %rsp
movq %rax, 56(%rbx) # 8-byte Spill
.LBB1_4:
- .loc 1 0 0 # 2.c:0:0
movq 56(%rbx), %rax # 8-byte Reload
- .loc 1 3 0 # 2.c:3:0
movq %rax, 120(%rbx)
movq %rax, %rcx
addq $32, %rcx
@@ -99,7 +93,6 @@
movb %r8b, 31(%rbx) # 1-byte Spill
je .LBB1_7
# %bb.5:
- .loc 1 0 0 # 2.c:0:0
movq 40(%rbx), %rax # 8-byte Reload
andq $7, %rax
addq $3, %rax
@@ -118,7 +111,8 @@
movl %ecx, (%rax)
movq 80(%rbx), %rdx # 8-byte Reload
movq %rdx, 128(%rbx)
- .loc 1 4 3 is_stmt 1 # 2.c:4:3
+.Ltmp2:
+ .loc 1 4 3 prologue_end # 2.c:4:3
movq %rax, %rdi
callq f
movq 48(%rbx), %rax # 8-byte Reload
```
Reviewers: eugenis, aprantl
Reviewed By: eugenis
Subscribers: ormris, aprantl, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70894
This was an experiment made possible by a non-standard feature of the
Android dynamic loader.
It required introducing a flag to tell the compiler which ABI was being
targeted.
This flag is no longer needed, since the generated code now works for
both ABIs.
We leave that flag untouched for backwards compatibility. This also
means that if we need to distinguish between targeted ABIs again
we can do that without disturbing any existing workflows.
We leave a comment in the source code and mention in the help text to
explain this for any confused person reading the code in the future.
Patch by Matthew Malcomson
Differential Revision: https://reviews.llvm.org/D69574
The address sanitizer ignores memory accesses from different address
spaces; however, when instrumenting globals, the check for different
address spaces was missing. This resulted in an assertion failure. The fault
was found in an out-of-tree target.
The patch skips all globals in non-default address spaces.
Reviewed By: leonardchan, vitalybuka
Differential Revision: https://reviews.llvm.org/D68790
Summary:
MSan instrumentation adds stores and loads even to pure
readonly/writeonly functions. It already removes some of those attributes
from instrumented functions and call targets, but apparently not enough.
Remove writeonly, argmemonly and speculatable in addition to readonly /
readnone.
Reviewers: pcc, vitalybuka
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D69541
Summary:
If we insert them from function pass some analysis may be missing or invalid.
Fixes PR42877.
Reviewers: eugenis, leonardchan
Reviewed By: leonardchan
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D68832
> llvm-svn: 374481
Signed-off-by: Vitaly Buka <vitalybuka@google.com>
llvm-svn: 374527
Summary:
If we insert them from function pass some analysis may be missing or invalid.
Fixes PR42877.
Reviewers: eugenis, leonardchan
Reviewed By: leonardchan
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D68832
llvm-svn: 374481
With this patch, compiler-generated profile variables will have their own COMDAT
names for the ELF format, which syncs the behavior with COFF. Tested with clang
PGO bootstrap. This shows a modest reduction in object sizes in ELF format.
Differential Revision: https://reviews.llvm.org/D68041
llvm-svn: 373241
We can't use short granules with stack instrumentation when targeting older
API levels because the rest of the system won't understand the short granule
tags stored in shadow memory.
Moreover, we need to be able to let old binaries (which won't understand
short granule tags) run on a new system that supports short granule
tags. Such binaries will call the __hwasan_tag_mismatch function when their
outlined checks fail. We can compensate for the binary's lack of support
for short granules by implementing the short granule part of the check in
the __hwasan_tag_mismatch function. Unfortunately we can't do anything about
inline checks, but I don't believe that we can generate these by default on
aarch64, nor did we do so when the ABI was fixed.
A new function, __hwasan_tag_mismatch_v2, is introduced that lets code
targeting the new runtime avoid redoing the short granule check. Because tag
mismatches are rare this isn't important from a performance perspective; the
main benefit is that it introduces a symbol dependency that prevents binaries
targeting the new runtime from running on older (i.e. incompatible) runtimes.
Differential Revision: https://reviews.llvm.org/D68059
llvm-svn: 373035
For COFF, a comdat group is really a symbol marked
IMAGE_COMDAT_SELECT_ANY and zero or more other symbols marked
IMAGE_COMDAT_SELECT_ASSOCIATIVE. Typically the associative symbols in
the group are not external and are not referenced by other TUs, they are
things like debug info, C++ dynamic initializers, or other section
registration schemes. The Visual C++ linker reports a duplicate symbol
error for symbols marked IMAGE_COMDAT_SELECT_ASSOCIATIVE even if they
would be discarded after handling the leader symbol.
Fixes coverage-inline.cpp in check-profile after r372020.
llvm-svn: 372182
This fixes relocations against __profd_ symbols in discarded sections,
which is PR41380.
In general, instrumentation happens very early, and optimization and
inlining happens afterwards. The counters for a function are calculated
early, and after inlining, counters for an inlined function may be
widely referenced by other functions.
For C++ inline functions of all kinds (linkonce_odr &
available_externally mainly), instr profiling wants to deduplicate these
__profc_ and __profd_ globals. Otherwise the binary would be quite
large.
I made __profd_ and __profc_ comdat in r355044, but I chose to make
__profd_ internal. At the time, I was only dealing with coverage, and in
that case, none of the instrumentation needs to reference __profd_.
However, if you use PGO, then instrumentation passes add calls to
__llvm_profile_instrument_range which reference __profd_ globals. The
solution is to make these globals externally visible by using
linkonce_odr linkage for data as was done for counters.
This is safe because PGO adds a CFG hash to the names of the data and
counter globals, so if different TUs have different globals, they will
get different data and counter arrays.
Reviewers: xur, hans
Differential Revision: https://reviews.llvm.org/D67579
llvm-svn: 372020
This patch merges the sancov module and function passes into one module pass.
The reason for this is because we ran into an out of memory error when
attempting to run asan fuzzer on some protobufs (pc.cc files). I traced the OOM
error to the destructor of SanitizerCoverage where we only call
appendTo[Compiler]Used, which calls appendToUsedList. I'm not sure precisely where
in appendToUsedList the OOM occurs, but I am able to confirm that it's calling
this function *repeatedly* that causes the OOM. (I hacked sancov a bit such that
I can still create and destroy a new sancov on every function run, but only call
appendToUsedList after all functions in the module have finished. This passes, but
when I make it such that appendToUsedList is called on every sancov destruction,
we hit OOM.)
I don't think the OOM is from just adding to the SmallSet and SmallVector inside
appendToUsedList since in either case for a given module, they'll have the same
max size. I suspect that when the existing llvm.compiler.used global is erased,
the memory behind it isn't freed. I could be wrong on this though.
This patch works around the OOM issue by just calling appendToUsedList at the
end of every module run instead of every function run. The same constants
still get added to llvm.compiler.used, and we make the pass usage and logic
simpler by not having any inter-pass dependencies.
Differential Revision: https://reviews.llvm.org/D66988
llvm-svn: 370971
Follow-up for:
[ASan] Make insertion of version mismatch guard configurable
3ae9b9d5e4
This tiny change makes sure that this test passes on our internal bots
as well.
llvm-svn: 370403
By default ASan calls a versioned function
`__asan_version_mismatch_check_vXXX` from the ASan module constructor to
check that the compiler ABI version and runtime ABI version are
compatible. This ensures that we get a predictable linker error instead
of hard-to-debug runtime errors.
Sometimes, however, we want to skip this safety guard. This new command
line option allows us to do just that.
rdar://47891956
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D66826
llvm-svn: 370258
One problem with untagging memory in landing pads is that it only works
correctly if the function that catches the exception is instrumented.
If the function is uninstrumented, we have no opportunity to untag the
memory.
To address this, replace landing pad instrumentation with personality function
wrapping. Each function with an instrumented stack has its personality function
replaced with a wrapper provided by the runtime. Functions that did not have
a personality function to begin with also get wrappers if they may be unwound
past. As the unwinder calls personality functions during stack unwinding,
the original personality function is called and the function's stack frame is
untagged by the wrapper if the personality function instructs the unwinder
to keep unwinding. If unwinding stops at a landing pad, the function is
still responsible for untagging its stack frame if it resumes unwinding.
The old landing pad mechanism is preserved for compatibility with old runtimes.
Differential Revision: https://reviews.llvm.org/D66377
llvm-svn: 369721
Globals are instrumented by adding a pointer tag to their symbol values
and emitting metadata into a special section that allows the runtime to tag
their memory when the library is loaded.
Due to order of initialization issues explained in more detail in the comments,
shadow initialization cannot happen during regular global initialization.
Instead, the location of the global section is marked using an ELF note,
and we require libc support for calling a function provided by the HWASAN
runtime when libraries are loaded and unloaded.
Based on ideas discussed with @evgeny777 in D56672.
Differential Revision: https://reviews.llvm.org/D65770
llvm-svn: 368102
For consistency with normal instructions and clarity when reading IR,
it's best to print the %0, %1, ... names of function arguments in
definitions.
Also modifies the parser to accept IR in that form for obvious reasons.
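For example, an unnamed-argument definition now prints (and parses) as:
```
define i32 @add(i32 %0, i32 %1) {
  %3 = add i32 %0, %1
  ret i32 %3
}
```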
llvm-svn: 367755
changes were made to the patch since then.
--------
[NewPM] Port Sancov
This patch contains a port of SanitizerCoverage to the new pass manager. This one's a bit hefty.
Changes:
- Split SanitizerCoverageModule into two: SanitizerCoverage, for passing over
functions, and ModuleSanitizerCoverage, for passing over modules.
- ModuleSanitizerCoverage exists for adding 2 module level calls to initialization
functions but only if there's a function that was instrumented by sancov.
- Added legacy and new PM wrapper classes that own instances of the 2 new classes.
- Update llvm tests and add clang tests.
llvm-svn: 367053
This will let us instrument globals during initialization. This required
making the new PM pass a module pass, which should still provide access to
analyses via the ModuleAnalysisManager.
Differential Revision: https://reviews.llvm.org/D64843
llvm-svn: 366379
This patch contains a port of SanitizerCoverage to the new pass manager. This one's a bit hefty.
Changes:
- Split SanitizerCoverageModule into two: SanitizerCoverage, for passing over
functions, and ModuleSanitizerCoverage, for passing over modules.
- ModuleSanitizerCoverage exists for adding 2 module level calls to initialization
functions but only if there's a function that was instrumented by sancov.
- Added legacy and new PM wrapper classes that own instances of the 2 new classes.
- Update llvm tests and add clang tests.
Differential Revision: https://reviews.llvm.org/D62888
llvm-svn: 365838
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:
* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
is in bounds of the granule and the pointer tag is equal to the last byte of
the granule.
Pointer tags between 1 to `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).
When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
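A simplified sketch of the two-way check (TG = 16, tag in the top address byte as on AArch64; the short-granule in-bounds check is omitted):
```
define i1 @tag_matches(i64 %ptr, i8 %shadow) {
  %tagbits = lshr i64 %ptr, 56
  %ptrtag = trunc i64 %tagbits to i8
  ; case 1: ordinary granule, pointer tag equals the shadow memory tag
  %full = icmp eq i8 %ptrtag, %shadow
  ; case 2: shadow holds a short-granule size (1..15); the real tag lives
  ; in the last byte of the granule
  %short = icmp ult i8 %shadow, 16
  %addr = and i64 %ptr, 72057594037927935   ; strip the tag byte
  %last = or i64 %addr, 15
  %lastp = inttoptr i64 %last to ptr
  %memtag = load i8, ptr %lastp
  %tageq = icmp eq i8 %ptrtag, %memtag
  %short_ok = and i1 %short, %tageq
  %ok = or i1 %full, %short_ok
  ret i1 %ok
}
```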
Because this functionality obsoletes the right aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), the feature is removed.
Also update the documentation to cover both short granule tags and
outlined checks.
Differential Revision: https://reviews.llvm.org/D63908
llvm-svn: 365551
These are sources of poison which don't come from flags, but are clearly documented in the LangRef. Left off support for scalable vectors for the moment, but should be easy to add if anyone is interested.
llvm-svn: 365543
Implements a transform pass which instruments IR such that poison semantics are made explicit. That is, it provides a (possibly partial) executable semantics for every instruction w.r.t. poison as specified in the LLVM LangRef. There are obvious parallels to the sanitizer tools, but this pass is focused purely on the semantics of LLVM IR, not any particular source language.
The target audience for this tool is developers working on or targeting LLVM from a frontend. The idea is to be able to take arbitrary IR (with the assumption of known inputs), and evaluate it concretely after having made poison semantics explicit to detect cases where either a) the original code executes UB, or b) a transform pass introduces UB which didn't exist in the original program.
At the moment, this is mostly the framework and still needs to be fleshed out. By reusing existing code we have decent coverage, but there are a lot of cases not yet handled. What's here is good enough to handle interesting cases though; for instance, the UB from one of the recent LFTR bugs, triggered by integer induction variables with nsw/nuw flags, would be reported by the current code.
(See comment in PoisonChecking.cpp for full explanation and context)
Differential Revision: https://reviews.llvm.org/D64215
llvm-svn: 365536
This reverts commit r365260 which broke the following tests:
Clang :: CodeGenCXX/cfi-mfcall.cpp
Clang :: CodeGenObjC/ubsan-nullability.m
LLVM :: Transforms/LoopVectorize/AArch64/pr36032.ll
llvm-svn: 365284
Without this, we have the unfortunate property that tests are dependent on the order of operands passed to the CreateOr and CreateAnd functions. In actual usage, we'd promptly optimize them away, but it made tests slightly more verbose than they should have been.
llvm-svn: 365260
Summary:
Handling callbr is very similar to handling an inline assembly call:
MSan must check the instruction's inputs.
callbr doesn't (yet) have outputs, so there's nothing to unpoison,
and conservative assembly handling doesn't apply either.
Fixes PR42479.
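A minimal sketch (asm string and labels illustrative):
```
define void @f(i32 %x) sanitize_memory {
entry:
  ; the shadow of %x is checked before this callbr, as for an asm call
  callbr void asm sideeffect "", "r,!i"(i32 %x)
          to label %cont [label %other]

cont:
  ret void

other:
  ret void
}
```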
Reviewers: eugenis
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64072
llvm-svn: 365008
This shaves an instruction (and a GOT entry in PIC code) off prologues of
functions with stack variables.
Differential Revision: https://reviews.llvm.org/D63472
llvm-svn: 364608
The VM layout on iOS is not stable between releases. On 64-bit iOS and
its derivatives we use a dynamic shadow offset that enables ASan to
search for a valid location for the shadow heap on process launch rather
than hardcode it.
This commit extends that approach for 32-bit iOS plus derivatives and
their simulators.
rdar://50645192
rdar://51200372
rdar://51767702
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D63586
llvm-svn: 364105
Currently, many profiling tests on Solaris FAIL like
Command Output (stderr):
--
  Undefined                                first referenced
   symbol                                      in file
  __llvm_profile_register_names_function  /tmp/lit_tmp_Nqu4eh/infinite_loop-9dc638.o
  __llvm_profile_register_function        /tmp/lit_tmp_Nqu4eh/infinite_loop-9dc638.o
Solaris 11.4 ld supports the non-standard GNU ld extension of adding
__start_SECNAME and __stop_SECNAME labels to sections whose names are valid
as C identifiers. Given that we already use Solaris 11.4-only features
like ld -z gnu-version-script-compat and fully working .preinit_array
support in compiler-rt, we don't need to worry about older versions of
Solaris ld.
The patch documents that support (although the comment in
lib/Transforms/Instrumentation/InstrProfiling.cpp
(needsRuntimeRegistrationOfSectionRange) is quite cryptic about what it's
actually for), and adapts the affected testcase not to expect the
alternative __llvm_profile_register_functions and __llvm_profile_init.
It fixes all affected tests.
Tested on amd64-pc-solaris2.11.
Differential Revision: https://reviews.llvm.org/D41111
llvm-svn: 363984
This saves roughly 32 bytes of instructions per function with stack objects
and causes us to preserve enough information that we can recover the original
tags of all stack variables.
Now that stack tags are deterministic, we no longer need to pass
-hwasan-generate-tags-with-calls during check-hwasan. This also means that
the new stack tag generation mechanism is exercised by check-hwasan.
Differential Revision: https://reviews.llvm.org/D63360
llvm-svn: 363636
The goal is to improve hwasan's error reporting for stack use-after-return by
recording enough information to allow the specific variable that was accessed
to be identified based on the pointer's tag. Currently we record the PC and
lower bits of SP for each stack frame we create (which will eventually be
enough to derive the base tag used by the stack frame) but that's not enough
to determine the specific tag for each variable, which is the stack frame's
base tag XOR a value (the "tag offset") that is unique for each variable in
a function.
In IR, the tag offset is most naturally represented as part of a location
expression on the llvm.dbg.declare instruction. However, the presence of the
tag offset in the variable's actual location expression is likely to confuse
debuggers which won't know about tag offsets, and moreover the tag offset
is not required for a debugger to determine the location of the variable on
the stack, so at the DWARF level it is represented as an attribute so that
it will be ignored by debuggers that don't know about it.
Differential Revision: https://reviews.llvm.org/D63119
llvm-svn: 363635