Adds extra supported architectures that were available for vanilla
scudo, in preparation for D102543. Hopefully the dust has settled and
7d0a81ca38 is no longer an issue.
Reviewed By: cryptoad, vitalybuka
Differential Revision: https://reviews.llvm.org/D102648
This reduces the size of chrome.dll.pdb built with optimizations,
coverage, and line table info from 4,690,210,816 to 2,181,128,192, which
makes it possible to fit under the 4GB limit.
This change can greatly reduce binary size in coverage builds, which do
not need value profiling. IR PGO builds are unaffected. There is a minor
behavior change for frontend PGO.
PGO and coverage both use InstrProfiling to create profile data with
counters. PGO records the address of each function in the __profd_
global. It is used later to map runtime function pointer values back to
source-level function names. Coverage does not appear to use this
information.
Recording the address of every function with code coverage drastically
increases code size. Consider this program:
void foo();
void bar();
inline void inlineMe(int x) {
  if (x > 0)
    foo();
  else
    bar();
}
int getVal();
int main() { inlineMe(getVal()); }
With code coverage, the InstrProfiling pass runs before inlining, and it
captures the address of inlineMe in the __profd_ global. This greatly
increases code size, because now the compiler can no longer delete
trivial code.
One downside to this approach is that users of frontend PGO must apply
the -mllvm -enable-value-profiling flag globally in TUs that enable PGO.
Otherwise, some inline virtual method addresses may not be recorded and
cannot be promoted. My assumption is that this mllvm flag
is not popular, and most frontend PGO users don't enable it.
Differential Revision: https://reviews.llvm.org/D102818
Looks like secondary pointers don't get unmapped on one of the arm32
bots. In the interests of landing some dependent patches, disable this
test on arm32 so that it can be tested in isolation later.
Reviewed By: cryptoad, vitalybuka
Split from differential patchset (1/2): https://reviews.llvm.org/D102648
The Linux kernel has removed the interface to cyclades from
the latest kernel headers[1] due to them being orphaned for the
past 13 years.
libsanitizer uses this header when compiling against glibc, but
glibc itself doesn't seem to have any references to cyclades.
Furthermore, it seems that the driver is broken in the kernel and
the firmware no longer seems to be available.
As such, since this is breaking the build of libsanitizer (and with it
the GCC bootstrap[2]), I propose to remove this support.
[1] https://lkml.org/lkml/2021/3/2/153
[2] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100379
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D102059
sleep(1) does not guarantee the after-fork order.
Also, the relative child/parent after-fork order is not important for this
test, so we can just avoid checking it.
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D102810
These will be used for error propagation and handling in the ORC runtime.
The implementations of these types are cut-down versions of the error
support in llvm/Support/Error.h. Most advice on llvm::Error and llvm::Expected
(e.g. from the LLVM Programmer's manual) applies equally to __orc_rt::Error
and __orc_rt::Expected. The primary difference is the mechanism for testing
and handling error types: The ORC runtime uses a new 'error_cast' operation
to replace the handleErrors family of functions. See error_cast comments in
error.h.
If there are no counters, an mmap() of the counters section would fail
due to the size argument being too small (EINVAL).
rdar://78175925
Differential Revision: https://reviews.llvm.org/D102735
While developing a change to the allocator I ended up breaking
realloc on secondary allocations with increasing sizes. That didn't
cause any of the unit tests to fail, which indicated that we're
missing some test coverage here. Add a unit test for that case.
Differential Revision: https://reviews.llvm.org/D102716
This is a substitute for std::apply, which we can't use until we move to C++17.
apply_tuple will be used in the upcoming wrapper-function utility code.
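For reference, a minimal sketch of the usual C++14 index_sequence pattern such
a helper follows; the names below (apply_tuple, apply_tuple_impl) mirror the
description above, but the exact signatures in the ORC runtime may differ:
```
#include <cstddef>
#include <tuple>
#include <utility>

namespace sketch {
namespace detail {
template <typename F, typename Tuple, std::size_t... I>
decltype(auto) apply_tuple_impl(F &&f, Tuple &&t, std::index_sequence<I...>) {
  // Expand the tuple elements into the call.
  return std::forward<F>(f)(std::get<I>(std::forward<Tuple>(t))...);
}
} // namespace detail

template <typename F, typename Tuple>
decltype(auto) apply_tuple(F &&f, Tuple &&t) {
  constexpr auto Size = std::tuple_size<std::decay_t<Tuple>>::value;
  return detail::apply_tuple_impl(std::forward<F>(f), std::forward<Tuple>(t),
                                  std::make_index_sequence<Size>());
}
} // namespace sketch

// Usage: sketch::apply_tuple([](int a, int b) { return a + b; },
//                            std::make_tuple(1, 2)) == 3
```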
Override __cxa_atexit and ignore callbacks.
This prevents crashes in a configuration where the symbolizer
is built into the sanitizer runtime and consequently into the test process.
LLVM libraries have some global objects destroyed during exit,
so if the test process triggers any bugs after that, the symbolizer crashes.
An example stack trace of such a crash:
For the standalone llvm-symbolizer this does not hurt;
we just don't destroy a few global objects on exit.
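A minimal sketch of the override described above, assuming the standard
Itanium C++ ABI signature for __cxa_atexit (the actual interceptor in the
runtime may differ):
```
// Returning 0 reports success but never registers the callback, so LLVM's
// global destructors are simply skipped at exit.
extern "C" int __cxa_atexit(void (*func)(void *), void *arg, void *dso_handle) {
  (void)func;
  (void)arg;
  (void)dso_handle;
  return 0;
}
```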
Reviewed By: kda
Differential Revision: https://reviews.llvm.org/D102470
Since we have both aliasing mode and Intel LAM on x86_64, we need to
choose the mode at either run time or compile time. This patch
implements the plumbing to build both and choose between them at
compile time.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D102286
On AIX, we have to ship `libatomic.a` for compatibility. First, a new `clang_rt.atomic` is added. Second, using the added CMake modules for AIX, we are able to build a compatible libatomic.a for AIX. The second step can't be perfectly implemented with CMake now since AIX's archive approach is kind of unique, i.e., archiving shared libraries into a static archive file.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D102155
https://reviews.llvm.org/D101681 landed a change to check the testing
configuration which relies on using the `-print-runtime-dir` flag of
clang to determine where the runtime testing library is.
The patch treated not being able to find the path reported by clang
as an error. Unfortunately this seems to break the
`llvm-clang-win-x-aarch64` bot. Either the bot is misconfigured or
clang is reporting a bogus path.
To temporarily unbreak the bot, downgrade the fatal error to a warning.
While we're here, also print information about the command used to
determine the path, to aid debugging.
https://reviews.llvm.org/D101681 introduced a check to make sure the
compiler and compiler-rt were using the same library path when
`COMPILER_RT_TEST_STANDALONE_BUILD_LIBS=ON`, i.e. the developer's
intention is to test the just-built libs rather than those shipped with the
compiler used for testing.
It seems this broke some bots that are likely misconfigured.
So to unbreak them, for now let's make this a warning so the bot
owners can investigate without breaking their builds.
The path to the runtime libraries used by the compiler under test
is normally identical to the path where just built libraries are
created. However, this is not necessarily the case when doing standalone
builds. This is because the external compiler used by tests may choose
to get its runtime libraries from somewhere else.
When doing standalone builds there are two types of testing we could be
doing:
* Test the just built runtime libraries.
* Test the runtime libraries shipped with the compiler under test.
Both types of testing are valid, but it confusingly turns out compiler-rt
actually did a mixture of these types of testing.
* The `test/builtins/Unit/` test suite always tested the just built runtime
libraries.
* All other testsuites implicitly use whatever runtime library the
compiler decides to link.
There is no way for us to infer which type of testing the developer
wants, so this patch introduces a new
`COMPILER_RT_TEST_STANDALONE_BUILD_LIBS` CMake
option which explicitly declares which runtime libraries should be
tested. If it is `ON` then the just built libraries should be tested,
otherwise the libraries in the external compiler should be tested.
When testing starts the lit test suite queries the compiler used for
testing to see where it will get its runtime libraries from. If these
paths are identical no action is taken (the common case). If the paths
are not identical then we check the value of
`COMPILER_RT_TEST_STANDALONE_BUILD_LIBS` (propagated into the config as
`test_standalone_build_libs`) and check if the test suite supports testing in the
requested configuration.
* If we want to test just built libs and the test suite supports it
(currently only `test/builtins/Unit`) then testing proceeds without any changes.
* If we want to test the just built libs and the test suite doesn't
support it we emit a fatal error to prevent the developer from
testing the wrong runtime libraries.
* If we are testing the compiler's built libs then we adjust
`config.compiler_rt_libdir` to point at the compiler's runtime
directory. This makes the `test/builtins/Unit` tests use the
compiler's builtin library. No other changes are required because
all other testsuites implicitly use the compiler's built libs.
To make the above work the
`test_suite_supports_overriding_runtime_lib_path` test suite config
option has been introduced so we can identify what each test suite
supports.
Note all of these checks **have to be performed** when lit runs.
We cannot run the checks at CMake generation time because
multi-configuration build systems prevent us from knowing what the
paths will be.
We could perhaps support `COMPILER_RT_TEST_STANDALONE_BUILD_LIBS` being
`ON` for most test suites (when the runtime library paths differs) in
the future by specifying a custom compiler resource directory path.
Doing so is out of scope for this patch.
rdar://77182297
Differential Revision: https://reviews.llvm.org/D101681
`-fno-exceptions -fno-asynchronous-unwind-tables` compiled programs don't
produce .eh_frame on Linux and other ELF platforms, so the slow unwinder cannot
print stack traces. Just fall back to the fast unwinder: this allows
-fno-asynchronous-unwind-tables without requiring the sanitizer option
`fast_unwind_on_fatal=1`.
Reviewed By: #sanitizers, vitalybuka
Differential Revision: https://reviews.llvm.org/D102046
This removes one of the last dependencies on old Scudo, and should allow
us to delete the old Scudo soon.
Reviewed By: vitalybuka, cryptoad
Differential Revision: https://reviews.llvm.org/D102349
-fsanitize-hwaddress-experimental-aliasing is intended to distinguish
aliasing mode from LAM mode on x86_64. check-hwasan is configured
to use aliasing mode while check-hwasan-lam is configured to use LAM
mode.
The current patch doesn't actually do anything differently in the two
modes. A subsequent patch will actually build the separate runtimes
and use them in each mode.
Currently LAM mode tests must be run in an emulator that
has LAM support. To ensure LAM mode isn't broken by future patches, I
will next set up a QEMU buildbot to run the HWASan tests in LAM.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102288
With zero-sized allocations we don't actually end up storing the
address tag to the memory tag space, so store it in the first byte of
the chunk instead so that we can find it later in getInlineErrorInfo().
Differential Revision: https://reviews.llvm.org/D102442
It's more likely that we have a UAF than an OOB in blocks that are
more than 1 block away from the fault address, so the UAF should
appear first in the error report.
Differential Revision: https://reviews.llvm.org/D102379
On x32 size_t == unsigned int, not unsigned long int:
../../../../../src-master/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp: In function ‘void __sanitizer::InitTlsSize()’:
../../../../../src-master/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:209:55: error: invalid conversion from ‘__sanitizer::uptr*’ {aka ‘long unsigned int*’} to ‘size_t*’ {aka ‘unsigned int*’} [-fpermissive]
  209 |   ((void (*)(size_t *, size_t *))get_tls_static_info)(&g_tls_size, &tls_align);
      |                                                        ^~~~~~~~~~~
      |                                                        |
      |                                                        __sanitizer::uptr* {aka long unsigned int*}
by using size_t on g_tls_size. This is to fix:
https://bugs.llvm.org/show_bug.cgi?id=50332
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D102446
The bounds check that we previously had here was suitable for secondary
allocations but not for UAF on primary allocations, where it is likely
to result in false positives. Fix it by using a different bounds check
for UAF that requires the fault address to be in bounds.
Differential Revision: https://reviews.llvm.org/D102376
We have a significant amount of duplication around
CheckFailed functionality. Each sanitizer copy-pasted
a chunk of code. Some got random improvements like
dealing with recursive failures better. These improvements
could benefit all sanitizers, but currently they don't.
Deduplicate CheckFailed logic across sanitizers and let each
sanitizer only print the current stack trace.
I've tried to dedup stack printing as well,
but this got me into cmake hell. So let's keep this part
duplicated in each sanitizer for now.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102221
The setlocale interceptor imitates a write into the result,
which may be located in the .rodata section.
This is the only interceptor that tries to do this, and
I think the intention was to initialize the range for msan.
So do that instead. Writing into .rodata shouldn't happen
(without crashing later on the actual write), and this
traps in my local tsan experiments.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102161
Currently we have:
sanitizer_posix_libcdep.cpp:146:27: warning: cast between incompatible
function types from ‘__sighandler_t’ {aka ‘void (*)(int)’} to ‘sa_sigaction_t’
146 | sigact.sa_sigaction = (sa_sigaction_t)SIG_DFL;
We don't set SA_SIGINFO, so we need to assign to sa_handler.
And SIG_DFL is meant for sa_handler, so this gets rid of the
compiler warning, the type cast, and potential runtime misbehavior.
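A small sketch of the fix described above (illustrative, not the exact
sanitizer_posix_libcdep.cpp code):
```
#include <signal.h>
#include <string.h>

void InstallDefaultHandler(int signum) {
  struct sigaction sigact;
  memset(&sigact, 0, sizeof(sigact));
  // SA_SIGINFO is not set, so the kernel reads sa_handler, and SIG_DFL is
  // meant for sa_handler; assigning it there removes the incompatible cast.
  sigact.sa_handler = SIG_DFL;
  sigaction(signum, &sigact, nullptr);
}
```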
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102162
We already declare a subset of the annotations in test.h.
But some are duplicated and declared directly in tests.
Move all annotation declarations to test.h.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102152
Add a simple test that uses syscall annotations.
Just to ensure at least basic functionality works.
Also factor out annotated syscall wrappers into a separate
header file as they may be useful for future tests.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102223
This test has two modes - testing reused threads with multiple loops of
batch create/join, and testing new threads with a single loop of
create/join per fork.
The non-reuse variant catches the problem that was fixed in D101881 with
a high probability.
Differential Revision: https://reviews.llvm.org/D101936
Add unit test infrastructure for the ORC runtime, plus a cut-down
extensible_rtti system and extensible_rtti unit test.
Removes the placeholder.cpp source file.
Differential Revision: https://reviews.llvm.org/D102080
This patch does a few cleanup things:
1. The non-standalone scudo has a problem where GWP-ASan allocations
may not meet alignment requirements where Scudo was requested to have
alignment >= 16. Use the new GWP-ASan API to fix this.
2. The standalone variant loses some debugging information inside of
GWP-ASan because we ask GWP-ASan to allocate an aligned size in the
frontend. This means reports end up with 'UaF on a 16-byte allocation'
for a 1-byte allocation with 16-byte alignment. Also use the new API to
fix this.
3. Add post-alloc hooks for GWP-ASan intercepted allocations, and add
stats tracking for GWP-ASan allocations.
4. Add a small test that checks the alignment of the frontend
allocator, so that it can be used under GWP-ASan torture mode.
5. Add GWP-ASan torture mode as a testing configuration to catch these
regressions.
Depends on D94830, D95889.
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D95884
GWP-ASan is the "production" variant as compiled by compiler-rt, and it's useful to be able to benchmark changes in GWP-ASan or Scudo's GWP-ASan hooks across versions. GWP-ASan is sampled, and sampled allocations are much slower, but given the amount of allocations that happen under test here - we actually get a reasonable representation of GWP-ASan's negligent performance impact between runs.
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D101865
According to:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.poll
poll can return None if the process hasn't terminated.
I'm not quite sure how addr2line could end up closing the pipe without
terminating but we did see this happen on one of our bots:
```
<...>scripts/asan_symbolize.py",
line 211, in symbolize
logging.debug("addr2line exited early (broken pipe), returncode=%d"
% self.pipe.poll())
TypeError: %d format: a number is required, not NoneType
```
Handle None by printing a message that we couldn't get the return
code.
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D101891
Address sanitizer can detect stack exhaustion via its SEGV handler, which is
executed on a separate stack using the sigaltstack mechanism. When libFuzzer is
used with address sanitizer, it installs its own signal handlers which defer to
those put in place by the sanitizer before performing additional actions. In the
particular case of a stack overflow, the current setup fails because libFuzzer
doesn't preserve the flag for executing the signal handler on a separate stack:
when we run out of stack space, the operating system can't run the SEGV handler,
so address sanitizer never reports the issue. See the included test for an
example.
This commit fixes the issue by making libFuzzer preserve the SA_ONSTACK flag
when installing its signal handlers; the dedicated signal-handler stack set up
by the sanitizer runtime appears to be large enough to support the additional
frames from the fuzzer.
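A rough sketch of the idea, not libFuzzer's actual code (the handler name is
hypothetical): read back the previously installed sigaction and carry its
SA_ONSTACK bit over to the new handler.
```
#include <signal.h>
#include <string.h>

static void CrashHandler(int, siginfo_t *, void *) {
  // Defer to the sanitizer / report the crash here.
}

void InstallSegvHandler() {
  struct sigaction old;
  memset(&old, 0, sizeof(old));
  sigaction(SIGSEGV, nullptr, &old);  // what the sanitizer installed earlier

  struct sigaction sa;
  memset(&sa, 0, sizeof(sa));
  sa.sa_sigaction = CrashHandler;
  // Keep SA_ONSTACK so the handler still runs on the sanitizer's sigaltstack;
  // without it a stack overflow leaves no stack to run the handler on.
  sa.sa_flags = SA_SIGINFO | (old.sa_flags & SA_ONSTACK);
  sigaction(SIGSEGV, &sa, nullptr);
}
```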
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D101824
The linker suggests using -Wl,-z,notext.
Replacing assert with exit also fixed this.
After renaming, interceptor.c will be used by D101204 to test interceptors in general.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D101649
Fixes compilation on Android which has a TSDSharedRegistry object in the config.
Reviewed By: cryptoad, vitalybuka
Differential Revision: https://reviews.llvm.org/D101951
Operator new must align allocations for types with large alignment.
Before C++17 the behavior was implementation-defined, and both clang and g++
before 11 ignored the alignment. Misaligned objects mysteriously crashed
tests on Ubuntu 14.
Alternatives are to compile with -std=c++17 or -faligned-new, but they were
discarded as less portable.
Reviewed By: hctim
Differential Revision: https://reviews.llvm.org/D101874
The problem was introduced in D100348.
It's really hard to trigger the bug in a stress test - the race is just too
narrow - but the new checks in Thread::Init should at least provide usable
diagnostic if the problem ever returns.
Differential Revision: https://reviews.llvm.org/D101881
This relates to https://reviews.llvm.org/D95835.
In DFSan origin tracking we use StackDepot to record
stack traces and origin traces (like MSan origin tracking).
For at least two reasons, we wanted to control StackDepot's memory cost:
1) We may use DFSan origin tracking to monitor programs that run for
many days. This may eventually use too much memory for StackDepot.
2) DFSan supports flushing shadow memory to reduce overhead. After a flush,
all existing IDs in StackDepot become invalid because nothing will
refer to them.
Currently, the position hint of an entry in the persistent auto
dictionary is fixed to 1. As a consequence, with a 50% chance, the entry
is applied right after the first byte of the input. As the position 1
does not appear to have any particular significance, this is likely a
bug that may have been caused by confusing the constructor parameter
with a success count.
This commit resolves the issue by preserving any existing position hint
or disabling the hint if the original entry didn't have one.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D101686
Code patterns like this are common: `#` at the beginning of the line
(https://google.github.io/styleguide/cppguide.html#Preprocessor_Directives),
with one space of indentation per nesting level for if/elif/else directives.
```
#if SANITIZER_LINUX
# if defined(__aarch64__)
# endif
#endif
```
However, currently clang-format wants to reformat the code to
```
#if SANITIZER_LINUX
#if defined(__aarch64__)
#endif
#endif
```
This significantly harms readability in my review. Use `IndentPPDirectives:
AfterHash` to defeat the diagnostic. clang-format will now suggest:
```
#if SANITIZER_LINUX
# if defined(__aarch64__)
# endif
#endif
```
Unfortunately there is no clang-format option to use an indent of 1 for
just preprocessor directives. However, this is still one step forward
from the current behavior.
Reviewed By: #sanitizers, vitalybuka
Differential Revision: https://reviews.llvm.org/D100238
The Scudo C unit tests are currently non-hermetic. In particular, adding
or removing a transfer batch is a global state of the allocator that
persists between tests. This can cause flakiness in
ScudoWrappersCTest.MallInfo, because the creation or teardown of a batch
causes mallinfo's uordblks or fordblks to move up or down by the size of
a transfer batch on malloc/free.
It's my opinion that uordblks and fordblks should track the statistics
related to the user's malloc() and free() usage, and not the state of
the internal allocator structures. Thus, excluding the transfer batches
from stat collection does the trick and makes these tests pass.
Repro instructions of the bug:
1. ninja ./projects/compiler-rt/lib/scudo/standalone/tests/ScudoCUnitTest-x86_64-Test
2. ./projects/compiler-rt/lib/scudo/standalone/tests/ScudoCUnitTest-x86_64-Test --gtest_filter=ScudoWrappersCTest.MallInfo
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D101653
In the overwrite branch of MutationDispatcher::ApplyDictionaryEntry in
FuzzerMutate.cpp, the index Idx at which W.size() bytes are overwritten
with the word W is chosen uniformly at random in the interval
[0, Size - W.size()). This means that Idx + W.size() will always be
strictly less than Size, i.e., the last byte of the current unit will
never be overwritten.
This is fixed by adding 1 to the exclusive upper bound.
Addresses https://bugs.llvm.org/show_bug.cgi?id=49989.
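Sketched below with approximate names rather than the exact FuzzerMutate.cpp
code, the fix amounts to widening the random range by one:
```
#include <cstddef>

// Rand(n) is assumed to return a value uniformly distributed in [0, n).
size_t ChooseOverwriteIndex(size_t Size, size_t WordSize,
                            size_t (*Rand)(size_t)) {
  // Old: Rand(Size - WordSize) never picks the last valid position, so the
  // final byte of the unit is never overwritten.
  // New: +1 makes that position reachable while W still fits in the unit.
  return Rand(Size - WordSize + 1);
}
```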
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D101625
Before commit "sanitizer_common: introduce kInvalidTid/kMainTid"
asan invalid/unknown thread id was 0xffffff, so presumably we printed "T16777215".
Now it's -1, so we print T-1. Fix the test.
I think the new format is even better, "T-1" clearly looks like something special
rather than a random large number.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101634
Currently we have a bit of a mess related to tids:
- sanitizers re-declare kInvalidTid multiple times
- some call it kUnknownTid
- implicit assumptions that main tid is 0
- asan/memprof claim their tids need to fit into 24 bits,
but this does not seem to be true anymore
- inconsistent use of u32/int to store tids
Introduce kInvalidTid/kMainTid in sanitizer_common
and use them consistently.
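Roughly, the shared definitions amount to the sketch below (header placement
and the exact types are assumptions, not the verbatim code):
```
namespace __sanitizer {
typedef unsigned int u32;
// One canonical "no such thread" value and one canonical main-thread id,
// instead of per-sanitizer kInvalidTid/kUnknownTid copies and implicit zeros.
const u32 kInvalidTid = (u32)-1;
const u32 kMainTid = 0;
}  // namespace __sanitizer
```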
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101428
Commit efd254b636 ("tsan: fix deadlock in pthread_atfork callbacks")
fixed another deadlock related to atfork handling.
But builders with DCHECKs enabled reported failures of
pthread_atfork_deadlock2.c and pthread_atfork_deadlock3.c tests
related to the fact that we hold runtime locks on interceptor exit:
https://lab.llvm.org/buildbot/#/builders/70/builds/6727
This issue is somewhat inherent to the current approach,
we indeed execute user code (atfork callbacks) with runtime lock held.
Refactor fork handling to not run user code (atfork callbacks)
with runtime locks held. This change does this by installing
our own atfork callbacks during runtime initialization.
Atfork callbacks run in LIFO order, so the expectation is that
our callbacks run last, right before the actual fork.
This way we lock runtime mutexes around fork, but not around
user callbacks.
Extend tests to also install after fork callbacks just to cover
more scenarios. Some tests also started reporting real races
that we previously suppressed.
Also extend tests to cover fork syscall support.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101517
This is to help review the refactoring of the allocator code,
so it is easy to see which are the real public interfaces.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101586
Rename `check_linker_flag` in compiler_rt to avoid a conflict. Follow-up
to the fix in D100901.
Patch by radford.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D101581
While looking at how to extract a shared allocator interface for D101204,
I found some unused code. Tests passed. Are they safe to remove?
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101559
Fix format specifier.
Fix warnings about non-standard attribute placement.
Make free_race2.c test a bit more interesting:
test access with/without an offset.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101424
We've got a user report about heap block allocator overflow.
Bump the L1 capacity of all dense slab allocators to maximum
and be careful to not page the whole L1 array in from .bss.
If OS uses huge pages, this still may cause a limited RSS increase
due to boundary huge pages, but avoiding that looks hard.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101161
While implementing support for the float128 routines on x86_64, I noticed
that __builtin_isinf() was returning true for 128-bit floating point
values that are not infinite when compiling with GCC and using the
compiler-rt implementation of the soft-float comparison functions.
After stepping through the assembly, I discovered that this was caused by
GCC assuming a sign-extended 64-bit -1 result, but our implementation
returns an enum (which then has zeroes in the upper bits) and therefore
causes the comparison with -1 to fail.
Fix this by using a CMP_RESULT typedef and add a static_assert that it
matches the GCC soft-float comparison return type when compiling with GCC
(GCC has a __libgcc_cmp_return__ mode that can be used for this purpose).
Also move the 3 copies of the same code to a shared .inc file.
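A condensed sketch of the approach; the typedef choice of `long` below is
illustrative, only the `__libgcc_cmp_return__` mode check mirrors the
description above:
```
// CMP_RESULT is the return type used by the soft-float comparison routines.
typedef long CMP_RESULT;

#if defined(__GNUC__) && !defined(__clang__)
// GCC exposes the mode it assumes for libgcc comparison results; check at
// build time that our typedef matches it, so results stay sign-extended.
typedef int GCC_CMP_RESULT __attribute__((__mode__(__libgcc_cmp_return__)));
static_assert(sizeof(GCC_CMP_RESULT) == sizeof(CMP_RESULT),
              "CMP_RESULT must match the libgcc comparison return type");
#endif
```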
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D98205
COMPILER_RT_TSAN_DEBUG_OUTPUT enables TSAN_COLLECT_STATS,
which changes the layout of runtime structs (some structs contain
stats when the option is enabled).
It's not OK to build the runtime with the define but the tests without it.
The error is detected by build_consistency_stats/nostats.
Fix this by defining TSAN_COLLECT_STATS for tests to match the runtime.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101386
Commit efd254b636 ("tsan: fix deadlock in pthread_atfork callbacks")
fixed another deadlock related to atfork handling.
But builders with DCHECKs enabled reported failures of
pthread_atfork_deadlock2.c and pthread_atfork_deadlock3.c tests
related to the fact that we hold runtime locks on interceptor exit:
https://lab.llvm.org/buildbot/#/builders/70/builds/6727
This issue is somewhat inherent to the current approach,
we indeed execute user code (atfork callbacks) with runtime lock held.
Refactor fork handling to not run user code (atfork callbacks)
with runtime locks held. This change does this by installing
our own atfork callbacks during runtime initialization.
Atfork callbacks run in LIFO order, so the expectation is that
our callbacks run last, right before the actual fork.
This way we lock runtime mutexes around fork, but not around
user callbacks.
Extend tests to also install after fork callbacks just to cover
more scenarios. Some tests also started reporting real races
that we previously suppressed.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101385
We take report/thread_registry locks around fork.
This means we cannot report any bugs in atfork handlers.
We resolved this by enabling per-thread ignores around fork.
This resolved some of the cases, but not all.
The added test triggers a race report from a signal handler
called from an atfork callback; we reset per-thread ignores
around signal handlers, so we tried to report it and deadlocked.
But there are more cases: a signal handler can be called
synchronously if it's sent to itself. Or any other report
types would cause deadlocks as well: mutex misuse,
signal handler spoiling errno, etc.
Disable all reports for the duration of fork with
thr->suppress_reports and don't re-enable them around
signal handlers.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101154
This is undefined if SANITIZER_SYMBOLIZER_MARKUP is 1, which is the case for
Fuchsia, and will result in an undefined symbol error. This function is needed
by hwasan for online symbolization, but is not needed for us since we do
offline symbolization.
Differential Revision: https://reviews.llvm.org/D99386
This reapplies 1e1d75b190, which was reverted in ce1a4d5323 due to build
failures.
The unconditional dependencies on clang and llvm-jitlink in
compiler-rt/test/orc/CMakeLists.txt have been removed -- they don't appear to
be necessary, and I suspect they're the cause of the build failures seen
earlier.
Some builders failed with a missing clang dependency. E.g.
CMake Error at /Users/buildslave/jenkins/workspace/clang-stage1-RA/clang-build \
/lib/cmake/llvm/AddLLVM.cmake:1786 (add_dependencies):
The dependency target "clang" of target "check-compiler-rt" does not exist.
Reverting while I investigate.
This reverts commit 1e1d75b190.
Now that page aliasing for x64 has landed, we don't need to worry about
passing tagged pointers to libc, and thus D98875 removed it.
Unfortunately, we still test on aarch64 devices that don't have the
kernel tagged address ABI (https://reviews.llvm.org/D98875#2649269).
All the memory that we pass to the kernel in these tests is from global
variables. Instead of having architecture-specific untagging mechanisms
for this memory, let's just not tag the globals.
Reviewed By: eugenis, morehouse
Differential Revision: https://reviews.llvm.org/D101121
This patch does a few cleanup things:
1. The non-standalone scudo has a problem where GWP-ASan allocations
may not meet alignment requirements where Scudo was requested to have
alignment >= 16. Use the new GWP-ASan API to fix this.
2. The standalone variant loses some debugging information inside of
GWP-ASan because we ask GWP-ASan to allocate an aligned size in the
frontend. This means reports end up with 'UaF on a 16-byte allocation'
for a 1-byte allocation with 16-byte alignment. Also use the new API to
fix this.
3. Add post-alloc hooks for GWP-ASan intercepted allocations, and add
stats tracking for GWP-ASan allocations.
4. Add a small test that checks the alignment of the frontend
allocator, so that it can be used under GWP-ASan torture mode.
5. Add GWP-ASan torture mode as a testing configuration to catch these
regressions.
Depends on D94830, D95889.
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D95884
From a cache perspective it's better to store the header immediately
after loading it. If we delay this operation until after we've
retagged it's more likely that our header will have been evicted from
the cache and we'll need to fetch it again in order to perform the
compare-exchange operation.
For similar reasons, store the deallocation stack before retagging
instead of afterwards.
Differential Revision: https://reviews.llvm.org/D101137
It looks like there's some old version of gcc that doesn't like this
static_assert (I couldn't reproduce the issue with gcc 8, 9 or 10).
Work around the issue by only checking the static_assert under clang,
which should provide sufficient coverage.
Should hopefully fix this buildbot:
https://lab.llvm.org/buildbot/#/builders/112/builds/5356
With AndroidSizeClassMap all of the LSBs are in the range 4-6 so we
only need 2 bits of information per size class. Furthermore we have
32 size classes, which conveniently lets us fit all of the information
into a 64-bit integer. Do so if possible so that we can avoid a table
lookup entirely.
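A hedged sketch of the packing trick (illustrative, not the actual Scudo
code): with 32 classes and LSBs in the range 4-6, two bits per class fit in
one 64-bit constant.
```
#include <cstdint>

// Pack (Lsb - 4) for each of the 32 size classes into 2-bit fields.
constexpr uint64_t packLsbs(const uint8_t (&Lsbs)[32]) {
  uint64_t Packed = 0;
  for (int I = 0; I < 32; ++I)
    Packed |= static_cast<uint64_t>(Lsbs[I] - 4) << (2 * I);
  return Packed;
}

// Recover the LSB for a class without touching a lookup table in memory.
inline uint8_t getLsb(uint64_t Packed, unsigned ClassId) {
  return static_cast<uint8_t>(((Packed >> (2 * ClassId)) & 0x3) + 4);
}
```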
Differential Revision: https://reviews.llvm.org/D101105
In the most common case we call computeOddEvenMaskForPointerMaybe()
from quarantineOrDeallocateChunk(), in which case we need to look up
the class size from the SizeClassMap in order to compute the LSB. Since
we need to do a lookup anyway, we may as well look up the LSB itself
and avoid computing it every time.
While here, switch to a slightly more efficient way of computing the
odd/even mask.
Differential Revision: https://reviews.llvm.org/D101018
The first version of origin tracking tracks only memory stores. Although
this is sufficient for understanding correct flows, it is hard to figure
out where an undefined value is read from. To find reading undefined values,
we still have to do a reverse binary search from the last store in the chain
with printing and logging at possible code paths. This is
quite inefficient.
Tracking memory load instructions can help this case. The main issues of
tracking loads are performance and code size overheads.
With tracking only stores, the code size overhead is 38%,
memory overhead is 1x, and cpu overhead is 3x. In practice #load is much
larger than #store, so both code size and cpu overhead increase. The
first blocker is code size overhead: linking fails if we inline load
tracking. The workaround is using external function calls to propagate
metadata. This is also the workaround ASan uses. The cpu overhead
is ~10x. This is a trade off between debuggability and performance,
and will be used only when debugging cases that tracking only stores
is not enough.
Reviewed By: gbalats
Differential Revision: https://reviews.llvm.org/D100967
Since we already have a tagged pointer available to us, we can just
extract the tag from it and avoid an LDG instruction.
Differential Revision: https://reviews.llvm.org/D101014
Now that we have a more efficient implementation of storeTags(),
we should start using it from resizeTaggedChunk(). With that, plus
a new storeTag() function, resizeTaggedChunk() can be made generic,
and so can prepareTaggedChunk(). Make it so.
Now that the functions are generic, move them to combined.h so that
memtag.h no longer needs to know about chunks.
Differential Revision: https://reviews.llvm.org/D100911
DC GZVA can operate on multiple granules at a time (corresponding to
the CPU's cache line size) so we can generally expect it to be faster
than STZG in a loop.
Differential Revision: https://reviews.llvm.org/D100910
An empty macro that expands to just `... else ;` can get
warnings from some compilers (e.g. GCC's -Wempty-body).
Reviewed By: cryptoad, vitalybuka
Differential Revision: https://reviews.llvm.org/D100693
If these sizes do not match, asan will not work as expected. Previously, we added compile-time checks for non-iOS platforms. We check at run time for iOS because we get the max VM size from the kernel at run time.
rdar://76477969
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D100784
This test is from @MaskRay's comment on D69428. That patch is looking to
break this behavior. If we go with D69428, I hope we will have some
workaround for this test, or an explicit test update included in that patch.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100906
sanitizer_linux_libcdep.cpp doesn't build for Linux sparc (which has minimal
support but can build) after D98926. I wasn't aware because the file didn't
mention `__sparc__`.
While here, add the relevant support since it does not add complexity
(the D99566 approach). Adds an explicit `#error` for unsupported
non-Android Linux and FreeBSD architectures.
ThreadDescriptorSize is only used by lsan to scan thread-specific data keys in
the thread control block.
On TLS Variant II architectures (i386/x86_64/s390/sparc), our dl_iterate_phdr
based approach can cover the region from the first byte of the static TLS block
(static TLS surplus) to the thread pointer.
We just need to extend the range to include the first few members of struct
pthread. offsetof(struct pthread, specific_used) satisfies the requirement and
has not changed since 2007-05-10. We don't need to update ThreadDescriptorSize
for each glibc version.
Technically we could use the 524/1552 for x86_64 as well, but there is a
potential risk that large applications with thousands of shared object
dependencies may dislike the time complexity increase if there are many
threads, so I don't make that simplification for now.
Differential Revision: https://reviews.llvm.org/D100892
The previous check was wrong because it only checks that the LLVM CMake
directory exists. However, it's possible that the directory exists but
the `LLVMConfig.cmake` file does not. When this happens we would
incorrectly try to include the non-existent file.
To fix this we make the check stricter by checking that the file
we want to include actually exists.
This is a follow up to fd28517d87.
rdar://76870467
If these sizes do not match, asan will not work as expected.
If possible, assert at compile time that the VM size is less than or equal to
the mmap range. If a compile-time assert is not possible, check at run time
(for iOS).
rdar://76477969
Reviewed By: delcypher, yln
Differential Revision: https://reviews.llvm.org/D100239
We previously shrunk the mmap range size on iOS, but those settings got inherited by Apple Silicon Macs.
Don't shrink the VM range on Apple Silicon Macs since we have access to the full range.
Also don't shrink the VM range for iOS simulators because they have the same range as the host OS, not the simulated OS.
rdar://75302812
Reviewed By: delcypher, kubamracek, yln
Differential Revision: https://reviews.llvm.org/D100234