While implementing support for the float128 routines on x86_64, I noticed
that __builtin_isinf() was returning true for 128-bit floating point
values that are not infinite when compiling with GCC and using the
compiler-rt implementation of the soft-float comparison functions.
After stepping through the assembly, I discovered that this was caused by
GCC assuming a sign-extended 64-bit -1 result, but our implementation
returns an enum (which then has zeroes in the upper bits) and therefore
causes the comparison with -1 to fail.
Fix this by using a CMP_RESULT typedef and adding a static_assert that it
matches the GCC soft-float comparison return type when compiling with GCC
(GCC has a __libgcc_cmp_return__ mode that can be used for this purpose).
Also move the 3 copies of the same code to a shared .inc file.
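For reference, a minimal sketch of the shape this takes (illustrative names, not the exact compiler-rt source):
```
// Use a word-sized result type instead of an enum so that GCC's
// "compare against -1" codegen sees the sign-extended value it expects.
typedef long CMP_RESULT;

#if defined(__GNUC__) && !defined(__clang__)
// GCC exposes the return mode of its soft-float comparison routines;
// check that CMP_RESULT has the same size.
typedef int GCC_CMP_RESULT __attribute__((__mode__(__libgcc_cmp_return__)));
static_assert(sizeof(GCC_CMP_RESULT) == sizeof(CMP_RESULT),
              "CMP_RESULT must match the libgcc comparison return type");
#endif
```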
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D98205
COMPILER_RT_TSAN_DEBUG_OUTPUT enables TSAN_COLLECT_STATS,
which changes layout of runtime structs (some structs contain
stats when the option is enabled).
It's not OK to build the runtime with the define but the tests without it.
The error is detected by build_consistency_stats/nostats.
Fix this by defining TSAN_COLLECT_STATS for tests to match the runtime.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101386
Commit efd254b636 ("tsan: fix deadlock in pthread_atfork callbacks")
fixed another deadlock related to atfork handling.
But builders with DCHECKs enabled reported failures of
pthread_atfork_deadlock2.c and pthread_atfork_deadlock3.c tests
related to the fact that we hold runtime locks on interceptor exit:
https://lab.llvm.org/buildbot/#/builders/70/builds/6727
This issue is somewhat inherent to the current approach,
we indeed execute user code (atfork callbacks) with runtime lock held.
Refactor fork handling so that we do not run user code (atfork callbacks)
with runtime locks held. This change does this by installing our
own atfork callbacks during runtime initialization.
Atfork callbacks run in LIFO order, so the expectation is that
our callbacks run last, right before the actual fork.
This way we lock runtime mutexes around fork, but not around
user callbacks.
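Roughly, the shape of the change is the following (simplified sketch, not the actual tsan source):
```
#include <pthread.h>

// Installed during runtime initialization, i.e. before any user
// pthread_atfork() calls. Prepare handlers run in LIFO order, so this
// prepare handler runs last, right before fork(), after all user
// callbacks have already run without runtime locks held.
static void BeforeFork() { /* lock runtime mutexes */ }
static void AfterForkParent() { /* unlock runtime mutexes */ }
static void AfterForkChild() { /* unlock/reinitialize runtime mutexes */ }

void InstallAtForkHandlers() {
  pthread_atfork(BeforeFork, AfterForkParent, AfterForkChild);
}
```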
Extend tests to also install after fork callbacks just to cover
more scenarios. Some tests also started reporting real races
that we previously suppressed.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101385
We take report/thread_registry locks around fork.
This means we cannot report any bugs in atfork handlers.
We resolved this by enabling per-thread ignores around fork.
This resolved some of the cases, but not all.
The added test triggers a race report from a signal handler
called from an atfork callback; because we reset per-thread ignores
around signal handlers, we tried to report it and deadlocked.
But there are more cases: a signal handler can be called
synchronously if it's sent to itself. Or any other report
types would cause deadlocks as well: mutex misuse,
signal handler spoiling errno, etc.
Disable all reports for the duration of fork with
thr->suppress_reports and don't re-enable them around
signal handlers.
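Conceptually (a simplified sketch; field and function names follow the description above rather than the exact tsan sources):
```
struct ThreadState {
  int suppress_reports = 0;  // all reports are dropped while > 0
};

void ForkBefore(ThreadState *thr) {
  // ... acquire report/thread_registry locks ...
  thr->suppress_reports++;  // stays set even inside signal handlers
}

void ForkAfter(ThreadState *thr) {
  thr->suppress_reports--;
  // ... release report/thread_registry locks ...
}
```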
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101154
This is undefined if SANITIZER_SYMBOLIZER_MARKUP is 1, which is the case for
Fuchsia, and will result in an undefined symbol error. This function is needed
by hwasan for online symbolization, but is not needed for us since we do
offline symbolization.
Differential Revision: https://reviews.llvm.org/D99386
This reapplies 1e1d75b190, which was reverted in ce1a4d5323 due to build
failures.
The unconditional dependencies on clang and llvm-jitlink in
compiler-rt/test/orc/CMakeLists.txt have been removed -- they don't appear to
be necessary, and I suspect they're the cause of the build failures seen
earlier.
Some builders failed with a missing clang dependency. E.g.
CMake Error at /Users/buildslave/jenkins/workspace/clang-stage1-RA/clang-build \
/lib/cmake/llvm/AddLLVM.cmake:1786 (add_dependencies):
The dependency target "clang" of target "check-compiler-rt" does not exist.
Reverting while I investigate.
This reverts commit 1e1d75b190.
Now that page aliasing for x64 has landed, we don't need to worry about
passing tagged pointers to libc, and thus D98875 removed it.
Unfortunately, we still test on aarch64 devices that don't have the
kernel tagged address ABI (https://reviews.llvm.org/D98875#2649269).
All the memory that we pass to the kernel in these tests is from global
variables. Instead of having architecture-specific untagging mechanisms
for this memory, let's just not tag the globals.
Reviewed By: eugenis, morehouse
Differential Revision: https://reviews.llvm.org/D101121
This patch does a few cleanup things:
1. The non-standalone scudo has a problem where GWP-ASan allocations
may not meet alignment requirements when Scudo was requested to have
alignment >= 16. Use the new GWP-ASan API to fix this.
2. The standalone variant loses some debugging information inside of
GWP-ASan because we ask GWP-ASan to allocate an aligned size in the
frontend. This means reports end up with 'UaF on a 16-byte allocation'
for a 1-byte allocation with 16-byte alignment. Also use the new API to
fix this.
3. Add post-alloc hooks for GWP-ASan intercepted allocations, and add
stats tracking for GWP-ASan allocations.
4. Add a small test that checks the alignment of the frontend
allocator, so that it can be used under GWP-ASan torture mode.
5. Add GWP-ASan torture mode as a testing configuration to catch these
regressions.
Depends on D94830, D95889.
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D95884
From a cache perspective it's better to store the header immediately
after loading it. If we delay this operation until after we've
retagged it's more likely that our header will have been evicted from
the cache and we'll need to fetch it again in order to perform the
compare-exchange operation.
For similar reasons, store the deallocation stack before retagging
instead of afterwards.
Differential Revision: https://reviews.llvm.org/D101137
It looks like there's some old version of gcc that doesn't like this
static_assert (I couldn't reproduce the issue with gcc 8, 9 or 10).
Work around the issue by only checking the static_assert under clang,
which should provide sufficient coverage.
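The workaround looks roughly like this (illustrative names, not necessarily the exact source):
```
typedef long CMP_RESULT;  // word-sized result, as before

#if defined(__clang__)
// Only clang evaluates the check; that is enough to keep the two
// definitions from drifting apart.
typedef int GCC_CMP_RESULT __attribute__((__mode__(__libgcc_cmp_return__)));
static_assert(sizeof(GCC_CMP_RESULT) == sizeof(CMP_RESULT),
              "CMP_RESULT must match the libgcc comparison return type");
#endif
```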
Should hopefully fix this buildbot:
https://lab.llvm.org/buildbot/#/builders/112/builds/5356
With AndroidSizeClassMap all of the LSBs are in the range 4-6 so we
only need 2 bits of information per size class. Furthermore we have
32 size classes, which conveniently lets us fit all of the information
into a 64-bit integer. Do so if possible so that we can avoid a table
lookup entirely.
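A sketch of the packing trick (illustrative, not the exact scudo code): with LSBs in [4, 6], storing LSB - 4 needs only 2 bits per class, and 32 classes times 2 bits fit in one 64-bit constant.
```
#include <stdint.h>

constexpr uint64_t PackLsbTable(const uint8_t (&Lsb)[32]) {
  uint64_t Packed = 0;
  for (int I = 0; I < 32; ++I)
    Packed |= static_cast<uint64_t>(Lsb[I] - 4) << (2 * I);
  return Packed;
}

inline uint8_t LsbForClass(uint64_t Packed, unsigned ClassId) {
  return 4 + ((Packed >> (2 * ClassId)) & 0x3);  // no table lookup needed
}
```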
Differential Revision: https://reviews.llvm.org/D101105
In the most common case we call computeOddEvenMaskForPointerMaybe()
from quarantineOrDeallocateChunk(), in which case we need to look up
the class size from the SizeClassMap in order to compute the LSB. Since
we need to do a lookup anyway, we may as well look up the LSB itself
and avoid computing it every time.
While here, switch to a slightly more efficient way of computing the
odd/even mask.
Differential Revision: https://reviews.llvm.org/D101018
The first version of origin tracking tracks only memory stores. Although
this is sufficient for understanding correct flows, it is hard to figure
out where an undefined value is read from. To find reads of undefined values,
we still have to do a reverse binary search from the last store in the chain
with printing and logging at possible code paths. This is
quite inefficient.
Tracking memory load instructions can help with this case. The main issues of
tracking loads are performance and code size overheads.
With tracking only stores, the code size overhead is 38%,
memory overhead is 1x, and cpu overhead is 3x. In practice #load is much
larger than #store, so both code size and cpu overhead increase. The
first blocker is code size overhead: link fails if we inline tracking
loads. The workaround is using external function calls to propagate
metadata. This is also the workaround ASan uses. The cpu overhead
is ~10x. This is a trade-off between debuggability and performance,
and will be used only when debugging cases where tracking only stores
is not enough.
Reviewed By: gbalats
Differential Revision: https://reviews.llvm.org/D100967
Since we already have a tagged pointer available to us, we can just
extract the tag from it and avoid an LDG instruction.
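For illustration (assuming AArch64 MTE, where the logical tag lives in pointer bits 59:56; this is a sketch, not the scudo source):
```
#include <stdint.h>

inline uint8_t ExtractTag(uintptr_t TaggedPtr) {
  // Read the tag straight from the tagged pointer instead of reloading it
  // from tag memory with an LDG instruction.
  return static_cast<uint8_t>((TaggedPtr >> 56) & 0xf);
}
```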
Differential Revision: https://reviews.llvm.org/D101014
Now that we have a more efficient implementation of storeTags(),
we should start using it from resizeTaggedChunk(). With that, plus
a new storeTag() function, resizeTaggedChunk() can be made generic,
and so can prepareTaggedChunk(). Make it so.
Now that the functions are generic, move them to combined.h so that
memtag.h no longer needs to know about chunks.
Differential Revision: https://reviews.llvm.org/D100911
DC GZVA can operate on multiple granules at a time (corresponding to
the CPU's cache line size) so we can generally expect it to be faster
than STZG in a loop.
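A hedged sketch of the pattern (AArch64 MTE; the surrounding scudo plumbing and alignment handling are omitted, and the assembler may need the memtag extension enabled):
```
#include <stdint.h>

// Zero and tag [Begin, End), assuming both are aligned to the DC GZVA
// block size reported by DCZID_EL0 (usually a whole cache line), so the
// loop takes far fewer iterations than a 16-byte STZG loop would.
inline void StoreZeroTags(uintptr_t Begin, uintptr_t End) {
  uint64_t Dczid;
  __asm__ volatile("mrs %0, dczid_el0" : "=r"(Dczid));
  const uintptr_t BlockSize = 4u << (Dczid & 0xf);  // block size in bytes
  for (uintptr_t P = Begin; P < End; P += BlockSize)
    __asm__ volatile("dc gzva, %0" : : "r"(P) : "memory");
}
```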
Differential Revision: https://reviews.llvm.org/D100910
An empty macro that expands to just `... else ;` can get
warnings from some compilers (e.g. GCC's -Wempty-body).
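A generic illustration with hypothetical macro names (not the actual code being fixed):
```
#define MAYBE_LOG_EMPTY(X)                 // expands to nothing
#define MAYBE_LOG(X) do { } while (0)      // expands to a real statement

int Example(int Cond) {
  if (Cond)
    return 1;
  else
    MAYBE_LOG(Cond);  // fine; MAYBE_LOG_EMPTY(Cond) would leave `else ;`
                      // behind and trigger -Wempty-body
  return 0;
}
```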
Reviewed By: cryptoad, vitalybuka
Differential Revision: https://reviews.llvm.org/D100693
If these sizes do not match, asan will not work as expected. Previously, we added compile-time checks for non-iOS platforms. We check at run time for iOS because we get the max VM size from the kernel at run time.
rdar://76477969
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D100784
This test is from @MaskRay's comment on D69428. That patch is looking to
break this behavior. If we go with D69428, I hope we will have some
workaround for this test or include an explicit test update in the patch.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D100906
sanitizer_linux_libcdep.cpp doesn't build for Linux sparc after D98926
(sparc has only minimal support, but it could build). I wasn't aware because the file didn't mention
`__sparc__`.
While here, add the relevant support since it does not add complexity
(the D99566 approach). Adds an explicit `#error` for unsupported
non-Android Linux and FreeBSD architectures.
ThreadDescriptorSize is only used by lsan to scan thread-specific data keys in
the thread control block.
On TLS Variant II architectures (i386/x86_64/s390/sparc), our dl_iterate_phdr
based approach can cover the region from the first byte of the static TLS block
(static TLS surplus) to the thread pointer.
We just need to extend the range to include the first few members of struct
pthread. offsetof(struct pthread, specific_used) satisfies the requirement and
has not changed since 2007-05-10. We don't need to update ThreadDescriptorSize
for each glibc version.
Technically we could use the 524/1552 for x86_64 as well, but there is a potential
risk that large applications with thousands of shared object dependencies may
dislike the time complexity increase if there are many threads, so I don't make
the simplification for now.
Differential Revision: https://reviews.llvm.org/D100892
The previous check was wrong because it only checks that the LLVM CMake
directory exists. However, it's possible that the directory exists but
the `LLVMConfig.cmake` file does not. When this happens we would
incorrectly try to include the non-existent file.
To fix this we make the check stricter by checking that the file
we want to include actually exists.
This is a follow up to fd28517d87.
rdar://76870467
If these sizes do not match, asan will not work as expected.
If possible, assert at compile time that the VM size is less than or equal to the mmap range.
If a compile-time assert is not possible, check at run time (for iOS).
rdar://76477969
Reviewed By: delcypher, yln
Differential Revision: https://reviews.llvm.org/D100239
We previously shrunk the mmap range size on iOS, but those settings got inherited by Apple Silicon Macs.
Don't shrink the VM range on Apple Silicon Macs since we have access to the full range.
Also don't shrink vm range for iOS simulators because they have the same range as the host OS, not the simulated OS.
rdar://75302812
Reviewed By: delcypher, kubamracek, yln
Differential Revision: https://reviews.llvm.org/D100234
As mentioned in https://gcc.gnu.org/PR100114 , glibc starting with the
https://sourceware.org/git/?p=glibc.git;a=commit;h=6c57d320484988e87e446e2e60ce42816bf51d53
change doesn't define SIGSTKSZ and MINSIGSTKSZ macros to constants, but to sysconf function call.
sanitizer_posix_libcdep.cpp has
static const uptr kAltStackSize = SIGSTKSZ * 4; // SIGSTKSZ is not enough.
which is generally fine; it just means that when SIGSTKSZ is not a compile-time constant, kAltStackSize will be initialized later.
The problem is that kAltStackSize is used in SetAlternateSignalStack which is called very early, from .preinit_array
initialization, i.e. far before file scope variables are constructed, which means it is not initialized and
mmapping 0 will fail:
==145==ERROR: AddressSanitizer failed to allocate 0x0 (0) bytes of SetAlternateSignalStack (error code: 22)
Here is one possible fix; another one could be to make kAltStackSize a preprocessor macro if _SC_SIGSTKSZ is defined
(but perhaps with an automatic const variable initialized to it so that sysconf isn't called at least twice
during SetAlternateSignalStack).
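One way the fix can be realized (a simplified sketch; the actual patch may differ in details):
```
#include <signal.h>
#include <stddef.h>

// Compute the value on first use instead of in a file-scope initializer,
// so it is valid even when SetAlternateSignalStack() runs from
// .preinit_array, before dynamic initializers have run.
static size_t GetAltStackSize() {
  // SIGSTKSZ may expand to a sysconf() call on newer glibc, so it is no
  // longer guaranteed to be a compile-time constant.
  return SIGSTKSZ * 4;  // SIGSTKSZ alone is not enough.
}

void SetAlternateSignalStack() {
  stack_t altstack = {};
  altstack.ss_size = GetAltStackSize();
  // ... mmap altstack.ss_sp and call sigaltstack(&altstack, nullptr) ...
}
```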
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D100645
In order to integrate libFuzzer with a dynamic symbolic execution tool
Sydr we need to print loaded file paths.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D100303
... so that FreeBSD specific GetTls/glibc specific pthread_self code can be
removed. This also helps FreeBSD arm64/powerpc64 which don't have GetTls
implementation yet.
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS blocks. Find the one
with `dlpi_tls_modid==1` as one of the initially loaded modules, then find
consecutive ranges. The boundaries give us addr and size.
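A simplified sketch of the collection step (glibc/_GNU_SOURCE assumed; error handling and the merging of consecutive ranges are omitted):
```
#define _GNU_SOURCE 1
#include <link.h>
#include <stddef.h>

struct TlsBlock {
  void *begin;
  size_t size;
};

static int FindMainTlsBlock(struct dl_phdr_info *info, size_t, void *data) {
  if (info->dlpi_tls_modid != 1)
    return 0;  // not the initially loaded module; keep iterating
  TlsBlock *blk = static_cast<TlsBlock *>(data);
  blk->begin = info->dlpi_tls_data;  // may be null on old glibc/musl/FreeBSD
  blk->size = 0;
  for (int i = 0; i < info->dlpi_phnum; ++i)
    if (info->dlpi_phdr[i].p_type == PT_TLS)
      blk->size = info->dlpi_phdr[i].p_memsz;  // size of the PT_TLS segment
  return 1;  // found it; stop iterating
}

void GetMainStaticTlsBlock(TlsBlock *blk) {
  dl_iterate_phdr(FindMainTlsBlock, blk);
}
```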
This allows us to drop the glibc internal `_dl_get_tls_static_info` and
`InitTlsSize`. However, huge glibc x86-64 binaries with numerous shared objects
may observe a time complexity penalty, so exclude them for now. Use the simplified
method with non-Android Linux for now, but in theory this can be used with *BSD
and potentially other ELF OSes.
This removal of RISC-V `__builtin_thread_pointer` makes the code compilable with
more compiler versions (added in Clang in 2020-03, added in GCC in 2020-07).
This simplification enables D99566 for TLS Variant I architectures.
Note: as of musl 1.2.2 and FreeBSD 12.2, dlpi_tls_data returned by
dl_iterate_phdr is not desired: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254774
This can be worked around by using `__tls_get_addr({modid,0})` instead
of `dlpi_tls_data`. The workaround can be shared with the workaround for glibc<2.25.
This fixes some tests on Alpine Linux x86-64 (musl)
```
test/lsan/Linux/cleanup_in_tsd_destructor.c
test/lsan/Linux/fork.cpp
test/lsan/Linux/fork_threaded.cpp
test/lsan/Linux/use_tls_static.cpp
test/lsan/many_tls_keys_thread.cpp
test/msan/tls_reuse.cpp
```
and `test/lsan/TestCases/many_tls_keys_pthread.cpp` on glibc aarch64.
The number of sanitizer test failures does not change on FreeBSD/amd64 12.2.
Differential Revision: https://reviews.llvm.org/D98926
While attempting to roll the latest Scudo in Fuchsia, some issues
arose. While trying to debug them, it appeared that `DCHECK`s were
also never exercised in Fuchsia. This CL fixes the following
problems:
- the size of a block in the TransferBatch class must be a multiple
of the compact pointer scale. In some cases, it wasn't true, which
led to obscure crashes. Now, we round up `sizeof(TransferBatch)`.
This only materialized in Fuchsia due to the specific parameters
of the `DefaultConfig`;
- 2 `DCHECK` statements in Fuchsia were incorrect;
- `map()` & co. require a size multiple of a page (as enforced in
Fuchsia `DCHECK`s), which wasn't the case for `PackedCounters`.
- In the Secondary, a parameter was marked as `UNUSED` while it is
actually used.
Differential Revision: https://reviews.llvm.org/D100524
This change adds convenient composed constants to be used for tsan_read_try_lock annotations, reducing the boilerplate at the instrumentation site.
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D99595
Do not hold the free/live thread list lock longer than necessary.
This change speeds up the following benchmark 10x.
#include <thread>
#include <vector>

constexpr int kTopThreads = 50;
constexpr int kChildThreads = 20;
constexpr int kChildIterations = 8;

void Thread() {
  for (int i = 0; i < kChildIterations; ++i) {
    std::vector<std::thread> threads;
    for (int i = 0; i < kChildThreads; ++i)
      threads.emplace_back([](){});
    for (auto& t : threads)
      t.join();
  }
}

int main() {
  std::vector<std::thread> threads;
  for (int i = 0; i < kTopThreads; ++i)
    threads.emplace_back(Thread);
  for (auto& t : threads)
    t.join();
}
Differential Revision: https://reviews.llvm.org/D100348
The GNU assembler can't parse `.arch_extension ...` before a `;`.
So instead uniformly use raw string syntax with separate lines
instead of `;` separators in the assembly code.
Reviewed By: pcc
Differential Revision: https://reviews.llvm.org/D100413
Allow test contents to be copied before execution by using
`%ld_flags_rpath_so`, `%ld_flags_rpath_exe`, and `%dynamiclib`
substitutions.
rdar://76302416
Differential Revision: https://reviews.llvm.org/D100240
This will allow us to make macOS-specific changes more easily. Because Apple Silicon Macs also run on AArch64, it was easy to confuse them with iOS.
rdar://75302812
Reviewed By: yln
Differential Revision: https://reviews.llvm.org/D100157
Mark the test as unsupported to bring the bot online. Could probably be
permanently fixed by using one of the workarounds already present in
compiler-rt.
The current variable name isn't used anywhere else, which indicates it's
a typo. Let's fix it before someone copy+pastes it somewhere else.
Reviewed By: Jim
Differential Revision: https://reviews.llvm.org/D39157
ASan declares these functions as strongly-defined, which results in
'duplicate symbol' errors when trying to replace them in user code when
linking the runtimes statically.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D100220
D99763 fixed `SizeClassAllocatorLocalCache::drain` but with the
assumption that `BatchClassId` is 0 - which is currently true. I would
rather not make the assumption so that if we ever change the ID of
the batch class, the loop would still work. Since `BatchClassId` is
used more often in `local_cache.h`, introduce a constant so that we
don't have to specify `SizeClassMap::` every time.
Differential Revision: https://reviews.llvm.org/D100062
After a follow-up change (D98332) this header can be included at the same time
as fenv.h when running the tests. To avoid enum members conflicting with
the macros/enums defined in the host fenv.h, prefix them with CRT_.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D98333
These tests were added in:
1daa48f00559e422c90b
malloc_zero.c and realloc_too_big.c fail when only
leak sanitizer is enabled.
http://lab.llvm.org:8011/#/builders/59/builds/1635
(also in an armv8 32 bit build)
(I would XFAIL them but the same test is run with
address and leak sanitizer enabled and that one does
pass)
Fixes the ASan RISC-V memory mapping (originally introduced by D87580 and
D87581). This should be an improvement both in terms of first principles
soundness and observed test failures --- test failures would occur
non-deterministically depending on the ASLR random offset.
On RISC-V Linux (64-bit), `TASK_UNMAPPED_BASE` is currently defined as
`PAGE_ALIGN(TASK_SIZE / 3)`. The non-power-of-two divisor makes the result
be the not very round number 0x1555556000. That address had to be further
rounded to ensure page alignment after the shadow scale shifting is applied.
Still, that value explains why the mapping table may look less regular than
expected.
Further cleanups:
- Moved the mapping table comment, to ensure that the two Linux/AArch64
tables stayed together;
- Removed mention of Sv48. Neither the original mapping nor this one are
compatible with an actual Linux Sv48 address space (mainline Linux still
operates Sv48 in Sv39 mode). A future patch can improve this;
- Removed the additional comments, for consistency.
Differential Revision: https://reviews.llvm.org/D97646
Previously it wasn't possible to configure a standalone compiler-rt
build if the `LLVMConfig.cmake` file isn't present in a shipped
toolchain.
This patch adds a fallback behaviour for when `LLVMConfig.cmake` is not
available in the toolchain being used for configure. The fallback
behaviour mocks out the bare minimum required to make a configure
succeed when the host is Darwin. Support for other platforms could
be added in future patches.
The new code path is taken either in one of the following cases:
* `llvm-config` is not available.
* `llvm-config` is available but it provides an invalid path for the CMake files.
The motivation here is to be able to generate the compiler-rt lit test
suites for an arbitrary LLVM toolchain and then run the tests against
it.
The invocation to do this looks something like:
```
CC=/path/to/cc \
CXX=/path/to/c++ \
cmake \
-G Ninja \
-DLLVM_CONFIG_PATH=/path/to/llvm-config \
-DCOMPILER_RT_INCLUDE_TESTS=ON \
/path/to/llvm-project/compiler-rt
# Note we don't compile compiler-rt in this workflow.
bin/llvm-lit -v test/path/to/generated/test_suite
```
A possible alternative approach is to configure the
`cmake/modules/LLVMConfig.cmake.in` file in the LLVM source tree
and then include it. This approach was not taken because it is more
complicated.
An interesting side benefit of this patch is that it is now
possible to configure on Darwin without `llvm-config` being available
by configuring with `-DLLVM_CONFIG_PATH=""`. This moves us a step
closer to a world where no LLVM build artefacts are required to
build compiler-rt.
rdar://76016632
Differential Revision: https://reviews.llvm.org/D99621
When doing a standalone compiler-rt build we currently rely on
getting information from the `llvm-config` binary. Previously
we would rely on calling `llvm-config --src-root` to find the
LLVM sources. Unfortunately the returned path could easily be wrong
if the sources were built on another machine.
Now that compiler-rt is part of a monorepo we can easily fix this
problem by finding the LLVM source tree next to `compiler-rt` in
the monorepo. We do this regardless of whether or not the `llvm-config`
binary is available which moves us one step closer to not requiring
`llvm-config` to be available.
To try to avoid breaking anyone who relies on the current behavior,
if the path assuming the monorepo layout doesn't exist we invoke
`llvm-config --src-root` to get the path. A deprecation warning is
emitted if this path is taken because we should remove this path
in the future given that other runtimes already assume the monorepo
layout.
We also now emit a warning if `LLVM_MAIN_SRC_DIR` does not exist.
The intention is that this should be a hard error in future but
to avoid breaking existing users we'll keep this as a warning
for now.
rdar://76016632
Differential Revision: https://reviews.llvm.org/D99620
This was reverted by f176803ef1 due to
Ubuntu 16.04 x86-64 glibc 2.23 problems.
This commit additionally calls `__tls_get_addr({modid,0})` to work around the
dlpi_tls_data==NULL issues for glibc<2.25
(https://sourceware.org/bugzilla/show_bug.cgi?id=19826)
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS blocks. Find the one
with `dlpi_tls_modid==1` as one of the initially loaded modules, then find
consecutive ranges. The boundaries give us addr and size.
This allows us to drop the glibc internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method with non-Android Linux for
now, but in theory this can be used with *BSD and potentially other ELF OSes.
This simplification enables D99566 for TLS Variant I architectures.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
Differential Revision: https://reviews.llvm.org/D98926
The check was removed in D99786 as it seems that quarantine is
irrelevant for the just-created allocator. However, there are internal
issues with tagged memory access.
We should be able to fix iterateOverChunks for tagging later.
Existing implementations took up to 30 minutes to execute on my setup.
Now it's more convenient to debug a single test.
Reviewed By: cryptoad
Differential Revision: https://reviews.llvm.org/D99786
Linux-only for now. Some mac bits stubbed out, but not tested.
Good enough for the tiny_race.c example at
https://clang.llvm.org/docs/ThreadSanitizer.html :
$ out/gn/bin/clang -fsanitize=thread -g -O1 tiny_race.c
$ while true; do ./a.out || echo $? ; done
While here, also make `-fsanitize=address` work for .c files.
Differential Revision: https://reviews.llvm.org/D99795
This change adds a SimpleFastHash64 variant of SimpleFastHash which allows call sites to specify a starting value and get a 64 bit hash in return. This allows a hash to be "resumed" with more data.
A later patch needs this to be able to hash a sequence of module-relative values one at a time, rather than just a region of memory.
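A toy illustration of the "resumable" idea (not the actual libFuzzer implementation; FNV-1a is used here only to keep the sketch short):
```
#include <stddef.h>
#include <stdint.h>

uint64_t SimpleFastHash64(const uint8_t *Data, size_t Size,
                          uint64_t Initial = 0) {
  uint64_t H = Initial ? Initial : 0xcbf29ce484222325ull;  // FNV offset basis
  for (size_t I = 0; I < Size; ++I) {
    H ^= Data[I];
    H *= 0x100000001b3ull;  // FNV prime
  }
  return H;
}

// Usage: hash a sequence of chunks one at a time by feeding each result
// back in as the starting value of the next call.
// uint64_t H = SimpleFastHash64(Chunk1, Len1);
// H = SimpleFastHash64(Chunk2, Len2, H);
```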
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D94510
Trying to build the builtins code fails because `arm64_32_SOURCES` is
missing. Setting it to the same list used for `aarch64_SOURCES` solves
that problem and allow the builtins to compile for that architecture.
Additionally, arm64_32 is added as a possible architecture for watchos
platforms.
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D99690
On 64-bit systems with small VMAs (e.g. 39-bit) we can't use
SizeClassAllocator64 parameterized with size class maps containing a large
number of classes, as that will make the allocator region size too small
(< 2^32). Several tests were already disabled for Android because of this.
This patch provides the correct allocator configuration for RISC-V
(riscv64), generalizes the gating condition for tests that can't be enabled
for small VMA systems, and tweaks the tests that can be made compatible with
those systems to enable them.
I think the previous gating on Android should instead be AArch64+Android, so
the patch reflects that.
Differential Revision: https://reviews.llvm.org/D97234
The previous code may underestimate the static TLS surplus part, which may cause
false positives in LeakSanitizer if a dynamically loaded module uses the surplus
and there is an allocation only referenced by a thread's TLS.
With D98926, many_tls_keys_pthread.cpp appears to be working.
On glibc 2.30-0ubuntu2, swapcontext.cpp and Linux/fork_and_leak.cpp work fine
but they strangely fail on clang-cmake-aarch64-full
(https://lab.llvm.org/buildbot/#/builders/7/builds/2240).
Disable them for now.
Note: check-lsan was recently enabled on AArch64 in D98985. A test takes
10+ seconds. We should figure out the bottleneck.
```
/b/sanitizer-x86_64-linux/build/llvm-project/compiler-rt/test/memprof/TestCases/test_terse.cpp:11:11: error: CHECK: expected string not found in input
// CHECK: MIB:[[STACKID:[0-9]+]]/1/40.00/40/40/20.00/20/20/[[AVELIFETIME:[0-9]+]].00/[[AVELIFETIME]]/[[AVELIFETIME]]/0/0/0/0
^
<stdin>:1:1: note: scanning from here
MIB:StackID/AllocCount/AveSize/MinSize/MaxSize/AveAccessCount/MinAccessCount/MaxAccessCount/AveLifetime/MinLifetime/MaxLifetime/NumMigratedCpu/NumLifetimeOverlaps/NumSameAllocCpu/NumSameDeallocCpu
^
<stdin>:4:1: note: possible intended match here
MIB:134217729/1/40.00/40/40/20.00/20/20/7.00/7/7/1/0/0/0
```
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS ranges. Find the one
with `dlpi_tls_modid==1` as one of the initially loaded modules, then find
consecutive ranges. The boundaries give us addr and size.
This allows us to drop the glibc internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method with non-Android Linux for
now, but in theory this can be used with *BSD and potentially other ELF OSes.
In the future, we can move `ThreadDescriptorSize` code to lsan (and consider
intercepting `pthread_setspecific`) to avoid hacks in generic code.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
Differential Revision: https://reviews.llvm.org/D98926
Userspace page aliasing allows us to use middle pointer bits for tags
without untagging them before syscalls or accesses. This should enable
easier experimentation with HWASan on x86_64 platforms.
Currently stack, global, and secondary heap tagging are unsupported.
Only primary heap allocations get tagged.
Note that aliasing mode will not work properly in the presence of
fork(), since heap memory will be shared between the parent and child
processes. This mode is non-ideal; we expect Intel LAM to enable full
HWASan support on x86_64 in the future.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98875
Make TSan runtime initialization and finalization hooks work
even if these hooks are not built in the main executable. When these
hooks are defined in another library that is not directly linked against
the TSan runtime (e.g., Swift runtime) we cannot rely on the "strong-def
overriding weak-def" mechanics and have to look them up via `dlsym()`.
Let's also define hooks that are easier to use from C-only code:
```
extern "C" void __tsan_on_initialize();
extern "C" int __tsan_on_finalize(int failed);
```
For now, these will call through to the old hooks. Eventually, we want
to adopt the new hooks downstream and remove the old ones.
This is part of the effort to support Swift Tasks (async/await and
actors) in TSan.
rdar://74256720
Reviewed By: vitalybuka, delcypher
Differential Revision: https://reviews.llvm.org/D98810
Userspace page aliasing allows us to use middle pointer bits for tags
without untagging them before syscalls or accesses. This should enable
easier experimentation with HWASan on x86_64 platforms.
Currently stack, global, and secondary heap tagging are unsupported.
Only primary heap allocations get tagged.
Note that aliasing mode will not work properly in the presence of
fork(), since heap memory will be shared between the parent and child
processes. This mode is non-ideal; we expect Intel LAM to enable full
HWASan support on x86_64 in the future.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98875
Supported ctime_r, fgets, getcwd, get_current_dir_name, gethostname,
getrlimit, getrusage, strcpy, time, inet_pton, localtime_r,
getpwuid_r, epoll_wait, poll, select, sched_getaffinity
Most of them work by calling their non-origin versions directly.
This is a part of https://reviews.llvm.org/D95835.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D98966
Supported strrchr, strrstr, strto*, recvmmsg, recvmsg, nanosleep,
memchr, snprintf, socketpair, sprintf, getsockname, getsockopt,
gettimeofday, getpeername.
strcpy was added because the test of sprintf needs it. It will be
committed by D98966. Please ignore it when reviewing.
This is a part of https://reviews.llvm.org/D95835.
Reviewed By: gbalats
Differential Revision: https://reviews.llvm.org/D99109
The function works like MapDynamicShadow, except that it creates aliased
memory to the right of the shadow. The main use case is for HWASan
aliasing mode, which gets fast IsAlias() checks by exploiting the fact
that the upper bits of the shadow base and aliased memory match.
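For illustration, the kind of check this layout enables (the constant and names here are hypothetical, not the actual hwasan code):
```
#include <stdint.h>

extern uintptr_t ShadowBase;  // base returned by the premapping step

// Hypothetical shift chosen so that the shadow and the alias regions share
// all bits above it; then "is this an aliased heap address?" is just a
// shift and a compare.
constexpr int kAliasRegionCheckShift = 44;

inline bool IsAlias(uintptr_t Addr) {
  return (Addr >> kAliasRegionCheckShift) ==
         (ShadowBase >> kAliasRegionCheckShift);
}
```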
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98369
`check-lsan` passed on an aarch64-*-linux machine.
Unsupport `many_tls_keys_pthread.cpp` for now: it requires GetTls to include
`specific_1stblock` and `specific` in `struct pthread`.
Differential Revision: https://reviews.llvm.org/D98985
The main use case for this change is HWASan aliasing mode, which premaps
the alias space adjacent to the dynamic shadow. With this change, the
primary allocator can allocate from the alias space instead of a
separate region.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98293
The main use case for this change is HWASan aliasing mode, which premaps
the alias space adjacent to the dynamic shadow. With this change, the
primary allocator can allocate from the alias space instead of a
separate region.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98293
x86_64 aliasing mode will use fewer than 8 bits for tags, so refactor
existing code to remove hard-coded 0xff and 8 values.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98072
Subsequent patches will implement page-aliasing mode for x86_64, which
will initially only work for the primary heap allocator. We force
callback instrumentation to simplify the initial aliasing
implementation.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98069
If we don't specify the C++ version in these tests, it could cause compile errors because the compiler could default to an older C++ standard.
rdar://75247244
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D98913
-mbranch-protection protects the LR on the stack with PAC.
When the frames are walked the LR needs to be cleared.
This inline assembly will later be replaced with a new builtin.
Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D98008
If producing libraries with an arch suffix (i.e. if
LLVM_ENABLE_PER_TARGET_RUNTIME_DIR isn't set), we append the
architecture name. However, for arm, clang doesn't look for libraries
with the full architecture name, but only looks for "arm" and "armhf".
Try to deduce what the full target triple might have been, and use
that for deciding between "arm" and "armhf".
This tries to reapply this bit from D98173, that had to be reverted
in 7b153b43d3 due to affecting how
the builtins themselves are compiled, not only affecting the output
file name.
Differential Revision: https://reviews.llvm.org/D98452
InternalScopedString uses InternalMmapVector internally
so it can be resized dynamically as needed.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D98751
An implementation of `__sanitizer::BufferedStackTrace::UnwindImpl` is
provided per sanitizer, but there isn't one for sanitizer-common. In
non-optimized builds of the sanitizer-common tests that becomes a problem:
the test `sanitizer_stacktrace_test.cpp` won't have a reference to that
method optimized away, causing linking errors. This patch provides a dummy
implementation, which fixes those builds.
Differential Revision: https://reviews.llvm.org/D96956
As reported in D96348 <https://reviews.llvm.org/D96348>, the
`Posix/regex_startend.cpp` test `FAIL`s on Solaris because
`REG_STARTEND` isn't defined. It's a BSD extension not present everywhere.
E.g. AIX doesn't have it, too.
Fixed by wrapping the test in `#ifdef REG_STARTEND`.
Tested on `amd64-pc-solaris2.11`, `sparcv9-sun-solaris2.11`, and
`x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D98425
On Darwin, MallocNanoZone may log after execv, which messes up this test.
Disable MallocNanoZone for this test since we don't use it anyway with asan.
This environment variable should only affect Darwin and not change behavior on other platforms.
rdar://74992832
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D98735
size() is inconsistent with length().
In most size() use cases we can replace InternalScopedString with
InternalMmapVector.
Remove non-constant data() to avoid direct manipulations of internal
buffer. append() should be enough to modify InternalScopedString.
This fixes detection when linking isn't supported (i.e. while building
builtins the first time).
Since 8368e4d54c, after setting
CMAKE_TRY_COMPILE_TARGET_TYPE to STATIC_LIBRARY, this isn't strictly
needed, but is good for correctness anyway (and in case that commit
ends up reverted).
Differential Revision: https://reviews.llvm.org/D98737
Also use this in ReadBinaryName which currently is producing
warnings.
Keep pragmas for silencing warnings in sanitizer_unwind_win.cpp,
as that can be called more frequently.
Differential Revision: https://reviews.llvm.org/D97726
Android's native bridge (i.e. AArch64 emulator) doesn't support TBI so
we need a way to disable TBI on Linux when targeting the native bridge.
This can also be used to test the no-TBI code path on Linux (currently
only used on Fuchsia), or make Scudo compatible with very old
(pre-commit d50240a5f6ceaf690a77b0fccb17be51cfa151c2 from June 2013)
Linux kernels that do not enable TBI.
Differential Revision: https://reviews.llvm.org/D98732
Since we are looking to remove the old Scudo, we have to have a .so for
parity purposes as some platforms use it.
I tested this on Fuchsia & Linux, not on Android though.
Differential Revision: https://reviews.llvm.org/D98456
On 64-bit systems with small VMAs (e.g. 39-bit) we can't use
`SizeClassAllocator64` parameterized with size class maps containing a
large number of classes, as that will make the allocator region size too
small (< 2^32). Several tests were already disabled for Android because
of this.
This patch provides the correct allocator configuration for RISC-V
(riscv64), generalizes the gating condition for tests that can't be
enabled for small VMA systems, and tweaks the tests that can be made
compatible with those systems to enable them.
Differential Revision: https://reviews.llvm.org/D97234
-mbranch-protection protects the LR on the stack with PAC.
When the frames are walked the LR needs to be cleared.
This inline assembly will later be replaced with a new builtin.
Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D98008
Previously, that configuration only used the generic sources, in
addition to the couple specifically chosen arm/mingw files.
Differential Revision: https://reviews.llvm.org/D98547
The existing value of 0x1000 sets the IXE bit (Inexact floating-point exception
trap enable), but we really want to be setting IXC, bit 4:
Inexact cumulative floating-point exception bit. This bit is set to 1 to
indicate that the Inexact floating-point exception has occurred since 0 was
last written to this bit.
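For reference, the two FPSCR bit positions involved (values as described above):
```
#define FPSCR_IXC (1u << 4)   /* Inexact cumulative exception flag,  0x10   */
#define FPSCR_IXE (1u << 12)  /* Inexact exception trap enable,      0x1000 */
```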
Reviewed By: kongyi, peter.smith
Differential Revision: https://reviews.llvm.org/D98353
The inlining of this function needs to be disabled as it is part of the
inspected stack traces. Its string representation will look different
depending on whether it was inlined or not, which will cause its string
comparison to fail.
When it was inlined in only one of the two execution stacks,
minimize_two_crashes.test failed on SystemZ. For details see
https://bugs.llvm.org/show_bug.cgi?id=49152.
Reviewers: Ulrich Weigand, Matt Morehouse, Arthur Eubanks
Differential Revision: https://reviews.llvm.org/D97975
Right now, when you have an invalid memory address, asan would just crash and not offer much useful info.
This patch attempts to give a bit more detail about the access.
Differential Revision: https://reviews.llvm.org/D98280
Some linux distributions produce versioned llvm-symbolizer binaries,
e.g. my llvm-11 installation puts the symbolizer binary at
/usr/bin/llvm-symbolizer-11.0.0 . However if you then try to run
a binary containing ASAN with
ASAN_SYMBOLIZER_PATH=..../llvm-symbolizer-FOO , it will fail on startup
with "isn't a known symbolizer".
Although it is possible to work around this by setting up symlinks,
that's kind of ugly - supporting versioned binaries is a nicer solution.
(There are now multiple stack overflow and blog posts talking about
this exact issue :) .)
Originally added in:
https://reviews.llvm.org/D8285
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D97682
If a log message is triggered between execv and child, this test fails.
In the meantime, disable the test to unblock CI
rdar://74992832
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D98453
Attempting to build a standalone libFuzzer in Fuchsia's default toolchain for the purpose of cross-compiling the unit tests revealed a number of not-quite-proper type conversions. Fuchsia's toolchain includes `-std=c++17` and `-Werror`, among others, leading to many errors like `-Wshorten-64-to-32`, `-Wimplicit-float-conversion`, etc.
Most of these have been addressed by simply making the conversion explicit with a `static_cast`. These typically fell into one of two categories: 1) conversions between types where high precision isn't critical, e.g. the "energy" calculations for `InputInfo`, and 2) conversions where the values will never reach the bits being truncated, e.g. `DftTimeInSeconds` is not going to exceed 136 years.
The major exception to this is the number of features: there are several places that treat features as `size_t`, and others as `uint32_t`. This change makes the decision to cap the features at 32 bits. The maximum value of a feature as produced by `TracePC::CollectFeatures` is roughly:
(NumPCsInPCTables + ValueBitMap::kMapSizeInBits + ExtraCountersBegin() - ExtraCountersEnd() + log2(SIZE_MAX)) * 8
It's conceivable for extremely large targets and/or extra counters that this limit could be reached. This shouldn't break fuzzing, but it will cause certain features to collide and lower the fuzzer's overall precision. To address this, this change adds a warning to TracePC::PrintModuleInfo about excessive feature size if it is detected, and recommends refactoring the fuzzer into several smaller ones.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D97992
Don't normalize arm architecture names; doing that loses the ability
to pick the right implementation of builtins for each architecture
variant. When building compiler-rt builtins as part of a
runtimes build, builtins for multiple armv* variants could be built
in the same directory, and with the simplified architecture name,
they'd all be built in the same directory, overlapping each other.
1. PGOMemOPSizeOpt grabs only the first, up to five (by default) entries from
the value profile metadata and preserves the remaining entries for the fallback
memop call site. If there are more than five entries, the rest of the entries
would get dropped. This is fine for PGOMemOPSizeOpt itself as it only promotes
up to 3 (by default) values, but potentially not for other downstream passes
that may use the value profile metadata.
2. PGOMemOPSizeOpt originally assumed that only values 0 through 8 are kept
track of. When the range buckets were introduced, it was changed to skip the
range buckets, but since it does not grab all entries (only five), if some range
buckets exist in the first five entries, it could potentially cause fewer
promotion opportunities (eg. if 4 out of 5 were range buckets, it may be able to
promote up to one non-range bucket, as opposed to 3.) Also, combined with 1, it
means that wrong entries may be preserved, as it didn't correctly keep track of
which entries were skipped.
To fix this, PGOMemOPSizeOpt now grabs all the entries (up to the maximum number
of value profile buckets), keeps track of which entries were skipped, and
preserves all the remaining entries.
Differential Revision: https://reviews.llvm.org/D97592
When building builtins, the toolchain might not yet be at a stage
when linking a test application works yet, as builtins aren't
available. Therefore set CMAKE_TRY_COMPILE_TARGET_TYPE to STATIC_LIBRARY,
to avoid failing the compiler sanity check.
Setting CMAKE_TRY_COMPILE_TARGET_TYPE to STATIC_LIBRARY has the risk
of making checks for library availability succeed falsely (e.g.
indicating that libs would be available that really aren't, as the
tests don't do any linking), but the builtins library doesn't try to
link against any external libraries (and only produces static libraries
anyway), so it should be safe here.
This avoids having to set CMAKE_C_COMPILER_WORKS when bootstrapping a
cross toolchain, when building the builtins.
Differential Revision: https://reviews.llvm.org/D91334
The paciasp and autiasp instructions are only accepted by recent
compilers, but have the same encoding as hint instructions, so we can
use the hint mnemonic to support older compilers.
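The encodings being relied on (AArch64; shown here as a small sketch with illustrative macro names):
```
#if defined(__aarch64__)
// These pairs share the same encoding, so older assemblers that reject the
// PAuth mnemonics still accept the hint form:
//   paciasp == hint #25
//   autiasp == hint #29
#define PACIASP_COMPAT "hint #25\n"
#define AUTIASP_COMPAT "hint #29\n"

void Example() {
  __asm__ volatile(PACIASP_COMPAT /* ... body ... */ AUTIASP_COMPAT);
}
#endif
```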
This avoids the `__NR_gettimeofday` syscall number, which does not exist on 32-bit musl (it has `__NR_gettimeofday_time32`).
This switched Android to `clock_gettime` as well, which should work according to the old code before D96925.
Tested on Alpine Linux x86-64 (musl) and FreeBSD x86-64.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D98121
All check-tsan tests fail on aarch64-*-linux because HeapMemEnd() > ShadowBeg()
for the following code path:
```
#if defined(__aarch64__) && !HAS_48_BIT_ADDRESS_SPACE
ProtectRange(HeapMemEnd(), ShadowBeg());
```
Restore the behavior before D86377 for aarch64-*-linux.
The LR is stored to an off-stack spill area where it is vulnerable.
"paciasp" adds an auth code to the LR while "autiasp" verifies it, so
the LR can't be modified in the spill area.
Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard",
run on Armv8.3 capable hardware with PAuth.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D98009
On FreeBSD the sys/timeb.h header has a #warning that it's deprecated.
However, we need to include this header here, so silence this warning that
is printed multiple times otherwise.
Reviewed By: dim
Differential Revision: https://reviews.llvm.org/D94963
I accidentally committed the wrong version of this patch which didn't
actually enable the hooks for FreeBSD. Fixing the typo allows the tests
to actually pass.
This corresponds to getArchNameForCompilerRTLib in clang; any
32 bit x86 architecture triple (except on android, but those
exceptions are already handled in compiler-rt on a different level)
get the compiler rt library names with i386; arm targets get either
"arm" or "armhf". (Mapping to "armhf" is handled in the toplevel
CMakeLists.txt.)
Differential Revision: https://reviews.llvm.org/D98173
This is a minor issue because the TargetValue parameter of `__llvm_profile_instrument_memop`
is usually small and cannot exceed 2**31 at all.
Differential Revision: https://reviews.llvm.org/D97640
There is no centralized store of information related to secondary
allocations. Moreover the allocations themselves become inaccessible
when the allocation is freed in order to implement UAF detection,
so we can't store information there to be used in case of UAF
anyway.
Therefore our storage location for tracking stack traces of secondary
allocations is a ring buffer. The ring buffer is copied to the process
creating the crash dump when a fault occurs.
The ring buffer is also used to store stack traces for primary
deallocations. Stack traces for primary allocations continue to be
stored inline.
In order to support the scenario where an access to the ring buffer
is interrupted by a concurrently occurring crash, the ring buffer is
accessed in a lock-free manner.
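A minimal sketch of the lock-free position update (illustrative; the real buffer layout and what gets stored differ):
```
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Entry {
  uintptr_t Ptr;
  uint32_t AllocTrace, DeallocTrace;
};

constexpr size_t kNumEntries = 1 << 14;  // power of two so & works as modulo
Entry RingBuffer[kNumEntries];
std::atomic<uint64_t> Pos{0};

void Record(const Entry &E) {
  // Writers claim a slot with an atomic increment; a crash handler can read
  // the buffer at any point without taking a lock and simply tolerates the
  // occasional partially written entry.
  uint64_t Slot = Pos.fetch_add(1, std::memory_order_relaxed) & (kNumEntries - 1);
  RingBuffer[Slot] = E;
}
```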
Differential Revision: https://reviews.llvm.org/D94212
Go requires 47 bits VA for tsan.
Go will run race_detector testcases unless tsan warns about "unsupported VMA range"
Author: mzh (Meng Zhuo)
Reviewed-in: https://reviews.llvm.org/D98238
This patch enhances the secondary allocator to be able to detect buffer
overflow, and (on hardware supporting memory tagging) use-after-free
and buffer underflow.
Use-after-free detection is implemented by setting memory page
protection to PROT_NONE on free. Because this must be done immediately
rather than after the memory has been quarantined, we no longer use the
combined allocator quarantine for secondary allocations. Instead, a
quarantine has been added to the secondary allocator cache.
Buffer overflow detection is implemented by aligning the allocation
to the right of the writable pages, so that any overflows will
spill into the guard page to the right of the allocation, which
will have PROT_NONE page protection. Because this would require the
secondary allocator to produce a header at the correct position,
the responsibility for ensuring chunk alignment has been moved to
the secondary allocator.
Buffer underflow detection has been implemented on hardware supporting
memory tagging by tagging the memory region between the start of the
mapping and the start of the allocation with a non-zero tag. Due to
the cost of pre-tagging secondary allocations and the memory bandwidth
cost of tagged accesses, the allocation itself uses a tag of 0 and
only the first four pages have memory tagging enabled.
This is a reland of commit 7a0da88943 which was reverted in commit
9678b07e42. This reland includes the following changes:
- Fix the calculation of BlockSize which led to incorrect statistics
returned by mallinfo().
- Add -Wno-pedantic to silence GCC warning.
- Optionally add some slack at the end of secondary allocations to help
work around buggy applications that read off the end of their
allocation.
Differential Revision: https://reviews.llvm.org/D93731
A RISC-V implementation of `internal_clone` was introduced in D87573, as
part of the RISC-V ASan patch set by @EccoTheDolphin. That function was
never used/tested until I ported LSan for RISC-V, as part of D92403. That
port revealed problems in the original implementation, so I provided a fix
in D92403. Unfortunately, my choice of replacing the assembly with regular
C++ code wasn't correct. The clone syscall arguments specify a separate
stack, so non-inlined calls, spills, etc. aren't going to work. This wasn't
a problem in practice for optimized builds of Compiler-RT, but it breaks
for debug builds. This patch fixes the original problem while keeping the
assembly.
Differential Revision: https://reviews.llvm.org/D96954
Previously, on GLibc systems, the interceptor was calling __compat_regexec
(regexec@GLIBC_2.2.5) instead of the newer __regexec (regexec@GLIBC_2.3.4).
The __compat_regexec strips the REG_STARTEND flag but does not report an
error if other flags are present. This can result in infinite loops for
programs that use REG_STARTEND to find all matches inside a buffer (since
ignoring REG_STARTEND means that the search always starts from the first
character).
The underlying issue is that GLibc's dlsym(RTLD_NEXT, ...) appears to
always return the oldest versioned symbol instead of the default. This
means it does not match the behaviour of dlsym(RTLD_DEFAULT, ...) or the
behaviour documented in the manpage.
It appears a similar issue was encountered with realpath and worked around
in 77ef78a0a5.
See also https://sourceware.org/bugzilla/show_bug.cgi?id=14932 and
https://sourceware.org/bugzilla/show_bug.cgi?id=1319.
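For illustration only (this is a general way to pin a specific glibc symbol version, not necessarily how the interceptor fix is implemented):
```
#define _GNU_SOURCE 1
#include <dlfcn.h>
#include <regex.h>

typedef int (*regexec_fn)(const regex_t *, const char *, size_t,
                          regmatch_t[], int);

regexec_fn GetRealRegexec() {
  // dlvsym() can request regexec@GLIBC_2.3.4 explicitly, unlike
  // dlsym(RTLD_NEXT, ...), which here hands back the old compat version.
  void *Sym = dlvsym(RTLD_NEXT, "regexec", "GLIBC_2.3.4");
  if (!Sym)
    Sym = dlsym(RTLD_NEXT, "regexec");  // fall back on non-glibc systems
  return reinterpret_cast<regexec_fn>(Sym);
}
```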
Fixes https://github.com/google/sanitizers/issues/1371
Reviewed By: #sanitizers, vitalybuka, marxin
Differential Revision: https://reviews.llvm.org/D96348
This reverts commit bde2e56071.
This patch produces a compile failure on linux amd64 environments, when
running:
ninja GotsanRuntimeCheck
I get various build errors:
../rtl/tsan_platform.h:608: error: use of undeclared identifier 'Mapping'
return MappingImpl<Mapping, Type>();
Here's a buildbot with the same failure during stage "check-tsan in gcc
build", there are other unrelated failures in there.
http://lab.llvm.org:8011/#/builders/37/builds/2831
As reported in D93278 post-review symlinking requires privilege escalation on Windows.
Copying is functionally same, so fallback to it for systems that aren't Unix-like.
This is similar to the solution in AddLLVM.cmake.
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D98111
This patch modifies the x86_64 XRay trampolines to fix the CFI information
generated by the assembler. One of the main issues in correcting the CFI
directives is the `ALIGNED_CALL_RAX` macro, which makes the CFA dependent on
the alignment of the stack. However, this macro is not really necessary because
some additional assumptions can be made on the alignment of the stack when the
trampolines are called. The code has been written as if the stack is guaranteed
to be 8-bytes aligned; however, it is instead guaranteed to be misaligned by 8
bytes with respect to a 16-bytes alignment. For this reason, always moving the
stack pointer by 8 bytes is sufficient to restore the appropriate alignment.
Trampolines that are called from within a function as a result of the builtins
`__xray_typedevent` and `__xray_customevent` are necessarely called with the
stack properly aligned so, in this case too, `ALIGNED_CALL_RAX` can be
eliminated.
Fixes https://bugs.llvm.org/show_bug.cgi?id=49060
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D96785
The hackery is due to glibc clock_gettime crashing from preinit_array (D40679).
32-bit musl architectures do not define `__NR_clock_gettime` so the code causes a compile error.
Tested on Alpine Linux x86-64 (musl) and FreeBSD x86-64.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D96925
Looks like the default options for halt_on_error are different between Linux and Mac. Set it to 0 in the test so the behavior is the same on both platforms.
rdar://75110847
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D98089
Two ASan tests currently `FAIL' on Solaris
AddressSanitizer-i386-sunos :: TestCases/large_func_test.cpp
AddressSanitizer-i386-sunos :: TestCases/use-after-delete.cpp
both for the same reason:
error: no check strings found with prefix 'CHECK-SunOS:'
Fixed by adding the appropriate check strings.
Tested on `amd64-pc-solaris2.11` and `x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D97931
Previously, this test only ran on Mac because platforms have different messaging. This diff enables the test for all POSIX platforms.
rdar://75110847
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D98079
Currently, `print_module_map` is only respected for ubsan if it is run in tandem with asan. This patch adds support for this flag in standalone mode. I copied the pattern used to implement this for asan.
Also added a common `print_module_map` lit test for Darwin only. Since the print messages are different per platform, we need to write a regex test to cover them. That test is coming in a separate patch.
rdar://56135732
Reviewed By: vitalybuka, vsk, delcypher
Differential Revision: https://reviews.llvm.org/D97746
The runtimes build uses variables set by add_lit_testsuite to collect
testsuites from all the runtimes.
Differential Revision: https://reviews.llvm.org/D97913
One ASan test currently `XPASS`es on Solaris:
AddressSanitizer-i386-sunos :: TestCases/Posix/unpoison-alternate-stack.cpp
It was originally `XFAIL`ed in D88501 <https://reviews.llvm.org/D88501>
because `longjmp` from a signal handler is highly unportable, warned
against in XPG7, and was not supported by Solaris `libc` at the time.
However, since then support has been added for some cases including the
current one, so the `XFAIL` can go.
Tested on `amd64-pc-solaris2.11` and `x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D97933
One ASan test currently `XPASS`es on Solaris:
AddressSanitizer-i386-sunos :: TestCases/Posix/no_asan_gen_globals.c
It was originally `XFAIL`ed in D88218 <https://reviews.llvm.org/D88218>
because Solaris `ld`, unlike GNU `ld`, doesn't strip local labels. Since
then, the integrated assembler has stopped emitting those local labels, so
the difference becomes moot and the `XFAIL` can go.
Tested on `amd64-pc-solaris2.11` and `x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D97932
We're having flaky failures on this test on the sanitizer slow
buildbot. Not per-run flaky, but it'll be green for a while, then red
for a while. I suspect that changes in codegen are causing the
LLVM_VP_MAX_NUM_VALS_PER_SITE=150 to be above and below the limit
sporadically. The limit on my machine using lld and a non-bootstrapped
compiler is 175, but the bot uses GNU ld and ld.gold at different
points, which could be affecting behaviour.
Change this threshold to LLVM_VP_MAX_NUM_VALS_PER_SITE=130 in order to
try and get it below the failure point, at least for the foreseeable
future.
http://lab.llvm.org:8011/#/builders/37/builds/2744
> OSSpinLock is deprecated, so we are switching to `os_unfair_lock`. However, `os_unfair_lock` isn't available on older OSs, so we keep `OSSpinLock` as fallback.
>
> Also change runtime assumption check to static since they only ever check constant values.
>
> rdar://69588111
>
> Reviewed By: delcypher, yln
>
> Differential Revision: https://reviews.llvm.org/D97509
This reverts commit 71ef54337d.
This knob is useful for downstream users who want some of their
libc functions to not be intercepted.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D97740
`__llvm_prf_vnodes` and `__llvm_prf_names` are used by runtime but not
referenced via relocation in the translation unit.
With `-z start-stop-gc` (LLD 13 (D96914); GNU ld 2.37 https://sourceware.org/bugzilla/show_bug.cgi?id=27451),
the linker does not let `__start_/__stop_` references retain their sections.
Place `__llvm_prf_vnodes` and `__llvm_prf_names` in `llvm.used` to make
them retained by the linker.
This patch changes most existing `UsedVars` cases to `CompilerUsedVars`
to reflect the ideal state - if the binary format properly supports
section based GC (dead stripping), `llvm.compiler.used` should be sufficient.
`__llvm_prf_vnodes` and `__llvm_prf_names` are switched to `UsedVars`
since we want them to be unconditionally retained by both compiler and linker.
Behaviors on COFF/Mach-O are not affected.
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D97649
This patch modifies the x86_64 XRay trampolines to fix the CFI information
generated by the assembler. One of the main issues in correcting the CFI
directives is the `ALIGNED_CALL_RAX` macro, which makes the CFA dependent on
the alignment of the stack. However, this macro is not really necessary because
some additional assumptions can be made on the alignment of the stack when the
trampolines are called. The code has been written as if the stack is guaranteed
to be 8-bytes aligned; however, it is instead guaranteed to be misaligned by 8
bytes with respect to a 16-bytes alignment. For this reason, always moving the
stack pointer by 8 bytes is sufficient to restore the appropriate alignment.
Trampolines that are called from within a function as a result of the builtins
`__xray_typedevent` and `__xray_customevent` are necessarily called with the
stack properly aligned so, in this case too, `ALIGNED_CALL_RAX` can be
eliminated.
Fixes: https://bugs.llvm.org/show_bug.cgi?id=49060
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D96785
compiler-rt needs to use standalone build because of the assumptions
made by its build, but other runtimes can use non-standalone build.
Differential Revision: https://reviews.llvm.org/D97575
compiler-rt needs to use standalone build because of the assumptions
made by its build, but other runtimes can use non-standalone build.
Differential Revision: https://reviews.llvm.org/D97575
`__llvm_prf_vnodes` and `__llvm_prf_names` are used by runtime but not
referenced via relocation in the translation unit.
With `-z start-stop-gc` (D96914 https://sourceware.org/bugzilla/show_bug.cgi?id=27451),
the linker no longer lets `__start_/__stop_` references retain them.
Place `__llvm_prf_vnodes` and `__llvm_prf_names` in `llvm.used` to make
them retained by the linker.
This patch changes most existing `UsedVars` cases to `CompilerUsedVars`
to reflect the ideal state - if the binary format properly supports
section based GC (dead stripping), `llvm.compiler.used` should be sufficient.
`__llvm_prf_vnodes` and `__llvm_prf_names` are switched to `UsedVars`
since we want them to be unconditionally retained by both compiler and linker.
Behaviors on COFF/Mach-O are not affected.
Differential Revision: https://reviews.llvm.org/D97649