Reset coverage data on fork().
For memory-mapped mode (coverage_direct=1) this helps avoid data loss
(before this change, two processes would write to the same file simultaneously).
For normal mode, this reduces coverage dump size, because PCs from the parent
process are no longer inherited by the child.
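A minimal sketch of the child-side reset (hypothetical names, not the actual
sanitizer_common interface), assuming the reset is registered via pthread_atfork():

  // Hypothetical sketch: reset coverage state in the child after fork().
  // Names (CovInit, CovResetInChild, cov_pos) are illustrative only.
  #include <pthread.h>
  #include <stddef.h>

  static size_t cov_pos = 0;  // number of coverage PCs recorded so far

  static void CovResetInChild() {
    // Drop PCs inherited from the parent: in direct (mmap) mode this keeps
    // the child from writing into the parent's file, and in normal mode it
    // keeps parent PCs out of the child's dump.
    cov_pos = 0;
  }

  void CovInit() {
    // Register the child-side reset once at startup.
    pthread_atfork(/*prepare=*/nullptr, /*parent=*/nullptr,
                   /*child=*/CovResetInChild);
  }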
llvm-svn: 210180
No longer need to set ANDROID if COMPILER_RT_TEST_TARGET_TRIPLE is
arm-linux-androideabi.
No need to set ANDROID_COMMON_FLAGS; these flags are already in
CMAKE_CXX_FLAGS, which is used in try_compile().
llvm-svn: 210053
Place constants into .rdata when targeting ELF or COFF/PE. This should be
functionally identical; the data is simply placed in a different section.
This is purely a cleanup change.
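As a rough illustration (not the actual builtins sources): declaring a lookup
table const is enough for the toolchain to emit it into a read-only data
section rather than interleaving it with code in .text.

  // Illustrative only: a const table ends up in the read-only data section
  // (.rodata/.rdata depending on the object format) instead of .text.
  static const double kPowersOfTen[4] = {1.0, 10.0, 100.0, 1000.0};

  double ScaleByPowerOfTen(double x, unsigned i) {
    return x * kPowersOfTen[i & 3];
  }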
llvm-svn: 209986
Make the whitespace a bit more uniform in the various assembly routines. This
also makes the assembly files a bit more uniform on the ARM side by explicitly
stating that they use the unified syntax and that the code lives in the text
section (or segment). No functional change.
llvm-svn: 209985
Add a script that is used to deflake inherently flaky tsan tests.
It is invoked from lit tests as:
%deflake %run %t
The script runs the target program up to 10 times, stopping as soon as
a run produces a tsan warning.
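The actual helper is a small script in the tsan test suite; the sketch below
only restates the retry loop in C++ (the "WARNING: ThreadSanitizer" substring
check and the popen-based capture are assumptions, not the script's exact
behavior).

  // Rough restatement of the deflake loop, not the real script.
  #include <stdio.h>
  #include <string>

  // Run `cmd`, capturing stdout and stderr.
  static std::string RunAndCapture(const std::string &cmd) {
    std::string out;
    FILE *pipe = popen((cmd + " 2>&1").c_str(), "r");
    if (!pipe) return out;
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
      out.append(buf, n);
    pclose(pipe);
    return out;
  }

  int main(int argc, char **argv) {
    std::string cmd;
    for (int i = 1; i < argc; ++i) {
      if (i > 1) cmd += ' ';
      cmd += argv[i];
    }
    for (int attempt = 0; attempt < 10; ++attempt) {
      std::string out = RunAndCapture(cmd);
      if (out.find("WARNING: ThreadSanitizer") != std::string::npos) {
        fputs(out.c_str(), stdout);  // pass the report through for FileCheck
        return 0;
      }
    }
    return 1;  // never produced a warning
  }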
llvm-svn: 209898
The optimization is two-fold:
First, the algorithm now uses SSE instructions to handle all 4 shadow slots
at once, which makes processing faster.
Second, if the shadow already contains the same access, the event is not
stored in the trace. This increases the effective trace size; that is,
tsan can remember up to 10x more previous memory accesses.
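A simplified sketch of the same-access check (assuming 32-bit shadow slots so
that all four fit in one SSE register; tsan's real shadow cells are wider and
the actual code differs):

  // Simplified: compare the new access against all 4 shadow slots at once.
  #include <emmintrin.h>  // SSE2
  #include <stdint.h>

  // Returns true if some slot already holds exactly this access, in which
  // case the event is not appended to the trace.
  static inline bool ContainsSameAccess(const uint32_t shadow[4], uint32_t cur) {
    __m128i s = _mm_loadu_si128(reinterpret_cast<const __m128i *>(shadow));
    __m128i c = _mm_set1_epi32(static_cast<int>(cur));
    __m128i eq = _mm_cmpeq_epi32(s, c);  // per-slot equality
    return _mm_movemask_epi8(eq) != 0;   // any match?
  }

Skipping these duplicate events is what stretches the effective trace window
(up to ~10x on the workloads measured below).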
Performance impact:
Before:
[ OK ] DISABLED_BENCH.Mop8Read (2461 ms)
[ OK ] DISABLED_BENCH.Mop8Write (1836 ms)
After:
[ OK ] DISABLED_BENCH.Mop8Read (1204 ms)
[ OK ] DISABLED_BENCH.Mop8Write (976 ms)
But this measures only the fast path.
On large real applications the speedup is ~20%.
Trace size impact:
On app1:
Memory accesses : 1163265870
Including same : 791312905 (68%)
On app2:
Memory accesses : 166875345
Including same : 150449689 (90%)
Filtering out 90% of events means the trace is effectively 10x larger.
llvm-svn: 209897
You can expect the sanitizers to be built under any of the following conditions:
1) CMAKE_C_COMPILER is GCC built to cross-compile to ARM
2) CMAKE_C_COMPILER is Clang built to cross-compile to ARM (ARM is the default target)
3) CMAKE_C_COMPILER is Clang and CMAKE_C_FLAGS contains -target and --sysroot
Differential Revision: http://reviews.llvm.org/D3794
llvm-svn: 209835
The new storage (MetaMap) is based on direct shadow (instead of a hashmap + per-block lists); a conceptual sketch follows the list below.
This solves a number of problems:
- eliminates quadratic behaviour in SyncTab::GetAndLock (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
- eliminates contention in SyncTab
- eliminates contention in internal allocator during allocation of sync objects
- removes a bunch of ad-hoc code in the Java interface
- reduces Java shadow from 2x to 1/2x
- allows memorizing heap block meta info for Java and Go
- allows cleaning up sync object meta info for Go
- which in turn enables the deadlock detector for Go
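A conceptual sketch of the direct-shadow lookup (constants and names are made
up; this is not tsan's actual memory layout, and a 64-bit address space is
assumed): the metadata slot is computed directly from the application address,
so there is no shared hash table to lock and no per-block list to walk.

  // Illustrative only: map an application address to its metadata slot.
  #include <stdint.h>

  static const uintptr_t kMetaShadowBeg = 0x300000000000ull;  // made-up base
  static const uintptr_t kMetaGranularity = 8;  // app bytes covered per slot

  // Each 32-bit slot holds an index of a sync/heap-block metadata object
  // (0 means "no metadata"), giving a meta shadow of 1/2 the app size.
  static inline uint32_t *MetaSlotForAddr(uintptr_t app_addr) {
    return reinterpret_cast<uint32_t *>(
        kMetaShadowBeg + (app_addr / kMetaGranularity) * sizeof(uint32_t));
  }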
llvm-svn: 209810