Summary: This patch disables implicit builtin knowledge about memcmp-like functions when compiling the program for fuzzing, i.e., when -fsanitize=fuzzer(-no-link) is given. This allows libFuzzer to always intercept memcmp-like functions, since calls to them can no longer be optimized into different forms. This is done by adding a set of flags (-fno-builtin-memcmp and others) in the clang driver. The individual -fno-builtin-* flags previously used in several libFuzzer tests are removed, since the clang driver now adds them automatically.
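A rough sketch of the kind of driver change this implies (the helper name, argument types, and flag list below are illustrative assumptions, not the exact patch):

#include "llvm/ADT/Twine.h"
#include "llvm/Option/ArgList.h"

// Hypothetical sketch: when -fsanitize=fuzzer(-no-link) is enabled, append
// -fno-builtin-* for the memcmp-like functions so the optimizer cannot
// rewrite calls to them and libFuzzer's interceptors always see them.
static void addFuzzerNoBuiltinFlags(const llvm::opt::ArgList &Args,
                                    llvm::opt::ArgStringList &CmdArgs) {
  for (const char *Fn : {"memcmp", "strncmp", "strcmp", "strcasecmp"}) {
    CmdArgs.push_back(Args.MakeArgString(llvm::Twine("-fno-builtin-") + Fn));
  }
}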
Reviewers: morehouse, hctim
Subscribers: cfe-commits, #sanitizers
Tags: #clang, #sanitizers
Differential Revision: https://reviews.llvm.org/D83987
This test is hitting https://bugs.python.org/issue22393, which results in
the lit multiprocessing pool deadlocking and the reproducer job timing
out on GreenDragon.
This patch adds the instruction definitions and MC tests for the 128-bit Binary
Integer Operation instructions introduced in Power10.
Differential Revision: https://reviews.llvm.org/D83516
long double is a 64-bit double-precision type on:
- MSVC (32- and 64-bit x86)
- Android (32-bit x86)
long double is a 128-bit quad-precision type on x86_64 Android.
The assembly variants of the 80-bit builtins are correct, but some of
the builtins are implemented in C and require that long double be the
80-bit type passed via an x87 register.
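A minimal sketch of the guard this implies (checking LDBL_MANT_DIG is an illustrative assumption, not necessarily the exact mechanism used in compiler-rt):

#include <float.h>

// long double is the x87 80-bit extended type exactly when its mantissa is
// 64 bits; IEEE double has 53 and IEEE quad has 113. The C implementations
// of the 80-bit builtins are only valid in the 64-bit-mantissa case.
#if LDBL_MANT_DIG == 64
// ... C implementations that rely on the 80-bit long double ...
#endif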
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D82153
Fix failure in Android bots from refactoring in
5d2be1a188 (https://crbug.com/1106482).
We need to make UnmapFromTo available outside sanitizer_common for
calls from the hwasan and asan Linux handling. While here, remove the
declaration of GetHighMemEnd, which is no longer in sanitizer_common.
TargetProcessControl is a new API for communicating with JIT target processes.
It supports memory allocation and access, and inspection of some process
properties, e.g. the target process triple and page size.
Centralizing these APIs allows utilities written against TargetProcessControl
to remain independent of the communication protocol with the target process
(which may be direct memory access/allocation for in-process JITing, or may
involve some form of IPC or RPC).
An initial set of TargetProcessControl-based utilities for lazy compilation is
provided by the TPCIndirectionUtils class.
An initial implementation of TargetProcessControl for in-process JITing
is provided by the SelfTargetProcessControl class.
An example program showing how the APIs can be used is provided in
llvm/examples/OrcV2Examples/LLJITWithTargetProcessControl.
SimplifyCFG was incorrectly reporting to the pass manager that it had not made
changes after folding away a PHI. This is detected in the EXPENSIVE_CHECKS
build when the function's hash changes.
Differential Revision: https://reviews.llvm.org/D83985
This accounts for the fact that Wasm function indices are 32-bit, while in wasm64 we want uniform 64-bit pointers.
Includes reloc types for 64-bit table indices.
Differential Revision: https://reviews.llvm.org/D83729
There was a lot of duplicate code here for checking the VT and
subtarget. Moving it into a helper avoids that.
It also fixes a bug where combineAdd reused Op0/Op1 after a call
to isHorizontalBinOp may have changed them. The new helper function
has its own local versions of Op0/Op1 that aren't shared by other
code.
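A self-contained analog of the bug pattern (illustrative only, not the actual X86 code):

#include <cassert>
#include <utility>

// Like isHorizontalBinOp, this helper takes its operands by reference and
// may rewrite them. Code that caches the original values and keeps using
// them after the call operates on stale operands; keeping local copies
// inside one helper avoids that.
static bool maybeCanonicalize(int &Op0, int &Op1) {
  if (Op0 > Op1) {
    std::swap(Op0, Op1);
    return true;
  }
  return false;
}

int main() {
  int Op0 = 4, Op1 = 1;
  const int Cached0 = Op0;            // stale once the call below fires
  bool Changed = maybeCanonicalize(Op0, Op1);
  assert(!Changed || Cached0 != Op0); // reusing Cached0 here would be wrong
  return 0;
}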
Fixes PR46455.
Reviewed By: spatel, bkramer
Differential Revision: https://reviews.llvm.org/D83971
Summary: libFuzzer intercepts certain library functions such as memcmp/strcmp by defining weak hooks. Weak hooks, however, are called only when another runtime such as ASan is linked. This patch defines libFuzzer's own interceptors, which are linked into the libFuzzer executable when no other runtime is linked, i.e., when -fsanitize=fuzzer is given without other sanitizers.
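A minimal sketch of the interception technique (an assumed shape, not the actual FuzzerInterceptors.cpp; a real interceptor also has to be careful about calls that arrive before the dlsym-based initialization completes):

#include <dlfcn.h>
#include <stddef.h>

// Comparison hook implemented by libFuzzer's value-profile machinery.
extern "C" void __sanitizer_weak_hook_memcmp(void *CallerPC, const void *S1,
                                             const void *S2, size_t N,
                                             int Result);

// Shadow the libc symbol, forward to the real function found via
// dlsym(RTLD_NEXT, ...), and report the comparison to the fuzzer.
extern "C" int memcmp(const void *S1, const void *S2, size_t N) {
  using MemcmpT = int (*)(const void *, const void *, size_t);
  static MemcmpT Real = reinterpret_cast<MemcmpT>(dlsym(RTLD_NEXT, "memcmp"));
  int Result = Real(S1, S2, N);
  __sanitizer_weak_hook_memcmp(__builtin_return_address(0), S1, S2, N, Result);
  return Result;
}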
Reviewers: kcc, morehouse, hctim
Reviewed By: morehouse, hctim
Subscribers: krytarowski, mgorny, cfe-commits, #sanitizers
Tags: #clang, #sanitizers
Differential Revision: https://reviews.llvm.org/D83494
Alternative to D83897. I believe the big change here is that I removed the slow unaligned memory 16 feature.
The downside is that it may adversely affect tuning if someone explicitly targets -march=pentium4 and expects pentium4-tuned code. Of course, pentium4 is so old that our default behavior with the previous settings may not have been the best either.
Reviewed By: echristo, RKSimon
Differential Revision: https://reviews.llvm.org/D83913
This test has been failing on some SDKs for a long time because we lack
a proper way of identifying the SDK version in Lit. Until that is possible,
mark the test as unsupported on Apple to restore the CI.
Some time ago, I introduced shortcut features like dylib-has-no-shared_mutex
to encode whether the deployment target supported shared_mutex (say). This
made the test suite annotations cleaner.
However, the problem with building Lit features on top of other Lit
features is that it's easier for them to become stale, especially when
they are generated programmatically. Furthermore, it makes the bar for
defining configurations from scratch higher, since more features have
to be defined. Instead, I think it's better to put the XFAILs in the
tests directly, which allows cleaning them up with a simple grep.
The patch that introduced handling of iterators implemented as pointers may
cause a crash in some projects because a pointer difference is mistakenly
handled as a pointer decrement. (The similar case for iterators implemented
as class instances is already handled correctly.) This patch fixes this
issue.
The second case that causes a crash is comparison of an iterator
implemented as a pointer with a null pointer. This patch contains a fix for
this issue as well.
The third case that causes a crash is that the checker mistakenly
considers all integers as nonloc::ConcreteInt when handling an increment
or decrement of an iterator implemented as a pointer. This patch adds a
fix for this too.
The last case where crashes were detected is when checking for the success
of an std::advance() operation. Since the modeling of iterators
implemented as pointers is still incomplete, this may result in an
assertion failure. This patch replaces the assertion with an early exit and
adds a FIXME there.
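Illustrative shapes of the problematic code (assumed for illustration, not taken from the patch's test cases):

#include <cstddef>
#include <vector>

// With iterators modeled as raw pointers, all of these must be handled:
std::ptrdiff_t examples(std::vector<int> &V, int *P) {
  int *First = V.data();
  int *Last = V.data() + V.size();
  std::ptrdiff_t Dist = Last - First; // pointer difference, not a decrement
  bool IsNull = (P == nullptr);       // iterator-pointer compared to null
  ++First;                            // increment of an iterator-pointer
  return IsNull ? 0 : Dist;
}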
Differential Revision: https://reviews.llvm.org/D83295
Summary:
This refactors some common support related to shadow memory setup from
asan and hwasan into sanitizer_common. This should not only reduce code
duplication but also make these facilities available for new compiler-rt
uses (e.g. heap profiling).
In most cases the separate copies of the code were either identical, or
at least functionally identical. A few notes:
In ProtectGap, the asan version checked the address against an upper
bound (kZeroBaseMaxShadowStart, which is 2^18). I have created a copy
of kZeroBaseMaxShadowStart in hwasan_mapping.h, with the same value, as
it isn't clear why that code should not do the same check. If it
shouldn't, I can remove this and guard this check so that it only
happens for asan.
In asan's InitializeShadowMemory, in the dynamic shadow case it was
setting __asan_shadow_memory_dynamic_address to 0 (which then sets both
the SHADOW_OFFSET and kLowShadowBeg macros to 0) before calling
FindDynamicShadowStart(). AFAICT this is only needed because
FindDynamicShadowStart utilizes kHighShadowEnd to
get the shadow size, and kHighShadowEnd is a macro invoking
MEM_TO_SHADOW(kHighMemEnd) which in turn invokes:
(((kHighMemEnd) >> SHADOW_SCALE) + (SHADOW_OFFSET))
I.e. it computes the shadow space needed by kHighMemEnd (the shift), and
adds the offset. Since we only want the shadow space here, the earlier
setting of SHADOW_OFFSET to 0 via __asan_shadow_memory_dynamic_address
accomplishes this. In the hwasan version, it simply gets the shadow
space via "MemToShadowSize(kHighMemEnd)", where MemToShadowSize just
does the shift. I've simplified the asan handling to do the same
thing, and therefore was able to remove the setting of the SHADOW_OFFSET
via __asan_shadow_memory_dynamic_address to 0.
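Restating the relationship in code (names mirror the macros above; the scale value is an illustrative assumption):

#include <cstdint>

constexpr uint64_t kShadowScale = 3; // SHADOW_SCALE

// MEM_TO_SHADOW(Addr): shadow address for a given application address.
uint64_t MemToShadow(uint64_t Addr, uint64_t ShadowOffset) {
  return (Addr >> kShadowScale) + ShadowOffset;
}

// MemToShadowSize(Size): shadow space only, no offset, which is all
// FindDynamicShadowStart needs; forcing SHADOW_OFFSET to 0 in the asan path
// achieved the same result indirectly.
uint64_t MemToShadowSize(uint64_t Size) { return Size >> kShadowScale; }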
Reviewers: vitalybuka, kcc, eugenis
Subscribers: dberris, #sanitizers, llvm-commits, davidxl
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D83247
Summary:
1. gcc uses the `-march` and `-mtune` flags to choose the arch and
   pipeline model, but clang does not have an `-mtune` flag; we use
   `-mcpu` to choose both.
2. Add the SiFive e31 and u54 CPUs, which have a default march
   and pipeline model.
3. Specifying `-mcpu` with rocket-rv[32|64] selects the
   pipeline model only and uses the driver's arch-choosing
   logic to get the default arch.
Reviewers: lenary, asb, evandro, HsiangKai
Reviewed By: lenary, asb, evandro
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D71124
Replace std::vector with SmallVector to reduce the number of mallocs.
This method is frequently executed, and the number of elements in the
vector is typically small.
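The shape of the change (the element type and inline capacity are placeholders, not the ones in the patch):

#include "llvm/ADT/SmallVector.h"

// Before: std::vector<unsigned> Elems;  // heap-allocates even for a handful
// After: the first 8 elements live in inline storage, so the typical small
// case performs no malloc at all.
llvm::SmallVector<unsigned, 8> Elems;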
https://reviews.llvm.org/D83920
Although the SIMD spec proposal does not specifically include a
select instruction, the select instruction in MVP WebAssembly is
polymorphic over the selected types, so it is able to work on v128
values when they are enabled. This patch introduces a new variant of
the select instruction for each legal vector type. Additional ISel
patterns are adapted from the SELECT_I32 and SELECT_I64 patterns.
Depends on D83736.
Differential Revision: https://reviews.llvm.org/D83737
The size of VTList that is pushed into this container is usually 1, but
often 6 or 7. Change the vector to SmallVector to eliminate frequent
mallocs. This happens hundreds of thousands of times in each tablegen
execution during the LLVM/clang build.
https://reviews.llvm.org/D83849
This is in preparation for fixing multiple problems with the way AGPR
copies are handled, but this change is NFC itself. First, it's relying
on recursively calling copyPhysReg, which is losing information
necessary to get correct super register handling.
Second, it's constructing a new RegScavenger and doing an O(N^2) walk
on every single sub-spill for every AGPR tuple copy. Third, it's using
the forward form of the scavenger, and not using the preferred
backwards scan.
Many tests use opt's -analyze feature, which does not translate well to
NPM and has better alternatives. The alternative here is to explicitly
add a pass that calls ScalarEvolution::print().
The legacy pass manager RUN lines are otherwise unchanged, but they are now
pinned to the legacy pass manager. For each legacy pass manager RUN, I added a
corresponding NPM RUN using the 'print<scalar-evolution>' pass. For
compatibility with update_analyze_test_checks.py and existing test
CHECKs, 'print<scalar-evolution>' now prints what -analyze prints per
function.
This was generated by the following Python script and failures were
manually fixed up:
import sys

for i in sys.argv:
    with open(i, 'r') as f:
        s = f.read()
    with open(i, 'w') as f:
        for l in s.splitlines():
            # Duplicate each single-line -analyze RUN line: once pinned to
            # the legacy PM, once converted to the NPM
            # print<scalar-evolution> pass.
            if "RUN:" in l and ' -analyze ' in l and '\\' not in l:
                f.write(l.replace(' -analyze ', ' -analyze -enable-new-pm=0 '))
                f.write('\n')
                f.write(l.replace(' -analyze ', ' -disable-output ').replace(' -scalar-evolution ', ' "-passes=print<scalar-evolution>" ').replace(" | ", " 2>&1 | "))
                f.write('\n')
            else:
                f.write(l)
                f.write('\n')
There are a couple failures still in ScalarEvolution under NPM, but
those are due to other unrelated naming conflicts.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D83798
Updating the simd-select.ll tests manually with consistent named
regexps for the register numbers was taking more time than it was
worth, so this patch updates that test file to have autogenerated
output. This is not a significant readability regression because the
tests in that file are all very small.
Depends on D83734.
Differential Revision: https://reviews.llvm.org/D83736
We were previously expanding vselect and matching on the expansion to
generate bitselects, but in some cases the expansion would be further
combined and a bitselect would not get generated. This patch improves
codegen in those cases by legalizing vselect and lowering it to
v128.bitselect. The old pattern that matches the expansion is still
useful for lowering IR that already uses the expansion rather than a
select operation.
Differential Revision: https://reviews.llvm.org/D83734
In an upcoming AMDGPU patch, the scalar cases will be legal and vector
ops should be scalarized, rather than producing a long sequence of
vector ops which will also need to be scalarized.
Use a lazy heuristic that seems to work and improves the thumb2 MVE
test.
This patch fixes the compilation warnings about L not being a reference.
Thanks to Lingda Li for providing the patch.
Differential Revision: https://reviews.llvm.org/D83959
Basic support for variadic-def MIR Statepoint:
- Change TableGen STATEPOINT description to variadic out list
(For self-documentation purposes; by itself it does not affect
code generation in any way).
- Update StatepointOpers helper class to handle variadic defs.
- Update MachineVerifier to properly handle them, too.
With this change, the new Statepoint instruction can be passed through the
backend (excluding ISel) without errors.
Full change set is available at D81603.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D81645