Summary:
- Enabling libfuzzer on OpenBSD
- OpenBSD can't support asan, msan, etc., so the tests can't be run.
Patch by David CARLIER
Reviewers: eugenis, phosek, vitalybuka
Reviewed By: vitalybuka
Subscribers: srhines, mgorny, krytarowski, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D44877
llvm-svn: 329631
Summary:
exception_header->exceptionDestructor is a void(*)(void*) function
pointer; however, it can point to destructors like
std::exception::~exception that don't match that type signature.
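As an illustration of the mismatch, here is a minimal standalone sketch (hypothetical names, not the libc++abi code itself):
struct MyException {
  ~MyException() {}
};

// Free-standing stand-in for an ABI-level "complete object destructor".
static void destroyMyException(MyException *E) { E->~MyException(); }

// The exception header stores the destructor as a generic pointer...
using StoredDtor = void (*)(void *);

int main() {
  MyException E;
  StoredDtor D = reinterpret_cast<StoredDtor>(&destroyMyException);
  // ...and later invokes it through that mismatched type. Such calls are
  // undefined behaviour and are the kind of thing indirect-call type
  // checks (e.g. CFI, -fsanitize=function) can flag.
  D(&E);
}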
Reviewers: pcc, vitalybuka
Reviewed By: vitalybuka
Subscribers: kcc, christof, cfe-commits
Differential Revision: https://reviews.llvm.org/D45455
llvm-svn: 329629
Lower is slightly odd. It often doesn't change the type, but the lowerings
do use the new type to decide what code to create. Treat it like a mutation
but provide convenience functions that re-use the existing type.
Re-uses the existing tests:
test/CodeGen/AArch64/GlobalISel/legalize-rem.mir
test/CodeGen/AArch64/GlobalISel/legalize-mul.mir
test/CodeGen/AArch64/GlobalISel/legalize-cmpxchg-with-success.mir
llvm-svn: 329623
Fix PR36484, as suggested:
<quote>
during moves, mark the direct users of the erased things that were phis as "not to be optimized"
</quote>
llvm-svn: 329621
1. Remove max_scratch_backing_memory_byte_size from kernel header
2. Make it a reserved field
3. Ignore it while parsing assembly for backwards compatibility
4. Bump up minor version of kernel header
Differential Revision: https://reviews.llvm.org/D45452
llvm-svn: 329620
Fix a bug where structs transitively containing __weak fields are passed in registers.
This patch fixes a bug in r328731 that caused structs transitively
containing __weak fields to be passed in registers. The patch replaces
the flag RecordDecl::CanPassInRegisters with a 2-bit enum that indicates
whether the struct itself, or any struct containing it, is forced to be
passed indirectly.
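A hypothetical sketch of the 2-bit state (names are illustrative, not necessarily the ones used in the patch):
// Replaces the old boolean RecordDecl::CanPassInRegisters flag.
enum ArgPassingRestrictions {
  // No restriction: the record may be passed in registers.
  CanPassInRegs,
  // This record itself must be passed indirectly.
  CannotPassInRegs,
  // This record, and any record transitively containing it (e.g. via a
  // __weak field), must be passed indirectly.
  CanNeverPassInRegs
};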
rdar://problem/39194693
llvm-svn: 329617
LowerIntUnary, as its name says, has an assert for integer types. But for the bitcast case, one side might be an FP type.
Rather than making sure the function really works for FP types and renaming it, just do really basic splitting directly. LowerIntUnary has the advantage that it can peek through BUILD_VECTOR because every other call is during Lowering, but these calls are during legalization and will be followed by a DAG combine round.
Revert some changes to LowerVectorIntUnary that were originally made just to make these two calls work even in pure integer cases.
This was found purely by compiling the avx512f-builtins.c test from clang, so I've copied over the offending function from that.
llvm-svn: 329616
Summary:
Right now, to disable -fsanitize=kernel-address instrumentation, one needs to use no_sanitize("kernel-address"). Make either no_sanitize("address") or no_sanitize("kernel-address") disable both ASan and KASan instrumentation. Also remove a redundant test.
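For example (a sketch; the functions here are made up):
// With this change, either spelling suppresses both ASan and KASan
// instrumentation for the annotated function.
__attribute__((no_sanitize("address")))
void poke(char *p) { *p = 0; }

__attribute__((no_sanitize("kernel-address")))
void poke_kernel(char *p) { *p = 0; }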
Patch by Andrey Konovalov
Reviewers: eugenis, kcc, glider, dvyukov, vitalybuka
Reviewed By: eugenis, vitalybuka
Differential Revision: https://reviews.llvm.org/D44981
llvm-svn: 329612
This is a code size win in code that takes offset addresses
frequently, such as C++ constructors that typically need to compute
an offset address of a vtable. It reduces the size of Chromium for
Android's .text section by 46KB, or 56KB with ThinLTO (which exposes
more opportunities to use a direct access rather than a GOT access).
Because the addend range is limited in COFF and Mach-O, this is
enabled for ELF only.
Differential Revision: https://reviews.llvm.org/D45199
llvm-svn: 329611
Summary:
r327219 added wrappers to std::sort which randomly shuffle the container before sorting.
This will help in uncovering non-determinism caused by the undefined sorting
order of objects having the same key.
To make use of that infrastructure, we need to invoke llvm::sort instead of std::sort.
Note: This patch is one of a series of patches to replace *all* std::sort calls with llvm::sort.
Refer to the comments section in D44363 for a list of all the required patches.
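A minimal sketch of the replacement pattern (the container here is purely illustrative):
#include "llvm/ADT/STLExtras.h"   // llvm::sort
#include "llvm/ADT/SmallVector.h"

void sortKeys(llvm::SmallVectorImpl<int> &Keys) {
  // With expensive checks enabled, llvm::sort shuffles the range before
  // sorting, so any reliance on the relative order of equal elements
  // shows up as non-deterministic output.
  llvm::sort(Keys.begin(), Keys.end());
}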
Reviewers: sunfish, RKSimon
Reviewed By: sunfish
Subscribers: jfb, dschuff, sbc100, jgravelle-google, aheejin, llvm-commits
Differential Revision: https://reviews.llvm.org/D44873
llvm-svn: 329607
Summary:
Even this version seems to mess with Android somehow. Reverting for now while
I figure out what's up.
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D45450
llvm-svn: 329606
I believe all the pieces are now in place in the backend to make this work correctly. We can either mask the input to 32 bits for pmuludq or shl/ashr for pmuldq and use a regular mul instruction. The backend should combine this to PMULUDQ/PMULDQ and then SimplifyDemandedBits will remove the and/shifts.
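A scalar sketch of the two per-lane patterns involved (illustrative only, not the actual clang codegen):
#include <cstdint>

// PMULUDQ lane: mask each operand to its low 32 bits, then multiply.
uint64_t pmuludqLane(uint64_t A, uint64_t B) {
  return (A & 0xffffffffULL) * (B & 0xffffffffULL);
}

// PMULDQ lane: sign-extend the low 32 bits of each operand (shl then
// arithmetic shr by 32), then multiply.
int64_t pmuldqLane(int64_t A, int64_t B) {
  int64_t LoA = (int64_t)((uint64_t)A << 32) >> 32;
  int64_t LoB = (int64_t)((uint64_t)B << 32) >> 32;
  return LoA * LoB;
}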
Differential Revision: https://reviews.llvm.org/D45421
llvm-svn: 329605
In some cases fast-isel fails to remove the and/shifts and uses blends or conditional moves.
But once masking gets involved, fast-isel aborts on the mask portion and we DAG combine more thoroughly.
llvm-svn: 329604
This allows MachineMemOperand::getSize()'s result to be fed directly into
MachineMemOperand::MachineMemOperand() without a narrowing type conversion
warning.
llvm-svn: 329602
Refactor MachineIRBuilder to separate out its state and allow (optional) constant folding while building.
https://reviews.llvm.org/D45067
This change attempts to do two things:
1) It separates out the state that is stored in the
MachineIRBuilder (InsertionPt, MF, MRI, InsertFunction, etc.) into a
separate object called MachineIRBuilderState.
2) It adds the ability to (optionally) constant fold operations while
building instructions. MachineIRBuilder is now refactored into a
MachineIRBuilderBase which contains lots of non-foldable build methods
and their implementations. Instructions which can be constant
folded/transformed are now in a class called FoldableInstructionBuilder
which uses CRTP to use the implementation of the derived class for
buildBinaryOps. Additionally, buildInstr in the derived class can be
used to implement other kinds of transformations.
Also, because the state is now separate, given a MachineIRBuilder in an
API, if one wishes to use another MachineIRBuilder, a new one can be
constructed locally from that state. For example:
void doFoo(MachineIRBuilder &B) {
  MyCustomBuilder CustomB(B.getState());
  // Use CustomB for building.
}
Reviewed By: aemerson
llvm-svn: 329596
Summary:
Still pursuing the ultimate goal of splitting the Symbolizer code from
RTSanitizerCommon core, allow `BackgroundThread` to work even when not linked
with `sanitizer_stackdepot.cc`. There is no reason this function should pull in
the whole `StackDepot` if symbolization is not supported.
Currently this has no functional change as the depot is always linked anyway.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: kubamracek, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D45296
llvm-svn: 329595
Explicitly include and build lib/Testing/Support from LLVM sources when
doing a stand-alone build. This is necessary since clangd tests started
to depend on the LLVMTestingSupport library, which is neither installed
by LLVM nor built by clang itself.
Since a completely separate build of clang-tools-extra is not supported,
this relies on variables set by clang's CMakeLists.
Differential Revision: https://reviews.llvm.org/D45409
llvm-svn: 329594
While it appears to be correct information based on Intel's optimization manual and Agner's data, it causes perf regressions on a couple of the benchmarks in our internal list.
llvm-svn: 329593
Author: Samuel Pitoiset
ds_read_b128 and ds_write_b128 have been recently enabled
under the amdgpu-ds128 option because the performance benefit
is unclear.
However, using 128-bit loads/stores for the local address space
appears to introduce regressions in tessellation shaders. Not
sure what is broken, but as ds_read_b128/ds_write_b128 are not
enabled by default, just introduce a global option and enable
128-bit only if requested (until it's fixed/used correctly).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105464
llvm-svn: 329591
This patch teaches llvm-mca how to parse code comments in search for special
"markers" used to select regions of code.
Example:
# LLVM-MCA-BEGIN My Code Region
....
# LLVM-MCA-END
The MCAsmLexer now delegates the parsing of code comments to an object
of class MCACommentParser (i.e. an AsmCommentConsumer), which searches
for begin/end code region markers.
A comment starting with substring "LLVM-MCA-BEGIN" marks the beginning of a new
region of code. A comment starting with substring "LLVM-MCA-END" marks the end
of the last region.
This implementation doesn't allow regions to overlap. Each region can have an
optional description; internally, each region is identified by a range of source
code locations (SMLoc).
MCInst objects are added to a region R only if the source location for the
MCInst is in the range of locations specified by R.
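A rough sketch of that containment rule (hypothetical types; the tool itself uses SMLoc ranges):
struct Region {
  unsigned BeginOffset; // source offset of the LLVM-MCA-BEGIN comment
  unsigned EndOffset;   // source offset of the LLVM-MCA-END comment
  // An instruction belongs to this region only if its source offset
  // lies within the region's range.
  bool contains(unsigned InstOffset) const {
    return InstOffset >= BeginOffset && InstOffset <= EndOffset;
  }
};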
By default, the tool allocates an implicit "Default" code region which contains
every source location. See the new tests llvm-mca-markers-*.s for a few examples.
A new Backend object is created for every region. So, the analysis is conducted
on every parsed code region. The final report is the union of the reports
generated for every code region. Note that empty regions are skipped.
Special "[#] Code Region - ..." strings are used in the report to mark the
portion which is specific to a code region only. For example, see
llvm-mca-markers-5.s.
Differential Revision: https://reviews.llvm.org/D45433
llvm-svn: 329590
Summary:
This fixes AMDGPU GlobalISel test failures when enabling the AMDGPU
target without any other targets that use GlobalISel.
Reviewers: arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D45353
llvm-svn: 329588
Summary:
Minor style changes to complement D44404:
- make use of a new ErrorBase ctor
- de-duplicate a comment about VS2013 support
Reviewers: eugenis
Subscribers: kubamracek, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D45390
llvm-svn: 329586
Without fast-math flags, the llvm.experimental.vector.reduce.fadd/fmul intrinsics must be expanded in order.
This patch scalarizes the reduction, applying the accumulator at the start of the sequence: ((((Acc + Scl[0]) + Scl[1]) + Scl[2]) + ...) + Scl[NumElts-1]
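A scalar sketch of the in-order expansion (the array form is just for illustration):
// Applies the accumulator first, then adds each element strictly
// left-to-right with no reassociation.
float reduceFAddOrdered(float Acc, const float *Scl, unsigned NumElts) {
  for (unsigned I = 0; I != NumElts; ++I)
    Acc = Acc + Scl[I];
  return Acc;
}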
Differential Revision: https://reviews.llvm.org/D45366
llvm-svn: 329585
amdgcn targets only support HIP, which does not define __CUDA_ARCH__.
This is a partial unroll of r329232 / D45277.
Differential Revision: https://reviews.llvm.org/D45387
llvm-svn: 329584
Summary:
The option is helpful for large projects where it's not feasible to specify the
sources which the user would like to see in the report. Instead, it allows
black-listing specific sources via regular expressions (e.g. it's now possible
to skip all files that have "test" in their names).
This also partially fixes https://bugs.llvm.org/show_bug.cgi?id=34277
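A sketch of the kind of filtering this enables (std::regex is used purely for illustration; this is not the actual implementation):
#include <regex>
#include <string>
#include <vector>

bool isIgnored(const std::string &Filename,
               const std::vector<std::regex> &IgnorePatterns) {
  for (const auto &RE : IgnorePatterns)
    if (std::regex_search(Filename, RE))
      return true; // e.g. a pattern like "test" skips all matching files
  return false;
}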
Reviewers: vsk, morehouse, liaoyuke
Reviewed By: vsk
Subscribers: kcc, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D43907
llvm-svn: 329581
Summary:
The wrapper finds the closest matching compile command using filename heuristics
and makes minimal tweaks so it can be used with the header.
Subscribers: klimek, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D45006
llvm-svn: 329580
Summary:
Calculating the include path from an absolute file path does not always
work for all build systems; e.g. bazel uses symlinks for the build working
directory, so the absolute file paths seen by the editor and by clang
diverge from each other. We need to address this properly in the build
system integration. This patch works around the issue by providing a hook
in URI which allows clients to provide their customized include path.
Reviewers: sammccall
Subscribers: klimek, ilya-biryukov, jkorous-apple, ioeric, MaskRay, cfe-commits
Differential Revision: https://reviews.llvm.org/D45426
llvm-svn: 329578