-print_full_coverage=1 produces a detailed branch coverage dump when run on a single file.
Uses the same infrastructure as the -print_coverage flag, but prints all branches (regardless of coverage status) in an easy-to-parse format.
Usage: for internal use with machine-learning fuzzing models that require detailed coverage information on seed files to generate mutations.
Differential Revision: https://reviews.llvm.org/D85928
Reverts the XFAIL added in b67a2aef8a,
which had no effect.
Adjust the test to make sure all output is dumped to stderr, so that
hopefully I can get a better idea of where/why this is failing.
Remove some redundant checking while here.
This doesn't support -structurizecfg-skip-uniform-regions since that
would require porting LegacyDivergenceAnalysis.
The NPM doesn't support adding a non-analysis pass as a dependency of
another, so I had to add -lowerswitch to some tests or pin them to the
legacy PM.
This is the only RegionPass in tree, so I simply copied the logic for
finding all Regions from the legacy PM's RGManager into
StructurizeCFG::run().
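A rough sketch of that region walk (helper name and ordering are illustrative, not the committed code), assuming RegionInfo is obtained from the FunctionAnalysisManager:

  #include "llvm/Analysis/RegionInfo.h"
  #include <memory>
  #include <vector>

  // Gather every Region nested under (and including) R, depth first, so the
  // function pass can then visit each one, much as the legacy RGManager did.
  static void collectRegions(llvm::Region &R, std::vector<llvm::Region *> &Out) {
    Out.push_back(&R);
    for (const std::unique_ptr<llvm::Region> &Sub : R)
      collectRegions(*Sub, Out);
  }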
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D89026
This makes it pass under the NPM.
The legacy PM ran passes on SCCs in a different order, causing
argpromotion to not trigger on @bar().
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D89889
This diff adds the option -prepend_rpath which inserts an rpath as
the first rpath in the binary.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D89605
This change introduces a GC-parseable lowering for the element atomic
memcpy/memmove intrinsics. This way the runtime can provide an
implementation which can take a safepoint during the copy operation.
See "GC-parseable element atomic memcpy/memmove" thread on llvm-dev
for the background and details:
https://groups.google.com/g/llvm-dev/c/NnENHzmX-b8/m/3PyN8Y2pCAAJ
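Purely for illustration, the shape of a runtime-side implementation this lowering makes possible; the names and the safepoint_poll() hook are hypothetical, not an actual runtime API:

  #include <cstddef>
  #include <cstdint>

  // Stand-in for however the runtime checks whether a safepoint is requested.
  static void safepoint_poll() {}

  // Copy element by element: each element is copied individually (the
  // intrinsic guarantees element-wise atomicity), and the runtime gets a
  // chance to take a safepoint between elements instead of blocking for the
  // duration of the whole copy.
  void gc_elementwise_memcpy(uint32_t *dst, const uint32_t *src, size_t nelems) {
    for (size_t i = 0; i != nelems; ++i) {
      dst[i] = src[i];
      safepoint_poll();
    }
  }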
Differential Revision: https://reviews.llvm.org/D88861
I added a test to verify that the associated symbol did not have errors before
doing the analysis of a call to a component ref, along with a test that
triggers the original problem.
Differential Revision: https://reviews.llvm.org/D90074
The current pattern for vector unrolling takes the native shape to
unroll to at pattern instantiation time, but the native shape might
differ based on the types of the operands. Introduce an
UnrollVectorOptions struct which allows using a function that returns
the native shape based on the operation. Move other unrolling options,
like `filterConstraints`, into this struct.
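A hedged sketch of what such an options struct can look like; member names and exact types are illustrative, not necessarily the upstream API:

  #include "mlir/IR/Operation.h"
  #include "mlir/Support/LogicalResult.h"
  #include "llvm/ADT/SmallVector.h"
  #include <functional>
  #include <optional>

  struct UnrollVectorOptionsSketch {
    // Computes the native shape to unroll to for a given op, so the shape can
    // depend on the operand types instead of being fixed when the pattern is
    // instantiated.
    std::function<std::optional<llvm::SmallVector<int64_t, 4>>(mlir::Operation *)>
        nativeShape;

    // Plays the role of the old `filterConstraints`: restricts which ops the
    // unrolling pattern applies to.
    std::function<mlir::LogicalResult(mlir::Operation *)> filterConstraint;
  };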
Differential Revision: https://reviews.llvm.org/D89744
While implementing inline stack traces on Windows I noticed that the stack
traces in many asan tests included an inlined frame that shouldn't be there.
Currently we get the PC, then do a stack unwind and use the PC to find
the beginning of the stack trace.
In the failing tests, the first entry in the stack trace is inside an inlined
call site that shouldn't be in the stack trace, so replace it with the PC.
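A minimal, illustration-only sketch of the idea with plain types (not the sanitizer's actual data structures):

  #include <cstdint>
  #include <vector>

  // The unwinder's first entry is a return address that may resolve to an
  // unrelated inlined call site; report the already-known exact PC instead.
  void fixupTopFrame(uint64_t exact_pc, std::vector<uint64_t> &frames) {
    if (!frames.empty())
      frames[0] = exact_pc;
  }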
Differential Revision: https://reviews.llvm.org/D89996
For performance reasons the reproducers don't copy the files captured by
the file collector eagerly, but wait until the reproducer needs to be
generated.
This is problematic when LLDB crashes and we have to do all this
signal-unsafe work in the signal handler. This patch uses a similar
trick to clang, which has the driver invoke a new cc1 instance to do all
this work out-of-process.
This patch moves the writing of the mapping file as well as copying over
the reproducers into a separate process spawned when lldb crashes.
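A heavily simplified sketch of the out-of-process idea; the helper flag and overall layout are hypothetical, not LLDB's actual interface:

  #include <unistd.h>

  // In the crash handler we only spawn a helper process; the helper then does
  // the signal-unsafe work (copying the collected files, writing the mapping
  // file) outside the crashing process.
  void SpawnReproducerHelper(const char *lldb_path, const char *repro_dir) {
    pid_t pid = fork();
    if (pid == 0) {
      // Child: re-invoke lldb in a hypothetical "finalize the reproducer" mode.
      execl(lldb_path, lldb_path, "--reproducer-finalize", repro_dir,
            static_cast<char *>(nullptr));
      _exit(127); // exec failed
    }
    // Parent could waitpid(pid, ...) before letting the crash run its course.
  }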
Differential revision: https://reviews.llvm.org/D89600
Add a folder for the case where the ExtractStridedSliceOp source comes from a
chain of InsertStridedSliceOps. Also add a folder for the trivial case where the
ExtractStridedSliceOp is a no-op.
Differential Revision: https://reviews.llvm.org/D89850
Use `LineOffsetMapping::get` directly and remove/inline the helper
`ComputeLineNumbers`, simplifying the callers.
Differential Revision: https://reviews.llvm.org/D89922
It's currently ambiguous in IR whether the source language explicitly
did not want a stack protector (in C, via the function attribute
no_stack_protector) or simply doesn't care, for any given function.
It's common for code that manipulates the stack via inline assembly, or
that has to set up its own stack canary (such as the Linux kernel), to
want to avoid stack protectors in certain functions. In such cases, we've
been bitten by numerous bugs where a callee with a stack protector is
inlined into an __attribute__((__no_stack_protector__)) caller, which
generally breaks the caller's assumptions about not having a stack
protector. LTO exacerbates the issue.
While developers can avoid this by putting all no_stack_protector
functions together in one translation unit and compiling it with
-fno-stack-protector, that is generally not as ergonomic as a function
attribute, and still doesn't work for LTO. See also:
https://lore.kernel.org/linux-pm/20200915172658.1432732-1-rkir@google.com/
https://lore.kernel.org/lkml/20200918201436.2932360-30-samitolvanen@google.com/T/#u
Typically, when inlining a callee into a caller, the caller will be
upgraded in its level of stack protection (see adjustCallerSSPLevel()).
By adding an explicit attribute in the IR when the function attribute is
used in the source language, we can now identify such cases and prevent
inlining. Inlining is now blocked when the callee and caller differ in this
respect, i.e. when one carries `nossp` and the other has `ssp`, `sspstrong`, or `sspreq`.
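For illustration (function names are examples, not from the patch), the kind of source-level pairing where inlining is now blocked:

  // Built with -fstack-protector-strong: carries `sspstrong` in the IR.
  int protected_helper(int x) {
    char buf[64];
    __builtin_snprintf(buf, sizeof(buf), "%d", x);
    return buf[0];
  }

  // Explicit opt-out: carries `nossp` in the IR, so `protected_helper` must
  // not be inlined into it, even under LTO.
  __attribute__((__no_stack_protector__))
  int stack_sensitive_entry(int x) {
    return protected_helper(x);
  }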
Fixes pr/47479.
Reviewed By: void
Differential Revision: https://reviews.llvm.org/D87956
We were returning a default-constructed unique_ptr from TypeSystem.h
for a type for which the compiler does not have a definition. Move the
implementation into the cpp file.
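A minimal sketch of the underlying C++ issue, with hypothetical names:

  #include <memory>

  // Header-style code: the pointee is only forward-declared here.
  class UtilityFunctionLike;

  struct TypeSystemLike {
    // Keep only the declaration in the header: instantiating the returned
    // unique_ptr's default deleter needs the complete type, so the body
    // belongs in the .cpp file where UtilityFunctionLike is fully defined.
    std::unique_ptr<UtilityFunctionLike> GetUtilityFunctionSketch();
  };

  // In the .cpp file (where the pointee is complete):
  //   std::unique_ptr<UtilityFunctionLike>
  //   TypeSystemLike::GetUtilityFunctionSketch() { return {}; }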
On AIX, to support vector types, which must always be 16-byte aligned,
we make alloca return 16-byte-aligned memory.
Differential Revision: https://reviews.llvm.org/D89910
This does not change anything at the moment, but it is needed for
D89170. In that change I am probing a physical SGPR to see if
it is legal. The RC is SReg_32, but the DRC for scratch instructions
is SReg_32_XEXEC_HI, and the test fails.
For a physreg it is sufficient to just check whether the DRC contains
the register. Physregs also do not use subregs, so the subreg handling
below is irrelevant for them.
Differential Revision: https://reviews.llvm.org/D90064
The change in 0ba9843397 changed the behaviour of the build when
using an XL build compiler because `-G` is not a pure linker option:
it also implies `-shared`. This was accounted for in the base CMake
configuration, so an analysis of the change from 0ba9843397 in
relation to a build using Clang (where `-shared` is introduced by CMake)
would not identify the issue. This patch resolves this particular issue
by adding `-shared` alongside `-Wl,-G`.
At the same time, the investigation reveals that several aspects of the
various build configurations are not operating in the manner originally
intended.
The other issue related to the `-G` linker option in the build is that
the removal of it (to avoid unnecessary use of run-time linking) is not
effective for the build using the Clang compiler. This patch addresses
this by adjusting the regular expressions used to remove the broadly-
applied `-G`.
Finally, the issue of specifying the export list with `-Wl,` instead of
a compiler option is flagged with a FIXME comment.
Reviewed By: daltenty, amyk
Differential Revision: https://reviews.llvm.org/D90041
For unknown reasons, this test started failing only on the
llvm-avr-linux bot after 5c20d7db9f2791367b9311130eb44afecb16829c:
http://lab.llvm.org:8011/#/builders/112/builds/365
The error message is not helpful, and I have an email out to the bot
owner to help with debugging. XFAIL it on avr for now.
This was initiated from the uses of MCRegUnitIterator, so while likely
not exhaustive, it's a step forward.
Differential Revision: https://reviews.llvm.org/D89975
I'm not sure whether this can cause actual non-determinism in the
compiler output, but at least it causes non-determinism in the
statistics collected by BasicAA.
Use SetVector to have a predictable iteration order.
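A container-choice illustration only (not the actual BasicAA code):

  #include "llvm/ADT/SetVector.h"
  #include "llvm/IR/Value.h"

  // SetVector keeps set semantics but iterates in insertion order, so anything
  // derived from walking it (statistics, diagnostics) is stable across runs,
  // unlike a pointer-keyed hash set whose order depends on allocation addresses.
  void visitInInsertionOrder(
      const llvm::SmallSetVector<const llvm::Value *, 16> &Objects) {
    for (const llvm::Value *V : Objects)
      (void)V; // process V; visitation order is deterministic
  }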
This partially reverts D85994.
In glibc, elf/dl-sym.c calls the raw `__tls_get_addr` by specifying the
tls_index parameter. Such a call does not have a pairing R_PPC64_TLSGD/R_PPC64_TLSLD.
This is legitimate. Since we cannot distinguish the benign case from cases due
to toolchain issues, we have to be permissive.
Acked by Stefan Pintilie
Avoid some noisy `const_cast`s by making `ContentCache::SourceLineCache`
and `SourceManager::LastLineNoContentCache` both mutable.
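A small sketch of the pattern with simplified types (not the actual SourceManager members):

  #include <memory>
  #include <vector>

  class ContentCacheSketch {
    // `mutable` lets const accessors populate the lazily computed cache
    // without a const_cast at every use site.
    mutable std::unique_ptr<std::vector<unsigned>> SourceLineCache;

  public:
    const std::vector<unsigned> &getLineOffsets() const {
      if (!SourceLineCache)
        SourceLineCache =
            std::make_unique<std::vector<unsigned>>(computeOffsets());
      return *SourceLineCache;
    }

  private:
    std::vector<unsigned> computeOffsets() const {
      return {0}; // placeholder: the real code scans the buffer for newlines
    }
  };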
Differential Revision: https://reviews.llvm.org/D89914
There are two optimizations here:
1. Consider the following code:
FCMPSrr %0, %1, implicit-def $nzcv
%sel1:gpr32 = CSELWr %_, %_, 12, implicit $nzcv
%sub:gpr32 = SUBSWrr %_, %_, implicit-def $nzcv
FCMPSrr %0, %1, implicit-def $nzcv
%sel2:gpr32 = CSELWr %_, %_, 12, implicit $nzcv
This kind of code where we have 2 FCMPs each feeding a CSEL can happen
when we have a single IR fcmp being used by two selects. During selection,
to ensure that there can be no clobbering of nzcv between the fcmp and the
csel, we have to generate an fcmp immediately before each csel is
selected.
However, often we can essentially CSE these together later in MachineCSE.
This doesn't work though if there are unrelated flag-setting instructions
in between the two FCMPs. In this case, the SUBS defines NZCV
but it doesn't have any users, being overwritten by the second FCMP.
Our solution here is to try to convert flag-setting operations in the interval
between two identical FCMPs into non-flag-setting variants, so that CSE will be
able to eliminate one.
2. SelectionDAG imported patterns for arithmetic ops currently select the
flag-setting ops for CSE reasons, and add the implicit-def $nzcv operand
to those instructions. However, if those impdef operands are not marked as
dead, the peephole optimizations are not able to optimize them into
non-flag-setting variants. The optimization here is to find these dead imp-defs and
mark them as such.
This pass is only enabled when optimizations are enabled.
Differential Revision: https://reviews.llvm.org/D89415
If the CUDA version cannot be determined from the version.txt file, attempt to
find the CUDA_VERSION macro in cuda.h.
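For reference, cuda.h encodes the version in CUDA_VERSION as major * 1000 + minor * 10 (e.g. 11010 for 11.1); a rough decoding sketch, not the driver's actual parsing code:

  #include <utility>

  // e.g. 11010 -> {11, 1}
  std::pair<int, int> decodeCudaVersion(int cuda_version) {
    return {cuda_version / 1000, (cuda_version % 1000) / 10};
  }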
This is a follow-up to D89752.
Differential Revision: https://reviews.llvm.org/D89832
CUDA-11.1 does not carry version.txt, which causes clang to assume that it's
CUDA-7.0, previously the only CUDA version without version.txt.
In order to tell CUDA-7.0 apart from the new versions, clang now probes for the
presence of libdevice.10.bc which is not present in the old CUDA versions.
This should keep Clang working for CUDA-11.1.
PR47332: https://bugs.llvm.org/show_bug.cgi?id=47332
Differential Revision: https://reviews.llvm.org/D89752
This patch redesigns the Target::GetUtilityFunctionForLanguage API:
- Use a unique_ptr instead of a raw pointer for the return type.
- Wrap the result in an llvm::Expected instead of using a Status object as an I/O parameter.
- Combine the actions of "getting" and "installing" the UtilityFunction, as they always get called together.
- Pass std::strings instead of const char* and std::move them where appropriate.
There's more room for improvement but I think this tackles the most
prevalent issues with the current API.
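A hedged sketch of the reshaped signature implied by the list above; the exact name and parameter list are assumptions, not the committed API:

  #include "lldb/Expression/UtilityFunction.h"
  #include "lldb/Target/ExecutionContext.h"
  #include "lldb/lldb-enumerations.h"
  #include "llvm/Support/Error.h"
  #include <memory>
  #include <string>

  // Ownership is explicit (unique_ptr), failures carry an llvm::Error, and
  // the "get" and "install" steps become a single call.
  llvm::Expected<std::unique_ptr<lldb_private::UtilityFunction>>
  CreateUtilityFunctionSketch(std::string expression, std::string name,
                              lldb::LanguageType language,
                              lldb_private::ExecutionContext &exe_ctx);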
Differential revision: https://reviews.llvm.org/D90011
These compiler-rt tests should be UNSUPPORTED instead of XFAIL, which seems to be the real intent of the authors.
Reviewed By: vvereschaka
Differential Revision: https://reviews.llvm.org/D89840