allow bit masking to select various trace features.
bit 0 => Launch tracing (stderr)
bit 1 => timing of runtime (stderr)
bit 2 => detailed launch tracing (stderr)
bit 3 => timing goes to stdout instead of stderr
example: LIBOMPTARGET_KERNEL_TRACE=7 does it all
LIBOMPTARGET_KERNEL_TRACE=5 Launch + details
LIBOMPTARGET_KERNEL_TRACE=2 timings (to stderr)
LIBOMPTARGET_KERNEL_TRACE=10 timings (to stdout)
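As a rough illustration (not the plugin's actual code; the names below are invented for the example), the mask could be decoded like this:

  #include <cstdlib>

  // Hypothetical decoding of LIBOMPTARGET_KERNEL_TRACE.
  struct TraceFlags {
    bool LaunchTrace;     // bit 0
    bool RuntimeTiming;   // bit 1
    bool DetailedLaunch;  // bit 2
    bool TimingToStdout;  // bit 3
  };

  TraceFlags readKernelTraceEnv() {
    unsigned Mask = 0;
    if (const char *Env = std::getenv("LIBOMPTARGET_KERNEL_TRACE"))
      Mask = std::strtoul(Env, nullptr, 10);
    return {(Mask & 1) != 0, (Mask & 2) != 0, (Mask & 4) != 0,
            (Mask & 8) != 0};
  }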
Differential Revision: https://reviews.llvm.org/D96998
Track lanes when processing definitions for marking WQM/WWM.
If all lanes have been defined then marking can stop.
This prevents marking unnecessary instructions as WQM/WWM.
In particular this fixes a bug where values passing through
V_SET_INACTIVE would be marked as requiring WWM.
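A minimal, self-contained sketch of the idea (hypothetical types, not SIWholeQuadMode's actual data structures):

  #include <cstdint>
  #include <vector>

  // Each definition writes some subset of the value's lanes.
  struct Def { uint64_t LaneMask; bool Marked = false; };

  // Mark defs as WQM/WWM, but stop as soon as every lane has been
  // defined; the remaining instructions no longer need to be marked.
  void markDefs(std::vector<Def> &Defs, uint64_t AllLanes) {
    uint64_t Covered = 0;
    for (Def &D : Defs) {
      D.Marked = true;
      Covered |= D.LaneMask;
      if (Covered == AllLanes)
        break;
    }
  }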
Reviewed By: piotr
Differential Revision: https://reviews.llvm.org/D95503
In both ADCE and BDCE (via DemandedBits) we should not remove
instructions that are not guaranteed to return. This issue was
pointed out by fhahn in the recent llvm-dev thread.
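A hedged sketch of the kind of guard this implies in a DCE-style pass (not the exact ADCE/BDCE code; Instruction::willReturn() is the query introduced by the patch below):

  #include "llvm/IR/Instruction.h"

  // An instruction whose result is unused still cannot be deleted if it
  // is not guaranteed to return (e.g. a call that may loop forever).
  static bool safeToRemoveIfDead(const llvm::Instruction &I) {
    return !I.mayHaveSideEffects() && I.willReturn();
  }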
Differential Revision: https://reviews.llvm.org/D96993
Add the following options:
* -fdebug-measure-parse-tree
* -fdebug-pre-fir-tree
Summary of changes:
- Add 2 new frontend actions: DebugMeasureParseTreeAction and DebugPreFIRTreeAction
- Add MeasurementVisitor to FrontendActions.h
- Make reportFatalSemanticErrors return true if there are any fatal errors
- Port most of the `-fdebug-pre-fir-tree` tests to use the new driver if built, otherwise use f18.
Differential Revision: https://reviews.llvm.org/D96884
This moves the willReturn() helper from CallBase to Instruction,
so that it can be used in a more generic manner. This will make
it easier to fix additional passes (ADCE and BDCE), and will give
us one place to change if additional instructions should become
non-willreturn (e.g. there has been talk about handling volatile
operations this way).
I have also included the IntrinsicInst workaround directly in
here, so that it gets applied consistently. (As such this change
is not entirely NFC -- FuncAttrs will now use this as well.)
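Call sites change roughly along these lines (a hedged before/after sketch, not a diff from this patch):

  #include "llvm/IR/Instruction.h"

  static bool mustKeep(const llvm::Instruction &I) {
    // Before: the query only existed on call sites,
    //   if (const auto *CB = llvm::dyn_cast<llvm::CallBase>(&I))
    //     return !CB->willReturn();
    // After: any instruction can be asked directly.
    return !I.willReturn();
  }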
Differential Revision: https://reviews.llvm.org/D96992
This is a simpler variant of D96647. It just adds a straightforward
depth limit with a high cutoff, without introducing complex logic
for BatchAA consistency. It accepts that we may cache a sub-optimal
result if the depth limit is hit.
Eventually this should be more fully addressed by D96647 or similar,
but in the meantime this avoids stack overflows in a cheap way.
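In outline, the change amounts to something like this self-contained sketch (the cutoff value and all names are illustrative, not BasicAA's actual code):

  enum class Result { NoAlias, MayAlias, MustAlias };

  constexpr unsigned MaxQueryDepth = 512;  // hypothetical cutoff

  Result aliasQuery(const void *A, const void *B, unsigned Depth) {
    if (Depth >= MaxQueryDepth)
      return Result::MayAlias;  // conservative; may be cached even though
                                // a deeper search could have refined it
    // ... decompose A and B and recurse with Depth + 1 ...
    return Result::MayAlias;
  }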
Differential Revision: https://reviews.llvm.org/D96996
A folder of `tensor_load + tensor_to_memref` exists but it only applies when
source and destination memref types are the same.
This revision adds a canonicalization of `tensor_load + tensor_to_memref` to `memref_cast`
when a type mismatch prevents the folder from kicking in.
Differential Revision: https://reviews.llvm.org/D97038
This patch fixes some crashes coming from
X86ISelLowering::getSetCCResultType, which would occasionally return
an EVT constructed from an invalid MVT, i.e. one with a null Type pointer.
This patch refers to D95434.
Differential Revision: https://reviews.llvm.org/D97036
This enables AES fusion and the post-RA scheduler for the Neoverse cores,
and, while we are at it, also for the A55, which we had missed earlier.
Differential Revision: https://reviews.llvm.org/D96866
Some instructions defined in TableGen files set the usesCustomInserter
bit, which means they have to be lowered by target code and aren't
actually valid instructions at the MC level. So we should treat them
like pseudo instructions.
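A hedged sketch of what that classification looks like against the MC layer (the helper name is invented; MCInstrDesc::isPseudo() and MCInstrDesc::usesCustomInsertionHook() are existing queries):

  #include "llvm/MC/MCInstrDesc.h"

  // Instructions that need custom insertion behave like pseudos: they
  // are not valid at the MC level and must not be encoded directly.
  static bool isRealMCInstruction(const llvm::MCInstrDesc &Desc) {
    return !Desc.isPseudo() && !Desc.usesCustomInsertionHook();
  }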
Reviewed By: gchatelet
Differential Revision: https://reviews.llvm.org/D94898
This test was accidentally removed when the directory structure was shuffled
around for dexter in f78c236efd.
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D96968
As discussed on the RFC [0], I am sharing the set of patches that
enables checking of original Debug Info metadata preservation in
optimizations. The proof-of-concept/proposal can be found at [1].
The implementation from [1] was full of duplicated code,
so this set of patches tries to merge this approach into the existing
debugify utility.
For example, the utility pass in the original-debuginfo-check
mode could be invoked as follows:
$ opt -verify-debuginfo-preserve -pass-to-test sample.ll
Since this is a very early stage of the implementation,
there is room for improvements such as:
- Add support for the new pass manager
- Add support for metadata other than DILocations and DISubprograms
[0] https://groups.google.com/forum/#!msg/llvm-dev/QOyF-38YPlE/G213uiuwCAAJ
[1] https://github.com/djolertrk/llvm-di-checker
Differential Revision: https://reviews.llvm.org/D82545
The test that was failing is now forced to use the old PM.
Add a facility in the LanguageRuntime to provide a special
UnwindPlan based on the register values in a RegisterContext,
instead of using the return-pc to find a function and using its
normal UnwindPlans.
Needed when the runtime has special stack frames that we want
to show the user but which aren't actually on the real stack.
Specifically for Swift asynchronous functions.
With feedback from Greg Clayton, Jonas Devlieghere, Dave Lee
<rdar://problem/70398009>
Differential Revision: https://reviews.llvm.org/D96839
Rationale:
Providing the wrong number of sparse/dense annotations was silently
ignored or caused unrelated crashes. This minor change verifies that
the provided number matches the rank.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D97034
We were creating more combinations of value and index lmul than
we needed.
I've copied the loop structure used here from VPseudoAMOEI with
all data sew values instead of just 32/64.
Something similar can be done for segment loads/stores.
Reviewed By: khchen
Differential Revision: https://reviews.llvm.org/D97008
This patch implements LWG 2802. It requires _Deleter to have a call operator and to be move constructible. Based on D62233.
Refs PR37637.
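For illustration (assuming, as in LWG 2802, this concerns the std::shared_ptr constructors that take a deleter), user code is affected like this:

  #include <memory>

  struct Deleter {
    // Implicitly move constructible and callable with int*, as required.
    void operator()(int *p) const { delete p; }
  };

  int main() {
    std::shared_ptr<int> p(new int(42), Deleter{});  // OK: requirements met
    // A deleter without a suitable operator(), or one that is not move
    // constructible, is now rejected at compile time.
  }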
Differential Revision: https://reviews.llvm.org/D62274
Add the option -fgpu-sanitize to enable the sanitizer for the AMDGPU target.
Since it is experimental, it is off by default.
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D96835
As discussed in D94834, we don't really need to do complicated analysis. It's safe to just drop the tail call attribute.
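In IR terms, dropping the marker is a one-liner (a hedged sketch; CallInst::setTailCall() is the existing API):

  #include "llvm/IR/Instructions.h"

  // Clear the 'tail' marker instead of proving the tail call is still safe.
  static void dropTailCallMarker(llvm::CallInst &CI) {
    CI.setTailCall(false);
  }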
Differential Revision: https://reviews.llvm.org/D96926
The intrinsic ID is a 32-bit value, which made each row of the table
4-byte aligned. The remaining fields used 5 bytes. This meant 3 bytes
of padding per row.
This patch breaks the table into 4 separate tables and indexes them
by properties we know about the intrinsic. NF, masked,
strided, ordered, etc. The indexed load/store tables have no
padding in their rows now.
All together this reduces the size of llc binary by ~28K.
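As a rough illustration of the layout change (the field names are invented for the example, not the actual table definition):

  #include <cstdint>

  struct RowBefore {
    uint32_t IntrinsicID;  // 32-bit key forces 4-byte alignment
    uint16_t Pseudo;       // ~5 bytes of payload in total
    uint8_t  LMUL;
    uint8_t  SEW;
    uint8_t  Masked;
  };                       // sizeof == 12: 3 bytes of tail padding

  struct RowAfter {        // key and property fields removed; which of
    uint16_t Pseudo;       // the four tables a row lives in now encodes
    uint8_t  LMUL;         // masked/strided/ordered/NF, so rows pack
    uint8_t  SEW;          // with no padding
  };                       // sizeof == 4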
I'm considering adding similar tables for isel of non-segment
load/store as well to cut down the size of the isel table and
probably improve our isel performance. Those tables would need to be
indexed from intrinsics, IR loads/stores, gathers/scatters, and
RISCVISD opcodes. So having a table that can be indexed without using
intrinsic ID is more flexible.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D96894
This table is queried in RISCVMCInstLower without knowing
whether the instruction is a vector pseudo. Due to the way the
binary search works, we have to do log2(tablesize) checks just
to determine that a non-vector instruction isn't in the table.
Conveniently, all the vector pseudos are pretty tightly
packed within the internal instruction enum. By enabling the
PrimaryKeyEarlyOut, tablegen will emit a check against the
beginning and end of the table before doing the binary search.
This gives a quick early out on the search for the majority
of non-vector instructions.
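A self-contained sketch of the shape of the emitted lookup (struct and function names are invented; the real code is generated by TableGen):

  #include <algorithm>
  #include <vector>

  struct PseudoInfo { unsigned Pseudo; unsigned BaseInstr; };

  // The table is sorted by Pseudo and the vector pseudos occupy a
  // contiguous opcode range, so one range check (the early out) rejects
  // most non-vector instructions before the binary search runs.
  const PseudoInfo *lookupPseudo(const std::vector<PseudoInfo> &Table,
                                 unsigned Opcode) {
    if (Table.empty() || Opcode < Table.front().Pseudo ||
        Opcode > Table.back().Pseudo)
      return nullptr;  // early out
    auto It = std::lower_bound(Table.begin(), Table.end(), Opcode,
                               [](const PseudoInfo &Row, unsigned Key) {
                                 return Row.Pseudo < Key;
                               });
    if (It == Table.end() || It->Pseudo != Opcode)
      return nullptr;
    return &*It;
  }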
Differential Revision: https://reviews.llvm.org/D97016
mode.
We use that mode when evaluating ICEs in C, and those shortcuts could
result in ICE evaluation producing the wrong answer, specifically if we
evaluate a statement-expression as part of evaluating the ICE.