This patch hides the logic for setting the location kind of an entry
value inside the begin/finalize/cancel functions. This way we get rid of
the strange workaround that is currently in setLocation().
In the future, this will allow us to set the location kind of the
entry value independently from the location kind of the main
expression.
Differential Revision: https://reviews.llvm.org/D96554
It appears some instructions don't have debug location info, and the symbolizer returns an empty call stack for them, which can cause a crash later in profile unwinding. Since we don't record sample info for them anyway, this change just filters out those instructions.
As those instructions would appear at the beginning and end of the instruction list, removing them means we need to add boundary checks for IP `advance` and `backward`.
Also, for pseudo probe based profiles, we don't actually need the symbolized location info, so this change just uses an empty stack in that case. This could save half of the binary loading time.
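A rough sketch of the filtering idea (hypothetical helper names, not the actual llvm-profgen code): instructions whose symbolized call stack comes back empty are dropped while building the address map, and the pseudo probe case skips symbolization entirely.
```
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

struct Frame { uint64_t Line = 0; uint64_t Discriminator = 0; };
using FrameStack = std::vector<Frame>;
// The symbolizer is passed in as a callback; it is assumed to return an empty
// stack for instructions that carry no debug location info.
using Symbolizer = std::function<FrameStack(uint64_t)>;

void buildAddressToStackMap(const std::vector<uint64_t> &InstAddrs,
                            bool UsePseudoProbes, const Symbolizer &Symbolize,
                            std::map<uint64_t, FrameStack> &AddrToStack) {
  for (uint64_t Addr : InstAddrs) {
    if (UsePseudoProbes) {
      // Pseudo-probe based profiles do not need symbolized location info, so
      // record an empty stack and skip the (expensive) symbolizer call.
      AddrToStack[Addr] = {};
      continue;
    }
    FrameStack Stack = Symbolize(Addr);
    // No debug location info: drop the instruction here so the unwinder never
    // sees an empty call stack for it.
    if (Stack.empty())
      continue;
    AddrToStack[Addr] = std::move(Stack);
  }
}
```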
Differential Revision: https://reviews.llvm.org/D96434
I've already witnessed two separate changes missing runNewPMPasses()
because runNewPMCustomPasses() is so similar.
This cleans up some duplicated code.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D96553
vec_xl() and vec_xst() should not emit alignment hints, since they take a
scalar pointer and may also add a byte offset if one is passed.
This patch uses memcpy to achieve the desired result.
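A minimal source-level sketch of the idea (illustrative only, not the actual builtin lowering): a byte-offset load done through memcpy carries no alignment assumption, so no alignment hint can legitimately be attached.
```
#include <cstring>

// Generic 16-byte vector type for illustration; the real intrinsics operate
// on the target's vector types.
typedef int v4si __attribute__((vector_size(16)));

// Roughly what vec_xl(offset, ptr) has to mean: a byte-offset, possibly
// unaligned load, expressed with memcpy so no particular alignment is assumed.
static v4si load_xl(long offset, const int *ptr) {
  v4si result;
  std::memcpy(&result, reinterpret_cast<const char *>(ptr) + offset,
              sizeof(result));
  return result;
}
```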
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D96471
Swift async functions receive function arguments inside a
heap-allocated data structure, similar to how ObjC block captures or
C++ coroutine arguments are implemented. In DWARF they are described
relative to an entry value that produces a pointer into that heap
object. A typical location looks like
DW_OP_entry_value [ DW_OP_reg14 ] DW_OP_deref DW_OP_plus_uconst 32 DW_OP_deref
This allows the unwinder (which has special ABI knowledge to restore
the contents of r14) to push the base address onto the stack, thus
allowing the deref/offset operations to continue. The result of the
entry value is a scalar, because DW_OP_reg14 is a register location.
That is as it should be, since we want to restore the pointer value
contained in r14 at the beginning of the function, not the historical
memory contents it was pointing to. In other words, the entry value
should restore the address, which is still valid, rather than the
contents at function entry.
To make this work, we need to allow LLDB to dereference Scalar stack
results like load addresses, which is what this patch
does. Unfortunately it is difficult to test this in isolation, since
the DWARFExpression unit test doesn't have a process.
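A rough sketch of the new capability (simplified; it uses LLDB's Process API but is not the actual DWARFExpression change): when DW_OP_deref finds a plain Scalar on top of the expression stack, treat it as a load address and read the pointee from the process.
```
#include "lldb/Target/Process.h"
#include "lldb/Utility/Status.h"

// Simplified helper: interpret a Scalar stack result as a load address and
// dereference it by reading pointer_size bytes from the process.
static uint64_t DerefScalarAsLoadAddress(lldb_private::Process &process,
                                         uint64_t load_addr,
                                         size_t pointer_size) {
  uint64_t pointee = 0;
  lldb_private::Status error;
  process.ReadMemory(load_addr, &pointee, pointer_size, error);
  return error.Success() ? pointee : 0;
}
```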
Differential Revision: https://reviews.llvm.org/D96549
The comment for ValueType claims that all values <1 are errors, but
not all switch statements take this into account. This patch
introduces an explicit Error case and deletes all default: cases, so
we get warned about incomplete switch coverage.
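A hypothetical sketch of the pattern (the enumerator names are illustrative, not the exact ones in the patch): make the error state an explicit enumerator and cover every case without a default, so the compiler's switch-coverage warning fires when a case is forgotten.
```
enum class ValueType { Error, Scalar, FileAddress, LoadAddress, HostAddress };

const char *toString(ValueType type) {
  // No default: case, so -Wswitch warns if an enumerator is not handled.
  switch (type) {
  case ValueType::Error:       return "error";
  case ValueType::Scalar:      return "scalar";
  case ValueType::FileAddress: return "file address";
  case ValueType::LoadAddress: return "load address";
  case ValueType::HostAddress: return "host address";
  }
  return "unknown"; // unreachable for valid enumerators
}
```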
https://reviews.llvm.org/D96537
This includes some changes related to PerfReader's input checking and command line handling:
1) It appears there can be thousands of leading MMAP-event lines in the perf script for a large workload. In that case, the 4k-line threshold is not sufficient to determine whether it's a hybrid sample. This change reworks `isHybridPerfScript` to go through the script without a threshold limit, checking whether there is a non-empty call stack immediately followed by an LBR sample; it stops once it finds a valid one (see the sketch after this list).
2) Added several input validations for the command line switches in PerfReader.
3) Changed the command line switch `show-disassembly` to `show-disassembly-only`; it will print to stdout and exit early, which leaves an empty output profile.
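A minimal sketch of that detection loop (the line classifiers are crude placeholders, not the actual PerfReader parsing):
```
#include <fstream>
#include <string>

// Placeholder heuristics for what a call-stack frame line and an LBR entry
// line look like in the perf script; the real reader parses these precisely.
static bool looksLikeFrameLine(const std::string &Line) {
  return Line.find("0x") != std::string::npos &&
         Line.find('/') == std::string::npos;
}
static bool looksLikeLBRLine(const std::string &Line) {
  return Line.find("0x") != std::string::npos &&
         Line.find('/') != std::string::npos;
}

// Scan the whole script, with no fixed line-count cutoff, and report "hybrid"
// as soon as a non-empty call stack is immediately followed by an LBR sample.
static bool isHybridPerfScript(const std::string &Path) {
  std::ifstream In(Path);
  std::string Line;
  bool InCallStack = false;
  while (std::getline(In, Line)) {
    if (looksLikeFrameLine(Line))
      InCallStack = true;  // accumulating a non-empty call stack
    else if (InCallStack && looksLikeLBRLine(Line))
      return true;         // call stack immediately followed by an LBR sample
    else
      InCallStack = false; // MMAP events, headers, blank lines, ...
  }
  return false;
}
```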
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D96387
This pretty much just ports `performGlobalAddressCombine` from
AArch64ISelLowering. (AArch64 doesn't use the generic DAG combine for this.)
This adds a pre-legalize combine which looks for this pattern:
```
%g = G_GLOBAL_VALUE @x
%ptr1 = G_PTR_ADD %g, cst1
%ptr2 = G_PTR_ADD %g, cst2
...
%ptrN = G_PTR_ADD %g, cstN
```
And then, if possible, transforms it like so:
```
%g = G_GLOBAL_VALUE @x + min(cst)
%ptr1 = G_PTR_ADD %g, cst1 - min(cst)
%ptr2 = G_PTR_ADD %g, cst2 - min(cst)
...
%ptrN = G_PTR_ADD %g, cstN - min(cst)
```
Where min(cst) is the smallest out of the G_PTR_ADD constants.
This means we should save at least one G_PTR_ADD.
This also updates code in the legalizer + selector which assumes that
G_GLOBAL_VALUE will never have an offset and adds/updates relevant tests.
Differential Revision: https://reviews.llvm.org/D96624
Added __cxxabi_config.h includes to resolve _LIBCXXABI_ARM_EHABI and
properly build the forces_unwindX.cpp tests for the ARM/EHABI targets.
Differential Revision: https://reviews.llvm.org/D96378
There's no need to call verifyVectorElementMatch since we already know
that the source and destination types are identical.
Differential Revision: https://reviews.llvm.org/D96589
Rather than storing the query depth in AAResults, store it in AAQI.
This makes more sense, as it is a property of the query. This
sidesteps the issue of D94363, fixing slightly inaccurate AA
statistics. Additionally, I plan to use the Depth from BasicAA in
the future, where fetching it from AAResults would be unreliable.
This change is not quite as straightforward as it seems, because
we need to preserve the depth when creating a new AAQI for recursive
queries across phis. I'm adding a new method for this, as we may
need to preserve additional information here in the future.
Currently, vector.contract joins the intermediate result and the accumulator
argument (of ranks K) using summation. We desire more joining operations ---
such as max --- to help vector.contract express reductions. This change extends
Vector_ContractionOp to take an optional attribute (called "kind", of enum type
CombiningKind) specifying the joining operation to be add/mul/min/max for
int/fp, and and/or/xor for int only. By default this attribute has the value "add".
To implement this we also need to extend vector.outerproduct, since
vector.contract gets transformed to vector.outerproduct (and that to
vector.fma). The extension for vector.outerproduct is also an optional kind
attribute that uses the same enum type and possible values. The default is
"add". In case of max/min we transform vector.outerproduct to a combination of
compare and select.
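A scalar illustration of the kinds described above (illustrative C++ enum and code, not the MLIR definitions): min/max are expressed as a compare feeding a select, mirroring how the min/max cases of vector.outerproduct are handled.
```
// Illustrative set of combining kinds; And/Or/Xor only make sense for ints.
enum class CombiningKind { Add, Mul, Min, Max, And, Or, Xor };

int combine(CombiningKind kind, int acc, int x) {
  switch (kind) {
  case CombiningKind::Add: return acc + x;
  case CombiningKind::Mul: return acc * x;
  case CombiningKind::Min: return x < acc ? x : acc; // compare + select
  case CombiningKind::Max: return x > acc ? x : acc; // compare + select
  case CombiningKind::And: return acc & x;
  case CombiningKind::Or:  return acc | x;
  case CombiningKind::Xor: return acc ^ x;
  }
  return acc;
}
```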
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D93280
This silences warnings like these, in mingw builds with clang:
runtime/src/kmp_atomic.h:1021:13: warning: '__kmpc_atomic_cmplx8_rd' has C-linkage specified, but returns user-defined type 'kmp_cmplx64' (aka '__kmp_cmplx64_t') which is incompatible with C [-Wreturn-type-c-linkage]
runtime/src/z_Windows_NT_util.cpp:479:17: warning: cast from 'volatile void *' to 'type-parameter-0-0 *' drops volatile qualifier [-Wcast-qual]
flag = (C *)th->th.th_sleep_loc;
runtime/src/z_Windows_NT_util.cpp:1321:14: warning: cast to 'void *' from smaller integer type 'DWORD' (aka 'unsigned long') [-Wint-to-void-pointer-cast]
} else if ((void *)exit_val != (void *)th) {
Differential Revision: https://reviews.llvm.org/D96585
Add ifdefs around one function that is only used in unix build
configurations.
Add a void cast for a windows specific function that currently is
unused but may be intended to be used at some point.
Differential Revision: https://reviews.llvm.org/D96584
These variables are used only in certain build configurations,
or marked with a todo comment indicating that they should be
used/checked/reported.
Differential Revision: https://reviews.llvm.org/D96582
MinGW build configurations don't support this pragma (unless
compiling with clang, with -fms-extensions, and linking with
lld), and at least clang warns about it.
This library does end up linked by the cmake files anyway (as
long as the check works properly).
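For reference, the pragma in question is presumably the MSVC-style auto-linking form shown below (the library name here is illustrative):
```
// MSVC (and clang with -fms-extensions, linking with lld) honor this and pull
// in the import library at link time; MinGW toolchains generally do not, and
// clang warns about it.
#pragma comment(lib, "somelib")
```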
Differential Revision: https://reviews.llvm.org/D96581
check_library_exists fails for stdcall functions, because that
check doesn't include the necessary headers (and thus fails with
an undefined reference to _EnumProcessModules, when the import
library symbol actually is called _EnumProcessModules@16).
Merge the two previous checks check_include_files and
check_library_exists into one with check_c_source_compiles, and
merge the variables that indicate whether it succeeded.
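A sketch of the kind of probe program such a merged check_c_source_compiles check would build and link against psapi (details of the actual check are in the patch): by including the proper headers, the stdcall-decorated import symbol resolves correctly.
```
// Illustrative probe: includes windows.h/psapi.h and references
// EnumProcessModules, so both the headers and the psapi import are exercised
// in a single compile-and-link check.
#include <windows.h>
#include <psapi.h>

int main(void) {
  HMODULE Modules[1];
  DWORD BytesNeeded = 0;
  return EnumProcessModules(GetCurrentProcess(), Modules, sizeof(Modules),
                            &BytesNeeded)
             ? 0
             : 1;
}
```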
Differential Revision: https://reviews.llvm.org/D96580
This combine tries to do inter-block hoisting of extends of G_PHIs, into the
originating blocks of the phi's incoming value. The idea is to expose further
optimization opportunities that are normally obscured by the PHI.
Some basic heuristics and a target hook for AArch64 are added to allow tuning.
E.g. if the extend is used by a G_PTR_ADD, it doesn't perform this combine
since it may be folded into the addressing mode during selection.
There are very minor code size improvements on AArch64 -Os, but the real benefit
is that it unlocks optimizations like AArch64 conditional compares on some
benchmarks.
Differential Revision: https://reviews.llvm.org/D95703
This patch adds 2 new options to control when Clang adds `mustprogress`:
1. -ffinite-loops: assume all loops are finite; mustprogress is added
to all loops, regardless of the selected language standard.
2. -fno-finite-loops: assume no loop is finite; mustprogress is not
added to any loop or function. We could add mustprogress to
functions without loops, but we would have to detect that in Clang,
which is probably not worth it. A short example illustrating the two
modes follows the list.
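A rough illustration of the two modes (behavior as described above; exact codegen may vary):
```
// With -ffinite-loops, Clang may assume this loop terminates and tags it
// mustprogress, so the optimizer is allowed to delete it (the body has no
// observable side effects). With -fno-finite-loops, no such assumption is
// made, even when the selected language standard would normally permit it.
void wait_forever(int stop) {
  while (!stop) {
    // spin without side effects
  }
}
```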
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D96419
MVP object files may import at most one table, and if they do, it must
be assigned table number zero in the output, as the references to that
table are not relocatable. Ensure that this is the case, even if some
inputs define other tables.
Differential Revision: https://reviews.llvm.org/D96001
This change adds additional unit tests for fuzzer::Merger::Parse and fuzzer::Merger::Merge in anticipation of additional changes to the merge control file format to support cross-process fuzzing.
It modifies the parameter handling of Merge slightly in order to make NewFeatures and NewCov consistent with NewFiles; namely, Merge *replaces* the contents of these output parameters rather than accumulating them (thereby fixing a buggy return value).
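A simplified sketch of the new contract (the signature is condensed relative to the real Merger::Merge): the output containers are cleared up front, so the results and the return value reflect only this merge.
```
#include <cstdint>
#include <set>
#include <string>
#include <vector>

// Condensed stand-in for fuzzer::Merger::Merge: NewFeatures, NewCov, and
// NewFiles are pure output parameters, replaced rather than accumulated into,
// so the returned count matches what this call actually produced.
size_t MergeSketch(const std::set<uint32_t> &InitialFeatures,
                   std::set<uint32_t> *NewFeatures,
                   const std::set<uint32_t> &InitialCov,
                   std::set<uint32_t> *NewCov,
                   std::vector<std::string> *NewFiles) {
  NewFeatures->clear();
  NewCov->clear();
  NewFiles->clear();
  // ... walk the parsed merge control file, adding files that contribute
  // features not in InitialFeatures and coverage not in InitialCov ...
  return NewFiles->size();
}
```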
This is change 1 of (at least) 18 for cross-process fuzzing support.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D94506
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly, which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.
This reverts commit 3f22547fd1 and relands 973e133b76 with a workaround for
a gcc bug that does not accept lambda default parameters:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949
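For illustration, this is the general shape of the construct being worked around (a hypothetical example, not the code from the relanded commit): GCC releases affected by the linked bug reject a lambda used as a default argument value.
```
#include <functional>

// A lambda as a default function argument; affected GCC versions fail to
// accept this form, so the workaround passes the callback explicitly instead.
void forEachItem(const std::function<void(int)> &visit = [](int) {}) {
  for (int i = 0; i < 3; ++i)
    visit(i);
}
```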
Differential Revision: https://reviews.llvm.org/D96598
It looks like a previous change switched these away from LLDB_LOGF but did not update the format strings.
Differential Revision: https://reviews.llvm.org/D96550
Their names don't convey much information, so they should be excluded.
The behavior matches addr2line.
Differential Revision: https://reviews.llvm.org/D96617
Align the vector gather/scatter/expand/compress API with
the vector load/store/maskedload/maskedstore API.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D96396
This patch adds the 'vector.load' and 'vector.store' ops to the Vector
dialect [1]. These operations model *contiguous* vector loads and stores
from/to memory. Their semantics are similar to the 'affine.vector_load' and
'affine.vector_store' counterparts but without the affine constraints. The
most relevant feature is that these new vector operations may perform a vector
load/store on memrefs with a non-vector element type, unlike 'std.load' and
'std.store' ops. This opens the representation to model more generic vector
load/store scenarios: unaligned vector loads/stores, performing scalar and
vector memory accesses on the same memref, decoupling memory allocation
constraints from memory accesses, etc. [1]. These operations will also facilitate the progressive
lowering of both Affine vector loads/stores and Vector transfer reads/writes
for those that read/write contiguous slices from/to memory.
In particular, this patch adds the 'vector.load' and 'vector.store' ops to the
Vector dialect, implements their lowering to the LLVM dialect, and changes the
lowering of 'affine.vector_load' and 'affine.vector_store' ops to the new vector
ops. The lowering of Vector transfer reads/writes will be implemented in the
future, probably as an independent pass. The API of 'vector.maskedload' and
'vector.maskedstore' has also been changed slightly to align it with the
transfer read/write ops and the new vector ops. This will improve reusability
among all these operations. For example, the lowering of 'vector.load',
'vector.store', 'vector.maskedload' and 'vector.maskedstore' to the LLVM dialect
is implemented with a single template conversion pattern.
[1] https://llvm.discourse.group/t/memref-type-and-data-layout/
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96185
Some of these accidentally disabled tests failed as a result; the tests
were updated per @qcolombet's instructions. A small number needed additional
updates because legalization has actually changed since they were
written.
Found by the Rotten Green Tests project.
Differential Revision: https://reviews.llvm.org/D95257