This adds a new platform class, whose job is to enable running
(debugging) executables under qemu.
(For general information about qemu, I recommend reading the RFC thread
on lldb-dev
<https://lists.llvm.org/pipermail/lldb-dev/2021-October/017106.html>.)
This initial patch implements the necessary boilerplate as well as the
minimal amount of functionality needed to actually be able to do
something useful (which, in this case, means debugging a fully statically
linked executable).
The knobs necessary to emulate dynamically linked programs, as well as
to control other aspects of qemu operation (the emulated cpu, for
instance) will be added in subsequent patches. Same goes for the ability
to automatically bind to the executables of the emulated architecture.
Currently only two settings are available:
- architecture: the architecture that we should emulate
- emulator-path: the path to the emulator
Even though this patch is relatively small, it doesn't lack subtleties
that are worth calling out explicitly:
- named sockets: qemu supports tcp and unix socket connections, both of
them in the "forward connect" mode (qemu listening, lldb connecting).
Forward TCP connections are impossible to realise in a race-free way.
This is why I chose unix sockets: they have larger, more structured
names, which can guarantee that there are no collisions between
concurrent connection attempts (see the sketch after this list).
- the above means that this code will not work on windows. I don't think
that's an issue since user mode qemu does not support windows anyway.
- Right now, I am leaving the code enabled for windows, but maybe it
would be better to disable it (otoh, disabling it means windows
developers can't check they don't break it)
- qemu-user also does not support macOS, so one could contemplate
disabling it there too. However, macOS does support named sockets, so
one can even run the (mock) qemu tests there, and I think it'd be a
shame to lose that.
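To illustrate the naming point, here is a hypothetical sketch (not the
patch's actual code) of how a listener can derive a collision-free unix
socket path from its own process id, whereas any "free" TCP port can be
taken by another process between choosing it and binding to it:
```
#include <string>
#include <unistd.h>

// Hypothetical helper: a per-process socket path. Unlike a TCP port,
// the pid-derived name is unique to this listener, so concurrent debug
// sessions cannot race for the same endpoint.
std::string makeGdbSocketPath(const std::string &dir) {
  return dir + "/qemu-" + std::to_string(getpid()) + ".socket";
}
```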
Differential Revision: https://reviews.llvm.org/D114509
This patch upstreams the array value copy pass.
Transform the set of array value primitives to a memory-based array
representation.
The Ops `array_load`, `array_store`, `array_fetch`, and `array_update` are
used to manage abstract aggregate array values. A simple analysis is done
to determine if there are potential dependences between these operations.
If not, these array operations can be lowered to work directly on the memory
representation. If there is a potential conflict (for example, an
overlapping array assignment such as Fortran's `a(2:10) = a(1:9)`, where
the right-hand side must be evaluated in full before the assignment), a
temporary is created along with appropriate copy-in/copy-out operations.
Here, a more refined analysis might be deployed, such as using the
affine framework.
This pass is required before code gen to the LLVM IR dialect.
This patch is part of the upstreaming effort from the fir-dev branch. The
pass brings quite a lot of files with it.
Reviewed By: kiranchandramohan, schweitz
Differential Revision: https://reviews.llvm.org/D111337
Co-authored-by: Jean Perier <jperier@nvidia.com>
Co-authored-by: Eric Schweitz <eschweitz@nvidia.com>
Co-authored-by: V Donaldson <vdonaldson@nvidia.com>
Over in D114631 and [0] there's a plan for turning instruction referencing
on by default for x86. This patch adds / removes all the relevant bits of
code, with the aim that the final patch is extremely small, for an easy
revert. It should just be a condition in CommandFlags.cpp and removing the
XFail on instr-ref-flag.ll.
[0] https://lists.llvm.org/pipermail/llvm-dev/2021-November/153653.html
If we have a DYN_ALLOCA_* instruction, it will eventually be expanded to a
stack probe and subtract-from-SP. Add debug-info instrumentation to
X86FrameLowering::emitStackProbe so that it can redirect debug-info for the
DYN_ALLOCA to the lowered stack probe. In practice, this means putting an
instruction number label on either the call to _chkstk for win32, or,
more commonly, on the subtract-from-SP instruction. The two added tests
cover both of these cases.
Differential Revision: https://reviews.llvm.org/D114452
InstrRefBasedLDV used to crash on the added test -- the exit block is not
in scope for the variable being propagated, but is still considered because
it contains an assignment. The failure-mode was vlocJoin ignoring
assign-only blocks and not updating DIExpressions, but pickVPHILoc would
still find a variable location for it. That led to DBG_VALUEs created with
the wrong fragment information.
Fix this by removing a filter inherited from VarLocBasedLDV: vlocJoin will
now consider assign-only blocks and will update their expressions.
Differential Revision: https://reviews.llvm.org/D114727
This adds a fold in DAGCombine to create fptosi_sat from
smin(smax(fptosi(x))) sequences, where the min/max saturate the output of
the fp convert to a specific bitwidth (say INT_MIN and INT_MAX). Because
smin/smax in the DAG may currently appear as ISD::SMIN,
ISD::SETCC/ISD::SELECT, ISD::VSELECT, or ISD::SELECT_CC nodes, each of
these forms needs to be handled similarly.
A shouldConvertFpToSat method was added to control when converting may
be profitable. The original fptosi has less strict semantics than
fptosi_sat, with fewer values that need to produce defined behaviour.
This especially helps on ARM/AArch64 where the vcvt instructions
naturally saturate the result.
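For illustration, here is a source-level pattern that typically lowers
to such a min/max-around-fptosi sequence (a hedged sketch; the exact
DAG nodes depend on how the target lowers the clamp):
```
#include <algorithm>
#include <cstdint>
#include <limits>

// Clamp-then-convert: the fptosi plus the smax/smin clamp below is the
// shape the new DAGCombine fold can turn into a single saturating
// conversion. Note that the plain cast is undefined for out-of-range
// inputs, which is why fptosi has weaker semantics than fptosi_sat.
int16_t convertSat(float f) {
  int32_t i = static_cast<int32_t>(f);                           // fptosi
  i = std::max(i, int32_t(std::numeric_limits<int16_t>::min())); // smax
  i = std::min(i, int32_t(std::numeric_limits<int16_t>::max())); // smin
  return static_cast<int16_t>(i);
}
```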
Differential Revision: https://reviews.llvm.org/D111976
We only support Clangs that implement nullptr as an extension in C++03 mode,
and we don't support GCC in C++03 mode. Hence, this patch disables the
use of the std::nullptr_t emulation in C++03 mode by default. Doing that
is technically an ABI break since it changes the mangling for std::nullptr_t.
However:
(1) The only affected users are those compiling in C++03 mode that have
std::nullptr_t as part of their ABI, which should be reasonably rare.
(2) Those users already have a lingering problem in that their code will
be incompatible in C++03 and C++11 modes because of that very ABI break.
Hence, the only users that could really be inconvenienced by this
change are those that planned on compiling in C++03 mode forever - for
other users, we're just breaking them now instead of letting them break
themselves later on when they try to upgrade to C++11.
(3) The ABI break will cause a linker error since the mangling changed,
and will not result in an obscure runtime error.
Furthermore, if anyone is broken by this, they can define the
_LIBCPP_ABI_USE_CXX03_NULLPTR_EMULATION macro to return to the
previous behavior. We will then remove that macro after shipping
this for one release if we haven't seen widespread issues.
Concretely, the motivation for making this change is to make our own ABI
consistent in C++03 and C++11 modes and to remove complexity around the
definition of nullptr.
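For context, the emulation being disabled is conceptually along these
lines (a simplified sketch, not libc++'s exact definition):
```
// A C++03 nullptr emulation is a class whose instances implicitly
// convert to any pointer or pointer-to-member type. Because it is a
// class type, it mangles differently from the built-in
// decltype(nullptr), which is why toggling the emulation is an ABI
// break.
struct __sketch_nullptr_t {
  template <class T> operator T *() const { return 0; }
  template <class C, class T> operator T C::*() const { return 0; }
};
```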
Furthermore, we could investigate making nullptr a keyword in C++03 mode
as a Clang extension -- I don't think that would break anyone, since
libc++ already defines nullptr as a macro to something else. Only users
that do not use libc++ and compile in C++03 mode could potentially be
broken by that.
Differential Revision: https://reviews.llvm.org/D109459
This patch enables the benchmarking of `memmove`.
Ideally, this should be submitted before D114637.
Differential Revision: https://reviews.llvm.org/D114694
Two "totally definitely the last ones" instruction referencing test
updates:
* fp-stack.ll: this test targets i686, and so it won't be getting
instruction referencing, or at least not right now,
* X86/live-debug-values.ll: instruction referencing will produce entry
values in this test, add check lines to account for this. It's not clear
what the test is supposed to be testing anyway, but the entry values
appear to be correct.
Differential Revision: https://reviews.llvm.org/D114626
The code in widenSelectInstruction has already been transitioned
to only rely on information provided by VPWidenSelectRecipe directly.
Moving the code directly to VPWidenSelectRecipe::execute completes
the transition for the recipe.
It provides the following advantages:
1. Less indirection; it is easier to see what's going on.
2. Removes accesses to fields of ILV.
Point 2 in particular ensures that no dependencies on fields of ILV
are re-introduced for vector code generation.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D114323
Add the capability to simplify more complex constraints where there are 3
symbols in the tree. In this change I extend simplifySVal to query the
constraints of child sub-symbols in a symbol tree. (The constraint for the
parent is queried in getKnownValue.)
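As a hypothetical illustration (not a test from the patch), the extended
simplification lets a constraint on a sub-tree refine the whole
expression:
```
// Hypothetical analyzed code: once `b + c == 0` is known, simplifySVal
// can reduce the constraint on the three-symbol tree `a + b + c` and
// conclude that `a` must be 0 past the second early return.
void test(int a, int b, int c) {
  if (a + b + c != 0)
    return;
  if (b + c != 0)
    return;
  // here the analyzer can infer that a == 0
}
```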
Differential Revision: https://reviews.llvm.org/D103317
Currently, during symbol simplification we remove the original member symbol
from the equivalence class (`ClassMembers` trait). However, we keep the
reverse link (`ClassMap` trait), in order to be able to query the
related constraints even for the old member. This asymmetry can lead to
a problem when we merge equivalence classes:
```
ClassA: [a, b] // ClassMembers trait,
a->a, b->a // ClassMap trait, a is the representative symbol
```
Now let's delete `a`:
```
ClassA: [b]
a->a, b->a
```
Let's merge the trivial class `c` into ClassA:
```
ClassA: [c, b]
c->c, b->c, a->a
```
Now after the merge operation, `c` and `a` are actually in different
equivalence classes, which is inconsistent.
One solution to this problem is to simply avoid removing the original
member, and this is what this patch does.
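With the fix, the scenario above stays consistent because `a` remains a
member (sketch):
```
ClassA: [a, b]   // `a` is kept in ClassMembers after simplification
a->a, b->a
// merging the trivial class `c` now updates every member:
ClassA: [c, a, b]
c->c, a->c, b->c
```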
Other options I have considered:
1) Always merge the trivial class into the non-trivial class. This might
work most of the time; however, it will fail if we have to merge two
non-trivial classes (in that case we can no longer track equivalences
precisely).
2) In `removeMember`, update the reverse link as well. This would remove
the inconsistency, but we'd lose precision since we could not query
the constraints for the removed member.
Differential Revision: https://reviews.llvm.org/D114619
The LLDBSWIGPython functions had (at least) two problems:
- There wasn't a single source of truth (a header file) for the
prototypes of these functions. This meant that subtle differences
in copies of function declarations could go by undetected. And
not-so-subtle differences would result in strange runtime failures.
- All of the declarations had to have an extern "C" interface, because
the function definitions were being placed inside an extern "C" block
generated by swig.
This patch fixes both problems by moving the function definitions to the
%header block of the swig files. This block is not surrounded by extern
"C", and seems more appropriate anyway, as swig docs say it is meant for
"user-defined support code" (whereas the previous %wrapper code was for
automatically-generated wrappers).
It also puts the declarations into the SWIGPythonBridge header file
(which seems to have been created for this purpose), and ensures it is
included by all code wishing to define or use these functions. This
means that any differences in the declaration become a compiler error
instead of a runtime failure.
Differential Revision: https://reviews.llvm.org/D114369
This change exposes isBuildVectorConstantSplat() to the llvm namespace
and uses it to implement the constant splat versions of
m_SpecificICst().
CombinerHelper::matchOrShiftToFunnelShift() can now work with vector
types and CombinerHelper::matchMulOBy2()'s match for a constant splat is
simplified.
Differential Revision: https://reviews.llvm.org/D114625
This patch introduces a new conversion that lowers bufferization.clone
operations to a memref.alloc and a memref.copy operation. This
transformation is needed to transform all remaining clones which "survive"
all previous transformations, before a given program is lowered further
(e.g., to LLVM). Otherwise, these operations can no longer be handled and
lead to compile errors.
See: https://llvm.discourse.group/t/bufferization-error-related-to-memref-clone/4665
Differential Revision: https://reviews.llvm.org/D114233
Update the shapes of the convolution / pooling tests that were flagged after enabling verification during printing (https://reviews.llvm.org/D114680). Also split the emit_structured_generic.py file that previously contained all tests into multiple separate files to simplify debugging.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D114731
This patch fixes a crash when running "llvm-objdump -D --mattr=+experimental-v"
against an object file which happens to contain a word that decodes to
VSETVLI or VSETIVLI with the reserved vlmul[2:0]=4 encoding. All vtype values
with reserved fields (vlmul[2:0]=4, vsew[2:0]=0b1xx, non-zero bits 8/9/10)
are now printed as a raw immediate.
Reviewed By: jhenderson, jrtc27, craig.topper
Differential Revision: https://reviews.llvm.org/D114581
AutoFDO performance is sensitive to profile density, i.e., the amount of samples in the profile relative to the program size, because profiles with insufficient samples could be inaccurate due to statistical noise and thus hurt AutoFDO performance. A previous investigation showed that AutoFDO performed better on MySQL with increased amount of samples. Therefore, we implement a profile-density computation feature to give hints about profile density to users and the compiler.
We define the density of a profile Prof as follows:
- For each function A in the profile, density(A) = total_samples(A) / sizeof(A).
- density(Prof) = min(density(A)) for all functions A that are warm (defined below).
A function is considered warm if its total-samples is within the top N percent of the profile. For the implementation, we reuse `ProfileSummaryBuilder::getHotCountThreshold(..)` as the threshold, which can be set by percent (`--profile-summary-cutoff-hot`) or by value (`--profile-summary-hot-count`).
We also introduce `--hot-function-density-threshold` to set the hot-function density threshold, and we emit a suggestion if the profile density is below it, which implies we should increase the number of samples.
This also applies to CS profiles, with all profiles merged into the base.
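A sketch of the computation described above (illustrative types and
names, not the patch's actual API):
```
#include <cstdint>
#include <vector>

struct FuncSamples {
  uint64_t TotalSamples;  // total samples attributed to the function
  uint64_t FuncSizeBytes; // sizeof(A): the function's code size
};

// density(Prof) = min over all warm functions A of
//                 total_samples(A) / sizeof(A)
double profileDensity(const std::vector<FuncSamples> &Funcs,
                      uint64_t HotCountThreshold) {
  double MinDensity = -1.0;
  for (const FuncSamples &F : Funcs) {
    if (F.TotalSamples < HotCountThreshold || F.FuncSizeBytes == 0)
      continue; // not warm, or no size information
    double D = double(F.TotalSamples) / double(F.FuncSizeBytes);
    if (MinDensity < 0.0 || D < MinDensity)
      MinDensity = D;
  }
  return MinDensity;
}
```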
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D113781
We ask `TTI.getAddressComputationCost()` about the cost of computing a vector
address, and then multiply it by the vector width. This doesn't make any sense;
it implies that we'd do a vector GEP and then scalarize the vector of pointers,
but there is no such thing in the vectorized IR: we perform scalar GEPs.
This is *especially* bad on X86, and was effectively prohibiting any scalarized
vectorization of gathers/scatters, because `X86TTIImpl::getAddressComputationCost()`
says that the cost of vector address computation is `10`, compared to `1` for scalar.
The computed costs are similar to the ones with D111222+D111220,
but we end up without the masked memory intrinsics that we'd then have to
expand later on, without much luck (D111363).
Differential Revision: https://reviews.llvm.org/D111460
We can't use the existing pseudo ARM::tLDRLIT_ga_pcrel for loading the
stack guard for PIC code that references the GOT, since arm-pseudo may
expand this to the narrow tLDRpci rather than the wider t2LDRpci.
Create a new pseudo, t2LDRLIT_ga_pcrel, and expand it to t2LDRpci.
Fixes: https://bugs.chromium.org/p/chromium/issues/detail?id=1270361
Reviewed By: ardb
Differential Revision: https://reviews.llvm.org/D114762
The google-readability-casting check is meant to be on par
with cpplint's readability/casting check, according to the
documentation. However, it currently does not diagnose
functional casts, like:
float x = 1.5F;
int y = int(x);
This is detected by cpplint, however, and the guidelines
are clear that such a cast is only allowed when the type
is a class type (constructor call):
> You may use cast formats like `T(x)` only when `T` is a class type.
Therefore, update the clang-tidy check to diagnose this case.
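Illustratively, the updated check flags the functional cast and the
guideline-conforming alternatives look like this (the exact diagnostic
text may differ):
```
#include <string>

float x = 1.5F;
int y = int(x);                    // now diagnosed: int is not a class type
int z = static_cast<int>(x);       // conforming replacement
std::string s = std::string("ok"); // still fine: T is a class type
```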
Differential Revision: https://reviews.llvm.org/D114427
We should match GCC's behavior, which allows floating-point types with the -mno-x87 option on 32-bit targets: https://godbolt.org/z/KrbhfWc9o
The previously blocking issues have been partially fixed by D112143.
Reviewed By: asavonic, nickdesaulniers
Differential Revision: https://reviews.llvm.org/D114162
* Classes that are still todo are marked with "# TODO: Auto-generated. Audit and fix."
* Those without this note have been cross-checked with C++ sources and most have been spot checked by hovering in VsCode.
Differential Revision: https://reviews.llvm.org/D114767
* set_symbol_name, get_symbol_name, set_visibility, get_visibility, replace_all_symbol_uses, walk_symbol_tables
* In integrations I've been doing, I've been reaching for all of these to do both general IR manipulation and module merging.
* I don't love the underlying replace_all_symbol_uses APIs since they necessitate SYMBOL_COUNT walks and have various sharp edges. I'm hoping that whatever eventually emerges for this can still retain this simple API as a one-shot.
Differential Revision: https://reviews.llvm.org/D114687
There is no completely automated facility for generating stubs that are both accurate and comprehensive for native modules. After some experimentation, I found that MyPy's stubgen does the best at generating correct stubs with a few caveats that are relatively easy to fix:
* Some types resolve to cross module symbols incorrectly.
* staticmethod and classmethod signatures seem to always be completely generic and need to be manually provided.
* It does not generate an __all__ which, from testing, causes namespace pollution to be visible to IDE code completion.
As a first step, I did the following:
* Ran `stubgen` for `_mlir.ir`, `_mlir.passmanager`, and `_mlirExecutionEngine`.
* Manually looked for all instances where unnamed arguments were being emitted (i.e. as 'arg0', etc) and updated the C++ side to include names (and re-ran stubgen to get a good initial state).
* Made/noted a few structural changes to each `pyi` file to make it minimally functional.
* Added the `pyi` files to the CMake rules so they are installed and visible.
To test, I added a `.env` file to the root of the project with `PYTHONPATH=...` set as per instructions, then reloaded the developer window (in VsCode) and verified that completion works for various changes to test cases.
There are still a number of overly generic signatures, but I want to check in this low-touch baseline before iterating on more ambiguous changes. This is already a big improvement.
Differential Revision: https://reviews.llvm.org/D114679
When creating a new DIBuilder with an existing DICompileUnit, load the
DINodes from the current DICompileUnit so they don't get overwritten.
This is done in the MachineOutliner pass, but that pass didn't change
the CU, so the bug never appeared. We need this if we ever want to add DINodes to
the CU after it has been created, e.g., DIGlobalVariables.
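A minimal sketch of the affected usage (the constructor overload taking
an existing CU is the one this patch fixes up; the call sites here are
illustrative):
```
#include "llvm/IR/DIBuilder.h"
#include "llvm/IR/Module.h"

// Construct a DIBuilder on top of an already-created compile unit; with
// this change, nodes previously attached to CU (e.g. global variables)
// are loaded up front so finalize() does not overwrite them.
void addMoreDebugInfo(llvm::Module &M, llvm::DICompileUnit *CU) {
  llvm::DIBuilder DIB(M, /*AllowUnresolved=*/true, CU);
  // ... create DIGlobalVariables etc. through DIB ...
  DIB.finalize();
}
```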
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D114556
The greedy register allocator prefers to move a constrained
live range into a larger allocatable class rather than spill
it. This patch defines the necessary superclasses for
vector registers. For subtargets that support copy between
VGPRs and AGPRs, the vector register spills during regalloc
now become just copies.
Reviewed By: rampitec, arsenm
Differential Revision: https://reviews.llvm.org/D109301
Currently we create register mappings for registers used only once in the
current MBB. For registers with multiple uses, when all the uses are in the
current MBB, we can similarly create mappings for them according to the last
use. For example:
  %reg101 = ...
           = ... %reg101
  %reg103 = ADD %reg101, %reg102
We can create a mapping between %reg101 and %reg103.
In a multi-threaded application, concurrent StackStore::Store calls may
finish in an order different from their assigned Ids. So we can't assume
that, after we switch to writing the next block, the previous one is done.
The workaround is to count the exact number of uptrs stored into the block,
including the skipped tail/head which were not able to fit an entire trace.
Depends on D114490.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D114493
When we have outgoing arguments passed on the stack and we do not
reserve the stack space in the prologue, use BP to access stack objects
after the stack pointer has been adjusted before function calls.
callseq_start -> sp = sp - reserved_space
//
// Use FP to access fixed stack objects.
// Use BP to access non-fixed stack objects.
//
call @foo
callseq_end -> sp = sp + reserved_space
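A hypothetical source pattern that triggers this: the dynamic alloca
makes the frame size variable, so the call-frame space cannot be
reserved in the prologue (function names are illustrative):
```
#include <alloca.h>
#include <cstring>

void callee(int, int, int, int, int, int, int, int, int, int);

// The variable-sized allocation forces SP adjustment around calls with
// stack-passed arguments, so non-fixed stack objects must be reached
// through BP rather than SP.
void caller(int n) {
  int *buf = static_cast<int *>(alloca(n * sizeof(int)));
  std::memset(buf, 0, n * sizeof(int));
  callee(buf[0], 1, 2, 3, 4, 5, 6, 7, 8, 9); // extra args go on the stack
}
```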
Differential Revision: https://reviews.llvm.org/D114246
If the number of arguments is too large for register passing, stack
space is needed to pass the arguments to the callee. There are two
scenarios: one is to reserve the space in the prologue, and the other is
to reserve the space before the function call. When we need to reserve
the stack space before a function call, the stack pointer is adjusted.
In that scenario, we should not use the stack pointer to access the
stack objects. It looks like:
callseq_start -> sp = sp - reserved_space
//
// We should not use SP to access stack objects in this area.
//
call @foo
callseq_end -> sp = sp + reserved_space
Differential Revision: https://reviews.llvm.org/D114245