PackTags is used to compress tags to go in the QMemTags packet
and to be passed to ptrace when writing memory tags.
The behaviour of RepeatTagsForRange matches that described for QMemTags
in the GDB documentation:
https://sourceware.org/gdb/current/onlinedocs/gdb/General-Query-Packets.html#General-Query-Packets
In addition, unpacking tags with a tag count of 0 now means
that the number of tags is not checked against the range.
This will be used by lldb-server to unpack tags before repeating
them to fill the requested range.
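A minimal sketch of the repeating behaviour, assuming a simple byte-per-tag
representation; the names and container types are illustrative, not lldb's
actual interfaces:
```lang=c++
#include <cstdint>
#include <vector>

// Cycle a tag pattern until it covers the requested number of granules,
// mirroring the QMemTags behaviour where a short pattern is repeated to
// fill the range. An empty pattern yields an empty result.
std::vector<uint8_t> RepeatTagsForRange(const std::vector<uint8_t> &Tags,
                                        size_t GranuleCount) {
  std::vector<uint8_t> Result;
  if (Tags.empty())
    return Result;
  Result.reserve(GranuleCount);
  for (size_t I = 0; I < GranuleCount; ++I)
    Result.push_back(Tags[I % Tags.size()]);
  return Result;
}
```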
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D105179
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before the built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.
Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105880
This avoids relying on G_EXTRACT on unusual types, and also properly
decomposes structs into multiple registers. This also preserves the
LLTs in the memory operands.
If you attach __attribute__((optnone)) to a function when using
optimisations, that function will use fast-isel instead of the usual
SelectionDAG method. This is a problem for instruction referencing,
because it means DBG_VALUEs of virtual registers will be created,
triggering some safety assertions in LiveDebugVariables. Those assertions
exist to detect exactly this scenario, where an unexpected piece of code is
generating virtual register references in instruction referencing mode.
Fix this by transforming the DBG_VALUEs created by fast-isel into
half-formed DBG_INSTR_REFs, after which they get patched up in
finalizeDebugInstrRefs. The modified test adds a fast-isel mode to the
instruction referencing isel test.
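For illustration only (not taken from the patch), a function like the
following is selected by fast-isel even in an optimised build, which is the
scenario that previously produced DBG_VALUEs of virtual registers:
```lang=c++
// Compiled with optimisations and debug info (e.g. -O2 -g), this function is
// handled by fast-isel because of the optnone attribute, so its variable
// locations go down the path fixed by this patch.
__attribute__((optnone)) int square(int num) {
  int result = num * num; // 'result' still needs a debug location.
  return result;
}
```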
Differential Revision: https://reviews.llvm.org/D105694
This may be necessary in a partial multi-stage conversion when a container type
from dialect A containing types from dialect B goes through the conversion
where only dialect A is converted to the LLVM dialect. We will need to keep a
pointer-to-non-LLVM type in the IR until a further conversion can convert
dialect B types to LLVM types.
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D106076
Specifying the latencies of specific LDP variants appears to improve
performance almost universally.
Differential Revision: https://reviews.llvm.org/D105882
This patch returns an Invalid cost from getInstructionCost() for alloca
instructions if the VF is scalable, as otherwise the compiler will crash
when attempting to scalarize the alloca in loops that contain these instructions.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D105824
This patch adds support for the following contiguous load and store
instructions:
* LD1B, LD1H, LD1W, LD1D, LD1Q
* ST1B, ST1H, ST1W, ST1D, ST1Q
A new register class and operand are added for the 32-bit vector select
register W12-W15. The differences in the following tests, which have been
re-generated, are caused by the introduction of this register class:
* llvm/test/CodeGen/AArch64/GlobalISel/irtranslator-inline-asm.ll
* llvm/test/CodeGen/AArch64/GlobalISel/regbank-inlineasm.mir
* llvm/test/CodeGen/AArch64/stp-opt-with-renaming-reserved-regs.mir
* llvm/test/CodeGen/AArch64/stp-opt-with-renaming.mir
D88663 attempts to resolve the issue with the store pair test
differences in the AArch64 load/store optimizer.
The GlobalISel differences are caused by changes in the enum values of
register classes; the tests have been updated with the new values.
The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06
Reviewed By: CarolineConcatto
Differential Revision: https://reviews.llvm.org/D105572
Previously GetMemoryTagManager checked many things in one:
* architecture supports memory tagging
* process supports memory tagging
* memory range isn't inverted
* memory range is all tagged
While writing follow-up patches for tag writing (currently in review)
it has become clear that this gets unwieldy
once we add the features needed for that.
It also implies that the memory tag manager is tied to the
range you used to request it with, but it is not; it is a
per-process object.
Instead:
* GetMemoryTagManager just checks architecture and process.
* Then the MemoryTagManager can later be asked to check a
memory range.
This is better because:
* We don't imply that range and manager are tied together.
* A slightly different range calculation for tag writing
doesn't add more code to Process.
* Range checking code can now be unit tested (see the sketch below).
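A self-contained sketch of the kind of range check that now lives on the tag
manager; the type names and helper name are illustrative rather than lldb's
exact API, and 16 bytes is assumed as the AArch64 MTE granule size:
```lang=c++
#include <cstdint>
#include <optional>

struct TagRange {
  uint64_t Addr;
  uint64_t Size;
};

// Validate and expand a user-supplied range independently of how the manager
// was obtained: reject inverted ranges, then align both ends to the tag
// granule so that whole granules are read or written.
std::optional<TagRange> MakeTaggedRange(uint64_t Begin, uint64_t End) {
  constexpr uint64_t Granule = 16; // Assumed AArch64 MTE granule size.
  if (End < Begin)
    return std::nullopt; // Inverted range.
  uint64_t AlignedBegin = Begin & ~(Granule - 1);
  uint64_t AlignedEnd = (End + Granule - 1) & ~(Granule - 1);
  return TagRange{AlignedBegin, AlignedEnd - AlignedBegin};
}
```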
Reviewed By: omjavaid
Differential Revision: https://reviews.llvm.org/D105630
The original patch was:
https://reviews.llvm.org/D105806
There were some issues with the non-deterministic behaviour of the sorting
function, which led to scalable-call.ll passing or failing unpredictably. This
patch fixes the issue by numbering all instructions in the array first,
and using that number as the order, which should provide a consistent
ordering.
This reverts commit a607f64118.
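A minimal, self-contained sketch of the numbering idea, assuming (purely for
illustration) that the number is used to break ties in an otherwise ambiguous
comparator; the struct, field, and function names are hypothetical and not
taken from the patch:
```lang=c++
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <vector>

struct Instr {
  int Depth; // Stand-in for whatever the real comparator looks at.
};

// Number each instruction by its position in the array, then use that number
// to break ties. Ties previously fell back to pointer order, which made the
// final order change from run to run.
void sortDeterministically(std::vector<Instr *> &Worklist) {
  std::unordered_map<Instr *, size_t> Number;
  for (size_t I = 0; I < Worklist.size(); ++I)
    Number[Worklist[I]] = I;
  std::sort(Worklist.begin(), Worklist.end(),
            [&](Instr *A, Instr *B) {
              if (A->Depth != B->Depth)
                return A->Depth < B->Depth;
              return Number[A] < Number[B]; // Deterministic tie-break.
            });
}
```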
This patch teaches the compiler to identify a wider variety of
`BUILD_VECTOR`s which form integer arithmetic sequences, and to lower
them to `vid.v` with modifications for non-unit steps and non-zero
addends.
The sequences handled by this optimization must either be monotonically
increasing or decreasing. Consecutive elements holding the same value
indicate a fractional step which, while simple mathematically,
becomes more complex to handle both in the realm of lossy integer
division and in the presence of `undef`s.
For example, a common "interleaving" shuffle index will be lowered by
LLVM to both `<0,u,1,u,2,...>` and `<u,0,u,1,u,...>` `BUILD_VECTOR`
nodes. Either of these would ideally be lowered to `vid.v` shifted right
by 1. Detection of this sequence in the presence of general `undef` values
is more complicated, however: `<0,u,u,1,>` could match either
`<0,0,0,1,>` or `<0,0,1,1,>` depending on later values in the sequence.
Both are possible, so backtracking or multiple passes is inevitable.
Sticking to monotonic sequences keeps the logic simpler as it can be
done in one pass. Fractional steps will likely be a separate
optimization in a future patch.
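A simplified sketch of the kind of detection this enables, assuming the
elements have already been extracted as optional integers with std::nullopt
standing in for `undef`; the real lowering works on SelectionDAG nodes and
also emits the scaled/offset `vid.v`, so this is only the matching idea:
```lang=c++
#include <cstdint>
#include <optional>
#include <vector>

struct VIDSequence {
  int64_t Step;   // Per-element increment applied to vid.v.
  int64_t Addend; // Constant added after scaling.
};

// Check whether the defined elements form one arithmetic sequence
// Value[i] = Step * i + Addend, treating std::nullopt as a don't-care.
std::optional<VIDSequence>
matchArithmeticSequence(const std::vector<std::optional<int64_t>> &Elts) {
  std::optional<VIDSequence> Seq;
  std::optional<size_t> PrevIdx;
  for (size_t I = 0; I < Elts.size(); ++I) {
    if (!Elts[I])
      continue; // Undef elements can take any value.
    if (PrevIdx) {
      int64_t ValDiff = *Elts[I] - *Elts[*PrevIdx];
      int64_t IdxDiff = static_cast<int64_t>(I - *PrevIdx);
      if (ValDiff % IdxDiff != 0)
        return std::nullopt; // Fractional step: left to a future patch.
      int64_t Step = ValDiff / IdxDiff;
      int64_t Addend = *Elts[I] - Step * static_cast<int64_t>(I);
      if (!Seq)
        Seq = VIDSequence{Step, Addend};
      else if (Seq->Step != Step || Seq->Addend != Addend)
        return std::nullopt; // Not a single arithmetic sequence.
    }
    PrevIdx = I;
  }
  return Seq;
}
```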
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D104921
This change addresses an assertion failure that occurs in a downstream
compiler with a custom target.
```APInt.h:1151: bool llvm::APInt::operator==(const llvm::APInt &) const: Assertion `BitWidth == RHS.BitWidth && "Comparison requires equal bit widths"'```
No covering test case is submitted with this change since this crash
cannot be reproduced using any upstream supported target. The test case
that exposes this issue is as simple as:
```lang=c++
void test(int *p) {
  int *q = p - 1;
  if (q) {}
  if (q) {} // crash
  (void)q;
}
```
The custom target that exposes this problem supports two address spaces,
16-bit `char`s, and a `_Bool` type that maps to 16-bits. There are no upstream
supported targets with similar attributes.
The assertion appears to be happening as a result of evaluating the
`SymIntExpr` `(reg_$0<int * p>) != 0U` in `VisitSymIntExpr` located in
`SimpleSValBuilder.cpp`. The `LHS` is evaluated to `32b` and the `RHS` is
evaluated to `16b`. This eventually leads to the assertion in `APInt.h`.
While this change addresses the crash and passes LITs, two follow-ups
are required:
1) The remainder of `getZeroWithPtrWidth()` and `getIntWithPtrWidth()`
should be cleaned up following this model to prevent future
confusion.
2) We're not sure why references are found along the modified
code path; that should not be the case. A more principled
fix may be found once we better understand why this
is happening.
Acks: Thanks to @steakhal and @martong for the discussions leading to this
fix.
Reviewed By: NoQ
Differential Revision: https://reviews.llvm.org/D105974
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with dlopen/dlclose scenarios, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt options avoids double-registration
issues and other kinds of unwanted behavior.
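As a hedged illustration of the general pattern (not the exact change in this
commit, and the option name is hypothetical), a file-scope cl::opt runs a
global constructor when the library is loaded, while deferring construction
behind a function keeps registration lazy:
```lang=c++
#include "llvm/Support/CommandLine.h"

// Before: a file-scope option registers itself via a global constructor as
// soon as libSupport is loaded, even if command-line parsing is never used.
// static llvm::cl::opt<bool> ExpensiveChecks("expensive-checks",
//                                            llvm::cl::init(false));

// After (illustrative pattern): construct on first use instead, so merely
// loading the shared library triggers no registration.
static llvm::cl::opt<bool> &expensiveChecksOpt() {
  static llvm::cl::opt<bool> ExpensiveChecks("expensive-checks",
                                             llvm::cl::init(false));
  return ExpensiveChecks;
}
```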
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
This patch handles the `<<` operator defined for `std::unique_ptr` in
the std namespace (ignores custom overloads of the operator).
Differential Revision: https://reviews.llvm.org/D105421
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with dlopen/dlclose scenarios, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt options avoids double-registration
issues and other kinds of unwanted behavior.
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
After we relocate counters, we no longer need to keep the original copy
around, so we can return the memory to the operating system.
Differential Revision: https://reviews.llvm.org/D104839
This patch addresses an assertion failure in the case when the only formula found for LSR
is `1*reg => reg`, which was supposed to be an impossible situation; however, there
is a test that shows it is possible.
In this case, we can use the scaled register with a scale of 1 as the missing base register.
Reviewed By: huihuiz, reames
Differential Revision: https://reviews.llvm.org/D105009
This patch handles all the comparison methods (defined via overloaded
operators) on std::unique_ptr. These operators compare the underlying
pointers, which is modelled by comparing the corresponding inner-pointer
SVal. There is also a special case for comparing the same pointer.
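For example, code like the following exercises the operators this patch
models (a minimal illustration, not a test from the patch):
```lang=c++
#include <memory>

void compare(std::unique_ptr<int> &P, std::unique_ptr<int> &Q) {
  if (P == Q) {
    // Modelled by comparing the inner-pointer SVals of P and Q.
  }
  if (P != nullptr) {
    // Comparison against nullptr also uses the overloaded operator.
  }
  if (P < Q) {
    // Relational operators are handled the same way.
  }
}
```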
Differential Revision: https://reviews.llvm.org/D104616
When setting Allocatable on a generated register class, check all
superclasses and set Allocatable to true if any superclass is
allocatable.
Without this change generated register classes based on an
allocatable class may end up unallocatable due to the topological
inheritance order.
This change primarily affects the AMDGPU backend; however, there are
a few changes in MIPS GlobalISel register constraints as a result.
Reviewed By: kparzysz
Differential Revision: https://reviews.llvm.org/D105967
In D105866, we used an intermediate container to store a list of opcodes. Here,
we use that data structure to help us perform optimization passes that would allow
a more efficient encoding of bind opcodes. Currently, the functionality mirrors the
{1,2} optimization passes done in ld64 for bind opcodes, kept behind an optimization gate
to prevent slight regressions.
Reviewed By: int3, #lld-macho
Differential Revision: https://reviews.llvm.org/D105867
A common use of `ChangeStatus` is as follows:
```
ChangeStatus Changed = ChangeStatus::UNCHANGED;
Changed |= foo();
```
where `foo` returns `ChangeStatus` as well. Currently `ChangeStatus` doesn't
support compound assignment, so we have to write
```
Changed = Changed | foo();
```
which is not that convenient.
This patch adds support for compound assignment for `ChangeStatus`. Compound
assignment is usually implemented as a member function, with the binary
operator then implemented in terms of the compound assignment. However, unlike
a regular C++ class, an enum class doesn't support member functions. As a result,
these operators can only be implemented in the way shown in the patch.
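A minimal, self-contained sketch of that shape (not the exact code from the
patch):
```lang=c++
enum class ChangeStatus { UNCHANGED, CHANGED };

// Free functions: an enum class cannot have member functions, so both the
// compound assignment and the binary operator are defined at namespace
// scope, with operator| expressed in terms of operator|=.
inline ChangeStatus &operator|=(ChangeStatus &LHS, ChangeStatus RHS) {
  if (RHS == ChangeStatus::CHANGED)
    LHS = ChangeStatus::CHANGED;
  return LHS;
}

inline ChangeStatus operator|(ChangeStatus LHS, ChangeStatus RHS) {
  LHS |= RHS;
  return LHS;
}
```
With something of this form in place, `Changed |= foo();` from the example
above compiles as intended.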
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D106109
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with dlopen/dlclose scenarios, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt options avoids double-registration
issues and other kinds of unwanted behavior.
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
In cases where an operation has an argument or result named 'property', the
ODS-generated python fails on import because the `@property` resolves to the
`property` operation argument instead of the builtin `@property` decorator. We
should always use the fully qualified decorator name.
Reviewed By: mikeurbach
Differential Revision: https://reviews.llvm.org/D106106
There are two places in the current deviceRTLs where the parallel level is computed
explicitly, which is basically the functionality of `__kmpc_parallel_level`. Starting from
D105787, we plan to introduce a series of function call folds based on information
that can be deduced at compile time. Computation of the parallel level is
the next target. This patch takes steps towards that optimization.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D105955
This configuration is interesting because GCC has a different level of
strictness for some C++ rules. In particular, it implements the older
standards more stringently than Clang, which can help find places where
we are non-conforming (especially in the test suite).
Differential Revision: https://reviews.llvm.org/D105936
It was possible to re-add a module to a shared in-memory module cache
when search paths are changed. This can eventually cause a crash if the
original module is referenced after this occurs.
1. Module A depends on B
2. B exists in two paths C and D
3. First run only has C on the search path, finds A and B and loads
them
4. Second run adds D to the front of the search path. A is loaded and
contains a reference to the already compiled module from C. But
searching finds the module from D instead, causing a mismatch
5. B and the modules that depend on it are considered out of date and
thus rebuilt
6. The recompiled module A is added to the in-memory cache, freeing
the previously inserted one
This can never occur from a regular clang process, but is very easy to
do through the API - whether through the use of a shared cache or just
running multiple compilations from a single `CompilerInstance`. Update
the compilation to return early if a module is already finalized so that
the pre-condition in the in-memory module cache holds.
Resolves rdar://78180255
Differential Revision: https://reviews.llvm.org/D105328
Use a double pound sign at the start of the line to differentiate comments
from Lit or FileCheck statements.
I will also use this small commit to check my commit access.
Differential Revision: https://reviews.llvm.org/D106103