The AST for the constexpr.cl test contains address space conversion
nodes that cast through the implicit generic address space. These
caused the constant evaluator to reject the input as non-constexpr in
C++ for OpenCL mode, whereas the same input was accepted as constexpr
in plain C++ mode, where the AST contains no address space cast nodes.
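For illustration, a hypothetical snippet of the kind of C++ for OpenCL code affected (an assumption on my part, not the actual contents of constexpr.cl):
```
// Hypothetical example: evaluating p.sum() should be accepted as a constant
// expression in C++ for OpenCL mode just as in plain C++, even though in the
// former mode the AST may cast the object through the generic address space.
struct Pair {
  int a, b;
  constexpr Pair(int a, int b) : a(a), b(b) {}
  constexpr int sum() const { return a + b; }
};

constexpr Pair p(1, 2);
constexpr int total = p.sum();
```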
Fixes PR44177.
Differential Revision: https://reviews.llvm.org/D71015
This attempts to teach the cost model in Arm that code such as:
%s = shl i32 %a, 3
%r = and i32 %s, %b
Can under Arm or Thumb2 become:
and r0, r1, r2, lsl #3
So the cost of the shift can essentially be free. To do this without
trying to artificially adjust the cost of the "and" instruction, it
needs to get the users of the shl and check if they are a type of
instruction that the shift can be folded into. And so it needs to have
access to the actual instruction in getArithmeticInstrCost, which, if
available, is added as an extra parameter, much like getCastInstrCost.
We otherwise limit it to shifts with a single user, which should
hopefully handle most of the cases. The list of instructions that the
shift can be folded into includes ADC, ADD, AND, BIC, CMP, EOR, MVN, ORR,
ORN, RSB, SBC and SUB. This translates to Add, Sub, And, Or, Xor and
ICmp.
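For reference, a minimal C++ sketch (my own example, not taken from the patch) of source that lowers to the shl-feeding-an-and pattern above:
```
// The shift has a single user (the AND), so on Arm/Thumb2 it can fold into
// the AND's shifted operand, as in the "and r0, r1, r2, lsl #3" above,
// making the shift effectively free in the cost model.
unsigned shiftAndMask(unsigned a, unsigned b) {
  return (a << 3) & b;
}
```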
Differential Revision: https://reviews.llvm.org/D70966
This adds some extra cost model tests for shifts, and makes some minor
adjustments to some Neon code to make it clear what it applies to.
Both NFC.
Summary:
This new warning (enabled by -Wextra) fires when a std::move is
redundant, i.e. when the default compiler behavior would already select a
move operation (e.g., when returning a local variable). Unlike the cases
flagged by -Wpessimizing-move, a redundant std::move has no performance
impact -- it just adds noise.
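For illustration, a minimal sketch (my own example, not taken from LLVM) of the pattern the warning flags:
```
#include <string>
#include <utility>

std::string makeName() {
  std::string name = "example";
  // -Wredundant-move fires here: the return statement would already select
  // the move constructor, so std::move adds nothing (but costs nothing either).
  return std::move(name);
}
```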
Currently llvm has about 1500 of these warnings. Unfortunately, the
suggested fix -- removing std::move -- does not work because of some
older compilers we still support. Specifically clang<=3.8 will not use a
move operation if an implicit conversion is needed (Core issue 1579). In
code like "A f(ConvertibleToA a) { return a; }" it will prefer a copy,
or fail to compile if a copy is not possible.
This patch disables that warning to get a meaningful signal out of a GCC
9 build.
Reviewers: rnk, aaron.ballman, xbolva00
Subscribers: mgorny, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70963
Summary:
Currently the describeLoadedValue() hook is assumed to describe the
value of the instruction's first explicit define. The hook will not be
called for instructions with more than one explicit define.
This commit adds a register parameter to the describeLoadedValue() hook,
and invokes the hook for all registers in the worklist.
This will allow us to, for example, describe instructions which produce
more than one parameter's value; e.g. Hexagon's various combine
instructions.
This also fixes situations in our downstream target where we may pass
smaller parameters in the high part of a register. If such a parameter's
value is produced by a larger copy instruction, we can't describe the
call site value using the super-register, and we instead need to know
which sub-register should be used.
This also allows us to handle cases like this:
$ebx = [...]
$rdi = MOVSX64rr32 $ebx
$esi = MOV32rr $edi
CALL64pcrel32 @call
The hook will first be invoked for the MOV32rr instruction, which will
say that @call's second parameter (passed in $esi) is described by $edi.
As $edi is not preserved it will be added to the worklist. When we get
to the MOVSX64rr32 instruction, we need to describe two values; the
sign-extended value of $ebx -> $rdi for the first parameter, and $ebx ->
$edi for the second parameter, which is now possible.
This commit modifies the dbgcall-site-lea-interpretation.mir test case.
In the test case, the values of some 32-bit parameters were produced
with LEA64r. Perhaps we could handle such cases in general by emitting
expressions that mask the value down to its lower 32 bits, but I have not
been able to produce a case from C code where a LEA64r is used for a
32-bit parameter instead of LEA64_32.
I have not found a case where it would be useful to describe parameters
using implicit defines, so in this patch the hook is still only invoked
for explicit defines of forwarding registers.
Reviewers: djtodoro, NikolaPrica, aprantl, vsk
Reviewed By: djtodoro, vsk
Subscribers: ormris, hiraditya, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D70431
Summary:
This patch adds support for atomic types (DW_TAG_atomic_type) to LLDB. It's mostly just filling out all the switch statements that didn't implement the Atomic case with the usual boilerplate.
Thanks Pavel for writing the test case.
Reviewers: labath, aprantl, shafik
Reviewed By: labath
Subscribers: jfb, abidh, JDevlieghere, lldb-commits
Tags: #lldb
Differential Revision: https://reviews.llvm.org/D71183
Currently the describeLoadedValue() hook is assumed to describe the
value of the instruction's first explicit define. The hook will not be
called for instructions with more than one explicit define.
This commit adds a register parameter to the describeLoadedValue() hook,
and invokes the hook for all registers in the worklist.
This will allow us to, for example, describe instructions which produce
more than one parameter's value; e.g. Hexagon's various combine
instructions.
This also fixes a case in our downstream target where we may pass
smaller parameters in the high part of a register. If such a parameter's
value is produced by a larger copy instruction, we can't describe the
call site value using the super-register, and we instead need to know
which sub-register should be used.
This also allows us to handle cases like this:
$ebx = [...]
$rdi = MOVSX64rr32 $ebx
$esi = MOV32rr $edi
CALL64pcrel32 @call
The hook will first be invoked for the MOV32rr instruction, which will
say that @call's second parameter (passed in $esi) is described by $edi.
As $edi is not preserved it will be added to the worklist. When we get
to the MOVSX64rr32 instruction, we need to describe two values; the
sign-extended value of $ebx -> $rdi for the first parameter, and $ebx ->
$edi for the second parameter, which is now possible.
This commit modifies the dbgcall-site-lea-interpretation.mir test case.
In the test case, the values of some 32-bit parameters were produced
with LEA64r. Perhaps we could handle such cases in general by emitting
expressions that mask the value down to its lower 32 bits, but I have not
been able to produce a case from C code where a LEA64r is used for a
32-bit parameter instead of LEA64_32.
I have not found a case where it would be useful to describe parameters
using implicit defines, so in this patch the hook is still only invoked
for explicit defines of forwarding registers.
Summary:
Counters can be flushed in a multi-threaded context for example when the process is forked in different threads (https://github.com/llvm/llvm-project/blob/master/llvm/lib/Transforms/Instrumentation/GCOVProfiling.cpp#L632-L663).
In order to avoid data races between concurrent flushes, a critical section is needed around the flush.
We had a lot of crashes in this code in Firefox CI when we switched to clang for linux ccov builds and those crashes disappeared with this patch.
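As a rough C++ sketch of the idea (an illustration only; the actual runtime code differs):
```
#include <mutex>

namespace {
std::mutex FlushMutex; // serializes counter flushes across threads

void writeAndResetCounters() {
  // ... write out the coverage data and zero the in-memory counters ...
}
} // namespace

// Called, for example, when the process forks: concurrent callers now take
// turns instead of racing on the shared counter state.
void flushCounters() {
  std::lock_guard<std::mutex> Lock(FlushMutex);
  writeAndResetCounters();
}
```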
Reviewers: marco-c, froydnj, dmajor, davidxl
Reviewed By: marco-c, dmajor
Subscribers: froydnj, dmajor, dberris, jfb, #sanitizers, llvm-commits, sylvestre.ledru
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D70910
Summary:
One of the ways we try to make LLDB faster is by only creating the Clang declarations (and loading the associated types)
when we actually need them for something. For example an evaluated expression might need to load types to
type check and codegen the expression.
Currently this mechanism isn't really tested, so we have no way to know how many Clang nodes we load and
when we load them. In general there seems to be some confusion about when and why certain Clang nodes are created.
As we are about to make some changes to the code that creates Clang AST nodes, we probably should have
a test that at least checks that the current behaviour doesn't change. It also serves as some kind of documentation
of the current behaviour.
The test in this patch is just evaluating some expressions and checks which Clang nodes are created due to this in the
module AST. The check happens by looking at the AST dump of the current module and then scanning it for the
declarations we are looking for.
I'm aware that there are things missing in this test (inheritance, template parameters, non-expression evaluation commands)
but I'll expand it in follow up patches.
Also this test found two potential bugs in LLDB which are documented near the respective asserts in the test:
1. LLDB seems to always load all types of local variables even when we don't reference them in the expression. We had patches
that tried to prevent this but it seems that didn't work as well as it should have (even though we don't complete these
types).
2. We always seem to complete the first field of any record we run into. This has the funny side effect that LLDB is faster when
all classes in a project have an arbitrary `char unused;` as their first member. We probably want to fix this.
Reviewers: shafik
Subscribers: abidh, JDevlieghere, lldb-commits
Tags: #lldb
Differential Revision: https://reviews.llvm.org/D71056
This caused "Too many bits for uint64_t" asserts when building Chromium. See
https://crbug.com/1031978#c2 for a reproducer. I'll follow up on the
llvm-commits thread with a creduced version.
> ARMCodeGenPrepare has already been generalized and renamed to
> TypePromotion. We've had it enabled and tested downstream for a
> while, so enable it by default.
>
> Differential Revision: https://reviews.llvm.org/D70998
Array members are not yet handled. In addition, defaulted comparisons
can't yet find comparison operators by unqualified lookup (only by
member lookup and ADL). These issues will be fixed in follow-on changes.
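For context, a minimal example (mine, not from the patch) of the kind of defaulted comparison this implements; an array member such as `int v[4]` would fall under the not-yet-handled case mentioned above:
```
#include <compare>

struct Point {
  int x;
  int y;
  // Defaulted three-way comparison: compares members in declaration order.
  friend auto operator<=>(const Point &, const Point &) = default;
};

static_assert(Point{1, 2} < Point{1, 3});
```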
Summary:
D30644 added OpenMP offloading to AArch64 targets, then D32035 changed the
frontend to throw an error when offloading is requested for an unsupported
target architecture. However the latter did not include AArch64 in the list
of supported architectures, causing the following unit tests to fail:
libomptarget :: api/omp_get_num_devices.c
libomptarget :: mapping/pr38704.c
libomptarget :: offloading/offloading_success.c
libomptarget :: offloading/offloading_success.cpp
Reviewers: pawosm01, gtbercea, jdoerfert, ABataev
Subscribers: kristof.beyls, guansong, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D70804
This is another transform suggested in PR44153:
https://bugs.llvm.org/show_bug.cgi?id=44153
The backend for some targets already manages to get
this if it converts copysign to bitwise logic.
Summary:
The patch removes OffsetToFirstDefinition in the 'scope bytes total'
statistic computation. Thus it unifies the way the scope and the coverage
buckets are computed. The rationale behind that is the following:
1. OffsetToFirstDefinition was used to calculate the variable's life range.
However, there is no simple way to do it accurately, so the scope calculated
this way might be misleading. See D69027 for more details on the subject.
2. Both 'scope bytes total' and coverage buckets seem to be intended
to represent the same data in different ways. Otherwise, the statistics
might be contradictory and confusing.
Note that the approach gives up a thorough evaluation of debug information
completeness (i.e. coverage buckets by themselves don't tell how good
the debug information is). Only changes in coverage over time make
'physical' sense.
Reviewers: djtodoro, aprantl, vsk, dblaikie, avl
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D70548
MVE doesn't have the range of shuffle instructions available in Neon. We
also cannot use the trick of cutting a difficult vector shuffle in half
to simplify things. Instead we need to be more careful about how we
lower shuffles.
This patch adds an extra combine that attempts to find "whole lane"
vmovs when lowering shuffles of smaller types. This helps us make some
shuffles a lot simpler, generating single lane movs for the parts that
can make use of it, falling back to the original shuffle for the rest.
Differential Revision: https://reviews.llvm.org/D69509
Alas, using half the available vector registers in a single instruction
is just too much for the register allocator to handle. The mve-vldst4.ll
test here fails when these instructions are enabled at present. This
patch disables the generation of VLD4 and VST4 by adding a
mve-max-interleave-factor option, which we currently default to 2.
Differential Revision: https://reviews.llvm.org/D71109
Currently we fail to pick the right insertion point when
PreviousLastPart of a first-order-recurrence is a PHI node not in the
LoopVectorBody. This can happen when PreviousLastPart is produced in a
predicated block. In that case, we should pick the insertion point in
the BB the PHI is in.
Fixes PR44020.
Reviewers: hsaito, fhahn, Ayal, dorit
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D71071
Use elaborated type spellings for the parameter to avoid the ambiguity
between `llvm` and `lldb_private` names. This is needed for building
with Visual Studio.
My patch 9db13b5a7d seems to have
caused some build bots to fail due to warnings that appear only
when using -Wcovered-switch-default.
This patch is an attempt to fix this by avoiding both the warning
"default label in switch which covers all enumeration values"
for the inner switch statements and, at the same time, the warning
"this statement may fall through"
for the outer switch statement in getVectorComparison
(SystemZISelLowering.cpp).
Generate types for global variables with "weak" attribute.
Keep the allocation scope the same for both weak and non-weak
globals, as the ELF symbol table can determine whether a global
symbol is weak or not.
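For example (my own illustration, not taken from the patch), a weak global such as the following now gets type information, while its weak binding remains a property of the ELF symbol table:
```
// A weak definition: another translation unit may override it; the ELF
// symbol table records the weak binding, so the type information does not
// need a separate allocation scope for it.
__attribute__((weak)) int config_flag = 0;
```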
Differential Revision: https://reviews.llvm.org/D71162
AssumptionCache can be null in SimplifyCFGOptions. However, FoldCondBranchOnPHI() was not properly handling the case where a null AssumptionCache is passed to simplifyCFG.
Patch by Rodrigo Caetano Rocha <rcor.cs@gmail.com>
Reviewers: fhahn, lebedev.ri, spatel
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D69963
A trivially copyable type provides a trivial copy constructor and a trivial
copy assignment operator. This is enough for the runtime to memcpy the data
to the device. Additionally there must be no virtual functions or virtual
base classes, and the destructor is guaranteed to be trivial, i.e. it performs
no action.
The runtime does not require trivial default constructors because on alloc
the memory is undefined. Thus, weaken the warning to be only issued if the
mapped type is not trivially copyable.
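For illustration, a minimal example (mine, not from the patch) of a type the warning no longer flags: it is not trivial, because of the user-provided default constructor, but it is trivially copyable, which is all the runtime needs:
```
#include <type_traits>

struct Params {
  int n;
  double scale;
  Params() : n(0), scale(1.0) {} // user-provided: the type is not trivial...
};

// ...but it is still trivially copyable, so copying its bytes to the device
// is well defined.
static_assert(std::is_trivially_copyable<Params>::value,
              "safe to map by plain byte copy");
```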
Differential Revision: https://reviews.llvm.org/D71134
This adds support for constrained floating-point comparison intrinsics.
Specifically, we add:
declare <ty2>
@llvm.experimental.constrained.fcmp(<type> <op1>, <type> <op2>,
                                    metadata <condition code>,
                                    metadata <exception behavior>)

declare <ty2>
@llvm.experimental.constrained.fcmps(<type> <op1>, <type> <op2>,
                                     metadata <condition code>,
                                     metadata <exception behavior>)
The first variant implements an IEEE "quiet" comparison (i.e. we only
get an invalid FP exception if either argument is a SNaN), while the
second variant implements an IEEE "signaling" comparison (i.e. we get
an invalid FP exception if either argument is any NaN).
The condition code is implemented as a metadata string. The same set
of predicates as for the fcmp instruction is supported (except for the
"true" and "false" predicates).
These new intrinsics are mapped by SelectionDAG codegen onto two new
ISD opcodes, ISD::STRICT_FSETCC and ISD::STRICT_FSETCCS, again
representing quiet vs. signaling comparison operations. Otherwise
those nodes look like SETCC nodes, with an additional chain argument
and result as usual for strict FP nodes. The patch includes support
for the common legalization operations for those nodes.
The patch also includes full SystemZ back-end support for the new
ISD nodes, mapping them to all available SystemZ instruction to
fully implement strict semantics (scalar and vector).
Differential Revision: https://reviews.llvm.org/D69281
The file is intended to gather various VPlan transformations, not only
CFG related transforms. Actually, the only transformation there is not
CFG related.
Reviewers: Ayal, gilr, hsaito, rengolin
Reviewed By: gilr
Differential Revision: https://reviews.llvm.org/D70732
Summary:
This patch fixes an issue where the PPC MI peephole optimization pass incorrectly removes a vector swap.
Specifically, the pass can combine a splat/swap to a splat/copy. It uses `TargetRegisterInfo::lookThruCopyLike` to determine that the operands to the splat are the same. However, the current logic only compares the operands based on register numbers. In the case where the splat operands are ultimately fed from the same physical register, the pass can incorrectly remove a swap if the register feeding one of the operands has been clobbered.
This patch adds a check to ensure that the feeding registers are both virtual registers, or that the operands to the splat or swap are the same register.
Here is an example in pseudo-MIR of what happens in the test case added in this patch:
Before PPC MI peephole optimization:
```
%arg = XVADDDP %0, %1
$f1 = COPY %arg.sub_64
call double rint(double)
%res.first = COPY $f1
%vec.res.first = SUBREG_TO_REG 1, %res.first, %subreg.sub_64
%arg.swapped = XXPERMDI %arg, %arg, 2
$f1 = COPY %arg.swapped.sub_64
call double rint(double)
%res.second = COPY $f1
%vec.res.second = SUBREG_TO_REG 1, %res.second, %subreg.sub_64
%vec.res.splat = XXPERMDI %vec.res.first, %vec.res.second, 0
%vec.res = XXPERMDI %vec.res.splat, %vec.res.splat, 2
; %vec.res == [ %vec.res.second[0], %vec.res.first[0] ]
```
After optimization:
```
; ...
%vec.res.splat = XXPERMDI %vec.res.first, %vec.res.second, 0
; lookThruCopyLike(%vec.res.first) == lookThruCopyLike(%vec.res.second) == $f1
; so the pass replaces the swap with a copy:
%vec.res = COPY %vec.res.splat
; %vec.res == [ %vec.res.first[0], %vec.res.second[0] ]
```
As best as I can tell, this has occurred since r288152, which added support for lowering certain vector operations to direct moves in the form of a splat.
Committed for vddvss (Colin Samples). Thanks Colin for the patch!
Differential Revision: https://reviews.llvm.org/D69497
The NDK uses a separate set of libc++ headers in the sysroot. Any headers
in the installation directory are not going to work on Android, not least
because they use a different name for the inline namespace (std::__1 instead
of std::__ndk1).
This effectively makes it impossible to produce a single toolchain that is
capable of targeting both Android and another platform that expects libc++
headers to be installed in the installation directory, such as Mac.
In order to allow this scenario to work, stop looking for headers in the
install directory on Android.
Differential Revision: https://reviews.llvm.org/D71154
Scudo only supports building for android/linux/fuchsia, so require target_os to
be one of linux/fuchsia to do a stage2_unix scudo build. Android is already
covered by the stage2_android* toolchains below.
Differential Revision: https://reviews.llvm.org/D71131