Currently an indirect call produces the following sequence in PCRelative mode:
extern void function();
extern void (*ptrfunc)();

void g() {
  ptrfunc = function;
}

void f() {
  (*ptrfunc)();
}
Producing
paddi 3, 0, .LC0@PCREL, 1
ld 3, 0(3)
std 2, 24(1)
ld 12, 0(3)
mtctr 12
bctrl
ld 2, 24(1)
Though the caller does not use or preserve r2, it is still saved and restored
across the function call. This patch removes these redundant saves and
restores for indirect calls.
Differential Revision: https://reviews.llvm.org/D77749
llvm/lib/Target/Hexagon/HexagonTargetObjectFile.cpp:296:11: warning: enumeration value 'ScalableVectorTyID' not handled in switch [-Wswitch]
switch (Ty->getTypeID()) {
^
Summary:
This commit recommits the reversion of https://reviews.llvm.org/D75039.
Consensus appears to be in favour of assembly-time resolution of
these ADR and LDR relocations, in line with GNU. The previous
backout broke many lld tests, now fixed by Peter Smith in
61bccda9d9.
Reviewers: psmith
Subscribers: kristof.beyls, hiraditya, danielkiss, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78301
Add initial support for PC Relative addressing to get the jump table base
address instead of using the TOC.
Differential Revision: https://reviews.llvm.org/D75931
Add the IsOverloadNone flag to tell CGBuiltin that it does not have
an overloaded type. This is used for e.g. svpfalse, which takes no
arguments and always returns an svbool_t.
This patch also adds builtins for svcntb_pat, svcnth_pat, svcntw_pat
and svcntd_pat, as those don't require custom codegen.
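As an illustrative sketch only (not part of the patch), these builtins can be
used roughly as follows, assuming an SVE-enabled Clang and arm_sve.h:

#include <arm_sve.h>
#include <stdint.h>

/* svpfalse() takes no arguments and yields an all-false svbool_t. */
svbool_t all_false(void) { return svpfalse(); }

/* svcntb_pat() returns the byte count implied by a predicate pattern,
   e.g. SV_VL4. */
uint64_t bytes_for_vl4(void) { return svcntb_pat(SV_VL4); }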
Reviewers: SjoerdMeijer, efriedma, rovka
Reviewed By: efriedma
Tags: #clang
Differential Revision: https://reviews.llvm.org/D77596
Summary:
Even when BreakBeforeBinaryOperators is set, AlignOperands kept
aligning the beginning of the line, even when it could align the
actual operands (e.g. after an assignment).
With this patch, there is an option to actually align the operands, so
that the operator gets right-aligned with the equal sign or return
statement:
int aaaaa = bbbbbb
          + cccccc;
return aaaaaaa
    && bbbbbbb;
This does not happen inside parentheses, to avoid 'breaking' the indentation:
if (aaaaa
    && bbbbb)
  return;
Reviewers: krasimir, djasper
Subscribers: cfe-commits, klimek
Differential Revision: https://reviews.llvm.org/D32478
When multiple ternary operators are chained, e.g. like an if/else-if/
else-if/.../else sequence, clang-format will keep aligning the colon
with the question mark, which increases the indent for each
conditional:
int a = condition1 ? result1
                   : condition2 ? result2
                                : condition3 ? result3
                                             : result4;
This patch detects the situation (e.g. conditionals used in the false
branch of another conditional) to avoid indenting in that case:
int a = condition1 ? result1
        : condition2 ? result2
        : condition3 ? result3
                     : result4;
When BreakBeforeTernaryOperators is false, this will format like this:
int a = condition1 ? result1 :
        condition2 ? result2 :
        condition3 ? result3 :
                     result4;
Summary:
This change mentions CDE assembly in the LLVM release notes and CDE
intrinsics in both Clang and LLVM release notes.
Reviewers: kristof.beyls, simon_tatham
Reviewed By: kristof.beyls
Subscribers: danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D78481
Add support to optionally emit different instrumentation for accesses to
volatile variables. While the default TSAN runtime likely will never
require this feature, other runtimes for different environments that
have subtly different memory models or assumptions may require
distinguishing volatiles.
One such environment is OS kernels, where volatile is still used in
various places for various reasons and is often declared "safe enough"
even in multi-threaded contexts. One such example is the Linux kernel,
which implements various synchronization primitives using volatile
(READ_ONCE(), WRITE_ONCE()). Here, the Kernel Concurrency Sanitizer
(KCSAN) [1] is a runtime that uses TSAN instrumentation but otherwise
implements a very different approach to race detection from TSAN.
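For context, a simplified sketch of how such primitives rely on volatile
(the Linux kernel's real READ_ONCE()/WRITE_ONCE() definitions are more
involved):

/* Simplified, illustrative definitions only: the volatile cast forces a
   single, untorn access that the compiler may not merge or re-load. */
#define READ_ONCE(x)       (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, val) (*(volatile typeof(x) *)&(x) = (val))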
While explicit use of volatile is generally discouraged in the Linux
kernel, the topic will likely come up again, and we will eventually
need to distinguish volatile accesses [2]. The other use case is
ignoring data races on specially marked variables in the kernel, for
example bit-flags (here we may hide 'volatile' behind a different name
such as 'no_data_race').
[1] https://github.com/google/ktsan/wiki/KCSAN
[2] https://lkml.kernel.org/r/CANpmjNOfXNE-Zh3MNP=-gmnhvKbsfUfTtWkyg_=VqTxS4nnptQ@mail.gmail.com
Author: melver (Marco Elver)
Reviewed-in: https://reviews.llvm.org/D78554
Pavel Labath wrote in D73206:
The internal representation of DebugNames and Apple indexes is fixed by
the relevant (pseudo-)standards, so we can't really change it. The
question is how to efficiently (and cleanly) convert from the internal
representation to some common thing. The conversion from AppleIndex to
DIERef is trivial (which is not surprising as it was the first and the
overall design was optimized for that). With debug_names, the situation
gets more tricky. The internal representation of debug_names uses
CU-relative DIE offsets, but DIERef wants an absolute offset. That means
the index has to do more work to produce the common representation. And
it needs to do that for all results, even though a lot of the index
users are really interested only in a single entry. With the switch to
user_id_t, _all_ indexes would have to do some extra work to encode it,
only for their users to have to immediately decode it back. Having
an iterator/callback-based API would allow us to minimize the impact of
that, as it would only need to happen for the entries that are really
used. And /I think/ we could make the interface return DWARFDies
directly, with each index converting to them using the most direct
approach available.
Jan Kratochvil:
It also makes all the callers shorter, as they no longer need to fetch
a DWARFDIE from a DIERef (and handle the not-found case via
ReportInvalidDIERef); the callers are now served the DWARFDIE they need
directly.
In some cases the DWARFDIE had to be fetched both by the callee (the
DWARFIndex implementation) and the caller.
Differential Revision: https://reviews.llvm.org/D77970
As discussed in http://lists.llvm.org/pipermail/llvm-dev/2020-March/140349.html,
the minimum version of CMake required to build LLVM will be upgraded to
3.13.4 right after we create the release branch for LLVM 11.0.0.
As part of this effort, this commit adds a warning to give a heads up
to folks regarding the upcoming upgrade. This should allow users to
upgrade their CMake in advance so that the upgrade can sail right
through when the time comes.
Differential Revision: https://reviews.llvm.org/D77740
If a 16-bit thumb STM with writeback stores the base register but it isn't the
first register in the list, then an unknown value is stored. The load/store
optimizer knows this and generates a 32-bit STM without writeback instead, but
thumb2 size reduction converts it into a 16-bit STM. Fix this by having thumb2
size reduction notice such STMs and leave them as they are.
Differential Revision: https://reviews.llvm.org/D78493
The ACLE has builtins that take a scalar value that is to be expanded
into a vector by the operation. While the ISA may have an instruction
that takes an immediate or a scalar to represent this, the LLVM IR
intrinsic may not, so Clang will have to splat the scalar value.
This patch also adds the _n forms for svabd, svadd, svdiv, svdivr,
svmax, svmin, svmul, svmulh, svsub and svsubr.
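As an illustrative sketch (not taken from the patch), the _n form of a
builtin accepts a scalar that Clang splats across the vector lanes before
invoking the underlying intrinsic:

#include <arm_sve.h>

/* 'b' is splat to all active lanes; if the ISA has no matching scalar or
   immediate form, Clang materializes the splat itself. */
svfloat32_t add_scalar(svbool_t pg, svfloat32_t a, float b) {
  return svadd_n_f32_m(pg, a, b);
}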
Reviewers: SjoerdMeijer, efriedma, rovka
Reviewed By: SjoerdMeijer
Tags: #clang
Differential Revision: https://reviews.llvm.org/D77594
This adds some extra processing into the Pre-RA ARM load/store optimizer
to detect and merge MVE loads/stores and adds of the same base. These are
not always turned into a post-inc during ISel because, the DAG being a
graph, we do not always have an order for the nodes and so do not know
which node to make the post-inc and which should use the new
post-incremented value. After ISel, we have an order that we can use to
post-inc the following instructions.
So this looks for a load/store with a starting offset of 0 and an
add/sub from the same base, plus a number of other loads/stores. We then
do some checks and convert the zero-offset load/store into a post-inc
variant. Any loads/stores after it have the offset subtracted from their
immediates. For example:
LDR #4      LDR #4
LDR #0      LDR_POSTINC #16
LDR #8      LDR #-8
LDR #12     LDR #-4
ADD #16
It only handles MVE loads/stores at the moment. Normal loads/stores will
be added in a follow-up patch; they just have some extra details to
ensure that we keep generating LDRD/LDM successfully.
Differential Revision: https://reviews.llvm.org/D77813
CMAKE_SOURCE_DIR is not the right directory if llvm is included in
another cmake project. PROJECT_SOURCE_DIR is always the same and should
be used instead.
Instead of having different names for the same Lit feature across code
bases, use the same name everywhere. This NFC commit is in preparation
for a refactor where all three projects will be using the same Lit
feature detection logic, and hence it won't be convenient to use
different names for the feature.
Differential Revision: https://reviews.llvm.org/D78370
Add 96-bit, 160-bit and 256-bit AReg classes to match VReg and SReg.
NFC as far as I know, but it may avoid weird legalization problems.
Differential Revision: https://reviews.llvm.org/D78348
After D78301 MC no longer emits a relocation for this case. Change to use
.inst and .reloc to synthesize the same instruction and relocation. One
more test case I missed.
Prior to this patch, llvm-objdump would only look in the last section
(according to the section header table order) that matched an address
for a symbol when identifying the target symbol of a call or branch
operation. If there are multiple sections with the same address, due to
some of them being empty, it did not look in those, even if the symbol
couldn't be found in the first section looked in.
This patch causes llvm-objdump to look in all sections for possible
candidate symbols. If there are multiple possible symbols, it picks one
from a non-empty section, if possible (as that is more likely to be the
"real" symbol since functions can't really be in emptiy sections),
before falling back to those in empty sections. If all else fails, it
falls back to absolute symbols as it did before.
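As a rough, purely illustrative sketch of that selection policy (the names
and structure below do not mirror the actual llvm-objdump code):

#include <stddef.h>

struct Candidate {
  const char *name;
  int in_empty_section; /* the symbol's section has zero size */
  int is_absolute;      /* absolute symbol, not tied to a section */
};

/* Prefer a candidate from a non-empty section, then one from an empty
   section, and only fall back to an absolute symbol. */
static const char *pick_symbol(const struct Candidate *c, size_t n) {
  const char *from_empty = NULL, *absolute = NULL;
  for (size_t i = 0; i < n; ++i) {
    if (c[i].is_absolute) {
      if (!absolute) absolute = c[i].name;
    } else if (c[i].in_empty_section) {
      if (!from_empty) from_empty = c[i].name;
    } else {
      return c[i].name; /* non-empty section: most likely the "real" symbol */
    }
  }
  return from_empty ? from_empty : absolute;
}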
Differential Revision: https://reviews.llvm.org/D78549
Reviewed by: grimar, Higuoxing