Commit Graph

6008 Commits

Author SHA1 Message Date
Nick Desaulniers 246398ece7 [clang][Parse] properly parse asm-qualifiers, asm inline
Summary:
The parsing of GNU C extended asm statements was a little brittle and
had a few issues:
- It was using Parse::ParseTypeQualifierListOpt to parse the `volatile`
  qualifier.  That parser is really meant for TypeQualifiers; an asm
  statement doesn't really have a type qualifier. This is still maybe
  nice to have, but not necessary. We now can check for the `volatile`
  token by properly expanding the grammar, rather than abusing
  Parse::ParseTypeQualifierListOpt.
- The parsing of `goto` was position-dependent, so `asm goto volatile`
  wouldn't parse. The qualifiers should be position-independent with
  respect to one another. Now they are.
- We would warn on duplicate `volatile`, but the parse error for
  duplicate `goto` was a generic parse error and wasn't clear.
- We need to add support for the recent GNU C extension `asm inline`.
  Adding support to the parser with the above issues highlighted the
  need for this refactoring.

Link: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
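
A hypothetical snippet (not taken from the patch) showing the qualifier combinations the refactored parser accepts:

```c
void f(int x) {
  /* Qualifiers are position-independent: both orderings now parse. */
  asm volatile goto("" : : "r"(x) : : done);
  asm goto volatile("" : : "r"(x) : : done);
  /* The GNU C extension 'asm inline' is accepted as well. */
  asm inline volatile("nop");
done:
  return;
}
```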

Reviewers: aaron.ballman

Reviewed By: aaron.ballman

Subscribers: aheejin, jfb, nathanchance, cfe-commits, echristo, efriedma, rsmith, chandlerc, craig.topper, erichkeane, jyu2, void, srhines

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75563
2020-03-12 15:13:59 -07:00
Simon Tatham 3f8e714e2f [ARM,MVE] Add intrinsics and isel for MVE fused multiply-add.
Summary:
This adds the ACLE intrinsic family for the VFMA and VFMS
instructions, which perform fused multiply-add on vectors of floats.

I've represented the unpredicated versions in IR using the cross-
platform `@llvm.fma` IR intrinsic. We already had isel rules to
convert one of those into a vector VFMA in the simplest possible way;
but we didn't have rules to detect a negated argument and turn it into
VFMS, or rules to detect a splat argument and turn it into one of the
two vector/scalar forms of the instruction. Now we have all of those.

The predicated form uses a target-specific intrinsic as usual, but
I've stuck to just one, for a predicated FMA. The subtraction and
splat versions are code-generated by passing an fneg or a splat as one
of its operands, the same way as the unpredicated version.

In arm_mve_defs.h, I've had to introduce a tiny extra piece of
infrastructure: a record `id` for use in codegen dags which implements
the identity function. (Just because you can't declare a Tablegen
value of type dag which is //only// a `$varname`: you have to wrap it
in something. Now I can write `(id $varname)` to get the same effect.)
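
As a rough illustration (assuming the standard ACLE naming and an MVE-FP target; not taken from the patch), the unpredicated intrinsic might be used like this:

```c
#include <arm_mve.h>

/* Returns add + m1 * m2 per lane; clang lowers this to @llvm.fma, which
   isel then matches to a vector VFMA (or VFMS / vector-scalar forms when
   an operand is negated or a splat). */
float32x4_t fma_example(float32x4_t add, float32x4_t m1, float32x4_t m2) {
  return vfmaq_f32(add, m1, m2);
}
```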

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75998
2020-03-12 11:13:50 +00:00
Shengchen Kan 214d24e1f8 [X86] Support intrinsic _mm_broadcastsi128_si256
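
A minimal usage sketch (hypothetical function name) for the newly supported intrinsic:

```c
#include <immintrin.h>

/* Duplicates the 128-bit source vector into both halves of a 256-bit
   vector (AVX2 vbroadcasti128). */
__m256i dup128(__m128i v) {
  return _mm_broadcastsi128_si256(v);
}
```
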
Reviewers: LuoYuanke, craig.topper, RKSimon, pengfei

Reviewed By: craig.topper

Subscribers: cfe-commits, llvm-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75897
2020-03-12 10:56:39 +08:00
Shengchen Kan ab69cd0779 [X86] Support intrinsic _mm_cldemote
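
A minimal usage sketch (hypothetical function name; requires -mcldemote):

```c
#include <x86intrin.h>

/* Hints to the processor that the cache line containing p may be demoted
   to a more distant cache level (CLDEMOTE). */
void demote(const void *p) {
  _mm_cldemote(p);
}
```
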
Reviewers: LuoYuanke, craig.topper, RKSimon, pengfei

Reviewed By: craig.topper

Subscribers: cfe-commits, llvm-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75896
2020-03-12 10:03:41 +08:00
Shengchen Kan 560aa53f8f [X86] Support intrinsics _bextr2*
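
A usage sketch, assuming the control-word encoding of the underlying BEXTR instruction (start in bits 7:0, length in bits 15:8); the function name is made up:

```c
#include <x86intrin.h>

/* Unlike _bextr_u32, which takes start and length as separate arguments,
   _bextr2_u32 takes the combined BEXTR control word directly. */
unsigned extract_field(unsigned src) {
  return _bextr2_u32(src, 4 | (8u << 8)); /* extract 8 bits starting at bit 4 */
}
```
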
Reviewers: LuoYuanke, craig.topper, RKSimon, pengfei

Reviewed By: craig.topper

Subscribers: cfe-commits, llvm-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75894
2020-03-12 09:26:51 +08:00
Simon Moll d871ef4e6a [instcombine] remove fsub to fneg hacks; only emit fneg
Summary: Rewrite the fsub-0.0 idiom to fneg and always emit fneg for fp
negation. This also extends the scalarization cost in instcombine for unary
operators to result in the same IR rewrites for fneg as for the idiom.

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D75467
2020-03-10 16:57:02 +01:00
Mikhail Maltsev 47edf5bafb [ARM,CDE] Generalize MVE intrinsics infrastructure to support CDE
Summary:
This patch generalizes the existing code to support CDE intrinsics
which will share some properties with existing MVE intrinsics
(some of the intrinsics will be polymorphic and accept/return values
of MVE vector types).
Specifically the patch:
* Adds new tablegen backends -gen-arm-cde-builtin-def,
  -gen-arm-cde-builtin-codegen, -gen-arm-cde-builtin-sema,
  -gen-arm-cde-builtin-aliases, -gen-arm-cde-builtin-header based on
  existing MVE backends.
* Renames the '__clang_arm_mve_alias' attribute into
  '__clang_arm_builtin_alias' (it will be used with CDE intrinsics as
  well as MVE intrinsics)
* Implements semantic checks for the coprocessor argument of the CDE
  intrinsics as well as the existing coprocessor intrinsics.
* Adds one CDE intrinsic __arm_cx1 to test the above changes

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: simon_tatham

Subscribers: sdesmalen, mgorny, kristof.beyls, danielkiss, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75850
2020-03-10 14:03:16 +00:00
Djordje Todorovic 5aa5c943f7 Reland "[DebugInfo] Enable the debug entry values feature by default"
Differential Revision: https://reviews.llvm.org/D73534
2020-03-10 09:15:06 +01:00
Sjoerd Meijer e32f8ef927 Follow up of 3d9a0445cc, clang driver defaulting to -fno-common
Attempt to pacify windows bot where this failed:

clang/test/CodeGen/vlt_to_pointer.c
2020-03-09 20:43:05 +00:00
Sjoerd Meijer 3d9a0445cc Recommit #2 "[Driver] Default to -fno-common for all targets"
After a first attempt to fix the test-suite failures, my first recommit
caused the same failures again. I had updated CMakeList.txt files of
tests that needed -fcommon, but it turns out that there are also
Makefiles which are used by some bots, so I've updated these Makefiles
now too.

See the original commit message for more details on this change:
0a9fc9233e
2020-03-09 19:57:03 +00:00
Erich Keane cc8390bfe3 Permit attribute 'used' with 'target' multiversioning.
This adds infrastructure for a multiversioning whitelist, plus adds
'used' to the allowed list with 'target'.  The behavior here mirrors the
implementation in GCC, where 'used' only applies to the single
declaration and doesn't apply to the ifunc or resolver.

This is not being applied to cpu_dispatch and cpu_specific, since the
rules are more complicated for cpu_specific, which emits multiple
symbols. Additionally, the author isn't currently aware of uses in the
wild of this combination, but is aware of a number of target+used
combinations.
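
A hypothetical sketch of the now-permitted combination (the function and target strings below are made up):

```c
/* 'used' together with 'target' multiversioning; as described above,
   'used' applies only to this one version, not to the generated ifunc
   or resolver. */
__attribute__((used, target("avx2"))) int compute(void) { return 2; }
__attribute__((target("default")))    int compute(void) { return 1; }
```
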
2020-03-09 12:38:03 -07:00
Erich Keane 7b66160828 Fix Target Multiversioning renaming.
The initial implementation only did 'first declaration renaming' when
a default version came after. This is insufficient in cases where a
default does not exist, so this patch makes sure that we do the renaming
in all cases.

This renaming is necessary because we emit the first declaration before
knowing that it IS a target multiversion function, which would change
its name. The second declaration (the one that caused the
multiversioning) then needs to make sure that the first one has its name
changed to be consistent with the resolver usage.
2020-03-09 08:29:18 -07:00
Sjoerd Meijer f35d112efd Revert "Recommit "[Driver] Default to -fno-common for all targets""
This reverts commit 2c36c23f34.

Still problems in the test-suite, which I really thought I had fixed...
2020-03-09 10:37:28 +00:00
Sjoerd Meijer 2c36c23f34 Recommit "[Driver] Default to -fno-common for all targets"
This includes fixes for:
- test-suite: some benchmarks need to be compiled with -fcommon, see D75557.
- compiler-rt: one test needed -fcommon, and another a change, see D75520.
2020-03-09 10:07:37 +00:00
Matt Arsenault a4e71f01c0 Assume ieee behavior without denormal-fp-math attribute 2020-03-07 12:10:56 -05:00
Thomas Lively d43fcd0c04 [WebAssembly] Add SIMD integer min/max builtins
Summary:
Although SIMD integer min/max operations can be expressed using the ?:
operator in C++, that operator is disallowed for vectors in C. As a
workaround, this change introduces new WebAssembly-specific builtin
functions that lower to the desired vector icmp/select sequences.
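
A rough sketch of the intended usage; the builtin name below is an assumption modelled on the existing __builtin_wasm_* naming, not confirmed by the commit:

```c
#include <stdint.h>
typedef int32_t i32x4 __attribute__((vector_size(16)));

/* Lowers to a vector icmp followed by a select, since the ?: operator
   is not available for vectors in C. */
i32x4 min_example(i32x4 a, i32x4 b) {
  return __builtin_wasm_min_s_i32x4(a, b);
}
```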

Reviewers: aheejin, dschuff, kripken

Subscribers: sbc100, jgravelle-google, sunfish, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75770
2020-03-06 14:28:52 -08:00
Simon Tatham 068b2f313c [ARM,MVE] Add the `vshlcq` intrinsics.
Summary:
The VSHLC instruction performs a left shift of a whole vector register
by an immediate shift count up to 32, shifting in new bits at the low
end from a GPR and delivering the shifted-out bits from the high end
back into the same GPR.

Since the instruction produces two outputs (the shifted vector
register and the output GPR of shifted-out bits), it has to be
instruction-selected in C++ rather than Tablegen.
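
A usage sketch, assuming the usual ACLE signature for this intrinsic (vector, pointer to the carry GPR, immediate shift count):

```c
#include <arm_mve.h>

/* Shifts the whole vector left by 8 bits, shifting in bits from *carry at
   the low end and writing the shifted-out high bits back to *carry. */
uint32x4_t shlc_example(uint32x4_t v, uint32_t *carry) {
  return vshlcq_u32(v, carry, 8);
}
```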

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75445
2020-03-04 08:49:27 +00:00
Simon Tatham 810127f6ab [ARM,MVE] Add the `vsbciq` intrinsics.
Summary:
These are exactly parallel to the existing `vadciq` intrinsics, which
we implemented last year as part of the original MVE intrinsics
framework setup.

Just like VADC/VADCI, the MVE VSBC/VSBCI instructions deliver two
outputs, both of which the intrinsic exposes: a modified vector
register and a carry flag. So they have to be instruction-selected in
C++ rather than Tablegen. However, in this case, that's trivial: the
same C++ isel routine we already have for VADC works unchanged, and
all we have to do is to pass it a different instruction id.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75444
2020-03-04 08:49:27 +00:00
Sjoerd Meijer 4e363563fa Revert "[Driver] Default to -fno-common for all targets"
This reverts commit 0a9fc9233e.

Going to look at the asan failures.

I find the failures in the test suite weird, because they look
like compile-time tests and I don't understand how those can be
failing, but I will have a brief look at that too.
2020-03-03 10:00:36 +00:00
Sjoerd Meijer 0a9fc9233e [Driver] Default to -fno-common for all targets
This makes -fno-common the default for all targets because it has performance
and code-size benefits and is more conforming to the language standard for C code.
Additionally, GCC 10 also defaults to -fno-common, so we get consistent
behaviour with GCC.

With this change, C code that uses tentative definitions as definitions of a
variable in multiple translation units will trigger multiple-definition linker
errors. Generally, this occurs when the use of the extern keyword is neglected
in the declaration of a variable in a header file. In some cases, no specific
translation unit provides a definition of the variable. The previous behavior
can be restored by specifying -fcommon.

As GCC has switched already, we benefit from applications already being ported
and existing documentation how to do this. For example:
- https://gcc.gnu.org/gcc-10/porting_to.html
- https://wiki.gentoo.org/wiki/Gcc_10_porting_notes/fno_common
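
As a concrete sketch of the pattern that now fails to link (hypothetical file names):

```c
/* shared.h */
int counter;            /* tentative definition; 'extern' was forgotten */

/* a.c */
#include "shared.h"
void inc(void) { counter++; }

/* b.c */
#include "shared.h"
int main(void) { return counter; }

/* With -fcommon the tentative definitions were merged by the linker; with
   the new -fno-common default, linking a.o and b.o reports a
   multiple-definition error. Declare 'extern int counter;' in the header
   and define it in exactly one .c file, or pass -fcommon to restore the
   old behaviour. */
```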

Differential revision: https://reviews.llvm.org/D75056
2020-03-03 09:15:07 +00:00
Sanjay Patel 1e308452bf [CodeGen] avoid running the entire optimizer pipeline in clang test file; NFC
I'm making the CHECK lines vague enough that they pass at -O0.
If that is too vague (we really want to check the data flow
to verify that the variables are not mismatched, etc), then
we can adjust those lines again to more closely match the output
at -O0 rather than -O1.

This change is based on the post-commit comments for:
83f4372f3a
http://lists.llvm.org/pipermail/cfe-commits/Week-of-Mon-20200224/307888.html
2020-03-02 09:47:32 -05:00
Sanjay Patel 8cdcbcaa02 [CodeGen] avoid running the entire optimizer pipeline in clang test file; NFC
There are no failures from the first set of RUN lines here,
so the CHECKs were already vague enough to not be affected
by optimizations. The final RUN line does induce some kind
of failure, so I'll try to fix that separately in a
follow-up.
2020-03-02 09:12:53 -05:00
Luke Geeson 7d594cf003 [ARM] Add Cortex-M55 Support for clang and llvm
This patch upstreams support for the Arm Armv8.1-M CPU Cortex-M55.

In detail, this adds support for:

 - mcpu option in clang
 - Arm Target Features in clang
 - llvm Arm TargetParser definitions

Details of the CPU can be found here:
https://developer.arm.com/ip-products/processors/cortex-m/cortex-m55

Reviewers: chill

Reviewed By: chill

Subscribers: dmgreen, kristof.beyls, hiraditya, cfe-commits,
llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74966
2020-03-02 11:42:26 +00:00
Simon Tatham 1a8cbfa514 [ARM,MVE] Add ACLE intrinsics for VCVT[ANPM] family.
Summary:
These instructions convert a vector of floats to a vector of integers
of the same size, with assorted non-default rounding modes.
Implemented in IR as target-specific intrinsics, because as far as I
can see there are no matches for that functionality in the standard IR
intrinsics list.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75255
2020-03-02 10:33:30 +00:00
Simon Tatham b08d2ddd69 [ARM,MVE] Add ACLE intrinsics for VCVT.F32.F16 family.
Summary:
These instructions make a vector of `<4 x float>` by widening every
other lane of a vector of `<8 x half>`.

I wondered about representing these using standard IR, along the lines
of a shufflevector to extract elements of the input into a `<4 x half>`
followed by an `fpext` to turn that into `<4 x float>`. But it looks as
if that would take a lot of work in isel lowering to make it match any
pattern I could sensibly write in Tablegen, and also I haven't been
able to think of any other case where that pattern might be generated
in IR, so there wouldn't be any extra code generation win from doing
it that way.

Therefore, I've just used another target-specific intrinsic. We can
always change it to the other way later if anyone thinks of a good
reason.

(In order to put the intrinsic definition near similar things in
`IntrinsicsARM.td`, I've also lifted the definition of the
`MVEMXPredicated` multiclass higher up the file, without changing it.)

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75254
2020-03-02 10:33:30 +00:00
Simon Tatham a41ecf0eb0 [ARM,MVE] Add ACLE intrinsics for VQMOV[U]N family.
Summary:
These instructions work like VMOVN (narrowing a vector of wide values
to half size, and overwriting every other lane of an output register
with the result), except that the narrowing conversion is saturating.
They come in three signedness flavours: signed to signed, unsigned to
unsigned, and signed to unsigned. All are represented in IR by a
target-specific intrinsic that takes two separate 'unsigned' flags.

Reviewers: MarkMurrayARM, dmgreen, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75252
2020-03-02 10:33:30 +00:00
Simon Pilgrim 7e9747b50b [X86][F16C] Remove cvtph2ps intrinsics and use generic half2float conversion (PR37554)
This removes everything but int_x86_avx512_mask_vcvtph2ps_512 which provides the SAE variant, but even this can use the fpext generic if the rounding control is the default.

Differential Revision: https://reviews.llvm.org/D75162
2020-02-29 18:57:35 +00:00
Simon Pilgrim bfa0aaf37f [AVX512] Add strict-fp cvtph2ps constrained tests
As suggested on D75162
2020-02-28 16:55:00 +00:00
Simon Pilgrim a06402cc69 [F16C] Add strict-fp constrained tests
As suggested on D75162
2020-02-28 16:55:00 +00:00
Simon Moll ddd11273d9 Remove BinaryOperator::CreateFNeg
Use UnaryOperator::CreateFNeg instead.

Summary:
With the introduction of the native fneg instruction, the
fsub -0.0, %x idiom is obsolete. This patch makes LLVM
emit fneg instead of the idiom in all places.

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D75130
2020-02-27 09:06:03 -08:00
Dan Gohman 00072c08c7 [WebAssembly] Mangle the argc/argv `main` as `__main_argc_argv`.
WebAssembly enforces a rule that caller and callee signatures must
match. This means that the traditional technique of passing `main` its
`argc` and `argv` arguments even when it doesn't need them doesn't work.

Currently the backend renames `main` to `__original_main`; however, this
doesn't interact well with LTO'ing libc, and the name isn't intuitive.
This patch allows us to transition to `__main_argc_argv` instead.

This implements the proposal in
https://github.com/WebAssembly/tool-conventions/pull/134
with a flag to disable it when targeting Emscripten, though this is
expected to be temporary, as discussed in the proposal comments.

Differential Revision: https://reviews.llvm.org/D70700
2020-02-27 07:55:36 -08:00
Simon Tatham 8c26f42fe9 [clang,ARM,MVE] Remove redundant #includes in test file.
I made that file by pasting together several pieces, and forgot to
take out the #include <arm_mve.h> from the tops of the later ones, so
the test was pointlessly including the same header five times. NFC.
2020-02-27 09:39:35 +00:00
Simon Tatham 9eb3cc10b2 [ARM,MVE] Add predicated intrinsics for many unary functions.
Summary:
This commit adds the predicated MVE intrinsics for the same set of
unary operations that I added in their unpredicated forms in

* D74333 (vrint)
* D74334 (vrev)
* D74335 (vclz, vcls)
* D74336 (vmovl)
* D74337 (vmovn)

but since the predicated versions are a lot more similar to each
other, I've kept them all together in a single big patch. Everything
here is done in the standard way we've been doing other predicated
operations: an IR intrinsic called `@llvm.arm.mve.foo.predicated` and
some isel rules that match that alongside whatever they accept for the
unpredicated version of the same instruction.

In order to write the isel rules conveniently, I've refactored the
existing isel rules for the affected instructions into multiclasses
parametrised by a vector-type class, in the usual way. All those
refactorings are intended to leave the existing isel rules unchanged:
the only difference should be that new ones for the predicated
intrinsics are introduced.

The only tiny infrastructure change I needed in this commit was to
change the implementation of `IntrinsicMX` in `arm_mve_defs.td` so
that the records it defines are anonymous rather than named (and use
`NameOverride` to set the output intrinsic name), which allows me to
call it twice in two multiclasses with the same `NAME` without a
tablegen-time error.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75165
2020-02-26 15:12:07 +00:00
Rong Xu 11857d4994 [remark][diagnostics] [codegen] Fix PR44896
This patch fixes PR44896. For IR input files, option fdiscard-value-names
should be ignored as we need named values in loadModule().
Commit 60d3947922 sets this option after loadModule(), where value names
have already been created. This creates an inconsistent state in setNameImpl()
that leads to a seg fault.
This patch forces fdiscard-value-names to be false for IR input files.

This patch also emits a warning of "ignoring -fdiscard-value-names" if the
option fdiscard-value-names is explicitly enabled on the command line for
IR input files.

Differential Revision: https://reviews.llvm.org/D74878
2020-02-25 08:15:17 -08:00
Benjamin Kramer fc466f8780 Make test not write to the source directory 2020-02-25 16:03:06 +01:00
Sanjay Patel 83f4372f3a [CodeGen] fix clang test that runs the optimizer pipeline; NFC
There's already a FIXME note on this file; it can break when the
underlying LLVM behavior changes independently of anything in clang.
2020-02-25 09:13:49 -05:00
Bill Wendling 50cac24877 Support output constraints on "asm goto"
Summary:
Clang's "asm goto" feature didn't initially support outputs constraints. That
was the same behavior as gcc's implementation. The decision by gcc not to
support outputs was based on a restriction in their IR regarding terminators.
LLVM doesn't restrict terminators from returning values (e.g. 'invoke'), so
it made sense to support this feature.

Output values are valid only on the 'fallthrough' path. If an output value's used
on an indirect branch, then it's 'poisoned'.

In theory, outputs *could* be valid on the 'indirect' paths, but it's very
difficult to guarantee that the original semantics would be retained. E.g.
because indirect labels could be used as data, we wouldn't be able to split
critical edges in situations where two 'callbr' instructions have the same
indirect label, because the indirect branch's destination would no longer be
the same.
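
A hypothetical example of the new capability (assembly template and names are made up): the output is only meaningful on the fallthrough path, and is poisoned on the indirect branch.

```c
int read_port(int port) {
  int y;
  asm goto("# read %1 into %0, branch to %l2 on failure"
           : "=r"(y)       /* output constraint on asm goto, now allowed */
           : "r"(port)
           :
           : err);
  return y;   /* fallthrough path: y is valid */
err:
  return -1;  /* indirect path: y is poisoned */
}
```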

Reviewers: jyknight, nickdesaulniers, hfinkel

Reviewed By: jyknight, nickdesaulniers

Subscribers: MaskRay, rsmith, hiraditya, llvm-commits, cfe-commits, craig.topper, rnk

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69876
2020-02-24 18:51:29 -08:00
Craig Topper 727328433a [X86] Add back fmaddsub intrinsics to work towards fixing the strict fp implementation
Previously we emitted an fmadd and a fmadd+fneg and combined them with a shufflevector. But this doesn't follow the correct exception behavior for unselected elements so the backend can't merge them into the fmaddsub/fmsubadd instructions.

This patch restores the fmaddsub intrinsics so we don't have two arithmetic operations. We lose out on an optimization opportunity in the non-strict FP case, but I don't think this is a big loss. If someone gives us a test case we can look into adding instcombine/dagcombine improvements. I'd rather not have the frontend do completely different things for strict and non-strict.

This still has problems because target specific intrinsics don't support strict semantics yet. We also still have all of the problems with masking. But we at least generate the right instruction in constrained mode now.

Differential Revision: https://reviews.llvm.org/D74268
2020-02-24 12:07:21 -08:00
Xiangling Liao 8bee52bdb5 [AIX][Frontend] C++ ABI customizations for AIX boilerplate
This patch enables the "XL" C++ ABI in frontend AST-to-IR codegen, driven by the
static initialization work. The current default ABI kind in Clang is Generic Itanium,
which behaves differently for static initialization than the IBM xlclang compiler on AIX.

Differential Revision: https://reviews.llvm.org/D74015
2020-02-24 10:26:51 -05:00
Fangrui Song fc6057e34f [Frontend] Replace CC1 option -mcode-model with -mcmodel=
Before:

% clang -mcmodel=x -xc /dev/null
error: invalid argument 'x' in '-mcode-model x'

Now:

% clang -mcmodel=x -xc /dev/null
clang-11: error: invalid argument 'x' to -mcmodel=
2020-02-21 23:10:50 -08:00
Luís Marques 0781e93a6e [CodeGen][RISCV] Fix clang/test/CodeGen/atomic_ops.c for RISC-V
By default the RISC-V target doesn't have the atomics standard extension
enabled. The first RUN line in `clang/test/CodeGen/atomic_ops.c` didn't
specify a target triple, which meant that on RISC-V Linux hosts it would
target RISC-V, but because it used clang cc1 we didn't get the toolchain
driver functionality to automatically turn on the extensions implied by
the target triple (riscv64-linux includes atomics). This would cause the
test to fail on RISC-V hosts.

This patch changes the test to have RUN lines for two explicit targets,
one with native atomics and one without. To work around FileCheck
limitations and more accurately match the output, some tests now have
separate prefixes for the two cases.

Reviewers: jyknight, eli.friedman, lenary, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D74847
2020-02-21 19:29:57 +00:00
Djordje Todorovic 2f215cf36a Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGfaff707db82d.
A failure found on an ARM 2-stage buildbot.
The investigation is needed.
2020-02-20 14:41:39 +01:00
Roman Lebedev 9ea5d17cc9 [Sema] Demote call-site-based 'alignment is a power of two' check for AllocAlignAttr into a warning
Summary:
As @rsmith notes in https://reviews.llvm.org/D73020#inline-672219
while that is certainly UB land, it may not be actually reachable at runtime, e.g.:
```
template<int N> void *make() {
  if ((N & (N-1)) == 0)
    return operator new(N, std::align_val_t(N));
  else
    return operator new(N);
}
void *p = make<7>();
```
and we shouldn't really error-out there.

That being said, I'm not really following the logic here.
Which of these cases should remain an error?

Reviewers: rsmith, erichkeane

Reviewed By: erichkeane

Subscribers: cfe-commits, rsmith

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73996
2020-02-20 16:39:26 +03:00
Mikhail Maltsev f4fd7dbf85 [ARM,MVE] Add vqdmull[b,t]q intrinsic families
Summary:
This patch adds two families of ACLE intrinsics: vqdmullbq and
vqdmulltq (including vector-vector and vector-scalar variants) and the
corresponding LLVM IR intrinsics llvm.arm.mve.vqdmull and
llvm.arm.mve.vqdmull.predicated.

Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74845
2020-02-20 10:51:19 +00:00
Reid Kleckner 0edb212925 [MS] Mark vectorcall FP and vector args inreg
This has no effect on how LLVM passes the arguments, but it prevents
rewriteWithInAlloca from thinking that these parameters should be part
of the inalloca pack.

Follow-up to D72114

Reviewed By: erichkeane

Differential Revision: https://reviews.llvm.org/D74452
2020-02-19 16:37:50 -08:00
Krzysztof Parzyszek b1d47467e2 [Hexagon] Change HVX vector predicate types from v512/1024i1 to v64/128i1
This commit removes the artificial types <512 x i1> and <1024 x i1>
from HVX intrinsics, and makes v512i1 and v1024i1 no longer legal on
Hexagon.

It may cause existing bitcode files to become invalid.

* Converting between vector predicates and vector registers must be
  done explicitly via vandvrt/vandqrt instructions (their intrinsics),
  i.e. (for 64-byte mode):
    %Q = call <64 x i1> @llvm.hexagon.V6.vandvrt(<16 x i32> %V, i32 -1)
    %V = call <16 x i32> @llvm.hexagon.V6.vandqrt(<64 x i1> %Q, i32 -1)

  The conversion intrinsics are:
    declare  <64 x i1> @llvm.hexagon.V6.vandvrt(<16 x i32>, i32)
    declare <128 x i1> @llvm.hexagon.V6.vandvrt.128B(<32 x i32>, i32)
    declare <16 x i32> @llvm.hexagon.V6.vandqrt(<64 x i1>, i32)
    declare <32 x i32> @llvm.hexagon.V6.vandqrt.128B(<128 x i1>, i32)
  They are all pure.

* Vector predicate values cannot be loaded/stored directly. This directly
  reflects the architecture restriction. Loading and storing or vector
  predicates must be done indirectly via vector registers and explicit
  conversions via vandvrt/vandqrt instructions.
2020-02-19 14:14:56 -06:00
Mikhail Maltsev 461fd94f00 [ARM,MVE] Fix predicate types of some intrinsics
Summary:
Some predicated MVE intrinsics return a vector with an element size
different from the input vector element size. In this case the
predicate type must correspond to the output vector type.

The following intrinsics use the incorrect predicate type:
* llvm.arm.mve.mull.int.predicated
* llvm.arm.mve.mull.poly.predicated
* llvm.arm.mve.vshll.imm.predicated

This patch fixes the issue.

Reviewers: simon_tatham, dmgreen, ostannard, MarkMurrayARM

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74838
2020-02-19 16:24:54 +00:00
Sander de Smalen 49b307e96d [AArch64][SVE] CodeGen of ACLE Builtin Types
Summary:
This patch adds codegen support for the ACLE builtin types added in:

  https://reviews.llvm.org/D62960

so that the ACLE builtin types are emitted as corresponding scalable
vector types in LLVM.

Reviewers: rsandifo-arm, rovka, rjmccall, efriedma

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, cfe-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74724
2020-02-19 12:10:47 +00:00
Oliver Stannard 78654e8511 Revert "Reland D74436 "Change clang option -ffp-model=precise to select ffp-contract=on"""
Reverting because this patch is causing ~20 llvm-test-suite failures on
a number of different bots:
* http://lab.llvm.org:8011/builders/clang-cmake-armv8-lld/builds/3366
* http://lab.llvm.org:8011/builders/clang-cmake-aarch64-lld/builds/8222
* http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux/builds/13275
* http://lab.llvm.org:8011/builders/clang-s390x-linux-lnt/builds/17213

This reverts commit cd2c5af6df.
2020-02-19 12:03:27 +00:00
Djordje Todorovic faff707db8 Reland "[DebugInfo] Enable the debug entry values feature by default"
Differential Revision: https://reviews.llvm.org/D73534
2020-02-19 11:12:26 +01:00
David Tenty 58817a0783 [clang][XCOFF] Indicate that XCOFF does not support COMDATs
Summary: XCOFF doesn't support COMDATs, so clang shouldn't emit them.

Reviewers: stevewan, sfertile, Xiangling_L

Reviewed By: sfertile

Subscribers: dschuff, aheejin, dexonsmith, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74631
2020-02-18 16:10:11 -05:00
Mikhail Maltsev 63809d365e [ARM,MVE] Add vbrsrq intrinsics family
Summary:
This patch adds a new MVE intrinsics family, `vbrsrq`: vector bit
reverse and shift right. The intrinsics are compiled into the VBRSR
instruction. Two new LLVM IR intrinsics were also added: arm.mve.vbrsr
and arm.mve.vbrsr.predicated.

Reviewers: simon_tatham, dmgreen, ostannard, MarkMurrayARM

Reviewed By: simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74721
2020-02-18 17:31:21 +00:00
Djordje Todorovic 2bf44d11cb Revert "Reland "[DebugInfo] Enable the debug entry values feature by default""
This reverts commit rGa82d3e8a6e67.
2020-02-18 16:38:11 +01:00
Melanie Blower cd2c5af6df Reland D74436 "Change clang option -ffp-model=precise to select ffp-contract=on""
Change clang option -ffp-model=precise, the default, to select ffp-contract=on.
The patch caused some problems for PowerPC but IBM has made
adjustments so I am resubmitting this patch. Additionally, Andy looked
at the performance regressions on LNT and it looks like a loop
unrolling decision that could be adjusted.

Reviewers: rjmccall, Andy Kaylor

Differential Revision: https://reviews.llvm.org/D74436
2020-02-18 06:55:36 -08:00
Djordje Todorovic a82d3e8a6e Reland "[DebugInfo] Enable the debug entry values feature by default"
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-18 14:41:08 +01:00
Simon Tatham c32af4447f [ARM,MVE] Add the vmovnbq,vmovntq intrinsic family.
Summary:
These are in some sense the inverse of vmovl[bt]q: they take a vector
of n wide elements and truncate each to half its width. So they only
write half a vector's worth of output data, and therefore they also
take an 'inactive' parameter to provide the other half of the data in
the output vector. So vmovnb overwrites the even lanes of 'inactive'
with the narrowed values from the main input, and vmovnt overwrites
the odd lanes.

LLVM had existing codegen which generates these MVE instructions in
response to IR that takes two vectors of wide elements, or two vectors
of narrow ones. But in this case, we have one vector of each. So my
clang codegen strategy is to narrow the input vector of wide elements
by simply reinterpreting it as the output type, and then we have two
narrow vectors and can represent the operation as a vector shuffle
that interleaves lanes from both of them.

Even so, not all the cases I needed ended up being selected as a
single MVE instruction, so I've added a couple more patterns that spot
combinations of the 'MVEvmovn' and 'ARMvrev32' SDNodes which can be
generated as a VMOVN instruction with operands swapped.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74337
2020-02-18 09:34:50 +00:00
Simon Tatham 5e97940cd2 [ARM,MVE] Add the vmovlbq,vmovltq intrinsic family.
Summary:
These intrinsics take a vector of 2n elements, and return a vector of
n wider elements obtained by sign- or zero-extending every other
element of the input vector. They're represented in IR as a
shufflevector that extracts the odd or even elements of the input,
followed by a sext or zext.

Existing LLVM codegen already matches this pattern and generates the
VMOVLB instruction (which widens the even-index input lanes). But no
existing isel rule was generating VMOVLT, so I've added some. However,
the new rules currently only work in little-endian MVE, because the
pattern they expect from isel lowering includes a bitconvert which
doesn't have the right semantics in big-endian.

The output of one existing codegen test is improved by those new
rules.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74336
2020-02-18 09:34:50 +00:00
Simon Tatham 68b49f7ef4 [ARM,MVE] Add intrinsics vclzq and vclsq.
Summary:
vclzq maps nicely to the existing target-independent @llvm.ctlz IR
intrinsic. But vclsq ('count leading sign bits') has no corresponding
target-independent intrinsic, so I've made up @llvm.arm.mve.vcls.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74335
2020-02-18 09:34:50 +00:00
Simon Tatham b6236e9479 [ARM,MVE] Add the vrev16q, vrev32q, vrev64q family.
Summary:
These intrinsics just reorder the lanes of a vector, so the natural IR
representation is as a shufflevector operation. Existing LLVM codegen
already recognizes those particular shufflevectors and generates the
MVE VREV instruction.

This commit adds the unpredicated forms only.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D74334
2020-02-18 09:34:50 +00:00
Simon Tatham c8b3196e54 [ARM,MVE] Add intrinsics for FP rounding operations.
Summary:
This adds the unpredicated forms of six different MVE intrinsics which
all round a vector of floating-point numbers to integer values,
leaving them still in FP format, differing only in rounding mode and
exception settings.

Five of them map to existing target-independent intrinsics in LLVM IR,
such as @llvm.trunc and @llvm.rint. The sixth, mapping to the `vrintn`
instruction, is done by inventing a target-specific intrinsic.

(`vrintn` behaves the same as `vrintx` in terms of the output value:
the side effects on the FPSCR flags are the only difference between
the two. But ACLE specifies separate user-callable intrinsics for the
two, so the side effects matter enough to make sure we generate the
right one of the two instructions in each case.)

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74333
2020-02-18 09:34:50 +00:00
Simon Tatham df3ed6c0fe [ARM,MVE] Add intrinsics for int <-> float conversion.
Summary:
This adds the unpredicated versions of the family of vcvtq intrinsics
that convert between a vector of floats and a vector of the same size
of integer. These are represented in IR using the standard fptosi,
fptoui, sitofp and uitofp operations, which existing LLVM codegen
already handles.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D74332
2020-02-18 09:34:50 +00:00
Simon Tatham 90dc78bc62 [ARM,MVE] Add intrinsics for abs, neg and not operations.
Summary:
This commit adds the unpredicated intrinsics for the unary operations
vabsq (absolute value), vnegq (arithmetic negation), vmvnq (bitwise
complement), vqabsq and vqnegq (saturating versions of abs and neg for
signed integers, in the sense that they give INT_MAX if an input lane
is INT_MIN).

This is done entirely in clang: all of these operations have existing
isel patterns and existing tests for them on the LLVM side, so I've
just made clang emit the same IR that those patterns already match.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D74331
2020-02-18 09:34:50 +00:00
Mikhail Maltsev 489f62e801 [ARM,MVE] Add vector-scalar intrinsics
Summary:
This patch adds vector-scalar variants to the following families of
MVE intrinsics:
* vaddq
* vsubq
* vmulq
* vqaddq
* vqsubq
* vhaddq
* vhsubq
* vqdmulhq
* vqrdmulhq

The vector-scalar variants perform a splat operation on the scalar
operand and then perform the same operations as their vector-vector
counterparts. Code generation is done accordingly (using LLVM IR 'insert'
and 'shuffle' operations which are later converted into an ARMvdup
SDNode).

Reviewers: simon_tatham, dmgreen, MarkMurrayARM, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74620
2020-02-17 17:47:05 +00:00
Mark de Wever 9658d895c8 [Sema] Adds the pointer-to-int-cast diagnostic
Converting a pointer to an integer whose result cannot be represented in the
integer type is undefined behavior in C and prohibited in C++. C++ already
has a diagnostic when casting. This adds a diagnostic for C.

Since this diagnostic uses the range of the conversion, it also modifies the
int-to-pointer-cast diagnostic to use a range.

Fixes PR8718: No warning on casting between pointer and non-pointer-sized int
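
A sketch of the case the new C diagnostic covers (hypothetical function name): on an LP64 target an 'int' cannot represent every pointer value, so the cast is diagnosed.

```c
int truncate_ptr(void *p) {
  return (int)p;  /* pointer-to-int cast to a type too small to hold the pointer */
}
```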

Differential Revision: https://reviews.llvm.org/D72231
2020-02-16 15:38:25 +01:00
Melanie Blower 9122b92f8e Revert "Reland D74436 "Change clang option -ffp-model=precise to select ffp-contract=on""
This reverts commit 0a1123eb43.
Want to revert this because it's causing trouble for PowerPC.
I also fixed test fp-model.c, which was looking for an incorrect error message.
2020-02-14 07:32:09 -08:00
Fangrui Song 0a1123eb43 Reland D74436 "Change clang option -ffp-model=precise to select ffp-contract=on"
Buildbot are failing with the current revert status. So reland with a
fix to fp-model.c
2020-02-13 16:22:03 -08:00
Melanie Blower 88ec01ca1b Revert "Revert "Revert "Change clang option -ffp-model=precise to select ffp-contract=on"""
This reverts commit abd09053bc.
It's causing internal buildbot fails on ppc

Conflicts:
	clang/lib/Driver/ToolChains/Clang.cpp
2020-02-13 15:06:12 -08:00
Erik Pilkington e26c24b849 Revert "[IRGen] Emit lifetime intrinsics around temporary aggregate argument allocas"
This reverts commit fafc6e4fdf.

Should fix ppc stage2 failure: http://lab.llvm.org:8011/builders/clang-ppc64be-linux-multistage/builds/23546

Conflicts:
	clang/lib/CodeGen/CGCall.cpp
2020-02-12 12:26:46 -08:00
Melanie Blower abd09053bc Revert "Revert "Change clang option -ffp-model=precise to select ffp-contract=on""
This reverts commit 99c5bcbce8.
Change clang option -ffp-model=precise to select ffp-contract=on
Including some small touch-ups to the original commit

Reviewers: rjmccall, Andy Kaylor

Differential Revision: https://reviews.llvm.org/D74436
2020-02-12 07:30:43 -08:00
Djordje Todorovic 97ed706a96 Revert "[DebugInfo] Enable the debug entry values feature by default"
This reverts commit rG9f6ff07f8a39.

Found a test failure on clang-with-thin-lto-ubuntu buildbot.
2020-02-12 11:59:04 +01:00
jasonliu 55e2678fcd [clang] Add -fignore-exceptions
Summary:

This is trying to implement the functionality proposed in:
http://lists.llvm.org/pipermail/cfe-dev/2017-April/053417.html
An exception can throw, but no cleanup is going to happen.
A module compiled with exceptions on can catch exceptions thrown
from a module compiled with -fignore-exceptions.

The use cases for enabling this option are:
1. Performance analysis of EH instrumentation overhead
2. The ability to QA non-EH functionality when EH functionality is not available.
3. User of EH enabled headers knows the calls won't throw in their program and
   wants the performance gain from ignoring EH construct.

The implementation tries to accomplish that by removing any landing-pad code
that might get generated.

Reviewed by: aaron.ballman

Differential Revision: https://reviews.llvm.org/D72644
2020-02-12 09:56:18 +00:00
Djordje Todorovic 9f6ff07f8a [DebugInfo] Enable the debug entry values feature by default
This patch enables the debug entry values feature.

  - Remove the (CC1) experimental -femit-debug-entry-values option
  - Enable it for x86, arm and aarch64 targets
  - Resolve the test failures
  - Leave the llc experimental option for targets that do not
    support the CallSiteInfo yet

Differential Revision: https://reviews.llvm.org/D73534
2020-02-12 10:25:14 +01:00
Reid Kleckner 2c6a3896ab Re-land "[MS] Overhaul how clang passes overaligned args on x86_32"
This brings back 2af74e27ed and reverts
eaabaf7e04.

The changes were correct, the code that was broken contained an ODR
violation that assumed that these types are passed equivalently:
  struct alignas(uint64_t) Wrapper { uint64_t P; };
  void f(uint64_t p);
  void f(Wrapper p);

MSVC does not pass them the same way, and so clang-cl should not pass
them the same way either.
2020-02-11 16:49:28 -08:00
Melanie Blower 99c5bcbce8 Revert "Change clang option -ffp-model=precise to select ffp-contract=on"
This reverts commit 3fcdf2fa94.
Sorry I was too hasty with my commit, I will review Andy's comments
and resubmit.
2020-02-11 14:20:00 -08:00
Melanie Blower 3fcdf2fa94 Change clang option -ffp-model=precise to select ffp-contract=on
Reviewers: rjmccall

Differential Revision: https://reviews.llvm.org/D74436
2020-02-11 14:07:10 -08:00
Ian Levesque 14f870366a [xray][clang] Always add xray-skip-entry/exit and xray-ignore-loops attrs
The function attributes xray-skip-entry, xray-skip-exit, and
xray-ignore-loops were only being applied if a function had an
xray-instrument attribute, but they should apply if xray is enabled
globally too.

Differential Revision: https://reviews.llvm.org/D73842
2020-02-11 14:00:41 -08:00
Krzysztof Parzyszek 57148e0379 [Hexagon] Fix ABI info for returning HVX vectors 2020-02-11 12:38:54 -06:00
Kai Nacke a5040d5ec9 [SystemZ] Disable vector ABI when using option -march=arch[8|9|10]
When specifying -march=arch[8|9|10], those CPU types do NOT support
the vector extension. In this case the vector ABI must be disabled.
The generated data layout should NOT contain 64-v128.

Reviewers: uweigand

Differential Revision: https://reviews.llvm.org/D74146
2020-02-10 04:14:05 -05:00
serge_sans_paille e67cbac812 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 10:42:45 +01:00
serge-sans-paille 4546211600 Revert "Support -fstack-clash-protection for x86"
This reverts commit 0fd51a4554.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/4354
2020-02-09 10:06:31 +01:00
serge_sans_paille 0fd51a4554 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 09:35:42 +01:00
serge-sans-paille 658495e6ec Revert "Support -fstack-clash-protection for x86"
This reverts commit e229017732.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-debian/builds/2604
http://lab.llvm.org:8011/builders/llvm-clang-win-x-aarch64/builds/4308
2020-02-08 14:26:22 +01:00
serge_sans_paille e229017732 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This a recommit of 39f50da2a3 with better option
handling and more portable testing

Differential Revision: https://reviews.llvm.org/D68720
2020-02-08 13:31:52 +01:00
Guillaume Chatelet d65bbf81f8 [clang] Add support for __builtin_memcpy_inline
Summary: This is a follow up on D61634 and the last step to implement http://lists.llvm.org/pipermail/llvm-dev/2019-April/131973.html
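
A usage sketch (hypothetical function name): the builtin behaves like __builtin_memcpy, except that the size must be a constant and the copy is always expanded inline rather than lowered to a libc call.

```c
void copy16(void *dst, const void *src) {
  __builtin_memcpy_inline(dst, src, 16);
}
```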

Reviewers: efriedma, courbet, tejohnson

Subscribers: hiraditya, cfe-commits, llvm-commits, jdoerfert, t.p.northover

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73543
2020-02-07 23:55:26 +01:00
Erik Pilkington fafc6e4fdf [IRGen] Emit lifetime intrinsics around temporary aggregate argument allocas
These temporaries are only used in the callee, and their memory can be reused
after the call is complete.

rdar://58552124

Differential revision: https://reviews.llvm.org/D74094
2020-02-07 14:39:31 -08:00
Nico Weber b03c3d8c62 Revert "Support -fstack-clash-protection for x86"
This reverts commit 4a1a0690ad.
Breaks tests on mac and win, see https://reviews.llvm.org/D68720
2020-02-07 14:49:38 -05:00
serge_sans_paille 4a1a0690ad Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This a recommit of 39f50da2a3 with correct option
flags set.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 19:54:39 +01:00
serge-sans-paille f6d98429fc Revert "Support -fstack-clash-protection for x86"
This reverts commit 39f50da2a3.

The -fstack-clash-protection is being passed to the linker too, which
is not intended.

Reverting and fixing that in a later commit.
2020-02-07 11:36:53 +01:00
Diogo Sampaio 9d869180c4 [ARM] Follow AACPS for preserving number of loads/stores of volatile bit-fields
Summary:
Following the AAPCS, every store to a volatile bit-field requires to generate one load of that field, even if all the bits are going to be replaced.
This patch allows the user to opt in to following such a rule.

AAPCS Release 2019Q1.1 (https://static.docs.arm.com/ihi0042/g/aapcs32.pdf)
section 8.1 Data Types, page 35, paragraph: Volatile bit-fields – preserving number and width of container accesses

```
When a volatile bit-field is written, and its container does not overlap with any non-bit-field member, its
container must be read exactly once and written exactly once using the access width appropriate to the
type of the container. The two accesses are not atomic.

```
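
A small sketch of the behavior the option enforces (hypothetical struct):

```c
struct S { volatile unsigned f : 8; unsigned other : 24; };

/* With the option enabled, this store performs exactly one load and one
   store of the 32-bit container, per the quoted AAPCS rule, even though
   every bit of f is replaced. */
void set(struct S *s) { s->f = 0x5a; }
```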

Reviewers: lebedev.ri, ostannard, jfb, eli.friedman

Reviewed By: jfb

Subscribers: rsmith, rjmccall, dexonsmith, kristof.beyls, jfb, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67399
2020-02-07 10:11:54 +00:00
serge_sans_paille 39f50da2a3 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 10:56:15 +01:00
Craig Topper 96400ae2a4 Recommit "[FPEnv][X86] Platform-specific builtin constrained FP enablement"
With REQUIRES: x86-register-target added to the tests.

Also remove some unneeded FIXMEs

But add a FIXME for bad IR generation for FMADDSUB/FMSUBADD with
constrained FP.

Original patch by Kevin P. Neal
2020-02-06 16:54:35 -08:00
Kevin P. Neal ad0e03fd4c Revert "[FPEnv][X86] Platform-specific builtin constrained FP enablement"
This reverts commit 208470dd5d.

Tests fail:
error: unable to create target: 'No available targets are compatible with triple "x86_64-apple-darwin"'

This happens on clang-hexagon-elf, clang-cmake-armv7-quick, and
clang-cmake-armv7-quick bots.

If anyone has any suggestions on why then I'm all ears.

Differential Revision: https://reviews.llvm.org/D73570

Revert "[FPEnv][X86] Speculative fix for failures introduced by eda495426."

This reverts commit 80e17e5fcc.

The speculative fix didn't solve the test failures on Hexagon, ARMv6, and
MSVC AArch64.
2020-02-06 19:17:14 -05:00
Kevin P. Neal 80e17e5fcc [FPEnv][X86] Speculative fix for failures introduced by eda495426.
Differential Revision: https://reviews.llvm.org/D73570
2020-02-06 15:28:36 -05:00
Kevin P. Neal 208470dd5d [FPEnv][X86] Platform-specific builtin constrained FP enablement
When constrained floating point is enabled the X86-specific builtins don't
use constrained intrinsics in some cases. Fix that.

Differential Revision: https://reviews.llvm.org/D73570
2020-02-06 14:20:44 -05:00
Mikhail Maltsev 2694cc3dca [ARM][MVE] Add fixed point vector conversion intrinsics
Summary:
This patch implements the following Arm ACLE MVE intrinsics:
* vcvtq_n_*
* vcvtq_m_n_*
* vcvtq_x_n_*

and two corresponding LLVM IR intrinsics:
* int_arm_mve_vcvt_fix (vcvtq_n_*)
* int_arm_mve_vcvt_fix_predicated (vcvtq_m_n_*, vcvtq_x_n_*)

Reviewers: simon_tatham, ostannard, MarkMurrayARM, dmgreen

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74134
2020-02-06 16:49:45 +00:00
Thomas Lively 8c3e6af71b [WebAssembly] Add experimental multivalue calling ABI
Summary:
For now, this ABI simply expands all possible aggregate arguments and
returns all possible aggregates directly. This ABI will change rapidly
as we prototype and benchmark a new ABI that takes advantage of
multivalue return and possibly other changes from the MVP ABI.

Reviewers: aheejin, dschuff

Subscribers: sbc100, jgravelle-google, sunfish, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72972
2020-02-04 21:09:49 -08:00
Matt Arsenault a3c814d234 Separately track input and output denormal mode
AMDGPU and x86 at least both have separate controls for whether
denormal results are flushed on output, and for whether denormals are
implicitly treated as 0 as an input. The current DAGCombiner use only
really cares about the input treatment of denormals.
2020-02-04 12:59:21 -05:00
Yonghong Song 9271cab270 [BPF] use base lvalue type for preserve_{struct,union}_access_index metadata
Linux commit
  1cf5b23988 (diff-289313b9fec99c6f0acfea19d9cfd949)
uses "#pragma clang attribute push (__attribute__((preserve_access_index)),
      apply_to = record)"
to apply CO-RE relocations to all records including the following pattern:
  #pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
  typedef struct {
    int a;
  } __t;
  #pragma clang attribute pop
  int test(__t *arg) { return arg->a; }

The current approach of using the struct type in the relocation record will
result in an anonymous struct, which makes later type matching difficult
in the BPF loader. In fact, the current BPF backend will fail the above
program with an assertion:
  clang: ../lib/Target/BPF/BPFAbstractMemberAccess.cpp:796: ...
    Assertion `TypeName.size()' failed.

The patch uses the base lvalue type of the "base" value to annotate the
preserve_{struct,union}_access_index intrinsics. In the above example,
the type will be "__t", which preserves the type name.

Differential Revision: https://reviews.llvm.org/D73900
2020-02-04 09:28:30 -08:00
Jonas Paulsson 563e84790f [SystemZ] Support -msoft-float
This is needed when building the Linux kernel.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D72189
2020-02-04 10:32:45 -05:00
Fangrui Song dbc96b518b Revert "[CodeGenModule] Assume dso_local for -fpic -fno-semantic-interposition"
This reverts commit 789a46f2d7.

Accidentally committed.
2020-02-03 10:09:39 -08:00
Fangrui Song 789a46f2d7 [CodeGenModule] Assume dso_local for -fpic -fno-semantic-interposition
Summary:
Clang -fpic defaults to -fno-semantic-interposition (GCC -fpic defaults
to -fsemantic-interposition).
Users need to specify -fsemantic-interposition to get semantic
interposition behavior.

Semantic interposition is currently a best-effort feature. There may
still be some cases where it is not handled well.

Reviewers: peter.smith, rnk, serge-sans-paille, sfertile, jfb, jdoerfert

Subscribers: dschuff, jyknight, dylanmckay, nemanjai, jvesely, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, arphaman, PkmX, jocewei, jsji, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73865
2020-02-03 09:52:48 -08:00
Simon Tatham 961530fdc9 [ARM,MVE] Fix vreinterpretq in big-endian mode.
Summary:
In big-endian MVE, the simple vector load/store instructions (i.e.
both contiguous and non-widening) don't all store the bytes of a
register to memory in the same order: it matters whether you did a
VSTRB.8, VSTRH.16 or VSTRW.32. Put another way, the in-register
formats of different vector types relate to each other in a different
way from the in-memory formats.

So, if you want to 'bitcast' or 'reinterpret' one vector type as
another, you have to carefully specify which you mean: did you want to
reinterpret the //register// format of one type as that of the other,
or the //memory// format?

The ACLE `vreinterpretq` intrinsics are specified to reinterpret the
register format. But I had implemented them as LLVM IR bitcast, which
is specified for all types as a reinterpretation of the memory format.
So a `vreinterpretq` intrinsic, applied to values already in registers,
would code-generate incorrectly if compiled big-endian: instead of
emitting no code, it would emit a `vrev`.

To fix this, I've introduced a new IR intrinsic to perform a
register-format reinterpretation: `@llvm.arm.mve.vreinterpretq`. It's
implemented by a trivial isel pattern that expects the input in an
MQPR register, and just returns it unchanged.

In the clang codegen, I only emit this new intrinsic where it's
actually needed: I prefer a bitcast wherever it will have the right
effect, because LLVM understands bitcasts better. So we still generate
bitcasts in little-endian mode, and even in big-endian when you're
casting between two vector types with the same lane size.
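
As a rough illustration (not from the patch), this is the kind of ACLE-level
reinterpret the change affects; the lane sizes differ, so the old bitcast-based
lowering emitted a spurious VREV in big-endian mode:

```
#include <arm_mve.h>

/* Reinterpret the register format of a vector of 32-bit words as bytes.
   After this change it compiles to no code in both endiannesses. */
int8x16_t as_bytes(uint32x4_t v) {
    return vreinterpretq_s8_u32(v);
}
```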

For testing, I've moved all the codegen tests of vreinterpretq out
into their own file, so that they can have a different set of RUN
lines to check both big- and little-endian.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73786
2020-02-03 11:20:06 +00:00
Simon Tatham f8d4afc49a [ARM,MVE] Add intrinsics for v[id]dupq and v[id]wdupq.
Summary:
These instructions generate a vector of consecutive elements starting
from a given base value and incrementing by 1, 2, 4 or 8. The `wdup`
versions also wrap the values back to zero when they reach a given
limit value. The instruction updates the scalar base register so that
another use of the same instruction will continue the sequence from
where the previous one left off.

At the IR level, I've represented these instructions as a family of
target-specific intrinsics with two return values (the constructed
vector and the updated base). The user-facing ACLE API provides a set
of intrinsics that throw away the written-back base and another set
that receive it as a pointer so they can update it, plus the usual
predicated versions.
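
A sketch of the two user-facing flavours, assuming the usual ACLE spellings
(illustrative only):

```
#include <arm_mve.h>

/* Value-only form: returns {base, base+4, base+8, base+12};
   the written-back base is discarded. */
uint32x4_t ramp(uint32_t base) {
    return vidupq_n_u32(base, 4);
}

/* Pointer form: the same vector, but *base is advanced so the next call
   continues the sequence. */
uint32x4_t ramp_wb(uint32_t *base) {
    return vidupq_wb_u32(base, 4);
}
```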

Because the intrinsics return two values (as do the underlying
instructions), the isel has to be done in C++.

This is the first family of MVE intrinsics that use the `imm_1248`
immediate type in the clang Tablegen framework, so naturally, I found
I'd given it the wrong C integer type. Also added some tests of the
check that the immediate has a legal value, because this is the first
time those particular checks have been exercised.

Finally, I also had to fix a bug in MveEmitter which failed an
assertion when I nested two `seq` nodes (the inner one used to extract
the two values from the pair returned by the IR intrinsic, and the
outer one put on by the predication multiclass).

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73357
2020-02-03 11:20:06 +00:00
Simon Tatham cf7e98e6f7 [ARM,MVE] Add intrinsics for vdupq.
Summary:
The unpredicated case of this is trivial: the clang codegen just makes
a vector splat of the input, and LLVM isel is already prepared to
handle that. For the predicated version, I've generated a `select`
between the same vector splat and the `inactive` input parameter, and
added new Tablegen isel rules to match that pattern into a predicated
`MVE_VDUP` instruction.
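
A minimal illustration of the two forms described above (not taken from the
patch):

```
#include <arm_mve.h>

/* Unpredicated: splat the scalar across all four lanes. */
int32x4_t splat(int32_t x) {
    return vdupq_n_s32(x);
}

/* Predicated: lanes where the predicate is false keep the value
   from 'inactive'. */
int32x4_t splat_m(int32x4_t inactive, int32_t x, mve_pred16_t p) {
    return vdupq_m_n_s32(inactive, x, p);
}
```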

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73356
2020-02-03 11:20:06 +00:00
Fangrui Song aed488e3a4 [Driver] Move -fsemantic-interposition decision from cc1 to driver
And add test/Driver/fsemantic-interposition.c
2020-02-02 20:45:29 -08:00
Fangrui Song 1acf129bcf [Frontend] Delete a redundant check of -pg for setFramePointer()
Driver errors if -fomit-frame-pointer is used together with -pg.
useFramePointerForTargetByDefault() returns true if -pg is specified.
=>
(!OmitFP && useFramePointerForTargetByDefault(Args, Triple)) is true
=>
We cannot get FramePointerKind::None
2020-02-01 00:29:29 -08:00
serge-sans-paille fd09f12f32 Implement -fsemantic-interposition
First attempt at implementing -fsemantic-interposition.

Rely on GlobalValue::isInterposable that already captures most of the expected
behavior.

Rely on a ModuleFlag to state whether we should respect SemanticInterposition or
not. The default remains no.

So this should be a no-op if -fsemantic-interposition isn't used; and if it is,
since isInterposable is already used by most optimisations, they should honor it
properly.

Note that it only impacts code compiled with -fPIC and without PIE.

Differential Revision: https://reviews.llvm.org/D72829
2020-01-31 14:02:33 +01:00
Craig Topper a10cec02f7 [X86] Improve X86 cmpps/cmppd/cmpss/cmpsd intrinsics with strictfp
The constrained fcmp intrinsics don't allow the TRUE/FALSE predicates.
Using them will assert. To work around this I'm emitting the old X86 specific intrinsics that were never removed from the backend when we switched to using fcmp in IR. We have no way to mark them as being strict, but that's true of all target specific intrinsics so doesn't seem like we need to solve that here.

I've also added support for selecting between signaling and quiet.

Still need to support SAE which will require using a target specific
intrinsic. Also need to fix masking to not use an AND instruction
after the compare.

Differential Revision: https://reviews.llvm.org/D72906
2020-01-29 15:52:11 -08:00
Sanne Wouda 2939fc13c8 [AArch64] Add IR intrinsics for sq(r)dmulh_lane(q)
Summary:
Currently, sqdmulh_lane and friends from the ACLE (implemented in arm_neon.h),
are represented in LLVM IR as a (by vector) sqdmulh and a vector of (repeated)
indices, like so:

   %shuffle = shufflevector <4 x i16> %v, <4 x i16> undef, <4 x i32> <i32 3, i32 3, i32 3, i32 3>
   %vqdmulh2.i = tail call <4 x i16> @llvm.aarch64.neon.sqdmulh.v4i16(<4 x i16> %a, <4 x i16> %shuffle)

When %v's values are known, the shufflevector is optimized away and we are no
longer able to select the lane variant of sqdmulh in the backend.

This defeats a (hand-coded) optimization that packs several constants into a
single vector and uses the lane intrinsics to reduce register pressure and
trade-off materialising several constants for a single vector load from the
constant pool, like so:

   int16x8_t v = {2,3,4,5,6,7,8,9};
   a = vqdmulh_laneq_s16(a, v, 0);
   b = vqdmulh_laneq_s16(b, v, 1);
   c = vqdmulh_laneq_s16(c, v, 2);
   d = vqdmulh_laneq_s16(d, v, 3);
   [...]

In one microbenchmark from libjpeg-turbo this accounts for a 2.5% to 4%
performance difference.

We could teach the compiler to recover the lane variants, but this would likely
require its own pass.  (Alternatively, "volatile" could be used on the constants
vector, but this is a bit ugly.)

This patch instead implements the following LLVM IR intrinsics for AArch64 to
maintain the original structure through IR optimization and into instruction
selection:
- sqdmulh_lane
- sqdmulh_laneq
- sqrdmulh_lane
- sqrdmulh_laneq.

These 'lane' variants need an additional register class.  The second argument
must be in the lower half of the 64-bit NEON register file, but only when
operating on i16 elements.

Note that the existing patterns for shufflevector and sqdmulh into sqdmulh_lane
(etc.) remain, so code that does not rely on NEON intrinsics to generate these
instructions is not affected.

This patch also changes clang to emit these IR intrinsics for the corresponding
NEON intrinsics (AArch64 only).

Reviewers: SjoerdMeijer, dmgreen, t.p.northover, rovka, rengolin, efriedma

Reviewed By: efriedma

Subscribers: kristof.beyls, hiraditya, jdoerfert, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71469
2020-01-29 13:25:23 +00:00
Sanne Wouda cbc45e4e75 Regenerate aarch64-neon-2velem.c CHECK lines 2020-01-29 13:03:27 +00:00
Hans Wennborg eaabaf7e04 Revert "[MS] Overhaul how clang passes overaligned args on x86_32"
It broke some Chromium tests, so let's revert until it can be fixed; see
https://crbug.com/1046362

This reverts commit 2af74e27ed.
2020-01-28 22:25:07 +01:00
Aaron Ballman 5547919280 Fix a crash when casting _Complex and ignoring the results.
Performing a cast where the result is ignored caused Clang to crash when
performing codegen for the conversion:

  _Complex int a;
  void fn1() { (_Complex double) a; }

This patch addresses the crash by not trying to emit the scalar conversions,
causing it to be a noop. Fixes PR44624.
2020-01-28 13:05:56 -05:00
Wang, Pengfei 3239b5034e [FPEnv] Add pragma FP_CONTRACT support under strict FP.
Summary: Support pragma FP_CONTRACT under strict FP.

Reviewers: craig.topper, andrew.w.kaylor, uweigand, RKSimon, LiuChen3

Subscribers: hiraditya, jdoerfert, cfe-commits, llvm-commits, LuoYuanke

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72820
2020-01-28 20:43:43 +08:00
Teresa Johnson 2f63d549f1 Restore "[LTO/WPD] Enable aggressive WPD under LTO option"
This restores 59733525d3 (D71913), along
with bot fix 19c76989bb.

The bot failure should be fixed by D73418, committed as
af954e441a.

I also added a fix for non-x86 bot failures by requiring x86 in new test
lld/test/ELF/lto/devirt_vcall_vis_public.ll.
2020-01-27 07:55:05 -08:00
Fangrui Song d600ab3bb5 [Frontend] Delete some unneeded CC1 options 2020-01-23 22:01:04 -08:00
Craig Topper a2137d6e09 [X86] Add -flax-vector-conversions=none to all of the x86 vector intrinsic header tests. 2020-01-23 20:43:50 -08:00
Teresa Johnson 90e630a95e Revert "[LTO/WPD] Enable aggressive WPD under LTO option"
This reverts commit 59733525d3.

There is a windows sanitizer bot failure in one of the cfi tests
that I will need some time to figure out:
http://lab.llvm.org:8011/builders/sanitizer-windows/builds/57155/steps/stage%201%20check/logs/stdio
2020-01-23 17:29:24 -08:00
Fangrui Song 69bf40c45f [Driver][CodeGen] Support -fpatchable-function-entry=N,M and __attribute__((patchable_function_entry(N,M))) where M>0
Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D73072
2020-01-23 17:02:54 -08:00
Teresa Johnson 59733525d3 [LTO/WPD] Enable aggressive WPD under LTO option
Summary:
Third part in series to support Safe Whole Program Devirtualization
Enablement, see RFC here:
http://lists.llvm.org/pipermail/llvm-dev/2019-December/137543.html

This patch adds type test metadata under -fwhole-program-vtables,
even for classes without hidden visibility. It then changes WPD to skip
devirtualization for a virtual function call when any of the compatible
vtables has public vcall visibility.

Additionally, internal LLVM options as well as lld and gold-plugin
options are added which enable upgrading all public vcall visibility
to linkage unit (hidden) visibility during LTO. This enables the more
aggressive WPD to kick in based on LTO time knowledge of the visibility
guarantees.

Support was added to all flavors of LTO WPD (regular, hybrid and
index-only), and to both the new and old LTO APIs.

Unfortunately it was not simple to split the first and second parts of
this part of the change (the unconditional emission of type tests and
the upgrading of the vcall visiblity) as I needed a way to upgrade the
public visibility on legacy WPD llvm assembly tests that don't include
linkage unit vcall visibility specifiers, to avoid a lot of test churn.

I also added a mechanism to LowerTypeTests that allows dropping type
test assume sequences we now aggressively insert when we invoke
distributed ThinLTO backends with null indexes, which is used in testing
mode, and which doesn't invoke the normal ThinLTO backend pipeline.

Depends on D71907 and D71911.

Reviewers: pcc, evgeny777, steven_wu, espindola

Subscribers: emaste, Prazek, inglorion, arichardson, hiraditya, MaskRay, dexonsmith, dang, davidxl, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71913
2020-01-23 16:09:44 -08:00
Reid Kleckner 2af74e27ed [MS] Overhaul how clang passes overaligned args on x86_32
MSVC 2013 would refuse to pass highly aligned things (typically vectors
and aggregates) by value. Users would receive this error:
  t.cpp(11) : error C2719: 'w': formal parameter with __declspec(align('32')) won't be aligned
  t.cpp(11) : error C2719: 'q': formal parameter with __declspec(align('32')) won't be aligned

However, in MSVC 2015, this behavior was changed, and highly aligned
things are now passed indirectly. To avoid breaking backwards
incompatibility, objects that do not have a *required* high alignment
(i.e. double) are still passed directly, even though they are not
naturally aligned. This change implements the new behavior of passing
things indirectly.

The new behavior is:
- up to three vector parameters can be passed in [XYZ]MM0-2
- remaining arguments with required alignment greater than 4 bytes are
  passed indirectly

Previously, MSVC never passed things truly indirectly, meaning clang
would always apply the byval attribute to indirect arguments. We had to
go to the trouble of adding inalloca so that non-trivially copyable C++
types could be passed in place without copying the object
representation. When inalloca was added, we asserted that all arguments
passed indirectly must use byval. With this change, that assert no
longer holds, and I had to update inalloca to handle that case. The
implicit sret pointer parameter was already handled this way, and this
change generalizes some of that logic to arguments.

There are two cases that this change leaves unfixed:
1. objects that are non-trivially copyable *and* overaligned
2. vectorcall + inalloca + vectors

For case 1, I need to touch C++ ABI code in MicrosoftCXXABI.cpp, so I
want to do it in a follow-up.

For case 2, my fix is one line, but it will require updating IR tests to
use lots of inreg, so I wanted to separate it out.

Related to D71915 and D72110

Fixes most of PR44395

Reviewed By: rjmccall, craig.topper, erichkeane

Differential Revision: https://reviews.llvm.org/D72114
2020-01-23 16:04:00 -08:00
Roman Lebedev 5ffe6408ff [Codegen] If reasonable, materialize clang's `AllocAlignAttr` as llvm's Alignment Attribute on call-site function return value
Summary:
Much like with the previous patch (D73005) with `AssumeAlignedAttr`
handling, results in mildly more readable IR,
and will improve test coverage in upcoming patch.

Note that in `AllocAlignAttr`'s case, there is no requirement
for that alignment parameter to end up being an I-C-E.

Reviewers: erichkeane, jdoerfert, hfinkel, aaron.ballman, rsmith

Reviewed By: erichkeane

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73006
2020-01-23 22:50:49 +03:00
Roman Lebedev e819f7c9fe [Codegen] If reasonable, materialize clang's `AssumeAlignedAttr` as llvm's Alignment Attribute on call-site function return value
Summary:
This should be mostly NFC - we still lower the same alignment
knowledge to the IR. The main reasoning here is that
this somewhat improves readability of IR like this,
and will improve test coverage in upcoming patch.

Even though the alignment is guaranteed to always be an I-C-E,
we don't always materialize it as llvm's Alignment Attribute because:
1. There may be a non-zero offset
2. We may be sanitizing for alignment

Note that if there already was an IR alignment attribute
on return value, we union them, and thus the alignment
only ever rises.

Also, there is a second relevant clang attribute `AllocAlignAttr`,
so that is why `AbstractAssumeAlignedAttrEmitter` is templated.

Reviewers: erichkeane, jdoerfert, hfinkel, aaron.ballman, rsmith

Reviewed By: erichkeane

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73005
2020-01-23 22:50:49 +03:00
Simon Tatham 98ea4b30c2 [ARM,MVE] Make the MVE intrinsics work in C++!
Summary:
Apparently nobody has tried this in months of development. It turns
out that `FunctionDecl::getBuiltinID` will never consider a function
to be a builtin if it is in C++ and not extern "C". So none of the
function declarations in <arm_mve.h> are recognized as builtins when
clang is compiling in C++ mode: it just emits calls to them as
ordinary functions, which then turn out not to exist at link time.

The trivial fix is to wrap most of arm_mve.h in an extern "C".
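
In outline, the header change amounts to the usual guard (a sketch, not the
literal diff):

```
/* arm_mve.h (sketch) */
#ifdef __cplusplus
extern "C" {
#endif

/* ... intrinsic declarations, now recognized as builtins in C++ too ... */

#ifdef __cplusplus
} /* extern "C" */
#endif
```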

Added a test in clang/test/CodeGen/arm-mve-intrinsics which checks
basic functioning of the MVE header file in C++ mode. I've filled it
with copies of existing test functions from other files in that
directory, including a few moderately tricky cases of overloading (in
particular one that relies on the strict-polymorphism attribute added
in D72518).

(I considered making //every// test in that directory compile in both
C and C++ mode and check the code generation was identical. But I
think that would increase testing time by more than the value it adds,
and also update_cc_test_checks gets confused when the output function
name varies between RUN lines.)

Reviewers: LukeGeeson, MarkMurrayARM, miyuki, dmgreen

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73268
2020-01-23 14:10:27 +00:00
Simon Tatham 4321c6af28 [ARM,MVE] Support immediate vbicq,vorrq,vmvnq intrinsics.
Summary:
Immediate vmvnq is code-generated as a simple vector constant in IR,
and left to the backend to recognize that it can be created with an
MVE VMVN instruction. The predicated version is represented as a
select between the input and the same constant, and I've added a
Tablegen isel rule to turn that into a predicated VMVN. (That should
be better than the previous VMVN + VPSEL: it's the same number of
instructions but now it can fold into an adjacent VPT block.)

The unpredicated forms of VBIC and VORR are done by enabling the same
isel lowering as for NEON, recognizing appropriate immediates and
rewriting them as ARMISD::VBICIMM / ARMISD::VORRIMM SDNodes, which I
then instruction-select into the right MVE instructions (now that I've
also reworked those instructions to use the same MC operand encoding).
In order to do that, I had to promote the Tablegen SDNode instance
`NEONvorrImm` to a general `ARMvorrImm` available in MVE as well, and
similarly for `NEONvbicImm`.

The predicated forms of VBIC and VORR are represented as a vector
select between the original input vector and the output of the
unpredicated operation. The main convenience of this is that it still
lets me use the existing isel lowering for VBICIMM/VORRIMM, and not
have to write another copy of the operand encoding translation code.

This intrinsic family is the first to use the `imm_simd` system I put
into the MveEmitter tablegen backend. So, naturally, it showed up a
bug or two (emitting bogus range checks and the like). Fixed those,
and added a full set of tests for the permissible immediates in the
existing Sema test.

Also adjusted the isel pattern for `vmovlb.u8`, which stopped matching
because lowering started turning its input into a VBICIMM. Now it
recognizes the VBICIMM instead.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72934
2020-01-23 11:53:52 +00:00
Russell Gallop 4662f6e1c7 [test] Avoid loop-unroll.c test getting confused by fadd in git revision
Differential Revision: https://reviews.llvm.org/D73162
2020-01-23 09:27:16 +00:00
Roman Lebedev 372cb38f45 [Codegen] Emit both AssumeAlignedAttr and AllocAlignAttr assumptions if they exist
Summary:
We shouldn't be just giving up if we find one of them
(like we currently do with `AssumeAlignedAttr`),
we should emit them all.

As the tests show, even if we materialized good knowledge
from `__attribute__((assume_aligned(32)))`, it doesn't mean
`__attribute__((alloc_align([...])))` info won't be useful.
It might be, but that isn't given.
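
For context, a hypothetical allocator declaration carrying both attributes
might look like this (illustrative only):

```
#include <stddef.h>

/* The result is assumed to be at least 32-byte aligned, and additionally
   aligned to the run-time value of the second parameter. */
__attribute__((assume_aligned(32), alloc_align(2)))
void *my_alloc(size_t size, size_t align);
```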

Reviewers: erichkeane, jdoerfert, aaron.ballman

Reviewed By: erichkeane

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72979
2020-01-21 21:18:27 +03:00
Kevin P. Neal 2e667d07c7 [FPEnv][SystemZ] Platform-specific builtin constrained FP enablement
When constrained floating point is enabled the SystemZ-specific builtins
don't use constrained intrinsics in some cases. Fix that.

Differential Revision: https://reviews.llvm.org/D72722
2020-01-21 12:44:39 -05:00
Krzysztof Parzyszek 305bf5b21d [Hexagon] Add support for Hexagon v67t microarchitecture (tiny core) 2020-01-21 11:35:10 -06:00
Diogo Sampaio 2147703bde Revert "[ARM] Follow AACPS standard for volatile bit-fields access width"
This reverts commit 6a24339a45.
Submitted using ide button by mistake
2020-01-21 15:31:33 +00:00
Diogo Sampaio 6a24339a45 [ARM] Follow AACPS standard for volatile bit-fields access width
Summary:
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In such a case:
```
struct S1 {
  short a : 1;
}
```
should be accessed using loads and stores of the width
(sizeof(short)), whereas the compiler currently only loads
the minimum required width (char in this case).
However, as discussed in D16586,
that could overwrite non-volatile bit-fields, which
conflicted with C and C++ object models by creating
data race conditions that are not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
}
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2019Q1.1
(https://static.docs.arm.com/ihi0042/g/aapcs32.pdf)
section 8.1 Data Types, page 35, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
Now it reads in the note:
```
This ABI does not place any restrictions on the access widths
of bit-fields where the container overlaps with a non-bit-field member.
 This is because the C/C++ memory model defines these as being separate
memory locations, which can be accessed by two threads
 simultaneously. For this reason, compilers must be permitted to use a
narrower memory access width (including splitting the access
 into multiple instructions) to avoid writing to a different memory location.
```

I've updated the patch D16586 to follow such behavior by verifying that we
only change volatile bit-field access when:
 - it won't overlap with any other non-bit-field member
 - we only access memory inside the bounds of the record

As for the number of memory accesses, which should also be preserved, that will
be implemented by D67399.

Reviewers: rsmith, rjmccall, eli.friedman, ostannard

Subscribers: ostannard, kristof.beyls, cfe-commits, carwil, olista01

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72932
2020-01-21 15:23:38 +00:00
Zakk Chen e15fb06e2d [RISCV] Pass target-abi via module flag metadata
Reviewers: lenary, asb

Reviewed By: lenary

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72755
2020-01-20 23:30:54 -08:00
Krzysztof Parzyszek c12a5917d2 [Hexagon] Add support for Hexagon/HVX v67 ISA 2020-01-20 16:16:49 -06:00
Mark Murray b10a0eb04a [ARM][MVE][Intrinsics] Take abs() of VMINNMAQ, VMAXNMAQ intrinsics' first arguments.
Summary: Fix VMINNMAQ, VMAXNMAQ intrinsics; BOTH arguments have the absolute values taken.

Reviewers: dmgreen, simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72830
2020-01-20 14:33:26 +00:00
Ian Levesque 97ba483026 [xray] Allow instrumenting only function entry and/or only function exit
Extend -fxray-instrumentation-bundle to split function-entry and
function-exit into two separate options, so that it is possible to
instrument only function entry or only function exit.  For use cases
that only care about one or the other this will save significant overhead
and code size.

Differential Revision: https://reviews.llvm.org/D72890
2020-01-17 13:32:34 -08:00
Ian Levesque 1d62be2441 [clang][xray] Add -fxray-ignore-loops option
XRay allows tuning by minimum function size, but also always instruments
functions with loops in them. If the minimum function size is set to a
large value, the loop instrumentation ends up causing most functions to be
instrumented anyway. This adds a new flag, -fxray-ignore-loops, to disable
the loop detection logic.

Differential Revision: https://reviews.llvm.org/D72873
2020-01-17 13:32:24 -08:00
Adrian Prantl 7b30370e5b Move the sysroot attribute from DIModule to DICompileUnit
[this re-applies c0176916a4
 with the correct commit message and phabricator link]

This addresses point 1 of PR44213.
https://bugs.llvm.org/show_bug.cgi?id=44213

The DW_AT_LLVM_sysroot attribute is used for Clang module debug info,
to allow LLDB to import a Clang module from source. Currently it is
part of each DW_TAG_module, however, it is the same for all modules in
a compile unit. It is more efficient and less ambiguous to store it
once in the DW_TAG_compile_unit.

This should have no effect on DWARF consumers other than LLDB.

Differential Revision: https://reviews.llvm.org/D71732
2020-01-17 12:55:40 -08:00
Adrian Prantl c17aee67f1 Revert "Rename DW_AT_LLVM_isysroot to DW_AT_LLVM_sysroot"
This reverts commit 12e479475a.

I accidentally landed this patch with the wrong commit message ...
2020-01-17 12:52:36 -08:00
Alina Sbirlea 90bdb03727 Update clang test. 2020-01-17 11:08:59 -08:00
Sanne Wouda ecfd6d3e84 [clang] Set function attributes on SEH filter functions correctly.
Summary:
When compiling with -munwind-tables, the SEH filter funclet needs the uwtable
function attribute, which gets automatically added if we use
SetInternalFunctionAttributes.  The filter funclet is internal so this seems
appropriate.

Reviewers: rnk

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72786
2020-01-17 18:09:42 +00:00
Fangrui Song 932b5d6fca [test] Fix tests after D52810 2020-01-17 10:02:56 -08:00
Adrian Prantl 12e479475a Rename DW_AT_LLVM_isysroot to DW_AT_LLVM_sysroot
This is a purely cosmetic change that is NFC in terms of the binary
output. It bugs me that I called the attribute DW_AT_LLVM_isysroot
since the "i" is an artifact of GCC command line option syntax
(-isysroot is in the category of -i options) and doesn't carry any
useful information otherwise.

This attribute only appears in Clang module debug info.

Differential Revision: https://reviews.llvm.org/D71722
2020-01-17 09:36:48 -08:00
serge-sans-paille d293417931 Add __warn_memset_zero_len builtin as a workaround for glibc issue
Glibc issue: https://sourceware.org/bugzilla/show_bug.cgi?id=25399
The fix consists of treating the missing function as a builtin lowered to a nop.

Differential Revision: https://reviews.llvm.org/D72869
2020-01-17 09:58:32 +01:00
serge-sans-paille d437fba8ef Reapply Allow system header to provide their own implementation of some builtin
This reverts commit 3d210ed3d1.

See https://reviews.llvm.org/D71082 for the patch and discussion that make it
possible to reapply this patch.
2020-01-17 09:58:32 +01:00
Krzysztof Parzyszek 6f3effbbf0 [Hexagon] Update autogenerated intrinsic info in clang
In addition to that, use target features to validate intrinsic
availability on a given target.
2020-01-16 14:20:12 -06:00
Amy Huang 3d210ed3d1 Revert "Allow system header to provide their own implementation of some builtin"
This reverts commit 921f871ac4 because it
causes libc++ code to trigger __warn_memset_zero_len.

See https://reviews.llvm.org/D71082.
2020-01-15 15:03:45 -08:00
Mark Murray da9d57d2c2 [ARM][MVE][Intrinsics] Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics.
Summary: Add VMINAQ, VMINNMAQ, VMAXAQ, VMAXNMAQ intrinsics and unit tests.

Reviewers: simon_tatham, miyuki, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72761
2020-01-15 17:20:15 +00:00
Teresa Johnson 76b92cc7c1 Fix bot by adjusting wildcard matching
I noticed one bot failure due to
24a00ef240 because the wildcard matching
was not working as intended; fixed it to act similarly to other checks of
CGSCCToFunctionPassAdaptor.
2020-01-15 08:37:15 -08:00
Teresa Johnson 24a00ef240 Restore "[ThinLTO] Add additional ThinLTO pipeline testing with new PM"
This restores 2af97be802 (reverted at
6288f86e87), with all the fixes I had
applied at the time, along with a new fix for non-determinism in the
ordering of a couple of passes due to being accessed as parameters on
the same call.

I've also added --dump-input=fail to the new tests so I can more
thoroughly fix any additional failures.
2020-01-15 07:33:08 -08:00
Reid Kleckner 8e780252a7 [X86] ABI compat bugfix for MSVC vectorcall
Summary:
Before this change, X86_32ABIInfo::classifyArgument would be called
twice on vector arguments to vectorcall functions. This function has
side effects to track GPR register usage, and this would lead to
incorrect GPR usage in some cases.  The specific case I noticed is from
running out of XMM registers with mixed FP and vector arguments and no
aggregates of any kind. Consider this prototype:

  void __vectorcall vectorcall_indirect_vec(
      double xmm0, double xmm1, double xmm2, double xmm3, double xmm4,
      __m128 xmm5,
      __m128 ecx,
      int edx,
      __m128 mem);

classifyArgument has no effects when called on a plain FP type, but when
called on a vector type, it modifies FreeRegs to model GPR consumption.
However, this should not happen during the vector call first pass.

I refactored the code to unify vectorcall HVA logic with regcall HVA
logic. The conventions pass HVAs in registers differently (expanded vs.
not expanded), but if they do not fit in registers, they both pass them
indirectly by address.

Reviewers: erichkeane, craig.topper

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72110
2020-01-14 17:49:13 -08:00
Rong Xu c9ee5e996e Fix windows bot failures in c410adb092c9cb51ddb0b55862b70f2aa8c5b16f
(clang diagnostic handler for IR input files)
2020-01-14 16:32:17 -08:00
Rong Xu 60d3947922 [remark][diagnostics] Using clang diagnostic handler for IR input files
For IR input files, we currently use the LLVM diagnostic handler even when the
compilation is from clang. As a result, we are not able to use -Rpass
to get the transformation reports. Some warnings are not handled
properly either: We found many mysterious warnings in our ThinLTO backend
compilations in SamplePGO and CSPGO. An example of the warning:
"warning: net/proto2/public/metadata_lite.h:51:21: 0.02% (1 / 4999)"

This turns out to be a warning by Wmisexpect, which is supposed to be
filtered out by default. But since the filter is in clang's
diagnostic handler, we emit these incomplete warnings from LLVM's
diagnostic handler.

This patch uses clang diagnostic handler for IR input files. We create
a fake backendconsumer just to install the diagnostic handler.

With this change, we will have proper handling of all the warnings and we can
use -Rpass* options in IR input files compilation.
Also note that with this patch, LLVM's diagnostic options, like
"-mllvm -pass-remarks=*", are no longer able to produce optimization remarks.

Differential Revision: https://reviews.llvm.org/D72523
2020-01-14 15:44:57 -08:00
James Clarke 3d6c492d7a [RISCV] Fix ILP32D lowering for double+double/double+int return types
Summary:
Previously, since these aggregates are > 2*XLen, Clang would think they
were being returned indirectly and thus would decrease the number of
available GPRs by 1. For long argument lists this could lead
to a struct argument incorrectly being passed indirectly.

Reviewers: asb, lenary

Reviewed By: asb, lenary

Subscribers: luismarques, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, lenary, s.egerton, pzheng, sameer.abuasal, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D69590
2020-01-14 11:17:19 +00:00
Teresa Johnson 6288f86e87 Revert "[ThinLTO] Add additional ThinLTO pipeline testing with new PM"
This reverts commit 2af97be802.

After attempting to fix bot failures from matching issues (mostly due to
inconsistent printing of "llvm::" prefixes on objects, and
AnalysisManager objects being printed differently), I am now seeing some
differences I don't understand (real differences in the passes being
printed). Giving up at this point to allow the bots to recover. Will
revisit later.
2020-01-13 11:01:48 -08:00
Teresa Johnson cb988a858a Add a couple of missed wildcards in debug-pass-manager output checking
Along with the previous fix for bot failures from
2af97be802, need to add a wildcard in a
couple of places where my local output did not print "llvm::" but the
bot is.
2020-01-13 10:49:40 -08:00
Teresa Johnson 7aed43b607 Hopefully last fix for bot failures
Hopefully final bot fix for last few failures from
2af97be802.

Looks like sometimes the "llvm::" preceeding objects get printed in the
debug pass manager output and sometimes they don't. Replace with
wildcard matching.
2020-01-13 10:34:54 -08:00
Teresa Johnson 292562c004 Try number 2 for fixing bot failures
Additional fixes for bot failures from 2af97be802.
Remove more exact matching on AnalyisManagers, as they can vary.
Also allow different orders between LoopAnalysis and
BranchProbabilityAnalysis as that can vary due to both being accessed in
the parameter list of a call.
2020-01-13 10:12:35 -08:00
Teresa Johnson bb2e5f5e45 Fix tests for buildbot failures
Should fix most of the buildbot failures from
2af97be802, by loosening up the matching
on the AnalysisProxy output.

Added in --dump-input=fail on the one test that appears to be something
different, so I can hopefully debug it better.
2020-01-13 09:28:13 -08:00
Teresa Johnson 2af97be802 [ThinLTO] Add additional ThinLTO pipeline testing with new PM
Summary:
I've added some more extensive ThinLTO pipeline testing with the new PM,
motivated by the bug fixed in D72386.

I beefed up llvm/test/Other/new-pm-pgo.ll a little so that it tests
ThinLTO pre and post link with PGO, similar to the testing for the
default pipelines with PGO.

Added new pre and post link PGO tests for both instrumentation and
sample PGO that exhaustively test the pipelines at different
optimization levels via opt.

Added a clang test to exhaustively test the post link pipeline invoked for
distributed builds. I am currently only testing O2 and O3 since these
are the most important for performance.

It would be nice to add similar exhaustive testing for full LTO, and for
the old PM, but I don't have the bandwidth now and this is a start to
cover some of the situations that are not currently default and were
under tested.

Reviewers: wmi

Subscribers: mehdi_amini, inglorion, hiraditya, steven_wu, dexonsmith, jfb, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72538
2020-01-13 08:29:56 -08:00
Nico Weber 44e0daf16e driver: Allow -fdebug-compilation-dir=foo in joined form.
All 130+ f_Group flags that take an argument allow it after a '=',
except for fdebug-compilation-dir. Add a Joined<> alias so that
it behaves consistently with all the other f_Group flags.
(Keep the old Separate flag for backwards compat.)
2020-01-10 19:20:51 -05:00
Fangrui Song f17ae668a9 [Driver][CodeGen] Add -fpatchable-function-entry=N[,0]
In the backend, this feature is implemented with the function attribute
"patchable-function-entry". Both the attribute and XRay use
TargetOpcode::PATCHABLE_FUNCTION_ENTER, so the two features are
incompatible.

Reviewed By: ostannard, MaskRay

Differential Revision: https://reviews.llvm.org/D72222
2020-01-10 09:57:39 -08:00
Fangrui Song a44c434b68 Support function attribute patchable_function_entry
This feature is generic. Make it applicable for AArch64 and X86 because
the backend has only implemented NOP insertion for AArch64 and X86.
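
Usage looks like this, with N total NOPs of which M precede the entry label
(illustrative):

```
/* N = 2 NOPs in total, M = 1 of them before the function entry label. */
__attribute__((patchable_function_entry(2, 1)))
void traced(void) {}
```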

Reviewed By: nickdesaulniers, aaron.ballman

Differential Revision: https://reviews.llvm.org/D72221
2020-01-10 09:57:34 -08:00
Andrew Paverd bdd88b7ed3 Add support for __declspec(guard(nocf))
Summary:
Avoid using the `nocf_check` attribute with Control Flow Guard. Instead, use a
new `"guard_nocf"` function attribute to indicate that checks should not be
added on indirect calls within that function. Add support for
`__declspec(guard(nocf))` following the same syntax as MSVC.
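
An illustrative use of the declspec (hypothetical functions, assuming Control
Flow Guard is enabled for the translation unit):

```
/* Indirect calls here get the usual Control Flow Guard check. */
void checked(void (*fp)(void)) { fp(); }

/* Checks are suppressed for indirect calls inside this function. */
__declspec(guard(nocf))
void unchecked(void (*fp)(void)) { fp(); }
```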

Reviewers: rnk, dmajor, pcc, hans, aaron.ballman

Reviewed By: aaron.ballman

Subscribers: aaron.ballman, tomrittervg, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72167
2020-01-10 16:04:12 +00:00
Ulrich Weigand 76e9c2a987 [FPEnv] Generate constrained FP comparisons from clang
Update the IRBuilder to generate constrained FP comparisons in
CreateFCmp when IsFPConstrained is true, similar to the other
places in the IRBuilder.

Also, add a new CreateFCmpS to emit signaling FP comparisons,
and use it in clang where comparisons are supposed to be signaling
(currently, only when emitting code for the <, <=, >, >= operators).

Note that there is currently no way to add fast-math flags to a
constrained FP comparison, since this is implemented as an intrinsic
call that returns a boolean type, and FMF are only allowed for calls
returning a floating-point type. However, given the discussion around
https://bugs.llvm.org/show_bug.cgi?id=42179, it seems that FCmp itself
really shouldn't have any FMF either, so this is probably OK.

Reviewed by: craig.topper

Differential Revision: https://reviews.llvm.org/D71467
2020-01-10 14:33:10 +01:00
Simon Tatham 1ccee0e863 [ARM,MVE] Make `vqrshrun` generate the right instruction.
Summary:
A copy-paste error in `arm_mve.td` meant that the MVE `vqrshrun`
intrinsic family was generating the `vqshrun` machine instruction,
because in the IR intrinsic call, the rounding flag argument was set
to 0 rather than 1.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72496
2020-01-10 11:25:05 +00:00
serge-sans-paille 921f871ac4 Allow system header to provide their own implementation of some builtin
If a system header provides an (inline) implementation of some of its
functions, clang still matches on the function name and generates the appropriate
llvm builtin, e.g. memcpy. This behavior is in line with the glibc recommendation «
users may not provide their own version of symbols » but doesn't account for the
fact that glibc itself can provide inline versions of some functions.

It is the case for the memcpy function when -D_FORTIFY_SOURCE=1 is on. In that
case an inline version of memcpy calls __memcpy_chk, a function that performs
extra runtime checks. Clang currently ignores the inline version and thus
provides no runtime check.

This code fixes the issue by detecting functions whose name is a builtin name
but also have an inline implementation.

Differential Revision: https://reviews.llvm.org/D71082
2020-01-10 09:44:20 +01:00
Wei Mi 21a4710c67 [ThinLTO] Pass CodeGenOpts like UnrollLoops/VectorizeLoop/VectorizeSLP
down to pass builder in ltobackend.

Currently CodeGenOpts like UnrollLoops/VectorizeLoop/VectorizeSLP in clang
are not passed down to pass builder in ltobackend when new pass manager is
used. This is inconsistent with the behavior when new pass manager is used
and thinlto is not used. Such inconsistency causes the slp vectorization pass
not to be enabled in ltobackend for O3 + thinlto right now. This patch
fixes that.

Differential Revision: https://reviews.llvm.org/D72386
2020-01-09 21:13:11 -08:00
Alex Richardson 8c387cbea7 Add builtins for aligning and checking alignment of pointers and integers
This change introduces three new builtins (which work on both pointers
and integers) that can be used instead of common bitwise arithmetic:
__builtin_align_up(x, alignment), __builtin_align_down(x, alignment) and
__builtin_is_aligned(x, alignment).
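
A small usage sketch, with the semantics as described here:

```
#include <stdbool.h>
#include <stdint.h>

void example(char *p, uintptr_t v) {
    char *up    = __builtin_align_up(p, 16);    /* round p up to a 16-byte boundary */
    char *down  = __builtin_align_down(p, 16);  /* round p down to a 16-byte boundary */
    bool  ok    = __builtin_is_aligned(p, 16);  /* is p 16-byte aligned? */
    uintptr_t w = __builtin_align_up(v, 64);    /* also works on integers */
    (void)up; (void)down; (void)ok; (void)w;
}
```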

I originally added these builtins to the CHERI fork of LLVM a few years ago
to handle the slightly different C semantics that we use for CHERI [1].
Until recently these builtins (or sequences of other builtins) were
required to generate correct code. I have since made changes to the default
C semantics so that they are no longer strictly necessary (but using them
does generate slightly more efficient code). However, based on our experience
using them in various projects over the past few years, I believe that adding
these builtins to clang would be useful.

These builtins have the following benefit over bit-manipulation and casts
via uintptr_t:

- The named builtins clearly convey the semantics of the operation. While
  checking alignment using __builtin_is_aligned(x, 16) versus
  ((x & 15) == 0) is probably not a huge win in readably, I personally find
  __builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including const qualifiers). When
  using casts via uintptr_t, it is easy to cast to the wrong type or strip
  qualifiers such as const.
- If the alignment argument is a constant value, clang can check that it is
  a power-of-two and within the range of the type. Since the semantics of
  these builtins is well defined compared to arbitrary bit-manipulation,
  it is possible to add a UBSAN checker that the run-time value is a valid
  power-of-two. I intend to add this as a follow-up to this change.
- The builtins avoids int-to-pointer casts both in C and LLVM IR.
  In the future (i.e. once most optimizations handle it), we could use the new
  llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally
  be generated.
- They can be used to round up/down to the next aligned value for both
  integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g.
  roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation
  with a builtin call, we get improved diagnostics for many call-sites while
  only having to change a few lines.
- Finally, the builtins also emit assume_aligned metadata when used on pointers.
  This can improve code generation compared to the uintptr_t casts.

[1] In our CHERI compiler we have compilation mode where all pointers are
implemented as capabilities (essentially unforgeable 128-bit fat pointers).
In our original model, casts from uintptr_t (which is a 128-bit capability)
to an integer value returned the "offset" of the capability (i.e. the
difference between the virtual address and the base of the allocation).
This causes problems for cases such as checking the alignment: for example, the
expression `if ((uintptr_t)ptr & 63) == 0` is generally used to check if the
pointer is aligned to a multiple of 64 bytes. The problem with offsets is that
any pointer to the beginning of an allocation will have an offset of zero, so
this check always succeeds in that case (even if the address is not correctly
aligned). The same issues also exist when aligning up or down. Using the
alignment builtins ensures that the address is used instead of the offset. While
I have since changed the default C semantics to return the address instead of
the offset when casting, this offset compilation mode can still be used by
passing a command-line flag.

Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
Reviewed By: aaron.ballman, lebedev.ri
Differential Revision: https://reviews.llvm.org/D71499
2020-01-09 21:48:29 +00:00
serge-sans-paille b35f5d4914 [clang] Enforce triple in mempcpy test
Fixes http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/2597
2020-01-09 21:09:15 +01:00
Eric Astor 1c545f6dbc [ms] [X86] Use "P" modifier on all branch-target operands in inline X86 assembly.
Summary:
Extend D71677 to apply to all branch-target operands, rather than special-casing call instructions.

Also add a regression test for llvm.org/PR44272, since this finishes fixing it.

Reviewers: thakis, rnk

Reviewed By: thakis

Subscribers: merge_guards_bot, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72417
2020-01-09 14:55:03 -05:00
Simon Tatham 06d07ec4a3 [Clang] Handle target-specific builtins returning aggregates.
Summary:
A few of the ARM MVE builtins directly return a structure type. This
causes an assertion failure at code-gen time if you try to assign the
result of the builtin to a variable, because the `RValue` created in
`EmitBuiltinExpr` from the `llvm::Value` produced by codegen is always
made by `RValue::get()`, which creates a non-aggregate `RValue` that
will fail an assertion when `AggExprEmitter::withReturnValueSlot` calls
`Src.getAggregatePointer()`. A similar failure occurs if you try to use
the struct return value directly to extract one field, e.g.
`vld2q(address).val[0]`.
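
Both of the problem cases look roughly like this (illustrative; both used to
hit the assertion described above):

```
#include <arm_mve.h>

uint32x4_t first_of_pair(const uint32_t *address) {
    uint32x4x2_t pair = vld2q(address);   /* assign the aggregate result */
    return pair.val[0];
}

uint32x4_t first_directly(const uint32_t *address) {
    return vld2q(address).val[0];         /* extract a field directly */
}
```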

The existing code-gen tests for those MVE builtins pass the returned
structure type directly to the C `return` statement, which apparently
managed to avoid that particular code path, so we didn't notice the
crash.

Now `EmitBuiltinExpr` checks the evaluation kind of the builtin's return
value, and does the necessary handling for aggregate returns. I've added
two extra test cases, both of which crashed before this change.

Reviewers: dmgreen, rjmccall

Reviewed By: rjmccall

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72271
2020-01-09 17:28:37 +00:00
serge-sans-paille cee4a1c957 Improve support of GNU mempcpy
- Lower to the memcpy intrinsic
- Raise warnings when size/bounds are known
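
For reference, GNU mempcpy copies like memcpy but returns the end of the
destination, which is why it can be lowered to the memcpy intrinsic plus an
offset; a rough equivalence (illustrative):

```
#include <string.h>

void *mempcpy_equivalent(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);
    return (char *)dst + n;
}
```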

Differential Revision: https://reviews.llvm.org/D71374
2020-01-09 17:31:00 +01:00
Momchil Velikov 173b711e83 [ARM][MVE] MVE-I should not be disabled by -mfpu=none
Architecturally, it's allowed to have MVE-I without an FPU, thus
-mfpu=none should not disable MVE-I, or moves to/from FP-registers.

This patch removes `+/-fpregs` from features unconditionally added to
target feature list, depending on FPU and moves the logic to Clang
driver, where the negative form (`-fpregs`) is conditionally added to
the target features list for the cases of `-mfloat-abi=soft`, or
`-mfpu=none` without either `+mve` or `+mve.fp`. Only the negative
form is added by the driver, the positive one is derived from other
features in the backend.

Differential Revision: https://reviews.llvm.org/D71843
2020-01-09 14:03:25 +00:00
Simon Tatham dac7b23cc3 [ARM,MVE] Intrinsics for variable shift instructions.
This batch of intrinsics fills in all the shift instructions that take
a variable shift distance in a register, instead of an immediate. Some
of these instructions take a single shift distance in a scalar
register and apply it to all lanes; others take a vector of per-lane
distances.

These instructions are all basically one family, varying in whether
they saturate out-of-range values, and whether they round when bits
are shifted off the bottom. I've implemented them at the IR level by a
much smaller family of IR intrinsics, which take flag parameters to
indicate saturating and/or rounding (along with the usual one to
specify signed/unsigned integers).

An oddity is that all of them are //left// shift instructions – but if
you pass a negative shift count, they'll shift right. So the vector
shift distances are always vectors of //signed// integers, regardless
of whether you're considering the other input vector to be of signed
or unsigned. Also, even the simplest `vshlq` instruction in this
family (neither saturating nor rounding) has to be implemented as an
IR intrinsic, because the ordinary LLVM IR `shl` operation would
consider an out-of-range shift count to be undefined behavior.
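
A small ACLE-level illustration of the negative-count behaviour (illustrative
only, assuming the usual intrinsic spellings):

```
#include <arm_mve.h>

/* Per-lane variable shift: lanes with a positive count shift left,
   lanes with a negative count shift right. */
int32x4_t shift_lanes(int32x4_t v, int32x4_t counts) {
    return vshlq_s32(v, counts);
}

/* One scalar count applied to every lane. */
int32x4_t shift_all(int32x4_t v, int32_t count) {
    return vshlq_r_s32(v, count);
}
```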

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72329
2020-01-08 14:42:24 +00:00
Simon Tatham 3100480925 [ARM,MVE] Intrinsics for partial-overwrite imm shifts.
This batch of intrinsics covers two sets of immediate shift
instructions, which have in common that they only overwrite part of
their output register and so they need an extra input giving its
previous value.

The VSLI and VSRI instructions shift each lane of the input vector
left or right just as if they were normal immediate VSHL/VSHR, but
then they only overwrite the output bits that correspond to actual
shifted bits of the input. So VSLI will leave the low n bits of each
output lane unchanged, and VSRI the same with the top n bits.
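
As a sketch of the shift-and-insert behaviour (illustrative only):

```
#include <arm_mve.h>

/* Shift each lane of b left by 4 and insert into a, leaving the low 4 bits
   of every lane of a unchanged; this is why the intrinsic takes a's old
   value as an extra input. */
uint16x8_t shift_insert(uint16x8_t a, uint16x8_t b) {
    return vsliq_n_u16(a, b, 4);
}
```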

The V[Q][R]SHR[U]N family are all narrowing shifts: they take an input
vector of 2n-bit integers, shift each lane right by a constant, and
then narrowing the shifted result to only n bits. So they only
overwrite half of the n-bit lanes in the output register, and the B/T
suffix indicates whether it's the bottom or top half of each 2n-bit
lane.

I've implemented the whole of the latter family using a single IR
intrinsic `vshrn`, which takes a lot of i32 parameters indicating
which instruction it expands to (by specifying signedness of the input
and output types, whether it saturates and/or rounds, etc).

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72328
2020-01-08 14:42:24 +00:00
Aaron Ballman 55a51e1c79 Disallow an empty string literal in an asm label
An empty string literal in an asm label does not make a whole lot of sense. GCC
does not diagnose such a construct, but it also generates code that cannot be
assembled by gas should two symbols have an empty asm label within the same TU.
This does not affect an asm statement with an empty string literal, which is
still a useful construct.
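
For reference, the distinction is roughly (sketch):

```
/* Still accepted: an asm statement with an empty string is a useful no-op. */
void f(void) { __asm__(""); }

/* Now diagnosed: an empty string literal as an asm label on a declaration. */
/* int g __asm__("");   -- error after this change */
```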
2020-01-08 08:38:02 -05:00
Simon Tatham 34817e04fe [ARM,MVE] Fix many signedness errors in MVE intrinsics.
Summary:
Running an end-to-end test last week I noticed that a lot of the ACLE
intrinsics that operate differently on vectors of signed and unsigned
integers were ending up generating the signed version of the
instruction unconditionally. This is because the IR intrinsics had no
way to distinguish signed from unsigned: the LLVM type system just
calls them both `v8i16` (or whatever), so you need either separate
intrinsics for signed and unsigned, or a flag parameter that tells
ISel which one to choose.

This patch fixes all the problems of that kind that I've noticed, by
adding an i32 flag parameter to many of the IR intrinsics which is set
to 1 for unsigned (matching the existing practice in cases where we
got it right), and conditioning all the isel patterns on that flag. So
the fundamental change is in `IntrinsicsARM.td`, changing the
low-level IR intrinsics API; there are knock-on changes in
`arm_mve.td` (adjusting code gen for the ACLE intrinsics to use the
modified API) and in `ARMInstrMVE.td` (adjusting isel to expect the
new unsigned flags). The rest of this patch is boringly updating tests.

Reviewers: dmgreen, miyuki, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72270
2020-01-06 16:33:16 +00:00
Simon Tatham 4978296cd8 [ARM,MVE] Support -ve offsets in gather-load intrinsics.
Summary:
The ACLE intrinsics with `gather_base` or `scatter_base` in the name
are wrappers on the MVE load/store instructions that take a vector of
base addresses and an immediate offset. The immediate offset can be up
to 127 times the alignment unit, and it can be positive or negative.

At the MC layer, we got that right. But in the Sema error checking for
the wrapping intrinsics, the offset was erroneously constrained to be
positive.
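
A negative immediate offset that Sema used to reject looks like this
(illustrative only):

```
#include <arm_mve.h>

/* Gather 32-bit words from (bases[i] - 0x1fc), i.e. -127 times the
   4-byte alignment unit, which the instruction supports. */
uint32x4_t load_below(uint32x4_t bases) {
    return vldrwq_gather_base_u32(bases, -0x1fc);
}
```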

To fix this I've adjusted the `imm_mem7bit` class in the Tablegen that
defines the intrinsics. But that causes integer literals like
`0xfffffffffffffe04` to appear in the autogenerated calls to
`SemaBuiltinConstantArgRange`, which provokes a compiler warning
because that's out of the non-overflowing range of an `int64_t`. So
I've also tweaked `MveEmitter` to emit that as `-0x1fc` instead.

Updated the tests of the Sema checks themselves, and also adjusted a
random sample of the CodeGen tests to actually use negative offsets
and prove they get all the way through code generation without causing
a crash.

Reviewers: dmgreen, miyuki, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: kristof.beyls, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72268
2020-01-06 16:33:07 +00:00
Kevin P. Neal 89d6c288ba [SystemZ] Use FNeg in s390x clang builtins
The s390x builtins are still using FSub instead of FNeg. Correct that.
2020-01-02 12:14:43 -05:00
Craig Topper 5e5a1d2790 [CodeGen] Emit conj/conjf/confjl libcalls as fneg instructions if possible.
We already recognize the __builtin versions of these, might as well
recognize the libcall version.

Differential Revision: https://reviews.llvm.org/D72028
2019-12-31 10:41:00 -08:00
Craig Topper 70f8dd4cf6 [CodeGen] Use IRBuilder::CreateFNeg for __builtin_conj
This replaces the fsub -0.0 idiom with an fneg instruction. We didn't seem to have a test that showed the current codegen. Just some tests for constant folding and a test that was only checking the declare lines for libcalls. The latter just checked that we did not have a declare for @conj when using __builtin_conj.

Differential Revision: https://reviews.llvm.org/D72012
2019-12-30 13:25:23 -08:00
Craig Topper 8b23b2bbd9 [CodeGen] Use CreateFNeg in buildFMulAdd
We have an fneg instruction now and should use it instead of the fsub -0.0 idiom. Looks like we had no test that showed that we handled the negation cases here so I've added new tests.

Differential Revision: https://reviews.llvm.org/D72010
2019-12-30 13:24:11 -08:00
Eric Astor 4a7aa252a3 [X86][AsmParser] re-introduce 'offset' operator
Summary:
Amend MS offset operator implementation, to more closely fit with its MS counterpart:

    1. InlineAsm: evaluate non-local source entities to their (address) location
    2. Provide a means by which one may acquire the address of an assembly label via MS syntax, rather than yielding a memory reference (i.e. "offset asm_label" and "$asm_label" should be synonymous)
    3. Address PR32530

Based on http://llvm.org/D37461

Fix broken test where the break appears unrelated.

- Set up appropriate memory-input rewrites for variable references.

- Intel-dialect assembly printing now correctly handles addresses by adding "offset".

- Pass offsets as immediate operands (using "r" constraint for offsets of locals).

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D71436
2019-12-30 14:35:26 -05:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Craig Topper d35bcbbb5d [Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize.
The validateOutputSize and validateInputSize need to check whether
AVX or AVX512 are enabled. But this can be affected by the
target attribute so we need to factor that in.

This patch moves some of the code from CodeGen to create an
appropriate feature map that we can pass to the function.

Differential Revision: https://reviews.llvm.org/D68627
2019-12-23 11:23:30 -08:00
Yonghong Song e3d8ee35e4 reland "[DebugInfo] Support to emit debugInfo for extern variables"
Commit d77ae1552f
("[DebugInfo] Support to emit debugInfo for extern variables")
added debug info for extern variables for the BPF target.
The commit was reverted by 891e25b02d
because the committed tests used %clang instead of %clang_cc1, causing
test failures in certain scenarios, as reported by Reid Kleckner.

This patch fixed the tests by using %clang_cc1.

Differential Revision: https://reviews.llvm.org/D71818
2019-12-22 18:28:50 -08:00
Reid Kleckner 891e25b02d Revert "[DebugInfo] Support to emit debugInfo for extern variables"
This reverts commit d77ae1552f.

The tests committed along with this change do not pass, and should be
changed to use %clang_cc1.
2019-12-22 12:54:06 -08:00
Eric Astor dc5b614fa9 [ms] [X86] Use "P" modifier on operands to call instructions in inline X86 assembly.
Summary:
This is documented as the appropriate template modifier for call operands.
Fixes PR44272, and adds a regression test.

Also adds support for operand modifiers in Intel-style inline assembly.

Reviewers: rnk

Reviewed By: rnk

Subscribers: merge_guards_bot, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71677
2019-12-22 09:16:34 -05:00
Fangrui Song 0792ef7256 [Driver] Verify -mrecord-mcount in Driver, instead of CodeGen after D71627
GCC's x86 and s390 ports support -mrecord-mcount. Other ports reject the
option.

  aarch64-linux-gnu-gcc: error: unrecognized command line option ‘-mrecord-mcount’

Allowing this option can cause failures when building the Linux kernel for
aarch64, powerpc64, etc., whose build system will think the feature is
available if the clang command returns 0.
2019-12-21 22:47:24 -08:00
Jonas Paulsson 2520bef865 [Clang FE, SystemZ] Recognize -mrecord-mcount CL option.
Recognize -mrecord-mcount from the command line and add a function attribute
"mrecord-mcount" when passed.

Only valid on SystemZ (when used with -mfentry).

Review: Ulrich Weigand
https://reviews.llvm.org/D71627
2019-12-19 08:51:55 -08:00
Thomas Lively 71eb8023d8 [WebAssembly] Add avgr_u intrinsics and require nuw in patterns
Summary:
The vector pattern `(a + b + 1) / 2` was previously selected to an
avgr_u instruction regardless of nuw flags, but this is incorrect in
the case where either addition may have an unsigned wrap. This CL
changes the existing pattern to require both adds to have nuw flags
and adds builtin functions and intrinsics for the avgr_u instructions
because the corrected pattern is not representable in C.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71648
2019-12-18 15:31:38 -08:00
Jonas Paulsson ca520592c0 [Clang FE, SystemZ] Don't add "true" value for the "mnop-mcount" attribute.
Let the "mnop-mcount" function attribute simply be present or non-present.
Update SystemZ backend as well to use hasFnAttribute() instead.

Review: Ulrich Weigand
https://reviews.llvm.org/D71669
2019-12-18 11:04:13 -08:00
Amy Huang a85f5efd95 Add support for the MS qualifiers __ptr32, __ptr64, __sptr, __uptr.
Summary:
This adds parsing of the qualifiers __ptr32, __ptr64, __sptr, and __uptr and
lowers them to the corresponding address space pointer for 32-bit and 64-bit pointers.
(32/64-bit pointers added in https://reviews.llvm.org/D69639)

A large part of this patch is making these pointers ignore the address space
when doing things like overloading and casting.

https://bugs.llvm.org/show_bug.cgi?id=42359
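
A hedged sketch of the newly parsed qualifiers, assuming a 64-bit target with -fms-extensions; the typedef names are made up for illustration:

```
/* 32-bit pointer, zero-extended to 64 bits when used. */
typedef int * __ptr32 __uptr uptr32_t;
/* Explicitly 64-bit pointer. */
typedef int * __ptr64 ptr64_t;

int load_through(uptr32_t small, ptr64_t full) {
  /* Each dereference is lowered via the corresponding address-space pointer. */
  return *small + *full;
}
```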

Reviewers: rnk, rsmith

Subscribers: jholewinski, jvesely, nhaehnle, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71039
2019-12-18 10:41:12 -08:00
Ulrich Weigand 1e89188d35 [FPEnv] Remove unnecessary rounding mode argument for constrained intrinsics
The following intrinsics currently carry a rounding mode metadata argument:

    llvm.experimental.constrained.minnum
    llvm.experimental.constrained.maxnum
    llvm.experimental.constrained.ceil
    llvm.experimental.constrained.floor
    llvm.experimental.constrained.round
    llvm.experimental.constrained.trunc

This is not useful since the semantics of those intrinsics do not in any way
depend on the rounding mode. In similar cases, other constrained intrinsics
do not have the rounding mode argument. Remove it here as well.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D71218
2019-12-17 21:10:36 +01:00
Jonas Paulsson 599d1cc07a [Clang FE, SystemZ] Recognize -mpacked-stack CL option
Recognize -mpacked-stack from the command line and add a function attribute
"mpacked-stack" when passed. This is needed for building the Linux kernel.

If this option is passed for any other target than SystemZ, an error is
generated.

Review: Ulrich Weigand
https://reviews.llvm.org/D71441
2019-12-17 11:26:17 -08:00
Erich Keane 1ed832e424 Reland [NFC-I] Remove hack for fp-classification builtins
The FP-classification builtins (__builtin_isfinite, etc) use variadic
packs in the definition file to mean an overload set.  Because of that,
floats were converted to doubles, which is incorrect. There WAS a patch
to remove the cast after the fact.

This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.

A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.

Note: The initial patch for this missed that certain systems need to
still convert half to float, since they don't support that type.
2019-12-17 06:58:29 -08:00
Sam Clegg 0a1e349a79 [WebAssembly] Setting export_name implies llvm.used
This change updates the clang front end to add symbols to llvm.used
when they have explicit export_name attribute.

Differential Revision: https://reviews.llvm.org/D71493
2019-12-16 14:48:38 -08:00
Thomas Lively 3a93756dfb [WebAssembly] Replace SIMD int min/max builtins with patterns
Summary:
The instructions were originally implemented via builtins and
intrinsics so users would have to explicitly opt-in to using
them. This was useful while we were validating whether these instructions
should have been merged into the spec proposal. Now that they have
been, we can use normal codegen patterns, so the intrinsics and
builtins are no longer useful.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71500
2019-12-16 11:48:49 -08:00
Erich Keane f02d6dd6c7 Fix floating point builtins to not promote float->double
As brought up in D71467, a group of floating point builtins
automatically promoted floats to doubles because they used the variadic
builtin tag to support an overload set. The result is that the
parameters were treated as a variadic pack, which always promotes
float->double.

This resulted in the wrong answer being given in cases with certain
values of NaN.
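
An illustration of the promotion problem, using a subnormal float rather than the NaN cases mentioned above; this example is not taken from the patch:

```
int is_tiny_normal(void) {
  float tiny = 1e-44f;           /* subnormal as float, normal as double */
  /* Without the promotion the value is classified in its own type and this
   * returns 0; promoting it to double would have returned 1. */
  return __builtin_isnormal(tiny);
}
```
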
2019-12-16 07:20:29 -08:00
Mark Murray a2cd4600ec [ARM][MVE][Intrinsics] All vqdmulhq/vqrdmulhq tests should be for signed numbers.
Fix broken tests. I can't yet explain how they worked locally pre-commit.
2019-12-13 17:29:59 +00:00
Mikhail Maltsev 99581fd4c8 [ARM][MVE] Add vector reduction intrinsics with two vector operands
Summary:
This patch adds intrinsics for the following MVE instructions:
* VABAV
* VMLADAV, VMLSDAV
* VMLALDAV, VMLSLDAV
* VRMLALDAVH, VRMLSLDAVH

Each of the above 4 groups has a corresponding new LLVM IR intrinsic,
since the instructions cannot be easily represented using
general-purpose IR operations.

Reviewers: simon_tatham, ostannard, dmgreen, MarkMurrayARM

Reviewed By: MarkMurrayARM

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71062
2019-12-13 13:17:29 +00:00
Simon Tatham 25305a9311 [ARM][MVE] Add intrinsics for more immediate shifts.
Summary:
This fills in the remaining shift operations that take a single vector
input and an immediate shift count: the `vqshl`, `vqshlu`, `vrshr` and
`vshll[bt]` families.

`vshll[bt]` (which shifts each input lane left into a double-width
output lane) is the most interesting one. There are separate MC
instruction ids for shifting by exactly the input lane width and
shifting by less than that, because the instruction encoding is so
completely different for the lane-width special case. So I had to
write two sets of patterns to match based on the immediate shift
count, which involved adding a ComplexPattern matcher to avoid the
general-case pattern accidentally matching the special case too. For
that family I've made sure to add an llc codegen test for both
versions of each instruction.

I'm experimenting with a new strategy for parametrising the isel
patterns for all these instructions: adding extra fields to the
relevant `Instruction` subclass itself, which are ignored by the
Tablegen backends that generate the MC data, but can be retrieved from
each instance of that instruction subclass when it's passed as a
template parameter to the multiclass that generates its isel patterns.
A nice effect of that is that I can fill in those informational fields
using `let` blocks, rather than having to type them out once per
instruction at `defm` time.

(As a result, quite a lot of existing instruction `def`s are
reindented by this patch, so it's clearer to read with whitespace
changes ignored.)

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71458
2019-12-13 13:07:39 +00:00
Mark Murray 228c74076d [ARM][MVE][Intrinsics] Add *_x() variants of my *_m() intrinsics.
Summary:
Better use of multiclass is used, and this helped find some existing
bugs in the predicated VMULL* intrinsics, which are now fixed.

The refactored VMULL[TB]Q_(INT|POLY)_M() intrinsics were discovered
to have an argument ("inactive") with incorrect type, and this required
a fix that is included in this whole patch. The argument "inactive"
should have been the same width (per vector element) as the return
type of the intrinsic, but was not in the case where the return type
was double the element width of the input types.

To assist in testing the multiclassing, and to thwart further gremlins,
the unit tests are improved in scope.

The *.ll tests are all generated by a small bit of throw-away scripting
from the corresponding *.c tests, and as such the diffs are large and
nasty. Look at the file rather than the diff.

Reviewers: dmgreen, miyuki, ostannard, simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71421
2019-12-13 11:51:23 +00:00
Guillaume Chatelet 0508c994f0 [clang] Turn -fno-builtin flag into an IR Attribute
Summary:
This is a follow up on https://reviews.llvm.org/D61634#1742154 to turn the clang driver -fno-builtin flag into an IR attribute.
I also investigated pushing the attribute earlier on (in Sema) but it looks like this patch is simple and will cover all function calls.

Reviewers: aaron.ballman, courbet

Subscribers: cfe-commits, tejohnson

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71193
2019-12-12 17:21:12 +01:00
Momchil Velikov 600d123c6f [ARM][CMSE] Add CMSE header and builtins
This is patch C2 as mentioned in RFC
http://lists.llvm.org/pipermail/cfe-dev/2019-March/061834.html

This adds CMSE builtin functions, and introduces arm_cmse.h header which has
useful macros, functions, and data types for end-users of CMSE.

Patch by Javed Absar.

Differential Revision: https://reviews.llvm.org/D70817
2019-12-12 15:01:14 +00:00
Sam Clegg 881d877846 [WebAssembly] Add new `export_name` clang attribute for controlling wasm export names
This is equivalent to the existing `import_name` and `import_module`
attributes which control the import names in the final wasm binary
produced by lld.

This maps an existing symbol onto an explicitly chosen name in the final wasm binary.

This attribute currently requires a string rather than using the
symbol name for a couple of reasons:

1. Avoid confusion with static and dynamic linking which is
   based on symbol name.  Exporting a function from a wasm module using
   this directive is orthogonal to both static and dynamic linking.
2. Avoids name mangling.
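
A hedged sketch, assuming a wasm target such as --target=wasm32; the names are illustrative:

```
/* The export in the final wasm binary is named "interface_run",
 * independently of the C symbol name. */
__attribute__((export_name("interface_run")))
int run(int arg) {
  return arg + 1;
}
```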

Differential Revision: https://reviews.llvm.org/D70520
2019-12-11 11:54:57 -08:00
Nicolai Hähnle f21c081b78 CodeGen: Allow annotations on globals in non-zero address space
Summary:
Attribute annotations are recorded in a special global composite variable
that points to annotation strings and the annotated objects.

As a restriction of the LLVM IR type system, those pointers are all
pointers to address space 0, so let's insert an addrspacecast when the
annotated global is in a non-0 address space.

Since this addrspacecast is only reachable from the global annotations
object, this should allow us to represent annotations on all globals
regardless of which addrspacecasts are usually legal for the target.
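
A hedged sketch of what this enables, using the generic clang address_space extension; the names are illustrative:

```
/* Annotating a global that lives in a non-zero address space: the pointer
 * recorded in llvm.global.annotations is now an addrspacecast of the global,
 * so the annotation record still type-checks as an address-space-0 pointer. */
__attribute__((annotate("device_counter")))
__attribute__((address_space(1))) int counter = 0;
```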

Reviewers: rjmccall

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71208
2019-12-11 13:24:32 +01:00
Simon Tatham bd0f271c9e [ARM][MVE] Add intrinsics for immediate shifts. (reland)
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.

There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` half way through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
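
A hedged sketch of the new intrinsics, assuming arm_mve.h and an MVE-enabled target such as -march=armv8.1-m.main+mve:

```
#include <arm_mve.h>

uint32x4_t shift_demo(uint32x4_t v) {
  uint32x4_t l = vshlq_n_u32(v, 3);   /* plain IR shl by a splat of 3 */
  /* ACLE allows a full-lane-width right shift; for an unsigned shift this is
   * emitted as the zero vector rather than a UB-prone lshr by 32. */
  return vshrq_n_u32(l, 32);
}
```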

In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen, MarkMurrayARM

Subscribers: echristo, hokein, rdhindsa, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71065
2019-12-11 10:10:09 +00:00
Sourabh Singh Tomar fb4d8fe1a8 Recommit "[DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified."
Reviewers: dblaikie, aprantl, probinson

Tags: #debug-info #llvm

Differential Revision: https://reviews.llvm.org/D71185
2019-12-11 01:24:50 +05:30
Sourabh Singh Tomar d82b6ba21b Revert "[DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified."
This reverts commit 6ef01588f4.
Missing Differential revision.
2019-12-11 01:20:40 +05:30
Sourabh Singh Tomar 6ef01588f4 [DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified. 2019-12-11 01:18:02 +05:30
Yaxun (Sam) Liu 21b43885b8 Fix bug 44190 - wrong code with #pragma pack(1)
5b330e8d61 caused
a regression on s390:

https://bugs.llvm.org/show_bug.cgi?id=44190

we need to copy if either the argument is non-byval or the argument is underaligned.

Differential Revision: https://reviews.llvm.org/D71282
2019-12-10 13:56:34 -05:00
Kevin P. Neal 6515c524b0 [FPEnv] clang support for constrained FP builtins
Change the IRBuilder and clang so that constrained FP intrinsics will be
emitted for builtins when appropriate. Only non-target-specific builtins
are affected in this patch.
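
A hedged sketch of the effect, assuming a strict FP mode is requested (e.g. -ffp-exception-behavior=strict):

```
/* Under strict FP semantics this builtin should now be lowered to
 * llvm.experimental.constrained.sqrt instead of the plain llvm.sqrt,
 * so rounding mode and FP exceptions are respected. */
double strict_root(double x) {
  return __builtin_sqrt(x);
}
```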

Differential Revision: https://reviews.llvm.org/D70256
2019-12-10 13:09:12 -05:00
Mikhail Maltsev e6d3261c67 [ARM][MVE] Refactor complex vector intrinsics [NFCI]
Summary:
This patch refactors instruction selection of the complex vector
addition, multiplication and multiply-add intrinsics, so that it is
now based on TableGen patterns rather than C++ code.

It also changes the first parameter (halving vs non-halving) of the
arm_mve_vcaddq IR intrinsic to match the corresponding instruction
encoding, hence it requires some changes in the tests.

The patch addresses David's comment in https://reviews.llvm.org/D71190

Reviewers: dmgreen, ostannard, simon_tatham, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71245
2019-12-10 16:21:52 +00:00
Yonghong Song d77ae1552f [DebugInfo] Support to emit debugInfo for extern variables
Extern variable usage in BPF is different from traditional
pure user space applications. Recent discussion on the linux bpf
mailing list covered two use cases where debug info types are
required to use extern variables:
  - extern types are required to have a suitable interface
    in libbpf (bpf loader) to provide kernel config parameters
    to bpf programs.
    https://lore.kernel.org/bpf/CAEf4BzYCNo5GeVGMhp3fhysQ=_axAf=23PtwaZs-yAyafmXC9g@mail.gmail.com/T/#t
  - extern types are required so kernel bpf verifier can
    verify program which uses external functions more precisely.
    This will make later link with actual external function no
    need to reverify.
    https://lore.kernel.org/bpf/87eez4odqp.fsf@toke.dk/T/#m8d5c3e87ffe7f2764e02d722cb0d8cbc136880ed

This patch added clang support to emit debuginfo for extern variables
with a TargetInfo hook to enable it. The debuginfo for the
extern variable is emitted only if that extern variable is
referenced in the current compilation unit.
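
A hedged sketch, assuming the BPF target and -g; the names are illustrative:

```
/* Because this extern is referenced below, debug info describing it is now
 * emitted even though no definition exists in this compilation unit. */
extern int kernel_config_flag;

int uses_extern(void) {
  return kernel_config_flag;
}
```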

Currently, only BPF target enables to generate debug info for
extern variables. The emission of such debuginfo is disabled for C++
at this moment since BPF only supports a subset of the C language.
Emission with C++ can be enabled later if an appropriate use case
is identified.

-fstandalone-debug permits us to see more debuginfo with the cost
of bloated binary size. This patch did not add emission of extern
variable debug info with -fstandalone-debug. This can be
re-evaluated if there is a real need.

Differential Revision: https://reviews.llvm.org/D70696
2019-12-10 08:09:51 -08:00
Jim Lin cefac9dfaa Remove implicit conversion that promotes half to other larger precision types for fp classification builtins
Summary:
Half should not be promoted to double or any larger precision type for fp classification builtins,
because fp classification builtins would give an incorrect result with the promoted argument.
For example, __builtin_isnormal with a subnormal half value should return false, but it does not,
because the subnormal half value is promoted to a normal double value.

Reviewers: aaron.ballman

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71049
2019-12-10 13:24:21 +08:00
Eric Christopher 9c6b7f68b8 Revert "[ARM][MVE] Add intrinsics for immediate shifts."
and two follow-on commits: one warning fix and one functionality fix.

As it's breaking at least the lto bot:

http://lab.llvm.org:8011/builders/clang-with-lto-ubuntu/builds/15132/steps/test-stage1-compiler/logs/stdio

This reverts commits:

 8d70f3c933
 ff4dceef92
 d97b3e3e65
2019-12-09 16:47:38 -08:00
Mark Murray fc3417cb5a [ARM][MVE][Intrinsics] Add VQADDQ, VHADDQ, VRHADDQ, VQSUBQ, VHSUBQ, VQDMULHQ, VQRDMULHQ intrinsics.
Summary: Add VQADDQ, VHADDQ, VRHADDQ, VQSUBQ, VHSUBQ, VQDMULHQ, VQRDMULHQ intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen, miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71198
2019-12-09 17:41:47 +00:00
Mark Murray 2eb61fa5d6 [ARM][MVE][Intrinsics] Add VMULL[BT]Q_(INT|POLY) intrinsics.
Summary: Add VMULL[BT]Q_(INT|POLY) intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71066
2019-12-09 17:41:47 +00:00
Simon Tatham d97b3e3e65 [ARM][MVE] Add intrinsics for immediate shifts.
Summary:
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.

There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` half way through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.

In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71065
2019-12-09 15:44:09 +00:00
Zahira Ammarguellat 32c802e0f5 Fix build bot fails due to the patch here:
https://reviews.llvm.org/D70691
Fixed the LIT test case. Added the REQUIRES instruction.
2019-12-09 09:24:47 -05:00
Mikhail Maltsev 0d1490bf6a [ARM][MVE] Add complex vector intrinsics
Summary:
This patch adds intrinsics for the following MVE instructions:
* VCADD, VHCADD
* VCMUL
* VCMLA

Each of the above 3 groups has a corresponding new LLVM IR intrinsic.

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: MarkMurrayARM

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71190
2019-12-09 12:05:59 +00:00
Zahira Ammarguellat 27f5d35137 Fix for build bot failure. For more details see:
https://reviews.llvm.org/D70691
Updated LIT test.
2019-12-09 00:50:30 -05:00
Reid Kleckner eff08f4097 Revert "[Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize."
This reverts commit e1578fd2b7.

It introduces a dependency on Attr.h which I am removing from
ASTContext.h.
2019-12-06 15:42:14 -08:00
Craig Topper e1578fd2b7 [Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize.
The validateOutputSize and validateInputSize need to check whether
AVX or AVX512 are enabled. But this can be affected by the
target attribute so we need to factor that in.

This patch copies some of the code from CodeGen to create an
appropriate feature map that we can pass to the function. Probably
need some refactoring here to share more code with Codegen. Is
there a good place to do that? Also need to support the cpu_specific
attribute as well.

Differential Revision: https://reviews.llvm.org/D68627
2019-12-06 15:30:59 -08:00
Zahira Ammarguellat a3b2552575 Fix for PR44000. Optimization record for bytecode input missing.
Review is here:  https://reviews.llvm.org/D70691
2019-12-06 07:48:42 -05:00
Melanie Blower 7f9b513847 Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
Patch was reverted because https://bugs.llvm.org/show_bug.cgi?id=44048
        The original patch is modified to set the strictfp IR attribute
        explicitly in CodeGen instead of as a side effect of IRBuilder.
        In the 2nd attempt to reapply there was a Windows lit test failure; the
        tests were fixed to use wildcard matching.

        Differential Revision: https://reviews.llvm.org/D62731
2019-12-05 03:48:04 -08:00
Melanie Blower 5412913631 Revert " Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior=""
This reverts commit cdbed2dd85.
Build break on Windows (lit fail)
2019-12-04 12:21:23 -08:00
Melanie Blower cdbed2dd85 Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
Patch was reverted because https://bugs.llvm.org/show_bug.cgi?id=44048
        The original patch is modified to set the strictfp IR attribute
        explicitly in CodeGen instead of as a side effect of IRBuilder

        Differential Revision: https://reviews.llvm.org/D62731
2019-12-04 11:32:33 -08:00
Mark Murray d3f62ceac0 [ARM][MVE][Intrinsics] Add VMULH/VRMULH intrinsics.
Summary: Add MVE VMULH/VRMULH intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70948
2019-12-04 14:27:12 +00:00
Kadir Cetinkaya 01a26fa74a
[clang][CodeGen] Make use of cc1 instead of clang in the tests 2019-12-03 12:02:26 +01:00
Reid Kleckner 536cedaecb FileCheck IR output for blockaddress in new test
Minor improvement to a test added in 1ac700cdef
2019-12-02 15:05:50 -08:00
David Green 1d4587346f [AArch64] Attempt to fixup test line. NFC
The test is complaining on some of the builders. This attempts to
adjust the run line to be more like the others in the same folder, using
clang_cc1 as opposed to the driver.
2019-12-02 19:30:54 +00:00
Simon Tatham d173fb5d28 [ARM,MVE] Add intrinsics to deal with predicates.
Summary:
This commit adds the `vpselq` intrinsics which take an MVE predicate
word and select lanes from two vectors; the `vctp` intrinsics which
create a tail predicate word suitable for processing the first m
elements of a vector (e.g. in the last iteration of a loop); and
`vpnot`, which simply complements a predicate word and is just
syntactic sugar for the `~` operator.
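
A hedged sketch combining two of the new families (arm_mve.h, MVE target); the helper name is illustrative:

```
#include <arm_mve.h>

/* Select the first n lanes from a and the remaining lanes from b,
 * e.g. when handling the tail of a loop. */
int32x4_t tail_select(int32x4_t a, int32x4_t b, uint32_t n) {
  mve_pred16_t p = vctp32q(n);   /* predicate covering the first n lanes */
  return vpselq_s32(a, b, p);
}
```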

The `vctp` ACLE intrinsics are lowered to the IR intrinsics we've
already added (and which D70592 just reorganized). I've filled in the
missing isel rule for VCTP64, and added another set of rules to
generate the predicated forms.

I needed one small tweak in MveEmitter to allow the `unpromoted` type
modifier to apply to predicates as well as integers, so that `vpnot`
doesn't pointlessly convert its input integer to an `<n x i1>` before
complementing it.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70485
2019-12-02 16:20:30 +00:00
Victor Campos dcf11c5e86 [ARM][AArch64] Complex addition Neon intrinsics for Armv8.3-A
Summary:
Add support for vcadd_* family of intrinsics. This set of intrinsics is
available in Armv8.3-A.

The fp16 versions require the FP16 extension, which has been available
(opt-in) since Armv8.2-A.

Reviewers: t.p.northover

Reviewed By: t.p.northover

Subscribers: t.p.northover, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70862
2019-12-02 14:38:39 +00:00
Mark Murray 510792a2e0 [ARM][MVE][Intrinsics] Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics.
Summary: Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics and their predicated versions. Add unit tests.

Subscribers: kristof.beyls, hiraditya, dmgreen, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70829
2019-12-02 11:18:53 +00:00
Simon Atanasyan f4d32ae75b [mips] Check that features required by built-ins are enabled
Now Clang does not check that features required by built-in functions
are enabled. That causes errors in the backend reported in PR44018.

This patch fixes this bug by checking that required features
are enabled.

This should fix PR44018.

Differential Revision: https://reviews.llvm.org/D70808
2019-11-29 00:23:00 +03:00
Johannes Altmanninger 1ac700cdef [CodeGen] Fix clang crash on aggregate initialization of array of labels
Summary: Fix PR43700

The ConstantEmitter in AggExprEmitter::EmitArrayInit was initialized
with the CodeGenFunction set to null, which caused the crash.
Also simplify another call, and make the CGF member a const pointer
since it is public but only assigned in the constructor.

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D70302
2019-11-28 00:59:25 +01:00
Roman Lebedev b98a0c7f6c
[clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)(take 2)
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left,
two major ones are increment/decrement (this patch) and bit fields.

As it was discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
or `BinaryOperator` (for which we always have promotion/demotion in AST)
or parts of `UnaryOperator` (we have promotion/demotion but only for
certain operations), for inc/dec, clang omits promotion/demotion
altogether, under as-if rule.

This is technically correct: https://rise4fun.com/Alive/zPgD
As it can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)

But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
   their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]

Now, i'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As it was stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means, the sanitization-less codegen must not be altered.

It was also suggested to not change the sanitized codegen
to the one with demotion, but i quite strongly believe
that will not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
   in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
   (i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
   Which kinda defeats the point of
   using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
   we'd have two `add`s with basically the same arguments,
   one of which is used for check+error-less codepath and other one
   for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
   Which means we'll need a whole new handler.
   But then what about the `CompoundAssignOperator`,
   it would also be applicable for it.
   So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
   (due to partial register stalls), so even if we avoid promotion+demotion,
   the computations will //likely// be performed in `i32` anyways.

So i'm not really seeing much benefit of
not doing the straight-forward thing.

While looking into this, i have noticed a few more LLVM middle-end
missed canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].

Those are not specific to inc/dec, we also have them for
`CompoundAssignOperator`, and it can happen for normal arithmetics, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.

TLDR: front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is middle-end's job to make it optimal.

I'm really hoping reviewers agree with my personal assessment
of the path this patch should take..

This originally landed in 9872ea4ed1
but got immediately reverted in cbfa237892
because the assertion was faulty. That fault ended up being caused
by the enum - while there will be promotion, both types are unsigned,
with the same width. So we still don't need to sanitize non-signed cases.
So far. Maybe the assert will tell us this isn't so.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].
Refs. https://github.com/google/sanitizers/issues/940
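
A hedged sketch of what is now caught, assuming -fsanitize=implicit-conversion; the values are illustrative:

```
unsigned char counter = 255;

void bump(void) {
  /* The increment is performed in int (giving 256) and demoted back to
   * unsigned char (0); with this patch that lossy demotion is instrumented
   * and reported instead of silently wrapping. */
  counter++;
}
```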

Reviewers: rjmccall, erichkeane, rsmith, vsk

Reviewed By: erichkeane

Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr

Tags: #llvm, #clang, #sanitizers

Differential Revision: https://reviews.llvm.org/D70539
2019-11-27 21:52:41 +03:00
Mark Murray a048bf87fb [ARM][MVE][Intrinsics] Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests.
Summary: Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70547
2019-11-27 16:52:05 +00:00
Mark Murray e8a8dbe9c4 [ARM][MVE][Intrinsics] Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests.
Summary: Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70546
2019-11-27 16:52:05 +00:00
Mark Murray f4bba07b87 [ARM][MVE][Intrinsics] Add MVE VABD intrinsics. Add unit tests.
Summary: Add MVE VABD intrinsics. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70545
2019-11-27 16:52:04 +00:00
Roman Lebedev cbfa237892
Revert "[clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)"
The asssertion that was added does not hold,
breaks on test-suite/MultiSource/Applications/SPASS/analyze.c
Will reduce the testcase and revisit.

This reverts commit 9872ea4ed1, 870f3542d3.
2019-11-27 17:05:21 +03:00
David Green 9f15fcc271 [ARM] Replace arm_neon_vqadds with sadd_sat
This replaces the A32 NEON vqadds, vqaddu, vqsubs and vqsubu intrinsics
with the target independent sadd_sat, uadd_sat, ssub_sat and usub_sat.
This helps generate vqadds from standard IR nodes, which might be
produced from the vectoriser. The old variants are removed in the
process.

Differential Revision: https://reviews.llvm.org/D69350
2019-11-27 13:32:29 +00:00
Roman Lebedev 870f3542d3
[CodeGen][UBSan] Relax newly-added verbose sanitization tests for inc/dec
In particular, don't hardcode the signature of the handler:
it takes the source file path, so the lengths of the buffers will not match.
2019-11-27 16:05:34 +03:00
Roman Lebedev 9872ea4ed1
[clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left,
two major ones are increment/decrement (this patch) and bit fields.

As it was discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
or `BinaryOperator` (for which we always have promotion/demotion in AST)
or parts of `UnaryOperator` (we have promotion/demotion but only for
certain operations), for inc/dec, clang omits promotion/demotion
altogether, under as-if rule.

This is technically correct: https://rise4fun.com/Alive/zPgD
As it can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)

But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
   their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]

Now, i'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As it was stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means, the sanitization-less codegen must not be altered.

It was also suggested to not change the sanitized codegen
to the one with demotion, but i quite strongly believe
that will not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
   in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
   (i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
   Which kinda defeats the point of
   using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
   we'd have two `add`s with basically the same arguments,
   one of which is used for check+error-less codepath and other one
   for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
   Which means we'll need a whole new handler.
   But then what about the `CompoundAssignOperator`,
   it would also be applicable for it.
   So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
   (due to partial register stalls), so even if we avoid promotion+demotion,
   the computations will //likely// be performed in `i32` anyways.

So i'm not really seeing much benefit of
not doing the straight-forward thing.

While looking into this, i have noticed a few more LLVM middle-end
missed canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].

Those are not specific to inc/dec, we also have them for
`CompoundAssignOperator`, and it can happen for normal arithmetics, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.

TLDR: front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is middle-end's job to make it optimal.

I'm really hoping reviewers agree with my personal assessment
of the path this patch should take..

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].

Reviewers: rjmccall, erichkeane, rsmith, vsk

Reviewed By: erichkeane

Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr

Tags: #llvm, #clang, #sanitizers

Differential Revision: https://reviews.llvm.org/D70539
2019-11-27 15:39:55 +03:00
Eric Christopher fd39b1bb20 Revert "Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.""
This reapplies: 8ff85ed905

Original commit message:

As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.

This change doesn't include any change to move from selection dag to fast isel
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet,
this is just the initial start done so that people could see it and we could start
tweaking after.

Test updates: Outside of the newpm tests most of the updates are coming from either
optimization passes not run anymore (and without a compelling argument at the moment)
that were largely used for canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410

This reverts commit c9ddb02659.
2019-11-26 20:28:52 -08:00
Fangrui Song 3bb24bf257 Fix tests on Windows after D49466
It is tricky to use replace_path_prefix correctly on Windows which uses
backslashes as native path separators. Switch back to the old approach
(startswith is not ideal) to appease build bots for now.
2019-11-26 16:15:39 -08:00
Dan McGregor 6c92cdff72 Initial implementation of -fmacro-prefix-map and -ffile-prefix-map
GCC 8 implements -fmacro-prefix-map. Like -fdebug-prefix-map, it replaces a string prefix for the __FILE__ macro.
-ffile-prefix-map is the union of -fdebug-prefix-map and -fmacro-prefix-map
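
A hedged sketch; the paths are illustrative:

```
/* Built as:  clang -fmacro-prefix-map=/home/build/src=. -c /home/build/src/where.c
 * __FILE__ then expands with the remapped prefix (e.g. "./where.c");
 * -ffile-prefix-map would additionally remap debug-info paths. */
const char *where(void) {
  return __FILE__;
}
```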

Reviewed By: rnk, Lekensteyn, maskray

Differential Revision: https://reviews.llvm.org/D49466
2019-11-26 15:17:49 -08:00
Tim Northover 78ad22e0cc Recommit ARM-NEON: make type modifiers orthogonal and allow multiple modifiers.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.

When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.

The original version broke vcreate_* because it became a macro and didn't
apply the normal integer promotion rules before bitcasting to a vector.
This adds a temporary.
2019-11-26 09:21:47 +00:00
Muhammad Omair Javaid c9ddb02659 Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there."
This reverts commit 8ff85ed905.

This commit introduced 9 new failures on lldb buildbot host at http://lab.llvm.org:8014/builders/lldb-aarch64-ubuntu

Following tests were failing:
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq1/TestAmbiguousTailCallSeq1.py
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq2/TestAmbiguousTailCallSeq2.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_call_site/TestDisambiguateCallSite.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_paths_to_common_sink/TestDisambiguatePathsToCommonSink.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_tail_call_seq/TestDisambiguateTailCallSeq.py
    lldb-api :: functionalities/tail_call_frames/inlining_and_tail_calls/TestInliningAndTailCalls.py
    lldb-api :: functionalities/tail_call_frames/sbapi_support/TestTailCallFrameSBAPI.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_message/TestArtificialFrameStepOutMessage.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_or_return/TestSteppingOutWithArtificialFrames.py
    lldb-api :: functionalities/tail_call_frames/unambiguous_sequence/TestUnambiguousTailCalls.py

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-26 09:32:13 +05:00
Eric Christopher 8ff85ed905 As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.
This change doesn't include any change to move from selection dag to fast isel
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet,
this is just the initial start done so that people could see it and we could start
tweaking after.

Test updates: Outside of the newpm tests most of the updates are coming from either
optimization passes not run anymore (and without a compelling argument at the moment)
that were largely used for canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-25 17:16:46 -08:00
Peter Collingbourne 90b8bc003c IRGen: Call SetLLVMFunctionAttributes{,ForDefinition} on __cfi_check_fail.
This has the main effect of causing target-cpu and target-features to be set
on __cfi_check_fail, causing the function to become ABI-compatible with other
functions in the case where these attributes affect ABI (e.g. reserve-x18).

Technically we only need to call SetLLVMFunctionAttributes to get the target-*
attributes set, but since we're creating a definition we probably ought to
call the ForDefinition function as well.

Fixes PR44094.

Differential Revision: https://reviews.llvm.org/D70692
2019-11-25 15:16:43 -08:00
Hans Wennborg 21f26470e9 Revert 3f91705ca5 "ARM-NEON: make type modifiers orthogonal and allow multiple modifiers."
This broke the vcreate_u64 intrinsic. Example:

  $ cat /tmp/a.cc
  #include <arm_neon.h>

  void g() {
    auto v = vcreate_u64(0);
  }
  $ bin/clang -c /tmp/a.cc --target=arm-linux-androideabi16 -march=armv7-a
  /tmp/a.cc:4:12: error: C-style cast from scalar 'int' to vector 'uint64x1_t' (vector of 1 'uint64_t' value) of different size
    auto v = vcreate_u64(0);
             ^~~~~~~~~~~~~~
  /work/llvm.monorepo/build.release/lib/clang/10.0.0/include/arm_neon.h:4144:11: note: expanded from macro 'vcreate_u64'
    __ret = (uint64x1_t)(__p0); \
            ^~~~~~~~~~~~~~~~~~

Reverting until this can be investigated.

> The modifier system used to mutate types on NEON intrinsic definitions had a
> separate letter for all kinds of transformations that might be needed, and we
> were quite quickly running out of letters to use. This patch converts to a much
> smaller set of orthogonal modifiers that can be applied together to achieve the
> desired effect.
>
> When merging with downstream it is likely to cause a conflict with any local
> modifications to the .td files. There is a new script in
> utils/convert_arm_neon.py that was used to convert all .td definitions and I
> would suggest running it on the last downstream version of those files before
> this commit rather than resolving conflicts manually.
2019-11-25 16:27:53 +01:00
David Blaikie e956952ede DebugInfo: Flag Dwarf Version metadata for merging during LTO
When the Dwarf Version metadata was initially added (r184276) there was
no support for Module::Max - though the comment suggested that was the
desired behavior. The original behavior was Module::Warn which would
warn and then pick whichever version came first - which is pretty
arbitrary/luck-based if the consumer has some need for one version or
the other.

Now that the functionality's been added (r303590) this change updates
the implementation to match the desired goal.

The general logic here is - if you compile /some/ of your program with a
more recent DWARF version, you must have a consumer that can handle it,
so might as well use it for /everything/.

The only place where this might fall down is if you have a need to use
an old tool (supporting only the older DWARF version) for some subset of
your program. In which case now it'll all be the higher version. That
seems pretty narrow (& the inverse could happen too - you specifically
/need/ the higher DWARF version for some extra expressivity, etc, in
some part of the program)
2019-11-22 17:16:35 -08:00
Tim Northover 5cf58768cb Atomics: support min/max orthogonally
We seem to have been gradually growing support for atomic min/max operations
(exposing longstanding IR atomicrmw instructions). But until now there have
been gaps in the expected intrinsics. This adds support for the C11-style
intrinsics (i.e. taking _Atomic, rather than individually blessed by C11
standard), and the variants that return the new value instead of the original
one.

That way, people won't be misled by trying one form and it not working, and the
front-end is more friendly to people using _Atomic types, as we recommend.
2019-11-21 10:37:56 +00:00
Tim Northover 3f91705ca5 ARM-NEON: make type modifiers orthogonal and allow multiple modifiers.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.

When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
2019-11-20 13:20:02 +00:00
Tim Northover b80e483c42 Update tests after change to llvm-cxxfilt's underscore stripping behaviour. 2019-11-20 13:10:55 +00:00
Djordje Todorovic ce1f95a6e0 Reland "[clang] Remove the DIFlagArgumentNotModified debug info flag"
It turns out that the ExprMutationAnalyzer can be very slow when the AST
gets huge in some cases. The idea is to move this analysis to the LLVM
back-end level (more precisely, in the LiveDebugValues pass). The new
approach will remove the performance regression, simplify the
implementation and give us front-end independent implementation.

Differential Revision: https://reviews.llvm.org/D68206
2019-11-20 10:08:07 +01:00
Vedant Kumar 568db780bb [CGDebugInfo] Emit subprograms for decls when AT_tail_call is understood (reland with fixes)
Currently, clang emits subprograms for declared functions when the
target debugger or DWARF standard is known to support entry values
(DW_OP_entry_value & the GNU equivalent).

Treat DW_AT_tail_call the same way to allow debuggers to follow cross-TU
tail calls.

Pre-patch debug session with a cross-TU tail call:

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

Post-patch (note that the tail-calling frame, "helper", is visible):

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f80 main`helper [opt] [artificial]
    frame #2: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

This was reverted in 5b9a072c because it attached declaration
subprograms to inlinable builtin calls, which interacted badly with the
MergeICmps pass. The fix is to not attach declarations to builtins.

rdar://46577651

Differential Revision: https://reviews.llvm.org/D69743
2019-11-19 12:49:27 -08:00
Matt Arsenault e531750c6c clang: Add -fconvergent-functions flag
The CUDA builtin library is apparently compiled in C++ mode, so the
assumption of convergent needs to be made in a typically non-SPMD
language. The functions in the library should still be assumed
convergent. Currently they are not, which is potentially incorrect but
happens to work after the library is linked.
2019-11-19 23:20:15 +05:30
Simon Tatham 254b4f2500 [ARM,MVE] Add intrinsics for scalar shifts.
This fills in the small family of MVE intrinsics that have nothing to
do with vectors: they implement bit-shift operations on 32- or 64-bit
values held in one or two general-purpose registers. Most of these
shift operations saturate if shifting left, and round to nearest if
shifting right, although LSLL and ASRL behave like ordinary shifts.

When these instructions take a variable shift count in a register,
they pay attention to its sign, so that (for example) LSLL or UQRSHLL
will shift left if given a positive number but right if given a
negative one. That makes even LSLL and ASRL different enough from
standard LLVM IR shift semantics that I couldn't see any better
alternative than to simply model the whole family as a set of
MVE-specific IR intrinsics.

(The //immediate// forms of LSLL and ASRL, on the other hand, do
behave exactly like a standard IR shift of a 64-bit value. In fact,
those forms don't have ACLE intrinsics defined at all, because you can
just write an ordinary C shift operation if you want one of those.)

The 64-bit shifts have to be instruction-selected in C++, because they
deliver two output values. But the 32-bit ones are simple enough that
I could write a DAG isel pattern directly into each Instruction
record.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70319
2019-11-19 14:47:29 +00:00
Eric Christopher 30e7ee3c4b Temporarily Revert "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
and a follow-up NFC rearrangement as it's causing a crash on valid. Testcase is on the original review thread.

This reverts commits af57dbf12e and e6584b2b7b
2019-11-18 10:46:48 -08:00
Simon Tatham 4a4dd85e5a [ARM,MVE] Add intrinsics for vector comparisons.
This adds the `vcmp` family of ACLE MVE intrinsics: vector/vector,
vector/scalar, and the predicated forms of both. All are represented
using standard existing IR: vector/scalar comparisons are represented
by making a vector out of the scalar first, and predicated forms are
represented by taking the bitwise AND of the input predicate and the
output of the comparison. Existing LLVM-side tests demonstrate that
ISel will pattern-match all of that back down to single MVE VCMPs.

The idiom of handling a vector/scalar operation by generating IR to
expand the scalar into a second vector is going to be needed for a lot
of MVE intrinsics, so to make that easy, I've provided a helper
function that automatically works out the element count.

The comparison intrinsics are the first ones that have to //return// a
predicate, in the user-facing `mve_pred16_t` format. This means we
have to use the `arm_mve_pred_v2i` low-level intrinsic to convert it
back from the logical `<n x i1>` form used in IR. I've done that
explicitly in the code gen specification for the builtins, because it
happens much more rarely in the ACLE API than passing a Predicate as
input, so it didn't seem worth automating in MveEmitter.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70297
2019-11-18 10:39:30 +00:00
Momchil Velikov aa6d48fa70 Implement target(branch-protection) attribute for AArch64
This patch implements `__attribute__((target("branch-protection=...")))`
in a manner, compatible with the analogous GCC feature:

https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes
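
A hedged sketch, AArch64 only; "standard" enables both return-address signing and BTI, matching the GCC attribute linked above:

```
/* Only this function is compiled with the requested branch protection; the
 * rest of the TU keeps whatever -mbranch-protection selects globally. */
__attribute__((target("branch-protection=standard")))
void sensitive_entry(void) {
  /* ... */
}
```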

Differential Revision: https://reviews.llvm.org/D68711
2019-11-15 15:40:46 +00:00
Djordje Todorovic 41d6ad6efd Revert "[clang] Remove the DIFlagArgumentNotModified debug info flag"
This reverts commit rG1643734741d2 due to LLDB test failure.
2019-11-15 12:16:44 +01:00
Djordje Todorovic 1643734741 [clang] Remove the DIFlagArgumentNotModified debug info flag
It turns out that the ExprMutationAnalyzer can be very slow when the AST
gets huge in some cases. The idea is to move this analysis to the LLVM
back-end level (more precisely, in the LiveDebugValues pass). The new
approach will remove the performance regression, simplify the
implementation and give us front-end independent implementation.

Differential Revision: https://reviews.llvm.org/D68206
2019-11-15 11:10:19 +01:00
Simon Tatham 9e37892773 [ARM,MVE] Add intrinsics for vector get/set lane.
This adds the `vgetq_lane` and `vsetq_lane` families, to copy between
a scalar and a specified lane of a vector.
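
A hedged sketch (arm_mve.h, MVE target); the lane indices are illustrative:

```
#include <arm_mve.h>

/* Copy lane 2 of src into lane 0 of dst via a scalar. */
int32x4_t move_lane(int32x4_t src, int32x4_t dst) {
  int32_t s = vgetq_lane_s32(src, 2);
  return vsetq_lane_s32(s, dst, 0);
}
```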

One of the new `vgetq_lane` intrinsics returns a `float16_t`, which
causes a compile error if `%clang_cc1` doesn't get the option
`-fallow-half-arguments-and-returns`. The driver passes that option to
cc1 already, but I've had to edit all the explicit cc1 command lines
in the existing MVE intrinsics tests.

A couple of fixes are included for the code I wrote up front in
MveEmitter to support lane-index immediates (and which nothing has
tested until now): the type was wrong (`uint32_t` instead of `int`)
and the range was off by one.

I've also added a method of bypassing the default promotion to `i32`
that is done by the MveEmitter code generation: it's sensible to
promote short scalars like `i16` to `i32` if they're going to be
passed to custom IR intrinsics representing a machine instruction
operating on GPRs, but not if they're going to be passed to standard
IR operations like `insertelement` which expect the exact type.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D70188
2019-11-15 09:53:58 +00:00
Simon Tatham 902e84556a [ARM,MVE] Add intrinsics for 'administrative' vector operations.
This batch of intrinsics includes lots of things that move vector data
around or change its type without really affecting its value very
much. It includes the `vreinterpretq` family (cast one vector type to
another); `vuninitializedq` (create a vector of a given type with
don't-care contents); and `vcreateq` (make a 128-bit vector out of two
`uint64_t` halves).
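
A brief usage sketch of the three families (illustrative only, assuming
<arm_mve.h>):

```
#include <arm_mve.h>

/* Reinterpret: the same 128 bits viewed as a different element type. */
uint32x4_t as_u32(int8x16_t v) { return vreinterpretq_u32_s8(v); }

/* Create: build a 128-bit vector from two uint64_t halves. */
uint8x16_t from_halves(uint64_t lo, uint64_t hi) {
  return vcreateq_u8(lo, hi);
}

/* Uninitialized: a vector of the given type with don't-care contents. */
int16x8_t scratch(void) { return vuninitializedq_s16(); }
```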

These are all implemented using completely standard IR that's already
tested in existing LLVM unit tests, so I've just written a clang test
to check the IR is correct, and left it at that.

I've also added some richer infrastructure to the MveEmitter Tablegen
backend, to make it specify the exact integer type of integer
arguments passed to IR construction functions, and wrap those
arguments in a `static_cast` in the autogenerated C++. That was
necessary to prevent an overloading ambiguity when passing the integer
literal `0` to `IRBuilder::CreateInsertElement`, because otherwise, it
could mean either a null pointer `llvm::Value *` or a zero `uint64_t`.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D70133
2019-11-15 09:53:43 +00:00
Craig Topper 188d92b947 [X86] Don't treat mxcsr as a register name when parsing MS inline assembly.
No instruction takes mxcsr as an operand, so we should always
treat it as an identifier name.
2019-11-13 15:26:18 -08:00
Yonghong Song 1583158042 [BPF] fix clang test failure for bpf-attr-preserve-access-index-4.c
Depending on the cmake configuration, clang may generate different
IR names for slot variables. Let us use a regex instead of hard-coding
the name. I used the same approach for the other
bpf-attr-preserve-access-index tests, but somehow missed this one.
2019-11-13 09:40:57 -08:00
Yonghong Song 4e2ce228ae [BPF] Add preserve_access_index attribute for record definition
This is a resubmission for the previous reverted commit
9434360401 with the same subject. This commit fixed the
segfault issue and addressed additional review comments.

This patch introduced a new bpf specific attribute which can
be added to struct or union definition. For example,
  struct s { ... } __attribute__((preserve_access_index));
  union u { ... } __attribute__((preserve_access_index));
The goal is to simplify user code for cases
where preserving the access index applies to a certain struct/union,
so the user does not need to use clang's __builtin_preserve_access_index
for every member access.

The attribute has no effect if -g is not specified.

When the attribute is specified and -g is specified, any member
access defined by that structure or union, including array subscript
access and inner records, will be preserved through
  __builtin_preserve_{array,struct,union}_access_index()
IR intrinsics, which will enable relocation generation
in bpf backend.

The following is an example to illustrate the usage:
  -bash-4.4$ cat t.c
  #define __reloc__ __attribute__((preserve_access_index))
  struct s1 {
    int c;
  } __reloc__;

  struct s2 {
    union {
      struct s1 b[3];
    };
  } __reloc__;

  struct s3 {
    struct s2 a;
  } __reloc__;

  int test(struct s3 *arg) {
    return arg->a.b[2].c;
  }
  -bash-4.4$ clang -target bpf -g -S -O2 t.c

A relocation with access string "0:0:0:0:2:0" will be generated
representing access offset of arg->a.b[2].c.

A forward declaration with the attribute is also handled properly, such
that the attribute is copied and populated in the real record definition.

Differential Revision: https://reviews.llvm.org/D69759
2019-11-13 08:23:44 -08:00
Simon Tatham a12f588ebb [ARM,MVE] Add intrinsics for contiguous load/stores.
This patch adds the ACLE intrinsics for all the MVE load and store
instructions not already handled by D69791. These ones don't need new
IR intrinsics, because they can be implemented in terms of standard
LLVM IR constructions.

Some of the load and store instructions access less than 128 bits of
memory, sign/zero extending each value to a wider vector lane on load
or truncating it on store. These are represented in IR by a load of a
shorter vector followed by a zext/sext, and conversely, a trunc
followed by a short store. Existing ISel patterns already recognize
those combinations and turn them into the right MVE instructions.
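
A short sketch of the widening/narrowing forms (illustrative; names per
the ACLE MVE scheme, assuming <arm_mve.h>):

```
#include <arm_mve.h>

/* Load four int8_t values and sign-extend each one to a 32-bit lane. */
int32x4_t widen_load(const int8_t *p) { return vldrbq_s32(p); }

/* Truncate each 32-bit lane to 8 bits and store the four bytes. */
void narrow_store(int8_t *p, int32x4_t v) { vstrbq_s32(p, v); }

/* Predicated variant: lanes disabled by m are loaded as zero. */
int32x4_t widen_load_z(const int8_t *p, mve_pred16_t m) {
  return vldrbq_z_s32(p, m);
}
```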

The predicated forms of all these instructions are represented in the
same way, except that the ordinary load/store operation is replaced
with the existing intrinsics @llvm.masked.{load,store}. These are
currently only code-generated as predicated MVE load/store
instructions if you give LLVM the `-enable-arm-maskedldst` option; so
I've done that in the LLVM codegen test. When we make that the
default, that option can be removed.

In the Tablegen backend, I've had to add a handful of extra support
features:

* We need to be able to make clang::Address objects out of a
  pointer and an alignment (previously we only needed these when the
  user passed us an existing one).

* We can now specify vector types that aren't 128 bits wide (for use
  in those intermediate values in IR); the parametrized type system
  can make one starting from two existing vector types (using the lane
  count of one and the element type of the other).

* I've added support for code generation of pointer casts, and for
  specifying LLVM types as operands to IRBuilder operations (for zext
  and sext, though I think they'll come in useful again).

* Now not all IR construction operations need to be specified as
  Builder.CreateFoo; some don't involve a Builder at all, and one
  passes it as a parameter to a tiny static helper function in
  CGBuiltin.cpp.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Subscribers: kristof.beyls, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70088
2019-11-13 12:47:00 +00:00
Tim Northover 44e5879f0f AArch64: add arm64_32 support to Clang. 2019-11-12 12:45:18 +00:00
Yonghong Song 9434360401 Revert "[BPF] Add preserve_access_index attribute for record definition"
This reverts commit 4a5aa1a7bf.

There are some other test failures. Investigate them first.
2019-11-09 08:32:44 -08:00
Yonghong Song 4a5aa1a7bf [BPF] Add preserve_access_index attribute for record definition
This patch introduced a new bpf specific attribute which can
be added to struct or union definition. For example,
  struct s { ... } __attribute__((preserve_access_index));
  union u { ... } __attribute__((preserve_access_index));
The goal is to simplify user code for cases
where preserving the access index applies to a certain struct/union,
so the user does not need to use clang's __builtin_preserve_access_index
for every member access.

The attribute has no effect if -g is not specified.

When the attribute is specified and -g is specified, any member
access defined by that structure or union, including array subscript
access and inner records, will be preserved through
  __builtin_preserve_{array,struct,union}_access_index()
IR intrinsics, which will enable relocation generation
in bpf backend.

The following is an example to illustrate the usage:
  -bash-4.4$ cat t.c
  #define __reloc__ __attribute__((preserve_access_index))
  struct s1 {
    int c;
  } __reloc__;

  struct s2 {
    union {
      struct s1 b[3];
    };
  } __reloc__;

  struct s3 {
    struct s2 a;
  } __reloc__;

  int test(struct s3 *arg) {
    return arg->a.b[2].c;
  }
  -bash-4.4$ clang -target bpf -g -S -O2 t.c

A relocation with access string "0:0:0:0:2:0" will be generated
representing access offset of arg->a.b[2].c.

A forward declaration with the attribute is also handled properly, such
that the attribute is copied and populated in the real record definition.

Differential Revision: https://reviews.llvm.org/D69759
2019-11-09 08:17:12 -08:00
Jan Korous 590f279c45 [clang] Add VFS support for sanitizers' blacklists
Differential Revision: https://reviews.llvm.org/D69648
2019-11-08 10:58:50 -08:00
Nemanja Ivanovic e0407f5496 [PowerPC][Altivec] Fix offsets for vec_xl and vec_xst
As we currently have it implemented in altivec.h, the offsets for these two
intrinsics are element offsets. The documentation in the ABI (as well as the
implementation in both XL and GCC) states that these should be byte offsets.
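
A small sketch of the intended (byte-offset) semantics, assuming
<altivec.h> on a target where vec_xl/vec_xst are available:

```
#include <altivec.h>

/* The first argument is a byte offset, not an element index: this
   loads the four ints starting 16 bytes past p. */
vector int load_next_quadword(int *p) { return vec_xl(16, p); }

/* Likewise for the store: v is written 16 bytes past p. */
void store_next_quadword(vector int v, int *p) { vec_xst(v, 16, p); }
```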

Differential revision: https://reviews.llvm.org/D63636
2019-11-07 20:58:11 -06:00
Nemanja Ivanovic 070e4027b0 [PowerPC][Altivec] Emit correct builtin for single precision vec_all_ne
We currently emit a double precision comparison instruction for this, whereas we
need to emit the single precision version.
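
For reference, a minimal sketch of the affected case (illustrative
only):

```
#include <altivec.h>

/* With vector float operands, this must compile to a single-precision
   compare; a double-precision compare was being emitted instead. */
int all_lanes_differ(vector float a, vector float b) {
  return vec_all_ne(a, b);
}
```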

Differential revision: https://reviews.llvm.org/D64024
2019-11-07 20:40:32 -06:00
Melanie Blower af57dbf12e Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior=
Add options to control floating point behavior: trapping and
exception behavior, rounding, and control of optimizations that affect
floating point calculations. More details in UsersManual.rst.

Reviewers: rjmccall

Differential Revision: https://reviews.llvm.org/D62731
2019-11-07 07:22:45 -08:00
Tim Northover 10e0d64337 CodeGen: set correct result for atomic compound expressions
Atomic compound expressions try to use atomicrmw if possible, but this
path doesn't set the Result variable, leaving it to crash in later code
if anything ever tries to use the result of the expression. This fixes
that issue by recalculating the new value based on the old one
atomically loaded.
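
A minimal sketch of the kind of expression affected (not the original
reproducer):

```
#include <stdatomic.h>

_Atomic int counter;

int bump(void) {
  /* The compound assignment maps to an atomicrmw when possible; its
     value (the updated counter) is what previously went unset. */
  return counter += 1;
}
```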
2019-11-07 13:36:44 +00:00
Hans Wennborg 5b9a072c39 Revert a5c8ec4 "[CGDebugInfo] Emit subprograms for decls when AT_tail_call is understood"
This caused Chromium builds to fail with "inlinable function call in a function
with debug info must have a !dbg location" errors. See
https://bugs.chromium.org/p/chromium/issues/detail?id=1022296#c1 for a
reproducer.

> Currently, clang emits subprograms for declared functions when the
> target debugger or DWARF standard is known to support entry values
> (DW_OP_entry_value & the GNU equivalent).
>
> Treat DW_AT_tail_call the same way to allow debuggers to follow cross-TU
> tail calls.
>
> Pre-patch debug session with a cross-TU tail call:
>
> ```
>   * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
>     frame #1: 0x0000000100000f99 main`main at a.c:8:10 [opt]
> ```
>
> Post-patch (note that the tail-calling frame, "helper", is visible):
>
> ```
>   * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
>     frame #1: 0x0000000100000f80 main`helper [opt] [artificial]
>     frame #2: 0x0000000100000f99 main`main at a.c:8:10 [opt]
> ```
>
> rdar://46577651
>
> Differential Revision: https://reviews.llvm.org/D69743
2019-11-07 10:30:07 +01:00
Tim Northover 59f063b89c NeonEmitter: remove special 'a' type modifier.
'a' used to implement a splat in C++ code in NeonEmitter.cpp, but this
can be done directly from .td expansions now (and most ops already did so).
So removing it simplifies the overall code.

https://reviews.llvm.org/D69716
2019-11-06 10:23:36 +00:00
Simon Tatham 6c3fee47a6 [ARM,MVE] Add intrinsics for gather/scatter load/stores.
This patch adds two new families of intrinsics, both of which are
memory accesses taking a vector of locations to load from / store to.

The vldrq_gather_base / vstrq_scatter_base intrinsics take a vector of
base addresses, and an immediate offset to be added consistently to
each one. vldrq_gather_offset / vstrq_scatter_offset take a scalar
base address, and a vector of offsets to add to it. The
'shifted_offset' variants also multiply each offset by the element
size, so that the vector is effectively a vector of array indices.
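
A brief usage sketch of the two families (illustrative; names per the
ACLE MVE scheme, assuming <arm_mve.h>):

```
#include <arm_mve.h>

/* Gather-base: each lane of addrs is an absolute address, and the
   immediate 4 is added to every lane before loading. */
uint32x4_t gather_base(uint32x4_t addrs) {
  return vldrwq_gather_base_u32(addrs, 4);
}

/* Gather-offset, shifted variant: one scalar base plus a vector of
   offsets, each offset scaled by the element size. */
int32x4_t gather_offset(const int32_t *base, uint32x4_t offsets) {
  return vldrwq_gather_shifted_offset_s32(base, offsets);
}

/* Scatter store: the mirror image of the gather above. */
void scatter_offset(int32_t *base, uint32x4_t offsets, int32x4_t v) {
  vstrwq_scatter_shifted_offset_s32(base, offsets, v);
}
```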

At the IR level, these operations are represented by a single set of
four IR intrinsics: {gather,scatter} × {base,offset}. The other
details (signed/unsigned, shift, and memory element size as opposed to
vector element size) are all specified by IR intrinsic polymorphism
and immediate operands, because that made the selection job easier
than making a huge family of similarly named intrinsics.

I considered using the standard IR representations such as
llvm.masked.gather, but they're not a good fit. In order to use
llvm.masked.gather to represent a gather_offset load with element size
smaller than a pointer, you'd have to expand the <8 x i16> vector of
offsets into an <8 x i16*> vector of pointers, which would be split up
during legalization, so you'd spend most of your time undoing the mess
it had made. Also, ISel support for llvm.masked.gather would be easy
enough in a trivial way (you can expand it into a gather-base load
with a zero immediate offset), but instruction-selecting lots of
fiddly idioms back into all the _other_ MVE load instructions would be
much more work. So I think dedicated IR intrinsics are the more
sensible approach, at least for the moment.

On the clang tablegen side, I've added two new features to the
Tablegen source accepted by MveEmitter: a 'CopyKind' type node for
defining a type that varies with the parameter type (it lets you ask
for an unsigned integer type of the same width as the parameter), and
an 'unsignedflag' value node for passing an immediate IR operand which
is 0 for a signed integer type or 1 for an unsigned one. That lets me
write each kind of intrinsic just once and get all its subtypes and
immediate arguments generated automatically.

Also I've tweaked the handling of pointer-typed values in the code
generation part of MveEmitter: they're generated as Address rather
than Value (i.e. including an alignment) so that they can be given to
the ordinary IR load and store operations, but I'd omitted the code to
convert them back to Value when they're going to be used as an
argument to an IR intrinsic.

On the MC side, I've enhanced MVEVectorVTInfo so that it can tell you
not only the full assembly-language suffix for a given vector type
(like 's32' or 'u16') but also the numeric-only one used by store
instructions (just '32' or '16').

Reviewers: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69791
2019-11-06 09:01:42 +00:00
Bill Wendling ee10d934dd Fix typo so that '-O0' is correctly specified 2019-11-05 13:15:55 -08:00
Jonas Paulsson 9376714314 [Clang FE] Recognize -mnop-mcount CL option (SystemZ only).
Recognize -mnop-mcount from the command line and add a function attribute
"mnop-mcount"="true" when passed.

When this option is used, a nop is added instead of a call to fentry. This
is used when building the Linux Kernel.

If this option is passed for any target other than SystemZ, an error is
generated.

Review: Ulrich Weigand
https://reviews.llvm.org/D67763
2019-11-05 12:12:36 +01:00
Vedant Kumar a5c8ec4baa [CGDebugInfo] Emit subprograms for decls when AT_tail_call is understood
Currently, clang emits subprograms for declared functions when the
target debugger or DWARF standard is known to support entry values
(DW_OP_entry_value & the GNU equivalent).

Treat DW_AT_tail_call the same way to allow debuggers to follow cross-TU
tail calls.

Pre-patch debug session with a cross-TU tail call:

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

Post-patch (note that the tail-calling frame, "helper", is visible):

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f80 main`helper [opt] [artificial]
    frame #2: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

rdar://46577651

Differential Revision: https://reviews.llvm.org/D69743
2019-11-04 15:14:24 -08:00
Amy Huang ab76cfdd20 Recommit "[CodeView] Add option to disable inline line tables."
This reverts commit 004ed2b0d1.
Original commit hash 6d03890384

Summary:
This adds a clang option to disable inline line tables. When it is used,
the inliner uses the call site as the location of the inlined function instead of
marking it as an inline location with the function location.

https://reviews.llvm.org/D67723
2019-11-04 09:15:26 -08:00
Pengfei Wang af3a7de20c [X86] add mayRaiseFPException flag and FPCW registers for X87 instructions
Summary:
This patch adds the flag "mayRaiseFPException", and the FPCW and FPSW registers, for X87
instructions which could raise a floating-point exception.

Reviewers: pengfei, RKSimon, andrew.w.kaylor, uweigand, kpn, spatel, cameron.mcinally, craig.topper

Reviewed By: craig.topper

Subscribers: thakis, hiraditya, llvm-commits

Patch by LiuChen.

Differential Revision: https://reviews.llvm.org/D68854
2019-11-01 21:12:43 -07:00
David Blaikie 098d901bd1 DebugInfo: Let -gdwarf use the toolchain default DWARF version, instead of hardcoded/aliased to -gdwarf-4 2019-11-01 15:17:51 -07:00
Thomas Lively 935c84c3c2 [WebAssembly] Add experimental SIMD dot product instruction
Summary:
This instruction is not merged to the spec proposal, but we need it to
be implemented in the toolchain to experiment with it. It is available
only on an opt-in basis through a clang builtin.
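
A sketch of the opt-in use (the builtin name and vector typedefs here
are my assumptions, not taken from the patch):

```
/* GCC-style vector types standing in for the wasm SIMD types. */
typedef short i16x8 __attribute__((vector_size(16)));
typedef int i32x4 __attribute__((vector_size(16)));

/* Assumed builtin name for the experimental dot-product instruction. */
i32x4 dot(i16x8 a, i16x8 b) {
  return __builtin_wasm_dot_s_i32x4_i16x8(a, b);
}
```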

Defined in https://github.com/WebAssembly/simd/pull/127.

Depends on D69696.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69697
2019-11-01 10:45:48 -07:00
Thomas Lively a07019a275 [WebAssembly] SIMD integer min and max instructions
Summary:
Introduces clang builtins and LLVM intrinsics representing integer
min/max instructions. These instructions have not been merged to the
SIMD spec proposal yet, so they are currently opt-in only via builtins
and not produced by general pattern matching. If these instructions
are accepted into the spec proposal the builtins and intrinsics will
be replaced with normal pattern matching.

Defined in https://github.com/WebAssembly/simd/pull/27.

Reviewers: aheejin

Reviewed By: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69696
2019-10-31 20:22:11 -07:00
Matt Arsenault c6da9ec0e9 clang: Fix assert on void pointer arithmetic with address_space
The code attempted to always use the default address space's void pointer
type instead of preserving the source address space.
2019-10-31 20:07:23 -07:00
Yaxun (Sam) Liu bb1616ba47 [CodeGen] Fix invalid llvm.linker.options about pragma detect_mismatch
When a target does not support pragma detect_mismatch, llvm.linker.options
metadata with an empty entry is created, which causes a diagnostic in the backend,
since the backend expects a name/value pair in each llvm.linker.options entry.

This patch fixes that.

Differential Revision: https://reviews.llvm.org/D69678
2019-10-31 22:27:35 -04:00
Amy Huang 004ed2b0d1 Revert "[CodeView] Add option to disable inline line tables."
because it breaks compiler-rt tests.

This reverts commit 6d03890384.
2019-10-30 17:31:12 -07:00
Amy Huang 6d03890384 [CodeView] Add option to disable inline line tables.
Summary:
This adds a clang option to disable inline line tables. When it is used,
the inliner uses the call site as the location of the inlined function instead of
marking it as an inline location with the function location.

See https://bugs.llvm.org/show_bug.cgi?id=42344

Reviewers: rnk

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D67723
2019-10-30 16:52:39 -07:00
Evandro Menezes 215da6606c [clang][llvm] Obsolete Exynos M1 and M2 2019-10-30 15:02:59 -05:00
Guillaume Chatelet 5607ff12fa Fix missing memcpy builtin on ppc64be
See D68028
2019-10-29 16:35:32 +01:00
Guillaume Chatelet 98f3151a7d [clang] Add no_builtin attribute
Summary:
This is a follow up on https://reviews.llvm.org/D61634
This patch is simpler and only adds the no_builtin attribute.
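
A minimal usage sketch (assumed syntax for the new attribute):

```
/* Keep the optimizer from turning this loop back into a memcpy call,
   e.g. when building a memcpy implementation itself. */
__attribute__((no_builtin("memcpy")))
void *my_memcpy(void *dst, const void *src, unsigned long n) {
  char *d = dst;
  const char *s = src;
  for (unsigned long i = 0; i < n; ++i)
    d[i] = s[i];
  return dst;
}
```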

Reviewers: tejohnson, courbet, theraven, t.p.northover, jdoerfert

Subscribers: mgrang, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68028

This is a re-submit after it got reverted in https://reviews.llvm.org/rGbd8791610948 since the breakage doesn't seem to come from this patch.
2019-10-29 15:50:29 +01:00
Dávid Bolvanský dfece0a108 [Builtins] Teach Clang about memccpy
Summary: Hopefully, -fno-builtin-memccpy will work now. Required for https://reviews.llvm.org/D67986.

Reviewers: aaron.ballman, rsmith

Reviewed By: aaron.ballman

Subscribers: RKSimon, kristina, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68377
2019-10-28 23:27:08 +01:00
Vlad Tsyrklevich ad531fff81 Revert "[clang] Add no_builtin attribute"
This reverts commit bd87916109. It was
causing ASan/MSan failures on the sanitizer buildbots.
2019-10-28 15:21:59 -07:00