Commit Graph

5642 Commits

Author SHA1 Message Date
Alex Richardson 8c387cbea7 Add builtins for aligning and checking alignment of pointers and integers
This change introduces three new builtins (which work on both pointers
and integers) that can be used instead of common bitwise arithmetic:
__builtin_align_up(x, alignment), __builtin_align_down(x, alignment) and
__builtin_is_aligned(x, alignment).
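
For illustration, a minimal sketch of how these might be used (the helper
names are invented for the example):

    /* Round a pointer up to the next 16-byte boundary; the result keeps
       the char* type (and qualifiers) of the argument. */
    char *align16(char *p) { return __builtin_align_up(p, 16); }

    /* Check 64-byte alignment without a cast through uintptr_t. */
    _Bool is_cacheline_aligned(const char *p) {
      return __builtin_is_aligned(p, 64);
    }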

I originally added these builtins to the CHERI fork of LLVM a few years ago
to handle the slightly different C semantics that we use for CHERI [1].
Until recently these builtins (or sequences of other builtins) were
required to generate correct code. I have since made changes to the default
C semantics so that they are no longer strictly necessary (but using them
does generate slightly more efficient code). However, based on our experience
using them in various projects over the past few years, I believe that adding
these builtins to clang would be useful.

These builtins have the following benefits over bit-manipulation and casts
via uintptr_t:

- The named builtins clearly convey the semantics of the operation. While
  checking alignment using __builtin_is_aligned(x, 16) versus
  ((x & 15) == 0) is probably not a huge win in readability, I personally find
  __builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including const qualifiers). When
  using casts via uintptr_t, it is easy to cast to the wrong type or strip
  qualifiers such as const.
- If the alignment argument is a constant value, clang can check that it is
  a power-of-two and within the range of the type. Since the semantics of
  these builtins is well defined compared to arbitrary bit-manipulation,
  it is possible to add a UBSAN checker that the run-time value is a valid
  power-of-two. I intend to add this as a follow-up to this change.
- These builtins avoid int-to-pointer casts both in C and LLVM IR.
  In the future (i.e. once most optimizations handle it), we could use the new
  llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally
  be generated.
- They can be used to round up/down to the next aligned value for both
  integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g.
  roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation
  with a builtin call, we get improved diagnostics for many call-sites while
  only having to change a few lines (see the sketch after this list).
- Finally, the builtins also emit assume_aligned metadata when used on pointers.
  This can improve code generation compared to the uintptr_t casts.
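
As a sketch of the macro replacement mentioned in the list above (assuming
the usual FreeBSD-style macro spellings):

    /* hypothetical project header: a one-line change per macro */
    #define roundup2(x, m)   __builtin_align_up((x), (m))
    #define rounddown2(x, m) __builtin_align_down((x), (m))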

[1] In our CHERI compiler we have a compilation mode where all pointers are
implemented as capabilities (essentially unforgeable 128-bit fat pointers).
In our original model, casts from uintptr_t (which is a 128-bit capability)
to an integer value returned the "offset" of the capability (i.e. the
difference between the virtual address and the base of the allocation).
This causes problems for cases such as checking the alignment: for example, the
expression `if (((uintptr_t)ptr & 63) == 0)` is generally used to check if the
pointer is aligned to a multiple of 64 bytes. The problem with offsets is that
any pointer to the beginning of an allocation will have an offset of zero, so
this check always succeeds in that case (even if the address is not correctly
aligned). The same issues also exist when aligning up or down. Using the
alignment builtins ensures that the address is used instead of the offset. While
I have since changed the default C semantics to return the address instead of
the offset when casting, this offset compilation mode can still be used by
passing a command-line flag.

Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
Reviewed By: aaron.ballman, lebedev.ri
Differential Revision: https://reviews.llvm.org/D71499
2020-01-09 21:48:29 +00:00
serge-sans-paille b35f5d4914 [clang] Enforce triple in mempcpy test
Fixes http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/2597
2020-01-09 21:09:15 +01:00
Eric Astor 1c545f6dbc [ms] [X86] Use "P" modifier on all branch-target operands in inline X86 assembly.
Summary:
Extend D71677 to apply to all branch-target operands, rather than special-casing call instructions.

Also add a regression test for llvm.org/PR44272, since this finishes fixing it.

Reviewers: thakis, rnk

Reviewed By: thakis

Subscribers: merge_guards_bot, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72417
2020-01-09 14:55:03 -05:00
Simon Tatham 06d07ec4a3 [Clang] Handle target-specific builtins returning aggregates.
Summary:
A few of the ARM MVE builtins directly return a structure type. This
causes an assertion failure at code-gen time if you try to assign the
result of the builtin to a variable, because the `RValue` created in
`EmitBuiltinExpr` from the `llvm::Value` produced by codegen is always
made by `RValue::get()`, which creates a non-aggregate `RValue` that
will fail an assertion when `AggExprEmitter::withReturnValueSlot` calls
`Src.getAggregatePointer()`. A similar failure occurs if you try to use
the struct return value directly to extract one field, e.g.
`vld2q(address).val[0]`.

The existing code-gen tests for those MVE builtins pass the returned
structure type directly to the C `return` statement, which apparently
managed to avoid that particular code path, so we didn't notice the
crash.

Now `EmitBuiltinExpr` checks the evaluation kind of the builtin's return
value, and does the necessary handling for aggregate returns. I've added
two extra test cases, both of which crashed before this change.
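
A sketch of the two newly tested patterns (ACLE types; `addr` is assumed to
point at readable int32_t data):

    #include <arm_mve.h>

    int32x4_t f(const int32_t *addr) {
      int32x4x2_t pair = vld2q(addr);   /* assigning the struct return value */
      (void)pair;
      return vld2q(addr).val[0];        /* extracting one field directly */
    }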

Reviewers: dmgreen, rjmccall

Reviewed By: rjmccall

Subscribers: kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72271
2020-01-09 17:28:37 +00:00
serge-sans-paille cee4a1c957 Improve support of GNU mempcpy
- Lower to the memcpy intrinsic
- Raise warnings when size/bounds are known
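
For reference, a sketch of the GNU semantics (mempcpy is a GNU extension,
typically declared by <string.h> under _GNU_SOURCE):

    #define _GNU_SOURCE
    #include <string.h>

    void concat2(char *dst, const char *a, size_t na,
                 const char *b, size_t nb) {
      /* mempcpy copies like memcpy but returns dst + na, so the second
         copy can continue where the first one ended. */
      char *end = mempcpy(dst, a, na);
      mempcpy(end, b, nb);
    }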

Differential Revision: https://reviews.llvm.org/D71374
2020-01-09 17:31:00 +01:00
Momchil Velikov 173b711e83 [ARM][MVE] MVE-I should not be disabled by -mfpu=none
Architecturally, it's allowed to have MVE-I without an FPU; thus
-mfpu=none should not disable MVE-I or moves to/from FP registers.

This patch removes `+/-fpregs` from the features unconditionally added
to the target feature list depending on the FPU, and moves the logic to
the Clang driver, where the negative form (`-fpregs`) is conditionally
added to the target features list for the cases of `-mfloat-abi=soft`,
or `-mfpu=none` without either `+mve` or `+mve.fp`. Only the negative
form is added by the driver; the positive one is derived from other
features in the backend.

Differential Revision: https://reviews.llvm.org/D71843
2020-01-09 14:03:25 +00:00
Simon Tatham dac7b23cc3 [ARM,MVE] Intrinsics for variable shift instructions.
This batch of intrinsics fills in all the shift instructions that take
a variable shift distance in a register, instead of an immediate. Some
of these instructions take a single shift distance in a scalar
register and apply it to all lanes; others take a vector of per-lane
distances.

These instructions are all basically one family, varying in whether
they saturate out-of-range values, and whether they round when bits
are shifted off the bottom. I've implemented them at the IR level by a
much smaller family of IR intrinsics, which take flag parameters to
indicate saturating and/or rounding (along with the usual one to
specify signed/unsigned integers).

An oddity is that all of them are //left// shift instructions – but if
you pass a negative shift count, they'll shift right. So the vector
shift distances are always vectors of //signed// integers, regardless
of whether you're considering the other input vector to be signed or
unsigned. Also, even the simplest `vshlq` instruction in this
family (neither saturating nor rounding) has to be implemented as an
IR intrinsic, because the ordinary LLVM IR `shl` operation would
consider an out-of-range shift count to be undefined behavior.
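
A sketch of the behaviour at the ACLE level (using the polymorphic vshlq
from <arm_mve.h>):

    #include <arm_mve.h>

    /* Each lane of v is shifted by the corresponding signed lane of
       counts; a negative count shifts that lane right instead. */
    int32x4_t shift_lanes(int32x4_t v, int32x4_t counts) {
      return vshlq(v, counts);
    }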

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72329
2020-01-08 14:42:24 +00:00
Simon Tatham 3100480925 [ARM,MVE] Intrinsics for partial-overwrite imm shifts.
This batch of intrinsics covers two sets of immediate shift
instructions, which have in common that they only overwrite part of
their output register and so they need an extra input giving its
previous value.

The VSLI and VSRI instructions shift each lane of the input vector
left or right just as if they were normal immediate VSHL/VSHR, but
then they only overwrite the output bits that correspond to actual
shifted bits of the input. So VSLI will leave the low n bits of each
output lane unchanged, and VSRI the same with the top n bits.

The V[Q][R]SHR[U]N family are all narrowing shifts: they take an input
vector of 2n-bit integers, shift each lane right by a constant, and
then narrow the shifted result to only n bits. So they only
overwrite half of the n-bit lanes in the output register, and the B/T
suffix indicates whether it's the bottom or top half of each 2n-bit
lane.
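
For instance, a sketch of the shift-and-insert behaviour at the ACLE level
(polymorphic names from <arm_mve.h> assumed):

    #include <arm_mve.h>

    /* Shift each lane of b right by 4 and insert the result into a,
       leaving the top 4 bits of each lane of a unchanged. */
    uint16x8_t insert_shifted(uint16x8_t a, uint16x8_t b) {
      return vsriq(a, b, 4);
    }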

I've implemented the whole of the latter family using a single IR
intrinsic `vshrn`, which takes a lot of i32 parameters indicating
which instruction it expands to (by specifying signedness of the input
and output types, whether it saturates and/or rounds, etc).

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72328
2020-01-08 14:42:24 +00:00
Aaron Ballman 55a51e1c79 Disallow an empty string literal in an asm label
An empty string literal in an asm label does not make a whole lot of sense. GCC
does not diagnose such a construct, but it also generates code that cannot be
assembled by gas should two symbols have an empty asm label within the same TU.
This does not affect an asm statement with an empty string literal, which is
still a useful construct.
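
A sketch of what is now rejected versus still accepted (exact diagnostic
wording not shown):

    int x asm("");          /* now an error: empty asm label */
    void f(void) {
      __asm__("");          /* still accepted: empty asm statement */
    }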
2020-01-08 08:38:02 -05:00
Simon Tatham 34817e04fe [ARM,MVE] Fix many signedness errors in MVE intrinsics.
Summary:
Running an end-to-end test last week I noticed that a lot of the ACLE
intrinsics that operate differently on vectors of signed and unsigned
integers were ending up generating the signed version of the
instruction unconditionally. This is because the IR intrinsics had no
way to distinguish signed from unsigned: the LLVM type system just
calls them both `v8i16` (or whatever), so you need either separate
intrinsics for signed and unsigned, or a flag parameter that tells
ISel which one to choose.

This patch fixes all the problems of that kind that I've noticed, by
adding an i32 flag parameter to many of the IR intrinsics which is set
to 1 for unsigned (matching the existing practice in cases where we
got it right), and conditioning all the isel patterns on that flag. So
the fundamental change is in `IntrinsicsARM.td`, changing the
low-level IR intrinsics API; there are knock-on changes in
`arm_mve.td` (adjusting code gen for the ACLE intrinsics to use the
modified API) and in `ARMInstrMVE.td` (adjusting isel to expect the
new unsigned flags). The rest of this patch is boringly updating tests.

Reviewers: dmgreen, miyuki, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72270
2020-01-06 16:33:16 +00:00
Simon Tatham 4978296cd8 [ARM,MVE] Support -ve offsets in gather-load intrinsics.
Summary:
The ACLE intrinsics with `gather_base` or `scatter_base` in the name
are wrappers on the MVE load/store instructions that take a vector of
base addresses and an immediate offset. The immediate offset can be up
to 127 times the alignment unit, and it can be positive or negative.

At the MC layer, we got that right. But in the Sema error checking for
the wrapping intrinsics, the offset was erroneously constrained to be
positive.
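
A sketch of a call that Sema previously rejected but now accepts (the
intrinsic name is per the MVE ACLE; -0x1fc is -127 times the 4-byte
alignment unit):

    #include <arm_mve.h>

    int32x4_t load_below(uint32x4_t base) {
      return vldrwq_gather_base_s32(base, -0x1fc);
    }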

To fix this I've adjusted the `imm_mem7bit` class in the Tablegen that
defines the intrinsics. But that causes integer literals like
`0xfffffffffffffe04` to appear in the autogenerated calls to
`SemaBuiltinConstantArgRange`, which provokes a compiler warning
because that's out of the non-overflowing range of an `int64_t`. So
I've also tweaked `MveEmitter` to emit that as `-0x1fc` instead.

Updated the tests of the Sema checks themselves, and also adjusted a
random sample of the CodeGen tests to actually use negative offsets
and prove they get all the way through code generation without causing
a crash.

Reviewers: dmgreen, miyuki, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: kristof.beyls, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72268
2020-01-06 16:33:07 +00:00
Kevin P. Neal 89d6c288ba [SystemZ] Use FNeg in s390x clang builtins
The s390x builtins are still using FSub instead of FNeg. Correct that.
2020-01-02 12:14:43 -05:00
Craig Topper 5e5a1d2790 [CodeGen] Emit conj/conjf/conjl libcalls as fneg instructions if possible.
We already recognize the __builtin versions of these, might as well
recognize the libcall version.

Differential Revision: https://reviews.llvm.org/D72028
2019-12-31 10:41:00 -08:00
Craig Topper 70f8dd4cf6 [CodeGen] Use IRBuilder::CreateFNeg for __builtin_conj
This replaces the fsub -0.0 idiom with an fneg instruction. We didn't seem to have a test that showed the current codegen: just some tests for constant folding, and a test that only checked the declare lines for libcalls. The latter just checked that we did not have a declare for @conj when using __builtin_conj.
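
For context, a minimal source-level example of the affected builtin (only
the emitted IR changes, fneg rather than fsub -0.0):

    #include <complex.h>

    double complex conjugate(double complex z) {
      return __builtin_conj(z);   /* imaginary part now negated via fneg */
    }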

Differential Revision: https://reviews.llvm.org/D72012
2019-12-30 13:25:23 -08:00
Craig Topper 8b23b2bbd9 [CodeGen] Use CreateFNeg in buildFMulAdd
We have an fneg instruction now and should use it instead of the fsub -0.0 idiom. Looks like we had no test that showed that we handled the negation cases here so I've added new tests.

Differential Revision: https://reviews.llvm.org/D72010
2019-12-30 13:24:11 -08:00
Eric Astor 4a7aa252a3 [X86][AsmParser] re-introduce 'offset' operator
Summary:
Amend the MS offset operator implementation to more closely match its MS counterpart:

    1. InlineAsm: evaluate non-local source entities to their (address) location
    2. Provide a means by which one may acquire the address of an assembly label via MS syntax, rather than yielding a memory reference (i.e. "offset asm_label" and "$asm_label" should be synonymous)
    3. Address PR32530

Based on http://llvm.org/D37461

Fix a broken test where the breakage appears unrelated.

- Set up appropriate memory-input rewrites for variable references.

- Intel-dialect assembly printing now correctly handles addresses by adding "offset".

- Pass offsets as immediate operands (using "r" constraint for offsets of locals).
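
A sketch of the MS-syntax behaviour being matched (requires -fms-extensions
on x86; names invented):

    static int gVar;
    void f(void) {
      __asm mov eax, offset gVar   /* loads the address of gVar, not its value */
    }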

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D71436
2019-12-30 14:35:26 -05:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Craig Topper d35bcbbb5d [Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize.
The validateOutputSize and validateInputSize need to check whether
AVX or AVX512 are enabled. But this can be affected by the
target attribute so we need to factor that in.

This patch moves some of the code from CodeGen to create an
appropriate feature map that we can pass to the function.

Differential Revision: https://reviews.llvm.org/D68627
2019-12-23 11:23:30 -08:00
Yonghong Song e3d8ee35e4 reland "[DebugInfo] Support to emit debugInfo for extern variables"
Commit d77ae1552f
("[DebugInfo] Support to emit debugInfo for extern variables")
added debug info for extern variables for the BPF target.
The commit was reverted by 891e25b02d
because the committed tests used %clang instead of %clang_cc1, causing
test failures in certain scenarios, as reported by Reid Kleckner.

This patch fixed the tests by using %clang_cc1.

Differential Revision: https://reviews.llvm.org/D71818
2019-12-22 18:28:50 -08:00
Reid Kleckner 891e25b02d Revert "[DebugInfo] Support to emit debugInfo for extern variables"
This reverts commit d77ae1552f.

The tests committed along with this change do not pass, and should be
changed to use %clang_cc1.
2019-12-22 12:54:06 -08:00
Eric Astor dc5b614fa9 [ms] [X86] Use "P" modifier on operands to call instructions in inline X86 assembly.
Summary:
This is documented as the appropriate template modifier for call operands.
Fixes PR44272, and adds a regression test.

Also adds support for operand modifiers in Intel-style inline assembly.
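
A sketch of the kind of MS inline assembly affected (x86, with
-fms-extensions; clang now prints the call target using the "P" template
modifier so no immediate-operand punctuation is emitted):

    void target(void);
    void f(void) {
      __asm call target
    }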

Reviewers: rnk

Reviewed By: rnk

Subscribers: merge_guards_bot, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71677
2019-12-22 09:16:34 -05:00
Fangrui Song 0792ef7256 [Driver] Verify -mrecord-mcount in Driver, instead of CodeGen after D71627
GCC's x86 and s390 ports support -mrecord-mcount. Other ports reject the
option.

  aarch64-linux-gnu-gcc: error: unrecognized command line option ‘-mrecord-mcount’

Allowing this option can cause failures when building the Linux kernel for
aarch64, powerpc64, etc., whose build systems will think the feature is
available if the clang command returns 0.
2019-12-21 22:47:24 -08:00
Jonas Paulsson 2520bef865 [Clang FE, SystemZ] Recognize -mrecord-mcount CL option.
Recognize -mrecord-mcount from the command line and add a function attribute
"mrecord-mcount" when passed.

Only valid on SystemZ (when used with -mfentry).

Review: Ulrich Weigand
https://reviews.llvm.org/D71627
2019-12-19 08:51:55 -08:00
Thomas Lively 71eb8023d8 [WebAssembly] Add avgr_u intrinsics and require nuw in patterns
Summary:
The vector pattern `(a + b + 1) / 2` was previously selected to an
avgr_u instruction regardless of nuw flags, but this is incorrect in
the case where either addition may have an unsigned wrap. This CL
changes the existing pattern to require both adds to have nuw flags
and adds builtin functions and intrinsics for the avgr_u instructions
because the corrected pattern is not representable in C.
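
A sketch of one of the new builtins via generic vectors (the builtin name
and element-type spelling are assumed to follow the existing
__builtin_wasm_* convention):

    typedef unsigned char u8x16 __attribute__((vector_size(16)));

    u8x16 rounding_average(u8x16 a, u8x16 b) {
      /* (a + b + 1) / 2 per lane, without an intermediate wrap */
      return __builtin_wasm_avgr_u_i8x16(a, b);
    }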

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71648
2019-12-18 15:31:38 -08:00
Jonas Paulsson ca520592c0 [Clang FE, SystemZ] Don't add "true" value for the "mnop-mcount" attribute.
Let the "mnop-mcount" function attribute simply be present or non-present.
Update SystemZ backend as well to use hasFnAttribute() instead.

Review: Ulrich Weigand
https://reviews.llvm.org/D71669
2019-12-18 11:04:13 -08:00
Amy Huang a85f5efd95 Add support for the MS qualifiers __ptr32, __ptr64, __sptr, __uptr.
Summary:
This adds parsing of the qualifiers __ptr32, __ptr64, __sptr, and __uptr and
lowers them to the corresponding address space pointer for 32-bit and 64-bit pointers.
(32/64-bit pointers added in https://reviews.llvm.org/D69639)

A large part of this patch is making these pointers ignore the address space
when doing things like overloading and casting.

https://bugs.llvm.org/show_bug.cgi?id=42359
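
A sketch of the qualifiers now parsed (MS mode; qualifier order is one
plausible spelling):

    int * __ptr32 p32;          /* 32-bit pointer regardless of target width */
    int * __ptr64 p64;          /* 64-bit pointer */
    int * __sptr __ptr32 ps;    /* 32-bit, sign-extended when widened */
    int * __uptr __ptr32 pu;    /* 32-bit, zero-extended when widened */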

Reviewers: rnk, rsmith

Subscribers: jholewinski, jvesely, nhaehnle, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71039
2019-12-18 10:41:12 -08:00
Ulrich Weigand 1e89188d35 [FPEnv] Remove unnecessary rounding mode argument for constrained intrinsics
The following intrinsics currently carry a rounding mode metadata argument:

    llvm.experimental.constrained.minnum
    llvm.experimental.constrained.maxnum
    llvm.experimental.constrained.ceil
    llvm.experimental.constrained.floor
    llvm.experimental.constrained.round
    llvm.experimental.constrained.trunc

This is not useful since the semantics of those intrinsics do not in any way
depend on the rounding mode. In similar cases, other constrained intrinsics
do not have the rounding mode argument. Remove it here as well.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D71218
2019-12-17 21:10:36 +01:00
Jonas Paulsson 599d1cc07a [Clang FE, SystemZ] Recognize -mpacked-stack CL option
Recognize -mpacked-stack from the command line and add a function attribute
"mpacked-stack" when passed. This is needed for building the Linux kernel.

If this option is passed for any other target than SystemZ, an error is
generated.

Review: Ulrich Weigand
https://reviews.llvm.org/D71441
2019-12-17 11:26:17 -08:00
Erich Keane 1ed832e424 Reland [NFC-I] Remove hack for fp-classification builtins
The FP-classification builtins (__builtin_isfinite, etc.) use variadic
packs in the definition file to mean an overload set.  Because of that,
floats were converted to doubles, which is incorrect. There was a patch
to remove the cast after the fact.

This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.

A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.

Note: The initial patch for this missed that certain systems need to
still convert half to float, since they don't support that type.
2019-12-17 06:58:29 -08:00
Sam Clegg 0a1e349a79 [WebAssembly] Setting export_name implies llvm.used
This change updates the clang front end to add symbols to llvm.used
when they have an explicit export_name attribute.

Differential Revision: https://reviews.llvm.org/D71493
2019-12-16 14:48:38 -08:00
Thomas Lively 3a93756dfb [WebAssembly] Replace SIMD int min/max builtins with patterns
Summary:
The instructions were originally implemented via builtins and
intrinsics so users would have to explicitly opt-in to using
them. This was useful while we were validating whether these instructions
should have been merged into the spec proposal. Now that they have
been, we can use normal codegen patterns, so the intrinsics and
builtins are no longer useful.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71500
2019-12-16 11:48:49 -08:00
Erich Keane f02d6dd6c7 Fix floating point builtins to not promote float->double
As brought up in D71467, a group of floating point builtins
automatically promoted floats to doubles because they used the variadic
builtin tag to support an overload set. The result is that the
parameters were treated as a variadic pack, which always promotes
float->double.

This resulted in the wrong answer being given in cases with certain
values of NaN.
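
A sketch of a case that previously gave the wrong answer (the value is
chosen so the classification differs between float and double):

    int classify(void) {
      float tiny = 1e-44f;             /* subnormal as float, normal as double */
      return __builtin_isnormal(tiny); /* must evaluate at float width: 0 */
    }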
2019-12-16 07:20:29 -08:00
Mark Murray a2cd4600ec [ARM][MVE][Intrinsics] All vqdmulhq/vqrdmulhq tests should be for signed numbers.
Fix broken tests. I can't yet explain how they worked locally pre-commit.
2019-12-13 17:29:59 +00:00
Mikhail Maltsev 99581fd4c8 [ARM][MVE] Add vector reduction intrinsics with two vector operands
Summary:
This patch adds intrinsics for the following MVE instructions:
* VABAV
* VMLADAV, VMLSDAV
* VMLALDAV, VMLSLDAV
* VRMLALDAVH, VRMLSLDAVH

Each of the above 4 groups has a corresponding new LLVM IR intrinsic,
since the instructions cannot be easily represented using
general-purpose IR operations.

Reviewers: simon_tatham, ostannard, dmgreen, MarkMurrayARM

Reviewed By: MarkMurrayARM

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71062
2019-12-13 13:17:29 +00:00
Simon Tatham 25305a9311 [ARM][MVE] Add intrinsics for more immediate shifts.
Summary:
This fills in the remaining shift operations that take a single vector
input and an immediate shift count: the `vqshl`, `vqshlu`, `vrshr` and
`vshll[bt]` families.

`vshll[bt]` (which shifts each input lane left into a double-width
output lane) is the most interesting one. There are separate MC
instruction ids for shifting by exactly the input lane width and
shifting by less than that, because the instruction encoding is so
completely different for the lane-width special case. So I had to
write two sets of patterns to match based on the immediate shift
count, which involved adding a ComplexPattern matcher to avoid the
general-case pattern accidentally matching the special case too. For
that family I've made sure to add an llc codegen test for both
versions of each instruction.

I'm experimenting with a new strategy for parametrising the isel
patterns for all these instructions: adding extra fields to the
relevant `Instruction` subclass itself, which are ignored by the
Tablegen backends that generate the MC data, but can be retrieved from
each instance of that instruction subclass when it's passed as a
template parameter to the multiclass that generates its isel patterns.
A nice effect of that is that I can fill in those informational fields
using `let` blocks, rather than having to type them out once per
instruction at `defm` time.

(As a result, quite a lot of existing instruction `def`s are
reindented by this patch, so it's clearer to read with whitespace
changes ignored.)

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71458
2019-12-13 13:07:39 +00:00
Mark Murray 228c74076d [ARM][MVE][Intrinsics] Add *_x() variants of my *_m() intrinsics.
Summary:
Multiclass is now used more effectively, and this helped find some existing
bugs in the predicated VMULL* intrinsics, which are now fixed.

The refactored VMULL[TB]Q_(INT|POLY)_M() intrinsics were discovered
to have an argument ("inactive") with incorrect type, and this required
a fix that is included in this whole patch. The argument "inactive"
should have been the same width (per vector element) as the return
type of the intrinsic, but was not in the case where the return type
was double the element width of the input types.
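
For context, a sketch of the difference between the two predicated forms
(polymorphic names from <arm_mve.h>):

    #include <arm_mve.h>

    int32x4_t demo(int32x4_t inactive, int32x4_t a, int32x4_t b,
                   mve_pred16_t p) {
      int32x4_t m = vmulq_m(inactive, a, b, p); /* false lanes from inactive */
      int32x4_t x = vmulq_x(a, b, p);           /* false lanes undefined */
      return vaddq(m, x);
    }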

To assist in testing the multiclassing, and to thwart further gremlins,
the unit tests are improved in scope.

The *.ll tests are all generated by a small bit of throw-away scripting
from the corresponding *.c tests, and as such the diffs are large and
nasty. Look at the file rather than the diff.

Reviewers: dmgreen, miyuki, ostannard, simon_tatham

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71421
2019-12-13 11:51:23 +00:00
Guillaume Chatelet 0508c994f0 [clang] Turn -fno-builtin flag into an IR Attribute
Summary:
This is a follow-up to https://reviews.llvm.org/D61634#1742154 to turn the clang driver -fno-builtin flag into an IR attribute.
I also investigated pushing the attribute earlier on (in Sema) but it looks like this patch is simple and will cover all function calls.

Reviewers: aaron.ballman, courbet

Subscribers: cfe-commits, tejohnson

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71193
2019-12-12 17:21:12 +01:00
Momchil Velikov 600d123c6f [ARM][CMSE] Add CMSE header and builtins
This is patch C2 as mentioned in RFC
http://lists.llvm.org/pipermail/cfe-dev/2019-March/061834.html

This adds CMSE builtin functions, and introduces the arm_cmse.h header, which
has useful macros, functions, and data types for end-users of CMSE.
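
A sketch of the sort of end-user code the header enables (function and flag
names per the ACLE CMSE specification, assumed here):

    #include <arm_cmse.h>
    #include <stddef.h>

    /* Verify that a buffer handed over from non-secure state really is
       non-secure-accessible before touching it. */
    void *check_buffer(void *p, size_t n) {
      return cmse_check_address_range(p, n, CMSE_NONSECURE);
    }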

Patch by Javed Absar.

Differential Revision: https://reviews.llvm.org/D70817
2019-12-12 15:01:14 +00:00
Sam Clegg 881d877846 [WebAssembly] Add new `export_name` clang attribute for controlling wasm export names
This is equivalent to the existing `import_name` and `import_module`
attributes which control the import names in the final wasm binary
produced by lld.

This maps the existing

This attribute currently requires a string rather than using the
symbol name for a couple of reasons:

1. Avoid confusion with static and dynamic linking which is
   based on symbol name.  Exporting a function from a wasm module using
   this directive is orthogonal to both static and dynamic linking.
2. Avoids name mangling.
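
A sketch of usage (attribute spelling per this patch):

    /* Export this function from the wasm module under the name "run",
       independently of its C symbol name. */
    __attribute__((export_name("run")))
    int my_entry_point(void) { return 0; }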

Differential Revision: https://reviews.llvm.org/D70520
2019-12-11 11:54:57 -08:00
Nicolai Hähnle f21c081b78 CodeGen: Allow annotations on globals in non-zero address space
Summary:
Attribute annotations are recorded in a special global composite variable
that points to annotation strings and the annotated objects.

As a restriction of the LLVM IR type system, those pointers are all
pointers to address space 0, so let's insert an addrspacecast when the
annotated global is in a non-0 address space.

Since this addrspacecast is only reachable from the global annotations
object, this should allow us to represent annotations on all globals
regardless of which addrspacecasts are usually legal for the target.
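
A sketch of the kind of code this enables (the address space number is
arbitrary and target-dependent; the attribute placement is one plausible
spelling):

    /* An annotated global outside address space 0; the global annotations
       record now reaches it through an addrspacecast. */
    __attribute__((annotate("my_note")))
    int __attribute__((address_space(1))) g;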

Reviewers: rjmccall

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71208
2019-12-11 13:24:32 +01:00
Simon Tatham bd0f271c9e [ARM][MVE] Add intrinsics for immediate shifts. (reland)
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.

There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` halfway through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.
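
A sketch of the special case (polymorphic ACLE names assumed):

    #include <arm_mve.h>

    uint16x8_t all_zero(uint16x8_t v) {
      return vshrq(v, 16);   /* logical shift by the full lane width: zeroes */
    }
    int16x8_t sign_fill(int16x8_t v) {
      return vshrq(v, 16);   /* arithmetic: emitted as a shift by 15 */
    }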

In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: dmgreen, MarkMurrayARM

Subscribers: echristo, hokein, rdhindsa, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71065
2019-12-11 10:10:09 +00:00
Sourabh Singh Tomar fb4d8fe1a8 Recommit "[DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified."
Reviewers: dblaikie, aprantl, probinson

Tags: #debug-info #llvm

Differential Revision: https://reviews.llvm.org/D71185
2019-12-11 01:24:50 +05:30
Sourabh Singh Tomar d82b6ba21b Revert "[DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified."
This reverts commit 6ef01588f4.
Missing Differential revision.
2019-12-11 01:20:40 +05:30
Sourabh Singh Tomar 6ef01588f4 [DWARF5] Start emitting DW_AT_dwo_name when -gdwarf-5 is specified. 2019-12-11 01:18:02 +05:30
Yaxun (Sam) Liu 21b43885b8 Fix bug 44190 - wrong code with #pragma pack(1)
5b330e8d61 caused
a regression on s390:

https://bugs.llvm.org/show_bug.cgi?id=44190

We need to copy if either the argument is non-byval or the argument is underaligned.
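
A sketch of the kind of argument affected (the pragma underaligns the int
member):

    #pragma pack(1)
    struct S { char c; int i; };   /* 'i' is only 1-byte aligned here */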

Differential Revision: https://reviews.llvm.org/D71282
2019-12-10 13:56:34 -05:00
Kevin P. Neal 6515c524b0 [FPEnv] clang support for constrained FP builtins
Change the IRBuilder and clang so that constrained FP intrinsics will be
emitted for builtins when appropriate. Only non-target-specific builtins
are affected in this patch.
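
For example, a sketch (the exact IR depends on the floating-point options
in effect):

    /* Under -ffp-exception-behavior=strict this is now emitted as a call
       to llvm.experimental.constrained.sqrt rather than llvm.sqrt. */
    double root(double x) { return __builtin_sqrt(x); }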

Differential Revision: https://reviews.llvm.org/D70256
2019-12-10 13:09:12 -05:00
Mikhail Maltsev e6d3261c67 [ARM][MVE] Refactor complex vector intrinsics [NFCI]
Summary:
This patch refactors instruction selection of the complex vector
addition, multiplication and multiply-add intrinsics, so that it is
now based on TableGen patterns rather than C++ code.

It also changes the first parameter (halving vs non-halving) of the
arm_mve_vcaddq IR intrinsic to match the corresponding instruction
encoding, hence it requires some changes in the tests.

The patch addresses David's comment in https://reviews.llvm.org/D71190

Reviewers: dmgreen, ostannard, simon_tatham, MarkMurrayARM

Reviewed By: dmgreen

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71245
2019-12-10 16:21:52 +00:00
Yonghong Song d77ae1552f [DebugInfo] Support to emit debugInfo for extern variables
Extern variable usage in BPF is different from that in traditional
pure user space applications. Recent discussion on the linux bpf
mailing list covers two use cases where debug info types are
required to use extern variables:
  - extern types are required to have a suitable interface
    in libbpf (bpf loader) to provide kernel config parameters
    to bpf programs.
    https://lore.kernel.org/bpf/CAEf4BzYCNo5GeVGMhp3fhysQ=_axAf=23PtwaZs-yAyafmXC9g@mail.gmail.com/T/#t
  - extern types are required so the kernel bpf verifier can
    verify programs which use external functions more precisely.
    This means a later link with the actual external function
    will not need to be reverified.
    https://lore.kernel.org/bpf/87eez4odqp.fsf@toke.dk/T/#m8d5c3e87ffe7f2764e02d722cb0d8cbc136880ed

This patch added clang support to emit debuginfo for extern variables
with a TargetInfo hook to enable it. The debuginfo for the
extern variable is emitted only if that extern variable is
referenced in the current compilation unit.

Currently, only the BPF target enables generating debug info for
extern variables. The emission of such debuginfo is disabled for C++
at this moment, since BPF only supports a subset of the C language.
Emission with C++ can be enabled later if an appropriate use case
is identified.

-fstandalone-debug permits us to see more debuginfo at the cost
of bloated binary size. This patch did not add emission of extern
variable debug info with -fstandalone-debug. This can be
re-evaluated if there is a real need.

Differential Revision: https://reviews.llvm.org/D70696
2019-12-10 08:09:51 -08:00
Jim Lin cefac9dfaa Remove implicit conversion that promotes half to other larger precision types for fp classification builtins
Summary:
It shouldn't promote half to double or any larger precision type for fp classification builtins,
because fp classification builtins would give an incorrect result with a promoted argument.
For example, __builtin_isnormal with a subnormal half value should return false, but it does not,
because the subnormal half value is promoted to a normal double value.
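
A sketch of the failing case (using _Float16 where available; the value is
subnormal in half precision but normal in double):

    _Float16 tiny = 6e-8;
    int classify(void) { return __builtin_isnormal(tiny); } /* should be 0 */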

Reviewers: aaron.ballman

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71049
2019-12-10 13:24:21 +08:00
Eric Christopher 9c6b7f68b8 Revert "[ARM][MVE] Add intrinsics for immediate shifts."
and two follow-on commits: one warning fix and one functionality fix.

As it's breaking at least the lto bot:

http://lab.llvm.org:8011/builders/clang-with-lto-ubuntu/builds/15132/steps/test-stage1-compiler/logs/stdio

This reverts commits:

 8d70f3c933
 ff4dceef92
 d97b3e3e65
2019-12-09 16:47:38 -08:00
Mark Murray fc3417cb5a [ARM][MVE][Intrinsics] Add VQADDQ, VHADDQ, VRHADDQ, VQSUBQ, VHSUBQ, VQDMULHQ, VQRDMULHQ intrinsics.
Summary: Add VQADDQ, VHADDQ, VRHADDQ, VQSUBQ, VHSUBQ, VQDMULHQ, VQRDMULHQ intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen, miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71198
2019-12-09 17:41:47 +00:00
Mark Murray 2eb61fa5d6 [ARM][MVE][Intrinsics] Add VMULL[BT]Q_(INT|POLY) intrinsics.
Summary: Add VMULL[BT]Q_(INT|POLY) intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71066
2019-12-09 17:41:47 +00:00
Simon Tatham d97b3e3e65 [ARM][MVE] Add intrinsics for immediate shifts.
Summary:
This adds the family of `vshlq_n` and `vshrq_n` ACLE intrinsics, which
shift every lane of a vector left or right by a compile-time
immediate. They mostly work by expanding to the IR `shl`, `lshr` and
`ashr` operations, with their second operand being a vector splat of
the immediate.

There's a fiddly special case, though. ACLE specifies that the
immediate in `vshrq_n` can take values up to //and including// the bit
size of the vector lane. But LLVM IR thinks that shifting right by the
full size of the lane is UB, and feels free to replace the `lshr` with
an `undef` halfway through the optimization pipeline. Hence, to keep
this legal in source code, I have to detect it at codegen time.
Logical (unsigned) right shifts by the element size are handled by
simply emitting the zero vector; arithmetic ones are converted into a
shift of one bit less, which will always give the same output.

In order to do that check, I also had to enhance the tablegen
MveEmitter so that it can cope with converting a builtin function's
operand into a bare integer to pass to a code-generating subfunction.
Previously the only bare integers it knew how to handle were flags
generated from within `arm_mve.td`.

Reviewers: dmgreen, miyuki, MarkMurrayARM, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71065
2019-12-09 15:44:09 +00:00
Zahira Ammarguellat 32c802e0f5 Fix build bot failures due to the patch here:
https://reviews.llvm.org/D70691
Fixed the LIT test case. Added the REQUIRES directive.
2019-12-09 09:24:47 -05:00
Mikhail Maltsev 0d1490bf6a [ARM][MVE] Add complex vector intrinsics
Summary:
This patch adds intrinsics for the following MVE instructions:
* VCADD, VHCADD
* VCMUL
* VCMLA

Each of the above 3 groups has a corresponding new LLVM IR intrinsic.

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: MarkMurrayARM

Subscribers: merge_guards_bot, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71190
2019-12-09 12:05:59 +00:00
Zahira Ammarguellat 27f5d35137 Fix for build bot failure. For more details see:
https://reviews.llvm.org/D70691
Updated LIT test.
2019-12-09 00:50:30 -05:00
Reid Kleckner eff08f4097 Revert "[Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize."
This reverts commit e1578fd2b7.

It introduces a dependency on Attr.h which I am removing from
ASTContext.h.
2019-12-06 15:42:14 -08:00
Craig Topper e1578fd2b7 [Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize.
The validateOutputSize and validateInputSize need to check whether
AVX or AVX512 are enabled. But this can be affected by the
target attribute so we need to factor that in.

This patch copies some of the code from CodeGen to create an
appropriate feature map that we can pass to the function. Probably
need some refactoring here to share more code with Codegen. Is
there a good place to do that? Also need to support the cpu_specific
attribute as well.

Differential Revision: https://reviews.llvm.org/D68627
2019-12-06 15:30:59 -08:00
Zahira Ammarguellat a3b2552575 Fix for PR44000. Optimization record for bytecode input missing.
Review is here:  https://reviews.llvm.org/D70691
2019-12-06 07:48:42 -05:00
Melanie Blower 7f9b513847 Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
Patch was reverted because https://bugs.llvm.org/show_bug.cgi?id=44048
        The original patch is modified to set the strictfp IR attribute
        explicitly in CodeGen instead of as a side effect of IRBuilder.
        In the 2nd attempt to reapply there was a windows lit test fail, the
        tests were fixed to use wildcard matching.

        Differential Revision: https://reviews.llvm.org/D62731
2019-12-05 03:48:04 -08:00
Melanie Blower 5412913631 Revert " Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior=""
This reverts commit cdbed2dd85.
Build break on Windows (lit fail)
2019-12-04 12:21:23 -08:00
Melanie Blower cdbed2dd85 Reapply af57dbf12e "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
Patch was reverted because https://bugs.llvm.org/show_bug.cgi?id=44048
        The original patch is modified to set the strictfp IR attribute
        explicitly in CodeGen instead of as a side effect of IRBuilder

        Differential Revision: https://reviews.llvm.org/D62731
2019-12-04 11:32:33 -08:00
Mark Murray d3f62ceac0 [ARM][MVE][Intrinsics] Add VMULH/VRMULH intrinsics.
Summary: Add MVE VMULH/VRMULH intrinsics and unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70948
2019-12-04 14:27:12 +00:00
Kadir Cetinkaya 01a26fa74a [clang][CodeGen] Make use of cc1 instead of clang in the tests 2019-12-03 12:02:26 +01:00
Reid Kleckner 536cedaecb FileCheck IR output for blockaddress in new test
Minor improvement to a test added in 1ac700cdef
2019-12-02 15:05:50 -08:00
David Green 1d4587346f [AArch64] Attempt to fixup test line. NFC
The test is complaining on some of the builders. This attempts to
adjust the run line to be more like the others in the same folder, using
clang_cc1 as opposed to the driver.
2019-12-02 19:30:54 +00:00
Simon Tatham d173fb5d28 [ARM,MVE] Add intrinsics to deal with predicates.
Summary:
This commit adds the `vpselq` intrinsics which take an MVE predicate
word and select lanes from two vectors; the `vctp` intrinsics which
create a tail predicate word suitable for processing the first m
elements of a vector (e.g. in the last iteration of a loop); and
`vpnot`, which simply complements a predicate word and is just
syntactic sugar for the `~` operator.
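
A sketch of the intended tail-predication pattern (polymorphic names from
<arm_mve.h>):

    #include <arm_mve.h>

    /* Select the first n lanes from a and the remaining lanes from b. */
    int32x4_t take_first_n(int32x4_t a, int32x4_t b, uint32_t n) {
      mve_pred16_t p = vctp32q(n);
      return vpselq(a, b, p);
    }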

The `vctp` ACLE intrinsics are lowered to the IR intrinsics we've
already added (and which D70592 just reorganized). I've filled in the
missing isel rule for VCTP64, and added another set of rules to
generate the predicated forms.

I needed one small tweak in MveEmitter to allow the `unpromoted` type
modifier to apply to predicates as well as integers, so that `vpnot`
doesn't pointlessly convert its input integer to an `<n x i1>` before
complementing it.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70485
2019-12-02 16:20:30 +00:00
Victor Campos dcf11c5e86 [ARM][AArch64] Complex addition Neon intrinsics for Armv8.3-A
Summary:
Add support for the vcadd_* family of intrinsics. This set of intrinsics is
available in Armv8.3-A.

The fp16 versions require the FP16 extension, which has been available
(opt-in) since Armv8.2-A.
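
A sketch of one of the new intrinsics (names per the ACLE; requires an
Armv8.3-A target with the complex-number extension):

    #include <arm_neon.h>

    /* Complex addition with the second operand rotated by 90 degrees. */
    float32x4_t rot_add(float32x4_t a, float32x4_t b) {
      return vcaddq_rot90_f32(a, b);
    }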

Reviewers: t.p.northover

Reviewed By: t.p.northover

Subscribers: t.p.northover, kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70862
2019-12-02 14:38:39 +00:00
Mark Murray 510792a2e0 [ARM][MVE][Intrinsics] Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics.
Summary: Add VMINQ/VMAXQ/VMINNMQ/VMAXNMQ intrinsics and their predicated versions. Add unit tests.

Subscribers: kristof.beyls, hiraditya, dmgreen, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70829
2019-12-02 11:18:53 +00:00
Simon Atanasyan f4d32ae75b [mips] Check that features required by built-ins are enabled
Previously, Clang did not check that the features required by built-in
functions are enabled. That caused errors in the backend reported in PR44018.

This patch fixes this bug by checking that required features
are enabled.

This should fix PR44018.

Differential Revision: https://reviews.llvm.org/D70808
2019-11-29 00:23:00 +03:00
Johannes Altmanninger 1ac700cdef [CodeGen] Fix clang crash on aggregate initialization of array of labels
Summary: Fix PR43700

The ConstantEmitter in AggExprEmitter::EmitArrayInit was initialized
with the CodeGenFunction set to null, which caused the crash.
Also simplify another call, and make the CGF member a const pointer
since it is public but only assigned in the constructor.
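
A sketch of the kind of code that crashed (PR43700; uses the GNU
address-of-label extension):

    void f(void) {
      void *tbl[] = { &&out };   /* aggregate init of an array of labels */
      goto *tbl[0];
    out:;
    }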

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D70302
2019-11-28 00:59:25 +01:00
Roman Lebedev b98a0c7f6c [clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)(take 2)
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left,
two major ones are increment/decrement (this patch) and bit fields.

As it was discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
or `BinaryOperator` (for which we always have promotion/demotion in AST)
or parts of `UnaryOperator` (we have promotion/demotion but only for
certain operations), for inc/dec, clang omits promotion/demotion
altogether, under the as-if rule.

This is technically correct: https://rise4fun.com/Alive/zPgD
As it can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)

But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
   their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]
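
A sketch of the pattern that is now instrumented (the flag belongs to the
existing implicit-conversion sanitizer group):

    /* compile with -fsanitize=implicit-integer-truncation */
    unsigned char c = 255;
    void bump(void) {
      c++;   /* 255 + 1 is computed as int 256, then lossily
                truncated back to 0; this demotion is now caught */
    }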

Now, i'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As it was stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means, the sanitization-less codegen must not be altered.

It was also suggested to not change the sanitized codegen
to the one with demotion, but i quite strongly believe
that will not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
   in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
   (i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
   Which kinda defeats the point of
   using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
   we'd have two `add`s with basically the same arguments,
   one of which is used for check+error-less codepath and other one
   for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
   Which means we'll need a whole new handler.
   But then what about the `CompoundAssignOperator`,
   it would also be applicable for it.
   So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
   (due to partial register stalls), so even if we avoid promotion+demotion,
   the computations will //likely// be performed in `i32` anyways.

So i'm not really seeing much benefit of
not doing the straight-forward thing.

While looking into this, i have noticed a few more LLVM middle-end
missed canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].

Those are not specific to inc/dec, we also have them for
`CompoundAssignOperator`, and it can happen for normal arithmetics, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.

TLDR: front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is middle-end's job to make it optimal.

I'm really hoping reviewers agree with my personal assessment
of the path this patch should take.

This originally landed in 9872ea4ed1
but got immediately reverted in cbfa237892
because the assertion was faulty. That fault ended up being caused
by the enum: while there will be promotion, both types are unsigned,
with the same width. So we still don't need to sanitize non-signed cases.
So far. Maybe the assert will tell us this isn't so.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].
Refs. https://github.com/google/sanitizers/issues/940

Reviewers: rjmccall, erichkeane, rsmith, vsk

Reviewed By: erichkeane

Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr

Tags: #llvm, #clang, #sanitizers

Differential Revision: https://reviews.llvm.org/D70539
2019-11-27 21:52:41 +03:00
Mark Murray a048bf87fb [ARM][MVE][Intrinsics] Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests.
Summary: Add MVE VAND/VORR/VORN/VEOR/VBIC intrinsics. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70547
2019-11-27 16:52:05 +00:00
Mark Murray e8a8dbe9c4 [ARM][MVE][Intrinsics] Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests.
Summary: Add MVE VMUL intrinsics. Remove annoying "t1" from VMUL* instructions. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70546
2019-11-27 16:52:05 +00:00
Mark Murray f4bba07b87 [ARM][MVE][Intrinsics] Add MVE VABD intrinsics. Add unit tests.
Summary: Add MVE VABD intrinsics. Add unit tests.

Reviewers: simon_tatham, ostannard, dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70545
2019-11-27 16:52:04 +00:00
Roman Lebedev cbfa237892 Revert "[clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)"
The assertion that was added does not hold;
it breaks on test-suite/MultiSource/Applications/SPASS/analyze.c.
Will reduce the testcase and revisit.

This reverts commit 9872ea4ed1, 870f3542d3.
2019-11-27 17:05:21 +03:00
David Green 9f15fcc271 [ARM] Replace arm_neon_vqadds with sadd_sat
This replaces the A32 NEON vqadds, vqaddu, vqsubs and vqsubu intrinsics
with the target independent sadd_sat, uadd_sat, ssub_sat and usub_sat.
This helps generate vqadds from standard IR nodes, which might be
produced from the vectoriser. The old variants are removed in the
process.
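
Source code is unaffected; a sketch of an intrinsic now lowered through
the generic saturating-add node:

    #include <arm_neon.h>

    int8x8_t sat_add(int8x8_t a, int8x8_t b) {
      return vqadd_s8(a, b);   /* now selected via llvm.sadd.sat */
    }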

Differential Revision: https://reviews.llvm.org/D69350
2019-11-27 13:32:29 +00:00
Roman Lebedev 870f3542d3 [CodeGen][UBSan] Relax newly-added verbose sanitization tests for inc/dec
In particular, don't hardcode the signature of the handler:
it takes the source file path, so the length of the buffers will not match.
2019-11-27 16:05:34 +03:00
Roman Lebedev 9872ea4ed1 [clang][CodeGen] Implicit Conversion Sanitizer: handle increment/decrement (PR44054)
Summary:
Implicit Conversion Sanitizer is *almost* feature complete.
There aren't *that* many unsanitized things left,
two major ones are increment/decrement (this patch) and bit fields.

As it was discussed in
[[ https://bugs.llvm.org/show_bug.cgi?id=39519 | PR39519 ]],
unlike `CompoundAssignOperator` (which is promoted internally),
or `BinaryOperator` (for which we always have promotion/demotion in AST)
or parts of `UnaryOperator` (we have promotion/demotion but only for
certain operations), for inc/dec, clang omits promotion/demotion
altogether, under the as-if rule.

This is technically correct: https://rise4fun.com/Alive/zPgD
As it can be seen in `InstCombineCasts.cpp` `canEvaluateTruncated()`,
`add`/`sub`/`mul`/`and`/`or`/`xor` operators can all arbitrarily
be extended or truncated:
901cd3b3f6/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp (L1320-L1334)

But that has serious implications:
1. Since we no longer model implicit casts, do we pessimise
   their AST representation and everything that uses it?
2. There is no demotion, so lossy demotion sanitizer does not trigger :]

Now, i'm not going to argue about the first problem here,
but the second one **needs** to be addressed. As it was stated
in the report, this is done intentionally, so changing
this in all modes would be considered a penalization/regression.
Which means, the sanitization-less codegen must not be altered.

It was also suggested to not change the sanitized codegen
to the one with demotion, but i quite strongly believe
that will not be the wise choice here:
1. One will need to re-engineer the check that the inc/dec was lossy
   in terms of `@llvm.{u,s}{add,sub}.with.overflow` builtins
2. We will still need to compute the result we would lossily demote.
   (i.e. the result of wide `add`ition/`sub`traction)
3. I suspect it would need to be done right here, in sanitization.
   Which kinda defeats the point of
   using `@llvm.{u,s}{add,sub}.with.overflow` builtins:
   we'd have two `add`s with basically the same arguments,
   one of which is used for check+error-less codepath and other one
   for the error reporting. That seems worse than a single wide op+check.
4. OR, we would need to do that in the compiler-rt handler.
   Which means we'll need a whole new handler.
   But then what about the `CompoundAssignOperator`,
   it would also be applicable for it.
   So this also doesn't really seem like the right path to me.
5. At least X86 (but likely others) pessimizes all sub-`i32` operations
   (due to partial register stalls), so even if we avoid promotion+demotion,
   the computations will //likely// be performed in `i32` anyways.

So i'm not really seeing much benefit of
not doing the straight-forward thing.

While looking into this, i have noticed a few more LLVM middle-end
missed canonicalizations, and filed
[[ https://bugs.llvm.org/show_bug.cgi?id=44100 | PR44100 ]],
[[ https://bugs.llvm.org/show_bug.cgi?id=44102 | PR44102 ]].

Those are not specific to inc/dec, we also have them for
`CompoundAssignOperator`, and it can happen for normal arithmetics, too.
But if we take some other path in the patch, it will not be applicable
here, and we will have most likely played ourselves.

TLDR: front-end should emit canonical, easy-to-optimize yet
un-optimized code. It is middle-end's job to make it optimal.

I'm really hoping reviewers agree with my personal assessment
of the path this patch should take.

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=44054 | PR44054 ]].

Reviewers: rjmccall, erichkeane, rsmith, vsk

Reviewed By: erichkeane

Subscribers: mehdi_amini, dexonsmith, cfe-commits, #sanitizers, llvm-commits, aaron.ballman, t.p.northover, efriedma, regehr

Tags: #llvm, #clang, #sanitizers

Differential Revision: https://reviews.llvm.org/D70539
2019-11-27 15:39:55 +03:00
Eric Christopher fd39b1bb20 Revert "Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.""
This reapplies: 8ff85ed905

Original commit message:

As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.

This change doesn't include any change to move from selection dag to fast isel
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet;
this is just the initial start, done so that people could see it and we could
start tweaking after.

Test updates: Outside of the newpm tests, most of the updates come from
optimization passes that are no longer run (and for which there is no
compelling argument at the moment) and that were largely used for
canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410

This reverts commit c9ddb02659.
2019-11-26 20:28:52 -08:00
Fangrui Song 3bb24bf257 Fix tests on Windows after D49466
It is tricky to use replace_path_prefix correctly on Windows which uses
backslashes as native path separators. Switch back to the old approach
(startswith is not ideal) to appease build bots for now.
2019-11-26 16:15:39 -08:00
Dan McGregor 6c92cdff72 Initial implementation of -fmacro-prefix-map and -ffile-prefix-map
GCC 8 implements -fmacro-prefix-map. Like -fdebug-prefix-map, it replaces a
string prefix, in this case in the expansion of the __FILE__ macro.
-ffile-prefix-map is the union of -fdebug-prefix-map and -fmacro-prefix-map.
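
A usage sketch (the paths here are illustrative): remapping the checkout
prefix makes __FILE__ reproducible across build locations.

```
/* Hypothetical build:
 *   clang -fmacro-prefix-map=/home/user/project=. \
 *         -c /home/user/project/src/main.c
 * __FILE__ below then expands to "./src/main.c" rather than the
 * absolute path, independent of where the tree was checked out. */
#include <stdio.h>

int main(void) {
  printf("%s\n", __FILE__);  /* prefix already remapped at preprocessing */
  return 0;
}
```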

Reviewed By: rnk, Lekensteyn, maskray

Differential Revision: https://reviews.llvm.org/D49466
2019-11-26 15:17:49 -08:00
Tim Northover 78ad22e0cc Recommit ARM-NEON: make type modifiers orthogonal and allow multiple modifiers.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.

When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.

The original version broke vcreate_* because it became a macro and didn't
apply the normal integer promotion rules before bitcasting to a vector.
This recommit adds a temporary to force that promotion; see the sketch below.
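
The shape of the fix, shown with generic vector extensions rather than the
real arm_neon.h expansion (the type and macro names are illustrative):

```
/* Same-size scalar->vector casts are accepted; the breakage came from an
   unpromoted argument (e.g. an `int` literal 0) whose size didn't match
   the 64-bit vector type. Binding it to a 64-bit temporary first fixes
   the size mismatch. */
typedef unsigned long long u64x1 __attribute__((vector_size(8)));

#define create_u64_sketch(p0) __extension__ ({                          \
  unsigned long long __tmp = (p0); /* promote/convert to 64 bits first */ \
  (u64x1)__tmp;                    /* now a same-size cast, so it's valid */ \
})
```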
2019-11-26 09:21:47 +00:00
Muhammad Omair Javaid c9ddb02659 Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there."
This reverts commit 8ff85ed905.

This commit introduced 9 new failures on the lldb buildbot at http://lab.llvm.org:8014/builders/lldb-aarch64-ubuntu

The following tests were failing:
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq1/TestAmbiguousTailCallSeq1.py
    lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq2/TestAmbiguousTailCallSeq2.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_call_site/TestDisambiguateCallSite.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_paths_to_common_sink/TestDisambiguatePathsToCommonSink.py
    lldb-api :: functionalities/tail_call_frames/disambiguate_tail_call_seq/TestDisambiguateTailCallSeq.py
    lldb-api :: functionalities/tail_call_frames/inlining_and_tail_calls/TestInliningAndTailCalls.py
    lldb-api :: functionalities/tail_call_frames/sbapi_support/TestTailCallFrameSBAPI.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_message/TestArtificialFrameStepOutMessage.py
    lldb-api :: functionalities/tail_call_frames/thread_step_out_or_return/TestSteppingOutWithArtificialFrames.py
    lldb-api :: functionalities/tail_call_frames/unambiguous_sequence/TestUnambiguousTailCalls.py

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-26 09:32:13 +05:00
Eric Christopher 8ff85ed905 As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there.
This change doesn't include any change to move from selection dag to fast isel,
and that will come with other numbers that should help inform that decision.
There also haven't been any real debuggability studies with this pipeline yet;
this is just the initial start, done so that people could see it and we could
start tweaking after.

Test updates: Outside of the newpm tests, most of the updates come from
optimization passes that are no longer run (and for which there is no
compelling argument at the moment) and that were largely used for
canonicalization in clang.

Original post:

http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65410
2019-11-25 17:16:46 -08:00
Peter Collingbourne 90b8bc003c IRGen: Call SetLLVMFunctionAttributes{,ForDefinition} on __cfi_check_fail.
This has the main effect of causing target-cpu and target-features to be set
on __cfi_check_fail, which makes the function ABI-compatible with other
functions in cases where these attributes affect the ABI (e.g. reserve-x18).

Technically we only need to call SetLLVMFunctionAttributes to get the target-*
attributes set, but since we're creating a definition we probably ought to
call the ForDefinition function as well.

Fixes PR44094.

Differential Revision: https://reviews.llvm.org/D70692
2019-11-25 15:16:43 -08:00
Hans Wennborg 21f26470e9 Revert 3f91705ca5 "ARM-NEON: make type modifiers orthogonal and allow multiple modifiers."
This broke the vcreate_u64 intrinsic. Example:

  $ cat /tmp/a.cc
  #include <arm_neon.h>

  void g() {
    auto v = vcreate_u64(0);
  }
  $ bin/clang -c /tmp/a.cc --target=arm-linux-androideabi16 -march=armv7-a
  /tmp/a.cc:4:12: error: C-style cast from scalar 'int' to vector 'uint64x1_t' (vector of 1 'uint64_t' value) of different size
    auto v = vcreate_u64(0);
             ^~~~~~~~~~~~~~
  /work/llvm.monorepo/build.release/lib/clang/10.0.0/include/arm_neon.h:4144:11: note: expanded from macro 'vcreate_u64'
    __ret = (uint64x1_t)(__p0); \
            ^~~~~~~~~~~~~~~~~~

Reverting until this can be investigated.

> The modifier system used to mutate types on NEON intrinsic definitions had a
> separate letter for all kinds of transformations that might be needed, and we
> were quite quickly running out of letters to use. This patch converts to a much
> smaller set of orthogonal modifiers that can be applied together to achieve the
> desired effect.
>
> When merging with downstream it is likely to cause a conflict with any local
> modifications to the .td files. There is a new script in
> utils/convert_arm_neon.py that was used to convert all .td definitions and I
> would suggest running it on the last downstream version of those files before
> this commit rather than resolving conflicts manually.
2019-11-25 16:27:53 +01:00
David Blaikie e956952ede DebugInfo: Flag Dwarf Version metadata for merging during LTO
When the Dwarf Version metadata was initially added (r184276) there was
no support for Module::Max - though the comment suggested that was the
desired behavior. The original behavior was Module::Warn which would
warn and then pick whichever version came first - which is pretty
arbitrary/luck-based if the consumer has some need for one version or
the other.

Now that the functionality's been added (r303590) this change updates
the implementation to match the desired goal.

The general logic here is: if you compile /some/ of your program with a
more recent DWARF version, you must have a consumer that can handle it,
so we might as well use it for /everything/.

The only place where this might fall down is if you need to use an old tool
(supporting only the older DWARF version) for some subset of your program;
in that case the whole program will now use the higher version. That seems
pretty narrow (and the inverse could happen too: you specifically /need/
the higher DWARF version for some extra expressivity, etc, in some part of
the program).
2019-11-22 17:16:35 -08:00
Tim Northover 5cf58768cb Atomics: support min/max orthogonally
We seem to have been gradually growing support for atomic min/max operations
(exposing longstanding IR atomicrmw instructions). But until now there have
been gaps in the expected intrinsics. This adds support for the C11-style
intrinsics (i.e. those taking _Atomic arguments, rather than being
individually blessed by the C11 standard), and the variants that return the
new value instead of the original one.

That way, people won't be misled by trying one form and finding it doesn't
work, and the front-end is friendlier to people using _Atomic types, as we
recommend.
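
A hedged usage sketch of the two styles (the builtin names here are assumed
to follow clang's existing `__c11_atomic_*` and `__atomic_*_fetch` patterns):

```
#include <stdatomic.h>

int fetch_then_min(_Atomic int *p, int v) {
  /* C11-style: takes the _Atomic object directly, returns the OLD value */
  return __c11_atomic_fetch_min(p, v, memory_order_relaxed);
}

int min_then_fetch(int *p, int v) {
  /* "op_fetch" variant: returns the NEW (post-minimum) value */
  return __atomic_min_fetch(p, v, __ATOMIC_RELAXED);
}
```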
2019-11-21 10:37:56 +00:00
Tim Northover 3f91705ca5 ARM-NEON: make type modifiers orthogonal and allow multiple modifiers.
The modifier system used to mutate types on NEON intrinsic definitions had a
separate letter for all kinds of transformations that might be needed, and we
were quite quickly running out of letters to use. This patch converts to a much
smaller set of orthogonal modifiers that can be applied together to achieve the
desired effect.

When merging with downstream it is likely to cause a conflict with any local
modifications to the .td files. There is a new script in
utils/convert_arm_neon.py that was used to convert all .td definitions and I
would suggest running it on the last downstream version of those files before
this commit rather than resolving conflicts manually.
2019-11-20 13:20:02 +00:00
Tim Northover b80e483c42 Update tests after change to llvm-cxxfilt's underscore stripping behaviour. 2019-11-20 13:10:55 +00:00
Djordje Todorovic ce1f95a6e0 Reland "[clang] Remove the DIFlagArgumentNotModified debug info flag"
It turns out that the ExprMutationAnalyzer can be very slow when the AST
gets huge in some cases. The idea is to move this analysis to the LLVM
back-end level (more precisely, into the LiveDebugValues pass). The new
approach will remove the performance regression, simplify the
implementation and give us a front-end-independent implementation.

Differential Revision: https://reviews.llvm.org/D68206
2019-11-20 10:08:07 +01:00
Vedant Kumar 568db780bb [CGDebugInfo] Emit subprograms for decls when AT_tail_call is understood (reland with fixes)
Currently, clang emits subprograms for declared functions when the
target debugger or DWARF standard is known to support entry values
(DW_OP_entry_value & the GNU equivalent).

Treat DW_AT_tail_call the same way to allow debuggers to follow cross-TU
tail calls.

Pre-patch debug session with a cross-TU tail call:

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

Post-patch (note that the tail-calling frame, "helper", is visible):

```
  * frame #0: 0x0000000100000fa4 main`target at b.c:4:3 [opt]
    frame #1: 0x0000000100000f80 main`helper [opt] [artificial]
    frame #2: 0x0000000100000f99 main`main at a.c:8:10 [opt]
```

This was reverted in 5b9a072c because it attached declaration
subprograms to inlinable builtin calls, which interacted badly with the
MergeICmps pass. The fix is to not attach declarations to builtins.

rdar://46577651

Differential Revision: https://reviews.llvm.org/D69743
2019-11-19 12:49:27 -08:00
Matt Arsenault e531750c6c clang: Add -fconvergent-functions flag
The CUDA builtin library is apparently compiled in C++ mode, so the
convergent assumption needs to be made in a typically non-SPMD
language. The functions in the library should still be assumed
convergent; currently they are not, which is potentially incorrect but
happens to work once the library is linked.
2019-11-19 23:20:15 +05:30
Simon Tatham 254b4f2500 [ARM,MVE] Add intrinsics for scalar shifts.
This fills in the small family of MVE intrinsics that have nothing to
do with vectors: they implement bit-shift operations on 32- or 64-bit
values held in one or two general-purpose registers. Most of these
shift operations saturate if shifting left, and round to nearest if
shifting right, although LSLL and ASRL behave like ordinary shifts.

When these instructions take a variable shift count in a register,
they pay attention to its sign, so that (for example) LSLL or UQRSHLL
will shift left if given a positive number but right if given a
negative one. That makes even LSLL and ASRL different enough from
standard LLVM IR shift semantics that I couldn't see any better
alternative than to simply model the whole family as a set of
MVE-specific IR intrinsics.

(The //immediate// forms of LSLL and ASRL, on the other hand, do
behave exactly like a standard IR shift of a 64-bit value. In fact,
those forms don't have ACLE intrinsics defined at all, because you can
just write an ordinary C shift operation if you want one of those.)
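
As a rough model of that sign-sensitivity (plain C, not the intrinsics
themselves, and ignoring saturation, rounding and out-of-range counts):

```
#include <stdint.h>

/* Simplified model of an LSLL-style variable shift: the sign of the
   count register selects the direction. */
uint64_t lsll_model(uint64_t value, int32_t count) {
  if (count >= 0)
    return value << (count & 63);   /* non-negative count: shift left */
  return value >> ((-count) & 63);  /* negative count: shift right */
}
```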

The 64-bit shifts have to be instruction-selected in C++, because they
deliver two output values. But the 32-bit ones are simple enough that
I could write a DAG isel pattern directly into each Instruction
record.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70319
2019-11-19 14:47:29 +00:00
Eric Christopher 30e7ee3c4b Temporarily Revert "Add support for options -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
and a follow-up NFC rearrangement as it's causing a crash on valid code. The testcase is on the original review thread.

This reverts commits af57dbf12e and e6584b2b7b
2019-11-18 10:46:48 -08:00
Simon Tatham 4a4dd85e5a [ARM,MVE] Add intrinsics for vector comparisons.
This adds the `vcmp` family of ACLE MVE intrinsics: vector/vector,
vector/scalar, and the predicated forms of both. All are represented
using standard existing IR: vector/scalar comparisons are represented
by making a vector out of the scalar first, and predicated forms are
represented by taking the bitwise AND of the input predicate and the
output of the comparison. Existing LLVM-side tests demonstrate that
ISel will pattern-match all of that back down to single MVE VCMPs.
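
A conceptual model of that representation, using plain C vector extensions
rather than the intrinsics themselves (types and names are illustrative):

```
typedef int int32x4 __attribute__((vector_size(16)));

/* Predicated vector/scalar compare: splat the scalar into a vector,
   compare lanewise (yielding -1/0 per lane), then AND with the input
   predicate. */
int32x4 vcmpeq_model(int32x4 v, int s, int32x4 pred) {
  int32x4 splat = { s, s, s, s };
  int32x4 cmp = v == splat;
  return cmp & pred;
}
```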

The idiom of handling a vector/scalar operation by generating IR to
expand the scalar into a second vector is going to be needed for a lot
of MVE intrinsics, so to make that easy, I've provided a helper
function that automatically works out the element count.

The comparison intrinsics are the first ones that have to //return// a
predicate, in the user-facing `mve_pred16_t` format. This means we
have to use the `arm_mve_pred_v2i` low-level intrinsic to convert it
back from the logical `<n x i1>` form used in IR. I've done that
explicitly in the code gen specification for the builtins, because it
happens much more rarely in the ACLE API than passing a Predicate as
input, so it didn't seem worth automating in MveEmitter.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70297
2019-11-18 10:39:30 +00:00
Momchil Velikov aa6d48fa70 Implement target(branch-protection) attribute for AArch64
This patch implements `__attribute__((target("branch-protection=...")))`
in a manner compatible with the analogous GCC feature:

https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes
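
A usage sketch (the `pac-ret+leaf` value is one of the options documented
by GCC at the link above):

```
/* Sign and authenticate the return address in this function, including
   when it is a leaf function, regardless of the module-wide setting. */
__attribute__((target("branch-protection=pac-ret+leaf")))
void sensitive_function(void) {
  /* ... */
}
```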

Differential Revision: https://reviews.llvm.org/D68711
2019-11-15 15:40:46 +00:00
Djordje Todorovic 41d6ad6efd Revert "[clang] Remove the DIFlagArgumentNotModified debug info flag"
This reverts commit rG1643734741d2 due to LLDB test failure.
2019-11-15 12:16:44 +01:00
Djordje Todorovic 1643734741 [clang] Remove the DIFlagArgumentNotModified debug info flag
It turns out that the ExprMutationAnalyzer can be very slow when the AST
gets huge in some cases. The idea is to move this analysis to the LLVM
back-end level (more precisely, into the LiveDebugValues pass). The new
approach will remove the performance regression, simplify the
implementation and give us a front-end-independent implementation.

Differential Revision: https://reviews.llvm.org/D68206
2019-11-15 11:10:19 +01:00