Commit Graph

1420 Commits

Author SHA1 Message Date
Chris Hamilton da55e9ba12 [Sema] Address-space sensitive index check for unbounded arrays
The check applies to unbounded (incomplete) arrays and pointers
to spot cases where the computed address is beyond the
largest possible addressable extent of the array, based
on the address space in which the array is declared, or
to which the pointer refers.

The check helps to avoid nonsense pointer math and
array indexing that could lead to linker failures or
runtime exceptions.  It is of particular interest when building
for embedded systems with small address spaces.
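
A hedged illustration of the kind of code the check is meant to flag (the
index and target assumptions below are hypothetical, not the test case from
the patch):

    /* Assume a 32-bit address space: the byte offset of the index below
       exceeds what a 32-bit pointer can address, so the computed address
       can never be valid. */
    extern int table[];              /* unbounded (incomplete) array */

    int f(void) {
      return table[0x7FFFFFFF];      /* 0x7FFFFFFF * sizeof(int) > 4 GiB */
    }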

Reviewed By: aaron.ballman

Differential Revision: https://reviews.llvm.org/D86796
2020-09-14 18:13:19 -05:00
Richard Smith 0ffbbce78d Don't take the expression range into account when looking for widening
of a unary - expression.

This fixes an issue where we'd produce bogus diagnostics, and also
should recover ~0.3% compile time.
2020-09-01 17:42:12 -07:00
Richard Smith f819dbf012 Classify (small unsigned bitfield) < 0 comparisons under
-Wtautological-unsigned-zero-compare not under
-Wtautological-value-range-compare.
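
A hypothetical example of such a comparison (the struct and field are
illustrative only):

    struct S { unsigned b : 3; };

    int f(struct S s) {
      return s.b < 0;   /* always false: an unsigned bit-field is never negative */
    }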
2020-08-31 23:16:48 -07:00
Richard Smith cff6dda604 More accurately compute the ranges of possible values for +, -, *, &, %.
Continue to heuristically pick the wider of the two operands for
narrowing conversion warnings so that some_char + 1 isn't treated as
being wider than a char, but use the more accurate computation for
tautological comparison warnings.

Differential Revision: https://reviews.llvm.org/D85778
2020-08-31 23:16:48 -07:00
Bevin Hansson 1a995a0af3 [ADT] Move FixedPoint.h from Clang to LLVM.
This patch moves FixedPointSemantics and APFixedPoint
from Clang to LLVM ADT.

This will make it easier to use the fixed-point
classes in LLVM for constructing an IR builder for
fixed-point and for reusing the APFixedPoint class
for constant evaluation purposes.

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-August/144025.html

Reviewed By: leonardchan, rjmccall

Differential Revision: https://reviews.llvm.org/D85312
2020-08-20 10:29:45 +02:00
Craig Topper 6b1f9f2bd4 [X86] Don't call SemaBuiltinConstantArg from CheckX86BuiltinTileDuplicate if Argument is Type or Value Dependent.
SemaBuiltinConstantArg has an early exit for that case that doesn't
produce an error and doesn't update the APInt. We need to detect that
case and not use the APInt value.

While there, delete the signature of CheckX86BuiltinTileArgumentsRange
that takes a single argument index to check. There's another version
that takes an ArrayRef, and a single value is convertible to an ArrayRef.
2020-08-18 12:33:40 -07:00
Mott, Jeffrey T ca77ab494a Disable use of _ExtInt with '__atomic' builtins
We're (temporarily) disabling ExtInt for the '__atomic' builtins so we can better design their behavior later. The idea is that until we do an audit/design for the way atomic builtins are supposed to work with _ExtInt, we should leave them restricted so they don't limit our future options, such as by binding us to a sub-optimal implementation via the ABI.

Example after this change:

    $ cat test.c

        void f(_ExtInt(64) *ptr) {
          __atomic_fetch_add(ptr, 1, 0);
        }

    $ clang -c test.c

        test.c:2:22: error: argument to atomic builtin of type '_ExtInt' is not supported
          __atomic_fetch_add(ptr, 1, 0);
                             ^
        1 error generated.

Differential Revision: https://reviews.llvm.org/D84049
2020-08-18 09:17:26 -07:00
Mark de Wever 827ba67e38 [Sema] Validate calls to GetExprRange.
When a conditional expression contains a throw expression, GetExprRange was
called with a void expression, which caused an assertion failure.

This approach was suggested by Richard Smith.

Fixes PR46484: Clang crash in clang/lib/Sema/SemaChecking.cpp:10028

Differential Revision: https://reviews.llvm.org/D85601
2020-08-16 18:32:38 +02:00
Richard Smith d6492d8744 Add -Wtautological-value-range-compare warning.
This warning diagnoses cases where an expression is compared to a
constant, and the comparison is tautological due to the form of the
expression (but not merely due to its type). This applies in cases such
as comparisons of bit-fields and the result of bit-masks.

The new warning is added to the Clang diagnostic group
-Wtautological-constant-in-range-compare but not to the
formerly-equivalent GCC-compatibility diagnostic group -Wtype-limits,
which retains its old meaning of diagnosing only tautological
comparisons to extremal values of a type (eg, int > INT_MAX).
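
Hedged examples of the forms described above (illustrative only, not taken
from the patch's tests):

    struct S { unsigned n : 4; };   /* n is always in [0, 15] */

    int f(struct S s, unsigned x) {
      int a = s.n > 15;             /* always false: 4-bit bit-field */
      int b = (x & 0xFF) < 256;     /* always true: result of a bit-mask */
      return a + b;
    }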

Reviewed By: rtrieu

Differential Revision: https://reviews.llvm.org/D85256
2020-08-06 13:28:50 -07:00
Bruno Ricci 19701458d4
[clang][nearly-NFC] Remove some superfluous uses of NamedDecl::getNameAsString
`OS << ND->getDeclName();` is equivalent to `OS << ND->getNameAsString();`
without the extra temporary string.

This is not quite an NFC since two uses of `getNameAsString` in a
diagnostic are replaced, which results in the named entity being
quoted with additional "'"s (i.e.: 'var' instead of var).
2020-08-05 13:54:37 +01:00
Yonghong Song 6d67506964 [clang][BPF] support type exist/size and enum exist/value relocations
This patch added the following additional compile-once
run-everywhere (CO-RE) relocations:
  - existence/size of typedef, struct/union or enum type
  - enum value and enum value existence

These additional relocations will make CO-RE bpf programs more
adaptable to potential kernel internal data structure changes.

For existence/size relocations, the following two code patterns
are supported:
  1. uint32_t __builtin_preserve_type_info(*(<type> *)0, flag);
  2. <type> var;
     uint32_t __builtin_preserve_field_info(var, flag);
flag = 0 for existence relocation and flag = 1 for size relocation.

For enum value existence and enum value relocations, the following code
pattern is supported:
  uint64_t __builtin_preserve_enum_value(*(<enum_type> *)<enum_value>,
                                         flag);
flag = 0 means existence relocation and flag = 1 for enum value
relocation. In the above, <enum_type> can be an enum type or
a typedef to an enum type. The <enum_value> needs to be an enumerator
value from the same enum type. The return type is uint64_t to
permit potential 64bit enumerator values.

Differential Revision: https://reviews.llvm.org/D83242
2020-08-04 08:39:53 -07:00
JF Bastien 389f009c57 [NFC] Sema: use checkArgCount instead of custom checking
As requested in D79279.

Differential Revision: https://reviews.llvm.org/D84666
2020-07-28 13:41:06 -07:00
Bruno Ricci eb10b065f2
[clang] Pass the NamedDecl* instead of the DeclarationName into many diagnostics.
Background:
-----------
There are two related argument types which can be sent into a diagnostic to
display the name of an entity: DeclarationName (ak_declarationname) or
NamedDecl* (ak_nameddecl) (there is also ak_identifierinfo for
IdentifierInfo*, but we are not concerned with it here).

A DeclarationName in a diagnostic will just be streamed to the output,
which will directly result in a call to DeclarationName::print.

A NamedDecl* in a diagnostic will also ultimately result in a call to
DeclarationName::print, but with two customisation points along the way:

The first customisation point is NamedDecl::getNameForDiagnostic which is
overloaded by FunctionDecl, ClassTemplateSpecializationDecl and
VarTemplateSpecializationDecl to print the template arguments, if any.

The second customisation point is NamedDecl::printName. By default it just
streams the stored DeclarationName into the output but it can be customised
to provide a user-friendly name for an entity. It is currently overloaded by
DecompositionDecl and MSGuidDecl.

What this patch does:
---------------------
For many diagnostics a DeclarationName is used instead of the NamedDecl*.
This bypasses the two customisation points mentioned above. This patch fixes
this for diagnostics in Sema.cpp, SemaCast.cpp, SemaChecking.cpp, SemaDecl.cpp,
SemaDeclAttr.cpp, SemaDecl.cpp, SemaOverload.cpp and SemaStmt.cpp.

I have only modified diagnostics where I could construct a test-case which
demonstrates that the change is appropriate (either with this patch or the next
one).

Reviewed By: erichkeane, aaron.ballman

Differential Revision: https://reviews.llvm.org/D84656
2020-07-28 10:30:35 +01:00
Richard Smith 6c18f7db73 For PR46800, implement the GCC __builtin_complex builtin.
glibc's implementation of the CMPLX macro uses it (with -fgnuc-version
set to 4.7 or later).
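
A minimal usage sketch, assuming the GCC-compatible form of the builtin (two
real floating-point operands of the same type yielding the corresponding
_Complex value):

    double _Complex make_complex(double re, double im) {
      return __builtin_complex(re, im);   /* what glibc's CMPLX expands to */
    }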
2020-07-22 13:43:10 -07:00
David Blaikie 36036aa70e Reapply "Rename/refactor isIntegerConstantExpression to getIntegerConstantExpression"
Reapply 49e5f603d4
which had been reverted in c94332919b.

Originally reverted because I hadn't updated it in quite a while when I
got around to committing it, so there were a bunch of missing changes to
new code since I'd written the patch.

Reviewers: aaron.ballman

Differential Revision: https://reviews.llvm.org/D76646
2020-07-21 20:57:12 -07:00
Mott, Jeffrey T d083adb068 Prohibit use of _ExtInt in atomic intrinsic
The _ExtInt type allows custom width integers, but the atomic memory
access's operand must have a power-of-two size. _ExtInts with
non-power-of-two sizes should not be allowed for atomic intrinsics.

Before this change:

$ cat test.c

typedef unsigned _ExtInt(42) dtype;
void verify_binary_op_nand(dtype* pval1, dtype val2)
{    __sync_nand_and_fetch(pval1, val2); }

$ clang test.c

clang-11:
/home/ubuntu/llvm_workspace/llvm/clang/lib/CodeGen/CGBuiltin.cpp:117:
llvm::Value*
EmitToInt(clang::CodeGen::CodeGenFunction&, llvm::Value*,
clang::QualType, llvm::IntegerType*): Assertion `V->getType() ==
IntType' failed.
PLEASE submit a bug report to https://bugs.llvm.org/ and include the
crash backtrace, preprocessed source, and associated run script.

After this change:

$ clang test.c

test.c:3:30: error: Atomic memory operand must have a power-of-two size
{    __sync_nand_and_fetch(pval1, val2); }
^

List of the atomic intrinsics that have this
problem:

__sync_fetch_and_add
__sync_fetch_and_sub
__sync_fetch_and_or
__sync_fetch_and_and
__sync_fetch_and_xor
__sync_fetch_and_nand
__sync_nand_and_fetch
__sync_and_and_fetch
__sync_add_and_fetch
__sync_sub_and_fetch
__sync_or_and_fetch
__sync_xor_and_fetch
__sync_fetch_and_min
__sync_fetch_and_max
__sync_fetch_and_umin
__sync_fetch_and_umax
__sync_val_compare_and_swap
__sync_bool_compare_and_swap

Differential Revision: https://reviews.llvm.org/D83340
2020-07-14 06:11:04 -07:00
David Blaikie c94332919b Revert "Rename/refactor isIntegerConstantExpression to getIntegerConstantExpression"
Broke buildbots since I hadn't updated this patch in a while. Sorry for
the noise.

This reverts commit 49e5f603d4.
2020-07-12 20:29:19 -07:00
David Blaikie 49e5f603d4 Rename/refactor isIntegerConstantExpression to getIntegerConstantExpression
There is a version that just tests (also called
isIntegerConstantExpression), whereas this version is specifically used
when the value is of interest (a few call sites were actually refactored
to call the test-only version), so let's make the API reflect that.

Reviewers: aaron.ballman

Differential Revision: https://reviews.llvm.org/D76646
2020-07-12 19:43:24 -07:00
Akira Hatanaka 04027052a7 [Sema] Teach -Wcast-align to compute alignment of CXXThisExpr
This fixes https://bugs.llvm.org/show_bug.cgi?id=46605.

rdar://problem/65158878

Differential Revision: https://reviews.llvm.org/D83317
2020-07-07 17:45:04 -07:00
Erik Pilkington 2f71cf6d77 [SemaObjC] Fix a -Wobjc-signed-char-bool false-positive with binary conditional operator
We were previously bypassing the conditional expression special case for binary
conditional expressions.

rdar://64134411

Differential revision: https://reviews.llvm.org/D81751
2020-07-07 13:29:54 -04:00
Xiang1 Zhang 939d8309db [X86-64] Support Intel AMX Intrinsic
INTEL ADVANCED MATRIX EXTENSIONS (AMX).
AMX is a new programming paradigm: it has a set of 2-dimensional registers
(TILES) representing sub-arrays from a larger 2-dimensional memory image, and
it operates on TILES.

These intrinsics use direct TMM register numbers as their parameters.

Spec can be found in Chapter 3 here https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D83111
2020-07-07 10:13:40 +08:00
Haojian Wu 283c8f7f5a [clang] Check ValueDependent instead of InstantiationDependent before executing the align expr for builtin align functions.
In general, value dependent is a subset of instantiation dependent. This
allows us to produce diagnostics for the align expression (which
is instantiation dependent but not value dependent).

Differential Revision: https://reviews.llvm.org/D83074
2020-07-03 09:02:12 +02:00
Biplob Mishra 286073484f [PowerPC]Implement Vector Permute Extended Builtin
Implements vector permute builtin: vec_permx()

Differential Revision: https://reviews.llvm.org/D82869
2020-07-02 14:53:18 -05:00
Biplob Mishra 88874f0746 [PowerPC]Implement Vector Shift Double Bit Immediate Builtins
Implement Vector Shift Double Bit Immediate Builtins in LLVM/Clang.
  * vec_sldb ();
  * vec_srdb ();

Differential Revision: https://reviews.llvm.org/D82440
2020-07-01 20:34:53 -05:00
Amy Kwan e0c02dc980 [PowerPC][Power10] Implement centrifuge, vector gather every nth bit, vector evaluate Builtins in LLVM/Clang
This patch implements builtins for the following prototypes:

unsigned long long __builtin_cfuged (unsigned long long, unsigned long long);
vector unsigned long long vec_cfuge (vector unsigned long long, vector unsigned long long);
unsigned long long vec_gnb (vector unsigned __int128, const unsigned int);
vector unsigned char vec_ternarylogic (vector unsigned char, vector unsigned char, vector unsigned char, const unsigned int);
vector unsigned short vec_ternarylogic (vector unsigned short, vector unsigned short, vector unsigned short, const unsigned int);
vector unsigned int vec_ternarylogic (vector unsigned int, vector unsigned int, vector unsigned int, const unsigned int);
vector unsigned long long vec_ternarylogic (vector unsigned long long, vector unsigned long long, vector unsigned long long, const unsigned int);
vector unsigned __int128 vec_ternarylogic (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128, const unsigned int);

Differential Revision: https://reviews.llvm.org/D80970
2020-06-25 21:34:41 -05:00
Florian Hahn 043b608399 [Matrix] Use 1st/2nd instead of first/second in matrix diags.
This was suggested in D72782 and brings the diagnostics more in line
with how argument references are handled elsewhere.

Reviewers: rjmccall, jfb, Bigcheese

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D82473
2020-06-25 11:55:03 +01:00
Zhi Zhuang 37fb860301 Add support of __builtin_expect_with_probability
Add a new builtin-function __builtin_expect_with_probability and
intrinsic llvm.expect.with.probability.
The interface is __builtin_expect_with_probability(long expr, long
expected, double probability).
It is mostly the same as __builtin_expect, except for one more argument
indicating the probability that the expression equals the expected value. The
probability must be a constant floating-point expression in the
range [0.0, 1.0] inclusive.
It is similar to GCC's __builtin_expect_with_probability built-in
function.
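
A usage sketch based on the interface described above (the function and the
probability value are illustrative):

    int safe_div(int a, int b) {
      /* We expect b == 0 to be false about 99% of the time. */
      if (__builtin_expect_with_probability(b == 0, 0, 0.99))
        return 0;
      return a / b;
    }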

Differential Revision: https://reviews.llvm.org/D79830
2020-06-22 10:21:28 -07:00
Bruno Ricci f5bbe390d2
[clang] SequenceChecker: C++17 sequencing rule for overloaded operators.
In C++17 the operand(s) of an overloaded operator are sequenced as for
the corresponding built-in operator when the overloaded operator is
called with the operator notation ([over.match.oper]p2).

Reported in PR35340.

Differential Revision: https://reviews.llvm.org/D81330

Reviewed By: rsmith
2020-06-20 10:51:46 +01:00
Eric Christopher 1f593f46f3 [AST/Lex/Parse/Sema] As part of using inclusive language within
the llvm project, migrate away from the use of blacklist and whitelist.
2020-06-20 01:15:32 -07:00
Florian Hahn b5e082e728 [Matrix] Add __builtin_matrix_column_store to Clang.
This patch adds __builtin_matrix_column_major_store to Clang,
as described in clang/docs/MatrixTypes.rst. In the initial version,
the stride is not yet optional.

Reviewers: rjmccall, jfb, rsmith, Bigcheese

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D72782
2020-06-18 11:39:02 +01:00
Sander de Smalen 4ea8e27a64 [SveEmitter] Add builtins to insert/extract subvectors from tuples (svget/svset)
For example:
  svint32_t svget4(svint32x4_t tuple, uint64_t imm_index)

returns the subvector at `index`, which must be in range `0..3`.
  svint32x3_t svset3(svint32x3_t tuple, uint64_t index, svint32_t vec)

returns a tuple vector with `vec` inserted into `tuple` at `index`,
which must be in range `0..2`.

Reviewers: c-rhodes, efriedma

Reviewed By: c-rhodes

Tags: #clang

Differential Revision: https://reviews.llvm.org/D81464
2020-06-18 11:06:16 +01:00
Florian Hahn 934bcaf10b [Matrix] Add __builtin_matrix_column_load to Clang.
This patch adds __builtin_matrix_column_major_load to Clang,
as described in clang/docs/MatrixTypes.rst. In the initial version,
the stride is not yet optional.

Reviewers: rjmccall, rsmith, jfb, Bigcheese

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D72781
2020-06-18 10:47:55 +01:00
Jeff Mott 8799ebbc1f [clang] Fix or emit diagnostic for checked arithmetic builtins with
_ExtInt types

- Fix computed size for _ExtInt types passed to checked arithmetic
  builtins.
- Emit diagnostic when signed _ExtInt larger than 128-bits is passed
    to __builtin_mul_overflow.
- Change Sema checks for builtins to accept placeholder types.

Differential Revision: https://reviews.llvm.org/D81420
2020-06-15 06:51:54 -07:00
Saiyedul Islam 675cefbf60 [AMDGPU] Introduce Clang builtins to be mapped to AMDGCN atomic inc/dec intrinsics
Summary:
__builtin_amdgcn_atomic_inc32(int *Ptr, int Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_inc64(int64_t *Ptr, int64_t Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_dec32(int *Ptr, int Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_dec64(int64_t *Ptr, int64_t Val, unsigned MemoryOrdering, const char *SyncScope)

The first and second arguments get transparently passed to the amdgcn atomic
inc/dec intrinsic. The fifth argument of the intrinsic is set to true if the
first argument of the builtin is a volatile pointer. The third argument of
this builtin is one of the memory-ordering specifiers ATOMIC_ACQUIRE,
ATOMIC_RELEASE, ATOMIC_ACQ_REL, or ATOMIC_SEQ_CST following C++11 memory
model semantics. This is mapped to the corresponding LLVM atomic memory ordering
for the atomic inc/dec instruction using the Clang atomic C ABI. The fourth
argument is an AMDGPU-specific synchronization scope defined as a string.

Reviewers: arsenm, sameerds, JonChesterfield, jdoerfert

Reviewed By: arsenm, sameerds

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, jfb, kerbowa, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D80804
2020-06-09 17:02:58 +00:00
Florian Hahn 3323a628ec [Matrix] Add __builtin_matrix_transpose to Clang.
This patch adds __builtin_matrix_transpose to Clang, as described in
clang/docs/MatrixTypes.rst.
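
A hedged usage sketch, assuming the matrix_type attribute syntax from
clang/docs/MatrixTypes.rst and compilation with -fenable-matrix:

    typedef float m4x3_t __attribute__((matrix_type(4, 3)));
    typedef float m3x4_t __attribute__((matrix_type(3, 4)));

    m3x4_t transpose(m4x3_t m) {
      return __builtin_matrix_transpose(m);   /* 4x3 -> 3x4 */
    }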

Reviewers: rjmccall, jfb, rsmith, Bigcheese

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D72778
2020-06-09 10:14:37 +01:00
Ties Stuij ecd682bbf5 [ARM] Add __bf16 as new Bfloat16 C Type
Summary:
This patch upstreams support for a new storage-only bfloat16 C type.
This type is used to implement primitive support for bfloat16 data, in
line with the Bfloat16 extension of the Armv8.6-a architecture, as
detailed here:

https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a

The bfloat type, and its properties are specified in the Arm Architecture
Reference Manual:

https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile

In detail this patch:
- introduces an opaque, storage-only C-type __bf16, which introduces a new bfloat IR type.

This is part of a patch series, starting with command-line and Bfloat16
assembly support. The subsequent patches will upstream intrinsics
support for BFloat16, followed by Matrix Multiplication and the
remaining Virtualization features of the armv8.6-a architecture.

The following people contributed to this patch:
- Luke Cheeseman
- Momchil Velikov
- Alexandros Lamprineas
- Luke Geeson
- Simon Tatham
- Ties Stuij

Reviewers: SjoerdMeijer, rjmccall, rsmith, liutianle, RKSimon, craig.topper, jfb, LukeGeeson, fpetrogalli

Reviewed By: SjoerdMeijer

Subscribers: labrinea, majnemer, asmith, dexonsmith, kristof.beyls, arphaman, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76077
2020-06-05 10:32:43 +01:00
Bruno Ricci a2f32bfcc7
[clang][Sema] SequenceChecker: C++17 sequencing rule for call expressions.
In C++17 the postfix-expression of a call expression is sequenced before
each expression in the expression-list and any default argument.

Differential Revision: https://reviews.llvm.org/D58579

Reviewed By: rsmith
2020-06-03 12:35:12 +01:00
Erich Keane 81a73fde5c Fix aux-target diagnostics for certain builtins
When I fixed the targets specific builtins to make sure that aux-targets
are checked, it seems I didn't consider cases where the builtins check
the target info for further info.  This patch bubbles the target-info
down to the individual checker functions to ensure that they validate
against the aux-target as well.

For non-aux-target invocations, this is an NFC.
2020-05-19 10:49:45 -07:00
Yonghong Song 072cde03aa [Clang][BPF] implement __builtin_btf_type_id() builtin function
Such a builtin function is mostly useful to preserve btf type id
for non-global data. For example,
   extern void foo(..., void *data, int size);
   int test(...) {
     struct t { int a; int b; int c; } d;
     d.a = ...; d.b = ...; d.c = ...;
     foo(..., &d, sizeof(d));
   }

The function "foo" in the above only see raw data and does not
know what type of the data is. In certain cases, e.g., logging,
the additional type information will help pretty print.

This patch implemented a BPF specific builtin
  u32 btf_type_id = __builtin_btf_type_id(param, flag)
which will return a btf type id for the "param".
flag == 0 indicates a BTF local relocation,
which means the btf type_id is only adjusted when the bpf program BTF changes.
flag == 1 indicates a BTF remote relocation,
which means the btf type_id is adjusted against the Linux kernel or
other future entities.

Differential Revision: https://reviews.llvm.org/D74668
2020-05-15 09:44:54 -07:00
Akira Hatanaka 854f5f332a [Sema] Teach -Wcast-align to compute an accurate alignment using the
alignment information on VarDecls in more cases

This commit improves upon https://reviews.llvm.org/D21099. The code that
computes the source alignment now understands array subscript
expressions, binary operators, derived-to-base casts, and several more
expressions.
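
A hypothetical illustration of a cast that the improved computation can now
reason about (the exact cases warned on depend on the computed alignment):

    __attribute__((aligned(4))) char buf[16];

    int *p = (int *)&buf[4];   /* subscript of a 4-byte-aligned object:
                                  source alignment is now computed as 4 */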

rdar://problem/59242343

Differential Revision: https://reviews.llvm.org/D78767
2020-05-15 00:59:03 -07:00
Erich Keane f9eaa6934e Ensure aux-target specific builtins get validated.
I discovered that when using an aux-target builtin, it was recognized as
a builtin but never checked. This patch checks for an aux-target builtin
and instead validates it against the correct target.

It does this by extracting the checking code for Target-specific
builtins into its own function, then calls with either targetInfo or
AuxTargetInfo.
2020-05-07 13:22:10 -07:00
Saiyedul Islam 06bdffb2bb [AMDGPU] Expose llvm fence instruction as clang intrinsic
Expose llvm fence instruction as clang builtin for AMDGPU target

__builtin_amdgcn_fence(unsigned int memoryOrdering, const char *syncScope)

The first argument of this builtin is one of the memory-ordering specifiers
__ATOMIC_ACQUIRE, __ATOMIC_RELEASE, __ATOMIC_ACQ_REL, or __ATOMIC_SEQ_CST
following C++11 memory model semantics. This is mapped to the corresponding
LLVM atomic memory ordering for the fence instruction using the LLVM atomic C
ABI. The second argument is an AMDGPU-specific synchronization scope
defined as a string.
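
A usage sketch based on the prototype above (the "workgroup" scope string is
an assumption; any AMDGPU synchronization scope name may be passed):

    void release_to_workgroup(void) {
      __builtin_amdgcn_fence(__ATOMIC_RELEASE, "workgroup");
    }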

Reviewed By: sameerds

Differential Revision: https://reviews.llvm.org/D75917
2020-04-27 09:39:03 +05:30
Sander de Smalen 823e2a670a [SveEmitter] Add builtins for contiguous prefetches
This patch also adds the enum `sv_prfop` for the prefetch operation specifier
and checks to ensure the passed enum values are valid.

Reviewers: SjoerdMeijer, efriedma, ctetreau

Reviewed By: efriedma

Tags: #clang

Differential Revision: https://reviews.llvm.org/D78674
2020-04-24 11:35:59 +01:00
Puyan Lotfi 9721fbf85b [NFC] Refactoring PropertyAttributeKind for ObjCPropertyDecl and ObjCDeclSpec.
This is a code clean up of the PropertyAttributeKind and
ObjCPropertyAttributeKind enums in ObjCPropertyDecl and ObjCDeclSpec that are
exactly identical. This non-functional change consolidates these enums
into one. The changes are to many files across clang (and comments in LLVM) so
that everything refers to the new consolidated enum in DeclObjCCommon.h.

2nd Landing Attempt...

Differential Revision: https://reviews.llvm.org/D77233
2020-04-23 17:21:25 -04:00
Puyan Lotfi bbf386f02b Revert "[NFC] Refactoring PropertyAttributeKind for ObjCPropertyDecl and ObjCDeclSpec."
This reverts commit 2aa044ed08.

Reverting due to bot failure in lldb.
2020-04-23 00:05:08 -04:00
Puyan Lotfi 2aa044ed08 [NFC] Refactoring PropertyAttributeKind for ObjCPropertyDecl and ObjCDeclSpec.
This is a code clean up of the PropertyAttributeKind and
ObjCPropertyAttributeKind enums in ObjCPropertyDecl and ObjCDeclSpec that are
exactly identical. This non-functional change consolidates these enums
into one. The changes are to many files across clang (and comments in LLVM) so
that everything refers to the new consolidated enum in DeclObjCCommon.h.

Differential Revision: https://reviews.llvm.org/D77233
2020-04-22 23:27:06 -04:00
Sander de Smalen fc64539749 [SveEmitter] Add immediate checks for lanes and complex imms
Adds another bunch of intrinsics that take immediates with
varying ranges, some being a complex rotation immediate,
which is a set of allowed immediates rather than a range.

    svmla_lane:   lane immediate ranging 0..(128/(1*sizeinbits(elt)) - 1)
    svcmla_lane:  lane immediate ranging 0..(128/(2*sizeinbits(elt)) - 1)
    svdot_lane:   lane immediate ranging 0..(128/(4*sizeinbits(elt)) - 1)
    svcadd:       complex rotate immediate [90, 270]
    svcmla:
    svcmla_lane:  complex rotate immediate [0, 90, 180, 270]

Reviewers: efriedma, SjoerdMeijer, rovka

Reviewed By: efriedma

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76680
2020-04-20 15:10:54 +01:00
Sander de Smalen 515020c091 [SveEmitter] Add more immediate operand checks.
This patch adds a number of intrinsics that take immediates with
varying ranges based on the element size of one of the operands.

    svext:   immediate ranging 0 to (2048/sizeinbits(elt) - 1)
    svasrd:  immediate ranging 1..sizeinbits(elt)
    svqshlu: immediate ranging 1..sizeinbits(elt)/2
    ftmad:   immediate ranging 0..(sizeinbits(elt) - 1)

Reviewers: efriedma, SjoerdMeijer, rovka, rengolin

Reviewed By: SjoerdMeijer

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76679
2020-04-20 14:41:58 +01:00
Erich Keane 5f0903e9be Reland Implement _ExtInt as an extended int type specifier.
I fixed the LLDB issue, so re-applying the patch.

This reverts commit a4b88c0449.
2020-04-17 10:45:48 -07:00
Sterling Augustine a4b88c0449 Revert "Implement _ExtInt as an extended int type specifier."
This reverts commit 61ba1481e2.

I'm reverting this because it breaks the lldb build with
incomplete switch coverage warnings. I would fix it forward,
but am not familiar enough with lldb to determine the correct
fix.

lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:3958:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
          ^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4633:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
          ^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4889:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
2020-04-17 10:29:40 -07:00
Erich Keane 61ba1481e2 Implement _ExtInt as an extended int type specifier.
Introduction/Motivation:
LLVM-IR supports integers of non-power-of-2 bitwidth, in the iN syntax.
Integers of non-power-of-two bitwidth aren't particularly interesting or useful
on most hardware, so much so that no language in Clang has been
motivated to expose them before.

However, in the case of FPGA hardware, normal integer types where the
full bitwidth isn't used are extremely wasteful and have severe
performance/space concerns.  Because of this, Intel has introduced this
functionality in the High Level Synthesis compiler[0]
under the name "Arbitrary Precision Integer" (ap_int for short). This
has been extremely useful and effective for our users, permitting them
to optimize their storage and operation space on an architecture where
both can be extremely expensive.

We are proposing upstreaming a more palatable version of this to the
community, in the form of this proposal and accompanying patch.  We are
proposing the syntax _ExtInt(N).  We intend to propose this to the WG14
committee[1], and the underscore-capital seems like the active direction
for a WG14 paper's acceptance.  An alternative that Richard Smith
suggested on the initial review was __int(N), however we believe that
is much less acceptable by WG14.  We considered _Int, however _Int is
used as an identifier in libstdc++ and there is no good way to fall
back to an identifier (since _Int(5) is indistinguishable from an
unnamed initializer of a template type named _Int).

[0]https://www.intel.com/content/www/us/en/software/programmable/quartus-prime/hls-compiler.html)
[1]http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf
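
A small sketch of the proposed syntax (illustrative only; _ExtInt types do
not undergo the usual promotion to int):

    typedef unsigned _ExtInt(12) u12;   /* a 12-bit unsigned integer */

    u12 mask_low_nibble(u12 v) {
      return v & (u12)0x0F;             /* arithmetic stays at 12 bits */
    }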

Differential Revision: https://reviews.llvm.org/D73967
2020-04-17 07:10:57 -07:00
Sander de Smalen c8a5b30bac [SveEmitter] Add range checks for immediates and predicate patterns.
Summary:
This patch adds a mechanism to easily add range checks for a builtin's
immediate operands. This patch is tested with the qdech intrinsic, which takes
both an enum for the predicate pattern and an immediate for the
multiplier.

Reviewers: efriedma, SjoerdMeijer, rovka

Reviewed By: efriedma, SjoerdMeijer

Subscribers: mgorny, tschuett, mgrang, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76678
2020-04-14 16:49:32 +01:00
Aaron Ballman 86b5eabfea Allow parameter names to be elided in a function definition in C.
WG14 has adopted N2480 (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2480.pdf)
into C2x at the meetings last week, allowing parameter names of a function
definition to be elided. This patch relaxes the error so that C++ and C2x do not
diagnose this situation, and modes before C2x will allow it as an extension.

This also adds the same feature to ObjC blocks under the assumption that ObjC
wishes to follow the C standard in this regard.
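
A hedged example of what is now accepted (C2x, or earlier modes as an
extension):

    void callback(int used, void *) {   /* second parameter name elided */
      (void)used;
    }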
2020-04-07 14:43:38 -04:00
Richard Smith 8f2d2a7cb4 For PR45333: Move AnalyzeImplicitConversions to using data recursion
instead of recursing on the stack.

This doesn't actually resolve PR45333, because we now hit stack overflow
somewhere else, but it does get us further. I've not found any way of
testing this that doesn't still crash elsewhere.
2020-04-06 16:49:27 -07:00
Guillaume Chatelet d260a10d98 [clang] Fix crash during template sema checking
Summary: If the size parameter of `__builtin_memcpy_inline` comes from an un-instantiated template parameter, the current code would crash.

Reviewers: efriedma, courbet

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76504
2020-03-21 12:42:06 +01:00
Mikhail Maltsev 43606efb68 Suppress an "unused variable" warning in release build 2020-03-10 17:10:52 +00:00
Mikhail Maltsev 47edf5bafb [ARM,CDE] Generalize MVE intrinsics infrastructure to support CDE
Summary:
This patch generalizes the existing code to support CDE intrinsics
which will share some properties with existing MVE intrinsics
(some of the intrinsics will be polymorphic and accept/return values
of MVE vector types).
Specifically the patch:
* Adds new tablegen backends -gen-arm-cde-builtin-def,
  -gen-arm-cde-builtin-codegen, -gen-arm-cde-builtin-sema,
  -gen-arm-cde-builtin-aliases, -gen-arm-cde-builtin-header based on
  existing MVE backends.
* Renames the '__clang_arm_mve_alias' attribute into
  '__clang_arm_builtin_alias' (it will be used with CDE intrinsics as
  well as MVE intrinsics)
* Implements semantic checks for the coprocessor argument of the CDE
  intrinsics as well as the existing coprocessor intrinsics.
* Adds one CDE intrinsic __arm_cx1 to test the above changes

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: simon_tatham

Subscribers: sdesmalen, mgorny, kristof.beyls, danielkiss, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75850
2020-03-10 14:03:16 +00:00
Jeremy Stenglein 843a9778fc Add a warning for builtin_return_address/frame_address with > 0 argument
Clang is missing a warning for
builtin_return_address/builtin_frame_address called with an argument > 0.
GCC provides a warning for this via -Wframe-address:

https://gcc.gnu.org/onlinedocs/gcc/Return-Address.html

As calling these functions with an argument > 0 has caused several crashes
for us, we would like to have the same warning as GCC here. This diff
adds the warning and makes it part of -Wmost.
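
A hypothetical example of code that now triggers the warning:

    void *two_frames_up(void) {
      return __builtin_return_address(2);   /* argument > 0: unreliable */
    }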

Differential Revision: https://reviews.llvm.org/D75768
2020-03-09 10:43:09 -07:00
Akira Hatanaka f4d791f833 [CodeGen][ObjC] Extend lifetime of ObjC pointers passed to calls to
__builtin_os_log_format

This is needed to keep all the objects, including temporaries returned
by function calls, written to the buffer alive until os_log_pack_send is
called.

rdar://problem/60105410
2020-03-06 16:46:50 -08:00
Erik Pilkington e392dcd570 [Sema] Look through OpaqueValueExpr when checking implicit conversions
Specifically, this fixes a false-positive in -Wobjc-signed-char-bool.
rdar://57372317

Differential revision: https://reviews.llvm.org/D75387
2020-03-02 11:24:36 -08:00
Roman Lebedev b8fdafe68c
[Sema] Perform call checking when building CXXNewExpr
Summary:
There was even a TODO for this.
The main motivation is to make use of call-site based
`__attribute__((alloc_align(param_idx)))` validation (D72996).

Reviewers: rsmith, erichkeane, aaron.ballman, jdoerfert

Reviewed By: rsmith

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73020
2020-02-26 01:36:44 +03:00
zoecarver 6201f6601d Check args passed to __builtin_frame_address and __builtin_return_address.
Verifies that an argument passed to __builtin_frame_address or __builtin_return_address is within the range [0, 0xFFFF].

Differential revision: https://reviews.llvm.org/D66839

Re-committed after fixed: c93112dc4f
2020-02-25 12:47:14 -08:00
zoecarver 6980782572 Revert "Validate argument passed to __builtin_frame_address and __builtin_return_address"
This reverts commit c93112dc4f.
2020-02-24 14:35:02 -08:00
zoecarver c93112dc4f Validate argument passed to __builtin_frame_address and __builtin_return_address
Verifies that the argument passed to __builtin_frame_address and __builtin_return_address is within the range [0, 0xFFFF].
2020-02-24 14:23:41 -08:00
Roman Lebedev 9ea5d17cc9
[Sema] Demote call-site-based 'alignment is a power of two' check for AllocAlignAttr into a warning
Summary:
As @rsmith notes in https://reviews.llvm.org/D73020#inline-672219
while that is certainly UB land, it may not be actually reachable at runtime, e.g.:
```
template<int N> void *make() {
  if ((N & (N-1)) == 0)
    return operator new(N, std::align_val_t(N));
  else
    return operator new(N);
}
void *p = make<7>();
```
and we shouldn't really error-out there.

That being said, i'm not really following the logic here.
Which ones of these cases should remain being an error?

Reviewers: rsmith, erichkeane

Reviewed By: erichkeane

Subscribers: cfe-commits, rsmith

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73996
2020-02-20 16:39:26 +03:00
Mirko Brkusanin 5ba931a84a [Mips] Add intrinsics for 4-byte and 8-byte MSA loads/stores.
New intrinsics are implemented for cases where we need to port SIMD code from
other architectures and only load or store portions of MSA registers.

The following intrinsics are added, which only load/store element 0 of a vector:
v4i32 __builtin_msa_ldrq_w (const void *, imm_n2048_2044);
v2i64 __builtin_msa_ldr_d (const void *, imm_n4096_4088);
void __builtin_msa_strq_w (v4i32, void *, imm_n2048_2044);
void __builtin_msa_str_d (v2i64, void *, imm_n4096_4088);

Differential Revision: https://reviews.llvm.org/D73644
2020-02-11 11:47:30 +01:00
Guillaume Chatelet d65bbf81f8 [clang] Add support for __builtin_memcpy_inline
Summary: This is a follow up on D61634 and the last step to implement http://lists.llvm.org/pipermail/llvm-dev/2019-April/131973.html
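
A minimal usage sketch (assuming the size operand must be a constant
expression, which is what allows the copy to be expanded fully inline):

    void copy_header(void *dst, const void *src) {
      __builtin_memcpy_inline(dst, src, 16);   /* always expanded inline */
    }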

Reviewers: efriedma, courbet, tejohnson

Subscribers: hiraditya, cfe-commits, llvm-commits, jdoerfert, t.p.northover

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73543
2020-02-07 23:55:26 +01:00
serge-sans-paille 6d485ff455 Improve static checks for sprintf and __builtin___sprintf_chk
Implement a pessimistic evaluator of the minimal required size for a buffer
based on the format string, and couple that with the fortified version to emit a
warning when the buffer size is lower than the lower bound computed from the
format string.
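
A hypothetical example of code the new check is intended to warn on: the
format string alone requires at least 6 bytes ("id: ", one digit, and the
terminating NUL), but the buffer only holds 4.

    #include <stdio.h>

    void f(void) {
      char buf[4];
      sprintf(buf, "id: %d", 0);
    }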

Differential Revision: https://reviews.llvm.org/D71566
2020-01-25 18:10:34 +01:00
Roman Lebedev 1d0972ff5e
[Sema] Introduce MaximumAlignment value, to be used instead of magical constants
There is llvm::Value::MaximumAlignment, which is numerically
equivalent to these constants, but we can't use it directly
because we can't include llvm IR headers in clang Sema.
So instead, copy-paste the constant, and fixup the places to use it.

This was initially reviewed in https://reviews.llvm.org/D72998
2020-01-24 17:49:17 +03:00
Roman Lebedev ba545c814b
[Sema] Try 2: Attempt to perform call-size-specific `__attribute__((alloc_align(param_idx)))` validation
Summary:
`alloc_align` attribute takes parameter number, not the alignment itself,
so given **just** the attribute/function declaration we can't do any
sanity checking for said alignment.

However, at call site, given the actual `Expr` that is passed
into that parameter, we //might// be able to evaluate said `Expr`
as Integer Constant Expression, and perform the sanity checks.
But since there is no requirement for that argument to be an immediate,
we may fail, and that's okay.

However if we did evaluate, we should enforce the same constraints
as with `__builtin_assume_aligned()`/`__attribute__((assume_aligned(imm)))`:
said alignment is a power of two, and is not greater than our magic threshold


This was initially committed in c2a9061ac5
but reverted in 00756b1823 because of
suspicious bot failures.

Reviewers: erichkeane, aaron.ballman, hfinkel, rsmith, jdoerfert

Reviewed By: erichkeane

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72996
2020-01-24 14:42:45 +03:00
Roman Lebedev 1624cba782
Partially revert "[IR] Attribute/AttrBuilder: use Value::MaximumAlignment magic constant"
Apparently makes bots angry.

This reverts commit d096f8d306.
2020-01-23 23:30:42 +03:00
Roman Lebedev 00756b1823
Revert "[Sema] Attempt to perform call-size-specific `__attribute__((alloc_align(param_idx)))` validation"
Likely makes bots angry.

This reverts commit c2a9061ac5.
2020-01-23 23:10:34 +03:00
Roman Lebedev d096f8d306
[IR] Attribute/AttrBuilder: use Value::MaximumAlignment magic constant
Summary:
I initially encountered those assertions when trying to create
this IR `alignment` attribute from clang's `__attribute__((assume_aligned(imm)))`,
because until D72994 there is no sanity checking for the value of `imm`.

But even then, we have `llvm::Value::MaximumAlignment` constant (which is `536870912`),
which is enforced for clang attributes, and then there are some other magical constant
(`0x40000000` i.e. `1073741824` i.e. `2 * 536870912`) in
`Attribute::getWithAlignment()`/`AttrBuilder::addAlignmentAttr()`.

I strongly suspect that `0x40000000` is incorrect,
and that also should be `llvm::Value::MaximumAlignment`.

Reviewers: erichkeane, hfinkel, jdoerfert, gchatelet, courbet

Reviewed By: erichkeane

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D72998
2020-01-23 22:50:49 +03:00
Roman Lebedev c2a9061ac5
[Sema] Attempt to perform call-size-specific `__attribute__((alloc_align(param_idx)))` validation
Summary:
`alloc_align` attribute takes parameter number, not the alignment itself,
so given **just** the attribute/function declaration we can't do any
sanity checking for said alignment.

However, at call site, given the actual `Expr` that is passed
into that parameter, we //might// be able to evaluate said `Expr`
as Integer Constant Expression, and perform the sanity checks.
But since there is no requirement for that argument to be an immediate,
we may fail, and that's okay.

However if we did evaluate, we should enforce the same constraints
as with `__builtin_assume_aligned()`/`__attribute__((assume_aligned(imm)))`:
said alignment is a power of two, and is not greater than our magic threshold

Reviewers: erichkeane, aaron.ballman, hfinkel, rsmith, jdoerfert

Reviewed By: erichkeane

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72996
2020-01-23 22:50:49 +03:00
Simon Tatham 4321c6af28 [ARM,MVE] Support immediate vbicq,vorrq,vmvnq intrinsics.
Summary:
Immediate vmvnq is code-generated as a simple vector constant in IR,
and left to the backend to recognize that it can be created with an
MVE VMVN instruction. The predicated version is represented as a
select between the input and the same constant, and I've added a
Tablegen isel rule to turn that into a predicated VMVN. (That should
be better than the previous VMVN + VPSEL: it's the same number of
instructions but now it can fold into an adjacent VPT block.)

The unpredicated forms of VBIC and VORR are done by enabling the same
isel lowering as for NEON, recognizing appropriate immediates and
rewriting them as ARMISD::VBICIMM / ARMISD::VORRIMM SDNodes, which I
then instruction-select into the right MVE instructions (now that I've
also reworked those instructions to use the same MC operand encoding).
In order to do that, I had to promote the Tablegen SDNode instance
`NEONvorrImm` to a general `ARMvorrImm` available in MVE as well, and
similarly for `NEONvbicImm`.

The predicated forms of VBIC and VORR are represented as a vector
select between the original input vector and the output of the
unpredicated operation. The main convenience of this is that it still
lets me use the existing isel lowering for VBICIMM/VORRIMM, and not
have to write another copy of the operand encoding translation code.

This intrinsic family is the first to use the `imm_simd` system I put
into the MveEmitter tablegen backend. So, naturally, it showed up a
bug or two (emitting bogus range checks and the like). Fixed those,
and added a full set of tests for the permissible immediates in the
existing Sema test.

Also adjusted the isel pattern for `vmovlb.u8`, which stopped matching
because lowering started turning its input into a VBICIMM. Now it
recognizes the VBICIMM instead.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D72934
2020-01-23 11:53:52 +00:00
Krzysztof Parzyszek 6f3effbbf0 [Hexagon] Update autogenerated intrinsic info in clang
In addition to that, use target features to validate intrinsic
availability on a given target.
2020-01-16 14:20:12 -06:00
Krzysztof Parzyszek bc413da086 [Hexagon] Fix alignment info for __builtin_circ_lduh 2020-01-16 10:54:45 -06:00
Alex Richardson 8c387cbea7 Add builtins for aligning and checking alignment of pointers and integers
This change introduces three new builtins (which work on both pointers
and integers) that can be used instead of common bitwise arithmetic:
__builtin_align_up(x, alignment), __builtin_align_down(x, alignment) and
__builtin_is_aligned(x, alignment).
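
A short sketch of the three builtins (illustrative values; note that the
result keeps the pointer type of the argument):

    #include <stdbool.h>

    char *align_cursor(char *cursor) {
      char *down = __builtin_align_down(cursor, 16);  /* round down to 16 */
      char *up   = __builtin_align_up(cursor, 16);    /* round up to 16 */
      bool ok    = __builtin_is_aligned(up, 16);      /* true by construction */
      (void)down; (void)ok;
      return up;
    }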

I originally added these builtins to the CHERI fork of LLVM a few years ago
to handle the slightly different C semantics that we use for CHERI [1].
Until recently these builtins (or sequences of other builtins) were
required to generate correct code. I have since made changes to the default
C semantics so that they are no longer strictly necessary (but using them
does generate slightly more efficient code). However, based on our experience
using them in various projects over the past few years, I believe that adding
these builtins to clang would be useful.

These builtins have the following benefit over bit-manipulation and casts
via uintptr_t:

- The named builtins clearly convey the semantics of the operation. While
  checking alignment using __builtin_is_aligned(x, 16) versus
  ((x & 15) == 0) is probably not a huge win in readably, I personally find
  __builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including const qualifiers). When
  using casts via uintptr_t, it is easy to cast to the wrong type or strip
  qualifiers such as const.
- If the alignment argument is a constant value, clang can check that it is
  a power-of-two and within the range of the type. Since the semantics of
  these builtins is well defined compared to arbitrary bit-manipulation,
  it is possible to add a UBSAN checker that the run-time value is a valid
  power-of-two. I intend to add this as a follow-up to this change.
- The builtins avoid int-to-pointer casts both in C and LLVM IR.
  In the future (i.e. once most optimizations handle it), we could use the new
  llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally
  be generated.
- They can be used to round up/down to the next aligned value for both
  integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g.
  roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation
  with a builtin call, we get improved diagnostics for many call-sites while
  only having to change a few lines.
- Finally, the builtins also emit assume_aligned metadata when used on pointers.
  This can improve code generation compared to the uintptr_t casts.

[1] In our CHERI compiler we have a compilation mode where all pointers are
implemented as capabilities (essentially unforgeable 128-bit fat pointers).
In our original model, casts from uintptr_t (which is a 128-bit capability)
to an integer value returned the "offset" of the capability (i.e. the
difference between the virtual address and the base of the allocation).
This causes problems for cases such as checking the alignment: for example, the
expression `((uintptr_t)ptr & 63) == 0` is generally used to check if the
pointer is aligned to a multiple of 64 bytes. The problem with offsets is that
any pointer to the beginning of an allocation will have an offset of zero, so
this check always succeeds in that case (even if the address is not correctly
aligned). The same issues also exist when aligning up or down. Using the
alignment builtins ensures that the address is used instead of the offset. While
I have since changed the default C semantics to return the address instead of
the offset when casting, this offset compilation mode can still be used by
passing a command-line flag.

Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
Reviewed By: aaron.ballman, lebedev.ri
Differential Revision: https://reviews.llvm.org/D71499
2020-01-09 21:48:29 +00:00
serge-sans-paille cee4a1c957 Improve support of GNU mempcpy
- Lower to the memcpy intrinsic
- Raise warnings when size/bounds are known

Differential Revision: https://reviews.llvm.org/D71374
2020-01-09 17:31:00 +01:00
Yaxun (Sam) Liu 134ef0fb4b [OpenCL] Fix inconsistency between opencl and c11 atomic fetch max/min
There is some inconsistency between OpenCL and C11 atomic fetch max/min after

https://reviews.llvm.org/D46386

https://reviews.llvm.org/D55562

It is not reasonable to have such inconsistencies. This patch fixes that.

Differential Revision: https://reviews.llvm.org/D71725
2019-12-27 11:29:04 -05:00
Bruno Ricci 7394c15178
[Sema] SequenceChecker: C++17 sequencing rules for built-in operators <<, >>, .*, ->*, =, op=
Implement the C++17 sequencing rules for the built-in operators <<, >>, .*,
 ->*, = and op=.

Differential Revision: https://reviews.llvm.org/D58297

Reviewed By: rsmith
2019-12-22 12:41:14 +00:00
Bruno Ricci 8a571538df
[Sema] SequenceChecker: Fix handling of operator ||, && and ?:
The current handling of the operators ||, && and ?: has a number of false
positives and false negatives. The issues for operator || and && are:

1. We need to add sequencing regions for the LHS and RHS as is done for the
   comma operator. Not doing so causes false positives in expressions like
   `((a++, false) || (a++, false))` (from PR39779, see PR22197 for another
    example).

2. In the current implementation when the evaluation of the LHS fails, the RHS
   is added to a worklist to be processed later. This results in false negatives
   in expressions like `(a && a++) + a`.

Fix these issues by introducing sequencing regions for the LHS and RHS, and by
not deferring the visitation of the RHS.

The issues with the ternary operator ?: are similar, with the added twist that
we should not warn on expressions like `(x ? y += 1 : y += 2)` since exactly
one of the 2nd and 3rd expression is going to be evaluated, but we should still
warn on expressions like `(x ? y += 1 : y += 2) = y`.

Differential Revision: https://reviews.llvm.org/D57747

Reviewed By: rsmith
2019-12-22 12:27:31 +00:00
Bruno Ricci b6eba31292
[Sema] SequenceChecker: Add some comments + related small NFCs
NFCs factored out of the following patches:
- Change all of the `Expr *` to `const Expr *` in SequenceChecker for
  const-correctness. SequenceChecker should not modify AST nodes.
- Add some comments.
- clang-format

Differential Revision: https://reviews.llvm.org/D57659

Reviewed By: xbolva00
2019-12-22 12:07:26 +00:00
Erich Keane 1ed832e424 Reland [NFC-I] Remove hack for fp-classification builtins
The FP-classification builtins (__builtin_isfinite, etc) use variadic
packs in the definition file to mean an overload set.  Because of that,
floats were converted to doubles, which is incorrect. There WAS a patch
to remove the cast after the fact.

This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.

A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.

Note: The initial patch for this missed that certain systems need to
still convert half to float, since they don't support that type.
2019-12-17 06:58:29 -08:00
Richard Smith 4b00299958 [c++20] Add deprecation warnings for the expression forms deprecated by P1120R0.
This covers:
 * usual arithmetic conversions (comparisons, arithmetic, conditionals)
   between different enumeration types
 * usual arithmetic conversions between enums and floating-point types
 * comparisons between two operands of array type

The deprecation warnings are on-by-default (in C++20 compilations); it
seems likely that these forms will become ill-formed in C++23, so
warning on them now by default seems wise.

For the first two bullets, off-by-default warnings were also added for
all the cases where we didn't already have warnings (covering language
modes prior to C++20). These warnings are in subgroups of the existing
-Wenum-conversion (except that the first case is not warned on if either
enumeration type is anonymous, consistent with our existing
-Wenum-conversion warnings).
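
Hypothetical examples of the deprecated forms (C++20):

    enum A { a };
    enum B { b };

    bool f(double d, int (&x)[4], int (&y)[4]) {
      bool r1 = a < b;   // arithmetic conversion between different enumeration types
      bool r2 = a < d;   // arithmetic conversion between an enum and a floating-point type
      bool r3 = x < y;   // comparison of two operands of array type
      return r1 && r2 && r3;
    }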
2019-12-16 17:49:45 -08:00
Erich Keane 3f22b4721e Revert "[NFC-I] Remove hack for fp-classification builtins"
This reverts commit b1e542f302.

The original 'hack' didn't chop out fp-16 to double conversions, so
systems that use FP16ConversionIntrinsics end up in IR-CodeGen with an
i16 type instead of a float type (like PPC64-BE).  The bots noticed
this.

Reverting until I figure out how to fix this
2019-12-16 14:01:51 -08:00
Erich Keane b1e542f302 [NFC-I] Remove hack for fp-classification builtins
The FP-classification builtins (__builtin_isfinite, etc) use variadic
packs in the definition file to mean an overload set.  Because of that,
floats were converted to doubles, which is incorrect. There WAS a patch
to remove the cast after the fact.

This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.

A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.
2019-12-16 12:22:55 -08:00
Jim Lin 9c39663798 Only Remove implicit conversion for the target that support fp16
Remove the implicit conversion that promotes half to double
for targets that support fp16. If the target doesn't
support fp16, fp16 will be converted to an fp16 intrinsic.
2019-12-10 19:15:11 +08:00
Jim Lin cefac9dfaa Remove implicit conversion that promotes half to other larger precision types for fp classification builtins
Summary:
It shouldn't promote half to double or any larger precision types for fp classification builtins,
because fp classification builtins would get an incorrect result with a promoted argument.
For example, __builtin_isnormal with a subnormal half value should return false, but it does not,
because the subnormal half value is promoted to a normal double value.

Reviewers: aaron.ballman

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D71049
2019-12-10 13:24:21 +08:00
Elizabeth Andrews 878a24ee24 Reapply "Fix crash on switch conditions of non-integer types in templates"
This patch reapplies commit 759948467e. The patch was reverted due to a
clang-tidy test failure on Windows. The test has been modified. There
are no additional code changes.

Patch was tested with ninja check-all on Windows and Linux.

Summary of code changes:

Clang currently crashes for switch statements inside a template when the
condition is a non-integer field member because contextual implicit
conversion is skipped when parsing the condition. This conversion is
however later checked in an assert when the case statement is handled.
The conversion is skipped when parsing the condition because
the field member is set as type-dependent based on its containing class.
This patch sets the type dependency based on the field's type instead.

This patch fixes Bug 40982.
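
A hypothetical reproducer for the crash described above:

    struct Wrapped {
      operator int() const { return 0; }
    };

    template <typename T>
    struct Holder {
      Wrapped field;       // non-integer field member
      void f() {
        switch (field) {   // contextual implicit conversion to int
          case 0: break;   // previously hit an assert when the case was handled
        }
      }
    };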
2019-12-03 15:27:19 -08:00
Alexandre Ganea 471d06020a [CIndex] Fix annotate-deep-statements test when using a Debug build
Differential Revision: https://reviews.llvm.org/D70149
2019-11-29 10:52:20 -05:00
Simon Atanasyan f4d32ae75b [mips] Check that features required by built-ins are enabled
Now Clang does not check that features required by built-in functions
are enabled. That causes errors in the backend reported in PR44018.

This patch fixes this bug by checking that required features
are enabled.

This should fix PR44018.

Differential Revision: https://reviews.llvm.org/D70808
2019-11-29 00:23:00 +03:00
Tim Northover 5cf58768cb Atomics: support min/max orthogonally
We seem to have been gradually growing support for atomic min/max operations
(exposing longstanding IR atomicrmw instructions). But until now there have
been gaps in the expected intrinsics. This adds support for the C11-style
intrinsics (i.e. taking _Atomic, rather than being individually blessed by the C11
standard), and the variants that return the new value instead of the original
one.

That way, people won't be misled by trying one form and it not working, and the
front-end is more friendly to people using _Atomic types, as we recommend.
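
For instance, something along these lines should now work (a sketch; assumes
the __c11_atomic_fetch_min spelling):

    #include <stdatomic.h>

    int fetch_min_relaxed(_Atomic int *p, int v) {
      // returns the value *p held before min(*p, v) was stored
      return __c11_atomic_fetch_min(p, v, memory_order_relaxed);
    }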
2019-11-21 10:37:56 +00:00
Erik Pilkington d9957c7405 [Sema] Add a 'Semantic' parameter to Expr::isKnownToHaveBooleanValue
Some clients of this function want to know about any expression that is known
to produce a 0/1 value, and others care about expressions that are semantically
boolean.

This fixes a -Wswitch-bool regression I introduced in 8bfb353bb3, pointed out
by Chris Hamilton!
2019-11-20 16:29:31 -08:00
Tyker b0561b3346 [NFC] Refactor representation of materialized temporaries
Summary:
this patch refactor representation of materialized temporaries to prevent an issue raised by rsmith in https://reviews.llvm.org/D63640#inline-612718

Reviewers: rsmith, martong, shafik

Reviewed By: rsmith

Subscribers: thakis, sammccall, ilya-biryukov, rnkovacs, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D69360
2019-11-19 18:20:45 +01:00
Nico Weber c9276fbfdf Revert "[NFC] Refactor representation of materialized temporaries"
This reverts commit 08ea1ee2db.
It broke ./ClangdTests/FindExplicitReferencesTest.All
on the bots, see comments on https://reviews.llvm.org/D69360
2019-11-17 02:09:25 -05:00
Tyker 08ea1ee2db [NFC] Refactor representation of materialized temporaries
Summary:
this patch refactor representation of materialized temporaries to prevent an issue raised by rsmith in https://reviews.llvm.org/D63640#inline-612718

Reviewers: rsmith, martong, shafik

Reviewed By: rsmith

Subscribers: rnkovacs, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D69360
2019-11-16 17:56:09 +01:00
Tim Northover 44e5879f0f AArch64: add arm64_32 support to Clang. 2019-11-12 12:45:18 +00:00
Melanie Blower d0b3e73175 Revert "Reapply "Fix crash on switch conditions of non-integer types in templates""
This reverts commit 759948467e.
There were build bot failures in clang-tidy
2019-11-08 14:18:15 -08:00
Melanie Blower 759948467e Reapply "Fix crash on switch conditions of non-integer types in templates"
This patch reapplies commit 76945821b9. The first version broke
buildbots due to clang-tidy test failures. The failures are because some
errors in templates are now diagnosed earlier (does not wait till
instantiation). I have modified the tests to add checks for these
diagnostics/prevent these diagnostics. There are no additional code
changes.

Summary of code changes:

Clang currently crashes for switch statements inside a template when the
condition is a non-integer field member because contextual implicit
conversion is skipped when parsing the condition. This conversion is
however later checked in an assert when the case statement is handled.
The conversion is skipped when parsing the condition because
the field member is set as type-dependent based on its containing class.
This patch sets the type dependency based on the field's type instead.

This patch fixes Bug 40982.

Reviewers: rnk, gribozavr2

Patch by: Elizabeth Andrews (eandrews)

Differential revision: https://reviews.llvm.org/D69950
2019-11-08 10:17:06 -08:00
Simon Tatham 08074cc965 [clang,ARM] Initial ACLE intrinsics for MVE.
This commit sets up the infrastructure for auto-generating <arm_mve.h>
and doing clang-side code generation for the builtins it relies on,
and demonstrates that it works by implementing a representative sample
of the ACLE intrinsics, more or less matching the ones introduced in
LLVM IR by D67158,D68699,D68700.

Like NEON, that header file will provide a set of vector types like
uint16x8_t and C functions with names like vaddq_u32(). Unlike NEON,
the ACLE spec for <arm_mve.h> includes a polymorphism system, so that
you can write plain vaddq() and disambiguate by the vector types you
pass to it.
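
For instance (a sketch of the polymorphic usage described here, assuming an
MVE-enabled target):

    #include <arm_mve.h>

    uint32x4_t add_u32(uint32x4_t a, uint32x4_t b) {
      return vaddq(a, b);   // overload resolution picks vaddq_u32
    }

    int16x8_t add_s16(int16x8_t a, int16x8_t b) {
      return vaddq(a, b);   // same spelling, resolves to vaddq_s16
    }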

Unlike the corresponding NEON code, I've arranged to make every user-
facing ACLE intrinsic into a clang builtin, and implement all the code
generation inside clang. So <arm_mve.h> itself contains nothing but
typedefs and function declarations, with the latter all using the new
`__attribute__((__clang_builtin))` system to arrange that the user-
facing function names correspond to the right internal BuiltinIDs.

So the new MveEmitter tablegen system specifies the full sequence of
IRBuilder operations that each user-facing ACLE intrinsic should
translate into. Where possible, the ACLE intrinsics map to standard IR
operations such as vector-typed `add` and `fadd`; where no standard
representation exists, I call down to the sample IR intrinsics
introduced in an earlier commit.

Doing it like this means that you get the polymorphism for free just
by using __attribute__((overloadable)): the clang overload resolution
decides which function declaration is the relevant one, and _then_ its
BuiltinID is looked up, so by the time we're doing code generation,
that's all been resolved by the standard system. It also means that
you get really nice error messages if the user passes the wrong
combination of types: clang will show the declarations from the header
file and explain why each one doesn't match.

(The obvious alternative approach would be to have wrapper functions
in <arm_mve.h> which pass their arguments to the underlying builtins.
But that doesn't work in the case where one of the arguments has to be
a constant integer: the wrapper function can't pass the constantness
through. So you'd have to do that case using a macro instead, and then
use C11 `_Generic` to handle the polymorphism. Then you have to add
horrible workarounds because `_Generic` requires even the untaken
branches to type-check successfully, and //then// if the user gets the
types wrong, the error message is totally unreadable!)

Reviewers: dmgreen, miyuki, ostannard

Subscribers: mgorny, javed.absar, kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67161
2019-10-24 16:33:13 +01:00
Erich Keane f759395994 Reland r374450 with Richard Smith's comments and test fixed.
The behavior from the original patch has changed, since we're no longer
allowing LLVM to just ignore the alignment.  Instead, we're just
assuming the maximum possible alignment.

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374562
2019-10-11 14:59:44 +00:00
Nico Weber b556085d81 Revert 374450 "Fix __builtin_assume_aligned with too large values."
The test fails on Windows, with

  error: 'warning' diagnostics expected but not seen:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 268435456 bytes or smaller; assumption ignored
  error: 'warning' diagnostics seen but not expected:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 8192 bytes or smaller; assumption ignored

llvm-svn: 374456
2019-10-10 21:34:32 +00:00
Erich Keane 31e454c1ec Fix __builtin_assume_aligned with too large values.
Code to handle __builtin_assume_aligned was allowing larger values, but
would convert this to unsigned along the way. This patch removes the
EmitAssumeAligned overloads that take unsigned to do away with this
problem.

Additionally, it adds a warning that values greater than 1 << 29 are
ignored by LLVM.
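
For example (illustrative only):

    void *align_it(void *p) {
      // the requested alignment exceeds the supported maximum, so the
      // assumption is diagnosed and ignored rather than silently misapplied
      return __builtin_assume_aligned(p, 1073741824);   // 1 << 30
    }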

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374450
2019-10-10 21:08:28 +00:00
Yonghong Song 05e46979d2 [BPF] do compile-once run-everywhere relocation for bitfields
A bpf specific clang intrinsic is introduced:
   u32 __builtin_preserve_field_info(member_access, info_kind)
Depending on info_kind, different information will
be returned to the program. A relocation is also
recorded for this builtin so that bpf loader can
patch the instruction on the target host.
This clang intrinsic is used to get certain information
to facilitate struct/union member relocations.

The offset relocation is extended by 4 bytes to
include relocation kind.
Currently supported relocation kinds are
 enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
 };
for __builtin_preserve_field_info. The old
access offset relocation is covered by
    FIELD_BYTE_OFFSET = 0.

An example:
struct s {
    int a;
    int b1:9;
    int b2:4;
};
enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
};

void bpf_probe_read(void *, unsigned, const void *);
int field_read(struct s *arg) {
  unsigned long long ull = 0;
  unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET);
  unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE);
 #ifdef USE_PROBE_READ
  bpf_probe_read(&ull, size, (const void *)arg + offset);
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  lshift = lshift + (size << 3) - 64;
 #endif
 #else
  switch(size) {
  case 1:
    ull = *(unsigned char *)((void *)arg + offset); break;
  case 2:
    ull = *(unsigned short *)((void *)arg + offset); break;
  case 4:
    ull = *(unsigned int *)((void *)arg + offset); break;
  case 8:
    ull = *(unsigned long long *)((void *)arg + offset); break;
  }
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #endif
  ull <<= lshift;
  if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS))
    return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
  return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
}

There is a minor overhead for bpf_probe_read() on big endian.

The code and relocation generated for field_read where bpf_probe_read() is
used to access argument data on little endian mode:
        r3 = r1
        r1 = 0
        r1 = 4  <=== relocation (FIELD_BYTE_OFFSET)
        r3 += r1
        r1 = r10
        r1 += -8
        r2 = 4  <=== relocation (FIELD_BYTE_SIZE)
        call bpf_probe_read
        r2 = 51 <=== relocation (FIELD_LSHIFT_U64)
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r2
        r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
        r0 = r1
        r0 >>= r2
        r3 = 1  <=== relocation (FIELD_SIGNEDNESS)
        if r3 == 0 goto LBB0_2
        r1 s>>= r2
        r0 = r1
LBB0_2:
        exit

Compared to the above code, between relocations FIELD_LSHIFT_U64 and
FIELD_RSHIFT_U64 the code in big endian mode has four more
instructions.
        r1 = 41   <=== relocation (FIELD_LSHIFT_U64)
        r6 += r1
        r6 += -64
        r6 <<= 32
        r6 >>= 32
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r6
        r2 = 60   <=== relocation (FIELD_RSHIFT_U64)

The code and relocation generated when using direct load.
        r2 = 0
        r3 = 4
        r4 = 4
        if r4 s> 3 goto LBB0_3
        if r4 == 1 goto LBB0_5
        if r4 == 2 goto LBB0_6
        goto LBB0_9
LBB0_6:                                 # %sw.bb1
        r1 += r3
        r2 = *(u16 *)(r1 + 0)
        goto LBB0_9
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
        if r4 == 8 goto LBB0_8
        goto LBB0_9
LBB0_8:                                 # %sw.bb9
        r1 += r3
        r2 = *(u64 *)(r1 + 0)
        goto LBB0_9
LBB0_5:                                 # %sw.bb
        r1 += r3
        r2 = *(u8 *)(r1 + 0)
        goto LBB0_9
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51
        r2 <<= r1
        r1 = 60
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

Considering the verifier is able to do limited constant
propagation following branches, the following is the
code actually traversed.
        r2 = 0
        r3 = 4   <=== relocation
        r4 = 4   <=== relocation
        if r4 s> 3 goto LBB0_3
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51   <=== relocation
        r2 <<= r1
        r1 = 60   <=== relocation
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

For the native load case, the load size is calculated to be the
same as the load width LLVM would otherwise use to load
the value, which is then used to extract the bitfield value.

Differential Revision: https://reviews.llvm.org/D67980

llvm-svn: 374099
2019-10-08 18:23:17 +00:00
David Bolvansky aaea76ba02 [Diagnostics] Emit better -Wbool-operation warning message if we know that the result is always true
llvm-svn: 373973
2019-10-07 21:57:03 +00:00
Simon Pilgrim dc4d908d6e Sema - silence static analyzer getAs<> null dereference warnings. NFCI.
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<> directly and if not assert will fire for us.

llvm-svn: 373911
2019-10-07 14:25:46 +00:00
Yuanfang Chen 442ddffe13 [clang] fix a typo from r372531
Reviewers: xbolva00

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68482

llvm-svn: 373792
2019-10-04 21:37:20 +00:00
Erik Pilkington f7766b1ed4 [Sema] Split out -Wformat-type-confusion from -Wformat-pedantic
The warnings now in -Wformat-type-confusion don't align with how we interpret
'pedantic' in clang, and don't belong in -pedantic.

Differential revision: https://reviews.llvm.org/D67775

llvm-svn: 373774
2019-10-04 19:20:27 +00:00
Simon Pilgrim 1cd399c915 Silence static analyzer getAs<RecordType> null dereference warnings. NFCI.
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<RecordType> directly and if not assert will fire for us.

llvm-svn: 373584
2019-10-03 11:22:48 +00:00
Simon Pilgrim e0712019f2 Silence static analyzer getAs<VectorType> null dereference warnings. NFCI.
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<VectorType> directly and if not assert will fire for us.

llvm-svn: 373478
2019-10-02 15:31:25 +00:00
David Bolvansky 471910d754 [Diagnostics] Warn if enumeration type mismatch in conditional expression
Summary:
- Useful warning
- GCC compatibility (GCC warns in C++ mode)
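
For example (illustrative; the enum names are made up):

    enum Fruit  { Apple };
    enum Animal { Cat };

    int pick(bool b) {
      return b ? Apple : Cat;   // warning: enumeration type mismatch in
                                // conditional expression
    }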

Reviewers: rsmith, aaron.ballman

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67919

llvm-svn: 373252
2019-09-30 19:55:50 +00:00
David Bolvansky e52ed1e80c [NFC] Strengthen preconditions for warning
llvm-svn: 372775
2019-09-24 20:10:57 +00:00
David Bolvansky 275e4df115 [Diagnostics] Handle tautological left shifts in boolean context
llvm-svn: 372749
2019-09-24 13:14:18 +00:00
David Bolvansky 849fd28cf0 [Diagnostics] Do not diagnose unsigned shifts in boolean context (-Wint-in-bool-context)
I was looking at the old GCC patch. The current "trunk" version avoids warning for the unsigned case; GCC warns only for signed shifts.

llvm-svn: 372708
2019-09-24 09:14:33 +00:00
Michael Liao 566b3164c5 [Sema] Fix the atomic expr rebuilding order.
Summary:
- Rearrange the atomic expr order to the API order when rebuilding
  atomic expr during template instantiation.

Reviewers: erichkeane

Subscribers: jfb, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67924

llvm-svn: 372640
2019-09-23 18:48:06 +00:00
David Bolvansky 84ea41fd17 [Diagnostics] Warn if '<<' in bool context with -Wint-in-bool-context (GCC compatibility)
Extracted from D63082, addressed review comments related to a warning message.

llvm-svn: 372612
2019-09-23 14:21:08 +00:00
David Bolvansky fb218170b4 [Diagnostics] Warn if ?: with integer constants always evaluates to true
Extracted from D63082. GCC has this warning under -Wint-in-bool-context, but as noted in D63082's review, we should put it under TautologicalConstantCompare.

llvm-svn: 372531
2019-09-22 22:00:48 +00:00
Yonghong Song 91d5c2a035 [CLANG][BPF] permit any argument type for __builtin_preserve_access_index()
Commit c15aa241f8 ("[CLANG][BPF] change __builtin_preserve_access_index()
signature") changed the builtin function signature to
  PointerT __builtin_preserve_access_index(PointerT ptr)
with a pointer type as the argument/return type, where argument and
return types must be the same.

There is really no reason for this constraint. The builtin just
marks a code region so that the IR builtins
  __builtin_{array, struct, union}_preserve_access_index
can be applied.

This patch removed the pointer type restriction to permit any
argument type as long as it is permitted by the compiler.

Differential Revision: https://reviews.llvm.org/D67883

llvm-svn: 372516
2019-09-22 17:33:48 +00:00
Erich Keane 830909b97a Ensure AtomicExpr goes through SEMA checking after TreeTransform
RebuildAtomicExpr was skipping semantic analysis, which broke in
the cases where the expressions were not dependent. This resulted in the
ImplicitCastExpr from an array to a pointer being lost, causing a crash
in IR CodeGen.

Differential Revision: https://reviews.llvm.org/D67854

llvm-svn: 372422
2019-09-20 19:17:31 +00:00
Yonghong Song c15aa241f8 [CLANG][BPF] change __builtin_preserve_access_index() signature
The clang intrinsic __builtin_preserve_access_index() currently
has signature:
  const void * __builtin_preserve_access_index(const void * ptr)

This may cause compiler warnings when:
  - parameter type is "volatile void *" or "const volatile void *", or
  - the assign-to type of the intrinsic does not have "const" qualifier.
Further, this signature does not allow dereferencing the
builtin result pointer, as it is a "const void *" type, which
adds an extra type-casting step for the user.

Let us change the signature to:
  PointerT __builtin_preserve_access_index(PointerT ptr)
such that the result and argument types are the same.
With this, directly dereferencing the builtin return value
becomes possible.

Differential Revision: https://reviews.llvm.org/D67734

llvm-svn: 372294
2019-09-19 02:59:43 +00:00
Erik Pilkington 5741d19f04 [Sema] Suppress -Wformat diagnostics for bool types when printed using %hhd
Also, add a diagnostic under -Wformat for printing a boolean value as a
character.
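
A sketch of both cases (illustrative; the exact diagnostic text may differ):

    #include <stdbool.h>
    #include <stdio.h>

    void report(bool ok) {
      printf("%hhd\n", ok);   // now accepted without a -Wformat warning
      printf("%c\n", ok);     // new warning: printing a boolean as a character
    }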

rdar://54579473

Differential revision: https://reviews.llvm.org/D66856

llvm-svn: 372247
2019-09-18 19:05:14 +00:00
Erik Pilkington 5c62152275 [Sema] Split of versions of -Wimplicit-{float,int}-conversion for Objective-C BOOL
Also, add a diagnostic group, -Wobjc-signed-char-bool, to control all these
related diagnostics.

rdar://51954400

Differential revision: https://reviews.llvm.org/D67559

llvm-svn: 372183
2019-09-17 21:11:51 +00:00
Craig Topper ce2cb0f09e [X86] Allow _MM_FROUND_CUR_DIRECTION and _MM_FROUND_NO_EXC to be used together on instructions that only support SAE and not embedded rounding.
Currently, for SAE instructions we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just a default placeholder for when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT) since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.

Differential Revision: https://reviews.llvm.org/D67289

llvm-svn: 371430
2019-09-09 17:48:05 +00:00
David Bolvansky fd07568074 [Diagnostics] Refactor code for -Wsizeof-pointer-div, catch more cases; also add -Wsizeof-array-div
Previously, -Wsizeof-pointer-div failed to catch:
const int *r;
sizeof(r) / sizeof(int);

Now fixed.
Also introduced -Wsizeof-array-div which catches bugs like:
sizeof(r) / sizeof(short);

(Array element type does not match type of sizeof operand).

llvm-svn: 371222
2019-09-06 16:12:48 +00:00
Jinsong Ji a71c199f82 [PowerPC][Altivec][Clang] Check compile-time constant for vec_dst*
Summary:
This is a follow-up of https://reviews.llvm.org/D66699.
We might get an ISel ICE if we call vec_dss with a non-const 3rd arg.

```
Cannot select: intrinsic %llvm.ppc.altivec.dst
```

We should check the constraints in clang and generate better error
messages.

Reviewers: nemanjai, hfinkel, echristo, #powerpc, wuzish

Reviewed By: #powerpc, wuzish

Subscribers: wuzish, kbarton, MaskRay, shchenz, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D66748

llvm-svn: 370912
2019-09-04 15:22:26 +00:00
Jinsong Ji 5309189d9b [PowerPC][Altivec] Fix constant argument for vec_dss
Summary:
This is similar to vec_ct* in https://reviews.llvm.org/rL304205.

The argument must be a constant, otherwise instruction selection
will fail. always_inline is not enough for isel to always fold
everything away at -O0.

The fix is to turn the function into macros in altivec.h.
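
For reference, a minimal use of one of the affected intrinsics (a sketch;
requires an Altivec-enabled target):

    #include <altivec.h>

    void stop_stream(void) {
      vec_dss(0);   // the argument must be a compile-time constant in [0, 3]
    }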

Fixes https://bugs.llvm.org/show_bug.cgi?id=43072

Reviewers: nemanjai, hfinkel, #powerpc, wuzish

Reviewed By: #powerpc, wuzish

Subscribers: wuzish, kbarton, MaskRay, shchenz, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D66699

llvm-svn: 370902
2019-09-04 14:01:47 +00:00
Sven van Haastregt a280b63ead [OpenCL] Fix diagnosing enqueue_kernel call with too few args
The err_typecheck_call_too_few_args diagnostic takes arguments, but
none were provided, causing clang to crash when attempting to diagnose
an enqueue_kernel call with too few arguments.

Fixes llvm.org/PR42045

Differential Revision: https://reviews.llvm.org/D66883

llvm-svn: 370322
2019-08-29 10:21:06 +00:00
Nathan Huckleberry cc01d6421f [Sema] Don't warn on printf('%hd', [char]) (PR41467)
Summary: Link: https://bugs.llvm.org/show_bug.cgi?id=41467
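
A sketch of the case from the PR:

    #include <stdio.h>

    void log_byte(char c) {
      printf("%hd\n", c);   // no longer warned about: c promotes to int, and
                            // any char value fits in the short %hd expects
    }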

Reviewers: rsmith, nickdesaulniers, aaron.ballman, lebedev.ri

Reviewed By: nickdesaulniers, aaron.ballman, lebedev.ri

Subscribers: lebedev.ri, nickdesaulniers, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D66186

llvm-svn: 369791
2019-08-23 18:01:57 +00:00
Erik Pilkington aa3855694f [Sema][ObjC] Fix a -Wformat false positive with localizedStringForKey
Only honour format_arg attributes on -[NSBundle localizedStringForKey] when its
argument has a format specifier in it; otherwise it's likely to just be a key to
fetch localized strings.

Fixes rdar://23622446

Differential revision: https://reviews.llvm.org/D27165

llvm-svn: 368878
2019-08-14 16:57:11 +00:00
Dmitri Gribenko a5ef73cb4b Revert "Fix crash on switch conditions of non-integer types in templates"
This reverts commit r368706. It broke ClangTidy tests.

llvm-svn: 368738
2019-08-13 19:07:28 +00:00
Elizabeth Andrews 76945821b9 Fix crash on switch conditions of non-integer types in templates
Clang currently crashes for switch statements inside a template when
the condition is a non-integer field. The crash is due to incorrect
type-dependency of field. Type-dependency of member expressions is
currently set based on the containing class. This patch changes this for
'members of the current instantiation' to set the type dependency based
on the member's type instead.

A few lit tests started to fail once I applied this patch because errors
are now diagnosed earlier (does not wait till instantiation). I've modified
these tests in this patch as well.

Patch fixes PR#40982

Differential Revision: https://reviews.llvm.org/D61027

llvm-svn: 368706
2019-08-13 15:53:19 +00:00
Ziang Wan 87b668befe [Sema] Enable -Wimplicit-float-conversion for integral to floating point precision loss
Issue a warning when the code tries to do an implicit int -> float
conversion where the float type has a narrower significand than the
integer type.
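
For instance (a minimal sketch):

    float widen(int x) {
      // int has more significand bits than float, so this implicit
      // conversion can lose precision and is now diagnosed
      return x;
    }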

The new warning is controlled by flag -Wimplicit-int-float-conversion,
under -Wimplicit-float-conversion and -Wconversion. It is also silenced
when c++11 narrowing warning is issued.

Differential Revision: https://reviews.llvm.org/D64666

llvm-svn: 367497
2019-08-01 00:16:43 +00:00
Momchil Velikov a36d31478c [AArch64] Add support for Transactional Memory Extension (TME)
Re-commit r366322 after some fixes

TME is a future architecture technology, documented in

  https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools
  https://developer.arm.com/docs/ddi0601/a

More about the future architectures:

  https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/new-technologies-for-the-arm-a-profile-architecture

This patch adds support for the TME instructions TSTART, TTEST, TCOMMIT, and
TCANCEL and the target feature/arch extension "tme".

It also implements TME builtin functions, defined in ACLE Q2 2019
(https://developer.arm.com/docs/101028/latest)

Differential Revision: https://reviews.llvm.org/D64416

Patch by Javed Absar and Momchil Velikov

llvm-svn: 367428
2019-07-31 12:52:17 +00:00
George Burgess IV 9d045a5c1e [Sema] add -Walloca to flag uses of `alloca`
This CL adds an optional warning to diagnose uses of the
`__builtin_alloca` family of functions. The use of these functions is
discouraged by many, so it seems like a good idea to allow clang to warn
about it.
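
For example (a sketch; compile with -Walloca to see the new warning):

    #include <stddef.h>

    void fill(size_t n) {
      char *buf = (char *)__builtin_alloca(n);   // flagged by -Walloca
      for (size_t i = 0; i < n; ++i)
        buf[i] = 0;
    }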

Patch by Elaina Guan!

Differential Revision: https://reviews.llvm.org/D64883

llvm-svn: 367067
2019-07-25 22:23:40 +00:00
Petr Hosek f55f51b7be Revert "[Sema] Enable -Wimplicit-float-conversion for integral to floating point precision loss"
This reverts commit r366972 which broke the following tests:

  Clang :: CXX/dcl.decl/dcl.init/dcl.init.list/p7-0x.cpp
  Clang :: CXX/dcl.decl/dcl.init/dcl.init.list/p7-cxx11-nowarn.cpp

llvm-svn: 366979
2019-07-25 03:11:49 +00:00
Ziang Wan 2028d97d09 [Sema] Enable -Wimplicit-float-conversion for integral to floating point precision loss
Issue a warning when the code tries to do an implicit int -> float
conversion where the float type has a narrower significand than the
integer type.

The new warning is controlled by flag -Wimplicit-int-float-conversion,
under -Wimplicit-float-conversion and -Wconversion.

Differential Revision: https://reviews.llvm.org/D64666

llvm-svn: 366972
2019-07-25 00:32:50 +00:00
Momchil Velikov 0e2b74a2b0 Revert [AArch64] Add support for Transactional Memory Extension (TME)
This reverts r366322 (git commit 4b8da3a503)

llvm-svn: 366355
2019-07-17 17:43:32 +00:00
Momchil Velikov 4b8da3a503 [AArch64] Add support for Transactional Memory Extension (TME)
TME is a future architecture technology, documented in

https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools
https://developer.arm.com/docs/ddi0601/a

More about the future architectures:

https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/new-technologies-for-the-arm-a-profile-architecture

This patch adds support for the TME instructions TSTART, TTEST, TCOMMIT, and
TCANCEL and the target feature/arch extension "tme".

It also implements TME builtin functions, defined in ACLE Q2 2019
(https://developer.arm.com/docs/101028/latest)

Patch by Javed Absar and Momchil Velikov

Differential Revision: https://reviews.llvm.org/D64416

llvm-svn: 366322
2019-07-17 13:23:27 +00:00
Rui Ueyama 49a3ad21d6 Fix parameter name comments using clang-tidy. NFC.
This patch applies clang-tidy's bugprone-argument-comment tool
to LLVM, clang and lld source trees. Here is how I created this
patch:

$ git clone https://github.com/llvm/llvm-project.git
$ cd llvm-project
$ mkdir build
$ cd build
$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \
    -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \
    -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \
    -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
$ ninja
$ parallel clang-tidy -checks='-*,bugprone-argument-comment' \
    -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \
    ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h}

llvm-svn: 366177
2019-07-16 04:46:31 +00:00
Ulrich Weigand b98bf60ef7 [SystemZ] Add support for new cpu architecture - arch13
This patch series adds support for the next-generation arch13
CPU architecture to the SystemZ backend.

This includes:
- Basic support for the new processor and its features.
- Support for low-level builtins mapped to new LLVM intrinsics.
- New high-level intrinsics in vecintrin.h.
- Indicate support by defining __VEC__ == 10303.

Note: No currently available Z system supports the arch13
architecture.  Once new systems become available, the
official system name will be added as supported -march name.

llvm-svn: 365933
2019-07-12 18:14:51 +00:00
Erik Pilkington abffae3a56 [ObjC] Add a warning for implicit conversions of a constant non-boolean value to BOOL
rdar://51954400

Differential revision: https://reviews.llvm.org/D63912

llvm-svn: 365518
2019-07-09 17:29:40 +00:00
Yonghong Song 048493f882 [BPF] Preserve debuginfo array/union/struct type/access index
For background of BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs that are
adjustable to struct/union layout changes, so the same program can
run on multiple kernels, with adjustment before loading based on
native kernel structures.

In order to do this, we need to keep track of GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEP; also, since a
union is replaced by a structure, it is impossible to track the
field index for union member accesses.

Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
  addr = preserve_array_access_index(base, index, dimension)
  addr = preserve_union_access_index(base, di_index)
  addr = preserve_struct_access_index(base, gep_index, di_index)
here,
  base: the base pointer for the array/union/struct access.
  index: the last access index for array, the same for IR/DebugInfo layout.
  dimension: the array dimension.
  gep_index: the access index based on IR layout.
  di_index: the access index based on user/debuginfo types.

If using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
  base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().

For example, for the following example,
  $ cat test.c
  struct sk_buff {
     int i;
     int b1:1;
     int b2:2;
     union {
       struct {
         int o1;
         int o2;
       } o;
       struct {
         char flags;
         char dev_id;
       } dev;
       int netid;
     } u[10];
  };

  static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
      = (void *) 4;

  #define _(x) (__builtin_preserve_access_index(x))

  int bpf_prog(struct sk_buff *ctx) {
    char dev_id;
    bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
    return dev_id;
  }
  $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
    test.c >& log

The generated IR looks like below:
  ...
  define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
    %2 = alloca %struct.sk_buff*, align 8
    %3 = alloca i8, align 1
    store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
    call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
    call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
    call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
    %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
    %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
    %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
         %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
    %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
         [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
    %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
         %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
    %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
    %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
         %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
    %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
    %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
    %13 = sext i8 %12 to i32, !dbg !54
    call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
    ret i32 %13, !dbg !57
  }

  !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
  !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
  !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.

For &ctx->u[5].dev.dev_id,
  . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
  . The "%7 = ..." represents array subscript "5".
  . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
  . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.

Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access index
can be achieved.

The intrinsics also contain enough information to regenerate codes for IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D61809

llvm-svn: 365438
2019-07-09 04:21:50 +00:00
Yonghong Song e085b40e9c Revert "[BPF] Preserve debuginfo array/union/struct type/access index"
This reverts commit r365435.

Forgot adding the Differential Revision link. Will add to the
commit message and resubmit.

llvm-svn: 365436
2019-07-09 04:15:12 +00:00
Yonghong Song f21eeafcd9 [BPF] Preserve debuginfo array/union/struct type/access index
For background of BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs that are
adjustable to struct/union layout changes, so the same program can
run on multiple kernels, with adjustment before loading based on
native kernel structures.

In order to do this, we need to keep track of GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEP; also, since a
union is replaced by a structure, it is impossible to track the
field index for union member accesses.

Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
  addr = preserve_array_access_index(base, index, dimension)
  addr = preserve_union_access_index(base, di_index)
  addr = preserve_struct_access_index(base, gep_index, di_index)
here,
  base: the base pointer for the array/union/struct access.
  index: the last access index for array, the same for IR/DebugInfo layout.
  dimension: the array dimension.
  gep_index: the access index based on IR layout.
  di_index: the access index based on user/debuginfo types.

If using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
  base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().

For example, for the following example,
  $ cat test.c
  struct sk_buff {
     int i;
     int b1:1;
     int b2:2;
     union {
       struct {
         int o1;
         int o2;
       } o;
       struct {
         char flags;
         char dev_id;
       } dev;
       int netid;
     } u[10];
  };

  static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
      = (void *) 4;

  #define _(x) (__builtin_preserve_access_index(x))

  int bpf_prog(struct sk_buff *ctx) {
    char dev_id;
    bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
    return dev_id;
  }
  $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
    test.c >& log

The generated IR looks like below:
  ...
  define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
    %2 = alloca %struct.sk_buff*, align 8
    %3 = alloca i8, align 1
    store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
    call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
    call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
    call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
    %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
    %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
    %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
         %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
    %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
         [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
    %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
         %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
    %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
    %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
         %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
    %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
    %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
    %13 = sext i8 %12 to i32, !dbg !54
    call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
    ret i32 %13, !dbg !57
  }

  !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
  !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
  !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.

For &ctx->u[5].dev.dev_id,
  . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
  . The "%7 = ..." represents array subscript "5".
  . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
  . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.

Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access index
can be achieved.

The intrinsics also contain enough information to regenerate codes for IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 365435
2019-07-09 04:04:21 +00:00
Erik Pilkington fa591c370d [ObjC] Add a -Wtautological-compare warning for BOOL
On macOS, BOOL is a typedef for signed char, but it should never hold a value
that isn't 1 or 0. Any code that expects a different value in their BOOL should
be fixed.

rdar://51954400

Differential revision: https://reviews.llvm.org/D63856

llvm-svn: 365408
2019-07-08 23:42:52 +00:00
Fangrui Song 7264a474b7 Change std::{lower,upper}_bound to llvm::{lower,upper}_bound or llvm::partition_point. NFC
llvm-svn: 365006
2019-07-03 08:13:17 +00:00
Gauthier Harnisch 0bb4d46b2b [clang] perform semantic checking in constant context
Summary:
Since the addition of __builtin_is_constant_evaluated, the result of an expression can change based on whether it is evaluated in constant context. A lot of semantic checking performs evaluations without specifying the context, which can lead to wrong diagnostics.
for example:
```
constexpr int i0 = (long long)__builtin_is_constant_evaluated() * (1ll << 33); //#1
constexpr int i1 = (long long)!__builtin_is_constant_evaluated() * (1ll << 33); //#2
```
Before the patch, #2 was diagnosed incorrectly and #1 wasn't diagnosed.
After the patch, #1 is diagnosed as it should be and #2 isn't.

Changes:
 - add a flag to Sema to pass in constant context mode.
 - in SemaChecking.cpp calls to Expr::Evaluate* are now done in constant context when they should be.
 - in SemaChecking.cpp diagnostics for UB are not checked for in constant context because an error will be emitted by the constant evaluator.
 - in SemaChecking.cpp diagnostics for constructs that cannot appear in constant context are not checked for in constant context.
 - in SemaChecking.cpp diagnostics on constant expressions are always emitted because constant expressions are always evaluated.
 - semantic checking for initialization of constexpr variables is now done in constant context.
 - adapt tests that were depending on warning changes.
 - add test.

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D62009

llvm-svn: 363488
2019-06-15 08:32:56 +00:00
Craig Topper 9967a6c60a [X86] Add checks that immediate for reducesd/ss fits in 8-bits.
llvm-svn: 363472
2019-06-14 23:23:19 +00:00
Richard Smith 715f7a1bd0 For DR712: store on a DeclRefExpr whether it constitutes an odr-use.
Begin restructuring to support the forms of non-odr-use reference
permitted by DR712.

llvm-svn: 363086
2019-06-11 17:50:32 +00:00