Commit Graph

3600 Commits

Author SHA1 Message Date
Reid Kleckner 7177ce978e [SEH] Defer checking filter expression types until instantiation
While here, wordsmith the error a bit. Now clang says:
  error: filter expression has non-integral type 'Foo'

Fixes PR43779
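
A minimal sketch of the kind of code this affects (my own illustration, not
from the patch's tests); the filter's type is now only checked once the
template is instantiated:

  // Built for an MSVC-compatible target so __try/__except are available.
  struct Foo {};

  template <typename T> void guarded(T filter) {
    __try {
      // ...
    } __except (filter) { // with T = Foo, clang reports on instantiation:
    }                     //   filter expression has non-integral type 'Foo'
  }

  void use() { guarded(Foo()); } // instantiation triggers the check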

Reviewers: amccarth

Differential Revision: https://reviews.llvm.org/D69969
2019-11-07 14:52:04 -08:00
Dávid Bolvanský 01b10bc7b1 [Diagnostics] Teach -Wnull-dereference about address_space attribute
Summary:
Clang should not warn for:

> test.c:2:12: warning: indirection of non-volatile null pointer will be deleted,
>       not trap [-Wnull-dereference]
>     return *(int __attribute__((address_space(256))) *) 0;
>            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Solves PR42292.

Reviewers: aaron.ballman, rsmith

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D69664
2019-11-07 22:43:27 +01:00
Simon Tatham 6c3fee47a6 [ARM,MVE] Add intrinsics for gather/scatter load/stores.
This patch adds two new families of intrinsics, both of which are
memory accesses taking a vector of locations to load from / store to.

The vldrq_gather_base / vstrq_scatter_base intrinsics take a vector of
base addresses, and an immediate offset to be added consistently to
each one. vldrq_gather_offset / vstrq_scatter_offset take a scalar
base address, and a vector of offsets to add to it. The
'shifted_offset' variants also multiply each offset by the element
size, so that the vector is effectively a vector of array indices.
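
As a rough usage sketch (the polymorphic family names come from this patch;
the suffixed spellings and exact signatures below are my recollection of the
ACLE naming scheme, so treat them as assumptions):

  #include <arm_mve.h>

  /* Gather four 32-bit elements from base[offsets[0..3]]; the shifted_offset
     variant scales each lane of 'offsets' by the element size, so the vector
     behaves like a vector of array indices. */
  int32x4_t gather_demo(const int32_t *base, uint32x4_t offsets) {
    return vldrwq_gather_shifted_offset_s32(base, offsets);
  }

  /* Scatter the lanes of 'values' back to the same locations. */
  void scatter_demo(int32_t *base, uint32x4_t offsets, int32x4_t values) {
    vstrwq_scatter_shifted_offset_s32(base, offsets, values);
  }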

At the IR level, these operations are represented by a single set of
four IR intrinsics: {gather,scatter} × {base,offset}. The other
details (signed/unsigned, shift, and memory element size as opposed to
vector element size) are all specified by IR intrinsic polymorphism
and immediate operands, because that made the selection job easier
than making a huge family of similarly named intrinsics.

I considered using the standard IR representations such as
llvm.masked.gather, but they're not a good fit. In order to use
llvm.masked.gather to represent a gather_offset load with element size
smaller than a pointer, you'd have to expand the <8 x i16> vector of
offsets into an <8 x i16*> vector of pointers, which would be split up
during legalization, so you'd spend most of your time undoing the mess
it had made. Also, ISel support for llvm.masked.gather would be easy
enough in a trivial way (you can expand it into a gather-base load
with a zero immediate offset), but instruction-selecting lots of
fiddly idioms back into all the _other_ MVE load instructions would be
much more work. So I think dedicated IR intrinsics are the more
sensible approach, at least for the moment.

On the clang tablegen side, I've added two new features to the
Tablegen source accepted by MveEmitter: a 'CopyKind' type node for
defining a type that varies with the parameter type (it lets you ask
for an unsigned integer type of the same width as the parameter), and
an 'unsignedflag' value node for passing an immediate IR operand which
is 0 for a signed integer type or 1 for an unsigned one. That lets me
write each kind of intrinsic just once and get all its subtypes and
immediate arguments generated automatically.

Also I've tweaked the handling of pointer-typed values in the code
generation part of MveEmitter: they're generated as Address rather
than Value (i.e. including an alignment) so that they can be given to
the ordinary IR load and store operations, but I'd omitted the code to
convert them back to Value when they're going to be used as an
argument to an IR intrinsic.

On the MC side, I've enhanced MVEVectorVTInfo so that it can tell you
not only the full assembly-language suffix for a given vector type
(like 's32' or 'u16') but also the numeric-only one used by store
instructions (just '32' or '16').

Reviewers: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D69791
2019-11-06 09:01:42 +00:00
Guillaume Chatelet 5607ff12fa Fix missing memcpy builtin on ppc64be
See D68028
2019-10-29 16:35:32 +01:00
Guillaume Chatelet 98f3151a7d [clang] Add no_builtin attribute
Summary:
This is a follow up on https://reviews.llvm.org/D61634
This patch is simpler and only adds the no_builtin attribute.
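
A short usage sketch (my own; it assumes the attribute takes the names of the
builtins to suppress):

  /* Keep clang from pattern-matching this loop back into a memcpy call,
     which matters e.g. when implementing memcpy itself. */
  __attribute__((no_builtin("memcpy")))
  void *my_memcpy(void *dst, const void *src, unsigned long n) {
    char *d = dst;
    const char *s = src;
    for (unsigned long i = 0; i < n; ++i)
      d[i] = s[i];
    return dst;
  }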

Reviewers: tejohnson, courbet, theraven, t.p.northover, jdoerfert

Subscribers: mgrang, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68028

This is a re-submit after it got reverted in https://reviews.llvm.org/rGbd8791610948 since the breakage doesn't seem to come from this patch.
2019-10-29 15:50:29 +01:00
Vlad Tsyrklevich ad531fff81 Revert "[clang] Add no_builtin attribute"
This reverts commit bd87916109. It was
causing ASan/MSan failures on the sanitizer buildbots.
2019-10-28 15:21:59 -07:00
Guillaume Chatelet bd87916109 [clang] Add no_builtin attribute
Summary:
This is a follow up on https://reviews.llvm.org/D61634
This patch is simpler and only adds the no_builtin attribute.

Reviewers: tejohnson, courbet, theraven, t.p.northover, jdoerfert

Subscribers: mgrang, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68028
2019-10-28 17:30:11 +01:00
David Goldman 7a2b704bf0 [Sema][Typo Correction] Fix another infinite loop on ambiguity
See also: D67515

- For the given call expression we would end up trying to transform
   the same expression over and over again

- The fix is to keep the old TransformCache when checking for ambiguity

Differential Revision: https://reviews.llvm.org/D69060
2019-10-25 13:20:27 -04:00
Simon Tatham 7c11da0cfd [clang] New __attribute__((__clang_arm_mve_alias)).
This allows you to declare a function with a name of your choice (say
`foo`), but have clang treat it as if it were a builtin function (say
`__builtin_foo`), by writing

  static __inline__ __attribute__((__clang_arm_mve_alias(__builtin_foo)))
  int foo(args);

I'm intending to use this for the ACLE intrinsics for MVE, which have
to be polymorphic on their argument types and also need to be
implemented by builtins. To avoid having to implement the polymorphism
with several layers of nested _Generic and make error reporting
hideous, I want to make all the user-facing intrinsics correspond
directly to clang builtins, so that after clang resolves
__attribute__((overloadable)) polymorphism it's already holding the
right BuiltinID for the intrinsic it selected.

However, this commit itself just introduces the new attribute, and
doesn't use it for anything.

To avoid unanticipated side effects if this attribute is used to make
aliases to other builtins, there's a restriction mechanism: only
(BuiltinID, alias) pairs that are approved by the function
ArmMveAliasValid() will be permitted. At present, that function
doesn't permit anything, because the Tablegen that will generate its
list of valid pairs isn't yet implemented. So the only test of this
facility is one that checks that an unapproved builtin _can't_ be
aliased.

Reviewers: dmgreen, miyuki, ostannard

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67159
2019-10-24 16:33:13 +01:00
Richard Trieu 637af4cc37 Add -Wbitwise-conditional-parentheses to warn on mixing '|' and '&' with "?:"
Extend -Wparentheses to cover mixing bitwise-and and bitwise-or with the
conditional operator. There are two main cases seen with this:

unsigned bits1 = 0xf0 | cond ? 0x4 : 0x1;
unsigned bits2 = cond1 ? 0xf0 : 0x10 | cond2 ? 0x5 : 0x2;

// Intended order of evaluation:
unsigned bits1 = 0xf0 | (cond ? 0x4 : 0x1);
unsigned bits2 = (cond1 ? 0xf0 : 0x10) | (cond2 ? 0x5 : 0x2);

// Actual order of evaluation:
unsigned bits1 = (0xf0 | cond) ? 0x4 : 0x1;
unsigned bits2 = cond1 ? 0xf0 : ((0x10 | cond2) ? 0x5 : 0x2);

Differential Revision: https://reviews.llvm.org/D66043

llvm-svn: 375326
2019-10-19 01:47:49 +00:00
Richard Trieu 8b0d14a8f0 New tautological warning for bitwise-or with non-zero constant always true.
Taking a value and bitwise-or'ing it with a non-zero constant will always
result in a non-zero value. In a boolean context, this is always true.

if (x | 0x4) {}  // always true, intended '&'

This patch creates a new warning group -Wtautological-bitwise-compare for this
warning. It also moves the existing tautological bitwise comparisons into
this group. A few other changes were needed to the CFGBuilder so that all bool
contexts would be checked. The warnings in -Wtautological-bitwise-compare will
be off by default due to using the CFG.

Fixes: https://bugs.llvm.org/show_bug.cgi?id=42666
Differential Revision: https://reviews.llvm.org/D66046

llvm-svn: 375318
2019-10-19 00:57:23 +00:00
Dmitry Mikulin f14642f2f1 Added support for "#pragma clang section relro=<name>"
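
A small usage sketch (my own illustration of the new section kind; which
objects actually qualify for relro placement is my reading, not the patch's):

  #pragma clang section relro="my_relro"
  /* A const pointer initialized with an address needs a dynamic relocation,
     so it is read-only only after relocation and is a relro candidate. */
  extern int target;
  int *const ptr = &target;
  #pragma clang section relro=""   /* reset to the default section */
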
Differential Revision: https://reviews.llvm.org/D68806

llvm-svn: 374934
2019-10-15 18:31:10 +00:00
Erich Keane add0786dba Fix test failure with 374562 on Hexagon
__builtin_assume_aligned takes a size_t, which is a 32-bit int on
hexagon.  Thus, the constant gets converted to a 32-bit value, resulting
in 0, which is not a power of 2.  This patch changes the constant being
passed to 2**30 so that it still fails but doesn't exceed 30 bits.

llvm-svn: 374569
2019-10-11 16:30:45 +00:00
Erich Keane f759395994 Reland r374450 with Richard Smith's comments and test fixed.
The behavior from the original patch has changed, since we're no longer
allowing LLVM to just ignore the alignment.  Instead, we're just
assuming the maximum possible alignment.

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374562
2019-10-11 14:59:44 +00:00
Nico Weber b556085d81 Revert 374450 "Fix __builtin_assume_aligned with too large values."
The test fails on Windows, with

  error: 'warning' diagnostics expected but not seen:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 268435456 bytes or smaller; assumption ignored
  error: 'warning' diagnostics seen but not expected:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 8192 bytes or smaller; assumption ignored

llvm-svn: 374456
2019-10-10 21:34:32 +00:00
Erich Keane 31e454c1ec Fix __builtin_assume_aligned with too large values.
Code to handle __builtin_assume_aligned was allowing larger values, but
would convert this to unsigned along the way. This patch removes the
EmitAssumeAligned overloads that take unsigned to do away with this
problem.

Additionally, it adds a warning that values greater than 1 << 29 are
ignored by LLVM.
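
A small example of the boundary (my own sketch):

  void *fine(void *p) {
    return __builtin_assume_aligned(p, 128);   /* normal use, assumption kept */
  }

  void *too_big(void *p) {
    /* Per the note above, alignments greater than 1 << 29 are ignored by LLVM,
       and clang now warns instead of quietly truncating the value. */
    return __builtin_assume_aligned(p, 1u << 30);
  }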

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374450
2019-10-10 21:08:28 +00:00
Reid Kleckner 5e866e411c Add -fgnuc-version= to control __GNUC__ and other GCC macros
I noticed that compiling on Windows with -fno-ms-compatibility had the
side effect of defining __GNUC__, along with __GNUG__, __GXX_RTTI__, and
a number of other macros for GCC compatibility. This is undesirable and
causes Chromium to do things like mix __attribute__ and __declspec,
which doesn't work. We should have a positive language option to enable
GCC compatibility features so that we can experiment with
-fno-ms-compatibility on Windows. This change adds -fgnuc-version= to be
that option.

My issue aside, users have, for a long time, reported that __GNUC__
doesn't match their expectations in one way or another. We have
encouraged users to migrate code away from this macro, but new code
continues to be written assuming a GCC-only environment. There's really
nothing we can do to stop that. By adding this flag, we can allow them
to choose their own adventure with __GNUC__.

This overlaps a bit with the "GNUMode" language option from -std=gnu*.
The gnu language mode tends to enable non-conforming behaviors that we'd
rather not enable by default, but we want to set things like
__GXX_RTTI__ by default, so I've kept these separate.

Helps address PR42817
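
A hedged sketch of how code keyed on __GNUC__ interacts with the new flag (the
particular values shown, e.g. '-fgnuc-version=4.2.1' and '-fgnuc-version=0' to
suppress the macros, are my assumption about the flag's interface):

  /* With -fgnuc-version=4.2.1 the GCC-compat branch is taken; with
     -fgnuc-version=0 __GNUC__ is not predefined and the fallback is used. */
  #if defined(__GNUC__)
  #  define MY_NOINLINE __attribute__((noinline))
  #else
  #  define MY_NOINLINE
  #endif

  MY_NOINLINE int identity(int x) { return x; }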

Reviewed By: hans, nickdesaulniers, MaskRay

Differential Revision: https://reviews.llvm.org/D68055

llvm-svn: 374449
2019-10-10 21:04:25 +00:00
Yonghong Song 05e46979d2 [BPF] do compile-once run-everywhere relocation for bitfields
A BPF-specific clang intrinsic is introduced:
   u32 __builtin_preserve_field_info(member_access, info_kind)
Depending on info_kind, different information will
be returned to the program. A relocation is also
recorded for this builtin so that bpf loader can
patch the instruction on the target host.
This clang intrinsic is used to get certain information
to facilitate struct/union member relocations.

The offset relocation is extended by 4 bytes to
include relocation kind.
Currently supported relocation kinds are
 enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
 };
for __builtin_preserve_field_info. The old
access offset relocation is covered by
    FIELD_BYTE_OFFSET = 0.

An example:
struct s {
    int a;
    int b1:9;
    int b2:4;
};
enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
};

void bpf_probe_read(void *, unsigned, const void *);
int field_read(struct s *arg) {
  unsigned long long ull = 0;
  unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET);
  unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE);
 #ifdef USE_PROBE_READ
  bpf_probe_read(&ull, size, (const void *)arg + offset);
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  lshift = lshift + (size << 3) - 64;
 #endif
 #else
  switch(size) {
  case 1:
    ull = *(unsigned char *)((void *)arg + offset); break;
  case 2:
    ull = *(unsigned short *)((void *)arg + offset); break;
  case 4:
    ull = *(unsigned int *)((void *)arg + offset); break;
  case 8:
    ull = *(unsigned long long *)((void *)arg + offset); break;
  }
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #endif
  ull <<= lshift;
  if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS))
    return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
  return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
}

There is a minor overhead for bpf_probe_read() on big endian.

The code and relocation generated for field_read where bpf_probe_read() is
used to access argument data in little-endian mode:
        r3 = r1
        r1 = 0
        r1 = 4  <=== relocation (FIELD_BYTE_OFFSET)
        r3 += r1
        r1 = r10
        r1 += -8
        r2 = 4  <=== relocation (FIELD_BYTE_SIZE)
        call bpf_probe_read
        r2 = 51 <=== relocation (FIELD_LSHIFT_U64)
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r2
        r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
        r0 = r1
        r0 >>= r2
        r3 = 1  <=== relocation (FIELD_SIGNEDNESS)
        if r3 == 0 goto LBB0_2
        r1 s>>= r2
        r0 = r1
LBB0_2:
        exit

Compared to the above code between the FIELD_LSHIFT_U64 and
FIELD_RSHIFT_U64 relocations, the code in big-endian mode has four more
instructions.
        r1 = 41   <=== relocation (FIELD_LSHIFT_U64)
        r6 += r1
        r6 += -64
        r6 <<= 32
        r6 >>= 32
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r6
        r2 = 60   <=== relocation (FIELD_RSHIFT_U64)

The code and relocation generated when using direct load.
        r2 = 0
        r3 = 4
        r4 = 4
        if r4 s> 3 goto LBB0_3
        if r4 == 1 goto LBB0_5
        if r4 == 2 goto LBB0_6
        goto LBB0_9
LBB0_6:                                 # %sw.bb1
        r1 += r3
        r2 = *(u16 *)(r1 + 0)
        goto LBB0_9
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
        if r4 == 8 goto LBB0_8
        goto LBB0_9
LBB0_8:                                 # %sw.bb9
        r1 += r3
        r2 = *(u64 *)(r1 + 0)
        goto LBB0_9
LBB0_5:                                 # %sw.bb
        r1 += r3
        r2 = *(u8 *)(r1 + 0)
        goto LBB0_9
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51
        r2 <<= r1
        r1 = 60
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

Considering that the verifier is able to do limited constant
propagation following branches, the following is the
code actually traversed.
        r2 = 0
        r3 = 4   <=== relocation
        r4 = 4   <=== relocation
        if r4 s> 3 goto LBB0_3
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51   <=== relocation
        r2 <<= r1
        r1 = 60   <=== relocation
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

For the direct (native) load case, the load size is calculated to be the
same as the load width LLVM would otherwise use to load the value from
which the bitfield is then extracted.

Differential Revision: https://reviews.llvm.org/D67980

llvm-svn: 374099
2019-10-08 18:23:17 +00:00
James Clarke 67f542aba7 [Diagnostics] Silence -Wsizeof-array-div for character buffers
Summary:
Character buffers are sometimes used to represent a pool of memory that
contains non-character objects, due to them being synonymous with a stream of
bytes on almost all modern architectures. Often, when interacting with hardware
devices, byte buffers are therefore used as an intermediary and so we can end
up generating lots of false-positives.

Moreover, due to the ability of character pointers to alias non-character
pointers, the strict aliasing violations that would generally be implied by the
calculations caught by the warning (if the calculation itself is in fact
correct) do not apply here, and so although the length calculation may be
wrong, that is the only possible issue.
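
A sketch of the now-exempted pattern (my own, not taken from the patch's tests):

  struct reg_block { unsigned ctrl; unsigned status; };

  char io_pool[512];  /* raw byte buffer standing in for device memory */

  /* Dividing a character buffer's size by a non-character element size is a
     common, intentional idiom, so -Wsizeof-array-div no longer fires here. */
  unsigned nblocks = sizeof(io_pool) / sizeof(struct reg_block);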

Reviewers: rsmith, xbolva00, thakis

Reviewed By: xbolva00, thakis

Subscribers: thakis, lebedev.ri, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68526

llvm-svn: 374035
2019-10-08 11:34:02 +00:00
David Bolvansky aaea76ba02 [Diagnostics] Emit better -Wbool-operation's warning message if we know that the result is always true
llvm-svn: 373973
2019-10-07 21:57:03 +00:00
David Bolvansky 559265c8da [Diagnostics] Use Expr::isKnownToHaveBooleanValue() to check bitwise negation of bool in languages without a bool type
Thanks for this advice, Richard Trieu!

llvm-svn: 373817
2019-10-05 08:02:11 +00:00
Yuanfang Chen 442ddffe13 [clang] fix a typo from r372531
Reviewers: xbolva00

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68482

llvm-svn: 373792
2019-10-04 21:37:20 +00:00
Erik Pilkington f7766b1ed4 [Sema] Split out -Wformat-type-confusion from -Wformat-pedantic
The warnings now in -Wformat-type-confusion don't align with how we interpret
'pedantic' in clang, and don't belong in -pedantic.

Differential revision: https://reviews.llvm.org/D67775

llvm-svn: 373774
2019-10-04 19:20:27 +00:00
Sam McCall f44ca7f6eb Further improve -Wbool-operation bitwise negation message
llvm-svn: 373749
2019-10-04 14:11:05 +00:00
David Bolvansky 5e851ad6c1 [NFCI] Improve the -Wbool-operation's warning message
Based on the request from the post commit review. Also added one new test.

llvm-svn: 373743
2019-10-04 12:55:13 +00:00
David Bolvansky b4ee523ffc [Diagnostics] Bitwise negation of a boolean expr always evaluates to true; warn with -Wbool-operation
Requested here:
http://lists.llvm.org/pipermail/cfe-dev/2019-October/063452.html
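
A minimal sketch of the flagged pattern (my own illustration):

  #include <stdbool.h>

  bool looks_wrong(bool b) {
    /* '~' promotes b to int first, so the result is ~0 == -1 or ~1 == -2;
       either way it is non-zero and the boolean result is always true. */
    return ~b;  /* -Wbool-operation; the intended operator was probably '!' */
  }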

llvm-svn: 373614
2019-10-03 15:17:59 +00:00
David Bolvansky 00d632e089 [Diagnostics] Make -Wenum-compare-conditional off by default
Too many false positives, e.g. in Chromium.

llvm-svn: 373371
2019-10-01 18:12:13 +00:00
David Bolvansky 362055d1fa [Diagnostics] Move warning into the subgroup (-Wenum-compare-conditional)
llvm-svn: 373345
2019-10-01 15:44:38 +00:00
David Bolvansky 471910d754 [Diagnostics] Warn if enumeration type mismatch in conditional expression
Summary:
- Useful warning
- GCC compatibility (GCC warns in C++ mode)

Reviewers: rsmith, aaron.ballman

Reviewed By: aaron.ballman

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67919

llvm-svn: 373252
2019-09-30 19:55:50 +00:00
David Bolvansky 275e4df115 [Diagnostics] Handle tautological left shifts in boolean context
llvm-svn: 372749
2019-09-24 13:14:18 +00:00
David Bolvansky 849fd28cf0 [Diagnostics] Do not diagnose unsigned shifts in boolean context (-Wint-in-bool-context)
I was looking at GCC's old patch. The current "trunk" version avoids warning for the unsigned case; GCC warns only for signed shifts.

llvm-svn: 372708
2019-09-24 09:14:33 +00:00
David Bolvansky 28b38c277a [Diagnostics] Warn for enum constants in bool context (-Wint-in-bool-context; GCC compatibility)
Extracted from D63082.

llvm-svn: 372664
2019-09-23 22:09:49 +00:00
David Bolvansky 84ea41fd17 [Diagnostics] Warn if '<<' in bool context with -Wint-in-bool-context (GCC compatibility)
Extracted from D63082, addressed review comments related to a warning message.

llvm-svn: 372612
2019-09-23 14:21:08 +00:00
David Bolvansky 116e6cf36e [Diagnostics] Avoid -Wsizeof-array-div when dividing the size of a nested array by the size of the deepest base type
llvm-svn: 372600
2019-09-23 12:54:35 +00:00
Craig Topper e4c1765124 [X86] Require last argument to LWPINS/LWPVAL builtins to be an ICE. Add ImmArg to the llvm intrinsics.
Update the isel patterns to use timm instead of imm.

llvm-svn: 372534
2019-09-22 23:48:50 +00:00
David Bolvansky fb218170b4 [Diagnostics] Warn if ?: with integer constants always evaluates to true
Extracted from D63082. GCC has this warning under -Wint-in-bool-context, but as noted in D63082's review, we should put it under TautologicalConstantCompare.

llvm-svn: 372531
2019-09-22 22:00:48 +00:00
Yonghong Song 91d5c2a035 [CLANG][BPF] permit any argument type for __builtin_preserve_access_index()
Commit c15aa241f8 ("[CLANG][BPF] change __builtin_preserve_access_index()
signature") changed the builtin function signature to
  PointerT __builtin_preserve_access_index(PointerT ptr)
with a pointer type as the argument/return type, where argument and
return types must be the same.

There is really no reason for this constraint. The builtin just
presented a code region so that IR builtins
  __builtin_{array, struct, union}_preserve_access_index
can be applied.

This patch removed the pointer type restriction to permit any
argument type as long as it is permitted by the compiler.

Differential Revision: https://reviews.llvm.org/D67883

llvm-svn: 372516
2019-09-22 17:33:48 +00:00
Richard Trieu 4c05de8c1d Merge and improve code that detects same value in comparisons.
-Wtautological-overlap-compare and self-comparison from -Wtautological-compare
rely on detecting the same operand in different locations.  Previously, each
warning had its own operand checker.  Now, both are merged into
one function that each can call.  The function also now looks through member
accesses and array accesses.
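
A quick sketch of what the shared checker now catches (my own examples):

  struct S { int x; };

  int checks(struct S *s, int a[]) {
    if (s->x == s->x)           /* self-comparison, seen through member access */
      return 1;
    if (a[0] < 0 && a[0] > 10)  /* -Wtautological-overlap-compare: always false */
      return 2;
    return 0;
  }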

Differential Revision: https://reviews.llvm.org/D66045

llvm-svn: 372453
2019-09-21 03:02:26 +00:00
Richard Trieu 6541c7988b Improve -Wtautological-overlap-compare
Allow this warning to detect a larger number of constant values, including
negative numbers, and handle non-int types better.

Differential Revision: https://reviews.llvm.org/D66044

llvm-svn: 372448
2019-09-21 02:37:10 +00:00
Yonghong Song c15aa241f8 [CLANG][BPF] change __builtin_preserve_access_index() signature
The clang intrinsic __builtin_preserve_access_index() currently
has signature:
  const void * __builtin_preserve_access_index(const void * ptr)

This may cause a compiler warning when:
  - the parameter type is "volatile void *" or "const volatile void *", or
  - the assign-to type of the intrinsic does not have a "const" qualifier.
Further, this signature does not allow dereferencing the
builtin's result pointer, as it is a "const void *" type, which
adds an extra type-casting step for the user.

Let us change the signature to:
  PointerT __builtin_preserve_access_index(PointerT ptr)
such that the result and argument types are the same.
With this, directly dereferencing the builtin return value
becomes possible.
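
A usage sketch under the new signature (my own; with matching argument and
result types the builtin's result can be dereferenced without a cast):

  struct ctx { long arg1; long arg2; };

  long read_arg2(struct ctx *c) {
    return *__builtin_preserve_access_index(&c->arg2);
  }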

Differential Revision: https://reviews.llvm.org/D67734

llvm-svn: 372294
2019-09-19 02:59:43 +00:00
Erik Pilkington 5741d19f04 [Sema] Suppress -Wformat diagnostics for bool types when printed using %hhd
Also, add a diagnostic under -Wformat for printing a boolean value as a
character.
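
Sketch of the two cases (my own illustration):

  #include <stdbool.h>
  #include <stdio.h>

  void report(bool ok) {
    printf("%hhd\n", ok);  /* no longer diagnosed for a bool argument */
    printf("%c\n", ok);    /* new -Wformat warning: boolean printed as a character */
  }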

rdar://54579473

Differential revision: https://reviews.llvm.org/D66856

llvm-svn: 372247
2019-09-18 19:05:14 +00:00
Erik Pilkington 5c62152275 [Sema] Split of versions of -Wimplicit-{float,int}-conversion for Objective-C BOOL
Also, add a diagnostic group, -Wobjc-signed-char-bool, to control all these
related diagnostics.

rdar://51954400

Differential revision: https://reviews.llvm.org/D67559

llvm-svn: 372183
2019-09-17 21:11:51 +00:00
Erik Pilkington a1e29a3407 Use 'BOOL' instead of BOOL in diagnostic messages
Type names should be enclosed in single quotes.

llvm-svn: 372152
2019-09-17 18:02:45 +00:00
Richard Smith 9864269a0d Fix reliance on lax vector conversions in tests for x86 intrinsics.
llvm-svn: 372062
2019-09-17 03:56:28 +00:00
David Bolvansky b8185153f3 [Diagnostics] Added silence note for -Wsizeof-array-div; suggest extra parens
llvm-svn: 371924
2019-09-14 19:38:55 +00:00
David Goldman 6d18650421 [Sema][Typo Correction] Fix potential infinite loop on ambiguity checks
Summary:
This fixes a bug introduced in D62648, where Clang could infinite loop
if it became stuck on a single TypoCorrection when it was supposed to
be testing ambiguous corrections. Although not a common case, it could
happen if there are multiple possible corrections with the same edit
distance.

The fix is simply to wipe the TypoExpr from the `TransformCache` so that
the call to `TransformTypoExpr` doesn't use the `CachedEntry`.
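
A hypothetical trigger (my own construction, not from the patch's tests): two
candidate corrections at the same edit distance from one typo.

  int food(int);
  int foot(int);

  int caller(void) {
    /* 'fooo' is equally close to 'food' and 'foot', so the ambiguity path runs;
       previously the same TypoExpr could be retried forever during that check. */
    return fooo(1);
  }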

Reviewers: rsmith

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67515

llvm-svn: 371859
2019-09-13 14:43:24 +00:00
Richard Smith c624510f13 For PR17164: split -fno-lax-vector-conversion into three different
levels:

 -- none: no lax vector conversions [new GCC default]
 -- integer: only conversions between integer vectors [old GCC default]
 -- all: all conversions between same-size vectors [Clang default]

For now, Clang still defaults to "all" mode, but per my proposal on
cfe-dev (2019-04-10) the default will be changed to "integer" as soon as
that doesn't break lots of testcases. (Eventually I'd like to change the
default to "none" to match GCC and general sanity.)

Following GCC's behavior, the driver flag -flax-vector-conversions is
translated to -flax-vector-conversions=integer.
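
A sketch of what the levels mean in practice (my own example):

  typedef int   vi4 __attribute__((vector_size(16)));
  typedef short vs8 __attribute__((vector_size(16)));
  typedef float vf4 __attribute__((vector_size(16)));

  vi4 from_short(vs8 v) { return v; }  /* integer<->integer: ok at 'integer' and 'all' */
  vi4 from_float(vf4 v) { return v; }  /* needs 'all'; rejected at 'integer' and 'none' */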

This reinstates r371805, reverted in r371813, with an additional fix for
lldb.

llvm-svn: 371817
2019-09-13 06:02:15 +00:00
Jonas Devlieghere 4aaa77e48d Revert "For PR17164: split -fno-lax-vector-conversion into three different"
This breaks the LLDB build. I tried reaching out to Richard, but haven't
gotten a reply yet.

llvm-svn: 371813
2019-09-13 05:16:59 +00:00
Richard Smith 49c4e58b75 For PR17164: split -fno-lax-vector-conversion into three different
levels:

 -- none: no lax vector conversions [new GCC default]
 -- integer: only conversions between integer vectors [old GCC default]
 -- all: all conversions between same-size vectors [Clang default]

For now, Clang still defaults to "all" mode, but per my proposal on
cfe-dev (2019-04-10) the default will be changed to "integer" as soon as
that doesn't break lots of testcases. (Eventually I'd like to change the
default to "none" to match GCC and general sanity.)

Following GCC's behavior, the driver flag -flax-vector-conversions is
translated to -flax-vector-conversions=integer.

llvm-svn: 371805
2019-09-13 02:20:00 +00:00
David Bolvansky 82d9e0e122 [NFC] Added triple to test file to avoid arm buildbot failures
llvm-svn: 371646
2019-09-11 18:55:56 +00:00