Commit Graph

1379 Commits

Author SHA1 Message Date
Alexey Bataev d08c056695 [OPENMP50]Add if clause in simd directive.
According to OpenMP 5.0, the if clause can be used in the simd directive. If
the condition in the if clause is false, the non-vectorized version of the
loop must be executed.
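
A minimal usage sketch (hypothetical loop, not from the patch):

  void add(int n, float *a, float *b) {
    #pragma omp simd if(simd: n > 16)  /* vectorize only when n > 16 */
    for (int i = 0; i < n; ++i)
      a[i] += b[i];
  }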
2019-11-19 15:58:19 -05:00
Eric Christopher 30e7ee3c4b Temporarily Revert "Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior="
and a follow-up NFC rearrangement, as it's causing a crash on valid code. The testcase is on the original review thread.

This reverts commits af57dbf12e and e6584b2b7b
2019-11-18 10:46:48 -08:00
Melanie Blower af57dbf12e Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior=
Add options to control floating point behavior: trapping and
    exception behavior, rounding, and control of optimizations that affect
    floating point calculations. More details in UsersManual.rst.
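
    A usage sketch (hypothetical file name; per the description above,
    -ffp-model=strict enables strict rounding and exception semantics):

        $ clang -c -ffp-model=strict test.c
        $ clang -c -frounding-math -ffp-exception-behavior=strict test.c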

    Reviewers: rjmccall

    Differential Revision: https://reviews.llvm.org/D62731
2019-11-07 07:22:45 -08:00
Alexey Bataev 14a388f43b [OPENMP50]Add support for parallel master taskloop simd directive.
Added full support for parallel master taskloop simd directive.
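
A sketch of the newly supported combined construct (hypothetical loop):

  void scale(int n, float *a, const float *b, const float *c) {
    #pragma omp parallel master taskloop simd
    for (int i = 0; i < n; ++i)
      a[i] = b[i] * c[i];
  }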
2019-10-30 10:23:33 -04:00
Simon Tatham 08074cc965 [clang,ARM] Initial ACLE intrinsics for MVE.
This commit sets up the infrastructure for auto-generating <arm_mve.h>
and doing clang-side code generation for the builtins it relies on,
and demonstrates that it works by implementing a representative sample
of the ACLE intrinsics, more or less matching the ones introduced in
LLVM IR by D67158,D68699,D68700.

Like NEON, that header file will provide a set of vector types like
uint16x8_t and C functions with names like vaddq_u32(). Unlike NEON,
the ACLE spec for <arm_mve.h> includes a polymorphism system, so that
you can write plain vaddq() and disambiguate by the vector types you
pass to it.
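
For example, a sketch assuming an MVE-enabled target (e.g. compiled with
-march=armv8.1-m.main+mve):

  #include <arm_mve.h>

  uint32x4_t add_u32(uint32x4_t a, uint32x4_t b) {
    return vaddq(a, b);  /* overload resolution selects vaddq_u32 */
  }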

Unlike the corresponding NEON code, I've arranged to make every user-
facing ACLE intrinsic into a clang builtin, and implement all the code
generation inside clang. So <arm_mve.h> itself contains nothing but
typedefs and function declarations, with the latter all using the new
`__attribute__((__clang_builtin))` system to arrange that the user-
facing function names correspond to the right internal BuiltinIDs.

So the new MveEmitter tablegen system specifies the full sequence of
IRBuilder operations that each user-facing ACLE intrinsic should
translate into. Where possible, the ACLE intrinsics map to standard IR
operations such as vector-typed `add` and `fadd`; where no standard
representation exists, I call down to the sample IR intrinsics
introduced in an earlier commit.

Doing it like this means that you get the polymorphism for free just
by using __attribute__((overloadable)): the clang overload resolution
decides which function declaration is the relevant one, and _then_ its
BuiltinID is looked up, so by the time we're doing code generation,
that's all been resolved by the standard system. It also means that
you get really nice error messages if the user passes the wrong
combination of types: clang will show the declarations from the header
file and explain why each one doesn't match.

(The obvious alternative approach would be to have wrapper functions
in <arm_mve.h> which pass their arguments to the underlying builtins.
But that doesn't work in the case where one of the arguments has to be
a constant integer: the wrapper function can't pass the constantness
through. So you'd have to do that case using a macro instead, and then
use C11 `_Generic` to handle the polymorphism. Then you have to add
horrible workarounds because `_Generic` requires even the untaken
branches to type-check successfully, and //then// if the user gets the
types wrong, the error message is totally unreadable!)

Reviewers: dmgreen, miyuki, ostannard

Subscribers: mgorny, javed.absar, kristof.beyls, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D67161
2019-10-24 16:33:13 +01:00
Alexey Bataev b8552abfe7 [OPENMP50]Add support for master taskloop simd.
Added parsing/semantics/codegen for the combined construct master taskloop simd.

llvm-svn: 375255
2019-10-18 16:47:35 +00:00
Alexey Bataev 5bbceadfc8 [OPENMP50]Add support for 'parallel master taskloop' construct.
Added parsing/sema/codegen support for 'parallel master taskloop'
constructs. Some of the clauses, like 'grainsize', 'num_tasks', 'final'
and 'priority' are not supported in full, only constant expressions can
be used currently in these clauses.
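
For instance, only a constant grainsize is currently accepted (hypothetical
sketch):

  extern void work(int i);

  void run(int n) {
    #pragma omp parallel master taskloop grainsize(4) /* constant for now */
    for (int i = 0; i < n; ++i)
      work(i);
  }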

llvm-svn: 374791
2019-10-14 17:17:41 +00:00
Erich Keane f759395994 Reland r374450 with Richard Smith's comments and test fixed.
The behavior from the original patch has changed, since we're no longer
allowing LLVM to just ignore the alignment.  Instead, we're just
assuming the maximum possible alignment.

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374562
2019-10-11 14:59:44 +00:00
Nico Weber b556085d81 Revert 374450 "Fix __builtin_assume_aligned with too large values."
The test fails on Windows, with

  error: 'warning' diagnostics expected but not seen:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 268435456 bytes or smaller; assumption ignored
  error: 'warning' diagnostics seen but not expected:
    File builtin-assume-aligned.c Line 62: requested alignment
        must be 8192 bytes or smaller; assumption ignored

llvm-svn: 374456
2019-10-10 21:34:32 +00:00
Erich Keane 31e454c1ec Fix __builtin_assume_aligned with too large values.
Code to handle __builtin_assume_aligned was allowing larger values, but
would convert this to unsigned along the way. This patch removes the
EmitAssumeAligned overloads that take unsigned to do away with this
problem.

Additionally, it adds a warning that values greater than 1 << 29 are
ignored by LLVM.
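
A sketch of code that now triggers the new warning (hypothetical):

  void *f(void *p) {
    /* 1 << 30 exceeds the 1 << 29 limit: warning, assumption ignored */
    return __builtin_assume_aligned(p, 1073741824);
  }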

Differential Revision: https://reviews.llvm.org/D68824

llvm-svn: 374450
2019-10-10 21:08:28 +00:00
Alexey Bataev 60e51c4803 [OPENMP50]Support for 'master taskloop' directive.
Added full support for master taskloop directive.

llvm-svn: 374437
2019-10-10 20:13:02 +00:00
Yonghong Song 05e46979d2 [BPF] do compile-once run-everywhere relocation for bitfields
A BPF-specific clang intrinsic is introduced:
   u32 __builtin_preserve_field_info(member_access, info_kind)
Depending on info_kind, different information will
be returned to the program. A relocation is also
recorded for this builtin so that the bpf loader can
patch the instruction on the target host.
This clang intrinsic is used to get certain information
to facilitate struct/union member relocations.

The offset relocation is extended by 4 bytes to
include the relocation kind. The currently supported
relocation kinds for __builtin_preserve_field_info are
 enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
 };
The old access offset relocation is covered by
    FIELD_BYTE_OFFSET = 0.

An example:
struct s {
    int a;
    int b1:9;
    int b2:4;
};
enum {
    FIELD_BYTE_OFFSET = 0,
    FIELD_BYTE_SIZE,
    FIELD_EXISTENCE,
    FIELD_SIGNEDNESS,
    FIELD_LSHIFT_U64,
    FIELD_RSHIFT_U64,
};

void bpf_probe_read(void *, unsigned, const void *);
int field_read(struct s *arg) {
  unsigned long long ull = 0;
  unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET);
  unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE);
 #ifdef USE_PROBE_READ
  bpf_probe_read(&ull, size, (const void *)arg + offset);
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  lshift = lshift + (size << 3) - 64;
 #endif
 #else
  switch(size) {
  case 1:
    ull = *(unsigned char *)((void *)arg + offset); break;
  case 2:
    ull = *(unsigned short *)((void *)arg + offset); break;
  case 4:
    ull = *(unsigned int *)((void *)arg + offset); break;
  case 8:
    ull = *(unsigned long long *)((void *)arg + offset); break;
  }
  unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
 #endif
  ull <<= lshift;
  if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS))
    return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
  return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
}

There is a minor overhead for bpf_probe_read() on big endian.

The code and relocations generated for field_read, where bpf_probe_read() is
used to access the argument data, in little endian mode:
        r3 = r1
        r1 = 0
        r1 = 4  <=== relocation (FIELD_BYTE_OFFSET)
        r3 += r1
        r1 = r10
        r1 += -8
        r2 = 4  <=== relocation (FIELD_BYTE_SIZE)
        call bpf_probe_read
        r2 = 51 <=== relocation (FIELD_LSHIFT_U64)
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r2
        r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
        r0 = r1
        r0 >>= r2
        r3 = 1  <=== relocation (FIELD_SIGNEDNESS)
        if r3 == 0 goto LBB0_2
        r1 s>>= r2
        r0 = r1
LBB0_2:
        exit

Compared to the above code between relocations FIELD_LSHIFT_U64 and
FIELD_RSHIFT_U64, the code in big endian mode has four more
instructions:
        r1 = 41   <=== relocation (FIELD_LSHIFT_U64)
        r6 += r1
        r6 += -64
        r6 <<= 32
        r6 >>= 32
        r1 = *(u64 *)(r10 - 8)
        r1 <<= r6
        r2 = 60   <=== relocation (FIELD_RSHIFT_U64)

The code and relocations generated when using direct loads:
        r2 = 0
        r3 = 4
        r4 = 4
        if r4 s> 3 goto LBB0_3
        if r4 == 1 goto LBB0_5
        if r4 == 2 goto LBB0_6
        goto LBB0_9
LBB0_6:                                 # %sw.bb1
        r1 += r3
        r2 = *(u16 *)(r1 + 0)
        goto LBB0_9
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
        if r4 == 8 goto LBB0_8
        goto LBB0_9
LBB0_8:                                 # %sw.bb9
        r1 += r3
        r2 = *(u64 *)(r1 + 0)
        goto LBB0_9
LBB0_5:                                 # %sw.bb
        r1 += r3
        r2 = *(u8 *)(r1 + 0)
        goto LBB0_9
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51
        r2 <<= r1
        r1 = 60
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

Considering that the verifier is able to do limited constant
propagation following branches, the following is the
code actually traversed:
        r2 = 0
        r3 = 4   <=== relocation
        r4 = 4   <=== relocation
        if r4 s> 3 goto LBB0_3
LBB0_3:                                 # %entry
        if r4 == 4 goto LBB0_7
LBB0_7:                                 # %sw.bb5
        r1 += r3
        r2 = *(u32 *)(r1 + 0)
LBB0_9:                                 # %sw.epilog
        r1 = 51   <=== relocation
        r2 <<= r1
        r1 = 60   <=== relocation
        r0 = r2
        r0 >>= r1
        r3 = 1
        if r3 == 0 goto LBB0_11
        r2 s>>= r1
        r0 = r2
LBB0_11:                                # %sw.epilog
        exit

For the native load case, the load size is calculated to be the
same as the load width LLVM would otherwise use to load
the value, which is then used to extract the bitfield value.

Differential Revision: https://reviews.llvm.org/D67980

llvm-svn: 374099
2019-10-08 18:23:17 +00:00
Guillaume Chatelet c79099e0f4 [Alignment][Clang][NFC] Add CharUnits::getAsAlign
Summary:
This is a prerequisite to removing `llvm::GlobalObject::setAlignment(unsigned)`.
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: jholewinski, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D68274

llvm-svn: 373592
2019-10-03 13:00:29 +00:00
Marco Antognini 8855963764 [OpenCL] Improve destructor support in C++ for OpenCL
This re-applies r366422 with a fix for Bug PR42665 and a new regression
test.

llvm-svn: 366670
2019-07-22 09:39:13 +00:00
Ilya Biryukov 4f8dc16fcd Revert r366422: [OpenCL] Improve destructor support in C++ for OpenCL
Reason: this commit causes crashes in the clang compiler when building
LLVM Support with libc++, see https://bugs.llvm.org/show_bug.cgi?id=42665
for details.

llvm-svn: 366429
2019-07-18 11:55:33 +00:00
Marco Antognini 83748cc5ab [OpenCL] Improve destructor support in C++ for OpenCL
Summary:
This patch mainly does three things:
 1. It fixes a false positive error detection in Sema that is similar to
    D62156. The error happens when explicitly calling an overloaded
    destructor for different address spaces.
 2. It selects the correct destructor when multiple overloads for
    address spaces are available.
 3. It inserts the expected address space cast when invoking a
    destructor, if needed, and therefore fixes a crash due to the unmet
    assertion in llvm::CastInst::Create.

The following is a reproducer of the three issues:

    struct MyType {
      ~MyType() {}
      ~MyType() __constant {}
    };

    __constant MyType myGlobal{};

    kernel void foo() {
      myGlobal.~MyType(); // 1 and 2.
      // 1. error: cannot initialize object parameter of type
      //    '__generic MyType' with an expression of type '__constant MyType'
      // 2. error: no matching member function for call to '~MyType'
    }

    kernel void bar() {
      // 3. The implicit call to the destructor crashes due to:
      //    Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
      //    in llvm::CastInst::Create.
      MyType myLocal;
    }

The added test depends on D62413 and covers a few more things than the
above reproducer.

Subscribers: yaxunl, Anastasia, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64569

llvm-svn: 366422
2019-07-18 10:04:18 +00:00
Yonghong Song 4754814c5a fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic
The original commit is r366076. It is temporarily reverted (r366155)
due to a test failure. This resubmit makes the tests more robust by accepting
regexes instead of hardcoded names/references in several places.

This is a followup patch for https://reviews.llvm.org/D61809.
Handle unnamed bitfield properly and add more test cases.

Fixed the unnamed bitfield issue. The unnamed bitfield is ignored
by debug info, so we need to ignore such a struct/union member
when we try to get the member index in the debug info.
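
For example (a sketch; the member indices are illustrative):

  struct s {
    int a:4;
    int  :4;  /* unnamed bitfield: present in the record layout,
                 but has no member entry in the debug info */
    int b:4;  /* field index 2 in the AST, member index 1 in debug info */
  };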

D61809 contains two test cases, but they are not enough, as they do not
check the generated IR at a fine-grained level and there are no
semantics checking tests.
This patch adds unit tests for both code generation and semantics
checking for the new intrinsic.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 366231
2019-07-16 17:24:33 +00:00
Stephan Bergmann e215996a29 Finish "Adapt -fsanitize=function to SANITIZER_NON_UNIQUE_TYPEINFO"
i.e., recent 5745eccef54ddd3caca278d1d292a88b2281528b:

* Bump the function_type_mismatch handler version, as its signature has changed.

* The function_type_mismatch handler can return successfully now, so
  SanitizerKind::Function must be AlwaysRecoverable (like for
  SanitizerKind::Vptr).

* But the minimal runtime would still unconditionally treat a call to the
  function_type_mismatch handler as a failure, so disallow -fsanitize=function in
  combination with -fsanitize-minimal-runtime (as was already done for
  -fsanitize=vptr).

* Add tests.

Differential Revision: https://reviews.llvm.org/D61479

llvm-svn: 366186
2019-07-16 06:23:27 +00:00
Eric Christopher fdcbd5fa48 Temporarily Revert "fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic"
The commit had tests that would only work with names in the IR.

This reverts commit r366076.

llvm-svn: 366155
2019-07-15 23:49:31 +00:00
Yonghong Song e5086481b6 fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic
This is a followup patch for https://reviews.llvm.org/D61809.
Handle unnamed bitfield properly and add more test cases.

Fixed the unnamed bitfield issue. The unnamed bitfield is ignored
by debug info, so we need to ignore such a struct/union member
when we try to get the member index in the debug info.

D61809 contains two test cases, but they are not enough, as they do not
check the generated IR at a fine-grained level and there are no
semantics checking tests.
This patch adds unit tests for both code generation and semantics
checking for the new intrinsic.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 366076
2019-07-15 15:42:41 +00:00
Vitaly Buka e72676657f NFC: Convert large lambda into method
Reviewers: pcc, eugenis

Reviewed By: eugenis

Subscribers: cfe-commits, lldb-commits

Tags: #clang, #lldb

Differential Revision: https://reviews.llvm.org/D63854

llvm-svn: 365708
2019-07-10 22:53:50 +00:00
Yonghong Song 048493f882 [BPF] Preserve debuginfo array/union/struct type/access index
For background on the BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs
that are adjustable to struct/union layout changes, so the same
program can run on multiple kernels, with adjustments made
before loading based on native kernel structures.

In order to do this, we need to keep track of the GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information in an IR optimization is hard,
as various optimizations may have tweaked the GEPs; also, since
unions are replaced by structures, it is impossible to track
the field index for union member accesses.

Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
  addr = preserve_array_access_index(base, index, dimension)
  addr = preserve_union_access_index(base, di_index)
  addr = preserve_struct_access_index(base, gep_index, di_index)
here,
  base: the base pointer for the array/union/struct access.
  index: the last access index for array, the same for IR/DebugInfo layout.
  dimension: the array dimension.
  gep_index: the access index based on IR layout.
  di_index: the access index based on user/debuginfo types.

If these intrinsics are used blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
  base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().

For example, consider the following program:
  $ cat test.c
  struct sk_buff {
     int i;
     int b1:1;
     int b2:2;
     union {
       struct {
         int o1;
         int o2;
       } o;
       struct {
         char flags;
         char dev_id;
       } dev;
       int netid;
     } u[10];
  };

  static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
      = (void *) 4;

  #define _(x) (__builtin_preserve_access_index(x))

  int bpf_prog(struct sk_buff *ctx) {
    char dev_id;
    bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
    return dev_id;
  }
  $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
    test.c >& log

The generated IR looks like the following:
  ...
  define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
    %2 = alloca %struct.sk_buff*, align 8
    %3 = alloca i8, align 1
    store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
    call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
    call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
    call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
    %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
    %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
    %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
         %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
    %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
         [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
    %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
         %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
    %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
    %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
         %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
    %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
    %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
    %13 = sext i8 %12 to i32, !dbg !54
    call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
    ret i32 %13, !dbg !57
  }

  !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
  !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
  !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.

For &ctx->u[5].dev.dev_id,
  . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
  . The "%7 = ..." represents array subscript "5".
  . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
  . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.

Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access index
can be recovered.

The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D61809

llvm-svn: 365438
2019-07-09 04:21:50 +00:00
Yonghong Song e085b40e9c Revert "[BPF] Preserve debuginfo array/union/struct type/access index"
This reverts commit r365435.

Forgot to add the Differential Revision link. Will add it to the
commit message and resubmit.

llvm-svn: 365436
2019-07-09 04:15:12 +00:00
Yonghong Song f21eeafcd9 [BPF] Preserve debuginfo array/union/struct type/access index
For background on the BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs
that are adjustable to struct/union layout changes, so the same
program can run on multiple kernels, with adjustments made
before loading based on native kernel structures.

In order to do this, we need to keep track of the GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information in an IR optimization is hard,
as various optimizations may have tweaked the GEPs; also, since
unions are replaced by structures, it is impossible to track
the field index for union member accesses.

Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
  addr = preserve_array_access_index(base, index, dimension)
  addr = preserve_union_access_index(base, di_index)
  addr = preserve_struct_access_index(base, gep_index, di_index)
here,
  base: the base pointer for the array/union/struct access.
  index: the last access index for array, the same for IR/DebugInfo layout.
  dimension: the array dimension.
  gep_index: the access index based on IR layout.
  di_index: the access index based on user/debuginfo types.

If these intrinsics are used blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
  base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().

For example, consider the following program:
  $ cat test.c
  struct sk_buff {
     int i;
     int b1:1;
     int b2:2;
     union {
       struct {
         int o1;
         int o2;
       } o;
       struct {
         char flags;
         char dev_id;
       } dev;
       int netid;
     } u[10];
  };

  static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
      = (void *) 4;

  #define _(x) (__builtin_preserve_access_index(x))

  int bpf_prog(struct sk_buff *ctx) {
    char dev_id;
    bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
    return dev_id;
  }
  $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
    test.c >& log

The generated IR looks like the following:
  ...
  define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
    %2 = alloca %struct.sk_buff*, align 8
    %3 = alloca i8, align 1
    store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
    call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
    call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
    call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
    %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
    %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
    %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
         %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
    %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
         [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
    %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
         %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
    %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
    %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
         %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
    %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
    %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
    %13 = sext i8 %12 to i32, !dbg !54
    call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
    ret i32 %13, !dbg !57
  }

  !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
  !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
  !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.

For &ctx->u[5].dev.dev_id,
  . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
  . The "%7 = ..." represents array subscript "5".
  . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
  . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.

Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access index
can be recovered.

The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.

Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 365435
2019-07-09 04:04:21 +00:00
Erich Keane 36176249d1 Ensure Target Features always_inline error happens in C++ cases.
A handful of C++ cases, as reported in PR42352, didn't actually give an
error when always_inline functions were used with a different target-feature
list. This resulted in broken IR.
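
A sketch of the kind of code that now errors instead of silently producing
broken IR (hypothetical, x86 target assumed):

  __attribute__((always_inline, target("avx2")))
  inline void callee() {}

  void caller() {
    callee();  // error: 'callee' requires 'avx2', but 'caller' is
               // compiled without support for it
  }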

llvm-svn: 364109
2019-06-21 22:29:32 +00:00
Richard Smith 8cca3a5a85 Rename CodeGenFunction::overlapFor* to getOverlapFor*.
llvm-svn: 363980
2019-06-20 20:56:20 +00:00
Richard Smith 78b239ea67 P0840R2: support for [[no_unique_address]] attribute
Summary:
Add support for the C++2a [[no_unique_address]] attribute for targets using the Itanium C++ ABI.

This depends on D63371.
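
For example (a sketch; the exact layout is ABI-dependent):

  struct Empty {};
  struct A {
    int i;
    [[no_unique_address]] Empty e;  // may share storage with 'i'
  };
  // Under the Itanium ABI, 'e' typically occupies no storage:
  static_assert(sizeof(A) == sizeof(int), "");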

Reviewers: rjmccall, aaron.ballman

Subscribers: dschuff, aheejin, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D63451

llvm-svn: 363976
2019-06-20 20:44:45 +00:00
Amy Huang 7fac5c8d94 Store a pointer to the return value in a static alloca and let the debugger use that
as the variable address for NRVO variables.

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D63361

llvm-svn: 363952
2019-06-20 17:15:21 +00:00
Eric Fiselier 708afb56c1 Implement __builtin_LINE() et al. to support source location capture.
Summary:
This patch implements the source location builtins `__builtin_LINE()`, `__builtin_FUNCTION()`, `__builtin_FILE()` and `__builtin_COLUMN()`. These builtins are needed to implement [`std::experimental::source_location`](https://rawgit.com/cplusplus/fundamentals-ts/v2/main.html#reflection.src_loc.creation).

With the exception of `__builtin_COLUMN`, GCC also implements these builtins, and Clang's behavior is intended to match as closely as possible.
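
A sketch of the intended use (hypothetical logging helper):

  #include <cstdio>

  void log(const char *msg,
           const char *file = __builtin_FILE(),
           int line = __builtin_LINE()) {
    std::printf("%s:%d: %s\n", file, line, msg);
  }

  void f() { log("hello"); }  // reports f's location: the default
                              // arguments are evaluated at the call site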

Reviewers: rsmith, joerg, aaron.ballman, bogner, majnemer, shafik, martong

Reviewed By: rsmith

Subscribers: rnkovacs, loskutov, riccibruno, mgorny, kunitoki, alexr, majnemer, hfinkel, cfe-commits

Differential Revision: https://reviews.llvm.org/D37035

llvm-svn: 360937
2019-05-16 21:04:15 +00:00
Anastasia Stulova 094c72660a [PR41276] Fixed incorrect generation of addr space cast for 'this' in C++.
Improved classification of address space casts when qualification
conversion is performed: prevent adding an address space cast for
non-pointer and non-reference types, and take the address space
correctly from the pointee.

Also pass the correct address space from the 'this' object using
AggValueSlot when generating an addrspacecast in the constructor
call.

Differential Revision: https://reviews.llvm.org/D59988

llvm-svn: 357682
2019-04-04 10:48:36 +00:00
Reid Kleckner 73253bdefc [MS] Make __iso_volatile_* available on all targets
Future versions of MSVC make these intrinsics available on x86 & x64,
according to:
http://lists.llvm.org/pipermail/cfe-dev/2019-March/061711.html

The purpose of these builtins is to emit plain, non-atomic, volatile
stores when /volatile:ms (-cc1 -fms-volatile) is enabled.
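
Usage sketch (MSVC-compatible intrinsic signatures assumed):

  void write_reg(volatile int *p, int v) {
    __iso_volatile_store32(p, v);      /* plain volatile store, never atomic */
    int x = __iso_volatile_load32(p);  /* plain volatile load */
    (void)x;
  }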

llvm-svn: 357220
2019-03-28 22:59:09 +00:00
Peter Collingbourne d1c5b28c2a IRGen: Remove StructorType; thread GlobalDecl through more code. NFCI.
This should make it easier to add more structor variants.

Differential Revision: https://reviews.llvm.org/D59724

llvm-svn: 356822
2019-03-22 23:05:10 +00:00
Erik Pilkington 02886e5476 Revert "Add a new attribute, fortify_stdlib"
This reverts commit r353765. After talking with our C stdlib folks, we decided
to use the existing pass_object_size attribute to implement _FORTIFY_SOURCE
wrappers, like Bionic does (I didn't realize that pass_object_size could be used
for this purpose). Sorry for the flip/flop, and thanks to James Y. Knight for
pointing this out to me.

llvm-svn: 356103
2019-03-13 21:37:01 +00:00
Erik Pilkington ec389b0838 [CodeGenObjC] Emit [[X alloc] init] as objc_alloc_init(X) when available
This provides a code size win on the caller side, since the init
message send is done in the runtime function.

rdar://44987038

Differential revision: https://reviews.llvm.org/D57936

llvm-svn: 354056
2019-02-14 19:58:37 +00:00
Erik Pilkington e3cd735ea6 Add a new attribute, fortify_stdlib
This attribute applies to declarations of C stdlib functions
(sprintf, memcpy...) that have known fortified variants
(__sprintf_chk, __memcpy_chk, ...). When applied, clang will emit
calls to the fortified variant functions instead of calls to the
defaults.

In GCC, this is done by adding gnu_inline-style wrapper functions,
but that doesn't work for us for variadic functions because we don't
support __builtin_va_arg_pack (and have no intention to).

This attribute takes two arguments: the first is the 'type' argument
passed through to __builtin_object_size, and the second is a flag
argument that gets passed through to the variadic checking variants.
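
A declaration sketch based on the description above (the attribute was
subsequently reverted; the argument values shown are illustrative):

  /* type 0 for __builtin_object_size, flag 0 for the checking variants */
  __attribute__((fortify_stdlib(0, 0)))
  int sprintf(char *str, const char *fmt, ...);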

rdar://47905754

Differential revision: https://reviews.llvm.org/D57918

llvm-svn: 353765
2019-02-11 23:21:39 +00:00
James Y Knight f7321540d5 [opaque pointer types] Pass through function types for TLS
initialization and global destructor calls.

Differential Revision: https://reviews.llvm.org/D57801

llvm-svn: 353355
2019-02-07 01:14:17 +00:00
James Y Knight 76f787424d [opaque pointer types] More trivial changes to pass FunctionType to CallInst.
Change various functions to use FunctionCallee or Function*.

Pass function type through __builtin_dump_struct's dumpRecord helper.

llvm-svn: 353199
2019-02-05 19:17:50 +00:00
James Y Knight 9871db064d [opaque pointer types] Pass function types for runtime function calls.
Emit{Nounwind,}RuntimeCall{,OrInvoke} have been modified to take a
FunctionCallee as an argument, and CreateRuntimeFunction has been
modified to return a FunctionCallee. All callers have been updated.

Additionally, CreateBuiltinFunction is removed, as it was redundant
with CreateRuntimeFunction after some previous changes.
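
A sketch of the updated call-site pattern (FTy and Args are assumed to be an
llvm::FunctionType* and an argument list already in scope; the runtime
function name is illustrative):

  llvm::FunctionCallee Fn =
      CGM.CreateRuntimeFunction(FTy, "__example_runtime_helper");
  llvm::CallInst *Call = CGF.EmitRuntimeCall(Fn, Args);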

Differential Revision: https://reviews.llvm.org/D57668

llvm-svn: 353184
2019-02-05 16:42:33 +00:00
James Y Knight 8799caee8d [opaque pointer types] Trivial changes towards CallInst requiring
explicit function types.

llvm-svn: 353009
2019-02-03 21:53:49 +00:00
Julian Lettner b6c06dc28f [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every unreachable instruction. However, the
optimizer will remove code after calls to functions marked with
noreturn. To avoid this UBSan removes noreturn from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
__asan_handle_no_return before noreturn functions. This is important for
functions that do not return but access the stack memory, e.g.,
unwinder functions *like* longjmp (longjmp itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the noreturn attributes are missing and ASan cannot
unpoison the stack, so it has false positives when stack unwinding is
used.

Changes:
Clang-CodeGen now directly inserts calls to `__asan_handle_no_return`
when a call to a noreturn function is encountered and both
UBSan-unreachable and ASan are enabled. This allows UBSan to continue
removing the noreturn attribute from functions without any changes to
the ASan pass.

Previously generated code:
```
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
```

Generated code (for now):
```
  call void @__asan_handle_no_return
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
```

rdar://problem/40723397

Reviewers: delcypher, eugenis, vsk

Differential Revision: https://reviews.llvm.org/D57278

> llvm-svn: 352690

llvm-svn: 352829
2019-02-01 02:51:00 +00:00
Eric Liu a22c72ca8f Revert "[Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls"
This reverts commit r352690, as it causes clang to crash. A reproducer was
sent to the author on the original commit.

llvm-svn: 352755
2019-01-31 14:20:02 +00:00
Julian Lettner 8280c1e23e [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every unreachable instruction. However, the
optimizer will remove code after calls to functions marked with
noreturn. To avoid this UBSan removes noreturn from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
__asan_handle_no_return before noreturn functions. This is important for
functions that do not return but access the stack memory, e.g.,
unwinder functions *like* longjmp (longjmp itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the noreturn attributes are missing and ASan cannot
unpoison the stack, so it has false positives when stack unwinding is
used.

Changes:
Clang-CodeGen now directly inserts calls to `__asan_handle_no_return`
when a call to a noreturn function is encountered and both
UBSan-unreachable and ASan are enabled. This allows UBSan to continue
removing the noreturn attribute from functions without any changes to
the ASan pass.

Previously generated code:
```
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
```

Generated code (for now):
```
  call void @__asan_handle_no_return
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
```

rdar://problem/40723397

Reviewers: delcypher, eugenis, vsk

Differential Revision: https://reviews.llvm.org/D57278

llvm-svn: 352690
2019-01-30 23:42:13 +00:00
Erik Pilkington 9c3b588db9 Add a new builtin: __builtin_dynamic_object_size
This builtin has the same UI as __builtin_object_size, but has the
potential to be evaluated dynamically. It is meant to be used as a
drop-in replacement for libraries that use __builtin_object_size when
a dynamic checking mode is enabled. For instance,
__builtin_object_size fails to provide any extra checking in the
following function:

  void f(size_t alloc) {
    char* p = malloc(alloc);
    strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar", __builtin_object_size(p, 0))
  }

This is an overflow if alloc < 7, but because LLVM can't fold the
object size intrinsic statically, it folds __builtin_object_size to
-1. With __builtin_dynamic_object_size, alloc is passed through to
__builtin___strcpy_chk.
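
That is, under a dynamic checking mode the same code can expand to (sketch):

  void f(size_t alloc) {
    char* p = malloc(alloc);
    /* the size argument now evaluates to 'alloc' at run time */
    __builtin___strcpy_chk(p, "foobar", __builtin_dynamic_object_size(p, 0));
  }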

rdar://32212419

Differential revision: https://reviews.llvm.org/D56760

llvm-svn: 352665
2019-01-30 20:34:53 +00:00
James Y Knight 3933addd30 Cleanup: replace uses of CallSite with CallBase.
llvm-svn: 352595
2019-01-30 02:54:28 +00:00
Richard Smith cfa79b27b5 [ubsan] Check the correct size when sanitizing array new.
We previously forgot to multiply the element size by the array bound.

llvm-svn: 351924
2019-01-23 03:37:29 +00:00
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Richard Smith 0444006fff Fix cleanup registration for lambda captures.
Lambda captures should be destroyed if an exception is thrown only if
the construction of the complete lambda-expression has not completed.
(If the lambda-expression has been fully constructed, any exception will
invoke its destructor, which will destroy the captures.)

This is directly modeled after how we handle the equivalent situation in
InitListExprs.
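
A sketch of the situation being handled (hypothetical types):

  struct S { S(); ~S(); };
  S make();  // may throw

  auto f() {
    // If make() throws, the already-constructed capture 's' must be
    // destroyed; once the lambda is fully constructed, its own
    // destructor takes over that responsibility.
    return [s = S(), t = make()] {};
  }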

Note that EmitLambdaLValue was unreachable because from C++11 onwards the
frontend never creates the awkward situation where a prvalue expression
(such as a lambda) is used in an lvalue context (such as the left-hand
side of a class member access).

llvm-svn: 351487
2019-01-17 22:05:50 +00:00
Roman Lebedev bd1c087019 [clang][UBSan] Sanitization for alignment assumptions.
Summary:
UB isn't nice. It's cool and powerful, but not nice.
Having a way to detect it is nice though.
[[ https://wg21.link/p1007r3 | P1007R3: std::assume_aligned ]] / http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1007r2.pdf says:
```
We propose to add this functionality via a library function instead of a core language attribute.
...
If the pointer passed in is not aligned to at least N bytes, calling assume_aligned results in undefined behaviour.
```

This differential teaches clang to sanitize all the various variants of this assume-aligned attribute.
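
For example (sketch), with -fsanitize=alignment the assumption below is now
checked at run time:

  void *use(void *p) {
    /* diagnosed at run time if p is not actually 64-byte aligned */
    return __builtin_assume_aligned(p, 64);
  }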

Requires D54588 for LLVM IRBuilder changes.
The compiler-rt part is D54590.

This is a second commit; the original one was r351105,
which was mass-reverted in r351159 because two compiler-rt tests were failing.

Reviewers: ABataev, craig.topper, vsk, rsmith, rnk, #sanitizers, erichkeane, filcab, rjmccall

Reviewed By: rjmccall

Subscribers: chandlerc, ldionne, EricWF, mclow.lists, cfe-commits, bkramer

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D54589

llvm-svn: 351177
2019-01-15 09:44:25 +00:00
Vlad Tsyrklevich 86e68fda3b Revert alignment assumptions changes
Revert r351104-6, r351109, r351110, r351119, r351134, and r351153. These
changes fail on the sanitizer bots.

llvm-svn: 351159
2019-01-15 03:38:02 +00:00
Roman Lebedev 7892c37455 [clang][UBSan] Sanitization for alignment assumptions.
Summary:
UB isn't nice. It's cool and powerful, but not nice.
Having a way to detect it is nice though.
[[ https://wg21.link/p1007r3 | P1007R3: std::assume_aligned ]] / http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1007r2.pdf says:
```
We propose to add this functionality via a library function instead of a core language attribute.
...
If the pointer passed in is not aligned to at least N bytes, calling assume_aligned results in undefined behaviour.
```

This differential teaches clang to sanitize all the various variants of this assume-aligned attribute.

Requires D54588 for LLVM IRBuilder changes.
The compiler-rt part is D54590.

Reviewers: ABataev, craig.topper, vsk, rsmith, rnk, #sanitizers, erichkeane, filcab, rjmccall

Reviewed By: rjmccall

Subscribers: chandlerc, ldionne, EricWF, mclow.lists, cfe-commits, bkramer

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D54589

llvm-svn: 351105
2019-01-14 19:09:27 +00:00