Commit Graph

202 Commits

Author SHA1 Message Date
Arthur Eubanks b6ccff3d5f [NewPM] Provide method to run all pipeline callbacks, used for -O0
Some targets may add required passes via
TargetMachine::registerPassBuilderCallbacks(). We need to run those even
under -O0. As an example, BPFTargetMachine adds
BPFAbstractMemberAccessPass, a required pass.

This also allows us to clean up BackendUtil.cpp (and out-of-tree Rust
usage of the NPM) by allowing us to share added passes like coroutines
and sanitizers between -O0 and other optimization levels.

Since callbacks may end up not adding passes, we need to check if the
pass managers are empty before adding them, so PassManager now has an
isEmpty() function. For example, polly adds callbacks but doesn't always
add passes in those callbacks, so this check is necessary to keep
-debug-pass-manager tests' output from changing depending on whether
polly is enabled.
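A rough sketch of the intended usage (the callback-runner call and the
surrounding code are assumptions; isEmpty() is the API added here):

  // Run the registered pipeline callbacks, then only splice the resulting
  // pass manager in if the callbacks actually added passes.
  ModulePassManager CallbacksMPM;
  PB.invokePipelineStartEPCallbacks(CallbacksMPM, OptimizationLevel::O0);  // assumed runner
  if (!CallbacksMPM.isEmpty())
    MPM.addPass(std::move(CallbacksMPM));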

Tests are a continuation of those added in
https://reviews.llvm.org/D89083.

Reviewed By: asbirlea, Meinersbur

Differential Revision: https://reviews.llvm.org/D89158
2020-11-11 15:10:27 -08:00
Simon Pilgrim 2224c2f8bc [BPF] intrinsic-array-2.ll - remove unused check prefixes
Just use default CHECK
2020-11-11 18:38:21 +00:00
Arthur Eubanks 226e179f74 Revert "[NewPM] Provide method to run all pipeline callbacks, used for -O0"
This reverts commit ae38540042.
As well as some follow-up test fixes.

The original change causes new-pass-manager.ll to fail when polly is enabled.
2020-11-08 00:32:35 -08:00
Arthur Eubanks ae38540042 [NewPM] Provide method to run all pipeline callbacks, used for -O0
Some targets may add required passes via
TargetMachine::registerPassBuilderCallbacks(). We need to run those even
under -O0. As an example, BPFTargetMachine adds
BPFAbstractMemberAccessPass, a required pass.

This also allows us to clean up BackendUtil.cpp (and out-of-tree Rust
usage of the NPM) by allowing us to share added passes like coroutines
and sanitizers between -O0 and other optimization levels.

Tests are a continuation of those added in
https://reviews.llvm.org/D89083.

In order to prevent TargetMachines from adding unnecessary optimization
passes at -O0, TargetMachine::registerPassBuilderCallbacks() will be
changed to take an OptimizationLevel, but that will be done separately.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D89158
2020-11-04 22:27:16 -08:00
Arthur Eubanks 2218e6d0a8 [BPF] Make BPFAbstractMemberAccessPass required
Or else on optnone functions we get the following during instruction selection:
  fatal error: error in backend: Cannot select: intrinsic %llvm.preserve.struct.access.index

Currently the -O0 pipeline doesn't properly run passes registered via
TargetMachine::registerPassBuilderCallbacks(), so don't add that RUN
line yet. That will be fixed after this.
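A hedged sketch of the NPM convention involved (class shape assumed, not
the exact in-tree definition):

  // An NPM pass opts out of being skipped for optnone/-O0 functions by
  // defining isRequired() to return true.
  class BPFAbstractMemberAccessPass
      : public PassInfoMixin<BPFAbstractMemberAccessPass> {
  public:
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
    static bool isRequired() { return true; }  // never skipped, even at optnone
  };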

Reviewed By: yonghong-song

Differential Revision: https://reviews.llvm.org/D89083
2020-10-09 11:26:37 -07:00
Yonghong Song 3161172168 BPF: fix incorrect DAG2DAG load optimization
Currently, the bpf backend instruction selection DAG2DAG phase has
an optimization to replace loads of constant struct members
or array elements with direct values. The reason is that these
locally defined struct or array variables may have their
initial values stored in a readonly section, and the early bpf
ecosystem was not able to handle such cases.

The bpf ecosystem can now handle not only readonly sections
but also global variables. A global variable can also have
initialized data, and it may or may not be constant,
i.e., global variable data can be put in the .data section or the
.rodata section. This exposed a bug in the DAG2DAG load optimization,
which did not check whether the global variable is constant
or not.

This patch fixed the bug by checking whether the global variable
representing the initial data is constant, and skipping the
optimization if it is not.

This patch also fixed another bug by checking whether
the load is simple (not volatile/atomic). If it is
not simple, the optimization is not performed. To summarize, for
globals:
   - struct t var = { ... } ;  // no load optimization
   - const struct t var = { ... }; // load optimization is possible
   - volatile const struct t var = { ... }; // no load optimization

Differential Revision: https://reviews.llvm.org/D89021
2020-10-07 19:08:40 -07:00
Yonghong Song ddf1864ace BPF: add AdjustOpt IR pass to generate verifier friendly codes
Add an IR pass right before the main module optimization.
It modifies the IR to restrict certain downstream optimizations
in order to generate verifier-friendly code:
  > prevent certain instcombine optimizations, handling both
    in-block and cross-block instcombines.
  > avoid speculative code motion if the variable used in a
    condition is also used in later blocks.

Internally, a bpf IR builtin
  result = __builtin_bpf_passthrough(seq_num, result)
is used to enforce ordering. This builtin is only used
during target independent IR optimizations and it will
be removed at the beginning of target dependent IR
optimizations.

For example, removing the following workaround,
  --- a/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c
  +++ b/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c
  @@ -47,7 +47,7 @@ int sysctl_tcp_mem(struct bpf_sysctl *ctx)
          /* a workaround to prevent compiler from generating
           * codes verifier cannot handle yet.
           */
  -       volatile int ret;
  +       int ret;
this patch is able to generate code which passes the verifier.

To disable the optimization, users need to use the "opt" command like below:
  clang -target bpf -O2 -S -emit-llvm -Xclang -disable-llvm-passes test.c
  // disable icmp serialization
  opt -O2 -bpf-disable-serialize-icmp test.ll | llvm-dis > t.ll
  // disable avoid-speculation
  opt -O2 -bpf-disable-avoid-speculation test.ll | llvm-dis > t.ll
  llc t.ll

Differential Revision: https://reviews.llvm.org/D85570
2020-10-07 08:49:10 -07:00
Yonghong Song edd71db38b BPF: avoid duplicated globals for CORE relocations
This patch fixed two issues related to relocation globals.
In LLVM, if a global, e.g., with name "g", is created and
conflicts with another global of the same name, LLVM will
rename the global, e.g., to a new name "g.2". Since a
relocation global's name has special meaning, we do not want
llvm to change it, so internally we have logic to check
whether duplication happens or not; if it does, we just reuse
the previous global.

The first bug is related to non-btf-id relocation
(BPFAbstractMemberAccess.cpp). Commit 54d9f743c8
("BPF: move AbstractMemberAccess and PreserveDIType passes
to EP_EarlyAsPossible") changed ModulePass to FunctionPass,
i.e., handling each function at a time. But still just
one BPFAbstractMemberAccess object is created so module
level de-duplication still possible. Commit 40251fee00
("[BPF][NewPM] Make BPFTargetMachine properly adjust NPM optimizer
pipeline") made a change to create a BPFAbstractMemberAccess
object per function so module level de-duplication is not
possible any more without going through all module globals.
This patch simply changed the map which holds reloc globals
as class static, so it will be available to all
BPFAbstractMemberAccess objects for different functions.
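A minimal sketch of the first fix (member name assumed, not the exact
in-tree declaration):

  class BPFAbstractMemberAccess {
    // Shared across all per-function pass objects, so module-level
    // de-duplication of relocation globals still works.
    static std::map<std::string, GlobalVariable *> GEPGlobals;
  };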

The second bug is related to btf-id relocation
(BPFPreserveDIType.cpp). Before Commit 54d9f743c8, the pass
was a ModulePass, so a local counter variable, incremented for
each instance, worked fine. But after Commit 54d9f743c8,
the pass became a FunctionPass, and a local variable won't work
properly since different functions would start with the same
initial value. Fix the issue by making the local count variable
static, so it is truly unique across the whole module
compilation.

Differential Revision: https://reviews.llvm.org/D88942
2020-10-06 22:37:49 -07:00
Arthur Eubanks 40251fee00 [BPF][NewPM] Make BPFTargetMachine properly adjust NPM optimizer pipeline
This involves porting BPFAbstractMemberAccess and BPFPreserveDIType to
NPM, then adding them to BPFTargetMachine::registerPassBuilderCallbacks()
(the NPM equivalent of adjustPassManager()).

Reviewed By: yonghong-song, asbirlea

Differential Revision: https://reviews.llvm.org/D88855
2020-10-06 07:42:32 -07:00
Yonghong Song ca1ce397ac BPF: explicitly specify bpfel triple for certain tests
Commit 54d9f743c8 ("BPF: move AbstractMemberAccess and
PreserveDIType passes to EP_EarlyAsPossible") changed most
of the CORE tests to an opt run followed by llc, and opt requires
the target triple to be specified in the IR.

There are a few tests where little endian and big endian
report different results. For the little endian versions of these
tests, "target triple = "bpf"" will produce wrong results
if the test is executed on a big endian machine, e.g., a
PowerPC big endian machine, since target "bpf" represents the
host endianness and will resolve to "bpfeb".
The buildbot reported such failures when building and running
on a PowerPC big endian machine.

To fix the issue, use "target triple = "bpfel"" instead.
2020-09-28 20:25:25 -07:00
Yonghong Song 54d9f743c8 BPF: move AbstractMemberAccess and PreserveDIType passes to EP_EarlyAsPossible
Move abstractMemberAccess and PreserveDIType passes as early as
possible, right after clang code generation.

Currently, the compiler may transform code like
  p1 = llvm.bpf.builtin.preserve.struct.access(base, 0, 0);
  p2 = llvm.bpf.builtin.preserve.struct.access(p1, 1, 2);
  a = llvm.bpf.builtin.preserve_field_info(p2, EXIST);
  if (a) {
    p1 = llvm.bpf.builtin.preserve.struct.access(base, 0, 0);
    p2 = llvm.bpf.builtin.preserve.struct.access(p1, 1, 2);
    bpf_probe_read(buf, buf_size, p2);
  }
to
  p1 = llvm.bpf.builtin.preserve.struct.access(base, 0, 0);
  p2 = llvm.bpf.builtin.preserve.struct.access(p1, 1, 2);
  a = llvm.bpf.builtin.preserve_field_info(p2, EXIST);
  if (a) {
    bpf_probe_read(buf, buf_size, p2);
  }
and eventually assembly code looks like
  reloc_exist = 1;
  reloc_member_offset = 10; //calculate member offset from base
  p2 = base + reloc_member_offset;
  if (reloc_exist) {
    bpf_probe_read(bpf, buf_size, p2);
  }
If, during libbpf relocation resolution, reloc_exist is actually
resolved to 0 (does not exist), the reloc_member_offset relocation
cannot be resolved and will be patched with an illegal instruction.
This will cause a verifier failure.

This patch attempts to address this issue by doing chaining
analysis and replacing chains with special globals right
after clang code gen. This removes the CSE possibility
described above. The IR typically looks like
  %6 = load @llvm.sk_buff:0:50$0:0:0:2:0
  %7 = bitcast %struct.sk_buff* %2 to i8*
  %8 = getelementptr i8, i8* %7, %6
for a particular address computation relocation.

But this transformation has another consequence: code sinking
may happen like below:
  PHI = <possibly different @preserve_*_access_globals>
  %7 = bitcast %struct.sk_buff* %2 to i8*
  %8 = getelementptr i8, i8* %7, %6

For such cases, we will not be able to generate relocations since
multiple relocations are merged into one.

This patch introduced a passthrough builtin
to prevent such optimization. Inline assembly appears to have more
impact on optimization, e.g., inlining; using a passthrough builtin
has less impact on optimizations.

A new IR pass is introduced at the beginning of target-dependent
IR optimization, which does:
  - report a fatal error if any reloc global appears in PHI nodes
  - remove all bpf passthrough builtin functions

Changes for existing CORE tests:
  - for clang tests, add "-Xclang -disable-llvm-passes" flags to
    avoid builtin->reloc_global transformation so the test is still
    able to check correctness for clang generated IR.
  - for llvm CodeGen/BPF tests, add an "opt -O2 <ir_file> | llvm-dis" command
    before the "llc" command, since "opt" is needed to run the newly-placed
    builtin->reloc_global transformation. Add the target triple in the IR
    file since "opt" requires it.
  - Since the target triple is added in the IR file, if a test may produce
    different results for different endianness, two tests are
    created, one for bpfeb and another for bpfel, e.g., some tests
    for relocation of lshift/rshift of bitfields.
  - field-reloc-bitfield-1.ll has different relocations compared to
    the old code. This is because, for the structure in the test,
    the new code computes struct layout alignment 4 while the old code
    computed 8. Align 8 is more precise and permits a double load. With
    align 4, the new mechanism uses a 4-byte load, thus generating
    different relocations.
  - test intrinsic-transforms.ll is removed. It was used to test
    cse on intrinsics so we do not lose metadata. Now that metadata is
    attached to the global and not the instruction, it won't get lost with cse.

Differential Revision: https://reviews.llvm.org/D87153
2020-09-28 16:56:22 -07:00
Yonghong Song 87cba43402 BPF: add a SimplifyCFG IR pass during generic Scalar/IPO optimization
The following bpf linux kernel selftest failed with latest
llvm:
  $ ./test_progs -n 7/10
  ...
  The sequence of 8193 jumps is too complex.
  verification time 126272 usec
  stack depth 320
  processed 114799 insns (limit 1000000)
  ...
  libbpf: failed to load object 'pyperf600_nounroll.o'
  test_bpf_verif_scale:FAIL:110
  #7/10 pyperf600_nounroll.o:FAIL
  #7 bpf_verif_scale:FAIL

After some investigation, I found the following llvm patch
  https://reviews.llvm.org/D84108
is responsible. The patch disabled hoisting common instructions
in SimplifyCFG by default. Later on, the code changes such that a
SimplifyCFG phase with hoisting on cannot do the work any more.

A test is provided to demonstrate the problem.
The IR before simplifyCFG looks like:
  for.cond:
    %i.0 = phi i32 [ 0, %entry ], [ %inc, %for.inc ]
    %cmp = icmp ult i32 %i.0, 6
    br i1 %cmp, label %for.body, label %for.cond.cleanup

  for.cond.cleanup:
    %2 = load i8*, i8** %frame_ptr, align 8, !tbaa !2
    %cmp2 = icmp eq i8* %2, null
    %conv = zext i1 %cmp2 to i32
    call void @llvm.lifetime.end.p0i8(i64 8, i8* nonnull %1) #3
    call void @llvm.lifetime.end.p0i8(i64 8, i8* nonnull %0) #3
    ret i32 %conv

  for.body:
    %3 = load i8*, i8** %frame_ptr, align 8, !tbaa !2
    %tobool.not = icmp eq i8* %3, null
    br i1 %tobool.not, label %for.inc, label %land.lhs.true

The first two insns of `for.cond.cleanup` and `for.body`, load and
icmp, can be hoisted to the `for.cond` block. With patch D84108, the
optimization is delayed. But unfortunately, later on, loop rotation
added additional phi nodes to `for.body` and the hoisting cannot
be done any more.

Note that such hoisting is beneficial to bpf programs, as the
bpf verifier does path-sensitive analysis and verification.
The hoisting prevents reloading from the stack, which would assume
a conservative value and increase the number of explored insns.
In this case, it caused a verifier failure.

To fix this problem, I added an IR pass in the bpf target
to perform an additional SimplifyCFG with common-instruction
hoisting enabled.
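A hedged sketch of the pass construction (placement and registration hook
assumed; SimplifyCFGOptions is the standard LLVM options builder):

  // Request an extra SimplifyCFG run with common-instruction hoisting
  // explicitly enabled, restoring the pre-D84108 behavior for BPF.
  FPM.addPass(SimplifyCFGPass(SimplifyCFGOptions().hoistCommonInsts(true)));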

Differential Revision: https://reviews.llvm.org/D85434
2020-08-06 13:16:00 -07:00
Yonghong Song 00602ee7ef BPF: simplify IR generation for __builtin_btf_type_id()
This patch simplified IR generation for __builtin_btf_type_id().
For __builtin_btf_type_id(obj, flag), previously the IR builtin
looked like
   if (obj is an lvalue)
     llvm.bpf.btf.type.id(obj.ptr, 1, flag)  !type
   else
     llvm.bpf.btf.type.id(obj, 0, flag)  !type
The purpose of the 2nd argument is to differentiate
   __builtin_btf_type_id(obj, flag) where obj is an lvalue
vs.
   __builtin_btf_type_id(obj.ptr, flag)

Note that obj or obj.ptr is never used by the backend
and the `obj` argument is only used to derive the type.
This code sequence is subject to potential llvm CSE when
  - obj is the same, e.g., nullptr
  - flag is the same
  - metadata type is different, e.g., typedef of struct "s"
    and struct "s".
In the above, we don't want CSE since their metadata is different.

This patch changes the IR builtin to
   llvm.bpf.btf.type.id(seq_num, flag)  !type
where seq_num is always increasing. This will prevent potential
llvm CSE.

Also report an error if the type name is empty for a
remote relocation, since remote relocation needs a non-empty
type name to do relocation against vmlinux.

Differential Revision: https://reviews.llvm.org/D85174
2020-08-04 16:29:42 -07:00
Yonghong Song 6d218b4adb BPF: support type exist/size and enum exist/value relocations
Four new CO-RE relocations are introduced:
  - TYPE_EXISTENCE: whether a typedef/record/enum type exists
  - TYPE_SIZE: the size of a typedef/record/enum type
  - ENUM_VALUE_EXISTENCE: whether an enum value of an enum type exists
  - ENUM_VALUE: the enum value of an enum type

These additional relocations will make CO-RE bpf programs
more adaptable to potential kernel internal data structure
changes.
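On the source side these relocations are typically reached through clang
builtins; a hedged illustration (builtin usage and the kind values are
assumptions modeled on libbpf conventions, not defined in this patch):

  enum { TYPE_EXISTS = 0, TYPE_SIZE = 1,          // assumed kind encodings
         ENUM_VALUE_EXISTS = 0, ENUM_VALUE = 1 };
  struct task_struct;                             // some kernel type of interest
  enum pid_type { PIDTYPE_PID = 0 };
  bool has_task = __builtin_preserve_type_info(*(struct task_struct *)0,
                                               TYPE_EXISTS);
  long pid_val = __builtin_preserve_enum_value(*(enum pid_type *)PIDTYPE_PID,
                                               ENUM_VALUE);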

Differential Revision: https://reviews.llvm.org/D83878
2020-08-04 12:35:39 -07:00
Elvina Yakubova b36a3e6140 [llvm-readobj] Update tests because of changes in llvm-readobj behavior
This patch updates tests using llvm-readobj and llvm-readelf, because
soon reading from stdin will be achievable only via a '-' as described
here: https://bugs.llvm.org/show_bug.cgi?id=46400. Patch with changes to
llvm-readobj behavior is here: https://reviews.llvm.org/D83704

Differential Revision: https://reviews.llvm.org/D83912

Reviewed by: jhenderson, MaskRay, grimar
2020-07-20 10:39:04 +01:00
Yonghong Song 0e347c0ff0 BPF: generate .rodata BTF datasec for certain initialized local var's
Currently, BTF datasec type for .rodata is generated only if there are
user-defined readonly global variables which have debuginfo generated.

Certain readonly global variables may be generated from initialized
local variables. For example,
  void foo(const void *);
  int test() {
    const struct {
      unsigned a[4];
      char b;
    } val = { .a = {2, 3, 4, 5}, .b = 6 };
    foo(&val);
    return 0;
  }

Clang will create a private linkage const global to store
the initialized value:
  @__const.test.val = private unnamed_addr constant %struct.anon
      { [4 x i32] [i32 2, i32 3, i32 4, i32 5], i8 6 }, align 4

This global variable eventually is put in the .rodata ELF section.

If there is a .rodata ELF section, libbpf expects a BTF .rodata
datasec as well, even though it may be empty, meaning there are no
global readonly variables with proper debuginfo. Martin reported
a bug where, without this empty BTF .rodata datasec, bpftool
gen exits with an error.

This patch fixed the issue by generating the .rodata BTF datasec
if there exists local variable initial data which will result in
a .rodata ELF section.

Differential Revision: https://reviews.llvm.org/D84002
2020-07-17 09:45:57 -07:00
Yonghong Song 152a9fef1b BPF: permit .maps section variables with typedef type
Currently, when llvm sees a global variable in the .maps section,
it ensures its type must be a struct type. Then the pointee
is further evaluated for the structure members.
In normal cases, the pointee type would be skipped.

Although this is what all current bpf programs are doing,
it is a little bit restrictive. For example, it is legitimate
for users to have:
typedef struct { int key_size; int value_size; } __map_t;
__map_t map __attribute__((section(".maps")));

This patch lifts this restriction, and a typedef of
a struct type is also allowed for .maps section variables.
To avoid creating unnecessary fixup entries when traversal
starts with a typedef/struct type, the new implementation
first traverses all map struct members and then traverses
the typedef/struct type. This way, in the internal BTFDebug
implementation, no fixup entries are generated.

Two new unit tests are added for typedef and const
struct in .maps section. Also tested with kernel bpf selftests.

Differential Revision: https://reviews.llvm.org/D83638
2020-07-12 09:42:25 -07:00
Yonghong Song 3eacfdc72f [BPF] Fix a BTF gen bug related to a pointer struct member
Currently, BTF generation stops at pointer struct members
if the pointee type is a struct. This is to avoid bloating
the generated BTF size. The following is the process used to
correctly record types for these pointee struct types:
  - During type traversal stage, when a struct member, which
    is a pointer to another struct, is encountered,
    the pointee struct type, keyed with its name, is
    remembered in a Fixup map.
  - Later, when all type traversal is done, the Fixup map
    is scanned, based on struct name matching, to either
    resolve as pointing to a real already generated type
    or as a forward declaration.

Andrii discovered a bug when the pointee struct of a struct member
is anonymous. In this case, a struct with an empty name is
recorded in the Fixup map, and later another anonymous
struct with an empty name may be defined in BTF. So wrong type
resolution happens.

To fix the problem, if the struct member's pointee struct
is anonymous, the pointee struct type will be generated directly
instead of being put in the Fixup map.
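A minimal illustration of the failing shape (type names hypothetical):

  struct outer {
    struct { int x; } *p;   // anonymous pointee: its empty-name Fixup key
  };                        // could collide, so its type is generated directly
  struct other {
    struct { long y; } *q;  // a different anonymous struct, same empty name
  };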

Differential Revision: https://reviews.llvm.org/D82976
2020-07-01 09:55:01 -07:00
Yonghong Song 7f6bc84a97 [BPF] Fix a bug for __builtin_preserve_field_info() processing
Andrii discovered a problem where a simple case similar to below
will generate wrong relocation kind:
  enum { FIELD_EXISTENCE = 2, };
  struct s1 { int a1; };
  int test() {
    struct s1 *v = 0;
    return __builtin_preserve_field_info(v[0], FIELD_EXISTENCE);
  }
The expected relocation kind should be FIELD_EXISTENCE, but
the recorded reloc kind in the final object file is FIELD_BYTE_OFFSET,
which is incorrect.

This exposed a bug in generating access strings from intrinsics.
The current access string generation has two steps:
  step 1: find the base struct/union type,
  step 2: traverse members in the base type.

The current implementation relies on at least one member access
in step 2 to get the correct relocation kind, which is true
in typical cases. But if there are no member accesses, the current
implementation falls back to the default info kind FIELD_BYTE_OFFSET.
This is incorrect; we should still record the reloc kind
based on the user input. This patch fixed this issue by properly
recording the reloc kind in such cases.

Differential Revision: https://reviews.llvm.org/D82932
2020-06-30 23:45:37 -07:00
Yonghong Song 89648eb16d [BPF] fix a bug for BTF pointee type pruning
In BTF, pointee type pruning is used to avoid cluttering
the prog BTF with too many unused types. For example,
   struct task_struct {
      ...
      struct mm_struct *mm;
      ...
   }
If a bpf program does not access members of "struct mm_struct",
there is no need to bring types for "struct mm_struct" into BTF.

This patch fixed a bug where an incorrect pruning happened.
The test case is like below:
    struct t;
    typedef struct t _t;
    struct s1 { _t *c; };
    int test1(struct s1 *arg) { ... }

    struct t { int a; int b; };
    struct s2 { _t c; }
    int test2(struct s2 *arg) { ... }

After processing test1(), among others, the BPF backend generates BTF types for
    "struct s1", "_t" and a placeholder for "struct t".
Note that "struct t" is not really generated. If a direct access
to a "struct t" member happened later, the "struct t" BTF type would be
generated properly.

While processing test2(), when handling member type "_t c",
the BPF backend sees that type "_t" is already generated, so it returns early.
This caused the problem that the "struct t" BTF type is never generated,
eventually causing an incorrect type definition for "struct s2".

To fix the issue, during DebugInfo type traversal, even if a
typedef/const/volatile/restrict derived type has been recorded in BTF,
if it is not a type pruning candidate, type traversal of its base type continues.

Differential Revision: https://reviews.llvm.org/D82041
2020-06-17 15:13:46 -07:00
Yonghong Song 4db1878158 [BPF] fix incorrect type in BPFISelDAGToDAG readonly load optimization
In the BPF instruction selection DAGToDAG transformation phase,
the BPF backend had an optimization to turn a load from the readonly
data section into a direct load of the values. This phase was
implemented before libbpf had readonly section support and before
alu32 was supported.

This phase, however, may generate an incorrect type when alu32 is
enabled. The following is an example,
  -bash-4.4$ cat ~/tmp2/t.c
  struct t {
    unsigned char a;
    unsigned char b;
    unsigned char c;
  };
  extern void foo(void *);
  int test() {
    struct t v = {
      .b = 2,
    };
    foo(&v);
    return 0;
  }

The compiler will turn the local variable "v" into a readonly section.
During the instruction selection phase, the compiler generates two
loads from the readonly section, a 2-byte load and a 1-byte load; e.g., for the 2-byte load,
  t8: i32,ch = load<(dereferenceable load 2 from `i8* getelementptr inbounds
       (%struct.t, %struct.t* @__const.test.v, i64 0, i32 0)`, align 1),
       anyext from i16> t3, GlobalAddress:i64<%struct.t* @__const.test.v> 0, undef:i64
  t9: ch = store<(store 2 into %ir.v1.sub1), trunc to i16> t3, t8,
    FrameIndex:i64<0>, undef:i64

BPF backend changed t8 to i64 = Constant<2> and eventually the generated machine IR:
  t10: i64 = MOV_ri TargetConstant:i64<2>
  t40: i32 = SLL_ri_32 t10, TargetConstant:i32<8>
  t41: i32 = OR_ri_32 t40, TargetConstant:i64<0>
  t9: ch = STH32<Mem:(store 2 into %ir.v1.sub1)> t41, TargetFrameIndex:i64<0>,
      TargetConstant:i64<0>, t3

Note that t10 in the above is not correct. The type should be i32 and the
instruction should be MOV_ri_32. The reason for the incorrect insn selection
is that BPF insn selection generated an i64 constant instead of an i32
constant as specified in the original load instruction. Such an incorrect
insn sequence eventually caused the following fatal error when a COPY insn
tried to copy a 64bit register to a 32bit subregister.
  Impossible reg-to-reg copy
  UNREACHABLE executed at ../lib/Target/BPF/BPFInstrInfo.cpp:42!

This patch fixed the issue by using the load result type instead of always i64
when doing readonly load optimization.

Differential Revision: https://reviews.llvm.org/D81630
2020-06-11 19:31:06 -07:00
Yonghong Song 3659559cf3 [BPF] Remove unnecessary MOV_32_64 instructions
Commit 13f6c81c5d ("[BPF] simplify zero extension
with MOV_32_64") tried to use MOV_32_64 instructions
instead of lshift/rshift instructions for zero extension.
This has the benefit of reducing the number of instructions
and may help the verifier too.

But the same commit also removed the old MOV_32_64
pruning, deeming it unsafe since MOV_32_64 does have a
side effect: zeroing out the top 32 bits of the register.
This caused the following failure in the kernel selftest
test_cls_redirect.o. In the linux kernel, we have
     struct __sk_buff {
        __u32 data;
        __u32 data_end;
     };
The compiler will generate a 32bit load for __sk_buff->data
and __sk_buff->data_end. But the kernel verifier will actually
load an address (a 64bit address on a 64bit kernel) into the
result register. In this particular example, the explicit zext
was not optimized away, destroying the top 32 bits of the
address, and the verifier rejected the program:
     w2 = *(u32 *)(r1 + 76)
     ...
     r2 = w2  /* MOV_32_64: this will clear top 32bit */

Currently, if the load and the zext are next to each other, the
instruction pattern matching can actually capture this to
avoid MOV_32_64; e.g., in BPFInstrInfo.td, we have
  def : Pat<(i64 (zextloadi32 ADDRri:$src)),
            (SUBREG_TO_REG (i64 0), (LDW32 ADDRri:$src), sub_32)>;

However, if they are not next to each other, LDW32 and
MOV_32_64 are generated, which may cause the above-mentioned
problem.

The BPF backend already tried to optimize away the pattern
   mov_32_64 + lshift + rshift

Commit 13f6c81c5d may generate a mov_32_64 not followed by shifts.
This patch added the optimization for a lone mov_32_64 too.

Differential Revision: https://reviews.llvm.org/D81048
2020-06-03 08:14:54 -07:00
John Fastabend 13f6c81c5d [BPF] simplify zero extension with MOV_32_64
The current pattern matching for zext results in the following code snippet
being produced,

  w1 = w0
  r1 <<= 32
  r1 >>= 32

Because BPF implementations require zero extension on 32bit loads, this
adds a few extra unneeded instructions and also makes it a bit
harder for the verifier to track the r1 register bounds. For example, in
the verifier trace below we see at the end of the snippet that the R2
offset is unknown. However, if we track this correctly, we see that w1
should have the same bounds as r8. R8's smax is less than the U32 max
value, so a zero-extend load should keep the same value. Adding a max
value of 800 (R8=inv(id=0,smax_value=800)) to an off=0, as seen in R7,
should create a max offset of 800. However, at the end of the snippet
we note that the R2 max offset is 0xffffFFFF.

  R0=inv(id=0,smax_value=800)
  R1_w=inv(id=0,umax_value=2147483647,var_off=(0x0; 0x7fffffff))
  R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=4,vs=1600,imm=0)
  R8_w=inv(id=0,smax_value=800,umax_value=4294967295,var_off=(0x0; 0xffffffff))
  R9=inv800 R10=fp0 fp-8=mmmm????
 58: (1c) w9 -= w8
 59: (bc) w1 = w8
 60: (67) r1 <<= 32
 61: (77) r1 >>= 32
 62: (bf) r2 = r7
 63: (0f) r2 += r1
 64: (bf) r1 = r6
 65: (bc) w3 = w9
 66: (b7) r4 = 0
 67: (85) call bpf_get_stack#67
  R0=inv(id=0,smax_value=800)
  R1_w=ctx(id=0,off=0,imm=0)
  R2_w=map_value(id=0,off=0,ks=4,vs=1600,umax_value=4294967295,var_off=(0x0; 0xffffffff))
  R3_w=inv(id=0,umax_value=800,var_off=(0x0; 0x3ff))
  R4_w=inv0 R6=ctx(id=0,off=0,imm=0)
  R7=map_value(id=0,off=0,ks=4,vs=1600,imm=0)
  R8_w=inv(id=0,smax_value=800,umax_value=4294967295,var_off=(0x0; 0xffffffff))
  R9_w=inv(id=0,umax_value=800,var_off=(0x0; 0x3ff))
  R10=fp0 fp-8=mmmm????

After this patch, the R1 bounds are not smashed by the <<=32 >>=32 shifts
and we get correct bounds on R2: umax_value=800.

Further, it reduces 3 insns to 1.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>

Differential Revision: https://reviews.llvm.org/D73985
2020-05-27 11:26:39 -07:00
Yonghong Song 8e8f1bd75a [BPF] Return fail if disassembled insn registers out of range
Daniel reported an llvm-objdump segfault like below:
  $ llvm-objdump -D bpf_xdp.o
  ...
  0000000000000000 <.strtab>:
       0:       00 63 69 6c 69 75 6d 5f <unknown>
       1:       6c 62 36 5f 61 66 66 69 w2 <<= w6
  ...
  (llvm-objdump: lib/Target/BPF/BPFGenAsmWriter.inc:1087: static const char*
   llvm::BPFInstPrinter::getRegisterName(unsigned int): Assertion
   `RegNo && RegNo < 25 && "Invalid register number!"' failed.
   Stack dump:
   0.      Program arguments: llvm-objdump -D bpf_xdp.o
    ...
    abort
    ...
    llvm::BPFInstPrinter::getRegisterName(unsigned int)
    llvm::BPFInstPrinter::printMemOperand(llvm::MCInst const*,
                          int, llvm::raw_ostream&, char const*)
    llvm::BPFInstPrinter::printInstruction(llvm::MCInst const*,
                          unsigned long, llvm::raw_ostream&)
    llvm::BPFInstPrinter::printInst(llvm::MCInst const*,
                          unsigned long, llvm::StringRef, llvm::MCSubtargetInfo const&,
                          llvm::raw_ostream&)
   ...

Basically, since -D enables disassembly for all sections, .strtab is also
disassembled, and some strings are decoded as legal instructions but with
illegal register numbers. When llvm-objdump tries to print the register
name for these illegal register numbers, an assertion and segfault happen.

The patch fixed the issue by returning failure for a disassembled insn if
that insn contains a reg operand with an illegal register number.
The insn will be printed as "<unknown>" instead of causing an assertion.
2020-05-18 18:53:23 -07:00
Yonghong Song ddff9799d2 [BPF] Prevent disassembly segfault for NOP insn
For a simple program like below:
  -bash-4.4$ cat t.c
  int test() {
    asm volatile("r0 = r0" ::);
    return 0;
  }
compiled with
  clang -target bpf -O2 -c t.c
the following llvm-objdump command will segfault.
  llvm-objdump -d t.o

  0:       bf 00 00 00 00 00 00 00 nop
  llvm-objdump: ../include/llvm/ADT/SmallVector.h:180
  ...
  Assertion `idx < size()' failed
  ...
  abort
  ...
  llvm::BPFInstPrinter::printOperand
  llvm::BPFInstPrinter::printInstruction
  ...

The reason is that NOP and MOV_rr (r0 = r0) have the same encoding.
The disassembler's getInstruction() decodes it as a NOP instruction, but
during printInstruction() the same encoding is interpreted as
a MOV_rr instruction. Such a mismatch caused the segfault.

The fix is to make the NOP instruction CodeGen-only so the disassembler
will skip the NOP insn when disassembling.

Note that the instruction "r0 = r0" should not appear in non-inline-asm
code, since the BPF Machine Instruction Peephole optimization will
remove it.

Differential Revision: https://reviews.llvm.org/D80156
2020-05-18 17:40:18 -07:00
Yonghong Song 6b01b46538 [BPF] preserve debuginfo types for builtin __builtin__btf_type_id()
The builtin function
  u32 btf_type_id = __builtin_btf_type_id(param, 0)
can help preserve type info for the following use case:
  extern void foo(..., void *data, int size);
  int test(...) {
    struct t { int a; int b; int c; } d;
    d.a = ...; d.b = ...; d.c = ...;
    foo(..., &d, sizeof(d));
  }

The function "foo" in the above only see raw data and does not
know what type of the data is. In certain cases, e.g., logging,
the additional type information will help pretty print.

This patch handles the builtin in BPF backend. It includes
an IR pass to translate the IR intrinsic to a load of
a global variable which carries the metadata, and an MI
pass to remove the intermediate load of the global variable.
Finally, in AsmPrinter pass, proper instruction are generated.

In the above example, the second argument for __builtin_btf_type_id()
is 0, which means a relocation for local adjustment,
i.e., w.r.t. the bpf program's BTF change, will be generated.
The value 1 for the second argument means
a relocation for remote adjustment, e.g., against vmlinux.

Differential Revision: https://reviews.llvm.org/D74572
2020-05-15 08:00:44 -07:00
Yonghong Song 3cb7e7bf95 BPF: fix a CORE optimization bug
For the test case in this patch, like below:
  struct t { int a; } __attribute__((preserve_access_index));
  int foo(void *);
  int test(struct t *arg) {
      long param[1];
      param[0] = (long)&arg->a;
      return foo(param);
  }

The IR right before BPF SimplifyPatchable phase:
  %1:gpr = LD_imm64 @"llvm.t:0:0$0:0"
  %2:gpr = LDD killed %1:gpr, 0
  %3:gpr = ADD_rr %0:gpr(tied-def 0), killed %2:gpr
  STD killed %3:gpr, %stack.0.param, 0
After the SimplifyPatchable phase, the following incorrect IR is generated:
  %1:gpr = LD_imm64 @"llvm.t:0:0$0:0"
  %3:gpr = ADD_rr %0:gpr(tied-def 0), killed %1:gpr
  CORE_MEM killed %3:gpr, 306, %0:gpr, @"llvm.t:0:0$0:0"

Note that the CORE_MEM pseudo op is introduced to encode
memory operations related to CORE. In the above, we intend
to check whether we have a store like
   *(%3:gpr + 0) = ...
and if this is the case, we could replace it with
   *(%0:gpr + @"llvm.t:0:0$0:0") = ...

Unfortunately, in the above, the IR for the store is
   *(%stack.0.param + 0) = %3:gpr
and the transformation should not happen.

Note that we won't have a problem if the actual CORE
dereference (arg->a) happens.

This patch fixed the problem by skipping the CORE optimization if
the use of the ADD_rr result is not the base address of the store
operation.

Differential Revision: https://reviews.llvm.org/D78466
2020-04-20 19:54:51 -07:00
Jonathan Roelofs 7c5d2bec76 [llvm] Fix missing FileCheck directive colons
https://reviews.llvm.org/D77352
2020-04-06 09:59:08 -06:00
Yonghong Song ced0d1f42b [BPF] support 128bit int explicitly in layout spec
Currently, bpf does not specify 128bit alignment in its
layout spec. So for a structure like
  struct ipv6_key_t {
    unsigned pid;
    unsigned __int128 saddr;
    unsigned short lport;
  };
clang will generate IR type
  %struct.ipv6_key_t = type { i32, [12 x i8], i128, i16, [14 x i8] }
The additional padding is to ensure that the later IR->MIR stage can
generate a correct stack layout with the target layout spec.

But it is common practice for a tracing program to be
first compiled with a target flag (e.g., x86_64 or aarch64) through
clang to generate IR and then go through llc to generate bpf
byte code. Tracing programs often refer to kernel internal
data structures which need to be compiled with a non-bpf target.
But such a compilation model may cause a problem on aarch64.
The bcc issue https://github.com/iovisor/bcc/issues/2827
reported such a problem.

For the above structure, since aarch64 has "i128:128" in its
layout string, the generated IR will have
  %struct.ipv6_key_t = type { i32, i128, i16 }

Since bpf does not have "i128:128" in its spec string,
the selectionDAG assumes alignment 8 for i128 and
computes the stack storage size for the above as 32 bytes,
which leads to incorrect code later.

x86_64 does not have this issue, as it does not have
"i128:128" in its layout spec and thus permits i128 to
be aligned to 8 bytes on the stack. Its IR type looks like
  %struct.ipv6_key_t = type { i32, [12 x i8], i128, i16, [14 x i8] }

The fix here is to add i128 support in the layout spec, the same as
aarch64. The only downside is that we may have less optimal stack
allocation in certain cases, since we now require 16-byte alignment
for i128 instead of 8. But this is probably fine, as i128 is
not widely used and in most cases users should already
have proper alignment.
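A sketch of where the change lands (function shape assumed to mirror other
targets; the surrounding layout fields are assumptions):

  // BPF data layout string now carries "i128:128", matching aarch64.
  static std::string computeDataLayout(const Triple &TT) {
    if (TT.getArch() == Triple::bpfeb)
      return "E-m:e-p:64:64-i64:64-i128:128-n32:64-S128";  // big endian
    return "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128";    // little endian
  }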

Differential Revision: https://reviews.llvm.org/D76587
2020-03-28 11:46:29 -07:00
Fangrui Song ecd6d7254e [test] llvm/test/: change llvm-objdump single-dash long options to double-dash options
As announced here: http://lists.llvm.org/pipermail/llvm-dev/2019-April/131786.html

Grouped option syntax (POSIX Utility Conventions) does not play well with -long-option
A subsequent change will reject -long-option.
2020-03-15 17:46:23 -07:00
Fangrui Song 71e2ca6e32 [llvm-objdump] -d: print `00000000 <foo>:` instead of `00000000 foo:`
The new behavior matches GNU objdump. A pair of angle brackets makes tests slightly easier.

`.foo:` is not unique and thus cannot be used in a `CHECK-LABEL:` directive.
Without `-LABEL`, the CHECK line can match the `Disassembly of section`
line and causes the next `CHECK-NEXT:` to fail.

```
Disassembly of section .foo:

0000000000001634 .foo:
```

Bdragon: <> has a metalinguistic connotation. It just "feels right"

Reviewed By: rupprecht

Differential Revision: https://reviews.llvm.org/D75713
2020-03-05 18:05:28 -08:00
Yuanfang Chen 4ad7685258 Revert "Revert "Reland "[Support] make report_fatal_error `abort` instead of `exit`"""
This reverts commit 80a34ae311 with fixes.

Previously, since bots turning on EXPENSIVE_CHECKS are essentially turning on
MachineVerifierPass by default on X86 and the fact that
inline-asm-avx-v-constraint-32bit.ll and inline-asm-avx512vl-v-constraint-32bit.ll
are not expected to generate functioning machine code, this would go
down to `report_fatal_error` in MachineVerifierPass. Here passing
`-verify-machineinstrs=0` to make the intent explicit.
2020-02-13 10:16:06 -08:00
Yuanfang Chen 17122ec10a Revert "Revert "Revert "Reland "[Support] make report_fatal_error `abort` instead of `exit`""""
This reverts commit bb51d24330.
2020-02-13 10:08:05 -08:00
Yuanfang Chen bb51d24330 Revert "Revert "Reland "[Support] make report_fatal_error `abort` instead of `exit`"""
This reverts commit 80a34ae311 with fixes.

On bots llvm-clang-x86_64-expensive-checks-ubuntu and
llvm-clang-x86_64-expensive-checks-debian only,
llc returns 0 for these two tests unexpectedly. I tweaked the RUN line a little
bit in the hope that LIT is the culprit since this change is not in the
codepath these tests are testing.
llvm\test\CodeGen\X86\inline-asm-avx-v-constraint-32bit.ll
llvm\test\CodeGen\X86\inline-asm-avx512vl-v-constraint-32bit.ll
2020-02-13 10:02:53 -08:00
Jordan Rupprecht 60a8a504f1 [llvm-objdump] Print file format in lowercase to match GNU output.
Summary:
GNU objdump prints the file format in lowercase, e.g. `elf64-x86-64`. llvm-objdump prints `ELF64-x86-64` right now, even though piping that into llvm-objcopy refuses that as a valid arch to use.

As an example of a problem this causes, see: https://github.com/ClangBuiltLinux/linux/issues/779

Reviewers: MaskRay, jhenderson, alexshap

Reviewed By: MaskRay

Subscribers: tpimh, sbc100, grimar, jvesely, nhaehnle, kerbowa, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74433
2020-02-12 08:17:01 -08:00
Yuanfang Chen 80a34ae311 Revert "Reland "[Support] make report_fatal_error `abort` instead of `exit`""
This reverts commit rGcd5b308b828e, rGcd5b308b828e, rG8cedf0e2994c.

There are issues to be investigated for polly bots and bots turning on
EXPENSIVE_CHECKS.
2020-02-11 20:41:53 -08:00
Yuanfang Chen 8cedf0e299 Reland "[Support] make report_fatal_error `abort` instead of `exit`"
Summary:
Reland D67847 after D73742 is committed. Replace `sys::Process::Exit(1)`
with `abort` in `report_fatal_error`.

After this patch, for tools turning on `CrashRecoveryContext`, the
crash handler installed by `CrashRecoveryContext` is called unless
they have installed a non-returning handler via
`llvm::install_fatal_error_handler`, as `cc1_main` currently does.

Reviewers: rnk, MaskRay, aganea, hans, espindola, jhenderson

Subscribers: jholewinski, qcolombet, dschuff, jyknight, emaste, sdardis, nemanjai, jvesely, nhaehnle, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, steven_wu, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, rupprecht, jocewei, jsji, Jim, dmgreen, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, kerbowa, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74456
2020-02-11 18:20:40 -08:00
Yonghong Song 29bc5dd194 [BPF] implement isTruncateFree and isZExtFree in BPFTargetLowering
Currently, the isTruncateFree() and isZExtFree() callbacks return false,
as they are not implemented in the BPF backend. This may cause suboptimal
code generation. For example, if a load in the context of zero extension
has more than one use, the pattern zextload{i8,i16,i32} will
not be generated. Rather, the load will be matched first and
then the result is zero extended.

For example, in the test together with this commit, we have
   I1: %0 = load i32, i32* %data_end1, align 4, !tbaa !2
   I2: %conv = zext i32 %0 to i64
   ...
   I3: %2 = load i32, i32* %data, align 4, !tbaa !7
   I4: %conv2 = zext i32 %2 to i64
   ...
   I5: %4 = trunc i64 %sub.ptr.lhs.cast to i32
   I6: %conv13 = sub i32 %4, %2
   ...

The I1 and I2 will match to one zextloadi32 DAG node, where SUBREG_TO_REG is
used to convert a 32bit register to a 64bit one. During code generation,
SUBREG_TO_REG is a noop.

The %2 in I3 is used in both I4 and I6. If isTruncateFree() is false,
the current implementation will generate a SLL_ri and SRL_ri
for the zext part during lowering.

This patch implements isTruncateFree() in the BPF backend, so for the
above example, I3 and I4 will generate a zextloadi32 DAG node, and
SUBREG_TO_REG is generated during lowering to Machine IR.

isZExtFree() is also implemented, as it should help code gen as well.

This patch also enables the change in https://reviews.llvm.org/D73985,
since the code path that generates the MOV_32_64 machine instruction
no longer kicks in.
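A hedged sketch of the hook (body assumed; the signature is the standard
TargetLowering one):

  bool BPFTargetLowering::isTruncateFree(Type *Ty1, Type *Ty2) const {
    if (!Ty1->isIntegerTy() || !Ty2->isIntegerTy())
      return false;
    // Truncating a 64bit value to 32bit is free on BPF: the value simply
    // lives on in the 32bit subregister.
    return Ty1->getPrimitiveSizeInBits() == 64 &&
           Ty2->getPrimitiveSizeInBits() == 32;
  }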

Differential Revision: https://reviews.llvm.org/D74101
2020-02-11 09:59:19 -08:00
Yonghong Song d96c1bbaa0 [BPF] disable ReduceLoadWidth during SelectionDag phase
The compiler may transform the following code
  ctx = ctx + reloc_offset
  ... (*(u32 *)ctx) & 0x8000 ...
to
  ctx = ctx + reloc_offset
  ... (*(u8 *)(ctx + 1)) & 0x80 ...
where reloc_offset will be replaced with a constant during
AsmPrinter phase.

The above transformed code will be rejected by the kernel verifier,
as it does not allow the
  *(type *)((ctx + non_zero_offset1) + non_zero_offset2)
style access pattern.

It is hard at the SelectionDag phase to identify whether a load
is related to the context or not. Sometimes, interprocedural analysis
may be needed. So let us simply prevent such an optimization
from happening.
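A minimal sketch of the override (assuming the standard TargetLowering
hook; the in-tree body may carry extra checks):

  bool BPFTargetLowering::shouldReduceLoadWidth(SDNode *Load,
                                                ISD::LoadExtType ExtTy,
                                                EVT NewVT) const {
    // Narrowing a load can turn a ctx access into
    // (ctx + off1) + off2, which the kernel verifier rejects,
    // so never reduce load width.
    return false;
  }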

Differential Revision: https://reviews.llvm.org/D73997
2020-02-04 18:37:43 -08:00
Yonghong Song 6d07802d63 [BPF] handle typedef of struct/union for CO-RE relocations
Linux commit
  1cf5b23988 (diff-289313b9fec99c6f0acfea19d9cfd949)
uses "#pragma clang attribute push (__attribute__((preserve_access_index)),
      apply_to = record)"
to apply CO-RE relocations to all records including the following pattern:
  #pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
  typedef struct {
    int a;
  } __t;
  #pragma clang attribute pop
  int test(__t *arg) { return arg->a; }

The current approach of using the struct/union type in the relocation
record results in an anonymous struct, which makes later type matching
difficult in the bpf loader. In fact, the current BPF backend will fail
the above program with an assertion:
  clang: ../lib/Target/BPF/BPFAbstractMemberAccess.cpp:796: ...
     Assertion `TypeName.size()' failed.

clang will be changed to use the type of the base of the member access,
which will preserve the typedef modifier for the
preserve_{struct,union}_access_index intrinsics in the above example.
Here we adjust the BPF backend to accept that the debuginfo
type metadata may be a 'typedef' and handle it properly.

Differential Revision: https://reviews.llvm.org/D73902
2020-02-04 08:53:03 -08:00
Yonghong Song 795bbb3662 [BPF] fix a bug in BPFMISimplifyPatchable pass with -O0
The recommended optimization level for BPF programs
is O2, since (1) BPF programs run inside the kernel and
the linux kernel won't work at the -O0 level, and (2) the
verifier is not able to handle O0 code properly, e.g., a
potentially large stack size and a lot of spills.

But we should keep -O0 at least compiling.
This patch fixed a bug in the BPFMISimplifyPatchable phase
where, with -O0, a segmentation fault happens for a
simple program like:

A test case is added to capture such a case.

Differential Revision: https://reviews.llvm.org/D73681
2020-01-30 08:28:39 -08:00
Yuanfang Chen 6e24c6037f Revert "[Support] make report_fatal_error `abort` instead of `exit`"
This reverts commit 647c3f4e47.

Got bots failure from sanitizer-windows and maybe others.
2020-01-15 17:52:25 -08:00
Yuanfang Chen 647c3f4e47 [Support] make report_fatal_error `abort` instead of `exit`
Summary:
This patch could be treated as a rebase of D33960. It also fixes PR35547.
A fix for `llvm/test/Other/close-stderr.ll` is proposed in D68164. It seems
the consensus is that the test is passing by chance and I'm not
sure how important it is for us, so it is removed like in D33960 for now.
The rest of the test fixes just add the `--crash` flag to the `not` tool.

** The reason it fixes PR35547 is

`exit` does cleanup, including calling class destructors, whereas `abort`
does not do any cleanup. In a multithreading environment such as ThinLTO or
JIT, threads may share state, mostly ManagedStatic<>. If the faulting thread
tears down a class while another thread is using it, there are chances of
memory corruption. This is bad: 1. it will stop error reporting like the
pretty stack printer; 2. the memory corruption is distracting and
nondeterministic in terms of error message and corruption type (depending
on the timing, it could be double free, heap use after free, etc.).
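A small sketch of the non-returning-handler escape hatch mentioned above
(handler body hypothetical; signature per llvm/Support/ErrorHandling.h):

  #include "llvm/Support/ErrorHandling.h"
  #include <cstdio>
  #include <cstdlib>

  static void toolFatalHandler(void *UserData, const char *Reason,
                               bool GenCrashDiag) {
    std::fprintf(stderr, "fatal: %s\n", Reason);
    std::exit(1);  // never returns, so report_fatal_error's abort() is skipped
  }

  // e.g., early in main():
  //   llvm::install_fatal_error_handler(toolFatalHandler, nullptr);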

Reviewers: rnk, chandlerc, zturner, sepavloff, MaskRay, espindola

Reviewed By: rnk, MaskRay

Subscribers: wuzish, jholewinski, qcolombet, dschuff, jyknight, emaste, sdardis, nemanjai, jvesely, nhaehnle, sbc100, arichardson, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, lenary, s.egerton, pzheng, cfe-commits, MaskRay, filcab, davide, MatzeB, mehdi_amini, hiraditya, steven_wu, dexonsmith, rupprecht, seiya, llvm-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D67847
2020-01-15 17:05:13 -08:00
Simon Pilgrim 8f49204f26 [SelectionDAG] ComputeKnownBits - minimum leading/trailing zero bits in LSHR/SHL (PR44526)
As detailed in https://blog.regehr.org/archives/1709 we don't make use of the known leading/trailing zeros for shifted values in cases where we don't know the shift amount value.

This patch adds support to SelectionDAG::ComputeKnownBits to use KnownBits::countMinTrailingZeros and countMinLeadingZeros to set the minimum guaranteed leading/trailing known zero bits.

Differential Revision: https://reviews.llvm.org/D72573
2020-01-13 11:08:12 +00:00
Yonghong Song fbb64aa698 [BPF] extend BTF_KIND_FUNC to cover global, static and extern funcs
Previously, an extern function was added as BTF_KIND_VAR. This does not
work well with the existing BTF infrastructure, as functions are expected
to use BTF_KIND_FUNC and BTF_KIND_FUNC_PROTO.
This patch added extern functions as BTF_KIND_FUNC. The two bits 0:1
of btf_type.info are used to indicate what kind of function it is:
  0: static
  1: global
  2: extern

Differential Revision: https://reviews.llvm.org/D71638
2020-01-10 09:06:31 -08:00
Yonghong Song ffd57408ef [BPF] Enable relocation location for load/store/shifts
Previously, a btf field relocation was always at an assignment like
   r1 = 4
which is converted from an ld_imm64 instruction.

This patch did an optimization such that the relocation
instruction might be a load, store, or shift. Specifically, the
following insns may also have a relocation, except BPF_MOV:
  LDB, LDH, LDW, LDD, STB, STH, STW, STD,
  LDB32, LDH32, LDW32, STB32, STH32, STW32,
  SLL, SRL, SRA

To accomplish this, a few BPF target-specific
codegen-only instructions are invented. They
are generated in the backend BPF SimplifyPatchable phase,
which runs at an early llc stage when SSA form is available.
The new codegen-only instructions will be converted to
real proper instructions at the codegen and BTF emission stage.

Note that, as revealed by a few tests, this optimization might
actually generate more relocations:
Scenario 1:
  if (...) {
    ... __builtin_preserve_field_info(arg->b2, 0) ...
  } else {
    ... __builtin_preserve_field_info(arg->b2, 0) ...
  }
  The compiler could do CSE to have only one relocation. But if both
  of the above are translated into codegen internal instructions,
  the compiler will not be able to do that.
Scenario 2:
  offset = ... __builtin_preserve_field_info(arg->b2, 0) ...
  ...
  ...  offset ...
  ...  offset ...
  ...  offset ...
  For whatever reason, the compiler might temporarily do copy
  propagation of the right-hand side of the "offset" assignment like
  ...  __builtin_preserve_field_info(arg->b2, 0) ...
  ...  __builtin_preserve_field_info(arg->b2, 0) ...
  and CSE would be able to deduplicate later.
  But if these intrinsics are converted to BPF pseudo instructions,
  they will not get deduplicated.

I do not expect a big instruction count difference.
It may actually reduce the instruction count, since the relocation
is now deeper in the insn dependency chain.
For example, for the test offset-reloc-fieldinfo-2.ll, this patch
generates 7 instead of 6 relocations for non-alu32 mode, but it
actually reduced the instruction count from 29 to 26.

Differential Revision: https://reviews.llvm.org/D71790
2019-12-26 09:07:39 -08:00
Fangrui Song a36ddf0aa9 Migrate function attribute "no-frame-pointer-elim"="false" to "frame-pointer"="none" as cleanups after D56351 2019-12-24 16:27:51 -08:00
Fangrui Song 502a77f125 Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351 2019-12-24 15:57:33 -08:00
Yonghong Song 7d0e8930ed [BPF] put not-section-attribute externs into BTF ".extern" data section
Currently, for extern variables without a section attribute, the
corresponding BTF_KIND_VARs will not be placed in any DataSec. This is
inconvenient, as every other generated BTF_KIND_VAR belongs to
a DataSec. This patch puts these extern variables into the
".extern" section so the bpf loader can have a consistent
processing mechanism for all data sections and variables.
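A hedged illustration (variable names hypothetical):

  extern int kernel_cfg_value;  // extern, no section attribute: its
                                // BTF_KIND_VAR is now grouped into the
                                // ".extern" DataSec
  int counter = 0;              // ordinary global: already covered by the
                                // ".data" DataSec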
2019-12-10 11:45:17 -08:00
Yonghong Song 4448125007 [BPF] Support to emit debugInfo for extern variables
Extern variable usage in BPF is different from a traditional
pure user space application. Recent discussion on the linux bpf
mailing list covered two use cases where debug info types are
required for extern variables:
  - extern types are required to have a suitable interface
    in libbpf (bpf loader) to provide kernel config parameters
    to bpf programs.
    https://lore.kernel.org/bpf/CAEf4BzYCNo5GeVGMhp3fhysQ=_axAf=23PtwaZs-yAyafmXC9g@mail.gmail.com/T/#t
  - extern types are required so the kernel bpf verifier can
    verify a program which uses external functions more precisely.
    This will make a later link with the actual external function
    need no reverification.
    https://lore.kernel.org/bpf/87eez4odqp.fsf@toke.dk/T/#m8d5c3e87ffe7f2764e02d722cb0d8cbc136880ed

This patch added bpf support to consume such info into BTF,
which can then be used by the bpf loader. The function
processFuncPrototypes() only adds extern function declarations
into BTF; functions with an actual definition are added to BTF
elsewhere.

Differential Revision: https://reviews.llvm.org/D70697
2019-12-09 21:53:29 -08:00