// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)

/*
 * common eBPF ELF operations.
 *
 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
 * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
 * Copyright (C) 2015 Huawei Inc.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation;
 * version 2.1 of the License (not later!)
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this program; if not, see <http://www.gnu.org/licenses>
 */

#include <stdlib.h>
#include <string.h>
#include <memory.h>
#include <unistd.h>
#include <asm/unistd.h>
#include <errno.h>
#include <linux/bpf.h>
#include <linux/filter.h>
#include <limits.h>
#include <sys/resource.h>
#include "bpf.h"
#include "libbpf.h"
#include "libbpf_internal.h"

/*
 * When building perf, unistd.h is overridden. __NR_bpf is
 * required to be defined explicitly.
 */
#ifndef __NR_bpf
# if defined(__i386__)
#  define __NR_bpf 357
# elif defined(__x86_64__)
#  define __NR_bpf 321
# elif defined(__aarch64__)
#  define __NR_bpf 280
# elif defined(__sparc__)
#  define __NR_bpf 349
# elif defined(__s390__)
#  define __NR_bpf 351
# elif defined(__arc__)
#  define __NR_bpf 280
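/* for mips, __NR_bpf is __NR_Linux (4000 for o32, 6000 for n32, 5000 for
 * n64) plus the per-ABI offset from the generated asm/unistd_*.h headers
 */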
# elif defined(__mips__) && defined(_ABIO32)
#  define __NR_bpf 4355
# elif defined(__mips__) && defined(_ABIN32)
#  define __NR_bpf 6319
# elif defined(__mips__) && defined(_ABI64)
#  define __NR_bpf 5315
# else
#  error __NR_bpf not defined. libbpf does not support your arch.
# endif
#endif

static inline __u64 ptr_to_u64(const void *ptr)
{
	return (__u64) (unsigned long) ptr;
}
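
/* Thin wrapper around the bpf(2) syscall. @size tells the kernel how many
 * bytes of @attr to read; callers typically pass the offsetofend() of the
 * last field they initialize, so uninitialized stack bytes past that point
 * are never handed to the kernel.
 */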
static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
			  unsigned int size)
{
	return syscall(__NR_bpf, cmd, attr, size);
}
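
/* Same as sys_bpf(), but remaps a returned fd of 0, 1 or 2 to one >= 3
 * (via ensure_good_fd()): libbpf assumes a valid BPF fd is never 0, and
 * some environments reset stdin/stdout/stderr when they notice what looks
 * like a stray fd in that range.
 */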
static inline int sys_bpf_fd(enum bpf_cmd cmd, union bpf_attr *attr,
			     unsigned int size)
{
	int fd;

	fd = sys_bpf(cmd, attr, size);
	return ensure_good_fd(fd);
}

#define PROG_LOAD_ATTEMPTS 5
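
/* BPF_PROG_LOAD can fail with a transient -EAGAIN, so retry the load a
 * bounded number of times (@attempts, PROG_LOAD_ATTEMPTS by default)
 * before reporting the error to the caller.
 */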
static inline int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
{
	int fd;

	do {
		fd = sys_bpf_fd(BPF_PROG_LOAD, attr, size);
	} while (fd < 0 && errno == EAGAIN && --attempts > 0);

	return fd;
}

/* Probe whether kernel switched from memlock-based (RLIMIT_MEMLOCK) to
 * memcg-based memory accounting for BPF maps and progs. This was done in [0].
 * We use the support for bpf_ktime_get_coarse_ns() helper, which was added in
 * the same 5.11 Linux release ([1]), to detect memcg-based accounting for BPF.
 *
 * [0] https://lore.kernel.org/bpf/20201201215900.3569844-1-guro@fb.com/
 * [1] d05512618056 ("bpf: Add bpf_ktime_get_coarse_ns helper")
 */
int probe_memcg_account(void)
{
	const size_t prog_load_attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
	struct bpf_insn insns[] = {
		BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
		BPF_EXIT_INSN(),
	};
	size_t insn_cnt = sizeof(insns) / sizeof(insns[0]);
	union bpf_attr attr;
	int prog_fd;

	/* attempt loading a trivial SOCKET_FILTER prog that calls the
	 * bpf_ktime_get_coarse_ns() helper
	 */
	memset(&attr, 0, prog_load_attr_sz);
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.insns = ptr_to_u64(insns);
	attr.insn_cnt = insn_cnt;
	attr.license = ptr_to_u64("GPL");

	prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
	if (prog_fd >= 0) {
		close(prog_fd);
		return 1;
	}
	return 0;
}
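
/* probe_memcg_account() presumably backs the FEAT_MEMCG_ACCOUNT feature
 * check that bump_rlimit_memlock() consults below via kernel_supports().
 */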

static bool memlock_bumped;
static rlim_t memlock_rlim = RLIM_INFINITY;

int libbpf_set_memlock_rlim(size_t memlock_bytes)
{
	if (memlock_bumped)
		return libbpf_err(-EBUSY);

	memlock_rlim = memlock_bytes;
	return 0;
}
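
/* A usage sketch (the 256 MB cap is an illustrative value): the limit must
 * be set before the first bpf_prog_load(), bpf_btf_load() or
 * bpf_object__load() call, otherwise -EBUSY is returned; passing 0
 * disables auto-bumping entirely.
 *
 *	libbpf_set_strict_mode(LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK);
 *	libbpf_set_memlock_rlim(256UL * 1024 * 1024);
 */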

int bump_rlimit_memlock(void)
{
	struct rlimit rlim;

	/* this is the default in libbpf 1.0, but for now user has to opt-in explicitly */
	if (!(libbpf_mode & LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK))
		return 0;

	/* if kernel supports memcg-based accounting, skip bumping RLIMIT_MEMLOCK */
	if (memlock_bumped || kernel_supports(NULL, FEAT_MEMCG_ACCOUNT))
		return 0;

	memlock_bumped = true;

	/* zero memlock_rlim disables auto-bumping RLIMIT_MEMLOCK */
	if (memlock_rlim == 0)
		return 0;

	rlim.rlim_cur = rlim.rlim_max = memlock_rlim;
	if (setrlimit(RLIMIT_MEMLOCK, &rlim))
		return -errno;

	return 0;
}

int bpf_map_create(enum bpf_map_type map_type,
		   const char *map_name,
		   __u32 key_size,
		   __u32 value_size,
		   __u32 max_entries,
		   const struct bpf_map_create_opts *opts)
{
	const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
	union bpf_attr attr;
	int fd;

	bump_rlimit_memlock();

	memset(&attr, 0, attr_sz);

	if (!OPTS_VALID(opts, bpf_map_create_opts))
		return libbpf_err(-EINVAL);

	attr.map_type = map_type;
	if (map_name)
		libbpf_strlcpy(attr.map_name, map_name, sizeof(attr.map_name));
	attr.key_size = key_size;
	attr.value_size = value_size;
	attr.max_entries = max_entries;

	attr.btf_fd = OPTS_GET(opts, btf_fd, 0);
	attr.btf_key_type_id = OPTS_GET(opts, btf_key_type_id, 0);
	attr.btf_value_type_id = OPTS_GET(opts, btf_value_type_id, 0);
	attr.btf_vmlinux_value_type_id = OPTS_GET(opts, btf_vmlinux_value_type_id, 0);

	attr.inner_map_fd = OPTS_GET(opts, inner_map_fd, 0);
	attr.map_flags = OPTS_GET(opts, map_flags, 0);
	attr.map_extra = OPTS_GET(opts, map_extra, 0);
	attr.numa_node = OPTS_GET(opts, numa_node, 0);
	attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);

	fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
	return libbpf_err_errno(fd);
}
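
/* A minimal usage sketch (map name, sizes and flags are illustrative):
 *
 *	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_NO_PREALLOC);
 *	int map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "example_map",
 *				    sizeof(__u32), sizeof(__u64), 1024, &opts);
 *
 * On failure the result is negative, with errno set (see libbpf_err_errno()).
 *
 * The bpf_create_map*() variants below are thin compatibility wrappers that
 * translate their fixed argument lists into bpf_map_create() calls.
 */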

int bpf_create_map_xattr(const struct bpf_create_map_attr *create_attr)
{
	LIBBPF_OPTS(bpf_map_create_opts, p);

	p.map_flags = create_attr->map_flags;
	p.numa_node = create_attr->numa_node;
	p.btf_fd = create_attr->btf_fd;
	p.btf_key_type_id = create_attr->btf_key_type_id;
	p.btf_value_type_id = create_attr->btf_value_type_id;
	p.map_ifindex = create_attr->map_ifindex;
	if (create_attr->map_type == BPF_MAP_TYPE_STRUCT_OPS)
		p.btf_vmlinux_value_type_id = create_attr->btf_vmlinux_value_type_id;
	else
		p.inner_map_fd = create_attr->inner_map_fd;

	return bpf_map_create(create_attr->map_type, create_attr->name,
			      create_attr->key_size, create_attr->value_size,
			      create_attr->max_entries, &p);
}

int bpf_create_map_node(enum bpf_map_type map_type, const char *name,
			int key_size, int value_size, int max_entries,
			__u32 map_flags, int node)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts);

	opts.map_flags = map_flags;
	if (node >= 0) {
		opts.numa_node = node;
		opts.map_flags |= BPF_F_NUMA_NODE;
	}

	return bpf_map_create(map_type, name, key_size, value_size, max_entries, &opts);
}

int bpf_create_map(enum bpf_map_type map_type, int key_size,
		   int value_size, int max_entries, __u32 map_flags)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = map_flags);

	return bpf_map_create(map_type, NULL, key_size, value_size, max_entries, &opts);
}

int bpf_create_map_name(enum bpf_map_type map_type, const char *name,
			int key_size, int value_size, int max_entries,
			__u32 map_flags)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = map_flags);

	return bpf_map_create(map_type, name, key_size, value_size, max_entries, &opts);
}

int bpf_create_map_in_map_node(enum bpf_map_type map_type, const char *name,
			       int key_size, int inner_map_fd, int max_entries,
			       __u32 map_flags, int node)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts);

	opts.inner_map_fd = inner_map_fd;
	opts.map_flags = map_flags;
	if (node >= 0) {
		opts.map_flags |= BPF_F_NUMA_NODE;
		opts.numa_node = node;
	}

	/* outer map values hold a 4-byte inner map fd/id, hence value_size == 4 */
	return bpf_map_create(map_type, name, key_size, 4, max_entries, &opts);
}

int bpf_create_map_in_map(enum bpf_map_type map_type, const char *name,
			  int key_size, int inner_map_fd, int max_entries,
			  __u32 map_flags)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.inner_map_fd = inner_map_fd,
		.map_flags = map_flags,
	);

	return bpf_map_create(map_type, name, key_size, 4, max_entries, &opts);
}
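
/* Each of the caller's @cnt records is @actual_rec_size bytes, but the
 * kernel only understands the first @expected_rec_size bytes of that
 * layout; re-copy the records with the tail of each one zeroed out so
 * the kernel will accept them.
 */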
static void *
alloc_zero_tailing_info(const void *orecord, __u32 cnt,
			__u32 actual_rec_size, __u32 expected_rec_size)
{
	__u64 info_len = (__u64)actual_rec_size * cnt;
	void *info, *nrecord;
	int i;

	info = malloc(info_len);
	if (!info)
		return NULL;

	/* zero out bytes kernel does not understand */
	nrecord = info;
	for (i = 0; i < cnt; i++) {
		memcpy(nrecord, orecord, expected_rec_size);
		memset(nrecord + expected_rec_size, 0,
		       actual_rec_size - expected_rec_size);
		orecord += actual_rec_size;
		nrecord += actual_rec_size;
	}

	return info;
}
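
/* The historical bpf_prog_load() was a high-level API unrelated to the
 * BPF_PROG_LOAD command; this low-level wrapper takes over the name as the
 * default versioned symbol starting with libbpf 0.6.0, while binaries
 * linked against older libbpf keep resolving the deprecated variant.
 */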
DEFAULT_VERSION(bpf_prog_load_v0_6_0, bpf_prog_load, LIBBPF_0.6.0)
int bpf_prog_load_v0_6_0(enum bpf_prog_type prog_type,
			 const char *prog_name, const char *license,
			 const struct bpf_insn *insns, size_t insn_cnt,
			 const struct bpf_prog_load_opts *opts)
|
2015-07-01 10:14:06 +08:00
|
|
|
{
|
2018-12-08 08:42:31 +08:00
|
|
|
void *finfo = NULL, *linfo = NULL;
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
const char *func_info, *line_info;
|
|
|
|
__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
|
|
|
|
__u32 func_info_rec_size, line_info_rec_size;
|
|
|
|
int fd, attempts;
|
2015-07-01 10:14:06 +08:00
|
|
|
union bpf_attr attr;
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
char *log_buf;
|
2018-03-31 06:08:01 +08:00
|
|
|
|
libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF
The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
one of the first extremely frustrating gotchas that all new BPF users go
through and in some cases have to learn it a very hard way.
Luckily, starting with upstream Linux kernel version 5.11, BPF subsystem
dropped the dependency on memlock and uses memcg-based memory accounting
instead. Unfortunately, detecting memcg-based BPF memory accounting is
far from trivial (as can be evidenced by this patch), so in practice
most BPF applications still do unconditional RLIMIT_MEMLOCK increase.
As we move towards libbpf 1.0, it would be good to allow users to forget
about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment
automatically. This patch paves the way forward in this matter. Libbpf
will do feature detection of memcg-based accounting, and if detected,
will do nothing. But if the kernel is too old, just like BCC, libbpf
will automatically increase RLIMIT_MEMLOCK on behalf of user
application ([0]).
As this is technically a breaking change, during the transition period
applications have to opt into libbpf 1.0 mode by setting
LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
libbpf_set_strict_mode().
Libbpf allows to control the exact amount of set RLIMIT_MEMLOCK limit
with libbpf_set_memlock_rlim_max() API. Passing 0 will make libbpf do
nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to be
called before the first bpf_prog_load(), bpf_btf_load(), or
bpf_object__load() call, otherwise it has no effect and will return
-EBUSY.
[0] Closes: https://github.com/libbpf/libbpf/issues/369
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
2021-12-15 03:59:03 +08:00
|
|
|
bump_rlimit_memlock();
|
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
if (!OPTS_VALID(opts, bpf_prog_load_opts))
|
2021-05-25 11:59:33 +08:00
|
|
|
return libbpf_err(-EINVAL);
|
tools/bpf: add log_level to bpf_load_program_attr
The kernel verifier has three levels of logs:
0: no logs
1: logs mostly useful
> 1: verbose
Current libbpf API functions bpf_load_program_xattr() and
bpf_load_program() cannot specify log_level.
The bcc, however, provides an interface for user to
specify log_level 2 for verbose output.
This patch added log_level into structure
bpf_load_program_attr, so users, including bcc, can use
bpf_load_program_xattr() to change log_level. The
supported log_level is 0, 1, and 2.
The bpf selftest test_sock.c is modified to enable log_level = 2.
If the "verbose" in test_sock.c is changed to true,
the test will output logs like below:
$ ./test_sock
func#0 @0
0: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
0: (bf) r6 = r1
1: R1=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
1: (61) r7 = *(u32 *)(r6 +28)
invalid bpf_context access off=28 size=4
Test case: bind4 load with invalid access: src_ip6 .. [PASS]
...
Test case: bind6 allow all .. [PASS]
Summary: 16 PASSED, 0 FAILED
Some test_sock tests are negative tests and verbose verifier
log will be printed out as shown in the above.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-08 01:34:51 +08:00
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
attempts = OPTS_GET(opts, attempts, 0);
|
|
|
|
if (attempts < 0)
|
2021-05-25 11:59:33 +08:00
|
|
|
return libbpf_err(-EINVAL);
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
if (attempts == 0)
|
|
|
|
attempts = PROG_LOAD_ATTEMPTS;
|
2018-03-31 06:08:01 +08:00
|
|
|
|
2019-02-14 02:25:53 +08:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2020-12-04 04:46:31 +08:00
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
attr.prog_type = prog_type;
|
|
|
|
attr.expected_attach_type = OPTS_GET(opts, expected_attach_type, 0);
|
2020-12-04 04:46:31 +08:00
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-04 06:08:36 +08:00
|
|
|
attr.prog_btf_fd = OPTS_GET(opts, prog_btf_fd, 0);
|
|
|
|
attr.prog_flags = OPTS_GET(opts, prog_flags, 0);
|
|
|
|
attr.prog_ifindex = OPTS_GET(opts, prog_ifindex, 0);
|
|
|
|
attr.kern_version = OPTS_GET(opts, kern_version, 0);
|
2020-12-04 04:46:31 +08:00
	if (prog_name)
		libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name));
	attr.license = ptr_to_u64(license);

tools/bpf: add log_level to bpf_load_program_attr

The kernel verifier has three levels of logs:
  0: no logs
  1: logs mostly useful
  > 1: verbose

The current libbpf API functions bpf_load_program_xattr() and
bpf_load_program() cannot specify log_level. bcc, however, provides an
interface for users to specify log_level 2 for verbose output.

This patch adds log_level to struct bpf_load_program_attr, so users,
including bcc, can use bpf_load_program_xattr() to change the log_level.
The supported log_level values are 0, 1, and 2.

The bpf selftest test_sock.c is modified to enable log_level = 2. If
"verbose" in test_sock.c is changed to true, the test will output logs
like below:

  $ ./test_sock
  func#0 @0
  0: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  0: (bf) r6 = r1
  1: R1=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  1: (61) r7 = *(u32 *)(r6 +28)
  invalid bpf_context access off=28 size=4
  Test case: bind4 load with invalid access: src_ip6 .. [PASS]
  ...
  Test case: bind6 allow all .. [PASS]
  Summary: 16 PASSED, 0 FAILED

Some test_sock tests are negative tests, so the verbose verifier log is
printed out as shown above.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-08 01:34:51 +08:00
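A minimal caller-side sketch of the log_level usage described above (the
two-instruction program and buffer size are illustrative; BPF_MOV64_IMM() and
BPF_EXIT_INSN() are the instruction-building macros from
tools/include/linux/filter.h):

	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),	/* r0 = 0 */
		BPF_EXIT_INSN(),
	};
	struct bpf_load_program_attr attr = {
		.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
		.insns = insns,
		.insns_cnt = 2,
		.license = "GPL",
		.log_level = 2,		/* verbose verifier output */
	};
	char log[65536];
	int fd = bpf_load_program_xattr(&attr, log, sizeof(log));

	if (fd < 0)
		fprintf(stderr, "%s\n", log);	/* log explains the rejection */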
	if (insn_cnt > UINT_MAX)
		return libbpf_err(-E2BIG);

	attr.insns = ptr_to_u64(insns);
	attr.insn_cnt = (__u32)insn_cnt;

	attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
	attach_btf_obj_fd = OPTS_GET(opts, attach_btf_obj_fd, 0);

	/* these two are mutually exclusive ways to specify the attach target */
	if (attach_prog_fd && attach_btf_obj_fd)
		return libbpf_err(-EINVAL);

	attr.attach_btf_id = OPTS_GET(opts, attach_btf_id, 0);
	if (attach_prog_fd)
		attr.attach_prog_fd = attach_prog_fd;
	else
		attr.attach_btf_obj_fd = attach_btf_obj_fd;

	log_buf = OPTS_GET(opts, log_buf, NULL);
	log_size = OPTS_GET(opts, log_size, 0);
	log_level = OPTS_GET(opts, log_level, 0);

	if (!!log_buf != !!log_size)
		return libbpf_err(-EINVAL);
	/* log_level is a bit mask: only bits 1, 2 (verbose), and 4 (verifier
	 * stats) are defined, so anything above (4 | 2 | 1) is invalid
	 */
	if (log_level > (4 | 2 | 1))
		return libbpf_err(-EINVAL);
	if (log_level && !log_buf)
		return libbpf_err(-EINVAL);

	func_info_rec_size = OPTS_GET(opts, func_info_rec_size, 0);
	func_info = OPTS_GET(opts, func_info, NULL);
	attr.func_info_rec_size = func_info_rec_size;
	attr.func_info = ptr_to_u64(func_info);
	attr.func_info_cnt = OPTS_GET(opts, func_info_cnt, 0);

	line_info_rec_size = OPTS_GET(opts, line_info_rec_size, 0);
	line_info = OPTS_GET(opts, line_info, NULL);
	attr.line_info_rec_size = line_info_rec_size;
	attr.line_info = ptr_to_u64(line_info);
	attr.line_info_cnt = OPTS_GET(opts, line_info_cnt, 0);

	attr.fd_array = ptr_to_u64(OPTS_GET(opts, fd_array, NULL));

	if (log_level) {
		attr.log_buf = ptr_to_u64(log_buf);
		attr.log_size = log_size;
		attr.log_level = log_level;
	}

	fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
	if (fd >= 0)
		return fd;

	/* After bpf_prog_load, the kernel may modify certain attributes
	 * to give user space a hint how to deal with loading failure.
	 * Check to see whether we can make some changes and load again.
	 */
	while (errno == E2BIG && (!finfo || !linfo)) {
		if (!finfo && attr.func_info_cnt &&
		    attr.func_info_rec_size < func_info_rec_size) {
			/* try with corrected func info records */
			finfo = alloc_zero_tailing_info(func_info,
							attr.func_info_cnt,
							func_info_rec_size,
							attr.func_info_rec_size);
			if (!finfo) {
				errno = E2BIG;
				goto done;
			}

			attr.func_info = ptr_to_u64(finfo);
			attr.func_info_rec_size = func_info_rec_size;
		} else if (!linfo && attr.line_info_cnt &&
			   attr.line_info_rec_size < line_info_rec_size) {
			linfo = alloc_zero_tailing_info(line_info,
							attr.line_info_cnt,
							line_info_rec_size,
							attr.line_info_rec_size);
			if (!linfo) {
				errno = E2BIG;
				goto done;
			}

			attr.line_info = ptr_to_u64(linfo);
			attr.line_info_rec_size = line_info_rec_size;
		} else {
			break;
		}

		fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
		if (fd >= 0)
			goto done;
	}

	if (log_level == 0 && log_buf) {
		/* log_level == 0 with non-NULL log_buf requires retrying on error
		 * with log_level == 1 and log_buf/log_buf_size set, to get details of
		 * failure
		 */
		attr.log_buf = ptr_to_u64(log_buf);
		attr.log_size = log_size;
		attr.log_level = 1;

		fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
	}
done:
	/* free() doesn't affect errno, so we don't need to restore it */
	free(finfo);
	free(linfo);
	return libbpf_err_errno(fd);
}
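For comparison with the wrappers that follow, here is a minimal caller-side
sketch of the unified bpf_prog_load() API implemented above (the program,
name, and buffer size are illustrative):

	char log_buf[4096];
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.log_buf = log_buf,
		.log_size = sizeof(log_buf),
		.log_level = 1,
	);
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),	/* r0 = 0 */
		BPF_EXIT_INSN(),
	};
	int prog_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "example", "GPL",
				    insns, 2, &opts);

	if (prog_fd < 0)
		fprintf(stderr, "load failed: %s\n%s\n", strerror(errno), log_buf);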
/* bpf_load_program_xattr() is kept as an ABI-level alias of the internal
 * bpf_load_program_xattr2() implementation defined below.
 */
__attribute__((alias("bpf_load_program_xattr2")))
int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
			   char *log_buf, size_t log_buf_sz);

static int bpf_load_program_xattr2(const struct bpf_load_program_attr *load_attr,
				   char *log_buf, size_t log_buf_sz)
{
	LIBBPF_OPTS(bpf_prog_load_opts, p);

	if (!load_attr || !log_buf != !log_buf_sz)
		return libbpf_err(-EINVAL);

	p.expected_attach_type = load_attr->expected_attach_type;
	switch (load_attr->prog_type) {
	case BPF_PROG_TYPE_STRUCT_OPS:
	case BPF_PROG_TYPE_LSM:
		p.attach_btf_id = load_attr->attach_btf_id;
		break;
	case BPF_PROG_TYPE_TRACING:
	case BPF_PROG_TYPE_EXT:
		p.attach_btf_id = load_attr->attach_btf_id;
		p.attach_prog_fd = load_attr->attach_prog_fd;
		break;
	default:
		p.prog_ifindex = load_attr->prog_ifindex;
		p.kern_version = load_attr->kern_version;
	}
	p.log_level = load_attr->log_level;
	p.log_buf = log_buf;
        p.log_size = log_buf_sz;
        p.prog_btf_fd = load_attr->prog_btf_fd;
        p.func_info_rec_size = load_attr->func_info_rec_size;
        p.func_info_cnt = load_attr->func_info_cnt;
        p.func_info = load_attr->func_info;
        p.line_info_rec_size = load_attr->line_info_rec_size;
        p.line_info_cnt = load_attr->line_info_cnt;
        p.line_info = load_attr->line_info;
        p.prog_flags = load_attr->prog_flags;
        return bpf_prog_load(load_attr->prog_type, load_attr->name, load_attr->license,
                             load_attr->insns, load_attr->insns_cnt, &p);
}
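For reference, a minimal caller-side sketch of the new OPTS-based API described
above; the two-instruction "return 0" program, the log buffer size, and the
function name load_trivial_prog are illustrative, not part of libbpf:

static int load_trivial_prog(void)
{
        /* hand-encoded "r0 = 0; exit" program */
        const struct bpf_insn insns[] = {
                { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
                { .code = BPF_JMP | BPF_EXIT },
        };
        static char log[4096];
        DECLARE_LIBBPF_OPTS(bpf_prog_load_opts, opts,
                .log_buf = log,
                .log_size = sizeof(log),
                .log_level = 1,         /* ask the verifier for a log on failure */
        );

        /* mandatory parameters are positional; everything optional lives in opts */
        return bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "trivial", "GPL",
                             insns, sizeof(insns) / sizeof(insns[0]), &opts);
}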
int bpf_load_program(enum bpf_prog_type type, const struct bpf_insn *insns,
                     size_t insns_cnt, const char *license,
                     __u32 kern_version, char *log_buf,
                     size_t log_buf_sz)
{
        struct bpf_load_program_attr load_attr;

        memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
        load_attr.prog_type = type;
        load_attr.expected_attach_type = 0;
        load_attr.name = NULL;
        load_attr.insns = insns;
        load_attr.insns_cnt = insns_cnt;
        load_attr.license = license;
        load_attr.kern_version = kern_version;

        return bpf_load_program_xattr2(&load_attr, log_buf, log_buf_sz);
}
int bpf_verify_program(enum bpf_prog_type type, const struct bpf_insn *insns,
                       size_t insns_cnt, __u32 prog_flags, const char *license,
                       __u32 kern_version, char *log_buf, size_t log_buf_sz,
                       int log_level)
{
        union bpf_attr attr;
        int fd;
        bump_rlimit_memlock();

        memset(&attr, 0, sizeof(attr));
        attr.prog_type = type;
        attr.insn_cnt = (__u32)insns_cnt;
        attr.insns = ptr_to_u64(insns);
        attr.license = ptr_to_u64(license);
        attr.log_buf = ptr_to_u64(log_buf);
        attr.log_size = log_buf_sz;
        attr.log_level = log_level;
        log_buf[0] = 0;
        attr.kern_version = kern_version;
        attr.prog_flags = prog_flags;

        fd = sys_bpf_prog_load(&attr, sizeof(attr), PROG_LOAD_ATTEMPTS);
        return libbpf_err_errno(fd);
}
int bpf_map_update_elem(int fd, const void *key, const void *value,
                        __u64 flags)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.value = ptr_to_u64(value);
        attr.flags = flags;

        ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_map_lookup_elem(int fd, const void *key, void *value)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.value = ptr_to_u64(value);

        ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
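Taken together, bpf_map_update_elem() and bpf_map_lookup_elem() form the basic
single-element write/read pair. A usage sketch, assuming a map with __u32 keys
and __u64 values created elsewhere (the helper name is illustrative):

static int update_then_lookup(int map_fd)
{
        __u32 key = 0;
        __u64 val = 42, out = 0;
        int err;

        err = bpf_map_update_elem(map_fd, &key, &val, BPF_ANY /* create or update */);
        if (err)
                return err;     /* already -errno, courtesy of libbpf_err_errno() */

        err = bpf_map_lookup_elem(map_fd, &key, &out);
        if (err)
                return err;

        return out == val ? 0 : -EINVAL;
}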
int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.value = ptr_to_u64(value);
        attr.flags = flags;

        ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.value = ptr_to_u64(value);

        ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.value = ptr_to_u64(value);
        attr.flags = flags;

        ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_map_delete_elem(int fd, const void *key)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);

        ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_map_get_next_key(int fd, const void *key, void *next_key)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;
        attr.key = ptr_to_u64(key);
        attr.next_key = ptr_to_u64(next_key);

        ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
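bpf_map_get_next_key() is the building block for map iteration: a NULL current
key yields the first key, and -ENOENT signals that the last key was reached. A
sketch of the conventional loop, assuming __u32 keys (helper name illustrative):

static int for_each_key(int map_fd)
{
        __u32 key, next_key;
        __u32 *cur = NULL;      /* NULL asks for the first key */
        int err;

        while (!(err = bpf_map_get_next_key(map_fd, cur, &next_key))) {
                /* ... consume next_key here ... */
                key = next_key;
                cur = &key;
        }
        return err == -ENOENT ? 0 : err;        /* -ENOENT is the normal end */
}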
bpf, libbpf: support global data/bss/rodata sections
This work adds BPF loader support for global data sections
to libbpf. This allows writing BPF programs in a more natural,
C-like way, by making it possible to define global variables and
const data.
Back at LPC 2018 [0] we presented a first prototype which
implemented support for global data sections by extending the BPF
syscall, where union bpf_attr would get an additional memory/size
pair for each section passed during prog load, in order to later
add this base address into the ldimm64 instruction along with
the user-provided offset when accessing a variable. The consensus
from LPC was that for proper upstream support, it would be
more desirable to use maps instead of a bpf_attr extension, as
this would allow for introspection of these sections as well
as potential live updates of their content. This work follows
that path by taking the following steps on the loader side:
1) In the bpf_object__elf_collect() step we pick up ".data",
".rodata", and ".bss" section information.
2) If present, in bpf_object__init_internal_map() we add maps
to the obj's map array corresponding to each of the present
sections. Since section size and access properties can differ,
a single-entry array map is created with a value size
corresponding to the ELF section size of .data, .bss or
.rodata. These internal maps are integrated into the normal map
handling of libbpf so that, when a user traverses all obj maps,
they can be differentiated from user-created ones via
bpf_map__is_internal(). In later steps, when we actually create
these maps in the kernel via bpf_object__create_maps(), the
content of the .data and .rodata sections is copied into the
maps through bpf_map_update_elem(). For .bss this is not
necessary, since an array map is already zero-initialized by
default. Additionally, the .rodata map is frozen as read-only
after setup, so that writes are possible neither from the
program nor from the syscall side.
3) In the bpf_program__collect_reloc() step, we record the
corresponding map, insn index, and relocation type for the
global data.
4) And last but not least, in the actual relocation step in
bpf_program__relocate(), we mark the ldimm64 instruction with
src_reg = BPF_PSEUDO_MAP_VALUE, where the first imm field
stores the map's file descriptor (similarly to what is done for
BPF_PSEUDO_MAP_FD) and the second imm field (as ldimm64 is
2 insns wide) stores the access offset into the section. Since
these maps have only a single element, ldimm64's off remains
zero in both parts.
5) On the kernel side, this specially marked BPF_PSEUDO_MAP_VALUE
load will then store the actual target address, that is, the
map value base address + offset, in order to have
'map-lookup'-free access. The destination register in the
verifier is then marked as PTR_TO_MAP_VALUE, containing the
fixed offset as reg->off and the backing BPF map as
reg->map_ptr. Meaning, it's treated as any other normal map
value from the verification side, only with efficient, direct
value access instead of an actual call to the map lookup helper
as in the typical case.
Currently, only support for static global variables has been
added, and libbpf rejects non-static global variables from
loading. This restriction can be lifted once we have proper
semantics for how BPF will treat multi-object BPF loads. On the
BTF side, libbpf sets the value type id of the types
corresponding to the ".bss", ".data" and ".rodata" names, which
LLVM emits without the object name prefix. The key type is left
as zero, thus making use of the key-less BTF option in array
maps.
A simple example dump of a program using global vars in each
section:
# bpftool prog
[...]
6784: sched_cls name load_static_dat tag a7e1291567277844 gpl
loaded_at 2019-03-11T15:39:34+0000 uid 0
xlated 1776B jited 993B memlock 4096B map_ids 2238,2237,2235,2236,2239,2240
# bpftool map show id 2237
2237: array name test_glo.bss flags 0x0
key 4B value 64B max_entries 1 memlock 4096B
# bpftool map show id 2235
2235: array name test_glo.data flags 0x0
key 4B value 64B max_entries 1 memlock 4096B
# bpftool map show id 2236
2236: array name test_glo.rodata flags 0x80
key 4B value 96B max_entries 1 memlock 4096B
# bpftool prog dump xlated id 6784
int load_static_data(struct __sk_buff * skb):
; int load_static_data(struct __sk_buff *skb)
0: (b7) r6 = 0
; test_reloc(number, 0, &num0);
1: (63) *(u32 *)(r10 -4) = r6
2: (bf) r2 = r10
; int load_static_data(struct __sk_buff *skb)
3: (07) r2 += -4
; test_reloc(number, 0, &num0);
4: (18) r1 = map[id:2238]
6: (18) r3 = map[id:2237][0]+0 <-- direct addr in .bss area
8: (b7) r4 = 0
9: (85) call array_map_update_elem#100464
10: (b7) r1 = 1
; test_reloc(number, 1, &num1);
[...]
; test_reloc(string, 2, str2);
120: (18) r8 = map[id:2237][0]+16 <-- same here at offset +16
122: (18) r1 = map[id:2239]
124: (18) r3 = map[id:2237][0]+16
126: (b7) r4 = 0
127: (85) call array_map_update_elem#100464
128: (b7) r1 = 120
; str1[5] = 'x';
129: (73) *(u8 *)(r9 +5) = r1
; test_reloc(string, 3, str1);
130: (b7) r1 = 3
131: (63) *(u32 *)(r10 -4) = r1
132: (b7) r9 = 3
133: (bf) r2 = r10
; int load_static_data(struct __sk_buff *skb)
134: (07) r2 += -4
; test_reloc(string, 3, str1);
135: (18) r1 = map[id:2239]
137: (18) r3 = map[id:2235][0]+16 <-- direct addr in .data area
139: (b7) r4 = 0
140: (85) call array_map_update_elem#100464
141: (b7) r1 = 111
; __builtin_memcpy(&str2[2], "hello", sizeof("hello"));
142: (73) *(u8 *)(r8 +6) = r1 <-- further access based on .bss data
143: (b7) r1 = 108
144: (73) *(u8 *)(r8 +5) = r1
[...]
For the Cilium use case in particular, this enables migrating
configuration constants from the Cilium daemon's generated header
defines into global data sections, so that expensive runtime
recompilations with LLVM can be avoided altogether. Instead, the
ELF file effectively becomes a "template": it is compiled only
once (!), and the Cilium daemon then rewrites the relevant
configuration data directly in the ELF's .data or .rodata
sections instead of recompiling the program. The updated ELF is
then loaded into the kernel and atomically replaces the existing
program in the networking datapath. More info in [0].
Based upon a recent fix in LLVM, commit c0db6b6bd444 ("[BPF]
Don't fail for static variables").
[0] LPC 2018, BPF track, "ELF relocation for static data in BPF",
http://vger.kernel.org/lpc-bpf2018.html#session-3
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-10 05:20:13 +08:00
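On the BPF C side, the feature described above amounts to something like the
following sketch (variable names illustrative); each object lands in .bss,
.data or .rodata and is backed by a single-entry array map behind the scenes:

static __u64 num0;                      /* .bss: zero-initialized        */
static __u64 num1 = 42;                 /* .data: writable, preloaded    */
static const char banner[] = "hello";   /* .rodata: frozen read-only map */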
int bpf_map_freeze(int fd)
{
        union bpf_attr attr;
        int ret;
        memset(&attr, 0, sizeof(attr));
        attr.map_fd = fd;

        ret = sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
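What libbpf does internally for .rodata can be emulated by hand with the
wrappers in this file: create a single-entry array map, write the section image
once, then freeze it. A sketch, assuming the old-style bpf_create_map()
creation API from this same file (helper name and sizes illustrative):

static int make_frozen_blob(const void *blob, __u32 blob_sz)
{
        __u32 key = 0;
        int fd, err;

        fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), blob_sz, 1, 0);
        if (fd < 0)
                return fd;

        err = bpf_map_update_elem(fd, &key, blob, BPF_ANY);
        if (!err)
                err = bpf_map_freeze(fd);       /* no more writes, from either side */
        if (err) {
                close(fd);
                return err;
        }
        return fd;
}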
static int bpf_map_batch_common(int cmd, int fd, void *in_batch,
                                void *out_batch, void *keys, void *values,
                                __u32 *count,
                                const struct bpf_map_batch_opts *opts)
{
        union bpf_attr attr;
        int ret;

        if (!OPTS_VALID(opts, bpf_map_batch_opts))
                return libbpf_err(-EINVAL);

        memset(&attr, 0, sizeof(attr));
        attr.batch.map_fd = fd;
        attr.batch.in_batch = ptr_to_u64(in_batch);
        attr.batch.out_batch = ptr_to_u64(out_batch);
        attr.batch.keys = ptr_to_u64(keys);
        attr.batch.values = ptr_to_u64(values);
        attr.batch.count = *count;
        attr.batch.elem_flags = OPTS_GET(opts, elem_flags, 0);
        attr.batch.flags = OPTS_GET(opts, flags, 0);

        ret = sys_bpf(cmd, &attr, sizeof(attr));
        *count = attr.batch.count;

        return libbpf_err_errno(ret);
}
int bpf_map_delete_batch(int fd, const void *keys, __u32 *count,
                         const struct bpf_map_batch_opts *opts)
{
        return bpf_map_batch_common(BPF_MAP_DELETE_BATCH, fd, NULL,
                                    NULL, (void *)keys, NULL, count, opts);
}
int bpf_map_lookup_batch(int fd, void *in_batch, void *out_batch, void *keys,
                         void *values, __u32 *count,
                         const struct bpf_map_batch_opts *opts)
{
        return bpf_map_batch_common(BPF_MAP_LOOKUP_BATCH, fd, in_batch,
                                    out_batch, keys, values, count, opts);
}
int bpf_map_lookup_and_delete_batch(int fd, void *in_batch, void *out_batch,
                                    void *keys, void *values, __u32 *count,
                                    const struct bpf_map_batch_opts *opts)
{
        return bpf_map_batch_common(BPF_MAP_LOOKUP_AND_DELETE_BATCH,
                                    fd, in_batch, out_batch, keys, values,
                                    count, opts);
}
int bpf_map_update_batch(int fd, const void *keys, const void *values, __u32 *count,
                         const struct bpf_map_batch_opts *opts)
{
        return bpf_map_batch_common(BPF_MAP_UPDATE_BATCH, fd, NULL, NULL,
                                    (void *)keys, (void *)values, count, opts);
}
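The batch wrappers move many elements per syscall: out_batch from one call
feeds in_batch of the next, and -ENOENT marks exhaustion (the final, possibly
partial, batch is still returned). A lookup sketch, assuming __u32 keys and
__u64 values (helper name and batch size illustrative):

static int dump_in_batches(int map_fd)
{
        __u32 keys[64], batch, count;
        __u64 vals[64];
        void *in = NULL;        /* NULL in_batch starts from the beginning */
        int err;

        DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);

        do {
                count = 64;     /* in: capacity; out: elements actually returned */
                err = bpf_map_lookup_batch(map_fd, in, &batch, keys, vals,
                                           &count, &opts);
                /* ... consume keys[0..count-1] / vals[0..count-1] ... */
                in = &batch;    /* resume where the previous batch stopped */
        } while (!err);

        return err == -ENOENT ? 0 : err;
}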
int bpf_obj_pin(int fd, const char *pathname)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.pathname = ptr_to_u64((void *)pathname);
        attr.bpf_fd = fd;

        ret = sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_obj_get(const char *pathname)
{
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.pathname = ptr_to_u64((void *)pathname);
libbpf: Ensure that BPF syscall fds are never 0, 1, or 2
Add a simple wrapper for passing an fd and getting a new one >= 3 if it
is one of 0, 1, or 2. There are two primary reasons to make this change:
first, libbpf relies on the assumption that a certain BPF fd is never 0
(e.g. most recently noticed in [0]); second, Alexei pointed out in [1]
that some environments reset stdin, stdout, and stderr if they notice an
invalid fd at these numbers. To protect against both these cases, switch
all internal BPF syscall wrappers in libbpf to always return an fd >= 3.
We only need to modify the syscall wrappers, and not other code that
checks for a valid fd with >= 0, to avoid pointless churn and because
that is still a valid assumption. The cost paid is two additional
syscalls if the fd is in the range [0, 2].
[0]: e31eec77e4ab ("bpf: selftests: Fix fd cleanup in get_branch_snapshot")
[1]: https://lore.kernel.org/bpf/CAADnVQKVKY8o_3aU8Gzke443+uHa-eGoM0h7W4srChMXU1S4Bg@mail.gmail.com
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-5-memxor@gmail.com
2021-10-28 14:34:57 +08:00
        fd = sys_bpf_fd(BPF_OBJ_GET, &attr, sizeof(attr));
        return libbpf_err_errno(fd);
}
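bpf_obj_pin() and bpf_obj_get() give a BPF object a name in the bpffs
filesystem, so its fd can be recovered later or from another process. A
sketch; the pin path is hypothetical and must live on a mounted bpffs:

static int pin_and_reopen(int map_fd)
{
        int err;

        err = bpf_obj_pin(map_fd, "/sys/fs/bpf/my_map");
        if (err)
                return err;     /* e.g. -EEXIST if the path is already pinned */

        return bpf_obj_get("/sys/fs/bpf/my_map");       /* fresh fd, always >= 3 */
}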
int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type,
                    unsigned int flags)
{
        DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
                .flags = flags,
        );

        return bpf_prog_attach_xattr(prog_fd, target_fd, type, &opts);
}
int bpf_prog_attach_xattr(int prog_fd, int target_fd,
                          enum bpf_attach_type type,
                          const struct bpf_prog_attach_opts *opts)
{
        union bpf_attr attr;
        int ret;

        if (!OPTS_VALID(opts, bpf_prog_attach_opts))
                return libbpf_err(-EINVAL);

        memset(&attr, 0, sizeof(attr));
        attr.target_fd = target_fd;
        attr.attach_bpf_fd = prog_fd;
        attr.attach_type = type;
        attr.attach_flags = OPTS_GET(opts, flags, 0);
        attr.replace_bpf_fd = OPTS_GET(opts, replace_prog_fd, 0);

        ret = sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
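The replace_prog_fd option above enables atomic replacement of an
already-attached program. A sketch under the assumption of a cgroup target fd
and programs attached in BPF_F_ALLOW_MULTI mode, per the kernel's cgroup attach
semantics (helper name and attach type illustrative):

static int swap_cgroup_prog(int cgroup_fd, int old_prog_fd, int new_prog_fd)
{
        DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
                .flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE,
                .replace_prog_fd = old_prog_fd,
        );

        return bpf_prog_attach_xattr(new_prog_fd, cgroup_fd,
                                     BPF_CGROUP_INET_INGRESS, &opts);
}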
int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.target_fd = target_fd;
        attr.attach_type = type;

        ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_prog_detach2(int prog_fd, int target_fd, enum bpf_attach_type type)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.target_fd = target_fd;
        attr.attach_bpf_fd = prog_fd;
        attr.attach_type = type;

        ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_link_create(int prog_fd, int target_fd,
                    enum bpf_attach_type attach_type,
                    const struct bpf_link_create_opts *opts)
{
        __u32 target_btf_id, iter_info_len;
        union bpf_attr attr;
        int fd;

        if (!OPTS_VALID(opts, bpf_link_create_opts))
                return libbpf_err(-EINVAL);

        iter_info_len = OPTS_GET(opts, iter_info_len, 0);
        target_btf_id = OPTS_GET(opts, target_btf_id, 0);

        /* validate we don't have unexpected combinations of non-zero fields */
        if (iter_info_len || target_btf_id) {
                if (iter_info_len && target_btf_id)
                        return libbpf_err(-EINVAL);
                if (!OPTS_ZEROED(opts, target_btf_id))
                        return libbpf_err(-EINVAL);
        }

        memset(&attr, 0, sizeof(attr));
        attr.link_create.prog_fd = prog_fd;
        attr.link_create.target_fd = target_fd;
        attr.link_create.attach_type = attach_type;
        attr.link_create.flags = OPTS_GET(opts, flags, 0);

        if (target_btf_id) {
                attr.link_create.target_btf_id = target_btf_id;
                goto proceed;
        }

        switch (attach_type) {
        case BPF_TRACE_ITER:
                attr.link_create.iter_info = ptr_to_u64(OPTS_GET(opts, iter_info, (void *)0));
                attr.link_create.iter_info_len = iter_info_len;
                break;
        case BPF_PERF_EVENT:
                attr.link_create.perf_event.bpf_cookie = OPTS_GET(opts, perf_event.bpf_cookie, 0);
                if (!OPTS_ZEROED(opts, perf_event))
                        return libbpf_err(-EINVAL);
                break;
        default:
                if (!OPTS_ZEROED(opts, flags))
                        return libbpf_err(-EINVAL);
                break;
        }
proceed:
        fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, sizeof(attr));
        return libbpf_err_errno(fd);
}
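A usage sketch for the BPF_PERF_EVENT branch above: attach an already-loaded
program to a perf event fd and tag the attachment with a cookie the program can
read back at runtime (both fds are assumed to come from elsewhere):

static int attach_with_cookie(int prog_fd, int perf_fd, __u64 cookie)
{
        DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts,
                .perf_event.bpf_cookie = cookie,
        );

        return bpf_link_create(prog_fd, perf_fd, BPF_PERF_EVENT, &opts);
}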
int bpf_link_detach(int link_fd)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.link_detach.link_fd = link_fd;

        ret = sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_link_update(int link_fd, int new_prog_fd,
                    const struct bpf_link_update_opts *opts)
{
        union bpf_attr attr;
        int ret;

        if (!OPTS_VALID(opts, bpf_link_update_opts))
                return libbpf_err(-EINVAL);

        memset(&attr, 0, sizeof(attr));
        attr.link_update.link_fd = link_fd;
        attr.link_update.new_prog_fd = new_prog_fd;
        attr.link_update.flags = OPTS_GET(opts, flags, 0);
        attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);

        ret = sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr));
        return libbpf_err_errno(ret);
}
int bpf_iter_create(int link_fd)
{
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.iter_create.link_fd = link_fd;
        fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, sizeof(attr));
        return libbpf_err_errno(fd);
}
int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
                   __u32 *attach_flags, __u32 *prog_ids, __u32 *prog_cnt)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.query.target_fd = target_fd;
        attr.query.attach_type = type;
        attr.query.query_flags = query_flags;
        attr.query.prog_cnt = *prog_cnt;
        attr.query.prog_ids = ptr_to_u64(prog_ids);

        ret = sys_bpf(BPF_PROG_QUERY, &attr, sizeof(attr));

        if (attach_flags)
                *attach_flags = attr.query.attach_flags;
        *prog_cnt = attr.query.prog_cnt;

        return libbpf_err_errno(ret);
}
int bpf_prog_test_run(int prog_fd, int repeat, void *data, __u32 size,
                      void *data_out, __u32 *size_out, __u32 *retval,
                      __u32 *duration)
{
        union bpf_attr attr;
        int ret;

        memset(&attr, 0, sizeof(attr));
        attr.test.prog_fd = prog_fd;
        attr.test.data_in = ptr_to_u64(data);
        attr.test.data_out = ptr_to_u64(data_out);
        attr.test.data_size_in = size;
        attr.test.repeat = repeat;

        ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));

        if (size_out)
                *size_out = attr.test.data_size_out;
        if (retval)
                *retval = attr.test.retval;
        if (duration)
                *duration = attr.test.duration;

        return libbpf_err_errno(ret);
}
int bpf_prog_test_run_xattr(struct bpf_prog_test_run_attr *test_attr)
{
        union bpf_attr attr;
        int ret;

        if (!test_attr->data_out && test_attr->data_size_out > 0)
                return libbpf_err(-EINVAL);

        memset(&attr, 0, sizeof(attr));
        attr.test.prog_fd = test_attr->prog_fd;
        attr.test.data_in = ptr_to_u64(test_attr->data_in);
        attr.test.data_out = ptr_to_u64(test_attr->data_out);
        attr.test.data_size_in = test_attr->data_size_in;
        attr.test.data_size_out = test_attr->data_size_out;
        attr.test.ctx_in = ptr_to_u64(test_attr->ctx_in);
        attr.test.ctx_out = ptr_to_u64(test_attr->ctx_out);
        attr.test.ctx_size_in = test_attr->ctx_size_in;
        attr.test.ctx_size_out = test_attr->ctx_size_out;
        attr.test.repeat = test_attr->repeat;

        ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));

        test_attr->data_size_out = attr.test.data_size_out;
        test_attr->ctx_size_out = attr.test.ctx_size_out;
        test_attr->retval = attr.test.retval;
        test_attr->duration = attr.test.duration;

        return libbpf_err_errno(ret);
}
int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
{
        union bpf_attr attr;
        int ret;

        if (!OPTS_VALID(opts, bpf_test_run_opts))
                return libbpf_err(-EINVAL);

        memset(&attr, 0, sizeof(attr));
        attr.test.prog_fd = prog_fd;
        attr.test.cpu = OPTS_GET(opts, cpu, 0);
        attr.test.flags = OPTS_GET(opts, flags, 0);
        attr.test.repeat = OPTS_GET(opts, repeat, 0);
        attr.test.duration = OPTS_GET(opts, duration, 0);
        attr.test.ctx_size_in = OPTS_GET(opts, ctx_size_in, 0);
        attr.test.ctx_size_out = OPTS_GET(opts, ctx_size_out, 0);
        attr.test.data_size_in = OPTS_GET(opts, data_size_in, 0);
        attr.test.data_size_out = OPTS_GET(opts, data_size_out, 0);
        attr.test.ctx_in = ptr_to_u64(OPTS_GET(opts, ctx_in, NULL));
        attr.test.ctx_out = ptr_to_u64(OPTS_GET(opts, ctx_out, NULL));
        attr.test.data_in = ptr_to_u64(OPTS_GET(opts, data_in, NULL));
        attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL));

        ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));

        OPTS_SET(opts, data_size_out, attr.test.data_size_out);
        OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out);
        OPTS_SET(opts, duration, attr.test.duration);
        OPTS_SET(opts, retval, attr.test.retval);

        return libbpf_err_errno(ret);
}
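A usage sketch for the opts-based test-run API above: execute a program once
against a packet buffer and read back its return value (buffer sizes and helper
name illustrative):

static int test_run_once(int prog_fd, void *pkt, __u32 pkt_sz)
{
        char out[1500];
        int err;

        DECLARE_LIBBPF_OPTS(bpf_test_run_opts, topts,
                .data_in = pkt,
                .data_size_in = pkt_sz,
                .data_out = out,
                .data_size_out = sizeof(out),
                .repeat = 1,
        );

        err = bpf_prog_test_run_opts(prog_fd, &topts);
        if (err)
                return err;

        /* retval, duration and data_size_out were written back via OPTS_SET() */
        return topts.retval;
}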
static int bpf_obj_get_next_id(__u32 start_id, __u32 *next_id, int cmd)
{
        union bpf_attr attr;
        int err;

        memset(&attr, 0, sizeof(attr));
        attr.start_id = start_id;

        err = sys_bpf(cmd, &attr, sizeof(attr));
        if (!err)
                *next_id = attr.next_id;

        return libbpf_err_errno(err);
}
int bpf_prog_get_next_id(__u32 start_id, __u32 *next_id)
{
        return bpf_obj_get_next_id(start_id, next_id, BPF_PROG_GET_NEXT_ID);
}

int bpf_map_get_next_id(__u32 start_id, __u32 *next_id)
{
        return bpf_obj_get_next_id(start_id, next_id, BPF_MAP_GET_NEXT_ID);
}

int bpf_btf_get_next_id(__u32 start_id, __u32 *next_id)
{
        return bpf_obj_get_next_id(start_id, next_id, BPF_BTF_GET_NEXT_ID);
}

int bpf_link_get_next_id(__u32 start_id, __u32 *next_id)
{
        return bpf_obj_get_next_id(start_id, next_id, BPF_LINK_GET_NEXT_ID);
}
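Combined with the *_get_fd_by_id() wrappers below, these iterators let a tool
walk every loaded object in the system, bpftool-style. A sketch for programs
(helper name illustrative):

static int walk_all_progs(void)
{
        __u32 id = 0;
        int err, fd;

        while (!(err = bpf_prog_get_next_id(id, &id))) {
                fd = bpf_prog_get_fd_by_id(id); /* can fail if the prog is gone */
                if (fd < 0)
                        continue;
                /* ... inspect the program through its fd ... */
                close(fd);
        }
        return err == -ENOENT ? 0 : err;        /* -ENOENT: no more IDs */
}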
int bpf_prog_get_fd_by_id(__u32 id)
{
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.prog_id = id;
        fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr));
        return libbpf_err_errno(fd);
}
int bpf_map_get_fd_by_id(__u32 id)
{
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.map_id = id;
	fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_btf_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.btf_id = id;

	fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_link_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.link_id = id;

	fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

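/*
 * Illustrative sketch (not part of the original source): enumerating all
 * loaded BPF programs by walking IDs and converting each into an fd.
 * Assumes <stdio.h>; bpf_prog_get_next_id() fails with -ENOENT once the
 * last ID has been handed out, which ends the loop.
 */
static void example_for_each_prog(void)
{
	__u32 id = 0;
	int fd;

	while (bpf_prog_get_next_id(id, &id) == 0) {
		fd = bpf_prog_get_fd_by_id(id);
		if (fd < 0)
			continue; /* program unloaded between the two calls */
		printf("prog id %u -> fd %d\n", id, fd);
		close(fd);
	}
}
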
int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len)
{
	union bpf_attr attr;
	int err;

	memset(&attr, 0, sizeof(attr));
	attr.info.bpf_fd = bpf_fd;
	attr.info.info_len = *info_len;
	attr.info.info = ptr_to_u64(info);

	err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));

	if (!err)
		*info_len = attr.info.info_len;

	return libbpf_err_errno(err);
}

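/*
 * Illustrative sketch (not part of the original source): fetching kernel
 * metadata about a program fd. The kernel fills in at most *info_len bytes
 * and writes back the size it actually produced.
 */
static int example_query_prog_name(int prog_fd)
{
	struct bpf_prog_info info;
	__u32 len = sizeof(info);
	int err;

	memset(&info, 0, sizeof(info));
	err = bpf_obj_get_info_by_fd(prog_fd, &info, &len);
	if (err)
		return err;

	/* info.name now holds the (possibly truncated) program name */
	return 0;
}
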
int bpf_raw_tracepoint_open(const char *name, int prog_fd)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.raw_tracepoint.name = ptr_to_u64(name);
	attr.raw_tracepoint.prog_fd = prog_fd;

	fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

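/*
 * Illustrative sketch (not part of the original source): attaching a loaded
 * BPF_PROG_TYPE_RAW_TRACEPOINT program to the sched_switch tracepoint. The
 * returned fd represents the attachment; closing it detaches the program.
 */
static int example_attach_sched_switch(int prog_fd)
{
	return bpf_raw_tracepoint_open("sched_switch", prog_fd);
}
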
int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_load_opts *opts)
{
	const size_t attr_sz = offsetofend(union bpf_attr, btf_log_level);
	union bpf_attr attr;
	char *log_buf;
	size_t log_size;
	__u32 log_level;
	int fd;

	bump_rlimit_memlock();

	memset(&attr, 0, attr_sz);

	if (!OPTS_VALID(opts, bpf_btf_load_opts))
		return libbpf_err(-EINVAL);

	log_buf = OPTS_GET(opts, log_buf, NULL);
	log_size = OPTS_GET(opts, log_size, 0);
	log_level = OPTS_GET(opts, log_level, 0);

	if (log_size > UINT_MAX)
		return libbpf_err(-EINVAL);
	if (log_size && !log_buf)
		return libbpf_err(-EINVAL);

	attr.btf = ptr_to_u64(btf_data);
	attr.btf_size = btf_size;
	/* log_level == 0 and log_buf != NULL means "try loading without
	 * log_buf, but retry with log_buf and log_level=1 on error", which is
	 * consistent across low-level and high-level BTF and program loading
	 * APIs within libbpf and provides a sensible behavior in practice
	 */
	if (log_level) {
		attr.btf_log_buf = ptr_to_u64(log_buf);
		attr.btf_log_size = (__u32)log_size;
		attr.btf_log_level = log_level;
	}

	fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
	if (fd < 0 && log_buf && log_level == 0) {
		attr.btf_log_buf = ptr_to_u64(log_buf);
		attr.btf_log_size = (__u32)log_size;
		attr.btf_log_level = 1;
		fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
	}
	return libbpf_err_errno(fd);
}

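/*
 * Illustrative sketch (not part of the original source): loading raw BTF
 * bytes with a verifier log buffer, relying on the retry-with-log behavior
 * documented above (log_level == 0 plus a non-NULL log_buf).
 */
static int example_load_btf(const void *data, size_t size)
{
	static char log[64 * 1024];
	LIBBPF_OPTS(bpf_btf_load_opts, opts,
		.log_buf = log,
		.log_size = sizeof(log),
		/* log_level stays 0: log is only filled in on failure */
	);

	return bpf_btf_load(data, size, &opts);
}
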
/* Compatibility wrapper that maps the older log-buffer arguments onto
 * bpf_btf_load(); a failed silent load is retried once with logging on.
 */
int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, bool do_log)
{
	LIBBPF_OPTS(bpf_btf_load_opts, opts);
	int fd;

retry:
	if (do_log && log_buf && log_buf_size) {
		opts.log_buf = log_buf;
		opts.log_size = log_buf_size;
		opts.log_level = 1;
	}

	fd = bpf_btf_load(btf, btf_size, &opts);
	if (fd < 0 && !do_log && log_buf && log_buf_size) {
		do_log = true;
		goto retry;
	}

	return libbpf_err_errno(fd);
}

int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
		      __u32 *prog_id, __u32 *fd_type, __u64 *probe_offset,
		      __u64 *probe_addr)
{
	union bpf_attr attr = {};
	int err;

	attr.task_fd_query.pid = pid;
	attr.task_fd_query.fd = fd;
	attr.task_fd_query.flags = flags;
	attr.task_fd_query.buf = ptr_to_u64(buf);
	attr.task_fd_query.buf_len = *buf_len;

	err = sys_bpf(BPF_TASK_FD_QUERY, &attr, sizeof(attr));

	*buf_len = attr.task_fd_query.buf_len;
	*prog_id = attr.task_fd_query.prog_id;
	*fd_type = attr.task_fd_query.fd_type;
	*probe_offset = attr.task_fd_query.probe_offset;
	*probe_addr = attr.task_fd_query.probe_addr;

	return libbpf_err_errno(err);
}

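/*
 * Illustrative sketch (not part of the original source): asking the kernel
 * what the perf event behind 'target_fd' in task 'pid' is attached to. On
 * success 'buf' receives the tracepoint/probe name and prog_id identifies
 * the attached BPF program.
 */
static int example_query_task_fd(int pid, int target_fd)
{
	char buf[256];
	__u32 buf_len = sizeof(buf), prog_id, fd_type;
	__u64 probe_offset, probe_addr;

	return bpf_task_fd_query(pid, target_fd, 0, buf, &buf_len,
				 &prog_id, &fd_type, &probe_offset,
				 &probe_addr);
}
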
int bpf_enable_stats(enum bpf_stats_type type)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.enable_stats.type = type;

	fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

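/*
 * Illustrative sketch (not part of the original source): enabling kernel
 * run-time and run-count accounting for all BPF programs. The stats stay
 * enabled only while the returned fd is open, so the caller must keep it.
 */
static int example_enable_runtime_stats(void)
{
	return bpf_enable_stats(BPF_STATS_RUN_TIME);
}
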
int bpf_prog_bind_map(int prog_fd, int map_fd,
		      const struct bpf_prog_bind_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_prog_bind_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.prog_bind_map.prog_fd = prog_fd;
	attr.prog_bind_map.map_fd = map_fd;
	attr.prog_bind_map.flags = OPTS_GET(opts, flags, 0);

	ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}
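
/*
 * Illustrative sketch (not part of the original source): binding a map to a
 * program so the map's lifetime follows the program's, even if the program
 * never references the map directly (e.g. a metadata map).
 */
static int example_bind_metadata_map(int prog_fd, int meta_map_fd)
{
	LIBBPF_OPTS(bpf_prog_bind_opts, opts); /* flags left at default 0 */

	return bpf_prog_bind_map(prog_fd, meta_map_fd, &opts);
}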