In the sk_psock_backlog function, for an ingress-direction skb, if no new
data packet arrives after the skb is cached, the cached skb never gets a
chance to be added to the receive queue of the psock. As a result, the
cached skb cannot be received by the upper-layer application. Fix this by
rescheduling the psock work to dispose of the cached skb in the
sk_msg_recvmsg function.
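A minimal sketch of the idea (not the exact upstream diff;
cached_skb_pending() is a hypothetical placeholder for the real check):

  /* After sk_msg_recvmsg() consumes data, requeue the psock work item so
   * an skb cached in sk_psock_backlog() gets another chance to be moved
   * onto the psock's ingress queue.
   */
  if (cached_skb_pending(psock))
          schedule_work(&psock->work);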
Fixes: 604326b41a ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220907071311.60534-1-liujian56@huawei.com
Add one test for sockmap that waits for the redirected socket's send memory.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220823133755.314697-3-liujian56@huawei.com
Andrii Nakryiko says:
====================
A small patch set adding a few usability improvements and features, making
veristat a more convenient tool for work on the BPF verifier:
- patch #2 speeds up and makes stats parsing from the BPF verifier log more
robust;
- patch #3 makes veristat less strict about input object files; veristat
will ignore non-BPF ELF files;
- patch #4 adds a progress log, on by default, so that a user doing
mass-verification is aware that veristat is not stuck;
- patch #5 allows tuning the requested BPF verifier log level, which makes
veristat the simplest way to get a BPF verifier log, especially for
successfully verified programs.
v1->v2:
- don't emit progress in non-table mode, as it breaks CSV output.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add a -l (--log-level) flag to override the default BPF verifier log level.
This only matters in verbose mode, which is the mode in which veristat
emits the verifier log for each processed BPF program.
This is important because for successfully verified BPF programs
log_level 1 is empty, as the BPF verifier truncates all the successfully
verified paths. So -l2 is the only way to actually get the BPF verifier
log in practice. It looks something like this:
[vmuser@archvm bpf]$ sudo ./veristat xdp_tx.bpf.o -vl2
Processing 'xdp_tx.bpf.o'...
PROCESSING xdp_tx.bpf.o/xdp_tx, DURATION US: 19, VERDICT: success, VERIFIER LOG:
func#0 @0
0: R1=ctx(off=0,imm=0) R10=fp0
; return XDP_TX;
0: (b4) w0 = 3 ; R0_w=3
1: (95) exit
verification time 19 usec
stack depth 0
processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
File Program Verdict Duration (us) Total insns Total states Peak states
------------ ------- ------- ------------- ----------- ------------ -----------
xdp_tx.bpf.o xdp_tx success 19 2 0 0
------------ ------- ------- ------------- ----------- ------------ -----------
Done. Processed 1 files, 0 programs. Skipped 1 files, 0 programs.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Emit "Processing <filepath>..." for each BPF object file to be
processed, to show progress. But also add -q (--quiet) flag to silence
such messages. Doing something more clever (like overwriting same output
line) is to cumbersome and easily breakable if there is any other
console output (e.g., errors from libbpf).
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Make veristat ignore non-BPF object files. This allows simpler
mass-verification (e.g., `sudo ./veristat *.bpf.o` in the selftests/bpf
directory). Note that `sudo ./veristat *.o` would also work, but with
selftests' multiple copies of BPF object files (.bpf.o and
.bpf.linked{1,2,3}.o) it's 4x slower.
Also, given that some BPF object files could be incomplete, in the sense
that they are meant to be statically linked into a final BPF object file
(like linked_maps, linked_funcs, linked_vars), note such instances on
stderr, but proceed anyway. This seems like a better trade-off between
silently ignoring a BPF object file and aborting mass-verification
altogether.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Make sure veristat doesn't spend a ridiculous amount of time parsing
verifier stats from the verifier log, especially for very large or
truncated logs (e.g., when the verifier returns -ENOSPC due to a too-small
buffer). For this, parse lines from the end of the log, and only look at
the last 100 lines, where the stats should be, if they are present at all.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
When the attach_prog_fd field was removed in libbpf 1.0 and replaced with
a `long: 0` placeholder, it actually shifted all the subsequent fields by
8 bytes. This is because `long: 0` promises to adjust the next field's
offset to a long-aligned offset. But in this case we were already
long-aligned, as pin_root_path is a pointer. So `long: 0` had no effect,
and thus didn't fill the gap created by the removed attach_prog_fd.
A non-zero bitfield should have been used instead. I validated this using
pahole. Originally the kconfig field was at offset 40. With `long: 0` it's
at offset 32, which is wrong. With this change it's back at offset 40.
While technically libbpf 1.0 is allowed to break backwards
compatibility and applications should have been recompiled against
libbpf 1.0 headers, given how trivial it is to preserve the memory
layout, let's fix this.
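The alignment subtlety is easy to demonstrate in isolation. Below is a
self-contained user-space sketch (field names are borrowed from
bpf_object_open_opts for illustration only; this is not the actual libbpf
struct). On an LP64 target it prints 8 and 16: the zero-width bitfield
leaves the next field's offset unchanged, while a full-width unnamed
bitfield actually reserves the 8 bytes:

  #include <stdio.h>
  #include <stddef.h>

  struct with_zero_pad {
          const char *pin_root_path;
          long: 0;   /* no effect: offset 8 is already long-aligned */
          void *kconfig;
  };

  struct with_sized_pad {
          const char *pin_root_path;
          long: 64;  /* unnamed bitfield reserves the full 8 bytes */
          void *kconfig;
  };

  int main(void)
  {
          printf("%zu %zu\n",
                 offsetof(struct with_zero_pad, kconfig),
                 offsetof(struct with_sized_pad, kconfig));
          return 0;
  }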
Reported-by: Grant Seltzer Richman <grantseltzer@gmail.com>
Fixes: 146bf811f5 ("libbpf: remove most other deprecated high-level APIs")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923230559.666608-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
The cgroup_hierarchical_stats selftest is complicated. It has to be,
because it tests an entire workflow of recording, aggregating, and
dumping cgroup stats. However, some of the complexity is unnecessary.
The test currently enables the memory controller in a cgroup hierarchy,
invokes reclaim, measures the reclaim time, then uses that reclaim time
to test the stats collection and aggregation. We don't need to use such a
complicated stat, as the context in which the stat is collected is
orthogonal.
Simplify the test by using a simple stat instead of reclaim time: the
total number of times a process has ever entered a cgroup. This makes
the test simpler and removes the dependency on the memory controller and
the memory reclaim interface.
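A sketch of what such a simple stat could look like on the BPF side
(illustrative only, not the actual selftest; the map sizing and the
kn->id key are assumptions):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* cgroup id -> number of times a task entered that cgroup */
  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, __u64);
          __type(value, __u64);
  } attach_counts SEC(".maps");

  SEC("tp_btf/cgroup_attach_task")
  int BPF_PROG(count_entries, struct cgroup *dst_cgrp, const char *path,
               struct task_struct *task, bool threadgroup)
  {
          __u64 id = dst_cgrp->kn->id;
          __u64 one = 1, *cnt;

          cnt = bpf_map_lookup_elem(&attach_counts, &id);
          if (cnt)
                  __sync_fetch_and_add(cnt, 1);
          else
                  bpf_map_update_elem(&attach_counts, &id, &one, BPF_ANY);
          return 0;
  }

  char _license[] SEC("license") = "GPL";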
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20220919175330.890793-1-yosryahmed@google.com
Hou Tao says:
====================
From: Hou Tao <houtao1@huawei.com>
Hi,
It is just a tiny patch set that aims to fix the resource leaks in
test_maps after a test case succeeds or is skipped. These leaks were
spotted by using the address sanitizer and checking the content of
/proc/$pid/fd. Please see the individual patches for more details.
Change Log:
v2:
* Add the missing header file unistd.h for close() (from kernel-patches/bpf).
The reason why I missed that is that -Werror is removed from the Makefile
when enabling the clang address sanitizer.
v1:
* https://lore.kernel.org/bpf/20220921025855.115463-1-houtao@huaweicloud.com/T/
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Free the created fd or the allocated bpf_object after the test case
succeeds, else there will be resource leaks.
Spotted by using the address sanitizer and checking the content of the
/proc/$pid/fd directory.
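The shape of the added cleanup is roughly this (a hedged sketch; 'fd' and
'obj' stand for whatever the individual test case created):

  #include <unistd.h>     /* for close(), the header the v2 changelog mentions */
  #include <bpf/libbpf.h>

  static void test_cleanup(int fd, struct bpf_object *obj)
  {
          if (fd >= 0)
                  close(fd);
          bpf_object__close(obj);  /* NULL-safe in libbpf */
  }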
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921070035.2016413-3-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Destroy the created skeleton when CONFIG_PREEMPT is off, else there will
be a resource leak.
Fixes: 73b97bc78b ("selftests/bpf: Test concurrent updates on bpf_task_storage_busy")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921070035.2016413-2-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
The added urandom_read shared lib is missing from the list of installed
files, which breaks the urandom_read test after `make install` or `make
gen_tar`.
Add the library to TEST_GEN_FILES. The names in the list do not
contain $(OUTPUT) since it's added by the lib.mk code.
Fixes: 00a0fa2d7d ("selftests/bpf: Add urandom_read shared lib and USDTs")
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920161409.129953-1-ykaliuta@redhat.com
Andrii Nakryiko says:
====================
Add three more critical features to the veristat tool, making it sufficient
for practical work on the BPF verifier:
- CSV output, which allows easier programmatic post-processing of stats;
- building on the CSV output, veristat now supports a comparison mode, in
which two previously captured CSV outputs from veristat are compared with
each other in a convenient form;
- flexible allow/deny filtering using globs for BPF object files and
programs, allowing the user to narrow down the target BPF programs to be
verified.
See individual patches for more details and examples.
v1->v2:
- split out double-free fix into patch #1 (Yonghong);
- fixed typo in verbose flag (Quentin);
- baseline and comparison stats were reversed in output table, fixed that.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add -f (--filter) argument which accepts glob-based filters for
narrowing down what BPF object files and programs within them should be
processed by veristat. This filtering applies both to comparison and
main (verification) mode.
Filter can be of two forms:
- file (object) filter: 'strobemeta*'; in this case all the programs
within matching files are implicitly allowed (or denied, depending on
whether it's a positive or negative rule, see below);
- file and prog filter: 'strobemeta*/*unroll*' will further filter
programs within matching files to only allow those program names that
match the '*unroll*' glob.
As mentioned, filters can be positive (allowlisting) or negative
(denylisting). Negative filters should start with '!': '!strobemeta*'
will deny any file whose basename starts with "strobemeta".
Further, one extra special syntax is supported to allow more convenient
use in practice. Instead of specifying rules on the command line,
veristat allows specifying a file that contains rules, both positive and
negative, one filter per line. This is achieved with the -f @<filepath>
form, where <filepath> points to a text file containing rules (negative
and positive rules can be mixed). For convenience, empty lines and lines
starting with '#' are ignored. This feature is useful for maintaining a
pre-canned list of object files and program names that are tested
repeatedly, allowing a list of rules to be checked in and quickly
specified on the command line.
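For illustration, a hypothetical rules file could look like this (the
glob patterns are made up):

  # used as: ./veristat -f @rules.txt *.bpf.o
  # allow everything from strobemeta object files
  strobemeta*
  # from pyperf files, allow only the *unroll* programs
  pyperf*/*unroll*
  # deny any file whose basename starts with "linked"
  !linked*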
As a demonstration (and a shortcut for the near future), create a small
list of "interesting" BPF object files from selftests/bpf and commit it
as veristat.cfg. It currently includes 73 programs, most of which are
the most complex and largest BPF programs in selftests, as judged by
total verified instruction count and total verifier states.
If there is overlap between positive and negative filters, the negative
filter takes precedence (denylisting is stronger than allowlisting). If
no allow filter is specified, veristat implicitly assumes the '*/*' rule.
If no deny rule is specified, veristat (logically) assumes no negative
filters.
Also note that -f (just like -e and -s) can be specified multiple times
and their effect is cumulative.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add the ability to compare and contrast two veristat runs previously
recorded with veristat using the CSV output format.
When veristat is called with the -C (--compare) flag, it expects exactly
two input files, both in CSV format. The expectation is that they are
outputs of previous veristat runs, but as long as column names and
formats match, it should just work. The first CSV file is designated as
the "baseline" and the second one as the comparison (experiment) data
set. Establishing the baseline matters later when calculating difference
percentages; see below.
Veristat parses these two CSV files and "reconstructs" the verifier
stats (which could be just a subset of all possible stats). File and
program names are mandatory, as they are used as the joining key (these
two "stats" are designated as "key stats" in the code).
Veristat currently enforces that the sets of stats recorded in both CSVs
match exactly, down to the exact order. This is just a simplifying
condition which could be lifted with a bit of additional pre-processing
to reorder stat specs internally, which I didn't bother doing yet.
For all the non-key stats, veristat will output three columns: one for
baseline data, one for comparison data, and one with an absolute and
relative percentage difference. If either the baseline or the comparison
value is missing (that is, the respective CSV file doesn't have a row
with an *exactly* matching file and program name), that value is assumed
to be empty or zero. In such cases relative percentages are forced to
+100% or -100% in the output, for consistency with the typical case.
Veristat's -e (--emit) and -s (--sort) specs still apply, so even if the
CSV contains lots of stats, the user can request to compare only a subset
of them (and specify the desired column order as well). Similarly, both
CSV and human-readable table outputs are honored. Note that the input is
currently always expected to be CSV.
Here's an example shell session, recording data for biosnoop tool on two
different kernels and comparing them afterwards, outputting data in table
format.
# on slightly older production kernel
$ sudo ./veristat biosnoop_bpf.o
File Program Verdict Duration (us) Total insns Total states Peak states
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
biosnoop_bpf.o blk_account_io_merge_bio success 37 24 1 1
biosnoop_bpf.o blk_account_io_start failure 0 0 0 0
biosnoop_bpf.o block_rq_complete success 76 104 6 6
biosnoop_bpf.o block_rq_insert success 83 85 7 7
biosnoop_bpf.o block_rq_issue success 79 85 7 7
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
Done. Processed 1 object files, 5 programs.
$ sudo ./veristat ~/local/tmp/fbcode-bpf-objs/biosnoop_bpf.o -o csv > baseline.csv
$ cat baseline.csv
file_name,prog_name,verdict,duration,total_insns,total_states,peak_states
biosnoop_bpf.o,blk_account_io_merge_bio,success,36,24,1,1
biosnoop_bpf.o,blk_account_io_start,failure,0,0,0,0
biosnoop_bpf.o,block_rq_complete,success,82,104,6,6
biosnoop_bpf.o,block_rq_insert,success,78,85,7,7
biosnoop_bpf.o,block_rq_issue,success,74,85,7,7
# on latest bpf-next kernel
$ sudo ./veristat biosnoop_bpf.o
File Program Verdict Duration (us) Total insns Total states Peak states
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
biosnoop_bpf.o blk_account_io_merge_bio success 31 24 1 1
biosnoop_bpf.o blk_account_io_start failure 0 0 0 0
biosnoop_bpf.o block_rq_complete success 76 104 6 6
biosnoop_bpf.o block_rq_insert success 83 91 7 7
biosnoop_bpf.o block_rq_issue success 74 91 7 7
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
Done. Processed 1 object files, 5 programs.
$ sudo ./veristat biosnoop_bpf.o -o csv > comparison.csv
$ cat comparison.csv
file_name,prog_name,verdict,duration,total_insns,total_states,peak_states
biosnoop_bpf.o,blk_account_io_merge_bio,success,71,24,1,1
biosnoop_bpf.o,blk_account_io_start,failure,0,0,0,0
biosnoop_bpf.o,block_rq_complete,success,82,104,6,6
biosnoop_bpf.o,block_rq_insert,success,83,91,7,7
biosnoop_bpf.o,block_rq_issue,success,87,91,7,7
# now let's compare with human-readable output (note that no sudo needed)
# we also ignore verification duration in this case to shorten the output
$ ./veristat -C baseline.csv comparison.csv -e file,prog,verdict,insns
File Program Verdict (A) Verdict (B) Verdict (DIFF) Total insns (A) Total insns (B) Total insns (DIFF)
-------------- ------------------------ ----------- ----------- -------------- --------------- --------------- ------------------
biosnoop_bpf.o blk_account_io_merge_bio success success MATCH 24 24 +0 (+0.00%)
biosnoop_bpf.o blk_account_io_start failure failure MATCH 0 0 +0 (+100.00%)
biosnoop_bpf.o block_rq_complete success success MATCH 104 104 +0 (+0.00%)
biosnoop_bpf.o block_rq_insert success success MATCH 91 85 -6 (-6.59%)
biosnoop_bpf.o block_rq_issue success success MATCH 91 85 -6 (-6.59%)
-------------- ------------------------ ----------- ----------- -------------- --------------- --------------- ------------------
While this is not a particularly exciting example (it turned out to be
kind of hard to quickly find a nice example with a significant difference
just from a kernel version bump), it should demonstrate the main features.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Teach veristat to output results as a CSV table for easier programmatic
processing. Change what was the --output/-o argument to now be --emit/-e,
and use --output-format/-o <fmt> to specify the output format.
Currently "table" and "csv" are supported, with table being the default.
In CSV output mode veristat uses spec identifiers as column names.
E.g., instead of "Total states" veristat uses "total_states" as the CSV
header name.
Internally veristat recognizes three formats, one of which
(RESFMT_TABLE_CALCLEN) is a special format instructing veristat to only
calculate column widths for table output. This felt a bit cleaner and
more uniform than creating separate functions just for this.
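A sketch of the internal enum this describes (RESFMT_TABLE_CALCLEN is
named above; the other enumerator names are assumed):

  enum resfmt {
          RESFMT_TABLE,          /* human-readable table */
          RESFMT_TABLE_CALCLEN,  /* dry run: only compute column widths */
          RESFMT_CSV,            /* machine-readable CSV */
  };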
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf_object__close(obj) is called twice for BPF object files with a single
BPF program in them. This causes a crash. Fix this by not calling
bpf_object__close() unnecessarily.
Fixes: c8bc5e0509 ("selftests/bpf: Add veristat tool for mass-verifying BPF object files")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi says:
====================
Introduce the bpf_ct_set_nat_info kfunc helper in order to set source and
destination nat addresses/ports in a newly allocated ct entry that has not
yet been inserted into the connection tracking table.
Introduce support for per-parameter trusted args.
Changes since v2:
- use int instead of a pointer for port in the bpf_ct_set_nat_info signature
- modify the KF_TRUSTED_ARGS definition so that the referenced pointer
constraint applies just to PTR_TO_BTF_ID
- drop patch 2/4
Changes since v1:
- enable CONFIG_NF_NAT in tools/testing/selftests/bpf/config
Kumar Kartikeya Dwivedi (1):
bpf: Tweak definition of KF_TRUSTED_ARGS
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Instead of forcing all arguments to be referenced pointers with a non-zero
reg->ref_obj_id, tweak the definition of KF_TRUSTED_ARGS to mean that
only PTR_TO_BTF_ID (and socket types translated to PTR_TO_BTF_ID) have
that constraint, and require their offset to be set to 0.
The rest of the pointer types are also accommodated in this definition of
trusted pointers, but with more relaxed rules regarding offsets.
The inherent meaning of setting this flag is that all kfunc pointer
arguments have a guaranteed lifetime, and kernel object pointers
(PTR_TO_BTF_ID, PTR_TO_CTX) are passed in their unmodified form (with
offset 0). In general, this is not true for PTR_TO_BTF_ID, as it can be
obtained using pointer walks.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/cdede0043c47ed7a357f0a915d16f9ce06a1d589.1663778601.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
For a non-preallocated hash map on an RT kernel, a regular spinlock
instead of a raw spinlock is used for the bucket lock. The reason is that
on an RT kernel memory allocation is forbidden in atomic context and a
regular spinlock is sleepable under RT.
Now that the hash map has been fully converted to use bpf_mem_alloc, and
there will be no synchronous memory allocation for a non-preallocated hash
map, it is safe to always use a raw spinlock for the bucket lock on RT. So
remove the usage of htab_use_raw_lock() and update the comments
accordingly.
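The resulting bucket-lock helper has roughly this shape (a hedged sketch;
the struct and function names are assumed to match the hashtab.c code):

  static inline void htab_lock_bucket(struct bucket *b, unsigned long *pflags)
  {
          unsigned long flags;

          /* always raw: no sleepable allocation can happen under it now */
          raw_spin_lock_irqsave(&b->raw_lock, flags);
          *pflags = flags;
  }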
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921073826.2365800-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
We got a report from syzbot [1] about warnings that were caused by a
bpf program attached to the contention_begin raw tracepoint triggering
the same tracepoint by using the bpf_trace_printk helper, which takes
the trace_printk_lock lock.
Call Trace:
<TASK>
? trace_event_raw_event_bpf_trace_printk+0x5f/0x90
bpf_trace_printk+0x2b/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
__unfreeze_partials+0x5b/0x160
...
This can be reproduced by attaching a bpf program as a raw tracepoint on
the contention_begin tracepoint. The bpf prog calls the bpf_trace_printk
helper. Then, by running perf bench, the spin lock code is forced to
take the slow path and call the contention_begin tracepoint.
Fix this by skipping execution of the bpf program if it's already
running, using the bpf prog 'active' field, which is currently used by
trampoline programs for the same reason.
Move bpf_prog_inc_misses_counter to syscall.c because trampoline.c is
compiled in only for the CONFIG_BPF_JIT option.
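The guard has roughly this shape (a simplified sketch, not the exact
diff; prog->active is the per-CPU counter mentioned above):

  static __always_inline void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
  {
          if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
                  bpf_prog_inc_misses_counter(prog);
                  goto out;
          }
          rcu_read_lock();
          (void) bpf_prog_run(prog, args);
          rcu_read_unlock();
  out:
          this_cpu_dec(*(prog->active));
  }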
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Reported-by: syzbot+2251879aa068ad9c960d@syzkaller.appspotmail.com
[1] https://lore.kernel.org/bpf/YxhFe3EwqchC%2FfYf@krava/T/#t
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220916071914.7156-1-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu says:
====================
One of the desirable features in security is the ability to restrict import
of data to a given system based on data authenticity. If data import can be
restricted, it would be possible to enforce a system-wide policy based on
the signing keys the system owner trusts.
This feature is widely used in the kernel. For example, if the restriction
is enabled, kernel modules can be plugged in only if they are signed with a
key whose public part is in the primary or secondary keyring.
For eBPF, it can be useful as well. For example, it might be useful to
authenticate data an eBPF program makes security decisions on.
After a discussion on the eBPF mailing list, it was decided that the stated
goal should be accomplished by introducing four new kfuncs:
bpf_lookup_user_key() and bpf_lookup_system_key(), for retrieving a keyring
with keys trusted for signature verification, respectively from its serial
and from a pre-determined ID; bpf_key_put(), to release the reference
obtained with the former two kfuncs; and bpf_verify_pkcs7_signature(), for
verifying PKCS#7 signatures.
Other than the key serial, bpf_lookup_user_key() also accepts key lookup
flags that influence the behavior of the lookup. bpf_lookup_system_key()
accepts pre-determined IDs defined in include/linux/verification.h.
bpf_key_put() accepts the new bpf_key structure, introduced to tell whether
the other structure member, a key pointer, is valid or not. The reason is
that verify_pkcs7_signature() also accepts invalid pointers, set with the
pre-determined ID, to select a system-defined keyring. key_put() must be
called only for valid key pointers.
Since the two key lookup functions allocate memory and one increments a key
reference count, they must be used in conjunction with bpf_key_put(). The
latter must be called only if the lookup functions returned a non-NULL
pointer. The verifier denies the execution of eBPF programs that don't
respect this rule.
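A minimal usage sketch of that pairing rule, assuming a sleepable LSM
program and a serial supplied via a global variable:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
  extern void bpf_key_put(struct bpf_key *key) __ksym;

  __u32 monitored_serial;

  SEC("lsm.s/bpf")
  int BPF_PROG(check_key, int cmd, union bpf_attr *attr, unsigned int size)
  {
          struct bpf_key *key;

          key = bpf_lookup_user_key(monitored_serial, 0);
          if (!key)
                  return 0;
          /* ... use the keyring ... */
          bpf_key_put(key);  /* mandatory: the verifier rejects leaked refs */
          return 0;
  }

  char _license[] SEC("license") = "GPL";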
The two key lookup functions should be used as alternatives, depending on
the use case. While bpf_lookup_user_key() provides great flexibility, it
seems suboptimal in terms of security guarantees: even if the eBPF
program is assumed to be trusted, the serial used to obtain the key pointer
might come from untrusted user space, which may not choose one that the
system administrator approves of for enforcing a mandatory policy.
bpf_lookup_system_key() instead provides much stronger guarantees,
especially if the pre-determined ID is not passed by user space but is
hardcoded in the eBPF program, and that program is signed. In this case,
bpf_verify_pkcs7_signature() will always perform signature verification
with a key that the system administrator approves, i.e. the primary,
secondary or platform keyring.
Nevertheless, key permission checks need to be done accurately. Since
bpf_lookup_user_key() cannot determine how a key will be used by other
kfuncs, it has to defer the permission check to the actual kfunc using the
key. It does so by calling lookup_user_key() with KEY_DEFER_PERM_CHECK as
the needed permission. Later, bpf_verify_pkcs7_signature(), if called,
completes the permission check by calling key_validate(). It does not need
to call key_task_permission() with permission KEY_NEED_SEARCH, as it is
already done elsewhere by the key subsystem. Future kfuncs using the
bpf_key structure need to implement the proper checks as well.
Finally, the last kfunc, bpf_verify_pkcs7_signature(), accepts the data and
signature to verify as eBPF dynamic pointers, to minimize the number of
kfunc parameters, and the keyring with keys for signature verification as a
bpf_key structure, returned by one of the two key lookup functions.
bpf_lookup_user_key() and bpf_verify_pkcs7_signature() can be called only
from sleepable programs, because of memory allocation and crypto
operations. For example, the lsm.s/bpf attach point is suitable,
fexit/array_map_update_elem is not.
The correctness of the implementation of the new kfuncs and of their usage
is checked with the introduced tests.
The patch set includes a patch from another author (dependency) for the
sake of completeness. It is organized as follows.
Patch 1 from KP Singh allows kfuncs to be used by LSM programs. Patch 2
exports the bpf_dynptr definition through BTF. Patch 3 splits
is_dynptr_reg_valid_init() and introduces is_dynptr_type_expected(), to
know more precisely the cause of a negative result of a dynamic pointer
check. Patch 4 allows dynamic pointers to be used as kfunc parameters.
Patch 5 exports bpf_dynptr_get_size(), to obtain the real size of data
carried by a dynamic pointer. Patch 6 makes available for new eBPF kfuncs
and programs some key-related definitions. Patch 7 introduces the
bpf_lookup_*_key() and bpf_key_put() kfuncs. Patch 8 introduces the
bpf_verify_pkcs7_signature() kfunc. Patch 9 changes the testing kernel
configuration to compile everything as built-in. Finally, patches 10-13
introduce the tests.
Changelog
v17:
- Remove unnecessary typedefs in test_verify_pkcs7_sig.c (suggested by KP)
- Add patch to export bpf_dynptr through BTF (reported by KP)
- Rename u{8,16,32,64} variables to __u{8,16,32,64} in the tests, for
consistency with other eBPF programs (suggested by Yonghong)
v16:
- Remove comments in include/linux/key.h for KEY_LOOKUP_*
- Change kmalloc() flag from GFP_ATOMIC to GFP_KERNEL in
bpf_lookup_user_key(), as the kfunc needs anyway to be sleepable
(suggested by Kumar)
- Test passing a dynamic pointer with NULL data to
bpf_verify_pkcs7_signature() (suggested by Kumar)
v15:
- Add kfunc_dynptr_param test to deny list for s390x
v14:
- Explain that is_dynptr_type_expected() will be useful also for BTF
(suggested by Joanne)
- Rename KEY_LOOKUP_FLAGS_ALL to KEY_LOOKUP_ALL (suggested by Jarkko)
- Swap declaration of spi and dynptr_type in is_dynptr_type_expected()
(suggested by Joanne)
- Reimplement kfunc dynptr tests with a regular eBPF program instead of
executing them with test_verifier (suggested by Joanne)
- Make key lookup flags as enum so that they are automatically exported
through BTF (suggested by Alexei)
v13:
- Split is_dynptr_reg_valid_init() and introduce is_dynptr_type_expected()
to see if the dynamic pointer type passed as argument to a kfunc is
supported (suggested by Kumar)
- Add forward declaration of struct key in include/linux/bpf.h (suggested
by Song)
- Declare mask for key lookup flags, remove key_lookup_flags_check()
(suggested by Jarkko and KP)
- Allow only certain dynamic pointer types (currently, local) to be passed
as argument to kfuncs (suggested by Kumar)
- For each dynamic pointer parameter in kfunc, additionally check if the
passed pointer is to the stack (suggested by Kumar)
- Split the validity/initialization and dynamic pointer type check also in
the verifier, and adjust the expected error message in the test (a test
for an unexpected dynptr type passed to a helper cannot be added due to
missing suitable helpers, but this case has been tested manually)
- Add verifier tests to check the dynamic pointers passed as argument to
kfuncs (suggested by Kumar)
v12:
- Put lookup_key and verify_pkcs7_sig tests in the deny list for s390x
(the JIT does not support calling kernel functions)
v11:
- Move stringify_struct() macro to include/linux/btf.h (suggested by
Daniel)
- Change kernel configuration options in
tools/testing/selftests/bpf/config* from =m to =y
v10:
- Introduce key_lookup_flags_check() and system_keyring_id_check() inline
functions to check parameters (suggested by KP)
- Fix descriptions and comment of key-related kfuncs (suggested by KP)
- Register kfunc set only once (suggested by Alexei)
- Move needed kernel options to the architecture-independent configuration
for testing
v9:
- Drop patch to introduce KF_SLEEPABLE kfunc flag (already merged)
- Rename valid_ptr member of bpf_key to has_ref (suggested by Daniel)
- Check dynamic pointers in kfunc definition with bpf_dynptr_kern struct
definition instead of string, to detect structure renames (suggested by
Daniel)
- Explicitly say that we permit initialized dynamic pointers in kfunc
definition (suggested by Daniel)
- Remove noinline __weak from kfuncs definition (reported by Daniel)
- Simplify key lookup flags check in bpf_lookup_user_key() (suggested by
Daniel)
- Explain the reason for deferring key permission check (suggested by
Daniel)
- Allocate memory with GFP_ATOMIC in bpf_lookup_system_key(), and remove
KF_SLEEPABLE kfunc flag from kfunc declaration (suggested by Daniel)
- Define only one kfunc set and remove the loop for registration
(suggested by Alexei)
v8:
- Define the new bpf_key structure to carry the key pointer and whether
that pointer is valid or not (suggested by Daniel)
- Drop patch to mark a kfunc parameter with the __maybe_null suffix
- Improve documentation of kfuncs
- Introduce bpf_lookup_system_key() to obtain a key pointer suitable for
verify_pkcs7_signature() (suggested by Daniel)
- Use the new kfunc registration API
- Drop patch to test the __maybe_null suffix
- Add tests for bpf_lookup_system_key()
v7:
- Add support for using dynamic and NULL pointers in kfunc (suggested by
Alexei)
- Add new kfunc-related tests
v6:
- Switch back to key lookup helpers + signature verification (until v5),
and defer permission check from bpf_lookup_user_key() to
bpf_verify_pkcs7_signature()
- Add additional key lookup test to illustrate the usage of the
KEY_LOOKUP_CREATE flag and validate the flags (suggested by Daniel)
- Make description of flags of bpf_lookup_user_key() more user-friendly
(suggested by Daniel)
- Fix validation of flags parameter in bpf_lookup_user_key() (reported by
Daniel)
- Rename bpf_verify_pkcs7_signature() keyring-related parameters to
user_keyring and system_keyring to make their purpose more clear
- Accept keyring-related parameters of bpf_verify_pkcs7_signature() as
alternatives (suggested by KP)
- Replace unsigned long type with u64 in helper declaration (suggested by
Daniel)
- Extend the bpf_verify_pkcs7_signature() test by calling the helper
without data, by ensuring that the helper enforces the keyring-related
parameters as alternatives, by ensuring that the helper rejects
inaccessible and expired keyrings, and by checking all system keyrings
- Move bpf_lookup_user_key() and bpf_key_put() usage tests to
ref_tracking.c (suggested by John)
- Call bpf_lookup_user_key() and bpf_key_put() only in sleepable programs
v5:
- Move KEY_LOOKUP_ to include/linux/key.h
for validation of bpf_verify_pkcs7_signature() parameter
- Remove bpf_lookup_user_key() and bpf_key_put() helpers, and the
corresponding tests
- Replace struct key parameter of bpf_verify_pkcs7_signature() with the
keyring serial and lookup flags
- Call lookup_user_key() and key_put() in bpf_verify_pkcs7_signature()
code, to ensure that the retrieved key is used according to the
permission requested at lookup time
- Clarified keyring precedence in the description of
bpf_verify_pkcs7_signature() (suggested by John)
- Remove newline in the second argument of ASSERT_
- Fix helper prototype regular expression in bpf_doc.py
v4:
- Remove bpf_request_key_by_id(), don't return an invalid pointer that
other helpers can use
- Pass the keyring ID (without ULONG_MAX, suggested by Alexei) to
bpf_verify_pkcs7_signature()
- Introduce bpf_lookup_user_key() and bpf_key_put() helpers (suggested by
Alexei)
- Add lookup_key_norelease test, to ensure that the verifier blocks eBPF
programs which don't decrement the key reference count
- Parse raw PKCS#7 signature instead of module-style signature in the
verify_pkcs7_signature test (suggested by Alexei)
- Parse kernel module in user space and pass raw PKCS#7 signature to the
eBPF program for signature verification
v3:
- Rename bpf_verify_signature() back to bpf_verify_pkcs7_signature() to
avoid managing different parameters for each signature verification
function in one helper (suggested by Daniel)
- Use dynamic pointers and export bpf_dynptr_get_size() (suggested by
Alexei)
- Introduce bpf_request_key_by_id() to give more flexibility to the caller
of bpf_verify_pkcs7_signature() to retrieve the appropriate keyring
(suggested by Alexei)
- Fix test by reordering the gcc command line, always compile sign-file
- Improve helper support check mechanism in the test
v2:
- Rename bpf_verify_pkcs7_signature() to a more generic
bpf_verify_signature() and pass the signature type (suggested by KP)
- Move the helper and prototype declaration under #ifdef so that user
space can probe for support for the helper (suggested by Daniel)
- Describe better the keyring types (suggested by Daniel)
- Include linux/bpf.h instead of vmlinux.h to avoid implicit or
redeclaration
- Make the test selfcontained (suggested by Alexei)
v1:
- Don't define new map flag but introduce simple wrapper of
verify_pkcs7_signature() (suggested by Alexei and KP)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add tests to ensure that only supported dynamic pointer types are accepted,
that the passed argument is actually a dynamic pointer, that the passed
argument is a pointer to the stack, and that bpf_verify_pkcs7_signature()
correctly handles dynamic pointers with data set to NULL.
The tests are currently in the deny list for s390x (the JIT does not
support calling kernel functions).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-14-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Perform several tests to ensure the correct implementation of the
bpf_verify_pkcs7_signature() kfunc.
Do the tests with data signed with a generated testing key (by using
sign-file from scripts/) and with the tcp_bic.ko kernel module if it is
found in the system. The test does not fail if tcp_bic.ko is not found.
First, perform an unsuccessful signature verification without data.
Second, perform a successful signature verification with the session
keyring and a new one created for testing.
Then, ensure that permission and validation checks are done properly on the
keyring provided to bpf_verify_pkcs7_signature(), although those checks were
deferred at the time the keyring was retrieved with bpf_lookup_user_key().
The tests expect to encounter an error if the Search permission is removed
from the keyring, or if the keyring is expired.
Finally, perform a successful and unsuccessful signature verification with
the keyrings with pre-determined IDs (the last test fails because the key
is not in the platform keyring).
The test is currently in the deny list for s390x (the JIT does not support
calling kernel functions).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Link: https://lore.kernel.org/r/20220920075951.929132-13-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add a test to ensure that bpf_lookup_user_key() creates a referenced
special keyring when the KEY_LOOKUP_CREATE flag is passed to this function.
Ensure that the kfunc rejects invalid flags.
Ensure that a keyring can be obtained from bpf_lookup_system_key() when one
of the pre-determined keyring IDs is provided.
The test is currently in the deny list for s390x (the JIT does not support
calling kernel functions).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Link: https://lore.kernel.org/r/20220920075951.929132-12-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add verifier tests for bpf_lookup_*_key() and bpf_key_put(), to ensure that
acquired key references stored in the bpf_key structure are released, that
a non-NULL bpf_key pointer is passed to bpf_key_put(), and that key
references are not leaked.
Also, slightly modify test_verifier.c, to find the BTF ID of the attach
point for the LSM program type (currently, it is done only for TRACING).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-11-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Since the eBPF CI does not support kernel modules, change the kernel config
to compile everything as built-in.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Daniel Müller <deso@posteo.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-10-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules
the ability to check the validity of a signature against supplied data, by
using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF
programs might be allowed to make security decisions only based on data
sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF
dynamic pointers (to minimize the number of parameters) and a bpf_key
structure containing a reference to the keyring with keys trusted for
signature verification, obtained from bpf_lookup_user_key() or
bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function,
bpf_verify_pkcs7_signature() completes the permission check deferred by
that function by calling key_validate(). key_task_permission() is already
called by the PKCS#7 code.
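A hedged sketch of calling the kfunc from a sleepable LSM program (the
data/sig buffers and their lengths are assumed to be filled in elsewhere,
e.g. by user space, and the system keyring ID is illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
  extern void bpf_key_put(struct bpf_key *key) __ksym;
  extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
                                        struct bpf_dynptr *sig_ptr,
                                        struct bpf_key *trusted_keyring) __ksym;

  __u8 data[4096]; __u32 data_len;
  __u8 sig[1024];  __u32 sig_len;

  SEC("lsm.s/bpf")
  int BPF_PROG(verify, int cmd, union bpf_attr *attr, unsigned int size)
  {
          struct bpf_dynptr data_ptr, sig_ptr;
          struct bpf_key *trusted_keyring;
          int ret;

          if (data_len > sizeof(data) || sig_len > sizeof(sig))
                  return 0;

          trusted_keyring = bpf_lookup_system_key(0); /* illustrative ID */
          if (!trusted_keyring)
                  return 0;

          bpf_dynptr_from_mem(data, data_len, 0, &data_ptr);
          bpf_dynptr_from_mem(sig, sig_len, 0, &sig_ptr);
          ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted_keyring);
          bpf_key_put(trusted_keyring);
          return ret;
  }

  char _license[] SEC("license") = "GPL";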
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-9-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put()
kfuncs, to respectively search a key with a given key handle serial number
and flags, obtain a key from a pre-determined ID defined in
include/linux/verification.h, and cleanup.
Introduce system_keyring_id_check() to validate the keyring ID parameter of
bpf_lookup_system_key().
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-8-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF
kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to
validate the kfunc parameters. Add them to enum key_lookup_flag, so that
all the current ones and the ones defined in the future are automatically
exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, with the logical OR of the currently
defined flags as its value, to facilitate checking whether a variable
contains only those flags.
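The resulting enum would look roughly like this (the flag values are the
key subsystem's existing ones; the exact layout in include/linux/key.h
may differ):

  enum key_lookup_flag {
          KEY_LOOKUP_CREATE  = 0x01,
          KEY_LOOKUP_PARTIAL = 0x02,
          KEY_LOOKUP_ALL     = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL),
  };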
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-7-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Export bpf_dynptr_get_size(), so that kernel code dealing with eBPF dynamic
pointers can obtain the real size of data carried by this data structure.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-6-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Allow dynamic pointers (struct bpf_dynptr_kern *) to be specified as
parameters in kfuncs. Also, ensure that dynamic pointers passed as
arguments are valid and initialized, are pointers to the stack, and are of
the local type. More dynamic pointer types can be supported in the future.
To properly detect whether a parameter is of the desired type, introduce
the stringify_struct() macro to compare the returned structure name with
the desired name. In addition, protect against structure renames by
halting the build with BUILD_BUG_ON(), so that developers have to revisit
the code.
To check whether a dynamic pointer passed to the kfunc is valid and
initialized, and whether its type is local, export the existing functions
is_dynptr_reg_valid_init() and is_dynptr_type_expected().
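The macro idea, sketched (assumed shape, not necessarily the exact
definition): emit the struct's name as a string while forcing a
compile-time error if the struct type ever disappears or is renamed:

  #define stringify_struct(x)                     \
          ({ BUILD_BUG_ON(sizeof(struct x) < 0);  \
             __stringify(x); })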
Cc: Joanne Koong <joannelkoong@gmail.com>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-5-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Move dynptr type check to is_dynptr_type_expected() from
is_dynptr_reg_valid_init(), so that callers can better determine the cause
of a negative result (dynamic pointer not valid/initialized, dynamic
pointer of the wrong type). It will be useful for example for BTF, to
restrict which dynamic pointer types can be passed to kfuncs, as initially
only the local type will be supported.
Also, splitting makes the code more readable, since checking the dynamic
pointer type is not necessarily related to validity and initialization.
Split the validity/initialization and dynamic pointer type check also in
the verifier, and adjust the expected error message in the test (a test for
an unexpected dynptr type passed to a helper cannot be added due to missing
suitable helpers, but this case has been tested manually).
Cc: Joanne Koong <joannelkoong@gmail.com>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-4-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
eBPF dynamic pointers are a new feature recently added upstream. A dynamic
pointer binds together a pointer to a memory area and its size. The
internal kernel structure bpf_dynptr_kern is not accessible to eBPF
programs in user space. They instead see bpf_dynptr, which is then
translated to the internal kernel structure by the eBPF verifier.
The problem is that it is not possible to include both the uapi header
linux/bpf.h and the vmlinux BTF header vmlinux.h at the same time, as they
both contain definitions of some structures/enums. The compiler complains
that the structures/enums are redefined.
As bpf_dynptr is defined in the uapi header linux/bpf.h, this makes it
impossible to include vmlinux.h. However, in some cases, e.g. when using
kfuncs, vmlinux.h has to be included. The only option until now was to
include vmlinux.h and copy the definition of bpf_dynptr directly into the
eBPF program source code from linux/bpf.h.
Solve the problem by using the same approach as for bpf_timer (which also
follows the same scheme with the _kern suffix for the internal kernel
structure).
Add the following line in one of the dynamic pointer helpers,
bpf_dynptr_from_mem():
BTF_TYPE_EMIT(struct bpf_dynptr);
Cc: stable@vger.kernel.org
Cc: Joanne Koong <joannelkoong@gmail.com>
Fixes: 97e03f5210 ("bpf: Add verifier support for dynptrs")
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
Tested-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-3-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
In preparation for the addition of new kfuncs, allow kfuncs defined in the
tracing subsystem to be used in LSM programs by mapping the LSM program
type to the TRACING hook.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-2-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Currently, the default vmlinux files at '/boot/vmlinux-*',
'/lib/modules/*/vmlinux-*', etc. are parsed with btf__parse_elf() to
extract BTF. It is possible that these files are actually raw BTF files,
similar to /sys/kernel/btf/vmlinux. So parse these files with
btf__parse(), which tries both the raw format and the ELF format.
This might be useful in some scenarios where users put their custom BTF
into known locations and don't want to specify the btf_custom_path option.
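A user-space sketch of the resulting behavior (the path is illustrative;
btf__parse() autodetects raw BTF vs. ELF):

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/btf.h>

  int main(void)
  {
          struct btf *btf = btf__parse("/sys/kernel/btf/vmlinux", NULL);

          if (!btf) {
                  fprintf(stderr, "parse failed: %d\n", -errno);
                  return 1;
          }
          printf("types: %u\n", btf__type_cnt(btf));
          btf__free(btf);
          return 0;
  }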
Signed-off-by: Tao Chen <chentao.kernel@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/3f59fb5a345d2e4f10e16fe9e35fbc4c03ecaa3e.1662999860.git.chentao.kernel@linux.alibaba.com
It's possible to specify particular tests for test_bpf.ko with
module parameters. Make it possible to pass those module parameters
through the script, for example:
test_kmod.sh test_range=1,3
Since the magnitude tests take a long time, it can be reasonable to skip
them.
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220908120146.381218-1-ykaliuta@redhat.com
Commit 34586d29f8 ("libbpf: Add new BPF_PROG2 macro") added the BPF_PROG2
macro for trampoline-based programs with struct arguments. Andrii
made a few suggestions to improve code quality and description.
This patch implements these suggestions, including a better internal
macro name, a consistent usage pattern for __builtin_choose_expr(), a
simpler macro definition for always-inline func arguments, and a
better macro description.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20220910025214.1536510-1-yhs@fb.com
David Vernet says:
====================
This patch set defines a new map type, BPF_MAP_TYPE_USER_RINGBUF, which
provides single-user-space-producer / single-kernel-consumer semantics over
a ring buffer. Along with the new map type, a helper function called
bpf_user_ringbuf_drain() is added which allows a BPF program to specify a
callback with the following signature, to which samples are posted by the
helper:
void (struct bpf_dynptr *dynptr, void *context);
The program can then use the bpf_dynptr_read() or bpf_dynptr_data() helper
functions to safely read the sample from the dynptr. There are currently no
helpers available to determine the size of the sample, but one could easily
be added if required.
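A hedged sketch of the kernel-consumer side (the map size, sample layout
and attach point are illustrative assumptions):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
          __uint(max_entries, 256 * 1024);
  } user_rb SEC(".maps");

  struct sample { __u64 val; };  /* layout shared with user space */

  static long handle_sample(struct bpf_dynptr *dynptr, void *ctx)
  {
          struct sample s;

          if (bpf_dynptr_read(&s, sizeof(s), dynptr, 0, 0))
                  return 0;  /* sample too small; skip it */
          /* ... act on s.val ... */
          return 0;
  }

  SEC("fentry/do_nanosleep")
  int drain_samples(void *ctx)
  {
          bpf_user_ringbuf_drain(&user_rb, handle_sample, NULL, 0);
          return 0;
  }

  char _license[] SEC("license") = "GPL";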
On the user-space side, libbpf has been updated to export a new
'struct ring_buffer_user' type, along with the following symbols:
struct ring_buffer_user *
ring_buffer_user__new(int map_fd,
const struct ring_buffer_user_opts *opts);
void ring_buffer_user__free(struct ring_buffer_user *rb);
void *ring_buffer_user__reserve(struct ring_buffer_user *rb,
uint32_t size);
void *ring_buffer_user__poll(struct ring_buffer_user *rb, uint32_t size,
int timeout_ms);
void ring_buffer_user__discard(struct ring_buffer_user *rb, void *sample);
void ring_buffer_user__submit(struct ring_buffer_user *rb, void *sample);
These symbols are exported for inclusion in libbpf version 1.0.0.
Signed-off-by: David Vernet <void@manifault.com>
---
v5 -> v6:
- Fixed s/BPF_MAP_TYPE_RINGBUF/BPF_MAP_TYPE_USER_RINGBUF typo in the
libbpf user ringbuf doxygen header comment for ring_buffer_user__new()
(Andrii).
- Specify that pointer returned from ring_buffer_user__reserve() and its
blocking counterpart is 8-byte aligned (Andrii).
- Renamed user_ringbuf__commit() to user_ringbuf_commit(), as it's static
(Andrii).
- Another slight reworking of user_ring_buffer__reserve_blocking() to
remove some extraneous nanosecond variables + checking (Andrii).
- Add a final check of user_ring_buffer__reserve() in
user_ring_buffer__reserve_blocking().
- Moved busy bit lock / unlock logic from __bpf_user_ringbuf_peek() to
bpf_user_ringbuf_drain() (Andrii).
- -ENOSPC -> -ENODATA for an empty ring buffer in
__bpf_user_ringbuf_peek() (Andrii).
- Updated BPF_RB_FORCE_WAKEUP to force a wakeup notification to be
sent even if no sample was drained.
- Changed a bit of the wording in the UAPI header for
bpf_user_ringbuf_drain() to mention the BPF_RB_FORCE_WAKEUP behavior.
- Remove extra space after return in ringbuf_map_poll_user() (Andrii).
- Removed now-extraneous paragraph from the commit summary of patch 2/4
(Andrii).
v4 -> v5:
- DENYLISTed the user-ringbuf test suite on s390x. We have a number of
functions in the progs/user_ringbuf_success.c prog that user-space
fires by invoking a syscall. Not all of these syscalls are available
on s390x. If and when we add the ability to kick the kernel from
user-space, or if we end up using iterators for that per Hao's
suggestion, we could re-enable this test suite on s390x.
- Fixed a few more places that needed ringbuffer -> ring buffer.
v3 -> v4:
- Update BPF_MAX_USER_RINGBUF_SAMPLES to not specify a bit, and instead
just specify a number of samples. (Andrii)
- Update "ringbuffer" in comments and commit summaries to say "ring
buffer". (Andrii)
- Return -E2BIG from bpf_user_ringbuf_drain() both when a sample can't
fit into the ring buffer, and when it can't fit into a dynptr. (Andrii)
- Don't loop over samples in __bpf_user_ringbuf_peek() if a sample was
discarded. Instead, return -EAGAIN so the caller can deal with it. Also
updated the caller to detect -EAGAIN and skip over it when iterating.
(Andrii)
- Removed the heuristic for notifying user-space when a sample is drained,
causing the ring buffer to no longer be full. This may be useful in the
future, but is being removed now because it's strictly a heuristic.
- Re-add BPF_RB_FORCE_WAKEUP flag to bpf_user_ringbuf_drain(). (Andrii)
- Remove helper_allocated_dynptr tracker from verifier. (Andrii)
- Add libbpf function header comments to tools/lib/bpf/libbpf.h, so that
they will be included in rendered libbpf docs. (Andrii)
- Add symbols to a new LIBBPF_1.1.0 section in linker version script,
rather than including them in LIBBPF_1.0.0. (Andrii)
- Remove libbpf_err() calls from static libbpf functions. (Andrii)
- Check user_ring_buffer_opts instead of ring_buffer_opts in
user_ring_buffer__new(). (Andrii)
- Avoid an extra if in the hot path in user_ringbuf__commit(). (Andrii)
- Use ENOSPC rather than ENODATA if no space is available in the ring
buffer. (Andrii)
- Don't round sample size in header to 8, but still round size that is
reserved and written to 8, and validate positions are multiples of 8
(Andrii).
- Use nanoseconds for most calculations in
user_ring_buffer__reserve_blocking(). (Andrii)
- Don't use CHECK() in testcases, instead use ASSERT_*. (Andrii)
- Use SEC("?raw_tp") instead of SEC("?raw_tp/sys_nanosleep") in negative
test. (Andrii)
- Move test_user_ringbuf.h header to live next to BPF program instead of
a directory up from both it and the user-space test program. (Andrii)
- Update bpftool help message / docs to also include user_ringbuf.
v2 -> v3:
- Lots of formatting fixes, such as keeping things on one line if they fit
within 100 characters, and removing some extraneous newlines. Applies
to all diffs in the patch-set. (Andrii)
- Renamed ring_buffer_user__* symbols to user_ring_buffer__*. (Andrii)
- Added a missing smp_mb__before_atomic() in
__bpf_user_ringbuf_sample_release(). (Hao)
- Restructure how and when notification events are sent from the kernel to
the user-space producers via the .map_poll() callback for the
BPF_MAP_TYPE_USER_RINGBUF map. Before, we only sent a notification when
the ringbuffer was fully drained. Now, we guarantee user-space that
we'll send an event at least once per bpf_user_ringbuf_drain(), as long
as at least one sample was drained, and BPF_RB_NO_WAKEUP was not passed.
As a heuristic, we also send a notification event any time a sample being
drained causes the ringbuffer to no longer be full. (Andrii)
- Continuing on the above point, updated
user_ring_buffer__reserve_blocking() to loop around epoll_wait() until a
sufficiently large sample is found. (Andrii)
- Communicate BPF_RINGBUF_BUSY_BIT and BPF_RINGBUF_DISCARD_BIT in sample
headers. The ringbuffer implementation still only supports
single-producer semantics, but we can now add synchronization support in
user_ring_buffer__reserve(), and will automatically get multi-producer
semantics. (Andrii)
- Updated some commit summaries, specifically adding more details where
warranted. (Andrii)
- Improved function documentation for bpf_user_ringbuf_drain(), more
clearly explaining all function arguments and return types, as well as
the semantics for waking up user-space producers.
- Add function header comments for user_ring_buffer__reserve{_blocking}().
(Andrii)
- Rounding-up all samples to 8-bytes in the user-space producer, and
enforcing that all samples are properly aligned in the kernel. (Andrii)
- Added testcases that verify that bpf_user_ringbuf_drain() properly
validates samples, and returns error conditions if any invalid samples
are encountered. (Andrii)
- Move atomic_t busy field out of the consumer page, and into the
struct bpf_ringbuf. (Andrii)
- Split ringbuf_map_{mmap, poll}_{kern, user}() into separate
implementations. (Andrii)
- Don't silently consume errors in bpf_user_ringbuf_drain(). (Andrii)
- Remove magic number of samples (4096) from bpf_user_ringbuf_drain(),
and instead use BPF_MAX_USER_RINGBUF_SAMPLES macro, which allows
128k samples. (Andrii)
- Remove MEM_ALLOC modifier from PTR_TO_DYNPTR register in verifier, and
instead rely solely on the register being PTR_TO_DYNPTR. (Andrii)
- Move freeing of atomic_t busy bit to before we invoke irq_work_queue() in
__bpf_user_ringbuf_sample_release(). (Andrii)
- Only check for BPF_RB_NO_WAKEUP flag in bpf_ringbuf_drain().
- Remove libbpf function names from kernel smp_{load, store}* comments in
the kernel. (Andrii)
- Don't use double-underscore naming convention in libbpf functions.
(Andrii)
- Use proper __u32 and __u64 for types where we need to guarantee their
size. (Andrii)
v1 -> v2:
- Following Joanne landing 883743422c ("bpf: Fix ref_obj_id for dynptr
data slices in verifier") [0], removed [PATCH 1/5] bpf: Clear callee
saved regs after updating REG0 [1]. (Joanne)
- Following the above adjustment, updated check_helper_call() to not store
a reference for bpf_dynptr_data() if the register containing the dynptr
is of type MEM_ALLOC. (Joanne)
- Fixed casting issue pointed out by kernel test robot by adding a missing
(uintptr_t) cast. (lkp)
[0] https://lore.kernel.org/all/20220809214055.4050604-1-joannelkoong@gmail.com/
[1] https://lore.kernel.org/all/20220808155341.2479054-1-void@manifault.com/
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
This change includes selftests that validate the expected behavior and
APIs of the new BPF_MAP_TYPE_USER_RINGBUF map type.
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-5-void@manifault.com
Now that all of the logic is in place in the kernel to support user-space
produced ring buffers, we can add the user-space logic to libbpf. This
patch therefore adds the following public symbols to libbpf:
struct user_ring_buffer *
user_ring_buffer__new(int map_fd,
const struct user_ring_buffer_opts *opts);
void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
__u32 size, int timeout_ms);
void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample);
void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample);
void user_ring_buffer__free(struct user_ring_buffer *rb);
A user-space producer must first create a struct user_ring_buffer * object
with user_ring_buffer__new(), and can then reserve samples in the
ring buffer using one of the following two symbols:
void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
__u32 size, int timeout_ms);
With user_ring_buffer__reserve(), a pointer to a 'size' region of the ring
buffer will be returned if sufficient space is available in the buffer.
user_ring_buffer__reserve_blocking() provides similar semantics, but will
block for up to 'timeout_ms' in epoll_wait if there is insufficient space
in the buffer. This function has the guarantee from the kernel that it will
receive at least one event notification per invocation of
bpf_ringbuf_drain(), provided that at least one sample is drained, and the
BPF program did not pass the BPF_RB_NO_WAKEUP flag to bpf_ringbuf_drain().
Once a sample is reserved, it must either be committed to the ring buffer
with user_ring_buffer__submit(), or discarded with
user_ring_buffer__discard().
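Putting the producer-side flow together, a hedged user-space sketch
(map_fd and the sample layout are assumptions):

  #include <errno.h>
  #include <linux/types.h>
  #include <bpf/libbpf.h>

  struct sample { __u64 val; };  /* layout shared with the BPF program */

  static int publish(int map_fd, __u64 val)
  {
          struct user_ring_buffer *rb;
          struct sample *s;

          rb = user_ring_buffer__new(map_fd, NULL);
          if (!rb)
                  return -errno;

          s = user_ring_buffer__reserve(rb, sizeof(*s));
          if (!s) {
                  user_ring_buffer__free(rb);
                  return -errno;  /* e.g. -ENOSPC when the buffer is full */
          }
          s->val = val;
          user_ring_buffer__submit(rb, s);  /* or __discard() to drop it */
          user_ring_buffer__free(rb);
          return 0;
  }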
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-4-void@manifault.com
In a prior change, we added a new BPF_MAP_TYPE_USER_RINGBUF map type which
will allow user-space applications to publish messages to a ring buffer
that is consumed by a BPF program in kernel-space. In order for this
map-type to be useful, it will require a BPF helper function that BPF
programs can invoke to drain samples from the ring buffer, and invoke
callbacks on those samples. This change adds that capability via a new BPF
helper function:
bpf_user_ringbuf_drain(struct bpf_map *map, void *callback_fn, void *ctx,
u64 flags)
BPF programs may invoke this function to run callback_fn() on a series of
samples in the ring buffer. callback_fn() has the following signature:
long callback_fn(struct bpf_dynptr *dynptr, void *context);
Samples are provided to the callback in the form of struct bpf_dynptr *'s,
which the program can read using BPF helper functions for querying
struct bpf_dynptr's.
In order to support bpf_ringbuf_drain(), a new PTR_TO_DYNPTR register
type is added to the verifier to reflect a dynptr that was allocated by
a helper function and passed to a BPF program. Unlike PTR_TO_STACK
dynptrs which are allocated on the stack by a BPF program, PTR_TO_DYNPTR
dynptrs need not use reference tracking, as the BPF helper is trusted to
properly free the dynptr before returning. The verifier currently only
supports PTR_TO_DYNPTR registers that are also DYNPTR_TYPE_LOCAL.
Note that while the corresponding user-space libbpf logic will be added
in a subsequent patch, this patch does contain an implementation of the
.map_poll() callback for BPF_MAP_TYPE_USER_RINGBUF maps. This
.map_poll() callback guarantees that an epoll-waiting user-space
producer will receive at least one event notification whenever at least
one sample is drained in an invocation of bpf_user_ringbuf_drain(),
provided that the function is not invoked with the BPF_RB_NO_WAKEUP
flag. If the BPF_RB_FORCE_WAKEUP flag is provided, a wakeup
notification is sent even if no sample was drained.
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-3-void@manifault.com
We want to support a ringbuf map type where samples are published from
user-space, to be consumed by BPF programs. BPF currently supports a
kernel -> user-space circular ring buffer via the BPF_MAP_TYPE_RINGBUF
map type. We'll need to define a new map type for user-space -> kernel,
as none of the helpers exported for BPF_MAP_TYPE_RINGBUF will apply
to a user-space producer ring buffer, and we'll want to add one or
more helper functions that would not apply for a kernel-producer
ring buffer.
This patch therefore adds a new BPF_MAP_TYPE_USER_RINGBUF map type
definition. The map type is useless in its current form, as there is no
way to access or use it for anything until we add one or more BPF helpers.
A follow-on patch will therefore add a new helper function that allows BPF
programs to run callbacks on samples that are published to the ring
buffer.
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-2-void@manifault.com
Directly return 'btf_check_sec_info' to simplify the code.
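The shape of the simplification (illustrative):

  /* before */
  err = btf_check_sec_info(env, btf_data_size);
  if (err)
          return err;
  return 0;

  /* after */
  return btf_check_sec_info(env, btf_data_size);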
Signed-off-by: William Dean <williamsukatube@163.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220917084248.3649-1-williamsukatube@163.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>