tracing, perf: Implement BPF programs attached to kprobes

BPF programs attached to kprobes provide a safe way to execute
user-defined BPF bytecode programs without being able to crash or
hang the kernel in any way. The BPF engine makes sure that such
programs have a finite execution time and that they cannot break
out of their sandbox.

The user interface is to attach to a kprobe via the perf syscall:

	struct perf_event_attr attr = {
		.type	= PERF_TYPE_TRACEPOINT,
		.config	= event_id,
		...
	};

	event_fd = perf_event_open(&attr,...);
	ioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);

'prog_fd' is a file descriptor associated with a previously loaded BPF
program; 'event_id' is the ID of the kprobe created.

Closing 'event_fd':

	close(event_fd);

... automatically detaches the BPF program from it.

BPF programs can call in-kernel helper functions to:

- lookup/update/delete elements in maps
- probe_read - a wrapper of probe_kernel_read() used to access any
  kernel data structure

BPF programs receive 'struct pt_regs *' as input ('struct pt_regs' is
architecture dependent) and return 0 to ignore the event or 1 to store
the kprobe event into the ring buffer.

Note that kprobes are fundamentally _not_ a stable kernel ABI, so BPF
programs attached to kprobes must be recompiled for every kernel
version, and the user must supply the correct LINUX_VERSION_CODE in
attr.kern_version during the bpf_prog_load() call.

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David S. Miller <davem@davemloft.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1427312966-8434-4-git-send-email-ast@plumgrid.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
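
For illustration, here is a minimal user-space sketch of the attach flow
described above. It is not part of the patch: the kprobe name 'myprobe',
the probed symbol, and the trivial two-instruction "return 1" program are
assumptions made for the example, the tracefs mount point may differ, and
error handling is omitted.

	#include <stdlib.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>
	#include <linux/perf_event.h>
	#include <linux/version.h>

	/* wrap the bpf(2) syscall for BPF_PROG_LOAD; glibc provides no wrapper */
	static int bpf_prog_load(const struct bpf_insn *insns, int insn_cnt)
	{
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.prog_type    = BPF_PROG_TYPE_KPROBE;
		attr.insns        = (unsigned long) insns;
		attr.insn_cnt     = insn_cnt;
		attr.license      = (unsigned long) "GPL";
		attr.kern_version = LINUX_VERSION_CODE; /* must match the running kernel */

		return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
	}

	int main(void)
	{
		/* trivial program: "r0 = 1; exit", i.e. always store the kprobe event */
		struct bpf_insn insns[] = {
			{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 1 },
			{ .code = BPF_JMP | BPF_EXIT },
		};
		struct perf_event_attr attr = {
			.type = PERF_TYPE_TRACEPOINT,
			.size = sizeof(struct perf_event_attr),
			.sample_period = 1,
		};
		char buf[64] = {};
		int id_fd, event_fd, prog_fd;

		/* assumes the kprobe was created beforehand, e.g.:
		 * echo 'p:myprobe do_sys_open' > /sys/kernel/debug/tracing/kprobe_events
		 */
		id_fd = open("/sys/kernel/debug/tracing/events/kprobes/myprobe/id", O_RDONLY);
		read(id_fd, buf, sizeof(buf) - 1);
		close(id_fd);
		attr.config = atoi(buf);	/* event_id of the kprobe */

		prog_fd  = bpf_prog_load(insns, sizeof(insns) / sizeof(insns[0]));
		event_fd = syscall(__NR_perf_event_open, &attr,
				   -1 /* pid */, 0 /* cpu */, -1 /* group_fd */, 0);

		ioctl(event_fd, PERF_EVENT_IOC_ENABLE, 0);
		ioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);

		sleep(10);		/* the program runs on every hit of 'myprobe' */

		close(event_fd);	/* detaches the BPF program */
		return 0;
	}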
/* Copyright (c) 2011-2015 PLUMgrid, http://plumgrid.com
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of version 2 of the GNU General Public
 * License as published by the Free Software Foundation.
 */
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/bpf.h>
#include <linux/filter.h>
#include <linux/uaccess.h>
#include <linux/ctype.h>
#include "trace.h"

/**
 * trace_call_bpf - invoke BPF program
 * @prog: BPF program
 * @ctx: opaque context pointer
 *
 * kprobe handlers execute BPF programs via this helper.
 * Can be used from static tracepoints in the future.
 *
 * Return: BPF programs always return an integer which is interpreted by
 * kprobe handler as:
 *  0 - return from kprobe (event is filtered out)
 *  1 - store kprobe event into ring buffer
 * Other values are reserved and currently alias to 1
 */
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
{
        unsigned int ret;

        if (in_nmi()) /* not supported yet */
                return 1;

        preempt_disable();

        if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
                /*
                 * since some bpf program is already running on this cpu,
                 * don't call into another bpf program (same or different)
                 * and don't send kprobe event into ring-buffer,
                 * so return zero here
                 */
                ret = 0;
                goto out;
        }

        rcu_read_lock();
        ret = BPF_PROG_RUN(prog, ctx);
        rcu_read_unlock();

 out:
        __this_cpu_dec(bpf_prog_active);
        preempt_enable();

        return ret;
}
EXPORT_SYMBOL_GPL(trace_call_bpf);

static u64 bpf_probe_read(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
{
        void *dst = (void *) (long) r1;
        int ret, size = (int) r2;
        void *unsafe_ptr = (void *) (long) r3;

        ret = probe_kernel_read(dst, unsafe_ptr, size);
        if (unlikely(ret < 0))
                memset(dst, 0, size);

        return ret;
}

static const struct bpf_func_proto bpf_probe_read_proto = {
        .func           = bpf_probe_read,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_RAW_STACK,
        .arg2_type      = ARG_CONST_STACK_SIZE,
        .arg3_type      = ARG_ANYTHING,
};

/*
 * limited trace_printk()
 * only %d %u %x %ld %lu %lx %lld %llu %llx %p %s conversion specifiers allowed
 */
static u64 bpf_trace_printk(u64 r1, u64 fmt_size, u64 r3, u64 r4, u64 r5)
{
        char *fmt = (char *) (long) r1;
        bool str_seen = false;
        int mod[3] = {};
        int fmt_cnt = 0;
        u64 unsafe_addr;
        char buf[64];
        int i;

        /*
         * bpf_check()->check_func_arg()->check_stack_boundary()
         * guarantees that fmt points to bpf program stack,
         * fmt_size bytes of it were initialized and fmt_size > 0
         */
        if (fmt[--fmt_size] != 0)
                return -EINVAL;

        /* check format string for allowed specifiers */
        for (i = 0; i < fmt_size; i++) {
                if ((!isprint(fmt[i]) && !isspace(fmt[i])) || !isascii(fmt[i]))
                        return -EINVAL;

                if (fmt[i] != '%')
                        continue;

                if (fmt_cnt >= 3)
                        return -EINVAL;

                /* fmt[i] != 0 && fmt[last] == 0, so we can access fmt[i + 1] */
                i++;
                if (fmt[i] == 'l') {
                        mod[fmt_cnt]++;
                        i++;
                } else if (fmt[i] == 'p' || fmt[i] == 's') {
                        mod[fmt_cnt]++;
                        i++;
                        if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
                                return -EINVAL;
                        fmt_cnt++;
                        if (fmt[i - 1] == 's') {
                                if (str_seen)
                                        /* allow only one '%s' per fmt string */
                                        return -EINVAL;
                                str_seen = true;

                                switch (fmt_cnt) {
                                case 1:
                                        unsafe_addr = r3;
                                        r3 = (long) buf;
                                        break;
                                case 2:
                                        unsafe_addr = r4;
                                        r4 = (long) buf;
                                        break;
                                case 3:
                                        unsafe_addr = r5;
                                        r5 = (long) buf;
                                        break;
                                }
                                buf[0] = 0;
                                strncpy_from_unsafe(buf,
                                                    (void *) (long) unsafe_addr,
                                                    sizeof(buf));
                        }
                        continue;
                }

                if (fmt[i] == 'l') {
                        mod[fmt_cnt]++;
                        i++;
                }

                if (fmt[i] != 'd' && fmt[i] != 'u' && fmt[i] != 'x')
                        return -EINVAL;
                fmt_cnt++;
        }

        return __trace_printk(1/* fake ip will not be printed */, fmt,
                              mod[0] == 2 ? r3 : mod[0] == 1 ? (long) r3 : (u32) r3,
                              mod[1] == 2 ? r4 : mod[1] == 1 ? (long) r4 : (u32) r4,
                              mod[2] == 2 ? r5 : mod[2] == 1 ? (long) r5 : (u32) r5);
}

static const struct bpf_func_proto bpf_trace_printk_proto = {
        .func           = bpf_trace_printk,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_STACK,
        .arg2_type      = ARG_CONST_STACK_SIZE,
};

const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
{
        /*
         * this program might be calling bpf_trace_printk,
         * so allocate per-cpu printk buffers
         */
        trace_printk_init_buffers();

        return &bpf_trace_printk_proto;
}

static u64 bpf_perf_event_read(u64 r1, u64 index, u64 r3, u64 r4, u64 r5)
{
        struct bpf_map *map = (struct bpf_map *) (unsigned long) r1;
        struct bpf_array *array = container_of(map, struct bpf_array, map);
        struct perf_event *event;
        struct file *file;

        if (unlikely(index >= array->map.max_entries))
                return -E2BIG;

        file = READ_ONCE(array->ptrs[index]);
        if (unlikely(!file))
                return -ENOENT;

        event = file->private_data;

        /* make sure event is local and doesn't have pmu::count */
        if (event->oncpu != smp_processor_id() ||
            event->pmu->count)
                return -EINVAL;

        if (unlikely(event->attr.type != PERF_TYPE_HARDWARE &&
                     event->attr.type != PERF_TYPE_RAW))
                return -EINVAL;

        /*
         * we don't know if the function is run successfully by the
         * return value. It can be judged in other places, such as
         * eBPF programs.
         */
        return perf_event_read_local(event);
}

static const struct bpf_func_proto bpf_perf_event_read_proto = {
        .func           = bpf_perf_event_read,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_CONST_MAP_PTR,
        .arg2_type      = ARG_ANYTHING,
};

static u64 bpf_perf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
{
        struct pt_regs *regs = (struct pt_regs *) (long) r1;
        struct bpf_map *map = (struct bpf_map *) (long) r2;
        struct bpf_array *array = container_of(map, struct bpf_array, map);
        u64 index = flags & BPF_F_INDEX_MASK;
        void *data = (void *) (long) r4;
        struct perf_sample_data sample_data;
        struct perf_event *event;
        struct file *file;
        struct perf_raw_record raw = {
                .size = size,
                .data = data,
        };

        if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
                return -EINVAL;
        if (index == BPF_F_CURRENT_CPU)
                index = raw_smp_processor_id();
        if (unlikely(index >= array->map.max_entries))
                return -E2BIG;

        file = READ_ONCE(array->ptrs[index]);
        if (unlikely(!file))
                return -ENOENT;

        event = file->private_data;

        if (unlikely(event->attr.type != PERF_TYPE_SOFTWARE ||
                     event->attr.config != PERF_COUNT_SW_BPF_OUTPUT))
                return -EINVAL;

        if (unlikely(event->oncpu != smp_processor_id()))
                return -EOPNOTSUPP;

        perf_sample_data_init(&sample_data, 0, 0);
        sample_data.raw = &raw;
        perf_event_output(event, &sample_data, regs);
        return 0;
}

static const struct bpf_func_proto bpf_perf_event_output_proto = {
        .func           = bpf_perf_event_output,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_CTX,
        .arg2_type      = ARG_CONST_MAP_PTR,
        .arg3_type      = ARG_ANYTHING,
        .arg4_type      = ARG_PTR_TO_STACK,
        .arg5_type      = ARG_CONST_STACK_SIZE,
};

static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs);

static u64 bpf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
{
        struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs);

        perf_fetch_caller_regs(regs);

        return bpf_perf_event_output((long)regs, r2, flags, r4, size);
}

static const struct bpf_func_proto bpf_event_output_proto = {
        .func           = bpf_event_output,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_CTX,
        .arg2_type      = ARG_CONST_MAP_PTR,
        .arg3_type      = ARG_ANYTHING,
        .arg4_type      = ARG_PTR_TO_STACK,
        .arg5_type      = ARG_CONST_STACK_SIZE,
};

const struct bpf_func_proto *bpf_get_event_output_proto(void)
{
        return &bpf_event_output_proto;
}

static const struct bpf_func_proto *tracing_func_proto(enum bpf_func_id func_id)
{
        switch (func_id) {
        case BPF_FUNC_map_lookup_elem:
                return &bpf_map_lookup_elem_proto;
        case BPF_FUNC_map_update_elem:
                return &bpf_map_update_elem_proto;
        case BPF_FUNC_map_delete_elem:
                return &bpf_map_delete_elem_proto;
        case BPF_FUNC_probe_read:
                return &bpf_probe_read_proto;
        case BPF_FUNC_ktime_get_ns:
                return &bpf_ktime_get_ns_proto;
        case BPF_FUNC_tail_call:
                return &bpf_tail_call_proto;
        case BPF_FUNC_get_current_pid_tgid:
                return &bpf_get_current_pid_tgid_proto;
        case BPF_FUNC_get_current_uid_gid:
                return &bpf_get_current_uid_gid_proto;
        case BPF_FUNC_get_current_comm:
                return &bpf_get_current_comm_proto;
        case BPF_FUNC_trace_printk:
                return bpf_get_trace_printk_proto();
        case BPF_FUNC_get_smp_processor_id:
                return &bpf_get_smp_processor_id_proto;
        case BPF_FUNC_perf_event_read:
                return &bpf_perf_event_read_proto;
        default:
                return NULL;
        }
}

static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id)
{
        switch (func_id) {
        case BPF_FUNC_perf_event_output:
                return &bpf_perf_event_output_proto;
        case BPF_FUNC_get_stackid:
                return &bpf_get_stackid_proto;
        default:
                return tracing_func_proto(func_id);
        }
}

/* bpf+kprobe programs can access fields of 'struct pt_regs' */
static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type type,
                                        enum bpf_reg_type *reg_type)
{
        /* check bounds */
        if (off < 0 || off >= sizeof(struct pt_regs))
                return false;

        /* only read is allowed */
        if (type != BPF_READ)
                return false;

        /* disallow misaligned access */
        if (off % size != 0)
                return false;

        return true;
}

static const struct bpf_verifier_ops kprobe_prog_ops = {
        .get_func_proto  = kprobe_prog_func_proto,
        .is_valid_access = kprobe_prog_is_valid_access,
};

static struct bpf_prog_type_list kprobe_tl = {
        .ops    = &kprobe_prog_ops,
        .type   = BPF_PROG_TYPE_KPROBE,
};

static u64 bpf_perf_event_output_tp(u64 r1, u64 r2, u64 index, u64 r4, u64 size)
{
        /*
         * r1 points to perf tracepoint buffer where first 8 bytes are hidden
         * from bpf program and contain a pointer to 'struct pt_regs'. Fetch it
         * from there and call the same bpf_perf_event_output() helper
         */
        u64 ctx = *(long *)(uintptr_t)r1;

        return bpf_perf_event_output(ctx, r2, index, r4, size);
}

static const struct bpf_func_proto bpf_perf_event_output_proto_tp = {
        .func           = bpf_perf_event_output_tp,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_CTX,
        .arg2_type      = ARG_CONST_MAP_PTR,
        .arg3_type      = ARG_ANYTHING,
        .arg4_type      = ARG_PTR_TO_STACK,
        .arg5_type      = ARG_CONST_STACK_SIZE,
};

static u64 bpf_get_stackid_tp(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
{
        u64 ctx = *(long *)(uintptr_t)r1;

        return bpf_get_stackid(ctx, r2, r3, r4, r5);
}

static const struct bpf_func_proto bpf_get_stackid_proto_tp = {
        .func           = bpf_get_stackid_tp,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
        .arg1_type      = ARG_PTR_TO_CTX,
        .arg2_type      = ARG_CONST_MAP_PTR,
        .arg3_type      = ARG_ANYTHING,
};

static const struct bpf_func_proto *tp_prog_func_proto(enum bpf_func_id func_id)
{
        switch (func_id) {
        case BPF_FUNC_perf_event_output:
                return &bpf_perf_event_output_proto_tp;
        case BPF_FUNC_get_stackid:
                return &bpf_get_stackid_proto_tp;
        default:
                return tracing_func_proto(func_id);
        }
}

static bool tp_prog_is_valid_access(int off, int size, enum bpf_access_type type,
                                    enum bpf_reg_type *reg_type)
{
        if (off < sizeof(void *) || off >= PERF_MAX_TRACE_SIZE)
                return false;
        if (type != BPF_READ)
                return false;
        if (off % size != 0)
                return false;
        return true;
}

static const struct bpf_verifier_ops tracepoint_prog_ops = {
        .get_func_proto  = tp_prog_func_proto,
        .is_valid_access = tp_prog_is_valid_access,
};

static struct bpf_prog_type_list tracepoint_tl = {
        .ops    = &tracepoint_prog_ops,
        .type   = BPF_PROG_TYPE_TRACEPOINT,
};

static int __init register_kprobe_prog_ops(void)
{
        bpf_register_prog_type(&kprobe_tl);
        bpf_register_prog_type(&tracepoint_tl);
        return 0;
}
late_initcall(register_kprobe_prog_ops);