perf parse-events: Remove BPF event support

New features like the BPF --filter support in perf record have made the
BPF event functionality somewhat redundant. As shown by commit
fcb027c1a4f6 ("perf tools: Revert enable indices setting syntax for BPF
map") and commit 14e4b9f428 ("perf trace: Raw augmented syscalls fix
libbpf 1.0+ compatibility"), the BPF event support hasn't been well
maintained and it adds considerable complexity in areas like event
parsing, not least because '/' is a separator for event modifiers as
well as a path separator.
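
For example, the newer sample-filter mechanism needs no special event
syntax, while the removed BPF-event syntax required the parser to
accept a file path as an event string. A minimal sketch (the filter
expression and the .c path are made-up illustrations, not taken from
this patch):

  ```
  # BPF-based sample filtering, handled outside the event parser:
  perf record -e cycles --filter 'period > 100000' -- sleep 1

  # Removed BPF event syntax: '/' here is a path component ...
  perf record -e /tmp/my_prog.c -- sleep 1

  # ... while '/' normally delimits PMU terms and modifiers:
  perf record -e cpu/event=0x3c,period=100000/u -- sleep 1
  ```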

This patch removes support for BPF events from the event parser and
then removes the associated functions. This leads to the removal of
whole source files such as bpf-loader.c. Removing this support means
that augmented syscalls in perf trace are broken; this will be fixed in
a later commit that adds support using BPF skeletons.

The removal of BPF events causes an unused-label warning in the
flex-generated code, so update the build to ignore it (see the
CFLAGS_parse-events-flex.o change below):

  ```
  util/parse-events-flex.c:2704:1: error: label ‘find_rule’ defined but not used [-Werror=unused-label]
  2704 | find_rule: /* we branch to this label when backing up */
  ```

Committer notes:

Extracted from a larger patch that was also removing the support for
linking with libllvm and libclang, which were an alternative to
executing an external clang to compile the .c event source code into
BPF bytecode.

Testing it:

  # perf trace -e /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c
  event syntax error: '/home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c'
                        \___ Bad event or PMU

  Unabled to find PMU or event on a PMU of 'home'

  Initial error:
  event syntax error: '/home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c'
                        \___ Cannot find PMU `home'. Missing kernel support?
  Run 'perf list' for a list of valid events

   Usage: perf trace [<options>] [<command>]
      or: perf trace [<options>] -- <command> [<options>]
      or: perf trace record [<options>] [<command>]
      or: perf trace record [<options>] -- <command> [<options>]

      -e, --event <event>   event/syscall selector. use 'perf list' to list available events
  #

Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Carsten Haitzler <carsten.haitzler@arm.com>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Fangrui Song <maskray@google.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: Tom Rix <trix@redhat.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Wang ShaoBo <bobo.shaobowang@huawei.com>
Cc: Yang Jihong <yangjihong1@huawei.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Cc: bpf@vger.kernel.org
Cc: llvm@lists.linux.dev
Link: https://lore.kernel.org/r/20230810184853.2860737-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Author: Ian Rogers, 2023-08-11 15:26:11 -03:00; committed by Arnaldo Carvalho de Melo
parent 56b11a2126
commit 3d6dfae889
27 changed files with 3 additions and 4381 deletions


@ -125,9 +125,6 @@ Given a $HOME/.perfconfig like this:
group = true
skip-empty = true
[llvm]
dump-obj = true
clang-opt = -g
You can hide source code of annotate feature setting the config to false with
@ -657,36 +654,6 @@ ftrace.*::
-F option is not specified. Possible values are 'function' and
'function_graph'.
llvm.*::
llvm.clang-path::
Path to clang. If omit, search it from $PATH.
llvm.clang-bpf-cmd-template::
Cmdline template. Below lines show its default value. Environment
variable is used to pass options.
"$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\
"-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \
"$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \
"-Wno-unused-value -Wno-pointer-sign " \
"-working-directory $WORKING_DIR " \
"-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -O2 -o - $LLVM_OPTIONS_PIPE"
llvm.clang-opt::
Options passed to clang.
llvm.kbuild-dir::
kbuild directory. If not set, use /lib/modules/`uname -r`/build.
If set to "" deliberately, skip kernel header auto-detector.
llvm.kbuild-opts::
Options passed to 'make' when detecting kernel header options.
llvm.dump-obj::
Enable perf dump BPF object files compiled by LLVM.
llvm.opts::
Options passed to llc.
samples.*::
samples.context::


@ -99,20 +99,6 @@ OPTIONS
If you want to profile write accesses in [0x1000~1008), just set
'mem:0x1000/8:w'.
- a BPF source file (ending in .c) or a precompiled object file (ending
in .o) selects one or more BPF events.
The BPF program can attach to various perf events based on the ELF section
names.
When processing a '.c' file, perf searches an installed LLVM to compile it
into an object file first. Optional clang options can be passed via the
'--clang-opt' command line option, e.g.:
perf record --clang-opt "-DLINUX_VERSION_CODE=0x50000" \
-e tests/bpf-script-example.c
Note: '--clang-opt' must be placed before '--event/-e'.
- a group of events surrounded by a pair of brace ("{event1,event2,...}").
Each event is separated by commas and the group should be quoted to
prevent the shell interpretation. You also need to use --group on
@ -547,14 +533,6 @@ PERF_RECORD_SWITCH_CPU_WIDE. In some cases (e.g. Intel PT, CoreSight or Arm SPE)
switch events will be enabled automatically, which can be suppressed by
by the option --no-switch-events.
--clang-path=PATH::
Path to clang binary to use for compiling BPF scriptlets.
(enabled when BPF support is on)
--clang-opt=OPTIONS::
Options passed to clang when compiling BPF scriptlets.
(enabled when BPF support is on)
--vmlinux=PATH::
Specify vmlinux path which has debuginfo.
(enabled when BPF prologue is on)


@ -589,18 +589,6 @@ ifndef NO_LIBELF
LIBBPF_STATIC := 1
endif
endif
ifndef NO_DWARF
ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
CFLAGS += -DHAVE_BPF_PROLOGUE
$(call detected,CONFIG_BPF_PROLOGUE)
else
msg := $(warning BPF prologue is not supported by architecture $(SRCARCH), missing regs_query_register_offset());
endif
else
msg := $(warning DWARF support is off, BPF prologue is disabled);
endif
endif # NO_LIBBPF
endif # NO_LIBELF


@ -37,8 +37,6 @@
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
#include "util/perf_api_probe.h"
#include "util/llvm-utils.h"
#include "util/bpf-loader.h"
#include "util/trigger.h"
#include "util/perf-hooks.h"
#include "util/cpu-set-sched.h"
@ -2465,16 +2463,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
}
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_free_threads;
}
/*
* Normally perf_session__new would do this, but it doesn't have the
* evlist.
@ -3486,10 +3474,6 @@ static struct option __record_options[] = {
"collect kernel callchains"),
OPT_BOOLEAN(0, "user-callchains", &record.opts.user_callchains,
"collect user callchains"),
OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
"clang binary to use for compiling BPF scriptlets"),
OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
"options passed to clang when compiling BPF scriptlets"),
OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name,
"file", "vmlinux pathname"),
OPT_BOOLEAN(0, "buildid-all", &record.buildid_all,
@ -3967,27 +3951,6 @@ int cmd_record(int argc, const char **argv)
setlocale(LC_ALL, "");
#ifndef HAVE_LIBBPF_SUPPORT
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c)
set_nobuild('\0', "clang-path", true);
set_nobuild('\0', "clang-opt", true);
# undef set_nobuild
#endif
#ifndef HAVE_BPF_PROLOGUE
# if !defined (HAVE_DWARF_SUPPORT)
# define REASON "NO_DWARF=1"
# elif !defined (HAVE_LIBBPF_SUPPORT)
# define REASON "NO_LIBBPF=1"
# else
# define REASON "this architecture doesn't support BPF prologue"
# endif
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, REASON, c)
set_nobuild('\0', "vmlinux", true);
# undef set_nobuild
# undef REASON
#endif
#ifndef HAVE_BPF_SKEL
# define set_nobuild(s, l, m, c) set_option_nobuild(record_options, s, l, m, c)
set_nobuild('\0', "off-cpu", "no BUILD_BPF_SKEL=1", true);
@ -4116,14 +4079,6 @@ int cmd_record(int argc, const char **argv)
if (dry_run)
goto out;
err = bpf__setup_stdout(rec->evlist);
if (err) {
bpf__strerror_setup_stdout(rec->evlist, err, errbuf, sizeof(errbuf));
pr_err("ERROR: Setup BPF stdout failed: %s\n",
errbuf);
goto out;
}
err = -ENOMEM;
if (rec->no_buildid_cache || rec->no_buildid) {


@ -18,6 +18,7 @@
#include <api/fs/tracing_path.h>
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#endif
#include "util/bpf_map.h"
#include "util/rlimit.h"
@ -53,7 +54,6 @@
#include "trace/beauty/beauty.h"
#include "trace-event.h"
#include "util/parse-events.h"
#include "util/bpf-loader.h"
#include "util/tracepoint.h"
#include "callchain.h"
#include "print_binary.h"
@ -3287,17 +3287,6 @@ static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace, const ch
return bpf_object__find_map_by_name(trace->bpf_obj, name);
}
static void trace__set_bpf_map_filtered_pids(struct trace *trace)
{
trace->filter_pids.map = trace__find_bpf_map_by_name(trace, "pids_filtered");
}
static void trace__set_bpf_map_syscalls(struct trace *trace)
{
trace->syscalls.prog_array.sys_enter = trace__find_bpf_map_by_name(trace, "syscalls_sys_enter");
trace->syscalls.prog_array.sys_exit = trace__find_bpf_map_by_name(trace, "syscalls_sys_exit");
}
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace, const char *name)
{
struct bpf_program *pos, *prog = NULL;
@ -3553,25 +3542,6 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
return err;
}
static void trace__delete_augmented_syscalls(struct trace *trace)
{
struct evsel *evsel, *tmp;
evlist__remove(trace->evlist, trace->syscalls.events.augmented);
evsel__delete(trace->syscalls.events.augmented);
trace->syscalls.events.augmented = NULL;
evlist__for_each_entry_safe(trace->evlist, tmp, evsel) {
if (evsel->bpf_obj == trace->bpf_obj) {
evlist__remove(trace->evlist, evsel);
evsel__delete(evsel);
}
}
bpf_object__close(trace->bpf_obj);
trace->bpf_obj = NULL;
}
#else // HAVE_LIBBPF_SUPPORT
static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
@ -3579,45 +3549,12 @@ static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_u
return NULL;
}
static void trace__set_bpf_map_filtered_pids(struct trace *trace __maybe_unused)
{
}
static void trace__set_bpf_map_syscalls(struct trace *trace __maybe_unused)
{
}
static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace __maybe_unused,
const char *name __maybe_unused)
{
return NULL;
}
static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace __maybe_unused)
{
return 0;
}
static void trace__delete_augmented_syscalls(struct trace *trace __maybe_unused)
{
}
#endif // HAVE_LIBBPF_SUPPORT
static bool trace__only_augmented_syscalls_evsels(struct trace *trace)
{
struct evsel *evsel;
evlist__for_each_entry(trace->evlist, evsel) {
if (evsel == trace->syscalls.events.augmented ||
evsel->bpf_obj == trace->bpf_obj)
continue;
return false;
}
return true;
}
static int trace__set_ev_qualifier_filter(struct trace *trace)
{
if (trace->syscalls.events.sys_enter)
@ -3981,16 +3918,6 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
if (err < 0)
goto out_error_open;
err = bpf__apply_obj_config();
if (err) {
char errbuf[BUFSIZ];
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
goto out_error_open;
}
err = trace__set_filter_pids(trace);
if (err < 0)
goto out_error_mem;
@ -4922,77 +4849,6 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}
evsel = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
if (IS_ERR(evsel)) {
bpf__strerror_setup_output_event(trace.evlist, PTR_ERR(evsel), bf, sizeof(bf));
pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf);
goto out;
}
if (evsel) {
trace.syscalls.events.augmented = evsel;
evsel = evlist__find_tracepoint_by_name(trace.evlist, "raw_syscalls:sys_enter");
if (evsel == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not found in the augmented BPF object\n");
goto out;
}
if (evsel->bpf_obj == NULL) {
pr_err("ERROR: raw_syscalls:sys_enter not associated to a BPF object\n");
goto out;
}
trace.bpf_obj = evsel->bpf_obj;
/*
* If we have _just_ the augmenter event but don't have a
* explicit --syscalls, then assume we want all strace-like
* syscalls:
*/
if (!trace.trace_syscalls && trace__only_augmented_syscalls_evsels(&trace))
trace.trace_syscalls = true;
/*
* So, if we have a syscall augmenter, but trace_syscalls, aka
* strace-like syscall tracing is not set, then we need to trow
* away the augmenter, i.e. all the events that were created
* from that BPF object file.
*
* This is more to fix the current .perfconfig trace.add_events
* style of setting up the strace-like eBPF based syscall point
* payload augmenter.
*
* All this complexity will be avoided by adding an alternative
* to trace.add_events in the form of
* trace.bpf_augmented_syscalls, that will be only parsed if we
* need it.
*
* .perfconfig trace.add_events is still useful if we want, for
* instance, have msr_write.msr in some .perfconfig profile based
* 'perf trace --config determinism.profile' mode, where for some
* particular goal/workload type we want a set of events and
* output mode (with timings, etc) instead of having to add
* all via the command line.
*
* Also --config to specify an alternate .perfconfig file needs
* to be implemented.
*/
if (!trace.trace_syscalls) {
trace__delete_augmented_syscalls(&trace);
} else {
trace__set_bpf_map_filtered_pids(&trace);
trace__set_bpf_map_syscalls(&trace);
trace.syscalls.unaugmented_prog = trace__find_bpf_program_by_title(&trace, "!raw_syscalls:unaugmented");
}
}
err = bpf__setup_stdout(trace.evlist);
if (err) {
bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf));
pr_err("ERROR: Setup BPF stdout failed: %s\n", bf);
goto out;
}
err = -1;
if (map_dump_str) {


@ -18,7 +18,6 @@
#include <subcmd/run-command.h>
#include "util/parse-events.h"
#include <subcmd/parse-options.h>
#include "util/bpf-loader.h"
#include "util/debug.h"
#include "util/event.h"
#include "util/util.h" // usage()
@ -324,7 +323,6 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv)
perf_config__exit();
exit_browser(status);
perf_env__exit(&perf_env);
bpf__clear();
if (status)
return status & 0xff;


@ -1,5 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
llvm-src-base.c
llvm-src-kbuild.c
llvm-src-prologue.c
llvm-src-relocation.c


@ -37,8 +37,6 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o llvm-src-prologue.o llvm-src-relocation.o
perf-y += bpf.o
perf-y += topology.o
perf-y += mem.o
perf-y += cpumap.o
@ -69,34 +67,6 @@ perf-y += sigtrap.o
perf-y += event_groups.o
perf-y += symbols.o
$(OUTPUT)tests/llvm-src-base.c: tests/bpf-script-example.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_base_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-kbuild.c: tests/bpf-script-test-kbuild.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_kbuild_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-prologue.c: tests/bpf-script-test-prologue.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_prologue_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
$(OUTPUT)tests/llvm-src-relocation.c: tests/bpf-script-test-relocation.c tests/Build
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_test_relocation[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 arm arm64 powerpc))
perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
endif


@ -1,60 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-example.c
* Test basic LLVM building
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define BPF_ANY 0
#define BPF_MAP_TYPE_ARRAY 2
#define BPF_FUNC_map_lookup_elem 1
#define BPF_FUNC_map_update_elem 2
static void *(*bpf_map_lookup_elem)(void *map, void *key) =
(void *) BPF_FUNC_map_lookup_elem;
static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) =
(void *) BPF_FUNC_map_update_elem;
/*
* Following macros are taken from tools/lib/bpf/bpf_helpers.h,
* and are used to create BTF defined maps. It is easier to take
* 2 simple macros, than being able to include above header in
* runtime.
*
* __uint - defines integer attribute of BTF map definition,
* Such attributes are represented using a pointer to an array,
* in which dimensionality of array encodes specified integer
* value.
*
* __type - defines pointer variable with typeof(val) type for
* attributes like key or value, which will be defined by the
* size of the type.
*/
#define __uint(name, val) int (*name)[val]
#define __type(name, val) typeof(val) *name
#define SEC(NAME) __attribute__((section(NAME), used))
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__uint(max_entries, 1);
__type(key, int);
__type(value, int);
} flip_table SEC(".maps");
SEC("syscalls:sys_enter_epoll_pwait")
int bpf_func__SyS_epoll_pwait(void *ctx)
{
int ind =0;
int *flag = bpf_map_lookup_elem(&flip_table, &ind);
int new_flag;
if (!flag)
return 0;
/* flip flag and store back */
new_flag = !*flag;
bpf_map_update_elem(&flip_table, &ind, &new_flag, BPF_ANY);
return new_flag;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;


@ -1,21 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-kbuild.c
* Test include from kernel header
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define SEC(NAME) __attribute__((section(NAME), used))
#include <uapi/linux/fs.h>
SEC("func=vfs_llseek")
int bpf_func__vfs_llseek(void *ctx)
{
return 0;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;


@ -1,49 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-prologue.c
* Test BPF prologue
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define SEC(NAME) __attribute__((section(NAME), used))
#include <uapi/linux/fs.h>
/*
* If CONFIG_PROFILE_ALL_BRANCHES is selected,
* 'if' is redefined after include kernel header.
* Recover 'if' for BPF object code.
*/
#ifdef if
# undef if
#endif
typedef unsigned int __bitwise fmode_t;
#define FMODE_READ 0x1
#define FMODE_WRITE 0x2
static void (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) =
(void *) 6;
SEC("func=null_lseek file->f_mode offset orig")
int bpf_func__null_lseek(void *ctx, int err, unsigned long _f_mode,
unsigned long offset, unsigned long orig)
{
fmode_t f_mode = (fmode_t)_f_mode;
if (err)
return 0;
if (f_mode & FMODE_WRITE)
return 0;
if (offset & 1)
return 0;
if (orig == SEEK_CUR)
return 0;
return 1;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;


@ -1,51 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* bpf-script-test-relocation.c
* Test BPF loader checking relocation
*/
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
#endif
#define BPF_ANY 0
#define BPF_MAP_TYPE_ARRAY 2
#define BPF_FUNC_map_lookup_elem 1
#define BPF_FUNC_map_update_elem 2
static void *(*bpf_map_lookup_elem)(void *map, void *key) =
(void *) BPF_FUNC_map_lookup_elem;
static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) =
(void *) BPF_FUNC_map_update_elem;
struct bpf_map_def {
unsigned int type;
unsigned int key_size;
unsigned int value_size;
unsigned int max_entries;
};
#define SEC(NAME) __attribute__((section(NAME), used))
struct bpf_map_def SEC("maps") my_table = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(int),
.max_entries = 1,
};
int this_is_a_global_val;
SEC("func=sys_write")
int bpf_func__sys_write(void *ctx)
{
int key = 0;
int value = 0;
/*
* Incorrect relocation. Should not allow this program be
* loaded into kernel.
*/
bpf_map_update_elem(&this_is_a_global_val, &key, &value, 0);
return 0;
}
char _license[] SEC("license") = "GPL";
int _version SEC("version") = LINUX_VERSION_CODE;


@ -1,390 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <util/record.h>
#include <util/util.h>
#include <util/bpf-loader.h>
#include <util/evlist.h>
#include <linux/filter.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <api/fs/fs.h>
#include <perf/mmap.h>
#include "tests.h"
#include "llvm.h"
#include "debug.h"
#include "parse-events.h"
#include "util/mmap.h"
#define NR_ITERS 111
#define PERF_TEST_BPF_PATH "/sys/fs/bpf/perf_test"
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
#include <linux/bpf.h>
#include <bpf/bpf.h>
static int epoll_pwait_loop(void)
{
struct epoll_event events;
int i;
/* Should fail NR_ITERS times */
for (i = 0; i < NR_ITERS; i++)
epoll_pwait(-(i + 1), &events, 0, 0, NULL);
return 0;
}
#ifdef HAVE_BPF_PROLOGUE
static int llseek_loop(void)
{
int fds[2], i;
fds[0] = open("/dev/null", O_RDONLY);
fds[1] = open("/dev/null", O_RDWR);
if (fds[0] < 0 || fds[1] < 0)
return -1;
for (i = 0; i < NR_ITERS; i++) {
lseek(fds[i % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
lseek(fds[(i + 1) % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
}
close(fds[0]);
close(fds[1]);
return 0;
}
#endif
static struct {
enum test_llvm__testcase prog_id;
const char *name;
const char *msg_compile_fail;
const char *msg_load_fail;
int (*target_func)(void);
int expect_result;
bool pin;
} bpf_testcase_table[] = {
{
.prog_id = LLVM_TESTCASE_BASE,
.name = "[basic_bpf_test]",
.msg_compile_fail = "fix 'perf test LLVM' first",
.msg_load_fail = "load bpf object failed",
.target_func = &epoll_pwait_loop,
.expect_result = (NR_ITERS + 1) / 2,
},
{
.prog_id = LLVM_TESTCASE_BASE,
.name = "[bpf_pinning]",
.msg_compile_fail = "fix kbuild first",
.msg_load_fail = "check your vmlinux setting?",
.target_func = &epoll_pwait_loop,
.expect_result = (NR_ITERS + 1) / 2,
.pin = true,
},
#ifdef HAVE_BPF_PROLOGUE
{
.prog_id = LLVM_TESTCASE_BPF_PROLOGUE,
.name = "[bpf_prologue_test]",
.msg_compile_fail = "fix kbuild first",
.msg_load_fail = "check your vmlinux setting?",
.target_func = &llseek_loop,
.expect_result = (NR_ITERS + 1) / 4,
},
#endif
};
static int do_test(struct bpf_object *obj, int (*func)(void),
int expect)
{
struct record_opts opts = {
.target = {
.uid = UINT_MAX,
.uses_mmap = true,
},
.freq = 0,
.mmap_pages = 256,
.default_interval = 1,
};
char pid[16];
char sbuf[STRERR_BUFSIZE];
struct evlist *evlist;
int i, ret = TEST_FAIL, err = 0, count = 0;
struct parse_events_state parse_state;
struct parse_events_error parse_error;
parse_events_error__init(&parse_error);
bzero(&parse_state, sizeof(parse_state));
parse_state.error = &parse_error;
INIT_LIST_HEAD(&parse_state.list);
err = parse_events_load_bpf_obj(&parse_state, &parse_state.list, obj, NULL, NULL);
parse_events_error__exit(&parse_error);
if (err == -ENODATA) {
pr_debug("Failed to add events selected by BPF, debuginfo package not installed\n");
return TEST_SKIP;
}
if (err || list_empty(&parse_state.list)) {
pr_debug("Failed to add events selected by BPF\n");
return TEST_FAIL;
}
snprintf(pid, sizeof(pid), "%d", getpid());
pid[sizeof(pid) - 1] = '\0';
opts.target.tid = opts.target.pid = pid;
/* Instead of evlist__new_default, don't add default events */
evlist = evlist__new();
if (!evlist) {
pr_debug("Not enough memory to create evlist\n");
return TEST_FAIL;
}
err = evlist__create_maps(evlist, &opts.target);
if (err < 0) {
pr_debug("Not enough memory to create thread/cpu maps\n");
goto out_delete_evlist;
}
evlist__splice_list_tail(evlist, &parse_state.list);
evlist__config(evlist, &opts, NULL);
err = evlist__open(evlist);
if (err < 0) {
pr_debug("perf_evlist__open: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_delete_evlist;
}
err = evlist__mmap(evlist, opts.mmap_pages);
if (err < 0) {
pr_debug("evlist__mmap: %s\n",
str_error_r(errno, sbuf, sizeof(sbuf)));
goto out_delete_evlist;
}
evlist__enable(evlist);
(*func)();
evlist__disable(evlist);
for (i = 0; i < evlist->core.nr_mmaps; i++) {
union perf_event *event;
struct mmap *md;
md = &evlist->mmap[i];
if (perf_mmap__read_init(&md->core) < 0)
continue;
while ((event = perf_mmap__read_event(&md->core)) != NULL) {
const u32 type = event->header.type;
if (type == PERF_RECORD_SAMPLE)
count ++;
}
perf_mmap__read_done(&md->core);
}
if (count != expect * evlist->core.nr_entries) {
pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect * evlist->core.nr_entries, count);
goto out_delete_evlist;
}
ret = TEST_OK;
out_delete_evlist:
evlist__delete(evlist);
return ret;
}
static struct bpf_object *
prepare_bpf(void *obj_buf, size_t obj_buf_sz, const char *name)
{
struct bpf_object *obj;
obj = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, name);
if (IS_ERR(obj)) {
pr_debug("Compile BPF program failed.\n");
return NULL;
}
return obj;
}
static int __test__bpf(int idx)
{
int ret;
void *obj_buf;
size_t obj_buf_sz;
struct bpf_object *obj;
ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz,
bpf_testcase_table[idx].prog_id,
false, NULL);
if (ret != TEST_OK || !obj_buf || !obj_buf_sz) {
pr_debug("Unable to get BPF object, %s\n",
bpf_testcase_table[idx].msg_compile_fail);
if ((idx == 0) || (ret == TEST_SKIP))
return TEST_SKIP;
else
return TEST_FAIL;
}
obj = prepare_bpf(obj_buf, obj_buf_sz,
bpf_testcase_table[idx].name);
if ((!!bpf_testcase_table[idx].target_func) != (!!obj)) {
if (!obj)
pr_debug("Fail to load BPF object: %s\n",
bpf_testcase_table[idx].msg_load_fail);
else
pr_debug("Success unexpectedly: %s\n",
bpf_testcase_table[idx].msg_load_fail);
ret = TEST_FAIL;
goto out;
}
if (obj) {
ret = do_test(obj,
bpf_testcase_table[idx].target_func,
bpf_testcase_table[idx].expect_result);
if (ret != TEST_OK)
goto out;
if (bpf_testcase_table[idx].pin) {
int err;
if (!bpf_fs__mount()) {
pr_debug("BPF filesystem not mounted\n");
ret = TEST_FAIL;
goto out;
}
err = mkdir(PERF_TEST_BPF_PATH, 0777);
if (err && errno != EEXIST) {
pr_debug("Failed to make perf_test dir: %s\n",
strerror(errno));
ret = TEST_FAIL;
goto out;
}
if (bpf_object__pin(obj, PERF_TEST_BPF_PATH))
ret = TEST_FAIL;
if (rm_rf(PERF_TEST_BPF_PATH))
ret = TEST_FAIL;
}
}
out:
free(obj_buf);
bpf__clear();
return ret;
}
static int check_env(void)
{
LIBBPF_OPTS(bpf_prog_load_opts, opts);
int err;
char license[] = "GPL";
struct bpf_insn insns[] = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_EXIT_INSN(),
};
err = fetch_kernel_version(&opts.kern_version, NULL, 0);
if (err) {
pr_debug("Unable to get kernel version\n");
return err;
}
err = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, license, insns,
ARRAY_SIZE(insns), &opts);
if (err < 0) {
pr_err("Missing basic BPF support, skip this test: %s\n",
strerror(errno));
return err;
}
close(err);
return 0;
}
static int test__bpf(int i)
{
int err;
if (i < 0 || i >= (int)ARRAY_SIZE(bpf_testcase_table))
return TEST_FAIL;
if (geteuid() != 0) {
pr_debug("Only root can run BPF test\n");
return TEST_SKIP;
}
if (check_env())
return TEST_SKIP;
err = __test__bpf(i);
return err;
}
#endif
static int test__basic_bpf_test(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(0);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__bpf_pinning(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(1);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__bpf_prologue_test(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_BPF_PROLOGUE) && defined(HAVE_LIBTRACEEVENT)
return test__bpf(2);
#else
pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n");
return TEST_SKIP;
#endif
}
static struct test_case bpf_tests[] = {
#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
TEST_CASE("Basic BPF filtering", basic_bpf_test),
TEST_CASE_REASON("BPF pinning", bpf_pinning,
"clang isn't installed or environment missing BPF support"),
#ifdef HAVE_BPF_PROLOGUE
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test,
"clang/debuginfo isn't installed or environment missing BPF support"),
#else
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in"),
#endif
#else
TEST_CASE_REASON("Basic BPF filtering", basic_bpf_test, "not compiled in or missing libtraceevent support"),
TEST_CASE_REASON("BPF pinning", bpf_pinning, "not compiled in or missing libtraceevent support"),
TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in or missing libtraceevent support"),
#endif
{ .name = NULL, }
};
struct test_suite suite__bpf = {
.desc = "BPF filter",
.test_cases = bpf_tests,
};


@ -92,9 +92,7 @@ static struct test_suite *generic_tests[] = {
&suite__fdarray__add,
&suite__kmod_path__parse,
&suite__thread_map,
&suite__llvm,
&suite__session_topology,
&suite__bpf,
&suite__thread_map_synthesize,
&suite__thread_map_remove,
&suite__cpu_map,


@ -1,219 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "tests.h"
#include "debug.h"
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/libbpf.h>
#include <util/llvm-utils.h>
#include "llvm.h"
static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz)
{
struct bpf_object *obj;
obj = bpf_object__open_mem(obj_buf, obj_buf_sz, NULL);
if (libbpf_get_error(obj))
return TEST_FAIL;
bpf_object__close(obj);
return TEST_OK;
}
static struct {
const char *source;
const char *desc;
bool should_load_fail;
} bpf_source_table[__LLVM_TESTCASE_MAX] = {
[LLVM_TESTCASE_BASE] = {
.source = test_llvm__bpf_base_prog,
.desc = "Basic BPF llvm compile",
},
[LLVM_TESTCASE_KBUILD] = {
.source = test_llvm__bpf_test_kbuild_prog,
.desc = "kbuild searching",
},
[LLVM_TESTCASE_BPF_PROLOGUE] = {
.source = test_llvm__bpf_test_prologue_prog,
.desc = "Compile source for BPF prologue generation",
},
[LLVM_TESTCASE_BPF_RELOCATION] = {
.source = test_llvm__bpf_test_relocation,
.desc = "Compile source for BPF relocation",
.should_load_fail = true,
},
};
int
test_llvm__fetch_bpf_obj(void **p_obj_buf,
size_t *p_obj_buf_sz,
enum test_llvm__testcase idx,
bool force,
bool *should_load_fail)
{
const char *source;
const char *desc;
const char *tmpl_old, *clang_opt_old;
char *tmpl_new = NULL, *clang_opt_new = NULL;
int err, old_verbose, ret = TEST_FAIL;
if (idx >= __LLVM_TESTCASE_MAX)
return TEST_FAIL;
source = bpf_source_table[idx].source;
desc = bpf_source_table[idx].desc;
if (should_load_fail)
*should_load_fail = bpf_source_table[idx].should_load_fail;
/*
* Skip this test if user's .perfconfig doesn't set [llvm] section
* and clang is not found in $PATH
*/
if (!force && (!llvm_param.user_set_param &&
llvm__search_clang())) {
pr_debug("No clang, skip this test\n");
return TEST_SKIP;
}
/*
* llvm is verbosity when error. Suppress all error output if
* not 'perf test -v'.
*/
old_verbose = verbose;
if (verbose == 0)
verbose = -1;
*p_obj_buf = NULL;
*p_obj_buf_sz = 0;
if (!llvm_param.clang_bpf_cmd_template)
goto out;
if (!llvm_param.clang_opt)
llvm_param.clang_opt = strdup("");
err = asprintf(&tmpl_new, "echo '%s' | %s%s", source,
llvm_param.clang_bpf_cmd_template,
old_verbose ? "" : " 2>/dev/null");
if (err < 0)
goto out;
err = asprintf(&clang_opt_new, "-xc %s", llvm_param.clang_opt);
if (err < 0)
goto out;
tmpl_old = llvm_param.clang_bpf_cmd_template;
llvm_param.clang_bpf_cmd_template = tmpl_new;
clang_opt_old = llvm_param.clang_opt;
llvm_param.clang_opt = clang_opt_new;
err = llvm__compile_bpf("-", p_obj_buf, p_obj_buf_sz);
llvm_param.clang_bpf_cmd_template = tmpl_old;
llvm_param.clang_opt = clang_opt_old;
verbose = old_verbose;
if (err)
goto out;
ret = TEST_OK;
out:
free(tmpl_new);
free(clang_opt_new);
if (ret != TEST_OK)
pr_debug("Failed to compile test case: '%s'\n", desc);
return ret;
}
static int test__llvm(int subtest)
{
int ret;
void *obj_buf = NULL;
size_t obj_buf_sz = 0;
bool should_load_fail = false;
if ((subtest < 0) || (subtest >= __LLVM_TESTCASE_MAX))
return TEST_FAIL;
ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz,
subtest, false, &should_load_fail);
if (ret == TEST_OK && !should_load_fail) {
ret = test__bpf_parsing(obj_buf, obj_buf_sz);
if (ret != TEST_OK) {
pr_debug("Failed to parse test case '%s'\n",
bpf_source_table[subtest].desc);
}
}
free(obj_buf);
return ret;
}
#endif //HAVE_LIBBPF_SUPPORT
static int test__llvm__bpf_base_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BASE);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_kbuild_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_KBUILD);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_prologue_prog(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BPF_PROLOGUE);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static int test__llvm__bpf_test_relocation(struct test_suite *test __maybe_unused,
int subtest __maybe_unused)
{
#ifdef HAVE_LIBBPF_SUPPORT
return test__llvm(LLVM_TESTCASE_BPF_RELOCATION);
#else
pr_debug("Skip LLVM test because BPF support is not compiled\n");
return TEST_SKIP;
#endif
}
static struct test_case llvm_tests[] = {
#ifdef HAVE_LIBBPF_SUPPORT
TEST_CASE("Basic BPF llvm compile", llvm__bpf_base_prog),
TEST_CASE("kbuild searching", llvm__bpf_test_kbuild_prog),
TEST_CASE("Compile source for BPF prologue generation",
llvm__bpf_test_prologue_prog),
TEST_CASE("Compile source for BPF relocation", llvm__bpf_test_relocation),
#else
TEST_CASE_REASON("Basic BPF llvm compile", llvm__bpf_base_prog, "not compiled in"),
TEST_CASE_REASON("kbuild searching", llvm__bpf_test_kbuild_prog, "not compiled in"),
TEST_CASE_REASON("Compile source for BPF prologue generation",
llvm__bpf_test_prologue_prog, "not compiled in"),
TEST_CASE_REASON("Compile source for BPF relocation",
llvm__bpf_test_relocation, "not compiled in"),
#endif
{ .name = NULL, }
};
struct test_suite suite__llvm = {
.desc = "LLVM search and compile",
.test_cases = llvm_tests,
};


@ -1,31 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PERF_TEST_LLVM_H
#define PERF_TEST_LLVM_H
#ifdef __cplusplus
extern "C" {
#endif
#include <stddef.h> /* for size_t */
#include <stdbool.h> /* for bool */
extern const char test_llvm__bpf_base_prog[];
extern const char test_llvm__bpf_test_kbuild_prog[];
extern const char test_llvm__bpf_test_prologue_prog[];
extern const char test_llvm__bpf_test_relocation[];
enum test_llvm__testcase {
LLVM_TESTCASE_BASE,
LLVM_TESTCASE_KBUILD,
LLVM_TESTCASE_BPF_PROLOGUE,
LLVM_TESTCASE_BPF_RELOCATION,
__LLVM_TESTCASE_MAX,
};
int test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz,
enum test_llvm__testcase index, bool force,
bool *should_load_fail);
#ifdef __cplusplus
}
#endif
#endif


@ -113,7 +113,6 @@ DECLARE_SUITE(fdarray__filter);
DECLARE_SUITE(fdarray__add);
DECLARE_SUITE(kmod_path__parse);
DECLARE_SUITE(thread_map);
DECLARE_SUITE(llvm);
DECLARE_SUITE(bpf);
DECLARE_SUITE(session_topology);
DECLARE_SUITE(thread_map_synthesize);
@ -129,7 +128,6 @@ DECLARE_SUITE(sdt_event);
DECLARE_SUITE(is_printable_array);
DECLARE_SUITE(bitmap_print);
DECLARE_SUITE(perf_hooks);
DECLARE_SUITE(clang);
DECLARE_SUITE(unit_number__scnprint);
DECLARE_SUITE(mem2node);
DECLARE_SUITE(maps__merge_in);


@ -23,7 +23,6 @@ perf-y += evswitch.o
perf-y += find_bit.o
perf-y += get_current_dir_name.o
perf-y += levenshtein.o
perf-y += llvm-utils.o
perf-y += mmap.o
perf-y += memswap.o
perf-y += parse-events.o
@ -150,7 +149,6 @@ perf-y += list_sort.o
perf-y += mutex.o
perf-y += sharded_mutex.o
perf-$(CONFIG_LIBBPF) += bpf-loader.o
perf-$(CONFIG_LIBBPF) += bpf_map.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter_cgroup.o
@ -168,7 +166,6 @@ ifeq ($(CONFIG_LIBTRACEEVENT),y)
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_kwork.o
endif
perf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o
perf-$(CONFIG_LIBELF) += symbol-elf.o
perf-$(CONFIG_LIBELF) += probe-file.o
perf-$(CONFIG_LIBELF) += probe-event.o
@ -235,7 +232,6 @@ perf-$(CONFIG_LIBBPF) += bpf-utils.o
perf-$(CONFIG_LIBPFM4) += pfm.o
CFLAGS_config.o += -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))"
CFLAGS_llvm-utils.o += -DLIBBPF_INCLUDE_DIR="BUILD_STR($(libbpf_include_dir_SQ))"
# avoid compiler warnings in 32-bit mode
CFLAGS_genelf_debug.o += -Wno-packed
@ -327,7 +323,7 @@ ifeq ($(BISON_LT_381),1)
bison_flags += -DYYNOMEM=YYABORT
endif
CFLAGS_parse-events-flex.o += $(flex_flags)
CFLAGS_parse-events-flex.o += $(flex_flags) -Wno-unused-label
CFLAGS_pmu-flex.o += $(flex_flags)
CFLAGS_expr-flex.o += $(flex_flags)
CFLAGS_bpf-filter-flex.o += $(flex_flags)

File diff suppressed because it is too large.


@ -1,216 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015, Huawei Inc.
*/
#ifndef __BPF_LOADER_H
#define __BPF_LOADER_H
#include <linux/compiler.h>
#include <linux/err.h>
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/libbpf.h>
enum bpf_loader_errno {
__BPF_LOADER_ERRNO__START = __LIBBPF_ERRNO__START - 100,
/* Invalid config string */
BPF_LOADER_ERRNO__CONFIG = __BPF_LOADER_ERRNO__START,
BPF_LOADER_ERRNO__GROUP, /* Invalid group name */
BPF_LOADER_ERRNO__EVENTNAME, /* Event name is missing */
BPF_LOADER_ERRNO__INTERNAL, /* BPF loader internal error */
BPF_LOADER_ERRNO__COMPILE, /* Error when compiling BPF scriptlet */
BPF_LOADER_ERRNO__PROGCONF_TERM,/* Invalid program config term in config string */
BPF_LOADER_ERRNO__PROLOGUE, /* Failed to generate prologue */
BPF_LOADER_ERRNO__PROLOGUE2BIG, /* Prologue too big for program */
BPF_LOADER_ERRNO__PROLOGUEOOB, /* Offset out of bound for prologue */
BPF_LOADER_ERRNO__OBJCONF_OPT, /* Invalid object config option */
BPF_LOADER_ERRNO__OBJCONF_CONF, /* Config value not set (lost '=')) */
BPF_LOADER_ERRNO__OBJCONF_MAP_OPT, /* Invalid object map config option */
BPF_LOADER_ERRNO__OBJCONF_MAP_NOTEXIST, /* Target map not exist */
BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE, /* Incorrect value type for map */
BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE, /* Incorrect map type */
BPF_LOADER_ERRNO__OBJCONF_MAP_KEYSIZE, /* Incorrect map key size */
BPF_LOADER_ERRNO__OBJCONF_MAP_VALUESIZE,/* Incorrect map value size */
BPF_LOADER_ERRNO__OBJCONF_MAP_NOEVT, /* Event not found for map setting */
BPF_LOADER_ERRNO__OBJCONF_MAP_MAPSIZE, /* Invalid map size for event setting */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTDIM, /* Event dimension too large */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTINH, /* Doesn't support inherit event */
BPF_LOADER_ERRNO__OBJCONF_MAP_EVTTYPE, /* Wrong event type for map */
BPF_LOADER_ERRNO__OBJCONF_MAP_IDX2BIG, /* Index too large */
__BPF_LOADER_ERRNO__END,
};
#endif // HAVE_LIBBPF_SUPPORT
struct evsel;
struct evlist;
struct bpf_object;
struct parse_events_term;
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
typedef int (*bpf_prog_iter_callback_t)(const char *group, const char *event,
int fd, struct bpf_object *obj, void *arg);
#ifdef HAVE_LIBBPF_SUPPORT
struct bpf_object *bpf__prepare_load(const char *filename, bool source);
int bpf__strerror_prepare_load(const char *filename, bool source,
int err, char *buf, size_t size);
struct bpf_object *bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz,
const char *name);
void bpf__clear(void);
int bpf__probe(struct bpf_object *obj);
int bpf__unprobe(struct bpf_object *obj);
int bpf__strerror_probe(struct bpf_object *obj, int err,
char *buf, size_t size);
int bpf__load(struct bpf_object *obj);
int bpf__strerror_load(struct bpf_object *obj, int err,
char *buf, size_t size);
int bpf__foreach_event(struct bpf_object *obj,
bpf_prog_iter_callback_t func, void *arg);
int bpf__config_obj(struct bpf_object *obj, struct parse_events_term *term,
struct evlist *evlist, int *error_pos);
int bpf__strerror_config_obj(struct bpf_object *obj,
struct parse_events_term *term,
struct evlist *evlist,
int *error_pos, int err, char *buf,
size_t size);
int bpf__apply_obj_config(void);
int bpf__strerror_apply_obj_config(int err, char *buf, size_t size);
int bpf__setup_stdout(struct evlist *evlist);
struct evsel *bpf__setup_output_event(struct evlist *evlist, const char *name);
int bpf__strerror_setup_output_event(struct evlist *evlist, int err, char *buf, size_t size);
#else
#include <errno.h>
#include <string.h>
#include "debug.h"
static inline struct bpf_object *
bpf__prepare_load(const char *filename __maybe_unused,
bool source __maybe_unused)
{
pr_debug("ERROR: eBPF object loading is disabled during compiling.\n");
return ERR_PTR(-ENOTSUP);
}
static inline struct bpf_object *
bpf__prepare_load_buffer(void *obj_buf __maybe_unused,
size_t obj_buf_sz __maybe_unused)
{
return ERR_PTR(-ENOTSUP);
}
static inline void bpf__clear(void) { }
static inline int bpf__probe(struct bpf_object *obj __maybe_unused) { return 0;}
static inline int bpf__unprobe(struct bpf_object *obj __maybe_unused) { return 0;}
static inline int bpf__load(struct bpf_object *obj __maybe_unused) { return 0; }
static inline int
bpf__foreach_event(struct bpf_object *obj __maybe_unused,
bpf_prog_iter_callback_t func __maybe_unused,
void *arg __maybe_unused)
{
return 0;
}
static inline int
bpf__config_obj(struct bpf_object *obj __maybe_unused,
struct parse_events_term *term __maybe_unused,
struct evlist *evlist __maybe_unused,
int *error_pos __maybe_unused)
{
return 0;
}
static inline int
bpf__apply_obj_config(void)
{
return 0;
}
static inline int
bpf__setup_stdout(struct evlist *evlist __maybe_unused)
{
return 0;
}
static inline struct evsel *
bpf__setup_output_event(struct evlist *evlist __maybe_unused, const char *name __maybe_unused)
{
return NULL;
}
static inline int
__bpf_strerror(char *buf, size_t size)
{
if (!size)
return 0;
strncpy(buf,
"ERROR: eBPF object loading is disabled during compiling.\n",
size);
buf[size - 1] = '\0';
return 0;
}
static inline
int bpf__strerror_prepare_load(const char *filename __maybe_unused,
bool source __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_probe(struct bpf_object *obj __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int bpf__strerror_load(struct bpf_object *obj __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_config_obj(struct bpf_object *obj __maybe_unused,
struct parse_events_term *term __maybe_unused,
struct evlist *evlist __maybe_unused,
int *error_pos __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_apply_obj_config(int err __maybe_unused,
char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
static inline int
bpf__strerror_setup_output_event(struct evlist *evlist __maybe_unused,
int err __maybe_unused, char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
#endif
static inline int bpf__strerror_setup_stdout(struct evlist *evlist, int err, char *buf, size_t size)
{
return bpf__strerror_setup_output_event(evlist, err, buf, size);
}
#endif


@ -16,7 +16,6 @@
#include <subcmd/exec-cmd.h>
#include "util/event.h" /* proc_map_timeout */
#include "util/hist.h" /* perf_hist_config */
#include "util/llvm-utils.h" /* perf_llvm_config */
#include "util/stat.h" /* perf_stat__set_big_num */
#include "util/evsel.h" /* evsel__hw_names, evsel__use_bpf_counters */
#include "util/srcline.h" /* addr2line_timeout_ms */
@ -486,9 +485,6 @@ int perf_default_config(const char *var, const char *value,
if (strstarts(var, "call-graph."))
return perf_callchain_config(var, value);
if (strstarts(var, "llvm."))
return perf_llvm_config(var, value);
if (strstarts(var, "buildid."))
return perf_buildid_config(var, value);


@ -1,612 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015, Huawei Inc.
*/
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/err.h>
#include <linux/string.h>
#include <linux/zalloc.h>
#include "debug.h"
#include "llvm-utils.h"
#include "config.h"
#include "util.h"
#include <sys/wait.h>
#include <subcmd/exec-cmd.h>
#define CLANG_BPF_CMD_DEFAULT_TEMPLATE \
"$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\
"-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \
"$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \
"-Wno-unused-value -Wno-pointer-sign " \
"-working-directory $WORKING_DIR " \
"-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -g -O2 -o - $LLVM_OPTIONS_PIPE"
struct llvm_param llvm_param = {
.clang_path = "clang",
.llc_path = "llc",
.clang_bpf_cmd_template = CLANG_BPF_CMD_DEFAULT_TEMPLATE,
.clang_opt = NULL,
.opts = NULL,
.kbuild_dir = NULL,
.kbuild_opts = NULL,
.user_set_param = false,
};
static void version_notice(void);
int perf_llvm_config(const char *var, const char *value)
{
if (!strstarts(var, "llvm."))
return 0;
var += sizeof("llvm.") - 1;
if (!strcmp(var, "clang-path"))
llvm_param.clang_path = strdup(value);
else if (!strcmp(var, "clang-bpf-cmd-template"))
llvm_param.clang_bpf_cmd_template = strdup(value);
else if (!strcmp(var, "clang-opt"))
llvm_param.clang_opt = strdup(value);
else if (!strcmp(var, "kbuild-dir"))
llvm_param.kbuild_dir = strdup(value);
else if (!strcmp(var, "kbuild-opts"))
llvm_param.kbuild_opts = strdup(value);
else if (!strcmp(var, "dump-obj"))
llvm_param.dump_obj = !!perf_config_bool(var, value);
else if (!strcmp(var, "opts"))
llvm_param.opts = strdup(value);
else {
pr_debug("Invalid LLVM config option: %s\n", value);
return -1;
}
llvm_param.user_set_param = true;
return 0;
}
static int
search_program(const char *def, const char *name,
char *output)
{
char *env, *path, *tmp = NULL;
char buf[PATH_MAX];
int ret;
output[0] = '\0';
if (def && def[0] != '\0') {
if (def[0] == '/') {
if (access(def, F_OK) == 0) {
strlcpy(output, def, PATH_MAX);
return 0;
}
} else if (def[0] != '\0')
name = def;
}
env = getenv("PATH");
if (!env)
return -1;
env = strdup(env);
if (!env)
return -1;
ret = -ENOENT;
path = strtok_r(env, ":", &tmp);
while (path) {
scnprintf(buf, sizeof(buf), "%s/%s", path, name);
if (access(buf, F_OK) == 0) {
strlcpy(output, buf, PATH_MAX);
ret = 0;
break;
}
path = strtok_r(NULL, ":", &tmp);
}
free(env);
return ret;
}
static int search_program_and_warn(const char *def, const char *name,
char *output)
{
int ret = search_program(def, name, output);
if (ret) {
pr_err("ERROR:\tunable to find %s.\n"
"Hint:\tTry to install latest clang/llvm to support BPF. Check your $PATH\n"
" \tand '%s-path' option in [llvm] section of ~/.perfconfig.\n",
name, name);
version_notice();
}
return ret;
}
#define READ_SIZE 4096
static int
read_from_pipe(const char *cmd, void **p_buf, size_t *p_read_sz)
{
int err = 0;
void *buf = NULL;
FILE *file = NULL;
size_t read_sz = 0, buf_sz = 0;
char serr[STRERR_BUFSIZE];
file = popen(cmd, "r");
if (!file) {
pr_err("ERROR: unable to popen cmd: %s\n",
str_error_r(errno, serr, sizeof(serr)));
return -EINVAL;
}
while (!feof(file) && !ferror(file)) {
/*
* Make buf_sz always have obe byte extra space so we
* can put '\0' there.
*/
if (buf_sz - read_sz < READ_SIZE + 1) {
void *new_buf;
buf_sz = read_sz + READ_SIZE + 1;
new_buf = realloc(buf, buf_sz);
if (!new_buf) {
pr_err("ERROR: failed to realloc memory\n");
err = -ENOMEM;
goto errout;
}
buf = new_buf;
}
read_sz += fread(buf + read_sz, 1, READ_SIZE, file);
}
if (buf_sz - read_sz < 1) {
pr_err("ERROR: internal error\n");
err = -EINVAL;
goto errout;
}
if (ferror(file)) {
pr_err("ERROR: error occurred when reading from pipe: %s\n",
str_error_r(errno, serr, sizeof(serr)));
err = -EIO;
goto errout;
}
err = WEXITSTATUS(pclose(file));
file = NULL;
if (err) {
err = -EINVAL;
goto errout;
}
/*
* If buf is string, give it terminal '\0' to make our life
* easier. If buf is not string, that '\0' is out of space
* indicated by read_sz so caller won't even notice it.
*/
((char *)buf)[read_sz] = '\0';
if (!p_buf)
free(buf);
else
*p_buf = buf;
if (p_read_sz)
*p_read_sz = read_sz;
return 0;
errout:
if (file)
pclose(file);
free(buf);
if (p_buf)
*p_buf = NULL;
if (p_read_sz)
*p_read_sz = 0;
return err;
}
static inline void
force_set_env(const char *var, const char *value)
{
if (value) {
setenv(var, value, 1);
pr_debug("set env: %s=%s\n", var, value);
} else {
unsetenv(var);
pr_debug("unset env: %s\n", var);
}
}
static void
version_notice(void)
{
pr_err(
" \tLLVM 3.7 or newer is required. Which can be found from http://llvm.org\n"
" \tYou may want to try git trunk:\n"
" \t\tgit clone http://llvm.org/git/llvm.git\n"
" \t\t and\n"
" \t\tgit clone http://llvm.org/git/clang.git\n\n"
" \tOr fetch the latest clang/llvm 3.7 from pre-built llvm packages for\n"
" \tdebian/ubuntu:\n"
" \t\thttps://apt.llvm.org/\n\n"
" \tIf you are using old version of clang, change 'clang-bpf-cmd-template'\n"
" \toption in [llvm] section of ~/.perfconfig to:\n\n"
" \t \"$CLANG_EXEC $CLANG_OPTIONS $KERNEL_INC_OPTIONS $PERF_BPF_INC_OPTIONS \\\n"
" \t -working-directory $WORKING_DIR -c $CLANG_SOURCE \\\n"
" \t -emit-llvm -o - | /path/to/llc -march=bpf -filetype=obj -o -\"\n"
" \t(Replace /path/to/llc with path to your llc)\n\n"
);
}
static int detect_kbuild_dir(char **kbuild_dir)
{
const char *test_dir = llvm_param.kbuild_dir;
const char *prefix_dir = "";
const char *suffix_dir = "";
/* _UTSNAME_LENGTH is 65 */
char release[128];
char *autoconf_path;
int err;
if (!test_dir) {
err = fetch_kernel_version(NULL, release,
sizeof(release));
if (err)
return -EINVAL;
test_dir = release;
prefix_dir = "/lib/modules/";
suffix_dir = "/build";
}
err = asprintf(&autoconf_path, "%s%s%s/include/generated/autoconf.h",
prefix_dir, test_dir, suffix_dir);
if (err < 0)
return -ENOMEM;
if (access(autoconf_path, R_OK) == 0) {
free(autoconf_path);
err = asprintf(kbuild_dir, "%s%s%s", prefix_dir, test_dir,
suffix_dir);
if (err < 0)
return -ENOMEM;
return 0;
}
pr_debug("%s: Couldn't find \"%s\", missing kernel-devel package?.\n",
__func__, autoconf_path);
free(autoconf_path);
return -ENOENT;
}
static const char *kinc_fetch_script =
"#!/usr/bin/env sh\n"
"if ! test -d \"$KBUILD_DIR\"\n"
"then\n"
" exit 1\n"
"fi\n"
"if ! test -f \"$KBUILD_DIR/include/generated/autoconf.h\"\n"
"then\n"
" exit 1\n"
"fi\n"
"TMPDIR=`mktemp -d`\n"
"if test -z \"$TMPDIR\"\n"
"then\n"
" exit 1\n"
"fi\n"
"cat << EOF > $TMPDIR/Makefile\n"
"obj-y := dummy.o\n"
"\\$(obj)/%.o: \\$(src)/%.c\n"
"\t@echo -n \"\\$(NOSTDINC_FLAGS) \\$(LINUXINCLUDE) \\$(EXTRA_CFLAGS)\"\n"
"\t\\$(CC) -c -o \\$@ \\$<\n"
"EOF\n"
"touch $TMPDIR/dummy.c\n"
"make -s -C $KBUILD_DIR M=$TMPDIR $KBUILD_OPTS dummy.o 2>/dev/null\n"
"RET=$?\n"
"rm -rf $TMPDIR\n"
"exit $RET\n";
void llvm__get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts)
{
static char *saved_kbuild_dir;
static char *saved_kbuild_include_opts;
int err;
if (!kbuild_dir || !kbuild_include_opts)
return;
*kbuild_dir = NULL;
*kbuild_include_opts = NULL;
if (saved_kbuild_dir && saved_kbuild_include_opts &&
!IS_ERR(saved_kbuild_dir) && !IS_ERR(saved_kbuild_include_opts)) {
*kbuild_dir = strdup(saved_kbuild_dir);
*kbuild_include_opts = strdup(saved_kbuild_include_opts);
if (*kbuild_dir && *kbuild_include_opts)
return;
zfree(kbuild_dir);
zfree(kbuild_include_opts);
/*
* Don't fall through: it may breaks saved_kbuild_dir and
* saved_kbuild_include_opts if detect them again when
* memory is low.
*/
return;
}
if (llvm_param.kbuild_dir && !llvm_param.kbuild_dir[0]) {
pr_debug("[llvm.kbuild-dir] is set to \"\" deliberately.\n");
pr_debug("Skip kbuild options detection.\n");
goto errout;
}
err = detect_kbuild_dir(kbuild_dir);
if (err) {
pr_warning(
"WARNING:\tunable to get correct kernel building directory.\n"
"Hint:\tSet correct kbuild directory using 'kbuild-dir' option in [llvm]\n"
" \tsection of ~/.perfconfig or set it to \"\" to suppress kbuild\n"
" \tdetection.\n\n");
goto errout;
}
pr_debug("Kernel build dir is set to %s\n", *kbuild_dir);
force_set_env("KBUILD_DIR", *kbuild_dir);
force_set_env("KBUILD_OPTS", llvm_param.kbuild_opts);
err = read_from_pipe(kinc_fetch_script,
(void **)kbuild_include_opts,
NULL);
if (err) {
pr_warning(
"WARNING:\tunable to get kernel include directories from '%s'\n"
"Hint:\tTry set clang include options using 'clang-bpf-cmd-template'\n"
" \toption in [llvm] section of ~/.perfconfig and set 'kbuild-dir'\n"
" \toption in [llvm] to \"\" to suppress this detection.\n\n",
*kbuild_dir);
zfree(kbuild_dir);
goto errout;
}
pr_debug("include option is set to %s\n", *kbuild_include_opts);
saved_kbuild_dir = strdup(*kbuild_dir);
saved_kbuild_include_opts = strdup(*kbuild_include_opts);
if (!saved_kbuild_dir || !saved_kbuild_include_opts) {
zfree(&saved_kbuild_dir);
zfree(&saved_kbuild_include_opts);
}
return;
errout:
saved_kbuild_dir = ERR_PTR(-EINVAL);
saved_kbuild_include_opts = ERR_PTR(-EINVAL);
}
int llvm__get_nr_cpus(void)
{
static int nr_cpus_avail = 0;
char serr[STRERR_BUFSIZE];
if (nr_cpus_avail > 0)
return nr_cpus_avail;
nr_cpus_avail = sysconf(_SC_NPROCESSORS_CONF);
if (nr_cpus_avail <= 0) {
pr_err(
"WARNING:\tunable to get available CPUs in this system: %s\n"
" \tUse 128 instead.\n", str_error_r(errno, serr, sizeof(serr)));
nr_cpus_avail = 128;
}
return nr_cpus_avail;
}
void llvm__dump_obj(const char *path, void *obj_buf, size_t size)
{
char *obj_path = strdup(path);
FILE *fp;
char *p;
if (!obj_path) {
pr_warning("WARNING: Not enough memory, skip object dumping\n");
return;
}
p = strrchr(obj_path, '.');
if (!p || (strcmp(p, ".c") != 0)) {
pr_warning("WARNING: invalid llvm source path: '%s', skip object dumping\n",
obj_path);
goto out;
}
p[1] = 'o';
fp = fopen(obj_path, "wb");
if (!fp) {
pr_warning("WARNING: failed to open '%s': %s, skip object dumping\n",
obj_path, strerror(errno));
goto out;
}
pr_debug("LLVM: dumping %s\n", obj_path);
if (fwrite(obj_buf, size, 1, fp) != 1)
pr_debug("WARNING: failed to write to file '%s': %s, skip object dumping\n", obj_path, strerror(errno));
fclose(fp);
out:
free(obj_path);
}
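The p[1] = 'o' above is a little cryptic: it works only because ".c" and ".o" have the same length, so the suffix can be rewritten in place. A standalone restatement of that path rewrite, using a hypothetical obj_dump_path() helper that is not part of perf, would be:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Derive the dump path the same way llvm__dump_obj() does: foo.c -> foo.o. */
static char *obj_dump_path(const char *src)
{
	char *path = strdup(src);
	char *dot;

	if (!path)
		return NULL;
	dot = strrchr(path, '.');
	if (!dot || strcmp(dot, ".c")) {	/* only .c sources are rewritten */
		free(path);
		return NULL;
	}
	dot[1] = 'o';				/* same length, so rewrite in place */
	return path;
}

int main(void)
{
	char *p = obj_dump_path("examples/bpf/augmented_raw_syscalls.c");

	printf("%s\n", p ? p : "(not a .c path)");
	free(p);
	return 0;
}
```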
int llvm__compile_bpf(const char *path, void **p_obj_buf,
size_t *p_obj_buf_sz)
{
size_t obj_buf_sz;
void *obj_buf = NULL;
int err, nr_cpus_avail;
unsigned int kernel_version;
char linux_version_code_str[64];
const char *clang_opt = llvm_param.clang_opt;
char clang_path[PATH_MAX], llc_path[PATH_MAX], abspath[PATH_MAX], nr_cpus_avail_str[64];
char serr[STRERR_BUFSIZE];
char *kbuild_dir = NULL, *kbuild_include_opts = NULL,
*perf_bpf_include_opts = NULL;
const char *template = llvm_param.clang_bpf_cmd_template;
char *pipe_template = NULL;
const char *opts = llvm_param.opts;
char *command_echo = NULL, *command_out;
char *libbpf_include_dir = system_path(LIBBPF_INCLUDE_DIR);
if (path[0] != '-' && realpath(path, abspath) == NULL) {
err = errno;
pr_err("ERROR: problems with path %s: %s\n",
path, str_error_r(err, serr, sizeof(serr)));
return -err;
}
if (!template)
template = CLANG_BPF_CMD_DEFAULT_TEMPLATE;
err = search_program_and_warn(llvm_param.clang_path,
"clang", clang_path);
if (err)
return -ENOENT;
/*
* This step is optional: even if it fails we can carry on, so
* there is no need to check the return value.
*/
llvm__get_kbuild_opts(&kbuild_dir, &kbuild_include_opts);
nr_cpus_avail = llvm__get_nr_cpus();
snprintf(nr_cpus_avail_str, sizeof(nr_cpus_avail_str), "%d",
nr_cpus_avail);
if (fetch_kernel_version(&kernel_version, NULL, 0))
kernel_version = 0;
snprintf(linux_version_code_str, sizeof(linux_version_code_str),
"0x%x", kernel_version);
if (asprintf(&perf_bpf_include_opts, "-I%s/", libbpf_include_dir) < 0)
goto errout;
force_set_env("NR_CPUS", nr_cpus_avail_str);
force_set_env("LINUX_VERSION_CODE", linux_version_code_str);
force_set_env("CLANG_EXEC", clang_path);
force_set_env("CLANG_OPTIONS", clang_opt);
force_set_env("KERNEL_INC_OPTIONS", kbuild_include_opts);
force_set_env("PERF_BPF_INC_OPTIONS", perf_bpf_include_opts);
force_set_env("WORKING_DIR", kbuild_dir ? : ".");
if (opts) {
err = search_program_and_warn(llvm_param.llc_path, "llc", llc_path);
if (err)
goto errout;
err = -ENOMEM;
if (asprintf(&pipe_template, "%s -emit-llvm | %s -march=bpf %s -filetype=obj -o -",
template, llc_path, opts) < 0) {
pr_err("ERROR:\tnot enough memory to setup command line\n");
goto errout;
}
template = pipe_template;
}
/*
* Since we may reset clang's working directory, the source file path
* should be converted to an absolute path, except when stdin is used
* as the source (for testing).
*/
force_set_env("CLANG_SOURCE",
(path[0] == '-') ? path : abspath);
pr_debug("llvm compiling command template: %s\n", template);
/*
* Below, substitute control characters for values that can cause the
* echo to misbehave, then substitute the values back.
*/
err = -ENOMEM;
if (asprintf(&command_echo, "echo -n \a%s\a", template) < 0)
goto errout;
#define SWAP_CHAR(a, b) do { if (*p == a) *p = b; } while (0)
for (char *p = command_echo; *p; p++) {
SWAP_CHAR('<', '\001');
SWAP_CHAR('>', '\002');
SWAP_CHAR('"', '\003');
SWAP_CHAR('\'', '\004');
SWAP_CHAR('|', '\005');
SWAP_CHAR('&', '\006');
SWAP_CHAR('\a', '"');
}
err = read_from_pipe(command_echo, (void **) &command_out, NULL);
if (err)
goto errout;
for (char *p = command_out; *p; p++) {
SWAP_CHAR('\001', '<');
SWAP_CHAR('\002', '>');
SWAP_CHAR('\003', '"');
SWAP_CHAR('\004', '\'');
SWAP_CHAR('\005', '|');
SWAP_CHAR('\006', '&');
}
#undef SWAP_CHAR
pr_debug("llvm compiling command : %s\n", command_out);
err = read_from_pipe(template, &obj_buf, &obj_buf_sz);
if (err) {
pr_err("ERROR:\tunable to compile %s\n", path);
pr_err("Hint:\tCheck error message shown above.\n");
pr_err("Hint:\tYou can also pre-compile it into .o using:\n");
pr_err(" \t\tclang --target=bpf -O2 -c %s\n", path);
pr_err(" \twith proper -I and -D options.\n");
goto errout;
}
free(command_echo);
free(command_out);
free(kbuild_dir);
free(kbuild_include_opts);
free(perf_bpf_include_opts);
free(libbpf_include_dir);
if (!p_obj_buf)
free(obj_buf);
else
*p_obj_buf = obj_buf;
if (p_obj_buf_sz)
*p_obj_buf_sz = obj_buf_sz;
return 0;
errout:
free(command_echo);
free(kbuild_dir);
free(kbuild_include_opts);
free(obj_buf);
free(perf_bpf_include_opts);
free(libbpf_include_dir);
free(pipe_template);
if (p_obj_buf)
*p_obj_buf = NULL;
if (p_obj_buf_sz)
*p_obj_buf_sz = 0;
return err;
}
int llvm__search_clang(void)
{
char clang_path[PATH_MAX];
return search_program_and_warn(llvm_param.clang_path, "clang", clang_path);
}
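The compilation pipeline above hinges on environment-variable substitution: llvm__compile_bpf() exports NR_CPUS, CLANG_EXEC, CLANG_SOURCE and friends, lets the shell expand them inside the (configurable) command template, and reads the resulting object from a pipe. A rough standalone sketch of that export-then-pipe mechanism — with simplified stand-ins for perf's force_set_env() and read_from_pipe(), and a harmless echo template instead of a real clang invocation — might look like:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the clang command template; the shell expands $CLANG_EXEC etc. */
static const char *cmd_template =
	"echo \"$CLANG_EXEC $CLANG_OPTIONS -target bpf -c $CLANG_SOURCE -o -\"";

/* Roughly what force_set_env() does: overwrite, or drop when value is NULL. */
static void set_env(const char *name, const char *value)
{
	if (value)
		setenv(name, value, 1);
	else
		unsetenv(name);
}

/* Run 'cmd' through the shell and return its output in a malloc()ed buffer. */
static char *run_and_read(const char *cmd, size_t *sizep)
{
	char chunk[4096];
	char *buf = NULL;
	size_t size = 0, n;
	FILE *fp = popen(cmd, "r");

	if (!fp)
		return NULL;
	while ((n = fread(chunk, 1, sizeof(chunk), fp)) > 0) {
		char *tmp = realloc(buf, size + n + 1);

		if (!tmp)
			break;
		buf = tmp;
		memcpy(buf + size, chunk, n);
		size += n;
		buf[size] = '\0';
	}
	pclose(fp);
	if (sizep)
		*sizep = size;
	return buf;
}

int main(void)
{
	size_t size = 0;
	char *out;

	set_env("CLANG_EXEC", "/usr/bin/clang");	/* hypothetical paths/flags */
	set_env("CLANG_OPTIONS", "-O2");
	set_env("CLANG_SOURCE", "prog.c");

	out = run_and_read(cmd_template, &size);
	printf("expanded command (%zu bytes): %s", size, out ? out : "(none)\n");
	free(out);
	return 0;
}
```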


@@ -1,69 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015, Huawei Inc.
*/
#ifndef __LLVM_UTILS_H
#define __LLVM_UTILS_H
#include <stdbool.h>
struct llvm_param {
/* Path of clang executable */
const char *clang_path;
/* Path of llc executable */
const char *llc_path;
/*
* Template of the clang BPF compilation command. 5 environment
* variables can be used:
* $CLANG_EXEC: Path to clang.
* $CLANG_OPTIONS: Extra options to clang.
* $KERNEL_INC_OPTIONS: Kernel include directories.
* $WORKING_DIR: Kernel source directory.
* $CLANG_SOURCE: Source file to be compiled.
*/
const char *clang_bpf_cmd_template;
/* Value used to fill in $CLANG_OPTIONS */
const char *clang_opt;
/*
* If present, -emit-llvm is added to $CLANG_OPTIONS so that the clang
* output is piped to llc; useful for new llvm options not yet
* selectable via 'clang -mllvm option', such as -mattr=dwarfris
* in clang 6.0/llvm 7.
*/
const char *opts;
/* Where to find kbuild system */
const char *kbuild_dir;
/*
* Arguments passed to make, like 'ARCH=arm' if doing cross
* compiling. Should not be used for dynamic compiling.
*/
const char *kbuild_opts;
/*
* Default is false. If set to true, write the compilation result
* to an object file.
*/
bool dump_obj;
/*
* Default is false. If one of the above fields is set explicitly by
* the user then user_set_param is set to true. This is used by
* 'perf test': if the user hasn't set anything in .perfconfig and
* clang is not found, don't trigger the llvm test.
*/
bool user_set_param;
};
extern struct llvm_param llvm_param;
int perf_llvm_config(const char *var, const char *value);
int llvm__compile_bpf(const char *path, void **p_obj_buf, size_t *p_obj_buf_sz);
/* This function is for test__llvm() use only */
int llvm__search_clang(void);
/* Following functions are reused by builtin clang support */
void llvm__get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts);
int llvm__get_nr_cpus(void);
void llvm__dump_obj(const char *path, void *obj_buf, size_t size);
#endif
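For context, the removed bpf-loader.c was essentially the only consumer of this header: it compiled the .c file to an in-memory object with llvm__compile_bpf() and handed the buffer to libbpf. A hedged sketch of that call sequence — error handling simplified, linking against the code above plus -lbpf assumed, and libbpf's bpf_object__open_mem() used in place of whatever entry point a given libbpf version provided — could be:

```c
#include <bpf/libbpf.h>
#include <stdio.h>
#include <stdlib.h>

/* Declaration from the (removed) llvm-utils.h above. */
int llvm__compile_bpf(const char *path, void **p_obj_buf, size_t *p_obj_buf_sz);

int main(int argc, char **argv)
{
	struct bpf_object *obj;
	void *obj_buf = NULL;
	size_t obj_buf_sz = 0;

	if (argc < 2)
		return 1;

	/* Compile foo.c into an in-memory BPF object. */
	if (llvm__compile_bpf(argv[1], &obj_buf, &obj_buf_sz)) {
		fprintf(stderr, "couldn't compile %s\n", argv[1]);
		return 1;
	}

	/* libbpf 1.0+: returns NULL and sets errno on failure. */
	obj = bpf_object__open_mem(obj_buf, obj_buf_sz, /*opts=*/NULL);
	if (!obj) {
		fprintf(stderr, "couldn't open BPF object built from %s\n", argv[1]);
		free(obj_buf);
		return 1;
	}

	/* ... bpf_object__load(obj), attach programs, create events ... */

	bpf_object__close(obj);
	free(obj_buf);	/* keep the buffer alive until the object is closed (conservative) */
	return 0;
}
```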


@@ -14,7 +14,6 @@
#include "parse-events.h"
#include "string2.h"
#include "strlist.h"
#include "bpf-loader.h"
#include "debug.h"
#include <api/fs/tracing_path.h>
#include <perf/cpumap.h>
@@ -648,272 +647,6 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx,
}
#endif /* HAVE_LIBTRACEEVENT */
#ifdef HAVE_LIBBPF_SUPPORT
struct __add_bpf_event_param {
struct parse_events_state *parse_state;
struct list_head *list;
struct list_head *head_config;
YYLTYPE *loc;
};
static int add_bpf_event(const char *group, const char *event, int fd, struct bpf_object *obj,
void *_param)
{
LIST_HEAD(new_evsels);
struct __add_bpf_event_param *param = _param;
struct parse_events_state *parse_state = param->parse_state;
struct list_head *list = param->list;
struct evsel *pos;
int err;
/*
* Check whether we should add the event: if it is a tracepoint whose group
* starts with '!', don't add it; such entries are used for something else,
* like populating a BPF_MAP_TYPE_PROG_ARRAY.
*
* See tools/perf/examples/bpf/augmented_raw_syscalls.c
*/
if (group[0] == '!')
return 0;
pr_debug("add bpf event %s:%s and attach bpf program %d\n",
group, event, fd);
err = parse_events_add_tracepoint(&new_evsels, &parse_state->idx, group,
event, parse_state->error,
param->head_config, param->loc);
if (err) {
struct evsel *evsel, *tmp;
pr_debug("Failed to add BPF event %s:%s\n",
group, event);
list_for_each_entry_safe(evsel, tmp, &new_evsels, core.node) {
list_del_init(&evsel->core.node);
evsel__delete(evsel);
}
return err;
}
pr_debug("adding %s:%s\n", group, event);
list_for_each_entry(pos, &new_evsels, core.node) {
pr_debug("adding %s:%s to %p\n",
group, event, pos);
pos->bpf_fd = fd;
pos->bpf_obj = obj;
}
list_splice(&new_evsels, list);
return 0;
}
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
struct list_head *list,
struct bpf_object *obj,
struct list_head *head_config,
void *loc)
{
int err;
char errbuf[BUFSIZ];
struct __add_bpf_event_param param = {parse_state, list, head_config, loc};
static bool registered_unprobe_atexit = false;
YYLTYPE test_loc = {.first_column = -1};
if (IS_ERR(obj) || !obj) {
snprintf(errbuf, sizeof(errbuf),
"Internal error: load bpf obj with NULL");
err = -EINVAL;
goto errout;
}
/*
* Register the atexit handler before calling bpf__probe() so that,
* on failure, bpf__probe() doesn't need to remove the probe points
* it has already created.
*/
if (!registered_unprobe_atexit) {
atexit(bpf__clear);
registered_unprobe_atexit = true;
}
err = bpf__probe(obj);
if (err) {
bpf__strerror_probe(obj, err, errbuf, sizeof(errbuf));
goto errout;
}
err = bpf__load(obj);
if (err) {
bpf__strerror_load(obj, err, errbuf, sizeof(errbuf));
goto errout;
}
if (!param.loc)
param.loc = &test_loc;
err = bpf__foreach_event(obj, add_bpf_event, &param);
if (err) {
snprintf(errbuf, sizeof(errbuf),
"Attach events in BPF object failed");
goto errout;
}
return 0;
errout:
parse_events_error__handle(parse_state->error, param.loc ? param.loc->first_column : 0,
strdup(errbuf), strdup("(add -v to see detail)"));
return err;
}
static int
parse_events_config_bpf(struct parse_events_state *parse_state,
struct bpf_object *obj,
struct list_head *head_config)
{
struct parse_events_term *term;
int error_pos = 0;
if (!head_config || list_empty(head_config))
return 0;
list_for_each_entry(term, head_config, list) {
int err;
if (term->type_term != PARSE_EVENTS__TERM_TYPE_USER) {
parse_events_error__handle(parse_state->error, term->err_term,
strdup("Invalid config term for BPF object"),
NULL);
return -EINVAL;
}
err = bpf__config_obj(obj, term, parse_state->evlist, &error_pos);
if (err) {
char errbuf[BUFSIZ];
int idx;
bpf__strerror_config_obj(obj, term, parse_state->evlist,
&error_pos, err, errbuf,
sizeof(errbuf));
if (err == -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE)
idx = term->err_val;
else
idx = term->err_term + error_pos;
parse_events_error__handle(parse_state->error, idx,
strdup(errbuf),
NULL);
return err;
}
}
return 0;
}
/*
* Split config terms:
* perf record -e bpf.c/call-graph=fp,map:array.value[0]=1/ ...
* 'call-graph=fp' is 'evt config', which should be applied to each
* event in bpf.c.
* 'map:array.value[0]=1' is 'obj config', which should be processed
* by parse_events_config_bpf().
*
* Move object config terms from the first list to obj_head_config.
*/
static void
split_bpf_config_terms(struct list_head *evt_head_config,
struct list_head *obj_head_config)
{
struct parse_events_term *term, *temp;
/*
* Currently, all possible user config terms
* belong to the bpf object. parse_events__is_hardcoded_term()
* happens to be a good discriminator.
*
* See parse_events_config_bpf() and
* config_term_tracepoint().
*/
list_for_each_entry_safe(term, temp, evt_head_config, list)
if (!parse_events__is_hardcoded_term(term))
list_move_tail(&term->list, obj_head_config);
}
int parse_events_load_bpf(struct parse_events_state *parse_state,
struct list_head *list,
char *bpf_file_name,
bool source,
struct list_head *head_config,
void *loc_)
{
int err;
struct bpf_object *obj;
LIST_HEAD(obj_head_config);
YYLTYPE *loc = loc_;
if (head_config)
split_bpf_config_terms(head_config, &obj_head_config);
obj = bpf__prepare_load(bpf_file_name, source);
if (IS_ERR(obj)) {
char errbuf[BUFSIZ];
err = PTR_ERR(obj);
if (err == -ENOTSUP)
snprintf(errbuf, sizeof(errbuf),
"BPF support is not compiled");
else
bpf__strerror_prepare_load(bpf_file_name,
source,
-err, errbuf,
sizeof(errbuf));
parse_events_error__handle(parse_state->error, loc->first_column,
strdup(errbuf), strdup("(add -v to see detail)"));
return err;
}
err = parse_events_load_bpf_obj(parse_state, list, obj, head_config, loc);
if (err)
return err;
err = parse_events_config_bpf(parse_state, obj, &obj_head_config);
/*
* The caller doesn't know anything about obj_head_config,
* so splice it back into head_config before returning.
*/
if (head_config)
list_splice_tail(&obj_head_config, head_config);
return err;
}
#else // HAVE_LIBBPF_SUPPORT
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
struct list_head *list __maybe_unused,
struct bpf_object *obj __maybe_unused,
struct list_head *head_config __maybe_unused,
void *loc_)
{
YYLTYPE *loc = loc_;
parse_events_error__handle(parse_state->error, loc->first_column,
strdup("BPF support is not compiled"),
strdup("Make sure libbpf-devel is available at build time."));
return -ENOTSUP;
}
int parse_events_load_bpf(struct parse_events_state *parse_state,
struct list_head *list __maybe_unused,
char *bpf_file_name __maybe_unused,
bool source __maybe_unused,
struct list_head *head_config __maybe_unused,
void *loc_)
{
YYLTYPE *loc = loc_;
parse_events_error__handle(parse_state->error, loc->first_column,
strdup("BPF support is not compiled"),
strdup("Make sure libbpf-devel is available at build time."));
return -ENOTSUP;
}
#endif // HAVE_LIBBPF_SUPPORT
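The split documented above — event config such as 'call-graph=fp' versus object config such as 'map:array.value[0]=1' — is easier to see in isolation. The following standalone approximation of what split_bpf_config_terms() decided uses a small hard-coded key table in place of parse_events__is_hardcoded_term() and plain strings instead of parse_events_term lists:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A few of the hardcoded term names perf knows about (illustrative subset). */
static const char * const hardcoded_terms[] = {
	"call-graph", "period", "freq", "stack-size", "max-stack",
};

/* Stand-in for parse_events__is_hardcoded_term(): known keys are event config. */
static bool is_hardcoded_term(const char *key)
{
	for (size_t i = 0; i < sizeof(hardcoded_terms) / sizeof(hardcoded_terms[0]); i++)
		if (!strcmp(key, hardcoded_terms[i]))
			return true;
	return false;
}

/* Split "key=value,key=value,..." into event-level and object-level terms. */
static void split_config_terms(char *terms)
{
	for (char *term = strtok(terms, ","); term; term = strtok(NULL, ",")) {
		char key[64];
		const char *eq = strchr(term, '=');
		size_t klen = eq ? (size_t)(eq - term) : strlen(term);

		if (klen >= sizeof(key))
			klen = sizeof(key) - 1;
		memcpy(key, term, klen);
		key[klen] = '\0';

		printf("%-28s -> %s config\n", term,
		       is_hardcoded_term(key) ? "event" : "object");
	}
}

int main(void)
{
	/* Mirrors the example in the comment above split_bpf_config_terms(). */
	char terms[] = "call-graph=fp,map:array.value[0]=1";

	split_config_terms(terms);
	return 0;
}
```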
static int
parse_breakpoint_type(const char *type, struct perf_event_attr *attr)
{
@@ -2274,7 +2007,6 @@ int __parse_events(struct evlist *evlist, const char *str, const char *pmu_filte
.list = LIST_HEAD_INIT(parse_state.list),
.idx = evlist->core.nr_entries,
.error = err,
.evlist = evlist,
.stoken = PE_START_EVENTS,
.fake_pmu = fake_pmu,
.pmu_filter = pmu_filter,


@@ -118,8 +118,6 @@ struct parse_events_state {
int idx;
/* Error information. */
struct parse_events_error *error;
/* Used by BPF event creation. */
struct evlist *evlist;
/* Holds returned terms for term parsing. */
struct list_head *terms;
/* Start token. */
@@ -160,19 +158,6 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
const char *sys, const char *event,
struct parse_events_error *error,
struct list_head *head_config, void *loc);
int parse_events_load_bpf(struct parse_events_state *parse_state,
struct list_head *list,
char *bpf_file_name,
bool source,
struct list_head *head_config,
void *loc);
/* Provide this function for perf test */
struct bpf_object;
int parse_events_load_bpf_obj(struct parse_events_state *parse_state,
struct list_head *list,
struct bpf_object *obj,
struct list_head *head_config,
void *loc);
int parse_events_add_numeric(struct parse_events_state *parse_state,
struct list_head *list,
u32 type, u64 config,


@@ -68,31 +68,6 @@ static int lc_str(yyscan_t scanner, const struct parse_events_state *state)
return str(scanner, state->match_legacy_cache_terms ? PE_LEGACY_CACHE : PE_NAME);
}
static bool isbpf_suffix(char *text)
{
int len = strlen(text);
if (len < 2)
return false;
if ((text[len - 1] == 'c' || text[len - 1] == 'o') &&
text[len - 2] == '.')
return true;
if (len > 4 && !strcmp(text + len - 4, ".obj"))
return true;
return false;
}
static bool isbpf(yyscan_t scanner)
{
char *text = parse_events_get_text(scanner);
struct stat st;
if (!isbpf_suffix(text))
return false;
return stat(text, &st) == 0;
}
/*
* This function is called when the parser gets two kinds of input:
*
@@ -179,8 +154,6 @@ do { \
group [^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu [^,{}/]+[/][^/]*[/][^,{}/]*
event [^,{}/]+
bpf_object [^,{}]+\.(o|bpf)[a-zA-Z0-9._]*
bpf_source [^,{}]+\.c[a-zA-Z0-9._]*
num_dec [0-9]+
num_hex 0x[a-fA-F0-9]+
@@ -233,8 +206,6 @@ non_digit [^0-9]
}
{event_pmu} |
{bpf_object} |
{bpf_source} |
{event} {
BEGIN(INITIAL);
REWIND(1);
@@ -363,8 +334,6 @@ r{num_raw_hex} { return str(yyscanner, PE_RAW); }
{num_hex} { return value(yyscanner, 16); }
{modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); }
{bpf_object} { if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_OBJECT); }
{bpf_source} { if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_SOURCE); }
{name} { return str(yyscanner, PE_NAME); }
{name_tag} { return str(yyscanner, PE_NAME); }
"/" { BEGIN(config); return '/'; }


@@ -60,7 +60,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%token PE_VALUE_SYM_TOOL
%token PE_EVENT_NAME
%token PE_RAW PE_NAME
%token PE_BPF_OBJECT PE_BPF_SOURCE
%token PE_MODIFIER_EVENT PE_MODIFIER_BP PE_BP_COLON PE_BP_SLASH
%token PE_LEGACY_CACHE
%token PE_PREFIX_MEM
@@ -75,8 +74,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%type <num> value_sym
%type <str> PE_RAW
%type <str> PE_NAME
%type <str> PE_BPF_OBJECT
%type <str> PE_BPF_SOURCE
%type <str> PE_LEGACY_CACHE
%type <str> PE_MODIFIER_EVENT
%type <str> PE_MODIFIER_BP
@@ -97,7 +94,6 @@ static void free_list_evsel(struct list_head* list_evsel)
%type <list_evsel> event_legacy_tracepoint
%type <list_evsel> event_legacy_numeric
%type <list_evsel> event_legacy_raw
%type <list_evsel> event_bpf_file
%type <list_evsel> event_def
%type <list_evsel> event_mod
%type <list_evsel> event_name
@@ -271,8 +267,7 @@ event_def: event_pmu |
event_legacy_mem sep_dc |
event_legacy_tracepoint sep_dc |
event_legacy_numeric sep_dc |
event_legacy_raw sep_dc |
event_bpf_file
event_legacy_raw sep_dc
event_pmu:
PE_NAME opt_pmu_config
@@ -620,43 +615,6 @@ PE_RAW opt_event_config
$$ = list;
}
event_bpf_file:
PE_BPF_OBJECT opt_event_config
{
struct parse_events_state *parse_state = _parse_state;
struct list_head *list;
int err;
list = alloc_list();
if (!list)
YYNOMEM;
err = parse_events_load_bpf(parse_state, list, $1, false, $2, &@1);
parse_events_terms__delete($2);
free($1);
if (err) {
free(list);
PE_ABORT(err);
}
$$ = list;
}
|
PE_BPF_SOURCE opt_event_config
{
struct list_head *list;
int err;
list = alloc_list();
if (!list)
YYNOMEM;
err = parse_events_load_bpf(_parse_state, list, $1, true, $2, &@1);
parse_events_terms__delete($2);
if (err) {
free(list);
PE_ABORT(err);
}
$$ = list;
}
opt_event_config:
'/' event_config '/'
{