License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
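For reference, the identifier is a single comment on the first line of each
file, for example:

  // SPDX-License-Identifier: GPL-2.0

in a .c source file, or

  /* SPDX-License-Identifier: GPL-2.0 */

in a header or other file type that does not use C++-style comments.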
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to the license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier                             # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research, to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version posted early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/zalloc.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <asm/bug.h>
#include <dirent.h>

#include "data.h"
#include "util.h" // rm_rf_perf_data()
#include "debug.h"
#include "header.h"
#include <internal/lib.h>

static void close_dir(struct perf_data_file *files, int nr)
{
	while (--nr >= 0) {
		close(files[nr].fd);
		zfree(&files[nr].path);
	}
	free(files);
}

void perf_data__close_dir(struct perf_data *data)
{
	close_dir(data->dir.files, data->dir.nr);
}

int perf_data__create_dir(struct perf_data *data, int nr)
{
	struct perf_data_file *files = NULL;
	int i, ret;

	if (WARN_ON(!data->is_dir))
		return -EINVAL;

	files = zalloc(nr * sizeof(*files));
	if (!files)
		return -ENOMEM;

	data->dir.version = PERF_DIR_VERSION;
	data->dir.files = files;
	data->dir.nr = nr;

	for (i = 0; i < nr; i++) {
		struct perf_data_file *file = &files[i];

		ret = asprintf(&file->path, "%s/data.%d", data->path, i);
		if (ret < 0)
			goto out_err;

		ret = open(file->path, O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR);
		if (ret < 0)
			goto out_err;

		file->fd = ret;
	}

	return 0;

out_err:
	close_dir(files, i);
	return ret;
}

int perf_data__open_dir(struct perf_data *data)
{
	struct perf_data_file *files = NULL;
	struct dirent *dent;
	int ret = -1;
	DIR *dir;
	int nr = 0;

	/*
	 * Directory containing a single regular perf data file which is already
	 * open, means there is nothing more to do here.
	 */
	if (perf_data__is_single_file(data))
		return 0;

	if (WARN_ON(!data->is_dir))
		return -EINVAL;

	/* The version is provided by DIR_FORMAT feature. */
	if (WARN_ON(data->dir.version != PERF_DIR_VERSION))
		return -1;

	dir = opendir(data->path);
	if (!dir)
		return -EINVAL;

	while ((dent = readdir(dir)) != NULL) {
		struct perf_data_file *file;
		char path[PATH_MAX];
		struct stat st;

		snprintf(path, sizeof(path), "%s/%s", data->path, dent->d_name);
		if (stat(path, &st))
			continue;

		if (!S_ISREG(st.st_mode) || strncmp(dent->d_name, "data.", 5))
			continue;

		ret = -ENOMEM;

		file = realloc(files, (nr + 1) * sizeof(*files));
		if (!file)
			goto out_err;

		files = file;
		file = &files[nr++];

		file->path = strdup(path);
		if (!file->path)
			goto out_err;

		ret = open(file->path, O_RDONLY);
		if (ret < 0)
			goto out_err;

		file->fd = ret;
		file->size = st.st_size;
	}

	if (!files)
		return -EINVAL;

	data->dir.files = files;
	data->dir.nr = nr;
	return 0;

out_err:
	close_dir(files, nr);
	return ret;
}

int perf_data__update_dir(struct perf_data *data)
{
	int i;

	if (WARN_ON(!data->is_dir))
		return -EINVAL;

	for (i = 0; i < data->dir.nr; i++) {
		struct perf_data_file *file = &data->dir.files[i];
		struct stat st;

		if (fstat(file->fd, &st))
			return -1;

		file->size = st.st_size;
	}

	return 0;
}

perf data: Allow to use stdio functions for pipe mode
When perf data is in a pipe, it reads each event separately using the
read(2) syscall. This is a huge performance bottleneck when
processing large data, as in perf inject. Also, perf inject needs to
use the write(2) syscall for the output.
So convert it to use the buffered I/O functions in the stdio library for
pipe data. This drops the inject-build-id bench time from 20 ms to 8 ms.
$ perf bench internals inject-build-id
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.074 msec (+- 0.013 msec)
Average time per event: 0.792 usec (+- 0.001 usec)
Average memory usage: 8328 KB (+- 0 KB)
Average build-id-all injection took: 5.490 msec (+- 0.008 msec)
Average time per event: 0.538 usec (+- 0.001 usec)
Average memory usage: 7563 KB (+- 0 KB)
This patch enables it just for perf inject when used with a pipe (which is
the default behavior). Maybe we could do it for perf record and/or report
later.
Committer testing:
Before:
$ perf stat -r 5 perf bench internals inject-build-id
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 13.605 msec (+- 0.064 msec)
Average time per event: 1.334 usec (+- 0.006 usec)
Average memory usage: 12220 KB (+- 7 KB)
Average build-id-all injection took: 11.458 msec (+- 0.058 msec)
Average time per event: 1.123 usec (+- 0.006 usec)
Average memory usage: 11546 KB (+- 8 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 13.673 msec (+- 0.057 msec)
Average time per event: 1.341 usec (+- 0.006 usec)
Average memory usage: 12508 KB (+- 8 KB)
Average build-id-all injection took: 11.437 msec (+- 0.046 msec)
Average time per event: 1.121 usec (+- 0.004 usec)
Average memory usage: 11812 KB (+- 7 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 13.641 msec (+- 0.069 msec)
Average time per event: 1.337 usec (+- 0.007 usec)
Average memory usage: 12302 KB (+- 8 KB)
Average build-id-all injection took: 10.820 msec (+- 0.106 msec)
Average time per event: 1.061 usec (+- 0.010 usec)
Average memory usage: 11616 KB (+- 7 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 13.379 msec (+- 0.074 msec)
Average time per event: 1.312 usec (+- 0.007 usec)
Average memory usage: 12334 KB (+- 8 KB)
Average build-id-all injection took: 11.288 msec (+- 0.071 msec)
Average time per event: 1.107 usec (+- 0.007 usec)
Average memory usage: 11657 KB (+- 8 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 13.534 msec (+- 0.058 msec)
Average time per event: 1.327 usec (+- 0.006 usec)
Average memory usage: 12264 KB (+- 8 KB)
Average build-id-all injection took: 11.557 msec (+- 0.076 msec)
Average time per event: 1.133 usec (+- 0.007 usec)
Average memory usage: 11593 KB (+- 8 KB)
Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
4,060.05 msec task-clock:u # 1.566 CPUs utilized ( +- 0.65% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
101,888 page-faults:u # 0.025 M/sec ( +- 0.12% )
3,745,833,163 cycles:u # 0.923 GHz ( +- 0.10% ) (83.22%)
194,346,613 stalled-cycles-frontend:u # 5.19% frontend cycles idle ( +- 0.57% ) (83.30%)
708,495,034 stalled-cycles-backend:u # 18.91% backend cycles idle ( +- 0.48% ) (83.48%)
5,629,328,628 instructions:u # 1.50 insn per cycle
# 0.13 stalled cycles per insn ( +- 0.21% ) (83.57%)
1,236,697,927 branches:u # 304.602 M/sec ( +- 0.16% ) (83.44%)
17,564,877 branch-misses:u # 1.42% of all branches ( +- 0.23% ) (82.99%)
2.5934 +- 0.0128 seconds time elapsed ( +- 0.49% )
$
After:
$ perf stat -r 5 perf bench internals inject-build-id
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.560 msec (+- 0.125 msec)
Average time per event: 0.839 usec (+- 0.012 usec)
Average memory usage: 12520 KB (+- 8 KB)
Average build-id-all injection took: 5.789 msec (+- 0.054 msec)
Average time per event: 0.568 usec (+- 0.005 usec)
Average memory usage: 11919 KB (+- 9 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.639 msec (+- 0.111 msec)
Average time per event: 0.847 usec (+- 0.011 usec)
Average memory usage: 12732 KB (+- 8 KB)
Average build-id-all injection took: 5.647 msec (+- 0.069 msec)
Average time per event: 0.554 usec (+- 0.007 usec)
Average memory usage: 12093 KB (+- 7 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.551 msec (+- 0.096 msec)
Average time per event: 0.838 usec (+- 0.009 usec)
Average memory usage: 12739 KB (+- 8 KB)
Average build-id-all injection took: 5.617 msec (+- 0.061 msec)
Average time per event: 0.551 usec (+- 0.006 usec)
Average memory usage: 12105 KB (+- 7 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.403 msec (+- 0.097 msec)
Average time per event: 0.824 usec (+- 0.010 usec)
Average memory usage: 12770 KB (+- 8 KB)
Average build-id-all injection took: 5.611 msec (+- 0.085 msec)
Average time per event: 0.550 usec (+- 0.008 usec)
Average memory usage: 12134 KB (+- 8 KB)
# Running 'internals/inject-build-id' benchmark:
Average build-id injection took: 8.518 msec (+- 0.102 msec)
Average time per event: 0.835 usec (+- 0.010 usec)
Average memory usage: 12518 KB (+- 10 KB)
Average build-id-all injection took: 5.503 msec (+- 0.073 msec)
Average time per event: 0.540 usec (+- 0.007 usec)
Average memory usage: 11882 KB (+- 8 KB)
Performance counter stats for 'perf bench internals inject-build-id' (5 runs):
2,394.88 msec task-clock:u # 1.577 CPUs utilized ( +- 0.83% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
103,181 page-faults:u # 0.043 M/sec ( +- 0.11% )
3,548,172,030 cycles:u # 1.482 GHz ( +- 0.30% ) (83.26%)
81,537,700 stalled-cycles-frontend:u # 2.30% frontend cycles idle ( +- 1.54% ) (83.24%)
876,631,544 stalled-cycles-backend:u # 24.71% backend cycles idle ( +- 1.14% ) (83.45%)
5,960,361,707 instructions:u # 1.68 insn per cycle
# 0.15 stalled cycles per insn ( +- 0.27% ) (83.26%)
1,269,413,491 branches:u # 530.054 M/sec ( +- 0.10% ) (83.48%)
11,372,453 branch-misses:u # 0.90% of all branches ( +- 0.52% ) (83.31%)
1.51874 +- 0.00642 seconds time elapsed ( +- 0.42% )
$
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20201030054742.87740-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
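As a rough, self-contained illustration of the difference described above
(this is not the perf code; the 64-byte record type and the helper names are
made up), a per-event loop issues one read(2) syscall per record, while the
stdio variant mostly copies out of a user-space buffer that fread() refills
in large chunks:

  /* illustrative only -- not taken from tools/perf */
  #include <stdio.h>
  #include <unistd.h>

  struct rec { char bytes[64]; };	/* stand-in for an event record */

  static long read_unbuffered(int fd, struct rec *r, long n)
  {
  	long i;

  	for (i = 0; i < n; i++)		/* one read(2) syscall per record */
  		if (read(fd, &r[i], sizeof(r[i])) != (ssize_t)sizeof(r[i]))
  			break;
  	return i;
  }

  static long read_buffered(FILE *fp, struct rec *r, long n)
  {
  	/* stdio refills its buffer in big chunks; fread() is mostly memcpy */
  	return (long)fread(r, sizeof(r[0]), n, fp);
  }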

static bool check_pipe(struct perf_data *data)
{
	struct stat st;
	bool is_pipe = false;
	int fd = perf_data__is_read(data) ?
		 STDIN_FILENO : STDOUT_FILENO;

	if (!data->path) {
		if (!fstat(fd, &st) && S_ISFIFO(st.st_mode))
			is_pipe = true;
	} else {
		if (!strcmp(data->path, "-"))
			is_pipe = true;
	}

	if (is_pipe) {
		if (data->use_stdio) {
			const char *mode;

			mode = perf_data__is_read(data) ? "r" : "w";
			data->file.fptr = fdopen(fd, mode);

			if (data->file.fptr == NULL) {
				data->file.fd = fd;
				data->use_stdio = false;
			}
		} else {
			data->file.fd = fd;
		}
	}

	return data->is_pipe = is_pipe;
}

static int check_backup(struct perf_data *data)
{
	struct stat st;

	if (perf_data__is_read(data))
		return 0;

	if (!stat(data->path, &st) && st.st_size) {
		char oldname[PATH_MAX];
		int ret;

		snprintf(oldname, sizeof(oldname), "%s.old",
			 data->path);

		ret = rm_rf_perf_data(oldname);
		if (ret) {
			pr_err("Can't remove old data: %s (%s)\n",
			       ret == -2 ?
			       "Unknown file found" : strerror(errno),
			       oldname);
			return -1;
		}

		if (rename(data->path, oldname)) {
			pr_err("Can't move data: %s (%s to %s)\n",
			       strerror(errno),
			       data->path, oldname);
			return -1;
		}
	}

	return 0;
}

static bool is_dir(struct perf_data *data)
{
	struct stat st;

	if (stat(data->path, &st))
		return false;

	return (st.st_mode & S_IFMT) == S_IFDIR;
}

tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, which
returns a string, be it the buffer passed or something else.
But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface, but uses the portable XSI variant of strerror_r(), so that
users rest assured that the provided buffer is used and it is what is
returned.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

static int open_file_read(struct perf_data *data)
{
	int flags = data->in_place_update ? O_RDWR : O_RDONLY;
	struct stat st;
	int fd;
	char sbuf[STRERR_BUFSIZE];

	fd = open(data->file.path, flags);
	if (fd < 0) {
		int err = errno;

		pr_err("failed to open %s: %s", data->file.path,
			str_error_r(err, sbuf, sizeof(sbuf)));
		if (err == ENOENT && !strcmp(data->file.path, "perf.data"))
			pr_err(" (try 'perf record' first)");
		pr_err("\n");
		return -err;
	}

	if (fstat(fd, &st) < 0)
		goto out_close;

	if (!data->force && st.st_uid && (st.st_uid != geteuid())) {
		pr_err("File %s not owned by current user or root (use -f to override)\n",
		       data->file.path);
		goto out_close;
	}

	if (!st.st_size) {
		pr_info("zero-sized data (%s), nothing to do!\n",
			data->file.path);
		goto out_close;
	}

	data->file.size = st.st_size;
	return fd;

out_close:
	close(fd);
	return -1;
}

static int open_file_write(struct perf_data *data)
{
	int fd;
	char sbuf[STRERR_BUFSIZE];

	fd = open(data->file.path, O_CREAT|O_RDWR|O_TRUNC|O_CLOEXEC,
		  S_IRUSR|S_IWUSR);

	if (fd < 0)
		pr_err("failed to open %s : %s\n", data->file.path,
			str_error_r(errno, sbuf, sizeof(sbuf)));

	return fd;
}

static int open_file(struct perf_data *data)
{
	int fd;

	fd = perf_data__is_read(data) ?
	     open_file_read(data) : open_file_write(data);

	if (fd < 0) {
		zfree(&data->file.path);
		return -1;
	}

	data->file.fd = fd;
	return 0;
}

static int open_file_dup(struct perf_data *data)
{
	data->file.path = strdup(data->path);
	if (!data->file.path)
		return -ENOMEM;

	return open_file(data);
}

static int open_dir(struct perf_data *data)
{
	int ret;

	/*
	 * So far we open only the header, so we can read the data version and
	 * layout.
	 */
	if (asprintf(&data->file.path, "%s/data", data->path) < 0)
		return -1;

	if (perf_data__is_write(data) &&
	    mkdir(data->path, S_IRWXU) < 0)
		return -1;

	ret = open_file(data);

	/* Cleanup whatever we managed to create so far. */
	if (ret && perf_data__is_write(data))
		rm_rf_perf_data(data->path);

	return ret;
}

int perf_data__open(struct perf_data *data)
{
	if (check_pipe(data))
		return 0;

	/* currently it allows stdio for pipe only */
	data->use_stdio = false;

	if (!data->path)
		data->path = "perf.data";

	if (check_backup(data))
		return -1;

	if (perf_data__is_read(data))
		data->is_dir = is_dir(data);

	return perf_data__is_dir(data) ?
	       open_dir(data) : open_file_dup(data);
}
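
A minimal caller-side sketch of this entry point (field names follow struct
perf_data from data.h; the exact initializer and the surrounding function are
illustrative, not copied from any perf builtin):

  /* illustrative only -- not part of data.c */
  static int read_session(void)
  {
  	struct perf_data data = {
  		.path  = "perf.data",
  		.mode  = PERF_DATA_MODE_READ,	/* or PERF_DATA_MODE_WRITE */
  		.force = false,
  	};

  	if (perf_data__open(&data))	/* returns 0 on success (and for pipes) */
  		return -1;

  	/* ... consume events, e.g. via perf_data__read() below ... */

  	perf_data__close(&data);
  	return 0;
  }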

void perf_data__close(struct perf_data *data)
{
	if (perf_data__is_dir(data))
		perf_data__close_dir(data);

	zfree(&data->file.path);

	if (data->use_stdio)
		fclose(data->file.fptr);
	else
		close(data->file.fd);
}

ssize_t perf_data__read(struct perf_data *data, void *buf, size_t size)
{
	if (data->use_stdio) {
		if (fread(buf, size, 1, data->file.fptr) == 1)
			return size;
		return feof(data->file.fptr) ? 0 : -1;
	}
	return readn(data->file.fd, buf, size);
}
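
A hypothetical reader loop on top of perf_data__read(); struct
perf_event_header is the fixed event header from linux/perf_event.h, but the
4 KB scratch buffer and the dump loop are made up for illustration:

  /* illustrative only -- not part of data.c */
  static int dump_event_sizes(struct perf_data *data)
  {
  	union {
  		struct perf_event_header hdr;
  		char buf[4096];
  	} ev;

  	for (;;) {
  		ssize_t n = perf_data__read(data, &ev.hdr, sizeof(ev.hdr));

  		if (n <= 0)
  			return (int)n;	/* 0 = EOF, negative = error */

  		if (ev.hdr.size < sizeof(ev.hdr) || ev.hdr.size > sizeof(ev.buf))
  			return -1;	/* malformed or too big for this toy buffer */

  		/* pull in the payload that follows the fixed header */
  		if (ev.hdr.size > sizeof(ev.hdr) &&
  		    perf_data__read(data, ev.buf + sizeof(ev.hdr),
  				    ev.hdr.size - sizeof(ev.hdr)) <= 0)
  			return -1;

  		pr_debug("event type %u, size %u\n", ev.hdr.type, ev.hdr.size);
  	}
  }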

ssize_t perf_data_file__write(struct perf_data_file *file,
			      void *buf, size_t size)
{
	return writen(file->fd, buf, size);
}

ssize_t perf_data__write(struct perf_data *data,
			 void *buf, size_t size)
{
	if (data->use_stdio) {
		if (fwrite(buf, size, 1, data->file.fptr) == 1)
			return size;
		return -1;
	}
	return perf_data_file__write(&data->file, buf, size);
}

int perf_data__switch(struct perf_data *data,
		      const char *postfix,
		      size_t pos, bool at_exit,
		      char **new_filepath)
{
	int ret;

	if (check_pipe(data))
		return -EINVAL;
	if (perf_data__is_read(data))
		return -EINVAL;

	if (asprintf(new_filepath, "%s.%s", data->path, postfix) < 0)
		return -ENOMEM;

	/*
	 * Only fire a warning, don't return error, continue fill
	 * original file.
	 */
	if (rename(data->path, *new_filepath))
		pr_warning("Failed to rename %s to %s\n", data->path, *new_filepath);

	if (!at_exit) {
		close(data->file.fd);
		ret = perf_data__open(data);
		if (ret < 0)
			goto out;

		if (lseek(data->file.fd, pos, SEEK_SET) == (off_t)-1) {
			ret = -errno;
			pr_debug("Failed to lseek to %zu: %s",
				 pos, strerror(errno));
			goto out;
		}
	}
	ret = data->file.fd;
out:
	return ret;
}

unsigned long perf_data__size(struct perf_data *data)
{
	u64 size = data->file.size;
	int i;

	if (perf_data__is_single_file(data))
		return size;

	for (i = 0; i < data->dir.nr; i++) {
		struct perf_data_file *file = &data->dir.files[i];

		size += file->size;
	}

	return size;
}

perf record: Put a copy of kcore into the perf.data directory
Add a new 'perf record' option '--kcore' which will put a copy of
/proc/kcore, kallsyms and modules into a perf.data directory. Note that,
without the --kcore option, output goes to a single file as before. The
tools' -o and -i options work with either a file name or directory name.
Example:
$ sudo perf record --kcore uname
$ sudo tree perf.data
perf.data
├── kcore_dir
│ ├── kallsyms
│ ├── kcore
│ └── modules
└── data
$ sudo perf script -v
build id event received for vmlinux: 1eaa285996affce2d74d8e66dcea09a80c9941de
build id event received for [vdso]: 8bbaf5dc62a9b644b4d4e4539737e104e4a84541
Samples for 'cycles' event do not have CPU attribute set. Skipping 'cpu' field.
Using CPUID GenuineIntel-6-8E-A
Using perf.data/kcore_dir/kcore for kernel data
Using perf.data/kcore_dir/kallsyms for symbols
perf 19058 506778.423729: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 19058 506778.423733: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 19058 506778.423734: 7 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux)
perf 19058 506778.423736: 117 cycles: ffffffffa2caa54a native_write_msr+0xa (vmlinux)
perf 19058 506778.423738: 2092 cycles: ffffffffa2c9b7b0 native_apic_msr_write+0x0 (vmlinux)
perf 19058 506778.423740: 37380 cycles: ffffffffa2f121d0 perf_event_addr_filters_exec+0x0 (vmlinux)
uname 19058 506778.423751: 582673 cycles: ffffffffa303a407 propagate_protected_usage+0x147 (vmlinux)
uname 19058 506778.423892: 2241841 cycles: ffffffffa2cae0c9 unwind_next_frame.part.5+0x79 (vmlinux)
uname 19058 506778.424430: 2457397 cycles: ffffffffa3019232 check_memory_region+0x52 (vmlinux)
Committer testing:
# rm -rf perf.data*
# perf record sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.024 MB perf.data (7 samples) ]
# ls -l perf.data
-rw-------. 1 root root 34772 Oct 21 11:08 perf.data
# perf record --kcore uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.024 MB perf.data (7 samples) ]
# ls -lad perf.data*
drwx------. 3 root root 4096 Oct 21 11:08 perf.data
-rw-------. 1 root root 34772 Oct 21 11:08 perf.data.old
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|PERIOD, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1
# perf evlist -v -i perf.data/data
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|PERIOD, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1
#
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lore.kernel.org/lkml/20191004083121.12182-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

int perf_data__make_kcore_dir(struct perf_data *data, char *buf, size_t buf_sz)
{
	int ret;

	if (!data->is_dir)
		return -1;

	ret = snprintf(buf, buf_sz, "%s/kcore_dir", data->path);
	if (ret < 0 || (size_t)ret >= buf_sz)
		return -1;

	return mkdir(buf, S_IRWXU);
}

char *perf_data__kallsyms_name(struct perf_data *data)
{
	char *kallsyms_name;
	struct stat st;

	if (!data->is_dir)
		return NULL;

	if (asprintf(&kallsyms_name, "%s/kcore_dir/kallsyms", data->path) < 0)
		return NULL;

	if (stat(kallsyms_name, &st)) {
		free(kallsyms_name);
		return NULL;
	}

	return kallsyms_name;
}

bool is_perf_data(const char *path)
{
	bool ret = false;
	FILE *file;
	u64 magic;

	file = fopen(path, "r");
	if (!file)
		return false;

	if (fread(&magic, 1, 8, file) < 8)
		goto out;

	ret = is_perf_magic(magic);
out:
	fclose(file);
	return ret;
}
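
Finally, a small hypothetical consumer of the helpers above, preferring the
kallsyms copy captured by 'perf record --kcore' when the session is a
directory that contains one (the actual symbol-loading step is elided):

  /* illustrative only -- not part of data.c */
  static void pick_kallsyms(struct perf_data *data)
  {
  	char *kallsyms = perf_data__kallsyms_name(data);	/* NULL if absent */

  	if (kallsyms) {
  		pr_debug("using captured %s\n", kallsyms);
  		/* ... load symbols from the captured copy here ... */
  		free(kallsyms);		/* the name comes from asprintf() */
  	}
  }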