License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- The file already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top level
  COPYING file license applied.
For non-*/uapi/* files, that summary was:

  SPDX license identifier                             # files
  ---------------------------------------------------|-------
  GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:

  SPDX license identifier                             # files
  ---------------------------------------------------|-------
  GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one of
  the */uapi/* ones, it was denoted with the Linux-syscall-note if any
  GPL-family license was found in the file, or if it had no licensing in
  it (per the prior point). Results summary:
  SPDX license identifier                              # files
  ----------------------------------------------------|------
  GPL-2.0 WITH Linux-syscall-note                          270
  GPL-2.0+ WITH Linux-syscall-note                         169
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
  LGPL-2.1+ WITH Linux-syscall-note                         15
  GPL-1.0+ WITH Linux-syscall-note                          14
  ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
  LGPL-2.0+ WITH Linux-syscall-note                          4
  LGPL-2.1 WITH Linux-syscall-note                           3
  ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
  ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review were done on the
spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and were fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
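
For illustration, the convention (later codified in
Documentation/process/license-rules.rst) places the tag on the first
possible line of the file, using the comment style the file type expects;
a minimal sketch of the two common forms:

	// SPDX-License-Identifier: GPL-2.0		(.c source files)
	/* SPDX-License-Identifier: GPL-2.0 */	(headers and other files where // is not valid)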
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __PERF_STATS_H
#define __PERF_STATS_H

#include <linux/types.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/resource.h>
#include "rblist.h"

struct perf_cpu_map;
struct perf_stat_config;
struct timespec;

struct stats {
	double n, mean, M2;
	u64 max, min;
};

enum perf_stat_evsel_id {
	PERF_STAT_EVSEL_ID__NONE = 0,
	PERF_STAT_EVSEL_ID__CYCLES_IN_TX,
	PERF_STAT_EVSEL_ID__TRANSACTION_START,
	PERF_STAT_EVSEL_ID__ELISION_START,
	PERF_STAT_EVSEL_ID__CYCLES_IN_TX_CP,
	PERF_STAT_EVSEL_ID__TOPDOWN_TOTAL_SLOTS,
	PERF_STAT_EVSEL_ID__TOPDOWN_SLOTS_ISSUED,
	PERF_STAT_EVSEL_ID__TOPDOWN_SLOTS_RETIRED,
	PERF_STAT_EVSEL_ID__TOPDOWN_FETCH_BUBBLES,
	PERF_STAT_EVSEL_ID__TOPDOWN_RECOVERY_BUBBLES,
	PERF_STAT_EVSEL_ID__TOPDOWN_RETIRING,
	PERF_STAT_EVSEL_ID__TOPDOWN_BAD_SPEC,
	PERF_STAT_EVSEL_ID__TOPDOWN_FE_BOUND,
	PERF_STAT_EVSEL_ID__TOPDOWN_BE_BOUND,
	PERF_STAT_EVSEL_ID__SMI_NUM,
	PERF_STAT_EVSEL_ID__APERF,
	PERF_STAT_EVSEL_ID__MAX,
};

struct perf_stat_evsel {
	struct stats		res_stats[3];
	enum perf_stat_evsel_id	id;
	u64			*group_data;
};

enum aggr_mode {
	AGGR_NONE,
	AGGR_GLOBAL,
	AGGR_SOCKET,
	AGGR_DIE,
	AGGR_CORE,

perf stat: Introduce --per-thread option
Currently, the values for all of the tasks passed as -p option PID
arguments get aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error
is printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	AGGR_THREAD,
	AGGR_UNSET,

perf stat: Add --per-node aggregation support
Add a new --per-node option to aggregate counts per NUMA node for
system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	AGGR_NODE,
};

enum {
	CTX_BIT_USER	= 1 << 0,
	CTX_BIT_KERNEL	= 1 << 1,
	CTX_BIT_HV	= 1 << 2,
	CTX_BIT_HOST	= 1 << 3,
	CTX_BIT_IDLE	= 1 << 4,
	CTX_BIT_MAX	= 1 << 5,
};

#define NUM_CTX CTX_BIT_MAX
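
As an illustration of how these context bits combine: a sketch modeled on
the stat-shadow helper that derives a context index from an event's
exclude_* attributes (the field paths are assumptions about the
surrounding perf code):

static int evsel_context(const struct evsel *evsel)
{
	int ctx = 0;

	/* Each excluded context sets one CTX_BIT_* flag. */
	if (evsel->core.attr.exclude_kernel)
		ctx |= CTX_BIT_KERNEL;
	if (evsel->core.attr.exclude_user)
		ctx |= CTX_BIT_USER;
	if (evsel->core.attr.exclude_hv)
		ctx |= CTX_BIT_HV;
	if (evsel->core.attr.exclude_host)
		ctx |= CTX_BIT_HOST;
	if (evsel->core.attr.exclude_idle)
		ctx |= CTX_BIT_IDLE;

	return ctx;
}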

enum stat_type {
	STAT_NONE = 0,
	STAT_NSECS,
	STAT_CYCLES,
	STAT_STALLED_CYCLES_FRONT,
	STAT_STALLED_CYCLES_BACK,
	STAT_BRANCHES,
	STAT_CACHEREFS,
	STAT_L1_DCACHE,
	STAT_L1_ICACHE,
	STAT_LL_CACHE,
	STAT_ITLB_CACHE,
	STAT_DTLB_CACHE,
	STAT_CYCLES_IN_TX,
	STAT_TRANSACTION,
	STAT_ELISION,
	STAT_TOPDOWN_TOTAL_SLOTS,
	STAT_TOPDOWN_SLOTS_ISSUED,
	STAT_TOPDOWN_SLOTS_RETIRED,
	STAT_TOPDOWN_FETCH_BUBBLES,
	STAT_TOPDOWN_RECOVERY_BUBBLES,
	STAT_TOPDOWN_RETIRING,
	STAT_TOPDOWN_BAD_SPEC,
	STAT_TOPDOWN_FE_BOUND,
	STAT_TOPDOWN_BE_BOUND,
	STAT_SMI_NUM,
	STAT_APERF,
	STAT_MAX
};

struct runtime_stat {
	struct rblist value_list;
};

typedef int (*aggr_get_id_t)(struct perf_stat_config *config,
			     struct perf_cpu_map *m, int cpu);
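
For illustration, an aggr_get_id_t callback resolves the aggregation id
of one CPU in the map; a hypothetical socket-level sketch (the real
helpers live in builtin-stat.c, and the cpu_map__get_socket() call is an
assumption about the cpumap API):

static int aggr_get_socket_id(struct perf_stat_config *config __maybe_unused,
			      struct perf_cpu_map *map, int idx)
{
	/* Map the idx-th CPU in the map to the id of its socket. */
	return cpu_map__get_socket(map, idx, NULL);
}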

struct perf_stat_config {
	enum aggr_mode		 aggr_mode;
	bool			 scale;
	bool			 no_inherit;
	bool			 identifier;
	bool			 csv_output;
	bool			 interval_clear;
	bool			 metric_only;
	bool			 null_run;
	bool			 ru_display;
	bool			 big_num;
	bool			 no_merge;
	bool			 walltime_run_table;
	bool			 all_kernel;
	bool			 all_user;

perf stat: Show percore counts in per CPU output
We have supported the event modifier "percore", which sums up the event
counts for all hardware threads in a core and shows the counts per core.
For example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A -- sleep 1
Performance counter stats for 'system wide':
S0-D0-C0 395,072 cpu/event=cpu-cycles,percore/
S0-D0-C1 851,248 cpu/event=cpu-cycles,percore/
S0-D0-C2 954,226 cpu/event=cpu-cycles,percore/
S0-D0-C3 1,233,659 cpu/event=cpu-cycles,percore/
This patch provides a new option "--percore-show-thread". It is used
together with the event modifier "percore" to sum up the event counts for
all hardware threads in a core but show the counts per hardware thread.
This is essentially a replacement for the any bit (which is gone in
Icelake). Per core counts are useful for some formulas, e.g. CoreIPC.
The original percore version was inconvenient to post process. This
variant matches the output of the any bit.
With this patch, for example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
Performance counter stats for 'system wide':
CPU0 2,453,061 cpu/event=cpu-cycles,percore/
CPU1 1,823,921 cpu/event=cpu-cycles,percore/
CPU2 1,383,166 cpu/event=cpu-cycles,percore/
CPU3 1,102,652 cpu/event=cpu-cycles,percore/
CPU4 2,453,061 cpu/event=cpu-cycles,percore/
CPU5 1,823,921 cpu/event=cpu-cycles,percore/
CPU6 1,383,166 cpu/event=cpu-cycles,percore/
CPU7 1,102,652 cpu/event=cpu-cycles,percore/
We can see counts are duplicated in CPU pairs (CPU0/CPU4, CPU1/CPU5,
CPU2/CPU6, CPU3/CPU7).
The interval mode also works. For example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -I 1000
# time CPU counts unit events
1.000425421 CPU0 925,032 cpu/event=cpu-cycles,percore/
1.000425421 CPU1 430,202 cpu/event=cpu-cycles,percore/
1.000425421 CPU2 436,843 cpu/event=cpu-cycles,percore/
1.000425421 CPU3 1,192,504 cpu/event=cpu-cycles,percore/
1.000425421 CPU4 925,032 cpu/event=cpu-cycles,percore/
1.000425421 CPU5 430,202 cpu/event=cpu-cycles,percore/
1.000425421 CPU6 436,843 cpu/event=cpu-cycles,percore/
1.000425421 CPU7 1,192,504 cpu/event=cpu-cycles,percore/
If we offline CPU5, the result is:
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
Performance counter stats for 'system wide':
CPU0 2,752,148 cpu/event=cpu-cycles,percore/
CPU1 1,009,312 cpu/event=cpu-cycles,percore/
CPU2 2,784,072 cpu/event=cpu-cycles,percore/
CPU3 2,427,922 cpu/event=cpu-cycles,percore/
CPU4 2,752,148 cpu/event=cpu-cycles,percore/
CPU6 2,784,072 cpu/event=cpu-cycles,percore/
CPU7 2,427,922 cpu/event=cpu-cycles,percore/
1.001416041 seconds time elapsed
v4:
---
Ravi Bangoria reported an issue in v3: once we offline a CPU,
the output is not correct. The issue is that we should use the cpu
idx in print_percore_thread rather than the cpu value.
v3:
---
1. Fix the interval mode output error
2. Use cpu value (not cpu index) in config->aggr_get_id().
3. Refine the code according to Jiri's comments.
v2:
---
Add the explanation in change log. This is essentially a replacement
for the any bit. No code change.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200214080452.26402-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	bool			 percore_show_thread;

perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	bool			 summary;
	bool			 metric_no_group;
	bool			 metric_no_merge;

perf stat: Turn off summary for interval mode by default
There's a risk that outputting interval mode summaries by default breaks
CSV consumers. It already broke pmu-tools/toplev.
So now we turn off the summary by default, but add a new option
'--summary' to enable it. This option is active even when not using
CSV mode.
Before:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.000265904 8,005.73 msec cpu-clock # 8.006 CPUs utilized
1.000265904 601 context-switches # 0.075 K/sec
1.000265904 10 cpu-migrations # 0.001 K/sec
1.000265904 0 page-faults # 0.000 K/sec
1.000265904 66,746,521 cycles # 0.008 GHz
1.000265904 71,874,398 instructions # 1.08 insn per cycle
1.000265904 13,356,781 branches # 1.668 M/sec
1.000265904 298,756 branch-misses # 2.24% of all branches
2.001857667 8,012.52 msec cpu-clock # 8.013 CPUs utilized
2.001857667 164 context-switches # 0.020 K/sec
2.001857667 10 cpu-migrations # 0.001 K/sec
2.001857667 2 page-faults # 0.000 K/sec
2.001857667 5,822,188 cycles # 0.001 GHz
2.001857667 2,186,170 instructions # 0.38 insn per cycle
2.001857667 442,378 branches # 0.055 M/sec
2.001857667 44,750 branch-misses # 10.12% of all branches
Performance counter stats for 'system wide':
16,018.25 msec cpu-clock # 7.993 CPUs utilized
765 context-switches # 0.048 K/sec
20 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
72,568,709 cycles # 0.005 GHz
74,060,568 instructions # 1.02 insn per cycle
13,799,159 branches # 0.861 M/sec
343,506 branch-misses # 2.49% of all branches
2.004118489 seconds time elapsed
After:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.001336393 8,013.28 msec cpu-clock # 8.013 CPUs utilized
1.001336393 82 context-switches # 0.010 K/sec
1.001336393 8 cpu-migrations # 0.001 K/sec
1.001336393 0 page-faults # 0.000 K/sec
1.001336393 4,199,121 cycles # 0.001 GHz
1.001336393 1,373,991 instructions # 0.33 insn per cycle
1.001336393 270,681 branches # 0.034 M/sec
1.001336393 31,659 branch-misses # 11.70% of all branches
2.003905006 8,020.52 msec cpu-clock # 8.021 CPUs utilized
2.003905006 184 context-switches # 0.023 K/sec
2.003905006 8 cpu-migrations # 0.001 K/sec
2.003905006 2 page-faults # 0.000 K/sec
2.003905006 5,446,190 cycles # 0.001 GHz
2.003905006 2,312,547 instructions # 0.42 insn per cycle
2.003905006 451,691 branches # 0.056 M/sec
2.003905006 37,925 branch-misses # 8.40% of all branches
root@kbl-ppc:~# perf stat -I1000 --interval-count 2 --summary
# time counts unit events
1.001313128 8,013.20 msec cpu-clock # 8.013 CPUs utilized
1.001313128 83 context-switches # 0.010 K/sec
1.001313128 8 cpu-migrations # 0.001 K/sec
1.001313128 0 page-faults # 0.000 K/sec
1.001313128 4,470,950 cycles # 0.001 GHz
1.001313128 1,440,045 instructions # 0.32 insn per cycle
1.001313128 283,222 branches # 0.035 M/sec
1.001313128 33,576 branch-misses # 11.86% of all branches
2.003857385 8,020.34 msec cpu-clock # 8.020 CPUs utilized
2.003857385 154 context-switches # 0.019 K/sec
2.003857385 8 cpu-migrations # 0.001 K/sec
2.003857385 2 page-faults # 0.000 K/sec
2.003857385 4,515,676 cycles # 0.001 GHz
2.003857385 2,180,449 instructions # 0.48 insn per cycle
2.003857385 435,254 branches # 0.054 M/sec
2.003857385 31,179 branch-misses # 7.16% of all branches
Performance counter stats for 'system wide':
16,033.53 msec cpu-clock # 7.992 CPUs utilized
237 context-switches # 0.015 K/sec
16 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
8,986,626 cycles # 0.001 GHz
3,620,494 instructions # 0.40 insn per cycle
718,476 branches # 0.045 M/sec
64,755 branch-misses # 9.01% of all branches
2.006124542 seconds time elapsed
Fixes: c7e5b328a8d4 ("perf stat: Report summary for interval mode")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200903010113.32232-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	bool			 stop_read_counter;
	FILE			*output;
	unsigned int		 interval;
	unsigned int		 timeout;
	int			 initial_delay;
	unsigned int		 unit_width;
	unsigned int		 metric_only_len;
	int			 times;
	int			 run_count;
	int			 print_free_counters_hint;
	int			 print_mixed_hw_group_error;
	struct runtime_stat	*stats;
	int			 stats_num;
	const char		*csv_sep;
	struct stats		*walltime_nsecs_stats;
	struct rusage		 ru_data;
	struct perf_cpu_map	*aggr_map;
	aggr_get_id_t		 aggr_get_id;
	struct perf_cpu_map	*cpus_aggr_map;
	u64			*walltime_run;
	struct rblist		 metric_events;
	int			 ctl_fd;
	int			 ctl_fd_ack;
	bool			 ctl_fd_close;
};

void perf_stat__set_big_num(int set);

void update_stats(struct stats *stats, u64 val);
double avg_stats(struct stats *stats);
double stddev_stats(struct stats *stats);
double rel_stddev_stats(double stddev, double avg);

static inline void init_stats(struct stats *stats)
{
	stats->n    = 0.0;
	stats->mean = 0.0;
	stats->M2   = 0.0;
	stats->min  = (u64) -1;
	stats->max  = 0;
}
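
The n/mean/M2 fields of struct stats are laid out for Welford's online
algorithm; a minimal sketch of how update_stats() maintains them
(consistent with the util/stat.c implementation, but treat it as a
sketch, not the authoritative code):

void update_stats(struct stats *stats, u64 val)
{
	double delta;

	/* Welford's one-pass, numerically stable mean/variance update. */
	stats->n++;
	delta = val - stats->mean;
	stats->mean += delta / stats->n;
	stats->M2 += delta * (val - stats->mean);

	if (val > stats->max)
		stats->max = val;
	if (val < stats->min)
		stats->min = val;
}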

struct evsel;
struct evlist;

perf stat: Resort '--per-thread' result
There are many threads reported if we enable '--per-thread'
globally.
1. Most of the threads are not counted or count a value of 0.
This patch removes these threads from the output.
2. We also re-sort the threads in the display according to the
counting value, which makes it easy for the user to spot the
hottest threads.
For example, the new results would be:
root@skl:/tmp# perf stat --per-thread
^C
Performance counter stats for 'system wide':
perf-24165 4.302433 cpu-clock (msec) # 0.001 CPUs utilized
vmstat-23127 1.562215 cpu-clock (msec) # 0.000 CPUs utilized
irqbalance-2780 0.827851 cpu-clock (msec) # 0.000 CPUs utilized
sshd-23111 0.278308 cpu-clock (msec) # 0.000 CPUs utilized
thermald-2841 0.230880 cpu-clock (msec) # 0.000 CPUs utilized
sshd-23058 0.207306 cpu-clock (msec) # 0.000 CPUs utilized
kworker/0:2-19991 0.133983 cpu-clock (msec) # 0.000 CPUs utilized
kworker/u16:1-18249 0.125636 cpu-clock (msec) # 0.000 CPUs utilized
rcu_sched-8 0.085533 cpu-clock (msec) # 0.000 CPUs utilized
kworker/u16:2-23146 0.077139 cpu-clock (msec) # 0.000 CPUs utilized
gmain-2700 0.041789 cpu-clock (msec) # 0.000 CPUs utilized
kworker/4:1-15354 0.028370 cpu-clock (msec) # 0.000 CPUs utilized
kworker/6:0-17528 0.023895 cpu-clock (msec) # 0.000 CPUs utilized
kworker/4:1H-1887 0.013209 cpu-clock (msec) # 0.000 CPUs utilized
kworker/5:2-31362 0.011627 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/0-11 0.010892 cpu-clock (msec) # 0.000 CPUs utilized
kworker/3:2-12870 0.010220 cpu-clock (msec) # 0.000 CPUs utilized
ksoftirqd/0-7 0.008869 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/1-14 0.008476 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/7-50 0.002944 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/3-26 0.002893 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/4-32 0.002759 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/2-20 0.002429 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/6-44 0.001491 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/5-38 0.001477 cpu-clock (msec) # 0.000 CPUs utilized
rcu_sched-8 10 context-switches # 0.117 M/sec
kworker/u16:1-18249 7 context-switches # 0.056 M/sec
sshd-23111 4 context-switches # 0.014 M/sec
vmstat-23127 4 context-switches # 0.003 M/sec
perf-24165 4 context-switches # 0.930 K/sec
kworker/0:2-19991 3 context-switches # 0.022 M/sec
kworker/u16:2-23146 3 context-switches # 0.039 M/sec
kworker/4:1-15354 2 context-switches # 0.070 M/sec
kworker/6:0-17528 2 context-switches # 0.084 M/sec
sshd-23058 2 context-switches # 0.010 M/sec
ksoftirqd/0-7 1 context-switches # 0.113 M/sec
watchdog/0-11 1 context-switches # 0.092 M/sec
watchdog/1-14 1 context-switches # 0.118 M/sec
watchdog/2-20 1 context-switches # 0.412 M/sec
watchdog/3-26 1 context-switches # 0.346 M/sec
watchdog/4-32 1 context-switches # 0.362 M/sec
watchdog/5-38 1 context-switches # 0.677 M/sec
watchdog/6-44 1 context-switches # 0.671 M/sec
watchdog/7-50 1 context-switches # 0.340 M/sec
kworker/4:1H-1887 1 context-switches # 0.076 M/sec
thermald-2841 1 context-switches # 0.004 M/sec
gmain-2700 1 context-switches # 0.024 M/sec
irqbalance-2780 1 context-switches # 0.001 M/sec
kworker/3:2-12870 1 context-switches # 0.098 M/sec
kworker/5:2-31362 1 context-switches # 0.086 M/sec
kworker/u16:1-18249 2 cpu-migrations # 0.016 M/sec
kworker/u16:2-23146 2 cpu-migrations # 0.026 M/sec
rcu_sched-8 1 cpu-migrations # 0.012 M/sec
sshd-23058 1 cpu-migrations # 0.005 M/sec
perf-24165 8,833,385 cycles # 2.053 GHz
vmstat-23127 1,702,699 cycles # 1.090 GHz
irqbalance-2780 739,847 cycles # 0.894 GHz
sshd-23111 269,506 cycles # 0.968 GHz
thermald-2841 204,556 cycles # 0.886 GHz
sshd-23058 158,780 cycles # 0.766 GHz
kworker/0:2-19991 112,981 cycles # 0.843 GHz
kworker/u16:1-18249 100,926 cycles # 0.803 GHz
rcu_sched-8 74,024 cycles # 0.865 GHz
kworker/u16:2-23146 55,984 cycles # 0.726 GHz
gmain-2700 34,278 cycles # 0.820 GHz
kworker/4:1-15354 20,665 cycles # 0.728 GHz
kworker/6:0-17528 16,445 cycles # 0.688 GHz
kworker/5:2-31362 9,492 cycles # 0.816 GHz
watchdog/3-26 8,695 cycles # 3.006 GHz
kworker/4:1H-1887 8,238 cycles # 0.624 GHz
watchdog/4-32 7,580 cycles # 2.747 GHz
kworker/3:2-12870 7,306 cycles # 0.715 GHz
watchdog/2-20 7,274 cycles # 2.995 GHz
watchdog/0-11 6,988 cycles # 0.642 GHz
ksoftirqd/0-7 6,376 cycles # 0.719 GHz
watchdog/1-14 5,340 cycles # 0.630 GHz
watchdog/5-38 4,061 cycles # 2.749 GHz
watchdog/6-44 3,976 cycles # 2.667 GHz
watchdog/7-50 3,418 cycles # 1.161 GHz
vmstat-23127 2,511,699 instructions # 1.48 insn per cycle
perf-24165 1,829,908 instructions # 0.21 insn per cycle
irqbalance-2780 1,190,204 instructions # 1.61 insn per cycle
thermald-2841 143,544 instructions # 0.70 insn per cycle
sshd-23111 128,138 instructions # 0.48 insn per cycle
sshd-23058 57,654 instructions # 0.36 insn per cycle
rcu_sched-8 44,063 instructions # 0.60 insn per cycle
kworker/u16:1-18249 42,551 instructions # 0.42 insn per cycle
kworker/0:2-19991 25,873 instructions # 0.23 insn per cycle
kworker/u16:2-23146 21,407 instructions # 0.38 insn per cycle
gmain-2700 13,691 instructions # 0.40 insn per cycle
kworker/4:1-15354 12,964 instructions # 0.63 insn per cycle
kworker/6:0-17528 10,034 instructions # 0.61 insn per cycle
kworker/5:2-31362 5,203 instructions # 0.55 insn per cycle
kworker/3:2-12870 4,866 instructions # 0.67 insn per cycle
kworker/4:1H-1887 3,586 instructions # 0.44 insn per cycle
ksoftirqd/0-7 3,463 instructions # 0.54 insn per cycle
watchdog/0-11 3,135 instructions # 0.45 insn per cycle
watchdog/1-14 3,135 instructions # 0.59 insn per cycle
watchdog/2-20 3,135 instructions # 0.43 insn per cycle
watchdog/3-26 3,135 instructions # 0.36 insn per cycle
watchdog/4-32 3,135 instructions # 0.41 insn per cycle
watchdog/5-38 3,135 instructions # 0.77 insn per cycle
watchdog/6-44 3,135 instructions # 0.79 insn per cycle
watchdog/7-50 3,135 instructions # 0.92 insn per cycle
vmstat-23127 539,181 branches # 345.139 M/sec
perf-24165 375,364 branches # 87.245 M/sec
irqbalance-2780 262,092 branches # 316.593 M/sec
thermald-2841 31,611 branches # 136.915 M/sec
sshd-23111 21,874 branches # 78.596 M/sec
sshd-23058 10,682 branches # 51.528 M/sec
rcu_sched-8 8,693 branches # 101.633 M/sec
kworker/u16:1-18249 7,891 branches # 62.808 M/sec
kworker/0:2-19991 5,761 branches # 42.998 M/sec
kworker/u16:2-23146 4,099 branches # 53.138 M/sec
kworker/4:1-15354 2,755 branches # 97.110 M/sec
gmain-2700 2,638 branches # 63.127 M/sec
kworker/6:0-17528 2,216 branches # 92.739 M/sec
kworker/5:2-31362 1,132 branches # 97.360 M/sec
kworker/3:2-12870 1,081 branches # 105.773 M/sec
kworker/4:1H-1887 725 branches # 54.887 M/sec
ksoftirqd/0-7 707 branches # 79.716 M/sec
watchdog/0-11 652 branches # 59.860 M/sec
watchdog/1-14 652 branches # 76.923 M/sec
watchdog/2-20 652 branches # 268.423 M/sec
watchdog/3-26 652 branches # 225.372 M/sec
watchdog/4-32 652 branches # 236.318 M/sec
watchdog/5-38 652 branches # 441.435 M/sec
watchdog/6-44 652 branches # 437.290 M/sec
watchdog/7-50 652 branches # 221.467 M/sec
vmstat-23127 8,960 branch-misses # 1.66% of all branches
irqbalance-2780 3,047 branch-misses # 1.16% of all branches
perf-24165 2,876 branch-misses # 0.77% of all branches
sshd-23111 1,843 branch-misses # 8.43% of all branches
thermald-2841 1,444 branch-misses # 4.57% of all branches
sshd-23058 1,379 branch-misses # 12.91% of all branches
kworker/u16:1-18249 982 branch-misses # 12.44% of all branches
rcu_sched-8 893 branch-misses # 10.27% of all branches
kworker/u16:2-23146 578 branch-misses # 14.10% of all branches
kworker/0:2-19991 376 branch-misses # 6.53% of all branches
gmain-2700 280 branch-misses # 10.61% of all branches
kworker/6:0-17528 196 branch-misses # 8.84% of all branches
kworker/4:1-15354 187 branch-misses # 6.79% of all branches
kworker/5:2-31362 123 branch-misses # 10.87% of all branches
watchdog/0-11 95 branch-misses # 14.57% of all branches
watchdog/4-32 89 branch-misses # 13.65% of all branches
kworker/3:2-12870 80 branch-misses # 7.40% of all branches
watchdog/3-26 61 branch-misses # 9.36% of all branches
kworker/4:1H-1887 60 branch-misses # 8.28% of all branches
watchdog/2-20 52 branch-misses # 7.98% of all branches
ksoftirqd/0-7 47 branch-misses # 6.65% of all branches
watchdog/1-14 46 branch-misses # 7.06% of all branches
watchdog/7-50 13 branch-misses # 1.99% of all branches
watchdog/5-38 8 branch-misses # 1.23% of all branches
watchdog/6-44 7 branch-misses # 1.07% of all branches
3.695150786 seconds time elapsed
root@skl:/tmp# perf stat --per-thread -M IPC,CPI
^C
Performance counter stats for 'system wide':
vmstat-23127 2,000,783 inst_retired.any # 1.5 IPC
thermald-2841 1,472,670 inst_retired.any # 1.3 IPC
sshd-23111 977,374 inst_retired.any # 1.2 IPC
perf-24163 483,779 inst_retired.any # 0.2 IPC
gmain-2700 341,213 inst_retired.any # 0.9 IPC
sshd-23058 148,891 inst_retired.any # 0.8 IPC
rtkit-daemon-3288 71,210 inst_retired.any # 0.7 IPC
kworker/u16:1-18249 39,562 inst_retired.any # 0.3 IPC
rcu_sched-8 14,474 inst_retired.any # 0.8 IPC
kworker/0:2-19991 7,659 inst_retired.any # 0.2 IPC
kworker/4:1-15354 6,714 inst_retired.any # 0.8 IPC
rtkit-daemon-3289 4,839 inst_retired.any # 0.3 IPC
kworker/6:0-17528 3,321 inst_retired.any # 0.6 IPC
kworker/5:2-31362 3,215 inst_retired.any # 0.5 IPC
kworker/7:2-23145 3,173 inst_retired.any # 0.7 IPC
kworker/4:1H-1887 1,719 inst_retired.any # 0.3 IPC
watchdog/0-11 1,479 inst_retired.any # 0.3 IPC
watchdog/1-14 1,479 inst_retired.any # 0.3 IPC
watchdog/2-20 1,479 inst_retired.any # 0.4 IPC
watchdog/3-26 1,479 inst_retired.any # 0.4 IPC
watchdog/4-32 1,479 inst_retired.any # 0.3 IPC
watchdog/5-38 1,479 inst_retired.any # 0.3 IPC
watchdog/6-44 1,479 inst_retired.any # 0.7 IPC
watchdog/7-50 1,479 inst_retired.any # 0.7 IPC
kworker/u16:2-23146 1,408 inst_retired.any # 0.5 IPC
perf-24163 2,249,872 cpu_clk_unhalted.thread
vmstat-23127 1,352,455 cpu_clk_unhalted.thread
thermald-2841 1,161,140 cpu_clk_unhalted.thread
sshd-23111 807,827 cpu_clk_unhalted.thread
gmain-2700 375,535 cpu_clk_unhalted.thread
sshd-23058 194,071 cpu_clk_unhalted.thread
kworker/u16:1-18249 114,306 cpu_clk_unhalted.thread
rtkit-daemon-3288 103,547 cpu_clk_unhalted.thread
kworker/0:2-19991 46,550 cpu_clk_unhalted.thread
rcu_sched-8 18,855 cpu_clk_unhalted.thread
rtkit-daemon-3289 17,549 cpu_clk_unhalted.thread
kworker/4:1-15354 8,812 cpu_clk_unhalted.thread
kworker/5:2-31362 6,812 cpu_clk_unhalted.thread
kworker/4:1H-1887 5,270 cpu_clk_unhalted.thread
kworker/6:0-17528 5,111 cpu_clk_unhalted.thread
kworker/7:2-23145 4,667 cpu_clk_unhalted.thread
watchdog/0-11 4,663 cpu_clk_unhalted.thread
watchdog/1-14 4,663 cpu_clk_unhalted.thread
watchdog/4-32 4,626 cpu_clk_unhalted.thread
watchdog/5-38 4,403 cpu_clk_unhalted.thread
watchdog/3-26 3,936 cpu_clk_unhalted.thread
watchdog/2-20 3,850 cpu_clk_unhalted.thread
kworker/u16:2-23146 2,654 cpu_clk_unhalted.thread
watchdog/6-44 2,017 cpu_clk_unhalted.thread
watchdog/7-50 2,017 cpu_clk_unhalted.thread
vmstat-23127 2,000,783 inst_retired.any # 0.7 CPI
thermald-2841 1,472,670 inst_retired.any # 0.8 CPI
sshd-23111 977,374 inst_retired.any # 0.8 CPI
perf-24163 495,037 inst_retired.any # 4.7 CPI
gmain-2700 341,213 inst_retired.any # 1.1 CPI
sshd-23058 148,891 inst_retired.any # 1.3 CPI
rtkit-daemon-3288 71,210 inst_retired.any # 1.5 CPI
kworker/u16:1-18249 39,562 inst_retired.any # 2.9 CPI
rcu_sched-8 14,474 inst_retired.any # 1.3 CPI
kworker/0:2-19991 7,659 inst_retired.any # 6.1 CPI
kworker/4:1-15354 6,714 inst_retired.any # 1.3 CPI
rtkit-daemon-3289 4,839 inst_retired.any # 3.6 CPI
kworker/6:0-17528 3,321 inst_retired.any # 1.5 CPI
kworker/5:2-31362 3,215 inst_retired.any # 2.1 CPI
kworker/7:2-23145 3,173 inst_retired.any # 1.5 CPI
kworker/4:1H-1887 1,719 inst_retired.any # 3.1 CPI
watchdog/0-11 1,479 inst_retired.any # 3.2 CPI
watchdog/1-14 1,479 inst_retired.any # 3.2 CPI
watchdog/2-20 1,479 inst_retired.any # 2.6 CPI
watchdog/3-26 1,479 inst_retired.any # 2.7 CPI
watchdog/4-32 1,479 inst_retired.any # 3.1 CPI
watchdog/5-38 1,479 inst_retired.any # 3.0 CPI
watchdog/6-44 1,479 inst_retired.any # 1.4 CPI
watchdog/7-50 1,479 inst_retired.any # 1.4 CPI
kworker/u16:2-23146 1,408 inst_retired.any # 1.9 CPI
perf-24163 2,302,323 cycles
vmstat-23127 1,352,455 cycles
thermald-2841 1,161,140 cycles
sshd-23111 807,827 cycles
gmain-2700 375,535 cycles
sshd-23058 194,071 cycles
kworker/u16:1-18249 114,306 cycles
rtkit-daemon-3288 103,547 cycles
kworker/0:2-19991 46,550 cycles
rcu_sched-8 18,855 cycles
rtkit-daemon-3289 17,549 cycles
kworker/4:1-15354 8,812 cycles
kworker/5:2-31362 6,812 cycles
kworker/4:1H-1887 5,270 cycles
kworker/6:0-17528 5,111 cycles
kworker/7:2-23145 4,667 cycles
watchdog/0-11 4,663 cycles
watchdog/1-14 4,663 cycles
watchdog/4-32 4,626 cycles
watchdog/5-38 4,403 cycles
watchdog/3-26 3,936 cycles
watchdog/2-20 3,850 cycles
kworker/u16:2-23146 2,654 cycles
watchdog/6-44 2,017 cycles
watchdog/7-50 2,017 cycles
2.175726600 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1512482591-4646-12-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

struct perf_aggr_thread_value {
	struct evsel	*counter;
	int		 id;
	double		 uval;
	u64		 val;
	u64		 run;
	u64		 ena;
};

bool __perf_evsel_stat__is(struct evsel *evsel,
			   enum perf_stat_evsel_id id);

#define perf_stat_evsel__is(evsel, id) \
	__perf_evsel_stat__is(evsel, PERF_STAT_EVSEL_ID__ ## id)
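
A usage sketch: the macro pastes the short name onto the
PERF_STAT_EVSEL_ID__ prefix, so the (hypothetical) snippet below tests
whether counter's id is PERF_STAT_EVSEL_ID__CYCLES_IN_TX:

	if (perf_stat_evsel__is(counter, CYCLES_IN_TX))
		cycles_in_tx_counter = counter;	/* hypothetical consumer */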

extern struct runtime_stat rt_stat;
extern struct stats walltime_nsecs_stats;

typedef void (*print_metric_t)(struct perf_stat_config *config,
			       void *ctx, const char *color, const char *unit,
			       const char *fmt, double val);
typedef void (*new_line_t)(struct perf_stat_config *config, void *ctx);

void runtime_stat__init(struct runtime_stat *st);
void runtime_stat__exit(struct runtime_stat *st);
void perf_stat__init_shadow_stats(void);
void perf_stat__reset_shadow_stats(void);
void perf_stat__reset_shadow_per_stat(struct runtime_stat *st);
void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
				    int cpu, struct runtime_stat *st);

struct perf_stat_output_ctx {
	void *ctx;
	print_metric_t print_metric;
	new_line_t new_line;

perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Finally, update, retrieve, and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event if it is
missing; instead we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

	bool force_header;
};
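
For illustration, a minimal print_metric_t consumer matching the typedef
above (the callback name and the NULL-fmt convention are assumptions, not
the real stat-display helpers):

static void print_metric_simple(struct perf_stat_config *config,
				void *ctx __maybe_unused,
				const char *color __maybe_unused,
				const char *unit, const char *fmt, double val)
{
	if (!fmt)	/* assumed: no numeric value, header/unit only */
		return;
	fprintf(config->output, " # ");
	fprintf(config->output, fmt, val);
	fprintf(config->output, " %s", unit);
}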

void perf_stat__print_shadow_stats(struct perf_stat_config *config,
				   struct evsel *evsel,
				   double avg, int cpu,
				   struct perf_stat_output_ctx *out,
				   struct rblist *metric_events,
				   struct runtime_stat *st);
void perf_stat__collect_metric_expr(struct evlist *);

int perf_evlist__alloc_stats(struct evlist *evlist, bool alloc_raw);
void perf_evlist__free_stats(struct evlist *evlist);
void perf_evlist__reset_stats(struct evlist *evlist);

perf stat: Reset previous counts on repeat with interval
When using 'perf stat' with the repeat and interval options, it shows
wrong values for events.
The wrong values will be shown for the first interval on the second and
subsequent repetitions.
Without the fix:
# perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
2.000282489 53 faults
2.000282489 513 sched:sched_switch
4.005478208 3,721 faults
4.005478208 2,666 sched:sched_switch
5.025470933 395 faults
5.025470933 1,307 sched:sched_switch
2.009602825 1,84,46,74,40,73,70,95,47,520 faults <------
2.009602825 1,84,46,74,40,73,70,95,49,568 sched:sched_switch <------
4.019612206 4,730 faults
4.019612206 2,746 sched:sched_switch
5.039615484 3,953 faults
5.039615484 1,496 sched:sched_switch
2.000274620 1,84,46,74,40,73,70,95,47,520 faults <------
2.000274620 1,84,46,74,40,73,70,95,47,520 sched:sched_switch <------
4.000480342 4,282 faults
4.000480342 2,303 sched:sched_switch
5.000916811 1,322 faults
5.000916811 1,064 sched:sched_switch
#
prev_raw_counts is allocated when using intervals. This is used when
calculating the difference in the counts of events when using interval.
The current counts are stored in prev_raw_counts to calculate the
differences in the next iteration.
On the first interval of the second and subsequent repetitions,
prev_raw_counts would be the values stored in the last interval of the
previous repetitions, while the current counts will only be for the
first interval of the current repetition.
Hence there is a possibility of events showing up as huge numbers.
Fix this by resetting prev_raw_counts whenever perf stat repeats the
command.
With the fix:
# perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
2.019349347 2,597 faults
2.019349347 2,753 sched:sched_switch
4.019577372 3,098 faults
4.019577372 2,532 sched:sched_switch
5.019415481 1,879 faults
5.019415481 1,356 sched:sched_switch
2.000178813 8,468 faults
2.000178813 2,254 sched:sched_switch
4.000404621 7,440 faults
4.000404621 1,266 sched:sched_switch
5.040196079 2,458 faults
5.040196079 556 sched:sched_switch
2.000191939 6,870 faults
2.000191939 1,170 sched:sched_switch
4.000414103 541 faults
4.000414103 902 sched:sched_switch
5.000809863 450 faults
5.000809863 364 sched:sched_switch
#
Committer notes:
This was broken since the cset introducing the --interval feature, i.e.
--repeat + --interval wasn't tested at that point, add the Fixes tag so
that automatic scripts can pick this up.
Fixes: 13370a9b5bb8 ("perf stat: Add interval printing")
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: stable@vger.kernel.org # v3.9+
Link: http://lore.kernel.org/lkml/20190904094738.9558-2-srikar@linux.vnet.ibm.com
[ Fixed up conflicts with libperf, i.e. some perf_{evsel,evlist} lost the 'perf' prefix ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
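
As a one-line illustration of why the stale counts explode: interval
deltas are computed with unsigned arithmetic, so with hypothetical names

	u64 delta = cur_count - prev_count;	/* wraps toward 2^64 when cur < prev */

and indeed the bogus 1,84,46,74,40,73,70,95,47,520 above (digits grouped
in the locale's style) is 18,446,744,073,709,547,520, i.e. 2^64 - 4096.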

void perf_evlist__reset_prev_raw_counts(struct evlist *evlist);
void perf_evlist__copy_prev_raw_counts(struct evlist *evlist);
void perf_evlist__save_aggr_prev_raw_counts(struct evlist *evlist);

int perf_stat_process_counter(struct perf_stat_config *config,
			      struct evsel *counter);

struct perf_tool;
union perf_event;
struct perf_session;
struct target;

int perf_event__process_stat_event(struct perf_session *session,
				   union perf_event *event);

size_t perf_event__fprintf_stat(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_stat_round(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp);

int create_perf_stat_counter(struct evsel *evsel,
			     struct perf_stat_config *config,
			     struct target *target,
			     int cpu);
void
perf_evlist__print_counters(struct evlist *evlist,
			    struct perf_stat_config *config,
			    struct target *_target,
			    struct timespec *ts,
			    int argc, const char **argv);

struct metric_expr;
double test_generic_metric(struct metric_expr *mexp, int cpu, struct runtime_stat *st);

#endif