License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was under a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, the Linux-syscall-note was added if any GPL-family
license was found in the file, or if it had no licensing in it (per the
prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review were done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial
patch version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
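As a concrete illustration of the two comment types mentioned above, the
identifier goes on the first line of each file, in the comment style the
file type expects; this is a sketch following the kernel's convention, not
the script's literal output:

/* Form used as the first line of a .h header file: */
/* SPDX-License-Identifier: GPL-2.0 */

// Form used as the first line of a .c source file:
// SPDX-License-Identifier: GPL-2.0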
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
#include "evsel.h"
#include "stat.h"
#include "color.h"
#include "pmu.h"

perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update and retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event if it is
missing; however, we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

#include "rblist.h"
#include "evlist.h"
#include "expr.h"
#include "metricgroup.h"
#include <linux/zalloc.h>

perf stat: Support metrics in --per-core/socket mode
Enable metrics printing in --per-core / --per-socket mode. We need to
save the shadow metrics in a unique place. Always use the first CPU in
the aggregation. Then use the same CPU to retrieve the shadow value
later.
Example output:
% perf stat --per-core -a ./BC1s
Performance counter stats for 'system wide':
S0-C0 2 2966.020381 task-clock (msec) # 2.004 CPUs utilized (100.00%)
S0-C0 2 49 context-switches # 0.017 K/sec (100.00%)
S0-C0 2 4 cpu-migrations # 0.001 K/sec (100.00%)
S0-C0 2 467 page-faults # 0.157 K/sec
S0-C0 2 4,599,061,773 cycles # 1.551 GHz (100.00%)
S0-C0 2 9,755,886,883 instructions # 2.12 insn per cycle (100.00%)
S0-C0 2 1,906,272,125 branches # 642.704 M/sec (100.00%)
S0-C0 2 81,180,867 branch-misses # 4.26% of all branches
S0-C1 2 2965.995373 task-clock (msec) # 2.003 CPUs utilized (100.00%)
S0-C1 2 62 context-switches # 0.021 K/sec (100.00%)
S0-C1 2 8 cpu-migrations # 0.003 K/sec (100.00%)
S0-C1 2 281 page-faults # 0.095 K/sec
S0-C1 2 6,347,290 cycles # 0.002 GHz (100.00%)
S0-C1 2 4,654,156 instructions # 0.73 insn per cycle (100.00%)
S0-C1 2 947,121 branches # 0.319 M/sec (100.00%)
S0-C1 2 37,322 branch-misses # 3.94% of all branches
1.480409747 seconds time elapsed
v2: Rebase to older patches
v3: Document shadow cpus. Fix aggr_get_id argument. Fix -A shadows (Jiri)
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1456785386-19481-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
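To make the "first CPU in the aggregation" rule concrete, here is a minimal
sketch of choosing the CPU slot that stores shadow stats for one aggregation
bucket, mirroring the AGGR_* rules spelled out in the comment below; this is
an illustration only, not the perf code, and first_cpu_of_aggr() is a
hypothetical helper:

static int shadow_stat_cpu(enum aggr_mode mode, int cpu)
{
	switch (mode) {
	case AGGR_GLOBAL:
		return 0;				/* everything lands on CPU 0 */
	case AGGR_SOCKET:
	case AGGR_DIE:
	case AGGR_CORE:
		return first_cpu_of_aggr(mode, cpu);	/* hypothetical helper */
	case AGGR_NONE:
	default:
		return cpu;				/* per-CPU: use the matching CPU */
	}
}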

/*
 * AGGR_GLOBAL: Use CPU 0
 * AGGR_SOCKET: Use first CPU of socket
 * AGGR_DIE: Use first CPU of die
 * AGGR_CORE: Use first CPU of core
 * AGGR_NONE: Use matching CPU
 * AGGR_THREAD: Not supported?
 */

struct runtime_stat rt_stat;
struct stats walltime_nsecs_stats;

struct saved_value {
	struct rb_node rb_node;
	struct evsel *evsel;
	enum stat_type type;
	int ctx;
	int cpu;
	struct runtime_stat *stat;
	struct stats stats;
	u64 metric_total;
	int metric_other;
};

static int saved_value_cmp(struct rb_node *rb_node, const void *entry)
{
	struct saved_value *a = container_of(rb_node,
					     struct saved_value,
					     rb_node);
	const struct saved_value *b = entry;

	if (a->cpu != b->cpu)
		return a->cpu - b->cpu;

	/*
	 * Previously the rbtree was used to link generic metrics.
	 * The keys were evsel/cpu. Now the rbtree is extended to support
	 * per-thread shadow stats. For shadow stats case, the keys
	 * are cpu/type/ctx/stat (evsel is NULL). For generic metrics
	 * case, the keys are still evsel/cpu (type/ctx/stat are 0 or NULL).
	 */
	if (a->type != b->type)
		return a->type - b->type;

	if (a->ctx != b->ctx)
		return a->ctx - b->ctx;

	if (a->evsel == NULL && b->evsel == NULL) {
		if (a->stat == b->stat)
			return 0;

		if ((char *)a->stat < (char *)b->stat)
			return -1;

		return 1;
	}

	if (a->evsel == b->evsel)
		return 0;
	if ((char *)a->evsel < (char *)b->evsel)
		return -1;
	return +1;
}

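/*
 * An illustration (not in the original file): the two key shapes the
 * comparator above distinguishes, written as calls to
 * saved_value_lookup(), which is defined later in this file:
 *
 *   generic metric:  saved_value_lookup(evsel, cpu, false, STAT_NONE, 0, st)
 *   shadow stat:     saved_value_lookup(NULL, cpu, false, STAT_CYCLES, ctx, st)
 */
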
static struct rb_node *saved_value_new(struct rblist *rblist __maybe_unused,
				       const void *entry)
{
	struct saved_value *nd = malloc(sizeof(struct saved_value));

	if (!nd)
		return NULL;
	memcpy(nd, entry, sizeof(struct saved_value));
	return &nd->rb_node;
}

static void saved_value_delete(struct rblist *rblist __maybe_unused,
			       struct rb_node *rb_node)
{
	struct saved_value *v;

	BUG_ON(!rb_node);
	v = container_of(rb_node, struct saved_value, rb_node);
	free(v);
}

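/*
 * Look up the saved_value for the given key; when "create" is true, a
 * missing node is allocated (via saved_value_new) and inserted first.
 */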
static struct saved_value *saved_value_lookup(struct evsel *evsel,
					      int cpu,
					      bool create,
					      enum stat_type type,
					      int ctx,
					      struct runtime_stat *st)
{
	struct rblist *rblist;
	struct rb_node *nd;
	struct saved_value dm = {
		.cpu = cpu,
		.evsel = evsel,
		.type = type,
		.ctx = ctx,
		.stat = st,
	};

	rblist = &st->value_list;

	nd = rblist__find(rblist, &dm);
	if (nd)
		return container_of(nd, struct saved_value, rb_node);
	if (create) {
		rblist__add_node(rblist, &dm);
		nd = rblist__find(rblist, &dm);
		if (nd)
			return container_of(nd, struct saved_value, rb_node);
	}
	return NULL;
}

void runtime_stat__init(struct runtime_stat *st)
{
	struct rblist *rblist = &st->value_list;

	rblist__init(rblist);
	rblist->node_cmp = saved_value_cmp;
	rblist->node_new = saved_value_new;
	rblist->node_delete = saved_value_delete;
}

void runtime_stat__exit(struct runtime_stat *st)
{
	rblist__exit(&st->value_list);
}

void perf_stat__init_shadow_stats(void)
{
	runtime_stat__init(&rt_stat);
}

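/*
 * Pack the evsel's exclude_* attribute bits into a context index, so
 * that the same event counted in different contexts (e.g. user-only
 * vs. kernel-only) is tracked as a separate shadow stat.
 */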
static int evsel_context(struct evsel *evsel)
{
	int ctx = 0;

	if (evsel->core.attr.exclude_kernel)
		ctx |= CTX_BIT_KERNEL;
	if (evsel->core.attr.exclude_user)
		ctx |= CTX_BIT_USER;
	if (evsel->core.attr.exclude_hv)
		ctx |= CTX_BIT_HV;
	if (evsel->core.attr.exclude_host)
		ctx |= CTX_BIT_HOST;
	if (evsel->core.attr.exclude_idle)
		ctx |= CTX_BIT_IDLE;

	return ctx;
}

static void reset_stat(struct runtime_stat *st)
{
	struct rblist *rblist;
	struct rb_node *pos, *next;

	rblist = &st->value_list;
	next = rb_first_cached(&rblist->entries);
	while (next) {
		pos = next;
		next = rb_next(pos);
		memset(&container_of(pos, struct saved_value, rb_node)->stats,
		       0,
		       sizeof(struct stats));
	}
}

void perf_stat__reset_shadow_stats(void)
{
	reset_stat(&rt_stat);
	memset(&walltime_nsecs_stats, 0, sizeof(walltime_nsecs_stats));
}

void perf_stat__reset_shadow_per_stat(struct runtime_stat *st)
{
	reset_stat(st);
}

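/*
 * Record one counter value in the shadow rbtree, using the "shadow
 * stat" key shape: (type, ctx, cpu) with evsel == NULL.
 */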
static void update_runtime_stat(struct runtime_stat *st,
				enum stat_type type,
				int ctx, int cpu, u64 count)
{
	struct saved_value *v = saved_value_lookup(NULL, cpu, true,
						   type, ctx, st);

	if (v)
		update_stats(&v->stats, count);
}

/*
 * Update various tracking values we maintain to print
 * more semantic information such as miss/hit ratios,
 * instruction rates, etc:
 */
void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
				    int cpu, struct runtime_stat *st)
{
	int ctx = evsel_context(counter);
	u64 count_ns = count;
	struct saved_value *v;

	count *= counter->scale;

	if (evsel__is_clock(counter))
		update_runtime_stat(st, STAT_NSECS, 0, cpu, count_ns);
	else if (evsel__match(counter, HARDWARE, HW_CPU_CYCLES))
		update_runtime_stat(st, STAT_CYCLES, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, CYCLES_IN_TX))
		update_runtime_stat(st, STAT_CYCLES_IN_TX, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TRANSACTION_START))
		update_runtime_stat(st, STAT_TRANSACTION, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, ELISION_START))
		update_runtime_stat(st, STAT_ELISION, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS))
		update_runtime_stat(st, STAT_TOPDOWN_TOTAL_SLOTS,
				    ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED))
		update_runtime_stat(st, STAT_TOPDOWN_SLOTS_ISSUED,
				    ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED))
		update_runtime_stat(st, STAT_TOPDOWN_SLOTS_RETIRED,
				    ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES))
		update_runtime_stat(st, STAT_TOPDOWN_FETCH_BUBBLES,
				    ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES))
		update_runtime_stat(st, STAT_TOPDOWN_RECOVERY_BUBBLES,
				    ctx, cpu, count);
	else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND))
		update_runtime_stat(st, STAT_STALLED_CYCLES_FRONT,
				    ctx, cpu, count);
	else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND))
		update_runtime_stat(st, STAT_STALLED_CYCLES_BACK,
				    ctx, cpu, count);
	else if (evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS))
		update_runtime_stat(st, STAT_BRANCHES, ctx, cpu, count);
	else if (evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES))
		update_runtime_stat(st, STAT_CACHEREFS, ctx, cpu, count);
	else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1D))
		update_runtime_stat(st, STAT_L1_DCACHE, ctx, cpu, count);
	else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1I))
		update_runtime_stat(st, STAT_L1_ICACHE, ctx, cpu, count);
	else if (evsel__match(counter, HW_CACHE, HW_CACHE_LL))
		update_runtime_stat(st, STAT_LL_CACHE, ctx, cpu, count);
	else if (evsel__match(counter, HW_CACHE, HW_CACHE_DTLB))
		update_runtime_stat(st, STAT_DTLB_CACHE, ctx, cpu, count);
	else if (evsel__match(counter, HW_CACHE, HW_CACHE_ITLB))
		update_runtime_stat(st, STAT_ITLB_CACHE, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, SMI_NUM))
		update_runtime_stat(st, STAT_SMI_NUM, ctx, cpu, count);
	else if (perf_stat_evsel__is(counter, APERF))
		update_runtime_stat(st, STAT_APERF, ctx, cpu, count);

	if (counter->collect_stat) {
		v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st);
		update_stats(&v->stats, count);
		if (counter->metric_leader)
			v->metric_total += count;
	} else if (counter->metric_leader) {
		v = saved_value_lookup(counter->metric_leader,
				       cpu, true, STAT_NONE, 0, st);
		v->metric_total += count;
		v->metric_other++;
	}
}

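/*
 * Color thresholds for printed ratios: above the first value in a row
 * the ratio is printed red, above the second magenta, above the third
 * yellow; otherwise the normal color is used.
 */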
/* used for get_ratio_color() */
enum grc_type {
	GRC_STALLED_CYCLES_FE,
	GRC_STALLED_CYCLES_BE,
	GRC_CACHE_MISSES,
	GRC_MAX_NR
};

static const char *get_ratio_color(enum grc_type type, double ratio)
{
	static const double grc_table[GRC_MAX_NR][3] = {
		[GRC_STALLED_CYCLES_FE] = { 50.0, 30.0, 10.0 },
		[GRC_STALLED_CYCLES_BE] = { 75.0, 50.0, 20.0 },
		[GRC_CACHE_MISSES] = { 20.0, 10.0, 5.0 },
	};
	const char *color = PERF_COLOR_NORMAL;

	if (ratio > grc_table[type][0])
		color = PERF_COLOR_RED;
	else if (ratio > grc_table[type][1])
		color = PERF_COLOR_MAGENTA;
	else if (ratio > grc_table[type][2])
		color = PERF_COLOR_YELLOW;

	return color;
}

static struct evsel *perf_stat__find_event(struct evlist *evsel_list,
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update and retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event, if it is
missing, however we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 04:17:08 +08:00
|
|
|
const char *name)
|
|
|
|
{
|
2019-07-21 19:23:51 +08:00
|
|
|
struct evsel *c2;
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update and retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event, if it is
missing, however we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 04:17:08 +08:00
|
|
|
|
|
|
|
evlist__for_each_entry (evsel_list, c2) {
|
2019-06-25 03:37:08 +08:00
|
|
|
if (!strcasecmp(c2->name, name) && !c2->collect_stat)
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update and retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event, if it is
missing, however we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 04:17:08 +08:00
|
|
|
return c2;
|
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
perf metricgroups: Enhance JSON/metric infrastructure to handle "?"

This patch enhances the current metric infrastructure to handle "?" in a
metric expression. "?" can be used for parameters whose value is not known
while creating the metric events, and which can be replaced later at
runtime with the proper value. It also adds the flexibility to create
multiple events out of a single metric event in the JSON file.

The patch adds the function 'arch_get_runtimeparam', an arch-specific
function that returns the number of metric events that need to be created.
By default it returns 1.

This infrastructure is needed for hv_24x7 socket/chip level events.
"hv_24x7" chip level events need the specific chip-id for which the data
is requested. 'arch_get_runtimeparam' is implemented in header.c, where it
extracts the number of sockets from the sysfs file "sockets" under
"/sys/devices/hv_24x7/interface/".

With this patch we basically create as many metric events as defined by
runtime_param. For that, a loop is added in 'metricgroup__add_metric'
which creates multiple events at runtime, depending on the return value of
'arch_get_runtimeparam', and merges each event into 'group_list'. To
achieve that, the parameter value is passed to `expr__find_other`, which
replaces any "?" in the metric expression with it.

Since the JSON file carries a single metric event out of which multiple
events are created, the parameter value is also printed in generic_metric
so it is clear which count belongs to which parameter value. For example:

command:# ./perf stat -M PowerBUS_Frequency -C 0 -I 1000
1.000101867 9,356,933 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0
1.000101867 9,366,134 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1
2.000314878 9,365,868 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0
2.000314878 9,366,092 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1

Here the _0 and _1 suffixes after PowerBUS_Frequency carry the parameter
value.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lore.kernel.org/lkml/20200401203340.31402-5-kjain@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

/* Mark MetricExpr target events and link events using them to them. */
void perf_stat__collect_metric_expr(struct evlist *evsel_list)
{
	struct evsel *counter, *leader, **metric_events, *oc;
	bool found;
	const char **metric_names;
	int i;
	int num_metric_names;

	evlist__for_each_entry(evsel_list, counter) {
		bool invalid = false;

		leader = counter->leader;
		if (!counter->metric_expr)
			continue;
		metric_events = counter->metric_events;
		if (!metric_events) {
			if (expr__find_other(counter->metric_expr,
					     counter->name,
					     &metric_names,
					     &num_metric_names, 1) < 0)
				continue;

			metric_events = calloc(sizeof(struct evsel *),
					       num_metric_names + 1);
			if (!metric_events)
				return;
			counter->metric_events = metric_events;
		}
		for (i = 0; i < num_metric_names; i++) {
			found = false;
			if (leader) {
				/* Search in group */
				for_each_group_member (oc, leader) {
					if (!strcasecmp(oc->name, metric_names[i]) &&
					    !oc->collect_stat) {
						found = true;
						break;
					}
				}
			}
			if (!found) {
				/* Search ignoring groups */
				oc = perf_stat__find_event(evsel_list, metric_names[i]);
			}
			if (!oc) {
				/* Deduping one is good enough to handle duplicated PMUs. */
				static char *printed;

				/*
				 * Adding events automatically would be difficult, because
				 * it would risk creating groups that are not schedulable.
				 * perf stat doesn't understand all the scheduling constraints
				 * of events. So we ask the user instead to add the missing
				 * events.
				 */
				if (!printed || strcasecmp(printed, metric_names[i])) {
					fprintf(stderr,
						"Add %s event to groups to get metric expression for %s\n",
						metric_names[i],
						counter->name);
					printed = strdup(metric_names[i]);
				}
				invalid = true;
				continue;
			}
			metric_events[i] = oc;
			oc->collect_stat = true;
		}
		metric_events[i] = NULL;
		free(metric_names);
		if (invalid) {
			free(metric_events);
			counter->metric_events = NULL;
			counter->metric_expr = NULL;
		}
	}
}
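To make the "?" mechanism from the commit message above concrete, here is a
standalone illustration. perf performs this substitution inside the expression
parser; the helper below is invented and shows the same effect as plain string
substitution. The event syntax mirrors the hv_24x7 example from that message.

/* Illustration only -- not perf code. */
#include <stdio.h>

static void expand_runtime_param(const char *expr, int runtime,
				 char *buf, size_t sz)
{
	size_t n = 0;

	for (; *expr && n + 16 < sz; expr++) {
		if (*expr == '?')
			n += snprintf(buf + n, sz - n, "%d", runtime);
		else
			buf[n++] = *expr;
	}
	buf[n] = 0;
}

int main(void)
{
	char buf[128];

	/* hv_24x7-style event term; the chip index is filled in per chip */
	expand_runtime_param("hv_24x7@pm_pb_cyc\\,chip\\=?@", 1, buf, sizeof(buf));
	printf("%s\n", buf);	/* hv_24x7@pm_pb_cyc\,chip\=1@ */
	return 0;
}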
static double runtime_stat_avg(struct runtime_stat *st,
			       enum stat_type type, int ctx, int cpu)
{
	struct saved_value *v;

	v = saved_value_lookup(NULL, cpu, false, type, ctx, st);
	if (!v)
		return 0.0;

	return avg_stats(&v->stats);
}

static double runtime_stat_n(struct runtime_stat *st,
			     enum stat_type type, int ctx, int cpu)
{
	struct saved_value *v;

	v = saved_value_lookup(NULL, cpu, false, type, ctx, st);
	if (!v)
		return 0.0;

	return v->stats.n;
}
static void print_stalled_cycles_frontend(struct perf_stat_config *config,
					  int cpu,
					  struct evsel *evsel, double avg,
					  struct perf_stat_output_ctx *out,
					  struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_STALLED_CYCLES_FE, ratio);

	if (ratio)
		out->print_metric(config, out->ctx, color, "%7.2f%%", "frontend cycles idle",
				  ratio);
	else
		out->print_metric(config, out->ctx, NULL, NULL, "frontend cycles idle", 0);
}

static void print_stalled_cycles_backend(struct perf_stat_config *config,
					 int cpu,
					 struct evsel *evsel, double avg,
					 struct perf_stat_output_ctx *out,
					 struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_STALLED_CYCLES_BE, ratio);

	out->print_metric(config, out->ctx, color, "%7.2f%%", "backend cycles idle", ratio);
}

static void print_branch_misses(struct perf_stat_config *config,
				int cpu,
				struct evsel *evsel,
				double avg,
				struct perf_stat_output_ctx *out,
				struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_BRANCHES, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);

	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all branches", ratio);
}

static void print_l1_dcache_misses(struct perf_stat_config *config,
				   int cpu,
				   struct evsel *evsel,
				   double avg,
				   struct perf_stat_output_ctx *out,
				   struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_L1_DCACHE, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);

	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-dcache hits", ratio);
}

static void print_l1_icache_misses(struct perf_stat_config *config,
				   int cpu,
				   struct evsel *evsel,
				   double avg,
				   struct perf_stat_output_ctx *out,
				   struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_L1_ICACHE, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);
	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-icache hits", ratio);
}

static void print_dtlb_cache_misses(struct perf_stat_config *config,
				    int cpu,
				    struct evsel *evsel,
				    double avg,
				    struct perf_stat_output_ctx *out,
				    struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_DTLB_CACHE, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);
	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all dTLB cache hits", ratio);
}

static void print_itlb_cache_misses(struct perf_stat_config *config,
				    int cpu,
				    struct evsel *evsel,
				    double avg,
				    struct perf_stat_output_ctx *out,
				    struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_ITLB_CACHE, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);
	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all iTLB cache hits", ratio);
}

static void print_ll_cache_misses(struct perf_stat_config *config,
				  int cpu,
				  struct evsel *evsel,
				  double avg,
				  struct perf_stat_output_ctx *out,
				  struct runtime_stat *st)
{
	double total, ratio = 0.0;
	const char *color;
	int ctx = evsel_context(evsel);

	total = runtime_stat_avg(st, STAT_LL_CACHE, ctx, cpu);

	if (total)
		ratio = avg / total * 100.0;

	color = get_ratio_color(GRC_CACHE_MISSES, ratio);
	out->print_metric(config, out->ctx, color, "%7.2f%%", "of all LL-cache hits", ratio);
}
/*
 * High level "TopDown" CPU core pipe line bottleneck break down.
 *
 * Basic concept following
 * Yasin, A Top Down Method for Performance analysis and Counter architecture
 * ISPASS14
 *
 * The CPU pipeline is divided into 4 areas that can be bottlenecks:
 *
 * Frontend -> Backend -> Retiring
 * BadSpeculation in addition means out of order execution that is thrown away
 * (for example branch mispredictions)
 * Frontend is instruction decoding.
 * Backend is execution, like computation and accessing data in memory
 * Retiring is good execution that is not directly bottlenecked
 *
 * The formulas are computed in slots.
 * A slot is an entry in the pipeline each for the pipeline width
 * (for example a 4-wide pipeline has 4 slots for each cycle)
 *
 * Formulas:
 * BadSpeculation = ((SlotsIssued - SlotsRetired) + RecoveryBubbles) /
 *			TotalSlots
 * Retiring = SlotsRetired / TotalSlots
 * FrontendBound = FetchBubbles / TotalSlots
 * BackendBound = 1.0 - BadSpeculation - Retiring - FrontendBound
 *
 * The kernel provides the mapping to the low level CPU events and any scaling
 * needed for the CPU pipeline width, for example:
 *
 * TotalSlots = Cycles * 4
 *
 * The scaling factor is communicated in the sysfs unit.
 *
 * In some cases the CPU may not be able to measure all the formulas due to
 * missing events. In this case multiple formulas are combined, as possible.
 *
 * Full TopDown supports more levels to sub-divide each area: for example
 * BackendBound into computing bound and memory bound. For now we only
 * support Level 1 TopDown.
 */
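A worked instance of the Level 1 formulas above (all counts invented; a 4-wide
machine measured for one million cycles):

#include <stdio.h>

int main(void)
{
	double total_slots = 1000000.0 * 4;	/* Cycles * pipeline width */
	double slots_issued = 2600000.0;
	double slots_retired = 2200000.0;
	double recovery_bubbles = 100000.0;
	double fetch_bubbles = 600000.0;

	double bad_spec = ((slots_issued - slots_retired) + recovery_bubbles)
			  / total_slots;			/* 0.125 */
	double retiring = slots_retired / total_slots;		/* 0.550 */
	double fe_bound = fetch_bubbles / total_slots;		/* 0.150 */
	double be_bound = 1.0 - bad_spec - retiring - fe_bound;	/* 0.175 */

	printf("retiring %.3f bad_spec %.3f fe %.3f be %.3f\n",
	       retiring, bad_spec, fe_bound, be_bound);
	return 0;
}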
static double sanitize_val(double x)
{
	if (x < 0 && x >= -0.02)
		return 0.0;
	return x;
}
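Illustrative check of the guard (assumes sanitize_val() above, same translation
unit): results that go slightly negative from measurement jitter are clamped to
zero, while clearly wrong values pass through so they stay visible.

static void example_sanitize_val(void)
{
	assert(sanitize_val(-0.01) == 0.0);	/* jitter clamped to zero */
	assert(sanitize_val(-0.50) == -0.50);	/* real problem kept visible */
	assert(sanitize_val(0.25) == 0.25);	/* positive values untouched */
}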
static double td_total_slots(int ctx, int cpu, struct runtime_stat *st)
{
	return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, ctx, cpu);
}

static double td_bad_spec(int ctx, int cpu, struct runtime_stat *st)
{
	double bad_spec = 0;
	double total_slots;
	double total;

	total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, ctx, cpu) -
		runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, ctx, cpu) +
		runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, ctx, cpu);

	total_slots = td_total_slots(ctx, cpu, st);
	if (total_slots)
		bad_spec = total / total_slots;
	return sanitize_val(bad_spec);
}

static double td_retiring(int ctx, int cpu, struct runtime_stat *st)
{
	double retiring = 0;
	double total_slots = td_total_slots(ctx, cpu, st);
	double ret_slots = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED,
					    ctx, cpu);

	if (total_slots)
		retiring = ret_slots / total_slots;
	return retiring;
}

static double td_fe_bound(int ctx, int cpu, struct runtime_stat *st)
{
	double fe_bound = 0;
	double total_slots = td_total_slots(ctx, cpu, st);
	double fetch_bub = runtime_stat_avg(st, STAT_TOPDOWN_FETCH_BUBBLES,
					    ctx, cpu);

	if (total_slots)
		fe_bound = fetch_bub / total_slots;
	return fe_bound;
}

static double td_be_bound(int ctx, int cpu, struct runtime_stat *st)
{
	double sum = (td_fe_bound(ctx, cpu, st) +
		      td_bad_spec(ctx, cpu, st) +
		      td_retiring(ctx, cpu, st));
	if (sum == 0)
		return 0;
	return sanitize_val(1.0 - sum);
}
static void print_smi_cost(struct perf_stat_config *config,
			   int cpu, struct evsel *evsel,
			   struct perf_stat_output_ctx *out,
			   struct runtime_stat *st)
{
	double smi_num, aperf, cycles, cost = 0.0;
	int ctx = evsel_context(evsel);
	const char *color = NULL;

	smi_num = runtime_stat_avg(st, STAT_SMI_NUM, ctx, cpu);
	aperf = runtime_stat_avg(st, STAT_APERF, ctx, cpu);
	cycles = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);

	if ((cycles == 0) || (aperf == 0))
		return;

	if (smi_num)
		cost = (aperf - cycles) / aperf * 100.00;

	if (cost > 10)
		color = PERF_COLOR_RED;
	out->print_metric(config, out->ctx, color, "%8.1f%%", "SMI cycles%", cost);
	out->print_metric(config, out->ctx, NULL, "%4.0f", "SMI#", smi_num);
}
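The cost formula reads: whatever fraction of aperf cycles is not covered by
the unhalted-cycles count was spent servicing SMIs. A worked instance with
invented numbers:

#include <stdio.h>

int main(void)
{
	double aperf = 1000000.0, cycles = 920000.0;

	/* cycles counts only non-SMI work; the gap is SMI overhead */
	printf("SMI cycles%% = %.1f\n",
	       (aperf - cycles) / aperf * 100.0);	/* 8.0 */
	return 0;
}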
perf metricgroup: Scale the metric result

Some metrics define a scale unit, such as:

{
    "BriefDescription": "Intel Optane DC persistent memory read latency (ns). Derived from unc_m_pmm_rpq_occupancy.all",
    "Counter": "0,1,2,3",
    "EventCode": "0xE0",
    "EventName": "UNC_M_PMM_READ_LATENCY",
    "MetricExpr": "UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS",
    "MetricName": "UNC_M_PMM_READ_LATENCY",
    "PerPkg": "1",
    "ScaleUnit": "6000000000ns",
    "UMask": "0x1",
    "Unit": "iMC"
},

For the above example the ratio should be

ratio = (UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS) * 6000000000

but in the current code the ratio is not scaled (* 6000000000). With this
patch the ratio is scaled and the unit (ns) is printed, for example:

# 219.4 ns UNC_M_PMM_READ_LATENCY

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20190828055932.8269-4-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

perf stat: Fix free memory access / memory leaks in metrics

Make sure not to free the name passed in by the caller, but free all the
allocated ids when parsing expressions. The loop at the end knows that the
first entry shouldn't be freed, so make sure the caller's name is the
first entry.

Fixes (under valgrind):

% perf stat -M IpB,IpCall,IpTB,IPC,Retiring_SMT,Frontend_Bound_SMT,Kernel_Utilization,CPU_Utilization --metric-only -a -I 1000 sleep 2

1.009943231 ==21527== Invalid read of size 1
==21527== at 0x483CB74: strcmp (vg_replace_strmem.c:849)
==21527== by 0x582CF8: collect_all_aliases (stat-display.c:554)
==21527== by 0x582EB3: collect_data (stat-display.c:577)
==21527== by 0x583A32: print_counter_aggr (stat-display.c:806)
==21527== by 0x584FAD: perf_evlist__print_counters (stat-display.c:1200)
==21527== by 0x45133A: print_counters (builtin-stat.c:655)
==21527== by 0x450629: process_interval (builtin-stat.c:353)
==21527== by 0x450FBD: __run_perf_stat (builtin-stat.c:564)
==21527== by 0x451285: run_perf_stat (builtin-stat.c:636)
==21527== by 0x454619: cmd_stat (builtin-stat.c:1966)
==21527== by 0x4D557D: run_builtin (perf.c:310)
==21527== by 0x4D57EA: handle_internal_command (perf.c:362)
==21527== Address 0x12826cd0 is 0 bytes inside a block of size 25 free'd
==21527== at 0x4839A0C: free (vg_replace_malloc.c:540)
==21527== by 0x627041: __zfree (zalloc.c:13)
==21527== by 0x57F66A: generic_metric (stat-shadow.c:814)
==21527== by 0x580B21: perf_stat__print_shadow_stats (stat-shadow.c:1057)
==21527== by 0x58418E: print_metric_headers (stat-display.c:943)
==21527== by 0x5844BC: print_interval (stat-display.c:1004)
==21527== by 0x584DEB: perf_evlist__print_counters (stat-display.c:1172)
==21527== by 0x45133A: print_counters (builtin-stat.c:655)
==21527== by 0x450629: process_interval (builtin-stat.c:353)
==21527== by 0x450FBD: __run_perf_stat (builtin-stat.c:564)
==21527== by 0x451285: run_perf_stat (builtin-stat.c:636)
==21527== by 0x454619: cmd_stat (builtin-stat.c:1966)
==21527== Block was alloc'd at
==21527== at 0x483880B: malloc (vg_replace_malloc.c:309)
==21527== by 0x51677DE: strdup (in /usr/lib64/libc-2.29.so)
==21527== by 0x506457: parse_events_name (parse-events.c:1754)
==21527== by 0x5550BB: parse_events_parse (parse-events.y:214)
==21527== by 0x50694D: parse_events__scanner (parse-events.c:1887)
==21527== by 0x506AEF: parse_events (parse-events.c:1927)
==21527== by 0x521D8B: metricgroup__parse_groups (metricgroup.c:527)
==21527== by 0x45156F: parse_metric_groups (builtin-stat.c:721)
==21527== by 0x6228A9: get_value (parse-options.c:243)
==21527== by 0x62363F: parse_short_opt (parse-options.c:348)
==21527== by 0x62363F: parse_options_step (parse-options.c:536)
==21527== by 0x62363F: parse_options_subcommand (parse-options.c:651)
==21527== by 0x453C1D: cmd_stat (builtin-stat.c:1718)
==21527== by 0x4D557D: run_builtin (perf.c:310)

and also a leak report.

Committer testing:

Before:

# perf stat -M IpB,IpCall,IpTB,IPC,Retiring_SMT,Frontend_Bound_SMT,Kernel_Utilization,CPU_Utilization --metric-only -a -I 1000 sleep 2
# time CPU_Utilization
1.000470810 free(): double free detected in tcache 2
Aborted (core dumped)
#

After:

# perf stat -M IpB,IpCall,IpTB,IPC,Retiring_SMT,Frontend_Bound_SMT,Kernel_Utilization,CPU_Utilization --metric-only -a -I 1000 sleep 2
# time CPU_Utilization
1.000494752 0.1
2.001105112 0.1
#

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lore.kernel.org/lkml/20190923233339.25326-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

static void generic_metric(struct perf_stat_config *config,
			   const char *metric_expr,
			   struct evsel **metric_events,
			   char *name,
			   const char *metric_name,
			   const char *metric_unit,
			   int runtime,
			   double avg,
			   int cpu,
			   struct perf_stat_output_ctx *out,
			   struct runtime_stat *st)
{
	print_metric_t print_metric = out->print_metric;
	struct expr_parse_ctx pctx;
	double ratio, scale;
	int i;
	void *ctxp = out->ctx;
	char *n, *pn;

	expr__ctx_init(&pctx);
	/* Must be first id entry */
	expr__add_id(&pctx, name, avg);
	for (i = 0; metric_events[i]; i++) {
		struct saved_value *v;
		struct stats *stats;
		u64 metric_total = 0;

		if (!strcmp(metric_events[i]->name, "duration_time")) {
			stats = &walltime_nsecs_stats;
			scale = 1e-9;
		} else {
			v = saved_value_lookup(metric_events[i], cpu, false,
					       STAT_NONE, 0, st);
			if (!v)
				break;
			stats = &v->stats;
			scale = 1.0;

			if (v->metric_other)
				metric_total = v->metric_total;
		}

		n = strdup(metric_events[i]->name);
		if (!n)
			return;
		/*
		 * This display code with --no-merge adds [cpu] postfixes.
		 * These are not supported by the parser. Remove everything
		 * after the space.
		 */
		pn = strchr(n, ' ');
		if (pn)
			*pn = 0;

		if (metric_total)
			expr__add_id(&pctx, n, metric_total);
		else
			expr__add_id(&pctx, n, avg_stats(stats)*scale);
	}

	if (!metric_events[i]) {
		if (expr__parse(&ratio, &pctx, metric_expr, runtime) == 0) {
			char *unit;
			char metric_bf[64];

			if (metric_unit && metric_name) {
				if (perf_pmu__convert_scale(metric_unit,
							    &unit, &scale) >= 0) {
					ratio *= scale;
				}
				if (strstr(metric_expr, "?"))
					scnprintf(metric_bf, sizeof(metric_bf),
						  "%s %s_%d", unit, metric_name, runtime);
				else
					scnprintf(metric_bf, sizeof(metric_bf),
						  "%s %s", unit, metric_name);
perf metricgroup: Scale the metric result
Some metrics define the scale unit, such as
{
"BriefDescription": "Intel Optane DC persistent memory read latency (ns). Derived from unc_m_pmm_rpq_occupancy.all",
"Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_READ_LATENCY",
"MetricExpr": "UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS",
"MetricName": "UNC_M_PMM_READ_LATENCY",
"PerPkg": "1",
"ScaleUnit": "6000000000ns",
"UMask": "0x1",
"Unit": "iMC"
},
For the above example, the ratio should be:
ratio = (UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS) * 6000000000
But in the current code the ratio is not scaled (* 6000000000).
With this patch, the ratio is scaled and the unit (ns) is printed.
For example:
# 219.4 ns UNC_M_PMM_READ_LATENCY
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20190828055932.8269-4-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 13:59:31 +08:00
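
The ScaleUnit string bundles a numeric multiplier and a unit suffix;
splitting it is essentially what perf_pmu__convert_scale() is used for
in the metric path above. A standalone sketch of that split
(illustrative only, not the kernel implementation):

#include <stdio.h>
#include <stdlib.h>

/* Split "6000000000ns" into scale = 6000000000 and unit = "ns". */
static int convert_scale(const char *scale_unit, char **unit, double *scale)
{
	char *end;

	*scale = strtod(scale_unit, &end);
	if (end == scale_unit)
		return -1;		/* no leading number */
	*unit = end;			/* the remainder is the unit */
	return 0;
}

int main(void)
{
	char *unit;
	double scale;

	if (convert_scale("6000000000ns", &unit, &scale) == 0)
		printf("scale=%.0f unit=%s\n", scale, unit);
	return 0;
}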
				print_metric(config, ctxp, NULL, "%8.1f",
					     metric_bf, ratio);
			} else {
				print_metric(config, ctxp, NULL, "%8.1f",
					     metric_name ?
					     metric_name :
					     out->force_header ? name : "",
					     ratio);
			}
		} else {
			print_metric(config, ctxp, NULL, NULL,
				     out->force_header ?
				     (metric_name ? metric_name : name) : "", 0);
		}
	} else {
		print_metric(config, ctxp, NULL, NULL,
			     out->force_header ?
			     (metric_name ? metric_name : name) : "", 0);
	}

	for (i = 1; i < pctx.num_ids; i++)
		zfree(&pctx.ids[i].name);
}

void perf_stat__print_shadow_stats(struct perf_stat_config *config,
				   struct evsel *evsel,
				   double avg, int cpu,
				   struct perf_stat_output_ctx *out,
				   struct rblist *metric_events,
				   struct runtime_stat *st)
{
	void *ctxp = out->ctx;
	print_metric_t print_metric = out->print_metric;
	double total, ratio = 0.0, total2;
	const char *color = NULL;
	int ctx = evsel_context(evsel);
	struct metric_event *me;
	int num = 1;

	if (evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) {
		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);

		if (total) {
			ratio = avg / total;
			print_metric(config, ctxp, NULL, "%7.2f ",
				     "insn per cycle", ratio);
		} else {
			print_metric(config, ctxp, NULL, NULL, "insn per cycle", 0);
		}

		total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT,
					 ctx, cpu);

		total = max(total, runtime_stat_avg(st,
						    STAT_STALLED_CYCLES_BACK,
						    ctx, cpu));

		if (total && avg) {
			out->new_line(config, ctxp);
			ratio = total / avg;
			print_metric(config, ctxp, NULL, "%7.2f ",
				     "stalled cycles per insn",
				     ratio);
		}
	} else if (evsel__match(evsel, HARDWARE, HW_BRANCH_MISSES)) {
		if (runtime_stat_n(st, STAT_BRANCHES, ctx, cpu) != 0)
			print_branch_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all branches", 0);
	} else if (
		evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
		evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_L1D |
					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

		if (runtime_stat_n(st, STAT_L1_DCACHE, ctx, cpu) != 0)
			print_l1_dcache_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all L1-dcache hits", 0);
	} else if (
		evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
		evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_L1I |
					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

		if (runtime_stat_n(st, STAT_L1_ICACHE, ctx, cpu) != 0)
			print_l1_icache_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all L1-icache hits", 0);
	} else if (
		evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
		evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_DTLB |
					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

		if (runtime_stat_n(st, STAT_DTLB_CACHE, ctx, cpu) != 0)
			print_dtlb_cache_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all dTLB cache hits", 0);
	} else if (
		evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
		evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_ITLB |
					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

		if (runtime_stat_n(st, STAT_ITLB_CACHE, ctx, cpu) != 0)
			print_itlb_cache_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all iTLB cache hits", 0);
	} else if (
		evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
		evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_LL |
					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

		if (runtime_stat_n(st, STAT_LL_CACHE, ctx, cpu) != 0)
			print_ll_cache_misses(config, cpu, evsel, avg, out, st);
		else
			print_metric(config, ctxp, NULL, NULL, "of all LL-cache hits", 0);
	} else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) {
		total = runtime_stat_avg(st, STAT_CACHEREFS, ctx, cpu);

		if (total)
			ratio = avg * 100 / total;

		if (runtime_stat_n(st, STAT_CACHEREFS, ctx, cpu) != 0)
			print_metric(config, ctxp, NULL, "%8.3f %%",
				     "of all cache refs", ratio);
		else
			print_metric(config, ctxp, NULL, NULL, "of all cache refs", 0);
	} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) {
		print_stalled_cycles_frontend(config, cpu, evsel, avg, out, st);
	} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_BACKEND)) {
		print_stalled_cycles_backend(config, cpu, evsel, avg, out, st);
	} else if (evsel__match(evsel, HARDWARE, HW_CPU_CYCLES)) {
		total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);

		if (total) {
			ratio = avg / total;
			print_metric(config, ctxp, NULL, "%8.3f", "GHz", ratio);
		} else {
			print_metric(config, ctxp, NULL, NULL, "Ghz", 0);
		}
	} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX)) {
		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);

		if (total)
			print_metric(config, ctxp, NULL,
				     "%7.2f%%", "transactional cycles",
				     100.0 * (avg / total));
		else
			print_metric(config, ctxp, NULL, NULL, "transactional cycles",
				     0);
	} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX_CP)) {
		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
		total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, ctx, cpu);

		if (total2 < avg)
			total2 = avg;
		if (total)
			print_metric(config, ctxp, NULL, "%7.2f%%", "aborted cycles",
				100.0 * ((total2-avg) / total));
		else
			print_metric(config, ctxp, NULL, NULL, "aborted cycles", 0);
	} else if (perf_stat_evsel__is(evsel, TRANSACTION_START)) {
		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
					 ctx, cpu);

		if (avg)
			ratio = total / avg;

		if (runtime_stat_n(st, STAT_CYCLES_IN_TX, ctx, cpu) != 0)
			print_metric(config, ctxp, NULL, "%8.0f",
				     "cycles / transaction", ratio);
		else
			print_metric(config, ctxp, NULL, NULL, "cycles / transaction",
				     0);
	} else if (perf_stat_evsel__is(evsel, ELISION_START)) {
		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
					 ctx, cpu);

		if (avg)
			ratio = total / avg;

		print_metric(config, ctxp, NULL, "%8.0f", "cycles / elision", ratio);
	} else if (evsel__is_clock(evsel)) {
		if ((ratio = avg_stats(&walltime_nsecs_stats)) != 0)
			print_metric(config, ctxp, NULL, "%8.3f", "CPUs utilized",
				     avg / (ratio * evsel->scale));
		else
			print_metric(config, ctxp, NULL, NULL, "CPUs utilized", 0);
perf stat: Get rid of extra clock display function
There is no reason to have a separate function to display clock events.
Its only purpose was to convert the nanosecond value into milliseconds.
We do that now in generic code, if the unit and scale values are
properly set, which this patch does for clock events.
The output differs in that the unit field is displayed in its own
column rather than being added as a suffix of the event name, and the
value is rounded to 2 decimal places as for any other event.
Before:
# perf stat -e cpu-clock,task-clock -C 0 sleep 3
Performance counter stats for 'CPU(s) 0':
3001.123137 cpu-clock (msec) # 1.000 CPUs utilized
3001.133250 task-clock (msec) # 1.000 CPUs utilized
3.001159813 seconds time elapsed
Now:
# perf stat -e cpu-clock,task-clock -C 0 sleep 3
Performance counter stats for 'CPU(s) 0':
3,001.05 msec cpu-clock # 1.000 CPUs utilized
3,001.05 msec task-clock # 1.000 CPUs utilized
3.001077794 seconds time elapsed
There's a small difference in csv output, as we now output the unit
field, which was empty before. It's in the proper spot, so there's no
compatibility issue.
Before:
# perf stat -e cpu-clock,task-clock -C 0 -x, sleep 3
3001.065177,,cpu-clock,3001064187,100.00,1.000,CPUs utilized
3001.077085,,task-clock,3001077085,100.00,1.000,CPUs utilized
# perf stat -e cpu-clock,task-clock -C 0 -x, sleep 3
3000.80,msec,cpu-clock,3000799026,100.00,1.000,CPUs utilized
3000.80,msec,task-clock,3000799550,100.00,1.000,CPUs utilized
Add perf_evsel__is_clock to replace nsec_counter.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180720110036.32251-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-07-20 19:00:34 +08:00
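
The mechanics amount to tagging clock events with a unit and scale so
the generic printing code can do the conversion. A minimal sketch of
the idea (simplified types and a hypothetical helper name, not the
actual kernel diff):

#include <stdbool.h>

#define PERF_TYPE_SOFTWARE		1
#define PERF_COUNT_SW_CPU_CLOCK		0
#define PERF_COUNT_SW_TASK_CLOCK	1

struct evsel_stub {
	int type;
	unsigned long config;
	const char *unit;
	double scale;
};

static bool evsel_is_clock(const struct evsel_stub *e)
{
	return e->type == PERF_TYPE_SOFTWARE &&
	       (e->config == PERF_COUNT_SW_CPU_CLOCK ||
		e->config == PERF_COUNT_SW_TASK_CLOCK);
}

/* Counts arrive in nanoseconds; display them as milliseconds. */
static void setup_clock_unit(struct evsel_stub *e)
{
	if (evsel_is_clock(e)) {
		e->unit = "msec";
		e->scale = 1e-6;	/* ns -> msec */
	}
}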
	} else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) {
		double fe_bound = td_fe_bound(ctx, cpu, st);

		if (fe_bound > 0.2)
			color = PERF_COLOR_RED;
		print_metric(config, ctxp, color, "%8.1f%%", "frontend bound",
			     fe_bound * 100.);
	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) {
		double retiring = td_retiring(ctx, cpu, st);

		if (retiring > 0.7)
			color = PERF_COLOR_GREEN;
		print_metric(config, ctxp, color, "%8.1f%%", "retiring",
			     retiring * 100.);
	} else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) {
		double bad_spec = td_bad_spec(ctx, cpu, st);

		if (bad_spec > 0.1)
			color = PERF_COLOR_RED;
		print_metric(config, ctxp, color, "%8.1f%%", "bad speculation",
			     bad_spec * 100.);
	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) {
		double be_bound = td_be_bound(ctx, cpu, st);
		const char *name = "backend bound";
		static int have_recovery_bubbles = -1;

		/* In case the CPU does not support topdown-recovery-bubbles */
		if (have_recovery_bubbles < 0)
			have_recovery_bubbles = pmu_have_event("cpu",
					"topdown-recovery-bubbles");
		if (!have_recovery_bubbles)
			name = "backend bound/bad spec";

		if (be_bound > 0.2)
			color = PERF_COLOR_RED;
		if (td_total_slots(ctx, cpu, st) > 0)
			print_metric(config, ctxp, color, "%8.1f%%", name,
				     be_bound * 100.);
		else
			print_metric(config, ctxp, NULL, NULL, name, 0);
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add an rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update, retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case we use
the original event as the description.
There is no attempt to automatically add the MetricExpr event if it is
missing; however, we suggest it to the user, because the tool doesn't
have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 04:17:08 +08:00
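
Schematically, the shadow code saves each event's value and then
evaluates the MetricExpr over those saved values. A toy version of
that flow (hypothetical helper names; the real code keys an rblist by
event, context and cpu, and evaluates via the expr__ parser):

#include <stdio.h>
#include <string.h>

struct saved_value { const char *event; double val; };

static double lookup(const struct saved_value *tab, int n, const char *ev)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(tab[i].event, ev))
			return tab[i].val;
	return 0.0;
}

int main(void)
{
	const struct saved_value tab[] = {
		{ "unc_p_clockticks",         800085181.0 },
		{ "unc_p_freq_max_os_cycles",  93126241.0 },
	};
	/* MetricExpr "unc_p_freq_max_os_cycles / unc_p_clockticks * 100",
	 * evaluated by hand instead of through the parser: */
	double ratio = lookup(tab, 2, "unc_p_freq_max_os_cycles") /
		       lookup(tab, 2, "unc_p_clockticks") * 100.0;

	printf("# %.1f\n", ratio);	/* ~11.6, matching the example above */
	return 0;
}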
	} else if (evsel->metric_expr) {
		generic_metric(config, evsel->metric_expr, evsel->metric_events, evsel->name,
			       evsel->metric_name, NULL, 1, avg, cpu, out, st);
	} else if (runtime_stat_n(st, STAT_NSECS, 0, cpu) != 0) {
		char unit = 'M';
		char unit_buf[10];

		total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);

		if (total)
			ratio = 1000.0 * avg / total;
		if (ratio < 0.001) {
			ratio *= 1000;
			unit = 'K';
		}
		snprintf(unit_buf, sizeof(unit_buf), "%c/sec", unit);
		print_metric(config, ctxp, NULL, "%8.3f", unit_buf, ratio);
	} else if (perf_stat_evsel__is(evsel, SMI_NUM)) {
		print_smi_cost(config, cpu, evsel, out, st);
	} else {
		num = 0;
	}

	if ((me = metricgroup__lookup(metric_events, evsel, false)) != NULL) {
		struct metric_expr *mexp;

		list_for_each_entry (mexp, &me->head, nd) {
			if (num++ > 0)
				out->new_line(config, ctxp);
			generic_metric(config, mexp->metric_expr, mexp->metric_events,
				       evsel->name, mexp->metric_name,
				       mexp->metric_unit, mexp->runtime, avg, cpu, out, st);
		}
	}

	if (num == 0)
		print_metric(config, ctxp, NULL, NULL, NULL, 0);
}
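
All output above goes through the print_metric and new_line callbacks
in struct perf_stat_output_ctx rather than direct printf calls, which
is what lets the stdio and CSV backends share this logic. A sketch of
a consumer with simplified callback types (illustrative only, not the
actual perf definitions):

#include <stdio.h>

typedef void (*print_metric_t)(void *ctx, const char *color,
			       const char *fmt, const char *unit,
			       double val);

static void print_metric_stdio(void *ctx, const char *color,
			       const char *fmt, const char *unit, double val)
{
	FILE *out = ctx;

	(void)color;			/* plain stdio backend ignores color */
	if (!fmt) {			/* metric not supported: unit text only */
		fprintf(out, " # %s\n", unit ? unit : "");
		return;
	}
	fprintf(out, " # ");
	fprintf(out, fmt, val);
	fprintf(out, " %s\n", unit);
}

int main(void)
{
	print_metric_t print_metric = print_metric_stdio;

	/* e.g. the "insn per cycle" branch above: */
	print_metric(stdout, NULL, "%7.2f ", "insn per cycle", 0.87);
	return 0;
}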