License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to the
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, Kate, Philippe and Thomas logged over 70 hours of manual review
on the spreadsheet to determine the SPDX license identifiers to apply to
the source files, in some cases with confirmation by lawyers working with
the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and were fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version from early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
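For illustration, the two comment styles the script has to emit look like
this; a .c source file typically takes the C++-style comment on its first
line, while headers (including uapi headers) keep a classic C comment:

// SPDX-License-Identifier: GPL-2.0
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */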
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
#include "builtin.h"
#include "perf.h"
#include "util/util.h"
perf tools: Save some loops using perf_evlist__id2evsel
Since we already ask for PERF_SAMPLE_ID and use it to quickly find the
associated evsel, add handler func + data to struct perf_evsel to avoid
using chains of if(strcmp(event_name)) and also to avoid all the linear
list searches via trace_event_find.
To demonstrate the technique, convert 'perf sched' to it:
# perf sched record sleep 5m
And then:
Performance counter stats for '/tmp/oldperf sched lat':
646.929438 task-clock # 0.999 CPUs utilized
9 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
20,901 page-faults # 0.032 M/sec
1,290,144,450 cycles # 1.994 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,606,158,439 instructions # 1.24 insns per cycle
339,088,395 branches # 524.151 M/sec
4,550,735 branch-misses # 1.34% of all branches
0.647524759 seconds time elapsed
Versus:
Performance counter stats for 'perf sched lat':
473.564691 task-clock # 0.999 CPUs utilized
9 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
20,903 page-faults # 0.044 M/sec
944,367,984 cycles # 1.994 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,442,385,571 instructions # 1.53 insns per cycle
308,383,106 branches # 651.195 M/sec
4,481,784 branch-misses # 1.45% of all branches
0.474215751 seconds time elapsed
[root@emilia ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-1kbzpl74lwi6lavpqke2u2p3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
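The dispatch pattern this commit describes (attach a handler to each evsel
and select it by the sample id, instead of strcmp() chains) can be sketched
in plain C. The types and names below are simplified stand-ins for
illustration, not the real perf-tools internals:

#include <stdint.h>
#include <stdio.h>

/* simplified stand-ins for perf's evsel and sample */
struct sample { uint64_t id; };
struct evsel {
        uint64_t id;                               /* PERF_SAMPLE_ID value */
        const char *name;
        int (*handler)(struct evsel *evsel, struct sample *sample);
};

static int handle_sched_switch(struct evsel *evsel, struct sample *sample)
{
        printf("%s: sample id %llu\n", evsel->name, (unsigned long long)sample->id);
        return 0;
}

/* id2evsel-style lookup: find the evsel owning this sample id, call its handler */
static int dispatch_sample(struct evsel *evlist, int nr, struct sample *sample)
{
        for (int i = 0; i < nr; i++)
                if (evlist[i].id == sample->id && evlist[i].handler)
                        return evlist[i].handler(&evlist[i], sample);
        return -1;                       /* unknown id: count or drop it */
}

int main(void)
{
        struct evsel evlist[] = {
                { .id = 1, .name = "sched:sched_switch", .handler = handle_sched_switch },
        };
        struct sample s = { .id = 1 };

        return dispatch_sample(evlist, 1, &s);
}

The real perf_evlist__id2evsel() resolves the id through the evlist's id
hash rather than the linear scan used above.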
#include "util/evlist.h"
#include "util/cache.h"
#include "util/evsel.h"
#include "util/symbol.h"
#include "util/thread.h"
#include "util/header.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/cloexec.h"
#include "util/thread_map.h"
#include "util/color.h"
#include "util/stat.h"
#include "util/callchain.h"
#include "util/time-utils.h"

#include <subcmd/parse-options.h>
#include "util/trace-event.h"

#include "util/debug.h"

#include <linux/kernel.h>
#include <linux/log2.h>
#include <sys/prctl.h>
#include <sys/resource.h>
#include <inttypes.h>

#include <errno.h>
#include <semaphore.h>
#include <pthread.h>
#include <math.h>
perf sched replay: Alloc the memory of pid_to_task dynamically to adapt to the unexpected change of pid_max
The current memory allocation of struct task_desc *pid_to_task[MAX_PID]
is fixed and preset, and it has two problems:
Problem 1: If pid_max, which is the max number of pids in the
system, is much smaller than MAX_PID (1024*1000), then it causes a waste
of stack memory. This may happen in the case where the number of cpu
cores is much smaller than 1000.
Problem 2: If pid_max is changed from the default value to a value
larger than MAX_PID, then it will cause an assertion failure. The
maximum value of pid_max can be set to pid_max_max (see pidmap_init
defined in kernel/pid.c), which equals PID_MAX_LIMIT. In x86_64,
PID_MAX_LIMIT is 4*1024*1024 (defined in include/linux/threads.h). This
value is much larger than MAX_PID, and would take up 32768 Kbytes
(4*1024*1024*8/1024) for the memory allocation of pid_to_task, which is
much larger than the default 8192 Kbytes of stack size of the calling
process.
Due to these two problems, we use calloc to allocate the memory of
pid_to_task dynamically.
Example:
Test environment: x86_64 with 160 cores
$ cat /proc/sys/kernel/pid_max
163840
$ echo 1025000 > /proc/sys/kernel/pid_max
$ cat /proc/sys/kernel/pid_max
1025000
Run some applications until the pid of some process is greater than
the value of MAX_PID (1024*1000).
Before this patch:
$ perf sched replay
run measurement overhead: 221 nsecs
sleep measurement overhead: 55480 nsecs
the run test took 1000008 nsecs
the sleep test took 1063151 nsecs
perf: builtin-sched.c:330: register_pid: Assertion `!(pid >= 1024000)'
failed.
Aborted
After this patch:
$ perf sched replay
run measurement overhead: 221 nsecs
sleep measurement overhead: 55435 nsecs
the run test took 1000004 nsecs
the sleep test took 1059312 nsecs
nr_run_events: 10
nr_sleep_events: 1562
nr_wakeup_events: 5
task 0 ( :1: 1), nr_events: 1
task 1 ( :2: 2), nr_events: 1
task 2 ( :3: 3), nr_events: 1
task 3 ( :5: 5), nr_events: 1
...
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-4-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
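A minimal sketch of the allocation scheme described above, sizing the table
from the running system's pid_max instead of a compile-time MAX_PID
(illustrative helper names, not the actual perf code):

#include <stdio.h>
#include <stdlib.h>

struct task_desc;                         /* defined later in builtin-sched.c */

/* Read /proc/sys/kernel/pid_max so the table matches the running system;
 * fall back to the old compile-time bound if the file cannot be read. */
static long read_pid_max(void)
{
        FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
        long pid_max = 0;

        if (!f || fscanf(f, "%ld", &pid_max) != 1 || pid_max <= 0)
                pid_max = 1024 * 1000;
        if (f)
                fclose(f);
        return pid_max;
}

/* One zeroed pointer slot per possible pid, allocated on the heap instead of
 * a fixed-size array, so unknown pids simply read back as NULL. */
static struct task_desc **alloc_pid_to_task(long *nr_pids)
{
        *nr_pids = read_pid_max();
        return calloc(*nr_pids, sizeof(struct task_desc *));
}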
#include <api/fs/fs.h>
#include <linux/time64.h>

#include "sane_ctype.h"

#define PR_SET_NAME 15 /* Set process name */
#define MAX_CPUS 4096
#define COMM_LEN 20
#define SYM_LEN 129
perf sched replay: Increase the MAX_PID value to fix assertion failure problem
Current MAX_PID is only 65536, which will cause assertion failure problem
when CPU cores are more than 64 in x86_64.
This is because the pid_max value in x86_64 is at least
PIDS_PER_CPU_DEFAULT * num_possible_cpus() (see function pidmap_init
defined in kernel/pid.c), where PIDS_PER_CPU_DEFAULT is 1024 (defined in
include/linux/threads.h).
Thus for MAX_PID = 65536, the corresponding number of CPU cores is
65536/1024=64. This is obviously not enough at all for x86_64, and will
cause an assertion failure due to BUG_ON(pid >= MAX_PID) in the
code.
We increase the MAX_PID value from 65536 to 1024*1000, which can be used
in x86_64 with 1000 cores.
This number is finally decided according to the limitation of stack size
of calling process.
Use 'ulimit -a', the result shows the stack size of any process is 8192
Kbytes, which is defined in include/uapi/linux/resource.h (#define
_STK_LIM (8*1024*1024)).
Thus we choose a large enough value for MAX_PID that still satisfies the
limitation of the stack size, i.e., making the perf process take up
a memory space just smaller than 8192 Kbytes.
We have calculated and tested that 1024*1000 is OK for MAX_PID.
This means perf sched replay can now be used with at most 1000 cores in
x86_64 without any assertion failure problem.
Example:
Test environment: x86_64 with 160 cores
$ cat /proc/sys/kernel/pid_max
163840
Before this patch:
$ perf sched replay
run measurement overhead: 240 nsecs
sleep measurement overhead: 55379 nsecs
the run test took 1000004 nsecs
the sleep test took 1059424 nsecs
perf: builtin-sched.c:330: register_pid: Assertion `!(pid >= 65536)'
failed.
Aborted
After this patch:
$ perf sched replay
run measurement overhead: 221 nsecs
sleep measurement overhead: 55397 nsecs
the run test took 999920 nsecs
the sleep test took 1053313 nsecs
nr_run_events: 10
nr_sleep_events: 1562
nr_wakeup_events: 5
task 0 ( :1: 1), nr_events: 1
task 1 ( :2: 2), nr_events: 1
task 2 ( :3: 3), nr_events: 1
task 3 ( :5: 5), nr_events: 1
...
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-3-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
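The stack-budget arithmetic above can be double-checked with a few lines of
C (assuming 8-byte pointers, i.e. an LP64 target):

#include <stdio.h>

int main(void)
{
        unsigned long max_pid = 1024UL * 1000;            /* chosen MAX_PID        */
        unsigned long pid_max_limit = 4UL * 1024 * 1024;  /* PID_MAX_LIMIT, x86_64 */
        unsigned long ptr_size = sizeof(void *);          /* 8 on LP64             */

        /* prints 8000 Kbytes: just under the default 8192 Kbytes stack limit */
        printf("pid_to_task[MAX_PID]:       %lu Kbytes\n", max_pid * ptr_size / 1024);
        /* prints 32768 Kbytes: why PID_MAX_LIMIT cannot be used as the bound */
        printf("pid_to_task[PID_MAX_LIMIT]: %lu Kbytes\n", pid_max_limit * ptr_size / 1024);
        return 0;
}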
#define MAX_PID 1024000

struct sched_atom;

struct task_desc {
        unsigned long nr;
        unsigned long pid;
        char comm[COMM_LEN];

        unsigned long nr_events;
        unsigned long curr_event;
        struct sched_atom **atoms;

        pthread_t thread;
        sem_t sleep_sem;

        sem_t ready_for_work;
        sem_t work_done_sem;

        u64 cpu_usage;
};

enum sched_event_type {
        SCHED_EVENT_RUN,
        SCHED_EVENT_SLEEP,
        SCHED_EVENT_WAKEUP,
        SCHED_EVENT_MIGRATION,
};

struct sched_atom {
        enum sched_event_type type;
perf tools: Reorganize some structs to save space
Using 'pahole --packable' I found some structs that could be reorganized
to eliminate alignment holes, in some cases getting them to be cacheline
multiples.
[acme@doppio linux-2.6-tip]$ codiff perf.old ~/bin/perf
builtin-annotate.c:
struct perf_session | -8
struct perf_header | -8
2 structs changed
builtin-diff.c:
struct sample_data | -8
1 struct changed
diff__process_sample_event | -8
1 function changed, 8 bytes removed, diff: -8
builtin-sched.c:
struct sched_atom | -8
1 struct changed
builtin-timechart.c:
struct per_pid | -8
1 struct changed
cmd_timechart | -16
1 function changed, 16 bytes removed, diff: -16
builtin-probe.c:
struct perf_probe_point | -8
struct perf_probe_event | -8
2 structs changed
opt_add_probe_event | -3
1 function changed, 3 bytes removed, diff: -3
util/probe-finder.c:
struct probe_finder | -8
1 struct changed
find_kprobe_trace_events | -16
1 function changed, 16 bytes removed, diff: -16
/home/acme/bin/perf:
4 functions changed, 43 bytes removed, diff: -43
[acme@doppio linux-2.6-tip]$
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
        int specific_wait;
        u64 timestamp;
        u64 duration;
        unsigned long nr;
        sem_t *wait_sem;
        struct task_desc *wakee;
};
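A toy illustration of the kind of reorganization the 'pahole --packable'
commit above describes (not the actual perf structs): grouping the two
4-byte members so they share one 8-byte slot removes the padding and
shrinks the struct from 24 to 16 bytes on an LP64 target.

#include <stdint.h>
#include <stdio.h>

struct padded {          /* 4 bytes of padding after each int on LP64 */
        int      a;
        uint64_t x;
        int      b;
};                       /* sizeof == 24 */

struct reordered {       /* the two ints now share one 8-byte slot */
        uint64_t x;
        int      a;
        int      b;
};                       /* sizeof == 16 */

int main(void)
{
        printf("padded: %zu bytes, reordered: %zu bytes\n",
               sizeof(struct padded), sizeof(struct reordered));
        return 0;
}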
#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP"

/* task state bitmask, copied from include/linux/sched.h */
#define TASK_RUNNING 0
#define TASK_INTERRUPTIBLE 1
#define TASK_UNINTERRUPTIBLE 2
#define __TASK_STOPPED 4
#define __TASK_TRACED 8
/* in tsk->exit_state */
#define EXIT_DEAD 16
#define EXIT_ZOMBIE 32
#define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
/* in tsk->state again */
#define TASK_DEAD 64
#define TASK_WAKEKILL 128
#define TASK_WAKING 256
#define TASK_PARKED 512

enum thread_state {
        THREAD_SLEEPING = 0,
        THREAD_WAIT_CPU,
        THREAD_SCHED_IN,
        THREAD_IGNORE
};

struct work_atom {
        struct list_head list;
        enum thread_state state;
perf tools: Fix processing of randomly serialized sched traces
Currently it's possible to meet such too high latency results
with 'perf sched latency'.
-----------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms |
-----------------------------------------------------------------------------------
xfce4-panel | 0.222 ms | 2 | avg: 4718.345 ms | max: 9436.493 ms |
scsi_eh_3 | 3.962 ms | 36 | avg: 55.957 ms | max: 1977.829 ms |
The origin is on traces that are sometimes badly serialized across cpus.
For example the raw traces that raised such results for xfce4-panel:
(1) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(2) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(3) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]
The traces are processed in the order they arrive. Then in (2),
xfce4-panel sleeps, it is first waken up in (3) and eventually
scheduled in (5).
The latency reported is then 1504 - 1495 = 9 secs, as reported by perf
sched. But this is wrong: we were trusting the traces to be nicely
serialized, while we should actually trust the timestamps more.
If we reorder by timestamps we get:
(1) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(2) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(3) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]
Now the trace makes more sense: xfce4-panel is sleeping. Then it is
woken up in (1), scheduled in (2).
It goes to sleep in (3), is woken up in (4) and scheduled in (5).
Now, the latency captured between (1) and (2) is 39 us,
and between (4) and (5) it is 2.1 ms.
Such a pattern of bad serializing is the origin of the high latencies
reported by perf sched.
Basically, we need to check whether the wake up time is higher than the
schedule out time. If it's not the case, we need to tag the current
work atom as invalid.
Besides that, we may need to work later on a better ordering of the
traces given by the kernel.
After this patch:
xfce4-session | 0.221 ms | 1 | avg: 0.538 ms | max: 0.538 ms |
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
        u64 sched_out_time;
        u64 wake_up_time;
        u64 sched_in_time;
        u64 runtime;
};
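The validity check described in the 'randomly serialized sched traces'
message above boils down to one timestamp comparison before an atom's
latency is trusted; a simplified, illustrative helper:

#include <stdint.h>

/* An atom is only meaningful if the wakeup is not older than the schedule-out
 * it follows; otherwise the events arrived out of order across CPUs and the
 * computed latency would be bogus, so the atom should be tagged invalid. */
static int work_atom_times_ordered(uint64_t sched_out_time, uint64_t wake_up_time)
{
        return wake_up_time >= sched_out_time;
}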
struct work_atoms {
        struct list_head work_list;
        struct thread *thread;
        struct rb_node node;
        u64 max_lat;
        u64 max_lat_at;
        u64 total_lat;
        u64 nb_atoms;
        u64 total_runtime;
        int num_merged;
};

typedef int (*sort_fn_t)(struct work_atoms *, struct work_atoms *);
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
struct perf_sched;
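The 'read the field at its consumer' idea from the commit message above
looks roughly like this inside a handler. The helper perf_evsel__intval()
and the sched_switch field names are the perf-tools ones, but the body is
an illustrative sketch rather than the actual upstream change:

static int example_switch_event(struct perf_sched *sched __maybe_unused,
                                struct perf_evsel *evsel,
                                struct perf_sample *sample,
                                struct machine *machine __maybe_unused)
{
        /* Pull only the tracepoint fields this handler actually needs, at the
         * point of use, instead of copying every field into a struct up front. */
        const u32 prev_pid   = perf_evsel__intval(evsel, sample, "prev_pid");
        const u32 next_pid   = perf_evsel__intval(evsel, sample, "next_pid");
        const u64 prev_state = perf_evsel__intval(evsel, sample, "prev_state");

        (void)prev_state;       /* used by the real accounting code, not here */

        if (prev_pid == next_pid)
                return 0;       /* nothing scheduled in, nothing to account */

        /* ... account the switch: prev_state tells whether prev_pid slept or
         * was preempted, next_pid is the task that now owns the CPU ... */
        return 0;
}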
struct trace_sched_handler {
        int (*switch_event)(struct perf_sched *sched, struct perf_evsel *evsel,
                            struct perf_sample *sample, struct machine *machine);
        int (*runtime_event)(struct perf_sched *sched, struct perf_evsel *evsel,
                             struct perf_sample *sample, struct machine *machine);
        int (*wakeup_event)(struct perf_sched *sched, struct perf_evsel *evsel,
                            struct perf_sample *sample, struct machine *machine);

        /* PERF_RECORD_FORK event, not sched_process_fork tracepoint */
        int (*fork_event)(struct perf_sched *sched, union perf_event *event,
                          struct machine *machine);

        int (*migrate_task_event)(struct perf_sched *sched,
                                   struct perf_evsel *evsel,
                                   struct perf_sample *sample,
                                   struct machine *machine);
};

#define COLOR_PIDS PERF_COLOR_BLUE
#define COLOR_CPUS PERF_COLOR_BG_RED

struct perf_sched_map {
        DECLARE_BITMAP(comp_cpus_mask, MAX_CPUS);
        int *comp_cpus;
        bool comp;
        struct thread_map *color_pids;
        const char *color_pids_str;
        struct cpu_map *color_cpus;
        const char *color_cpus_str;
        struct cpu_map *cpus;
        const char *cpus_str;
};

struct perf_sched {
        struct perf_tool tool;
        const char *sort_order;
        unsigned long nr_tasks;
        struct task_desc **pid_to_task;
        struct task_desc **tasks;
        const struct trace_sched_handler *tp_handler;
        pthread_mutex_t start_work_mutex;
        pthread_mutex_t work_done_wait_mutex;
        int profile_cpu;
        /*
         * Track the current task - that way we can know whether there's any
         * weird events, such as a task being switched away that is not current.
         */
        int max_cpu;
        u32 curr_pid[MAX_CPUS];
        struct thread *curr_thread[MAX_CPUS];
        char next_shortname1;
        char next_shortname2;
        unsigned int replay_repeat;
        unsigned long nr_run_events;
        unsigned long nr_sleep_events;
        unsigned long nr_wakeup_events;
        unsigned long nr_sleep_corrections;
        unsigned long nr_run_events_optimized;
        unsigned long targetless_wakeups;
        unsigned long multitarget_wakeups;
        unsigned long nr_runs;
        unsigned long nr_timestamps;
        unsigned long nr_unordered_timestamps;
        unsigned long nr_context_switch_bugs;
        unsigned long nr_events;
        unsigned long nr_lost_chunks;
        unsigned long nr_lost_events;
        u64 run_measurement_overhead;
        u64 sleep_measurement_overhead;
        u64 start_time;
        u64 cpu_usage;
        u64 runavg_cpu_usage;
        u64 parent_cpu_usage;
        u64 runavg_parent_cpu_usage;
        u64 sum_runtime;
        u64 sum_fluct;
        u64 run_avg;
        u64 all_runtime;
        u64 all_count;
        u64 cpu_last_switched[MAX_CPUS];
        struct rb_root_cached atom_root, sorted_atom_root, merged_atom_root;
        struct list_head sort_list, cmp_pid;
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
The soft maximum number of open files for a calling process is 1024,
which is defined as INR_OPEN_CUR in include/uapi/linux/fs.h, and the
hard maximum number of open files for a calling process is 4096, which
is defined as INR_OPEN_MAX in include/uapi/linux/fs.h.
Both INR_OPEN_CUR and INR_OPEN_MAX are used to limit the value of
RLIMIT_NOFILE in include/asm-generic/resource.h.
The soft maximum number is what finally limits how many files a process
is allowed to open. That is to say, a process can use at most 1024 file
descriptors for its opened files, or an EMFILE error will happen.
This error can be fixed by increasing the soft maximum number, under the
constraint that the soft maximum number can not exceed the hard maximum
number, or both soft and hard maximum number should be increased
simultaneously with privilege.
For perf sched replay, it uses sys_perf_event_open to create the file
descriptor for each of the tasks in order to handle information of perf
events.
That is to say each task needs a unique file descriptor. In x86_64,
there may be over 1024 or 4096 tasks corresponding to the records in
perf.data, so there are not enough file descriptors to use.
As a result, an EMFILE error happens and stops the replay process. To solve
this problem, we adaptively increase the soft and hard maximum number of
open files with a '-f' option.
Example:
Test environment: x86_64 with 160 cores
$ cat /proc/sys/kernel/pid_max
163840
$ cat /proc/sys/fs/file-max
6815744
$ ulimit -Sn
1024
$ ulimit -Hn
4096
Before this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
After this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
Have a try with -f option
$ perf sched replay -f
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
------------------------------------------------------------
#1 : 54.401, ravg: 54.40, cpu: 3285.21 / 3285.21
#2 : 199.548, ravg: 68.92, cpu: 4999.65 / 3456.66
#3 : 170.483, ravg: 79.07, cpu: 1349.94 / 3245.99
#4 : 192.034, ravg: 90.37, cpu: 1322.88 / 3053.67
#5 : 182.929, ravg: 99.62, cpu: 1406.51 / 2888.96
#6 : 152.974, ravg: 104.96, cpu: 1167.54 / 2716.82
#7 : 155.579, ravg: 110.02, cpu: 2992.53 / 2744.39
#8 : 130.557, ravg: 112.08, cpu: 1126.43 / 2582.59
#9 : 138.520, ravg: 114.72, cpu: 1253.22 / 2449.65
#10 : 134.328, ravg: 116.68, cpu: 1587.95 / 2363.48
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-8-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
        bool force;
        bool skip_merge;
        struct perf_sched_map map;

        /* options for timehist command */
        bool summary;
        bool summary_only;
        bool idle_hist;
        bool show_callchain;
        unsigned int max_stack;
        bool show_cpu_visual;
        bool show_wakeups;
        bool show_next;
        bool show_migrations;
        bool show_state;
        u64 skipped_samples;
        const char *time_str;
        struct perf_time_interval ptime;
        struct perf_time_interval hist_time;
};
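A minimal sketch of what the '-f' option described in the EMFILE commit
message above has to do: raise RLIMIT_NOFILE before one counter fd is
opened per task (illustrative helper, not the exact perf code):

#include <stdio.h>
#include <sys/resource.h>

/* Raise the open-file limit so one perf event fd per task fits.
 * Raising the hard limit beyond its current value needs privilege. */
static int bump_nofile_limit(rlim_t needed)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl))
                return -1;
        if (rl.rlim_cur >= needed)
                return 0;                    /* already enough */

        rl.rlim_cur = needed;
        if (needed > rl.rlim_max)
                rl.rlim_max = needed;        /* only works with privilege */
        if (setrlimit(RLIMIT_NOFILE, &rl)) {
                perror("setrlimit(RLIMIT_NOFILE)");
                return -1;
        }
        return 0;
}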
/* per thread run time data */
struct thread_runtime {
        u64 last_time;      /* time of previous sched in/out event */
        u64 dt_run;         /* run time */
        u64 dt_sleep;       /* time between CPU access by sleep (off cpu) */
        u64 dt_iowait;      /* time between CPU access by iowait (off cpu) */
        u64 dt_preempt;     /* time between CPU access by preempt (off cpu) */
        u64 dt_delay;       /* time between wakeup and sched-in */
        u64 ready_to_run;   /* time of wakeup */

        struct stats run_stats;
        u64 total_run_time;
        u64 total_sleep_time;
        u64 total_iowait_time;
        u64 total_preempt_time;
        u64 total_delay_time;

        int last_state;

        char shortname[3];
        bool comm_changed;

        u64 migrations;
};

/* per event run time data */
struct evsel_runtime {
        u64 *last_time; /* time this event was last seen per cpu */
        u32 ncpu;       /* highest cpu slot allocated */
};

/* per cpu idle time data */
struct idle_thread_runtime {
        struct thread_runtime tr;
        struct thread *last_thread;
        struct rb_root_cached sorted_root;
        struct callchain_root callchain;
        struct callchain_cursor cursor;
};

/* track idle times per cpu */
static struct thread **idle_threads;
static int idle_max_cpu;
static char idle_comm[] = "<idle>";

static u64 get_nsecs(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);

        return ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
}

static void burn_nsecs(struct perf_sched *sched, u64 nsecs)
{
        u64 T0 = get_nsecs(), T1;

        do {
                T1 = get_nsecs();
        } while (T1 + sched->run_measurement_overhead < T0 + nsecs);
}

static void sleep_nsecs(u64 nsecs)
{
        struct timespec ts;

        ts.tv_nsec = nsecs % 999999999;
        ts.tv_sec = nsecs / 999999999;

        nanosleep(&ts, NULL);
}

static void calibrate_run_measurement_overhead(struct perf_sched *sched)
{
        u64 T0, T1, delta, min_delta = NSEC_PER_SEC;
        int i;

        for (i = 0; i < 10; i++) {
                T0 = get_nsecs();
                burn_nsecs(sched, 0);
                T1 = get_nsecs();
                delta = T1-T0;
                min_delta = min(min_delta, delta);
        }
        sched->run_measurement_overhead = min_delta;

        printf("run measurement overhead: %" PRIu64 " nsecs\n", min_delta);
}

static void calibrate_sleep_measurement_overhead(struct perf_sched *sched)
{
        u64 T0, T1, delta, min_delta = NSEC_PER_SEC;
        int i;

        for (i = 0; i < 10; i++) {
                T0 = get_nsecs();
                sleep_nsecs(10000);
                T1 = get_nsecs();
                delta = T1-T0;
                min_delta = min(min_delta, delta);
        }
        min_delta -= 10000;
        sched->sleep_measurement_overhead = min_delta;

        printf("sleep measurement overhead: %" PRIu64 " nsecs\n", min_delta);
}

static struct sched_atom *
get_new_event(struct task_desc *task, u64 timestamp)
{
        struct sched_atom *event = zalloc(sizeof(*event));
        unsigned long idx = task->nr_events;
        size_t size;

        event->timestamp = timestamp;
        event->nr = idx;

        task->nr_events++;
        size = sizeof(struct sched_atom *) * task->nr_events;
        task->atoms = realloc(task->atoms, size);
        BUG_ON(!task->atoms);

        task->atoms[idx] = event;

        return event;
}

static struct sched_atom *last_event(struct task_desc *task)
{
        if (!task->nr_events)
                return NULL;

        return task->atoms[task->nr_events - 1];
}

static void add_sched_event_run(struct perf_sched *sched, struct task_desc *task,
                                u64 timestamp, u64 duration)
{
        struct sched_atom *event, *curr_event = last_event(task);

        /*
perf sched: Implement the scheduling workload replay engine
Integrate the schedbench.c bits with the raw trace events
that we get from the perf machinery, and activate the
workload replayer/simulator.
Example of a captured 'make -j' workload:
$ perf sched
run measurement overhead: 90 nsecs
sleep measurement overhead: 2724743 nsecs
the run test took 1000081 nsecs
the sleep test took 2981111 nsecs
version = 0.5
...
nr_run_events: 70
nr_sleep_events: 66
nr_wakeup_events: 9
target-less wakeups: 71
multi-target wakeups: 47
run events optimized: 139
task 0 ( perf: 6607), nr_events: 2
task 1 ( perf: 6608), nr_events: 6
task 2 ( : 0), nr_events: 1
task 3 ( make: 6609), nr_events: 5
task 4 ( sh: 6610), nr_events: 4
task 5 ( make: 6611), nr_events: 6
task 6 ( sh: 6612), nr_events: 4
task 7 ( make: 6613), nr_events: 5
task 8 ( migration/11: 25), nr_events: 1
task 9 ( migration/13: 29), nr_events: 1
task 10 ( migration/15: 33), nr_events: 1
task 11 ( migration/9: 21), nr_events: 1
task 12 ( sh: 6614), nr_events: 4
task 13 ( make: 6615), nr_events: 5
task 14 ( sh: 6616), nr_events: 4
task 15 ( make: 6617), nr_events: 7
task 16 ( migration/3: 9), nr_events: 1
task 17 ( migration/5: 13), nr_events: 1
task 18 ( migration/7: 17), nr_events: 1
task 19 ( migration/1: 5), nr_events: 1
task 20 ( sh: 6618), nr_events: 4
task 21 ( make: 6619), nr_events: 5
task 22 ( sh: 6620), nr_events: 4
task 23 ( make: 6621), nr_events: 10
task 24 ( sh: 6623), nr_events: 3
task 25 ( gcc: 6624), nr_events: 4
task 26 ( gcc: 6625), nr_events: 4
task 27 ( gcc: 6626), nr_events: 5
task 28 ( collect2: 6627), nr_events: 5
task 29 ( sh: 6622), nr_events: 1
task 30 ( make: 6628), nr_events: 7
task 31 ( sh: 6630), nr_events: 4
task 32 ( gcc: 6631), nr_events: 4
task 33 ( sh: 6629), nr_events: 1
task 34 ( gcc: 6632), nr_events: 4
task 35 ( gcc: 6633), nr_events: 4
task 36 ( collect2: 6634), nr_events: 4
task 37 ( make: 6635), nr_events: 8
task 38 ( sh: 6637), nr_events: 4
task 39 ( sh: 6636), nr_events: 1
task 40 ( gcc: 6638), nr_events: 4
task 41 ( gcc: 6639), nr_events: 4
task 42 ( gcc: 6640), nr_events: 4
task 43 ( collect2: 6641), nr_events: 4
task 44 ( make: 6642), nr_events: 6
task 45 ( sh: 6643), nr_events: 5
task 46 ( sh: 6644), nr_events: 3
task 47 ( sh: 6645), nr_events: 4
task 48 ( make: 6646), nr_events: 6
task 49 ( sh: 6647), nr_events: 3
task 50 ( make: 6648), nr_events: 5
task 51 ( sh: 6649), nr_events: 5
task 52 ( sh: 6650), nr_events: 6
task 53 ( make: 6651), nr_events: 4
task 54 ( make: 6652), nr_events: 5
task 55 ( make: 6653), nr_events: 4
task 56 ( make: 6654), nr_events: 4
task 57 ( make: 6655), nr_events: 5
task 58 ( sh: 6656), nr_events: 4
task 59 ( gcc: 6657), nr_events: 9
task 60 ( ksoftirqd/3: 10), nr_events: 1
task 61 ( gcc: 6658), nr_events: 4
task 62 ( make: 6659), nr_events: 5
task 63 ( sh: 6660), nr_events: 3
task 64 ( gcc: 6661), nr_events: 5
task 65 ( collect2: 6662), nr_events: 4
------------------------------------------------------------
#1 : 256.745, ravg: 256.74, cpu: 0.00 / 0.00
#2 : 439.372, ravg: 275.01, cpu: 0.00 / 0.00
#3 : 411.971, ravg: 288.70, cpu: 0.00 / 0.00
#4 : 385.500, ravg: 298.38, cpu: 0.00 / 0.00
#5 : 366.526, ravg: 305.20, cpu: 0.00 / 0.00
#6 : 381.281, ravg: 312.81, cpu: 0.00 / 0.00
#7 : 410.756, ravg: 322.60, cpu: 0.00 / 0.00
#8 : 368.009, ravg: 327.14, cpu: 0.00 / 0.00
#9 : 408.098, ravg: 335.24, cpu: 0.00 / 0.00
#10 : 368.582, ravg: 338.57, cpu: 0.00 / 0.00
I.e. we successfully analyzed the trace, replayed it
via real threads and measured the replayed workload's
scheduling properties.
This is how it looked like in 'top' output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7164 mingo 20 0 1434m 8080 888 R 57.0 0.1 0:02.04 :perf
7165 mingo 20 0 1434m 8080 888 R 41.8 0.1 0:01.52 :perf
7228 mingo 20 0 1434m 8080 888 R 39.8 0.1 0:01.44 :gcc
7225 mingo 20 0 1434m 8080 888 R 33.8 0.1 0:01.26 :gcc
7202 mingo 20 0 1434m 8080 888 R 31.2 0.1 0:01.16 :sh
7222 mingo 20 0 1434m 8080 888 R 25.2 0.1 0:00.96 :sh
7211 mingo 20 0 1434m 8080 888 R 21.9 0.1 0:00.82 :sh
7213 mingo 20 0 1434m 8080 888 D 19.2 0.1 0:00.74 :sh
7194 mingo 20 0 1434m 8080 888 D 18.6 0.1 0:00.72 :make
There are still various kinks in it - more patches to come.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
         * optimize an existing RUN event by merging this one
         * to it:
         */
        if (curr_event && curr_event->type == SCHED_EVENT_RUN) {
                sched->nr_run_events_optimized++;
                curr_event->duration += duration;
                return;
        }

        event = get_new_event(task, timestamp);

        event->type = SCHED_EVENT_RUN;
        event->duration = duration;

        sched->nr_run_events++;
}

static void add_sched_event_wakeup(struct perf_sched *sched, struct task_desc *task,
                                   u64 timestamp, struct task_desc *wakee)
{
        struct sched_atom *event, *wakee_event;

        event = get_new_event(task, timestamp);
        event->type = SCHED_EVENT_WAKEUP;
        event->wakee = wakee;

        wakee_event = last_event(wakee);
        if (!wakee_event || wakee_event->type != SCHED_EVENT_SLEEP) {
                sched->targetless_wakeups++;
                return;
        }
        if (wakee_event->wait_sem) {
                sched->multitarget_wakeups++;
                return;
        }

        wakee_event->wait_sem = zalloc(sizeof(*wakee_event->wait_sem));
        sem_init(wakee_event->wait_sem, 0, 0);
        wakee_event->specific_wait = 1;
        event->wait_sem = wakee_event->wait_sem;

        sched->nr_wakeup_events++;
}

static void add_sched_event_sleep(struct perf_sched *sched, struct task_desc *task,
                                  u64 timestamp, u64 task_state __maybe_unused)
{
        struct sched_atom *event = get_new_event(task, timestamp);

        event->type = SCHED_EVENT_SLEEP;

        sched->nr_sleep_events++;
}

static struct task_desc *register_pid(struct perf_sched *sched,
                                      unsigned long pid, const char *comm)
{
        struct task_desc *task;
perf sched replay: Alloc the memory of pid_to_task dynamically to adapt to the unexpected change of pid_max
The current memory allocation of struct task_desc *pid_to_task[MAX_PID]
is permanent and preset at compile time, which has two problems:
Problem 1: If the pid_max, which is the max number of pids in the
system, is much smaller than MAX_PID (1024*1000), then it causes a waste
of stack memory. This may happen in the case where the number of cpu
cores is much smaller than 1000.
Problem 2: If the pid_max is changed from the default value to a value
larger than MAX_PID, then it will cause an assertion failure. The
maximum value of pid_max can be set to pid_max_max (see pidmap_init
defined in kernel/pid.c), which equals PID_MAX_LIMIT. In x86_64,
PID_MAX_LIMIT is 4*1024*1024 (defined in include/linux/threads.h). This
value is much larger than MAX_PID, and will take up 32768 Kbytes
(4*1024*1024*8/1024) for memory allocation of pid_to_task, which is much
larger than the default 8192 Kbytes of the stack size of the calling
process.
Due to these two problems, we use calloc to allocate the memory of
pid_to_task dynamically.
Example:
Test environment: x86_64 with 160 cores
$ cat /proc/sys/kernel/pid_max
163840
$ echo 1025000 > /proc/sys/kernel/pid_max
$ cat /proc/sys/kernel/pid_max
1025000
Run some applications until the pid of some process is greater than
the value of MAX_PID (1024*1000).
Before this patch:
$ perf sched replay
run measurement overhead: 221 nsecs
sleep measurement overhead: 55480 nsecs
the run test took 1000008 nsecs
the sleep test took 1063151 nsecs
perf: builtin-sched.c:330: register_pid: Assertion `!(pid >= 1024000)'
failed.
Aborted
After this patch:
$ perf sched replay
run measurement overhead: 221 nsecs
sleep measurement overhead: 55435 nsecs
the run test took 1000004 nsecs
the sleep test took 1059312 nsecs
nr_run_events: 10
nr_sleep_events: 1562
nr_wakeup_events: 5
task 0 ( :1: 1), nr_events: 1
task 1 ( :2: 2), nr_events: 1
task 2 ( :3: 3), nr_events: 1
task 3 ( :5: 5), nr_events: 1
...
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-4-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-31 21:46:30 +08:00
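Below is a minimal, standalone sketch (not the tool's actual helpers) of the
allocation strategy described above: size the pid-indexed table from the
running kernel's pid_max instead of a compile-time MAX_PID. The procfs path
and the fallback constant are assumptions of this sketch.
  /* Sketch only: size a pid-indexed table from the kernel's current pid_max
   * so the table neither wastes stack nor overflows when pid_max is raised. */
  #include <stdio.h>
  #include <stdlib.h>

  #define FALLBACK_MAX_PID (1024 * 1000)  /* assumed fallback if the sysctl is unreadable */

  static void **alloc_pid_table(int *out_pid_max)
  {
          FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
          int pid_max = FALLBACK_MAX_PID;

          if (f) {
                  if (fscanf(f, "%d", &pid_max) != 1)
                          pid_max = FALLBACK_MAX_PID;
                  fclose(f);
          }
          *out_pid_max = pid_max;
          /* calloc() zeroes the table, so unregistered pids map to NULL */
          return calloc(pid_max, sizeof(void *));
  }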
|
|
|
static int pid_max;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
perf sched replay: Alloc the memory of pid_to_task dynamically to adapt to the unexpected change of pid_max
2015-03-31 21:46:30 +08:00
|
|
|
if (sched->pid_to_task == NULL) {
|
|
|
|
if (sysctl__read_int("kernel/pid_max", &pid_max) < 0)
|
|
|
|
pid_max = MAX_PID;
|
|
|
|
BUG_ON((sched->pid_to_task = calloc(pid_max, sizeof(struct task_desc *))) == NULL);
|
|
|
|
}
|
2015-03-31 21:46:31 +08:00
|
|
|
if (pid >= (unsigned long)pid_max) {
|
|
|
|
BUG_ON((sched->pid_to_task = realloc(sched->pid_to_task, (pid + 1) *
|
|
|
|
sizeof(struct task_desc *))) == NULL);
|
|
|
|
while (pid >= (unsigned long)pid_max)
|
|
|
|
sched->pid_to_task[pid_max++] = NULL;
|
|
|
|
}
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
task = sched->pid_to_task[pid];
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
if (task)
|
|
|
|
return task;
|
|
|
|
|
2009-11-24 22:05:16 +08:00
|
|
|
task = zalloc(sizeof(*task));
|
2009-09-11 18:12:54 +08:00
|
|
|
task->pid = pid;
|
2012-09-12 04:29:27 +08:00
|
|
|
task->nr = sched->nr_tasks;
|
2009-09-11 18:12:54 +08:00
|
|
|
strcpy(task->comm, comm);
|
|
|
|
/*
|
|
|
|
* every task starts in sleeping state - this gets ignored
|
|
|
|
* if there's no wakeup pointing to this sleep state:
|
|
|
|
*/
|
2012-09-12 04:29:27 +08:00
|
|
|
add_sched_event_sleep(sched, task, 0, 0);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->pid_to_task[pid] = task;
|
|
|
|
sched->nr_tasks++;
|
2015-03-31 21:46:28 +08:00
|
|
|
sched->tasks = realloc(sched->tasks, sched->nr_tasks * sizeof(struct task_desc *));
|
2012-09-12 04:29:27 +08:00
|
|
|
BUG_ON(!sched->tasks);
|
|
|
|
sched->tasks[task->nr] = task;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2017-02-17 16:17:38 +08:00
|
|
|
if (verbose > 0)
|
2012-09-12 04:29:27 +08:00
|
|
|
printf("registered task #%ld, PID %ld (%s)\n", sched->nr_tasks, pid, comm);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
return task;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void print_task_traces(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
|
|
|
struct task_desc *task;
|
|
|
|
unsigned long i;
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
task = sched->tasks[i];
|
2009-09-11 18:12:54 +08:00
|
|
|
printf("task %6ld (%20s:%10ld), nr_events: %ld\n",
|
2009-09-11 18:12:54 +08:00
|
|
|
task->nr, task->comm, task->pid, task->nr_events);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void add_cross_task_wakeups(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
|
|
|
struct task_desc *task1, *task2;
|
|
|
|
unsigned long i, j;
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
task1 = sched->tasks[i];
|
2009-09-11 18:12:54 +08:00
|
|
|
j = i + 1;
|
2012-09-12 04:29:27 +08:00
|
|
|
if (j == sched->nr_tasks)
|
2009-09-11 18:12:54 +08:00
|
|
|
j = 0;
|
2012-09-12 04:29:27 +08:00
|
|
|
task2 = sched->tasks[j];
|
|
|
|
add_sched_event_wakeup(sched, task1, 0, task2);
|
2009-09-11 18:12:54 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void perf_sched__process_event(struct perf_sched *sched,
|
|
|
|
struct sched_atom *atom)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
switch (atom->type) {
|
2009-09-11 18:12:54 +08:00
|
|
|
case SCHED_EVENT_RUN:
|
2012-09-12 04:29:27 +08:00
|
|
|
burn_nsecs(sched, atom->duration);
|
2009-09-11 18:12:54 +08:00
|
|
|
break;
|
|
|
|
case SCHED_EVENT_SLEEP:
|
2009-09-15 02:04:48 +08:00
|
|
|
if (atom->wait_sem)
|
|
|
|
ret = sem_wait(atom->wait_sem);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
break;
|
|
|
|
case SCHED_EVENT_WAKEUP:
|
2009-09-15 02:04:48 +08:00
|
|
|
if (atom->wait_sem)
|
|
|
|
ret = sem_post(atom->wait_sem);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
break;
|
2009-10-10 20:46:04 +08:00
|
|
|
case SCHED_EVENT_MIGRATION:
|
|
|
|
break;
|
2009-09-11 18:12:54 +08:00
|
|
|
default:
|
|
|
|
BUG_ON(1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
static u64 get_cpu_usage_nsec_parent(void)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
|
|
|
struct rusage ru;
|
2009-09-11 18:12:54 +08:00
|
|
|
u64 sum;
|
2009-09-11 18:12:54 +08:00
|
|
|
int err;
|
|
|
|
|
|
|
|
err = getrusage(RUSAGE_SELF, &ru);
|
|
|
|
BUG_ON(err);
|
|
|
|
|
2016-08-08 23:23:49 +08:00
|
|
|
sum = ru.ru_utime.tv_sec * NSEC_PER_SEC + ru.ru_utime.tv_usec * NSEC_PER_USEC;
|
|
|
|
sum += ru.ru_stime.tv_sec * NSEC_PER_SEC + ru.ru_stime.tv_usec * NSEC_PER_USEC;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
return sum;
|
|
|
|
}
|
|
|
|
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
The soft maximum number of open files for a calling process is 1024,
which is defined as INR_OPEN_CUR in include/uapi/linux/fs.h, and the
hard maximum number of open files for a calling process is 4096, which
is defined as INR_OPEN_MAX in include/uapi/linux/fs.h.
Both INR_OPEN_CUR and INR_OPEN_MAX are used to limit the value of
RLIMIT_NOFILE in include/asm-generic/resource.h.
The soft maximum number ultimately decides how many files a process is
allowed to have open. That is to say, a process can use at most 1024 file
descriptors for its opened files, or an EMFILE error will happen.
This error can be fixed by increasing the soft maximum number, under the
constraint that the soft maximum number cannot exceed the hard maximum
number; otherwise both the soft and hard maximum numbers must be increased
simultaneously, which requires privilege.
perf sched replay uses sys_perf_event_open to create a file descriptor
for each of the tasks in order to handle perf event information.
That is to say, each task needs a unique file descriptor. In x86_64,
there may be over 1024 or 4096 tasks corresponding to the records in
perf.data, which means there are not enough file descriptors available.
As a result, an EMFILE error happens and stops the replay process. To solve
this problem, we adaptively increase the soft and hard maximum numbers of
open files with a '-f' option.
Example:
Test environment: x86_64 with 160 cores
$ cat /proc/sys/kernel/pid_max
163840
$ cat /proc/sys/fs/file-max
6815744
$ ulimit -Sn
1024
$ ulimit -Hn
4096
Before this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
After this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
Have a try with -f option
$ perf sched replay -f
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
------------------------------------------------------------
#1 : 54.401, ravg: 54.40, cpu: 3285.21 / 3285.21
#2 : 199.548, ravg: 68.92, cpu: 4999.65 / 3456.66
#3 : 170.483, ravg: 79.07, cpu: 1349.94 / 3245.99
#4 : 192.034, ravg: 90.37, cpu: 1322.88 / 3053.67
#5 : 182.929, ravg: 99.62, cpu: 1406.51 / 2888.96
#6 : 152.974, ravg: 104.96, cpu: 1167.54 / 2716.82
#7 : 155.579, ravg: 110.02, cpu: 2992.53 / 2744.39
#8 : 130.557, ravg: 112.08, cpu: 1126.43 / 2582.59
#9 : 138.520, ravg: 114.72, cpu: 1253.22 / 2449.65
#10 : 134.328, ravg: 116.68, cpu: 1587.95 / 2363.48
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-8-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-31 21:46:34 +08:00
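A minimal sketch of the rlimit adjustment described above (the function name
and the 'needed' parameter are illustrative, not the tool's API): grow the
soft RLIMIT_NOFILE by the number of descriptors still required, and raise the
hard limit too only when the soft limit would exceed it, which needs privilege.
  /* Sketch only: grow RLIMIT_NOFILE by 'needed' descriptors, mirroring the
   * '-f' behaviour; raising rlim_max beyond its old value requires privilege. */
  #include <errno.h>
  #include <stdio.h>
  #include <sys/resource.h>

  static int grow_open_file_limit(rlim_t needed)
  {
          struct rlimit limit;

          if (getrlimit(RLIMIT_NOFILE, &limit) == -1)
                  return -1;

          limit.rlim_cur += needed;
          if (limit.rlim_cur > limit.rlim_max)
                  limit.rlim_max = limit.rlim_cur;  /* hard limit bump: needs root/CAP_SYS_RESOURCE */

          if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
                  fprintf(stderr, "setrlimit: %s\n",
                          errno == EPERM ? "need privilege" : "failed");
                  return -1;
          }
          return 0;
  }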
|
|
|
static int self_open_counters(struct perf_sched *sched, unsigned long cur_task)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2009-12-09 17:51:30 +08:00
|
|
|
struct perf_event_attr attr;
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
char sbuf[STRERR_BUFSIZE], info[STRERR_BUFSIZE];
|
2009-12-09 17:51:30 +08:00
|
|
|
int fd;
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
struct rlimit limit;
|
|
|
|
bool need_privilege = false;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2009-12-09 17:51:30 +08:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2009-12-09 17:51:30 +08:00
|
|
|
attr.type = PERF_TYPE_SOFTWARE;
|
|
|
|
attr.config = PERF_COUNT_SW_TASK_CLOCK;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
force_again:
|
2014-07-01 04:28:47 +08:00
|
|
|
fd = sys_perf_event_open(&attr, 0, -1, -1,
|
|
|
|
perf_event_open_cloexec_flag());
|
2009-12-09 17:51:30 +08:00
|
|
|
|
perf sched replay: Handle the dead halt of sem_wait when create_tasks() fails for any task
There is a sem_wait for each task in wait_for_tasks(), e.g.
sem_wait(&task->work_done_sem).
The sem_wait can continue only when work_done_sem is greater than 0, or
it will be blocked.
For perf sched replay, one task may sem_post the work_done_sem of
another task, which causes the work_done_sem of that task to be processed in a
reasonable sequence, e.g. sem_post, sem_wait, sem_wait, sem_post...
This sequence simulates the sched process of the running tasks at the
time when perf sched record runs.
As a result, all the tasks are required and their threads must be
successfully created.
If any one of the tasks (task A) fails to create its thread, then
another task (task B), whose work_done_sem needs a sem_post from that
failed task A, will likely block itself in sem_wait.
And this is a dead halt, since task B's thread_func cannot continue at
all.
To solve this problem, perf sched replay should exit once any task fails
to create its thread.
Example:
Test environment: x86_64 with 160 cores
Before this patch:
$ perf sched replay
...
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
------------------------------------------------------------ <- dead halt
After this patch:
$ perf sched replay
...
task 1551 ( <unknown>: 0), nr_events: 10
Error: sys_perf_event_open() syscall returned with -1 (Too many open
files)
$
As shown above, perf sched replay finishes the process after printing an
error message and does not block itself.
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-7-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-31 21:46:33 +08:00
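The dependency described above can be pictured with a single semaphore
handoff; the sketch below (illustrative names, not the tool's code) shows why
replay must exit as soon as thread creation fails: sem_wait() only returns
because the peer thread posts, so a peer that was never created would leave
the waiter blocked forever.
  /* Sketch only: a work_done_sem style handoff. If the poster thread cannot
   * be created and we carried on anyway, sem_wait() below would never return,
   * which is exactly the dead halt the patch avoids by exiting early. */
  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>
  #include <stdlib.h>

  static sem_t work_done;

  static void *poster(void *arg)
  {
          (void)arg;
          sem_post(&work_done);           /* the post the waiter depends on */
          return NULL;
  }

  int main(void)
  {
          pthread_t t;

          sem_init(&work_done, 0, 0);
          if (pthread_create(&t, NULL, poster, NULL) != 0) {
                  fprintf(stderr, "cannot create poster thread, aborting\n");
                  exit(EXIT_FAILURE);     /* do not fall through to sem_wait() */
          }
          sem_wait(&work_done);           /* blocks until the poster runs */
          pthread_join(t, NULL);
          printf("handoff completed\n");
          return 0;
  }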
|
|
|
if (fd < 0) {
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
if (errno == EMFILE) {
|
|
|
|
if (sched->force) {
|
|
|
|
BUG_ON(getrlimit(RLIMIT_NOFILE, &limit) == -1);
|
|
|
|
limit.rlim_cur += sched->nr_tasks - cur_task;
|
|
|
|
if (limit.rlim_cur > limit.rlim_max) {
|
|
|
|
limit.rlim_max = limit.rlim_cur;
|
|
|
|
need_privilege = true;
|
|
|
|
}
|
|
|
|
if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
|
|
|
|
if (need_privilege && errno == EPERM)
|
|
|
|
strcpy(info, "Need privilege\n");
|
|
|
|
} else
|
|
|
|
goto force_again;
|
|
|
|
} else
|
|
|
|
strcpy(info, "Have a try with -f option\n");
|
|
|
|
}
|
2012-09-12 10:11:06 +08:00
|
|
|
pr_err("Error: sys_perf_event_open() syscall returned "
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
"with %d (%s)\n%s", fd,
|
tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, which
returns a string, be it the buffer passed or something else.
But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface, but uses the portable XSI variant of strerror_r(), so that
users can rest assured that the provided buffer is used and it is what is
returned.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-06 22:56:20 +08:00
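A hedged sketch of the idea (not the actual tools/lib implementation): keep
the GNU-style "returns the buffer" interface while building on the XSI
strerror_r(), which always fills the caller's buffer. It assumes the
translation unit is compiled without _GNU_SOURCE so the XSI variant (integer
return) is selected; the fallback text is an assumption of the sketch.
  /* Sketch only: GNU-style interface layered over the portable XSI strerror_r().
   * Assumes the XSI variant (int return, buffer always written) is in effect. */
  #include <stdio.h>
  #include <string.h>

  static char *str_error_sketch(int errnum, char *buf, size_t buflen)
  {
          if (strerror_r(errnum, buf, buflen) != 0)       /* XSI: 0 on success */
                  snprintf(buf, buflen, "errno %d", errnum);
          return buf;     /* callers can always print the buffer they passed in */
  }
A call such as str_error_sketch(errno, sbuf, sizeof(sbuf)) then always yields
printable text in sbuf, which is the property the pr_err() call below relies on.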
|
|
|
str_error_r(errno, sbuf, sizeof(sbuf)), info);
|
perf sched replay: Handle the dead halt of sem_wait when create_tasks() fails for any task
2015-03-31 21:46:33 +08:00
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
2009-12-09 17:51:30 +08:00
|
|
|
return fd;
|
|
|
|
}
|
|
|
|
|
|
|
|
static u64 get_cpu_usage_nsec_self(int fd)
|
|
|
|
{
|
|
|
|
u64 runtime;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = read(fd, &runtime, sizeof(runtime));
|
|
|
|
BUG_ON(ret != sizeof(runtime));
|
|
|
|
|
|
|
|
return runtime;
|
2009-09-11 18:12:54 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
struct sched_thread_parms {
|
|
|
|
struct task_desc *task;
|
|
|
|
struct perf_sched *sched;
|
perf sched replay: Fix the segmentation fault problem caused by pr_err in threads
The pr_err in self_open_counters() prints error message to stderr.
Unlike stdout, stderr uses a memory buffer on the stack of each calling
process.
The pr_err in self_open_counters() works in a thread called thread_func
created in function create_tasks, which concurrently creates
sched->nr_tasks threads.
If the error happens and pr_err prints the error message in each of
these threads, the stack size of the perf process (default is 8192
kbytes) will quickly run out and a segmentation fault will happen.
To solve this problem, the pr_err in self_open_counters() should be moved
from the newly created threads to the main thread of the perf process.
Then the pr_err can work in a stable situation without the strange
segmentation fault problem.
Example:
Test environment: x86_64 with 160 cores
Before this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
Segmentation fault
After this patch:
$ perf sched replay
...
task 1549 ( :163132: 163132), nr_events: 1
task 1550 ( :163540: 163540), nr_events: 1
task 1551 ( <unknown>: 0), nr_events: 10
...
As shown above, the result continues without any segmentation fault.
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-6-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-31 21:46:32 +08:00
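A minimal sketch of the restructuring described above (names are
illustrative): the counter is opened, and any failure is reported, in the
main thread before the workers start; each worker only receives the
already-opened fd and returns quietly when it is invalid, so stderr
reporting is never multiplied across threads.
  /* Sketch only: error reporting stays in the main thread; the worker merely
   * consumes the fd handed to it and bails out silently if it is invalid. */
  #include <pthread.h>
  #include <stdio.h>

  static void *worker(void *ctx)
  {
          int fd = *(int *)ctx;

          if (fd < 0)             /* no error printing here - main already reported it */
                  return NULL;
          /* ... read the counter via fd ... */
          return NULL;
  }

  int main(void)
  {
          pthread_t t;
          int fd = -1;            /* pretend the counter open failed */

          if (fd < 0)
                  fprintf(stderr, "counter open failed\n");   /* reported once, in main */
          if (pthread_create(&t, NULL, worker, &fd) == 0)
                  pthread_join(t, NULL);
          return 0;
  }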
|
|
|
int fd;
|
2012-09-12 04:29:27 +08:00
|
|
|
};
|
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
static void *thread_func(void *ctx)
|
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct sched_thread_parms *parms = ctx;
|
|
|
|
struct task_desc *this_task = parms->task;
|
|
|
|
struct perf_sched *sched = parms->sched;
|
2009-09-11 18:12:54 +08:00
|
|
|
u64 cpu_usage_0, cpu_usage_1;
|
2009-09-11 18:12:54 +08:00
|
|
|
unsigned long i, ret;
|
|
|
|
char comm2[22];
|
perf sched replay: Fix the segmentation fault problem caused by pr_err in threads
2015-03-31 21:46:32 +08:00
|
|
|
int fd = parms->fd;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2013-12-28 03:55:14 +08:00
|
|
|
zfree(&parms);
|
2012-09-12 04:29:27 +08:00
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
sprintf(comm2, ":%s", this_task->comm);
|
|
|
|
prctl(PR_SET_NAME, comm2);
|
2012-09-09 09:53:06 +08:00
|
|
|
if (fd < 0)
|
|
|
|
return NULL;
|
2009-09-11 18:12:54 +08:00
|
|
|
again:
|
|
|
|
ret = sem_post(&this_task->ready_for_work);
|
|
|
|
BUG_ON(ret);
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_lock(&sched->start_work_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_unlock(&sched->start_work_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
|
2009-12-09 17:51:30 +08:00
|
|
|
cpu_usage_0 = get_cpu_usage_nsec_self(fd);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
for (i = 0; i < this_task->nr_events; i++) {
|
|
|
|
this_task->curr_event = i;
|
2012-09-12 04:29:27 +08:00
|
|
|
perf_sched__process_event(sched, this_task->atoms[i]);
|
2009-09-11 18:12:54 +08:00
|
|
|
}
|
|
|
|
|
2009-12-09 17:51:30 +08:00
|
|
|
cpu_usage_1 = get_cpu_usage_nsec_self(fd);
|
2009-09-11 18:12:54 +08:00
|
|
|
this_task->cpu_usage = cpu_usage_1 - cpu_usage_0;
|
|
|
|
ret = sem_post(&this_task->work_done_sem);
|
|
|
|
BUG_ON(ret);
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_lock(&sched->work_done_wait_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_unlock(&sched->work_done_wait_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void create_tasks(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
|
|
|
struct task_desc *task;
|
|
|
|
pthread_attr_t attr;
|
|
|
|
unsigned long i;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = pthread_attr_init(&attr);
|
|
|
|
BUG_ON(err);
|
2011-01-11 00:14:23 +08:00
|
|
|
err = pthread_attr_setstacksize(&attr,
|
|
|
|
(size_t) max(16 * 1024, PTHREAD_STACK_MIN));
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(err);
|
2012-09-12 04:29:27 +08:00
|
|
|
err = pthread_mutex_lock(&sched->start_work_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(err);
|
2012-09-12 04:29:27 +08:00
|
|
|
err = pthread_mutex_lock(&sched->work_done_wait_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(err);
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
struct sched_thread_parms *parms = malloc(sizeof(*parms));
|
|
|
|
BUG_ON(parms == NULL);
|
|
|
|
parms->task = task = sched->tasks[i];
|
|
|
|
parms->sched = sched;
|
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
2015-03-31 21:46:34 +08:00
|
|
|
parms->fd = self_open_counters(sched, i);
|
2009-09-11 18:12:54 +08:00
|
|
|
sem_init(&task->sleep_sem, 0, 0);
|
|
|
|
sem_init(&task->ready_for_work, 0, 0);
|
|
|
|
sem_init(&task->work_done_sem, 0, 0);
|
|
|
|
task->curr_event = 0;
|
2012-09-12 04:29:27 +08:00
|
|
|
err = pthread_create(&task->thread, &attr, thread_func, parms);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(err);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void wait_for_tasks(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2009-09-11 18:12:54 +08:00
|
|
|
u64 cpu_usage_0, cpu_usage_1;
|
2009-09-11 18:12:54 +08:00
|
|
|
struct task_desc *task;
|
|
|
|
unsigned long i, ret;
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->start_time = get_nsecs();
|
|
|
|
sched->cpu_usage = 0;
|
|
|
|
pthread_mutex_unlock(&sched->work_done_wait_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
task = sched->tasks[i];
|
2009-09-11 18:12:54 +08:00
|
|
|
ret = sem_wait(&task->ready_for_work);
|
|
|
|
BUG_ON(ret);
|
|
|
|
sem_init(&task->ready_for_work, 0, 0);
|
|
|
|
}
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_lock(&sched->work_done_wait_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
|
|
|
|
cpu_usage_0 = get_cpu_usage_nsec_parent();
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
pthread_mutex_unlock(&sched->start_work_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
task = sched->tasks[i];
|
2009-09-11 18:12:54 +08:00
|
|
|
ret = sem_wait(&task->work_done_sem);
|
|
|
|
BUG_ON(ret);
|
|
|
|
sem_init(&task->work_done_sem, 0, 0);
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->cpu_usage += task->cpu_usage;
|
2009-09-11 18:12:54 +08:00
|
|
|
task->cpu_usage = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
cpu_usage_1 = get_cpu_usage_nsec_parent();
|
2012-09-12 04:29:27 +08:00
|
|
|
if (!sched->runavg_cpu_usage)
|
|
|
|
sched->runavg_cpu_usage = sched->cpu_usage;
|
2015-03-31 21:46:36 +08:00
|
|
|
sched->runavg_cpu_usage = (sched->runavg_cpu_usage * (sched->replay_repeat - 1) + sched->cpu_usage) / sched->replay_repeat;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->parent_cpu_usage = cpu_usage_1 - cpu_usage_0;
|
|
|
|
if (!sched->runavg_parent_cpu_usage)
|
|
|
|
sched->runavg_parent_cpu_usage = sched->parent_cpu_usage;
|
2015-03-31 21:46:36 +08:00
|
|
|
sched->runavg_parent_cpu_usage = (sched->runavg_parent_cpu_usage * (sched->replay_repeat - 1) +
|
|
|
|
sched->parent_cpu_usage)/sched->replay_repeat;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
ret = pthread_mutex_lock(&sched->start_work_mutex);
|
2009-09-11 18:12:54 +08:00
|
|
|
BUG_ON(ret);
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < sched->nr_tasks; i++) {
|
|
|
|
task = sched->tasks[i];
|
2009-09-11 18:12:54 +08:00
|
|
|
sem_init(&task->sleep_sem, 0, 0);
|
|
|
|
task->curr_event = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void run_one_test(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2011-01-25 00:13:04 +08:00
|
|
|
u64 T0, T1, delta, avg_delta, fluct;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
T0 = get_nsecs();
|
2012-09-12 04:29:27 +08:00
|
|
|
wait_for_tasks(sched);
|
2009-09-11 18:12:54 +08:00
|
|
|
T1 = get_nsecs();
|
|
|
|
|
|
|
|
delta = T1 - T0;
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->sum_runtime += delta;
|
|
|
|
sched->nr_runs++;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
avg_delta = sched->sum_runtime / sched->nr_runs;
|
2009-09-11 18:12:54 +08:00
|
|
|
if (delta < avg_delta)
|
|
|
|
fluct = avg_delta - delta;
|
|
|
|
else
|
|
|
|
fluct = delta - avg_delta;
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->sum_fluct += fluct;
|
|
|
|
if (!sched->run_avg)
|
|
|
|
sched->run_avg = delta;
|
2015-03-31 21:46:36 +08:00
|
|
|
sched->run_avg = (sched->run_avg * (sched->replay_repeat - 1) + delta) / sched->replay_repeat;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2016-08-08 23:23:49 +08:00
|
|
|
printf("#%-3ld: %0.3f, ", sched->nr_runs, (double)delta / NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2016-08-08 23:23:49 +08:00
|
|
|
printf("ravg: %0.2f, ", (double)sched->run_avg / NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
printf("cpu: %0.2f / %0.2f",
|
2016-08-08 23:23:49 +08:00
|
|
|
(double)sched->cpu_usage / NSEC_PER_MSEC, (double)sched->runavg_cpu_usage / NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
#if 0
|
|
|
|
/*
|
perf sched: Implement the scheduling workload replay engine
2009-09-11 18:12:54 +08:00
|
|
|
* rusage statistics done by the parent, these are less
|
2012-09-12 04:29:27 +08:00
|
|
|
* accurate than the sched->sum_exec_runtime based statistics:
|
perf sched: Implement the scheduling workload replay engine
Integrate the schedbench.c bits with the raw trace events
that we get from the perf machinery, and activate the
workload replayer/simulator.
Example of a captured 'make -j' workload:
$ perf sched
run measurement overhead: 90 nsecs
sleep measurement overhead: 2724743 nsecs
the run test took 1000081 nsecs
the sleep test took 2981111 nsecs
version = 0.5
...
nr_run_events: 70
nr_sleep_events: 66
nr_wakeup_events: 9
target-less wakeups: 71
multi-target wakeups: 47
run events optimized: 139
task 0 ( perf: 6607), nr_events: 2
task 1 ( perf: 6608), nr_events: 6
task 2 ( : 0), nr_events: 1
task 3 ( make: 6609), nr_events: 5
task 4 ( sh: 6610), nr_events: 4
task 5 ( make: 6611), nr_events: 6
task 6 ( sh: 6612), nr_events: 4
task 7 ( make: 6613), nr_events: 5
task 8 ( migration/11: 25), nr_events: 1
task 9 ( migration/13: 29), nr_events: 1
task 10 ( migration/15: 33), nr_events: 1
task 11 ( migration/9: 21), nr_events: 1
task 12 ( sh: 6614), nr_events: 4
task 13 ( make: 6615), nr_events: 5
task 14 ( sh: 6616), nr_events: 4
task 15 ( make: 6617), nr_events: 7
task 16 ( migration/3: 9), nr_events: 1
task 17 ( migration/5: 13), nr_events: 1
task 18 ( migration/7: 17), nr_events: 1
task 19 ( migration/1: 5), nr_events: 1
task 20 ( sh: 6618), nr_events: 4
task 21 ( make: 6619), nr_events: 5
task 22 ( sh: 6620), nr_events: 4
task 23 ( make: 6621), nr_events: 10
task 24 ( sh: 6623), nr_events: 3
task 25 ( gcc: 6624), nr_events: 4
task 26 ( gcc: 6625), nr_events: 4
task 27 ( gcc: 6626), nr_events: 5
task 28 ( collect2: 6627), nr_events: 5
task 29 ( sh: 6622), nr_events: 1
task 30 ( make: 6628), nr_events: 7
task 31 ( sh: 6630), nr_events: 4
task 32 ( gcc: 6631), nr_events: 4
task 33 ( sh: 6629), nr_events: 1
task 34 ( gcc: 6632), nr_events: 4
task 35 ( gcc: 6633), nr_events: 4
task 36 ( collect2: 6634), nr_events: 4
task 37 ( make: 6635), nr_events: 8
task 38 ( sh: 6637), nr_events: 4
task 39 ( sh: 6636), nr_events: 1
task 40 ( gcc: 6638), nr_events: 4
task 41 ( gcc: 6639), nr_events: 4
task 42 ( gcc: 6640), nr_events: 4
task 43 ( collect2: 6641), nr_events: 4
task 44 ( make: 6642), nr_events: 6
task 45 ( sh: 6643), nr_events: 5
task 46 ( sh: 6644), nr_events: 3
task 47 ( sh: 6645), nr_events: 4
task 48 ( make: 6646), nr_events: 6
task 49 ( sh: 6647), nr_events: 3
task 50 ( make: 6648), nr_events: 5
task 51 ( sh: 6649), nr_events: 5
task 52 ( sh: 6650), nr_events: 6
task 53 ( make: 6651), nr_events: 4
task 54 ( make: 6652), nr_events: 5
task 55 ( make: 6653), nr_events: 4
task 56 ( make: 6654), nr_events: 4
task 57 ( make: 6655), nr_events: 5
task 58 ( sh: 6656), nr_events: 4
task 59 ( gcc: 6657), nr_events: 9
task 60 ( ksoftirqd/3: 10), nr_events: 1
task 61 ( gcc: 6658), nr_events: 4
task 62 ( make: 6659), nr_events: 5
task 63 ( sh: 6660), nr_events: 3
task 64 ( gcc: 6661), nr_events: 5
task 65 ( collect2: 6662), nr_events: 4
------------------------------------------------------------
#1 : 256.745, ravg: 256.74, cpu: 0.00 / 0.00
#2 : 439.372, ravg: 275.01, cpu: 0.00 / 0.00
#3 : 411.971, ravg: 288.70, cpu: 0.00 / 0.00
#4 : 385.500, ravg: 298.38, cpu: 0.00 / 0.00
#5 : 366.526, ravg: 305.20, cpu: 0.00 / 0.00
#6 : 381.281, ravg: 312.81, cpu: 0.00 / 0.00
#7 : 410.756, ravg: 322.60, cpu: 0.00 / 0.00
#8 : 368.009, ravg: 327.14, cpu: 0.00 / 0.00
#9 : 408.098, ravg: 335.24, cpu: 0.00 / 0.00
#10 : 368.582, ravg: 338.57, cpu: 0.00 / 0.00
I.e. we successfully analyzed the trace, replayed it
via real threads and measured the replayed workload's
scheduling properties.
This is how it looked in 'top' output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7164 mingo 20 0 1434m 8080 888 R 57.0 0.1 0:02.04 :perf
7165 mingo 20 0 1434m 8080 888 R 41.8 0.1 0:01.52 :perf
7228 mingo 20 0 1434m 8080 888 R 39.8 0.1 0:01.44 :gcc
7225 mingo 20 0 1434m 8080 888 R 33.8 0.1 0:01.26 :gcc
7202 mingo 20 0 1434m 8080 888 R 31.2 0.1 0:01.16 :sh
7222 mingo 20 0 1434m 8080 888 R 25.2 0.1 0:00.96 :sh
7211 mingo 20 0 1434m 8080 888 R 21.9 0.1 0:00.82 :sh
7213 mingo 20 0 1434m 8080 888 D 19.2 0.1 0:00.74 :sh
7194 mingo 20 0 1434m 8080 888 D 18.6 0.1 0:00.72 :make
There are still various kinks in it - more patches to come.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-11 18:12:54 +08:00
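To make the replay idea described above concrete, the following is a small, self-contained sketch of turning per-task run/sleep events into work executed by a real thread. It is an illustrative assumption only, not the builtin-sched.c implementation; the names merely echo the task_desc and sched_event handling seen elsewhere in this file.
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define MAX_EVENTS 16

enum sched_event_type { SCHED_EVENT_RUN, SCHED_EVENT_SLEEP, SCHED_EVENT_WAKEUP };

struct sched_event {
	enum sched_event_type type;
	uint64_t duration_ns;
};

struct task_desc {
	const char *comm;
	struct sched_event events[MAX_EVENTS];
	size_t nr_events;
};

/* Append one event to a task's list; the "run events optimized" step above
 * would merge adjacent RUN events instead of appending blindly. */
static void add_event(struct task_desc *task, enum sched_event_type type, uint64_t duration_ns)
{
	if (task->nr_events < MAX_EVENTS)
		task->events[task->nr_events++] =
			(struct sched_event){ .type = type, .duration_ns = duration_ns };
}

/* Replay thread: burn CPU for RUN events, nanosleep for SLEEP events. */
static void *replay_thread(void *arg)
{
	struct task_desc *task = arg;
	size_t i;

	for (i = 0; i < task->nr_events; i++) {
		struct sched_event *ev = &task->events[i];

		if (ev->type == SCHED_EVENT_SLEEP) {
			struct timespec ts = {
				.tv_sec  = ev->duration_ns / 1000000000ULL,
				.tv_nsec = ev->duration_ns % 1000000000ULL,
			};
			nanosleep(&ts, NULL);
		} else if (ev->type == SCHED_EVENT_RUN) {
			volatile uint64_t spin;
			for (spin = 0; spin < ev->duration_ns; spin++)
				; /* crude CPU burn standing in for burn_nsecs() */
		}
		/* SCHED_EVENT_WAKEUP (waking another task) omitted for brevity */
	}
	return NULL;
}

int main(void)
{
	struct task_desc task = { .comm = "demo" };
	pthread_t tid;

	add_event(&task, SCHED_EVENT_RUN, 1000000);   /* ~1ms worth of spinning */
	add_event(&task, SCHED_EVENT_SLEEP, 2000000); /* sleep for 2ms */

	pthread_create(&tid, NULL, replay_thread, &task);
	pthread_join(&tid, NULL);
	printf("task %s: replayed %zu events\n", task.comm, task.nr_events);
	return 0;
}
Compile with 'gcc -pthread'; a real replay engine would additionally synchronize wakeups between the threads and repeat the run to compute the running averages shown above.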
|
|
|
*/
|
2009-09-11 18:12:54 +08:00
|
|
|
printf(" [%0.2f / %0.2f]",
|
2016-08-08 23:23:49 +08:00
|
|
|
(double)sched->parent_cpu_usage / NSEC_PER_MSEC,
|
|
|
|
(double)sched->runavg_parent_cpu_usage / NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
#endif
|
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
printf("\n");
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
if (sched->nr_sleep_corrections)
|
|
|
|
printf(" (%ld sleep corrections)\n", sched->nr_sleep_corrections);
|
|
|
|
sched->nr_sleep_corrections = 0;
|
2009-09-11 18:12:54 +08:00
|
|
|
}
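The parent-side CPU usage printed above is noted as being less accurate than the sched->sum_exec_runtime based numbers; one plausible source for such parent-side figures is getrusage(). A minimal sketch (an assumption for illustration, not the actual builtin-sched.c code) of converting child rusage into nanoseconds:
#include <stdint.h>
#include <stdio.h>
#include <sys/resource.h>

/* Sum user+system time of waited-for children, in nanoseconds. */
static uint64_t children_cpu_nsecs(void)
{
	struct rusage ru;

	if (getrusage(RUSAGE_CHILDREN, &ru))
		return 0;
	return (uint64_t)(ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) * 1000000000ULL +
	       (uint64_t)(ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) * 1000ULL;
}

int main(void)
{
	printf("children cpu: %.2f msecs\n", (double)children_cpu_nsecs() / 1e6);
	return 0;
}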
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void test_calibrations(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2009-09-11 18:12:54 +08:00
|
|
|
u64 T0, T1;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
T0 = get_nsecs();
|
2016-08-08 23:23:49 +08:00
|
|
|
burn_nsecs(sched, NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
T1 = get_nsecs();
|
|
|
|
|
2011-01-23 06:37:02 +08:00
|
|
|
printf("the run test took %" PRIu64 " nsecs\n", T1 - T0);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
|
|
|
T0 = get_nsecs();
|
2016-08-08 23:23:49 +08:00
|
|
|
sleep_nsecs(NSEC_PER_MSEC);
|
2009-09-11 18:12:54 +08:00
|
|
|
T1 = get_nsecs();
|
|
|
|
|
2011-01-23 06:37:02 +08:00
|
|
|
printf("the sleep test took %" PRIu64 " nsecs\n", T1 - T0);
|
2009-09-11 18:12:54 +08:00
|
|
|
}
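test_calibrations() relies on get_nsecs(), burn_nsecs() and sleep_nsecs(), whose definitions live elsewhere in the file. As a rough sketch of what such helpers commonly look like (illustrative assumptions only; the real burn_nsecs() also takes the perf_sched context, as the call above shows):
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef uint64_t u64;

/* Monotonic timestamp in nanoseconds. */
static u64 get_nsecs(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (u64)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Spin until roughly 'nsecs' of wall-clock time has passed. */
static void burn_nsecs(u64 nsecs)
{
	u64 T0 = get_nsecs();

	while (get_nsecs() - T0 < nsecs)
		; /* busy wait */
}

static void sleep_nsecs(u64 nsecs)
{
	struct timespec ts = {
		.tv_sec  = nsecs / 1000000000ULL,
		.tv_nsec = nsecs % 1000000000ULL,
	};

	nanosleep(&ts, NULL);
}

int main(void)
{
	u64 T0, T1;

	T0 = get_nsecs();
	burn_nsecs(1000000);
	T1 = get_nsecs();
	printf("the run test took %llu nsecs\n", (unsigned long long)(T1 - T0));

	T0 = get_nsecs();
	sleep_nsecs(1000000);
	T1 = get_nsecs();
	printf("the sleep test took %llu nsecs\n", (unsigned long long)(T1 - T0));
	return 0;
}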
|
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
static int
|
2012-09-12 04:29:27 +08:00
|
|
|
replay_wakeup_event(struct perf_sched *sched,
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel, struct perf_sample *sample,
|
|
|
|
struct machine *machine __maybe_unused)
|
2009-09-12 09:59:01 +08:00
|
|
|
{
|
2012-09-12 06:29:17 +08:00
|
|
|
const char *comm = perf_evsel__strval(evsel, sample, "comm");
|
|
|
|
const u32 pid = perf_evsel__intval(evsel, sample, "pid");
|
2009-09-12 09:59:01 +08:00
|
|
|
struct task_desc *waker, *wakee;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2017-02-17 16:17:38 +08:00
|
|
|
if (verbose > 0) {
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case, which was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
It also fills the _event structures only when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
printf("sched_wakeup event %p\n", evsel);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
printf(" ... pid %d woke up %s/%d\n", sample->tid, comm, pid);
|
2009-09-11 18:12:54 +08:00
|
|
|
}
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
waker = register_pid(sched, sample->tid, "<unknown>");
|
2012-09-12 06:29:17 +08:00
|
|
|
wakee = register_pid(sched, pid, comm);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
add_sched_event_wakeup(sched, waker, sample->time, wakee);
|
2012-09-09 09:53:06 +08:00
|
|
|
return 0;
|
2009-09-11 18:12:54 +08:00
|
|
|
}
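replay_wakeup_event() above reads "comm" and "pid" only at their point of use, which is the point of the 'Don't read all tracepoint variables in advance' change. A toy, self-contained illustration of that pattern follows; event_strval()/event_intval() and struct raw_event are hypothetical stand-ins, not the perf_evsel__strval()/perf_evsel__intval() helpers themselves:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a raw tracepoint sample; not a real perf type. */
struct raw_event {
	const char *comm;
	uint32_t pid;
	uint32_t target_cpu;	/* present in the event but unused here */
};

/* Hypothetical analogues of per-field string/integer lookups. */
static const char *event_strval(const struct raw_event *ev, const char *field)
{
	return strcmp(field, "comm") == 0 ? ev->comm : "";
}

static uint32_t event_intval(const struct raw_event *ev, const char *field)
{
	return strcmp(field, "pid") == 0 ? ev->pid : 0;
}

/* The handler performs only the lookups it needs; "target_cpu" is never decoded. */
static void wakeup_handler(const struct raw_event *ev)
{
	const char *comm = event_strval(ev, "comm");
	uint32_t pid = event_intval(ev, "pid");

	printf("woke up %s/%u\n", comm, pid);
}

int main(void)
{
	struct raw_event ev = { .comm = "gcc", .pid = 6657, .target_cpu = 3 };

	wakeup_handler(&ev);
	return 0;
}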
|
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
static int replay_switch_event(struct perf_sched *sched,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine __maybe_unused)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2012-09-12 06:29:17 +08:00
|
|
|
const char *prev_comm = perf_evsel__strval(evsel, sample, "prev_comm"),
|
|
|
|
*next_comm = perf_evsel__strval(evsel, sample, "next_comm");
|
|
|
|
const u32 prev_pid = perf_evsel__intval(evsel, sample, "prev_pid"),
|
|
|
|
next_pid = perf_evsel__intval(evsel, sample, "next_pid");
|
|
|
|
const u64 prev_state = perf_evsel__intval(evsel, sample, "prev_state");
|
2012-09-11 06:15:03 +08:00
|
|
|
struct task_desc *prev, __maybe_unused *next;
|
2012-08-07 22:33:42 +08:00
|
|
|
u64 timestamp0, timestamp = sample->time;
|
|
|
|
int cpu = sample->cpu;
|
2009-09-11 18:12:54 +08:00
|
|
|
s64 delta;
|
|
|
|
|
2017-02-17 16:17:38 +08:00
|
|
|
if (verbose > 0)
|
2012-09-12 06:29:17 +08:00
|
|
|
printf("sched_switch event %p\n", evsel);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
if (cpu >= MAX_CPUS || cpu < 0)
|
2012-09-09 09:53:06 +08:00
|
|
|
return 0;
|
        timestamp0 = sched->cpu_last_switched[cpu];
        if (timestamp0)
                delta = timestamp - timestamp0;
        else
                delta = 0;

        if (delta < 0) {
                pr_err("hm, delta: %" PRIu64 " < 0 ?\n", delta);
                return -1;
        }
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields, that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
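In code terms, the change moves field extraction out of a central dispatcher and into each handler, right where the field is consumed. A hedged sketch of the resulting shape (assuming the perf_evsel__intval()/perf_evsel__strval() tracepoint-field helpers of that perf version):

        /*
         * Hedged sketch: the switch handler pulls its fields straight from
         * the sample at the point of use instead of receiving them all
         * pre-parsed from a common dispatcher.
         */
        const char *prev_comm  = perf_evsel__strval(evsel, sample, "prev_comm");
        const char *next_comm  = perf_evsel__strval(evsel, sample, "next_comm");
        const u32   prev_pid   = perf_evsel__intval(evsel, sample, "prev_pid");
        const u32   next_pid   = perf_evsel__intval(evsel, sample, "next_pid");
        const u64   prev_state = perf_evsel__intval(evsel, sample, "prev_state");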
        pr_debug(" ... switch from %s/%d to %s/%d [ran %" PRIu64 " nsecs]\n",
                 prev_comm, prev_pid, next_comm, next_pid, delta);
        prev = register_pid(sched, prev_pid, prev_comm);
        next = register_pid(sched, next_pid, next_comm);
        sched->cpu_last_switched[cpu] = timestamp;
        add_sched_event_run(sched, prev, timestamp, delta);
        add_sched_event_sleep(sched, prev, timestamp, prev_state);

        return 0;
}

static int replay_fork_event(struct perf_sched *sched,
                             union perf_event *event,
                             struct machine *machine)
{
        struct thread *child, *parent;

        child = machine__findnew_thread(machine, event->fork.pid,
                                        event->fork.tid);
        parent = machine__findnew_thread(machine, event->fork.ppid,
                                         event->fork.ptid);

        if (child == NULL || parent == NULL) {
                pr_debug("thread does not exist on fork event: child %p, parent %p\n",
                         child, parent);
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect access to machine->threads from
concurrent access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.
So the rule is the same for refcounts + protected trees in the kernel,
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-07 07:43:22 +08:00
                goto out_put;
        }
        if (verbose > 0) {
                printf("fork event\n");
                printf("... parent: %s/%d\n", thread__comm_str(parent), parent->tid);
                printf("... child: %s/%d\n", thread__comm_str(child), child->tid);
        }
        register_pid(sched, parent->tid, thread__comm_str(parent));
        register_pid(sched, child->tid, thread__comm_str(child));
out_put:
        thread__put(child);
        thread__put(parent);
        return 0;
}
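The locking rule quoted above in the "perf machine: Protect the machine->threads with a rwlock" message (take the tree lock, find the object, bump the refcount, drop the lock, use the object, then put the reference) is why machine__findnew_thread() is paired with thread__put() in replay_fork_event(). A generic, self-contained sketch of that pattern; the structure and helpers below are illustrative, not perf's actual internals:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
        atomic_int refcnt;      /* number of active users of this object */
        /* ... payload ... */
};

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Hypothetical rb-tree search, assumed to exist for the sketch. */
struct obj *tree_lookup(long key);

/* Find under the read lock, take a reference, drop the lock, return. */
static struct obj *obj_find_get(long key)
{
        struct obj *obj;

        pthread_rwlock_rdlock(&tree_lock);
        obj = tree_lookup(key);
        if (obj)
                atomic_fetch_add(&obj->refcnt, 1);
        pthread_rwlock_unlock(&tree_lock);
        return obj;
}

/*
 * Drop a reference when done with the object.  The tree itself holds one
 * reference, so the object is only freed once it has also been removed
 * from the tree (which drops that last reference under the write lock).
 */
static void obj_put(struct obj *obj)
{
        if (obj && atomic_fetch_sub(&obj->refcnt, 1) == 1)
                free(obj);
}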
struct sort_dimension {
        const char *name;
        sort_fn_t cmp;
        struct list_head list;
};
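Concrete sort keys are built by pairing a comparator with a name and linking the chosen dimensions into a sort list that thread_lat_cmp() (below) walks in order. An illustrative dimension in the same style; the field accessed by the comparator is an assumption about struct work_atoms/struct thread of this era:

static int pid_cmp(struct work_atoms *l, struct work_atoms *r)
{
        if (l->thread->tid < r->thread->tid)
                return -1;
        if (l->thread->tid > r->thread->tid)
                return 1;
        return 0;
}

static struct sort_dimension pid_sort_dimension = {
        .name = "pid",
        .cmp  = pid_cmp,
};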
/*
 * handle runtime stats saved per thread
 */
static struct thread_runtime *thread__init_runtime(struct thread *thread)
{
        struct thread_runtime *r;

        r = zalloc(sizeof(struct thread_runtime));
        if (!r)
                return NULL;

        init_stats(&r->run_stats);
        thread__set_priv(thread, r);

        return r;
}
static struct thread_runtime *thread__get_runtime(struct thread *thread)
{
        struct thread_runtime *tr;

        tr = thread__priv(thread);
        if (tr == NULL) {
                tr = thread__init_runtime(thread);
                if (tr == NULL)
                        pr_debug("Failed to malloc memory for runtime data.\n");
        }

        return tr;
}
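Callers never allocate the per-thread runtime directly; they go through the lazy accessor and bail out on failure. A hedged usage sketch (update_stats() is assumed from perf's stats helpers; the caller name is illustrative):

/* Illustrative caller: account one on-CPU interval for this thread. */
static int account_run_time(struct thread *thread, u64 delta)
{
        struct thread_runtime *tr = thread__get_runtime(thread);

        if (tr == NULL)
                return -1;

        update_stats(&tr->run_stats, delta);
        return 0;
}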
static int
thread_lat_cmp(struct list_head *list, struct work_atoms *l, struct work_atoms *r)
{
        struct sort_dimension *sort;
        int ret = 0;

        BUG_ON(list_empty(list));

        list_for_each_entry(sort, list, list) {
                ret = sort->cmp(l, r);
                if (ret)
                        return ret;
        }

        return ret;
}
static struct work_atoms *
thread_atoms_search(struct rb_root_cached *root, struct thread *thread,
                    struct list_head *sort_list)
{
        struct rb_node *node = root->rb_root.rb_node;
        struct work_atoms key = { .thread = thread };

        while (node) {
                struct work_atoms *atoms;
                int cmp;

                atoms = container_of(node, struct work_atoms, node);

                cmp = thread_lat_cmp(sort_list, &key, atoms);
                if (cmp > 0)
                        node = node->rb_left;
                else if (cmp < 0)
                        node = node->rb_right;
                else {
                        BUG_ON(thread != atoms->thread);
                        return atoms;
                }
        }
        return NULL;
}
static void
__thread_latency_insert(struct rb_root_cached *root, struct work_atoms *data,
                        struct list_head *sort_list)
{
        struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
        bool leftmost = true;

        while (*new) {
                struct work_atoms *this;
                int cmp;

                this = container_of(*new, struct work_atoms, node);
                parent = *new;

                cmp = thread_lat_cmp(sort_list, data, this);

                if (cmp > 0)
                        new = &((*new)->rb_left);
                else {
                        new = &((*new)->rb_right);
                        leftmost = false;
                }
        }

        rb_link_node(&data->node, parent, new);
        rb_insert_color_cached(&data->node, root, leftmost);
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static int thread_atoms_insert(struct perf_sched *sched, struct thread *thread)
{
	struct work_atoms *atoms = zalloc(sizeof(*atoms));
	if (!atoms) {
		pr_err("No memory at %s\n", __func__);
		return -1;
	}

	atoms->thread = thread__get(thread);
	INIT_LIST_HEAD(&atoms->work_list);
	__thread_latency_insert(&sched->atom_root, atoms, &sched->cmp_pid);
	return 0;
}

perf sched: Don't read all tracepoint variables in advance

Do it just at the actual consumer of these fields; that way we avoid
needless lookups:

[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]

Before:

[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null

 Performance counter stats for 'perf sched lat' (10 runs):

 103.592215 task-clock               # 0.993 CPUs utilized    ( +- 0.33% )
         12 context-switches         # 0.114 K/sec            ( +- 3.29% )
          0 cpu-migrations           # 0.000 K/sec
      7,605 page-faults              # 0.073 M/sec            ( +- 0.00% )
 345,796,112 cycles                  # 3.338 GHz              ( +- 0.07% ) [82.90%]
 106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
  62,060,877 stalled-cycles-backend  # 17.95% backend cycles idle  ( +- 0.80% ) [67.14%]
 628,246,586 instructions            # 1.82 insns per cycle
                                     # 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
 134,962,057 branches                # 1302.820 M/sec         ( +- 0.10% ) [83.64%]
   1,233,037 branch-misses           # 0.91% of all branches  ( +- 0.29% ) [83.41%]

 0.104333272 seconds time elapsed    ( +- 0.33% )

After:

[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null

 Performance counter stats for 'perf sched lat' (10 runs):

  98.848272 task-clock               # 0.993 CPUs utilized    ( +- 0.48% )
         11 context-switches         # 0.112 K/sec            ( +- 2.83% )
          0 cpu-migrations           # 0.003 K/sec            ( +- 50.92% )
      7,604 page-faults              # 0.077 M/sec            ( +- 0.00% )
 332,216,085 cycles                  # 3.361 GHz              ( +- 0.14% ) [82.87%]
 100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
  58,788,692 stalled-cycles-backend  # 17.70% backend cycles idle  ( +- 0.59% ) [67.15%]
 609,402,433 instructions            # 1.83 insns per cycle
                                     # 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
 131,277,138 branches                # 1328.067 M/sec         ( +- 0.06% ) [83.77%]
   1,117,871 branch-misses           # 0.85% of all branches  ( +- 0.32% ) [83.51%]

 0.099580430 seconds time elapsed    ( +- 0.48% )

[root@sandy ~]#

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

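The pattern described above is simply to decode a tracepoint field from the
sample at the one place that consumes it, rather than extracting every field
up front. A minimal sketch of that pattern, using perf_evsel__intval() as
the handlers below do; the handler name and its single consumer are
illustrative, and it relies on sched_out_state(), defined just below:

static void handle_switch_out(struct perf_evsel *evsel, struct perf_sample *sample)
{
	/* Decoded here, at the only consumer, instead of in a generic pre-pass. */
	const u32 prev_pid   = perf_evsel__intval(evsel, sample, "prev_pid");
	const u64 prev_state = perf_evsel__intval(evsel, sample, "prev_state");

	pr_debug("pid %u left the cpu in state %c\n",
		 prev_pid, sched_out_state(prev_state));
}
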
static char sched_out_state(u64 prev_state)
{
	const char *str = TASK_STATE_TO_CHAR_STR;

	return str[prev_state];
}

static int
add_sched_out_event(struct work_atoms *atoms,
		    char run_state,
		    u64 timestamp)
{
	struct work_atom *atom = zalloc(sizeof(*atom));
	if (!atom) {
		pr_err("No memory at %s", __func__);
		return -1;
	}

	atom->sched_out_time = timestamp;

	if (run_state == 'R') {
		atom->state = THREAD_WAIT_CPU;
		atom->wake_up_time = atom->sched_out_time;
	}

	list_add_tail(&atom->list, &atoms->work_list);
	return 0;
}

perf tools: Fix processing of randomly serialized sched traces

Currently it's possible to meet such too-high latency results with
'perf sched latency':

 -----------------------------------------------------------------------------------
  Task            |  Runtime ms | Switches | Average delay ms | Maximum delay ms |
 -----------------------------------------------------------------------------------
  xfce4-panel     |    0.222 ms |        2 | avg: 4718.345 ms | max: 9436.493 ms |
  scsi_eh_3       |    3.962 ms |       36 | avg:   55.957 ms | max: 1977.829 ms |

The origin is traces that are sometimes badly serialized across cpus.
For example the raw traces that raised such results for xfce4-panel:

(1) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(2) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(3) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]

The traces are processed in the order they arrive. In (2) xfce4-panel
sleeps, it is first woken up in (3) and eventually scheduled in (5).
The latency reported is then 1504 - 1495 = 9 secs, as reported by perf
sched. But this is wrong: we trusted the traces to be nicely serialized
when we should actually trust the timestamps more.

If we reorder by timestamps we get:

(1) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(2) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(3) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]

Now the trace makes more sense: xfce4-panel is sleeping, then it is
woken up in (1) and scheduled in (2). It goes to sleep in (3), is woken
up in (4) and scheduled in (5). The latency captured between (1) and
(2) is 39 us, and between (4) and (5) it is 2.1 ms.

Such badly serialized traces are the origin of the high latencies
reported by perf sched. Basically, we need to check whether the wake-up
time is higher than the schedule-out time. If that is not the case, we
need to tag the current work atom as invalid.

Besides that, we may need to work later on a better ordering of the
traces given by the kernel.

After this patch:

 xfce4-session     |    0.221 ms |        1 | avg:    0.538 ms | max:    0.538 ms |

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

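The rule stated above, only account an atom whose wake-up does not come
after its schedule-in, reduces to the guard that add_sched_in_event()
below applies before computing the delta; shown here in isolation as a
sketch:

	/*
	 * Out-of-order traces: the recorded wake-up is later than the
	 * schedule-in being processed, so the delta would be bogus.
	 */
	if (timestamp < atom->wake_up_time) {
		atom->state = THREAD_IGNORE;	/* drop this atom from the stats */
		return;
	}
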
static void
add_runtime_event(struct work_atoms *atoms, u64 delta,
		  u64 timestamp __maybe_unused)
{
	struct work_atom *atom;

	BUG_ON(list_empty(&atoms->work_list));

	atom = list_entry(atoms->work_list.prev, struct work_atom, list);

	atom->runtime += delta;
	atoms->total_runtime += delta;
}

static void
add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
{
	struct work_atom *atom;
	u64 delta;

	if (list_empty(&atoms->work_list))
		return;

	atom = list_entry(atoms->work_list.prev, struct work_atom, list);

	if (atom->state != THREAD_WAIT_CPU)
		return;

	if (timestamp < atom->wake_up_time) {
		atom->state = THREAD_IGNORE;
		return;
	}

	atom->state = THREAD_SCHED_IN;
	atom->sched_in_time = timestamp;

	delta = atom->sched_in_time - atom->wake_up_time;
	atoms->total_lat += delta;
	if (delta > atoms->max_lat) {
		atoms->max_lat = delta;
		atoms->max_lat_at = timestamp;
	}
	atoms->nb_atoms++;
}

perf machine: Protect the machine->threads with a rwlock

In addition to using refcounts for struct thread lifetime management,
we need to protect access to machine->threads from concurrent access.

That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree, while another thread decays
hist_entries, which ends up dropping references and ultimately deleting
threads from the rb_tree and releasing their resources when no further
hist_entry (or other data structure, like in 'perf sched') references
them.

So the rule is the same as for refcounts + protected trees in the
kernel: get the tree lock, find the object, bump its refcount, drop the
tree lock, return, use the object, then drop the refcount if no more
use of it is needed; keep it if storing it in some other data
structure, and drop it when releasing that data structure.

I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)",
and "perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because, as we return references to
several data structures, we may end up adding more reference counting
for the other data structures, and then we'll drop them at
addr_location__put() time.

Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

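A minimal sketch of that get/put pairing as the handlers below follow it;
the surrounding handler shape is illustrative and error handling is
trimmed to the essentials:

	struct thread *t = machine__findnew_thread(machine, -1, pid);

	if (t == NULL)
		return -1;

	/* ... look up or insert t's work_atoms, update its stats ... */

	thread__put(t);		/* pairs with the reference findnew took */
	return 0;
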
static int latency_switch_event(struct perf_sched *sched,
				struct perf_evsel *evsel,
				struct perf_sample *sample,
				struct machine *machine)
{
	const u32 prev_pid = perf_evsel__intval(evsel, sample, "prev_pid"),
		  next_pid = perf_evsel__intval(evsel, sample, "next_pid");
	const u64 prev_state = perf_evsel__intval(evsel, sample, "prev_state");
	struct work_atoms *out_events, *in_events;
	struct thread *sched_out, *sched_in;
	u64 timestamp0, timestamp = sample->time;
	int cpu = sample->cpu, err = -1;
	s64 delta;

	BUG_ON(cpu >= MAX_CPUS || cpu < 0);

	timestamp0 = sched->cpu_last_switched[cpu];
	sched->cpu_last_switched[cpu] = timestamp;
	if (timestamp0)
		delta = timestamp - timestamp0;
	else
		delta = 0;

	if (delta < 0) {
		pr_err("hm, delta: %" PRIu64 " < 0 ?\n", delta);
		return -1;
	}

	sched_out = machine__findnew_thread(machine, -1, prev_pid);
	sched_in = machine__findnew_thread(machine, -1, next_pid);
	if (sched_out == NULL || sched_in == NULL)
		goto out_put;

	out_events = thread_atoms_search(&sched->atom_root, sched_out, &sched->cmp_pid);
	if (!out_events) {
		if (thread_atoms_insert(sched, sched_out))
			goto out_put;
		out_events = thread_atoms_search(&sched->atom_root, sched_out, &sched->cmp_pid);
		if (!out_events) {
			pr_err("out-event: Internal tree error");
			goto out_put;
		}
	}
	if (add_sched_out_event(out_events, sched_out_state(prev_state), timestamp))
		return -1;

	in_events = thread_atoms_search(&sched->atom_root, sched_in, &sched->cmp_pid);
	if (!in_events) {
		if (thread_atoms_insert(sched, sched_in))
			goto out_put;
		in_events = thread_atoms_search(&sched->atom_root, sched_in, &sched->cmp_pid);
		if (!in_events) {
			pr_err("in-event: Internal tree error");
			goto out_put;
		}
		/*
		 * Task came in we have not heard about yet,
		 * add in an initial atom in runnable state:
		 */
		if (add_sched_out_event(in_events, 'R', timestamp))
			goto out_put;
	}
	add_sched_in_event(in_events, timestamp);
	err = 0;
out_put:
	thread__put(sched_out);
	thread__put(sched_in);
	return err;
}

static int latency_runtime_event(struct perf_sched *sched,
				 struct perf_evsel *evsel,
				 struct perf_sample *sample,
				 struct machine *machine)
{
	const u32 pid = perf_evsel__intval(evsel, sample, "pid");
	const u64 runtime = perf_evsel__intval(evsel, sample, "runtime");
	struct thread *thread = machine__findnew_thread(machine, -1, pid);
	struct work_atoms *atoms = thread_atoms_search(&sched->atom_root, thread, &sched->cmp_pid);
	u64 timestamp = sample->time;
	int cpu = sample->cpu, err = -1;

	if (thread == NULL)
		return -1;

	BUG_ON(cpu >= MAX_CPUS || cpu < 0);
	if (!atoms) {
		if (thread_atoms_insert(sched, thread))
			goto out_put;
		atoms = thread_atoms_search(&sched->atom_root, thread, &sched->cmp_pid);
		if (!atoms) {
			pr_err("in-event: Internal tree error");
			goto out_put;
		}
		if (add_sched_out_event(atoms, 'R', timestamp))
			goto out_put;
	}

	add_runtime_event(atoms, runtime, timestamp);
	err = 0;
out_put:
	thread__put(thread);
	return err;
}

static int latency_wakeup_event(struct perf_sched *sched,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
2009-09-12 14:06:14 +08:00
|
|
|
{
|
2014-05-13 02:19:46 +08:00
|
|
|
const u32 pid = perf_evsel__intval(evsel, sample, "pid");
|
2009-09-15 02:04:48 +08:00
|
|
|
struct work_atoms *atoms;
|
2009-09-11 18:12:54 +08:00
|
|
|
struct work_atom *atom;
|
2009-09-12 14:06:14 +08:00
|
|
|
struct thread *wakee;
|
2012-08-07 22:33:42 +08:00
|
|
|
u64 timestamp = sample->time;
|
2015-04-07 07:43:22 +08:00
|
|
|
int err = -1;
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2014-07-14 18:02:25 +08:00
|
|
|
wakee = machine__findnew_thread(machine, -1, pid);
|
2015-04-07 07:43:22 +08:00
|
|
|
if (wakee == NULL)
|
|
|
|
return -1;
|
2012-09-12 04:29:27 +08:00
|
|
|
atoms = thread_atoms_search(&sched->atom_root, wakee, &sched->cmp_pid);
|
2009-09-13 05:11:32 +08:00
|
|
|
if (!atoms) {
|
2012-09-12 04:29:27 +08:00
|
|
|
if (thread_atoms_insert(sched, wakee))
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2012-09-12 04:29:27 +08:00
|
|
|
atoms = thread_atoms_search(&sched->atom_root, wakee, &sched->cmp_pid);
|
2012-09-09 09:53:06 +08:00
|
|
|
if (!atoms) {
|
2012-09-12 10:11:06 +08:00
|
|
|
pr_err("wakeup-event: Internal tree error");
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2012-09-09 09:53:06 +08:00
|
|
|
}
|
|
|
|
if (add_sched_out_event(atoms, 'S', timestamp))
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2009-09-12 14:06:14 +08:00
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
BUG_ON(list_empty(&atoms->work_list));
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
atom = list_entry(atoms->work_list.prev, struct work_atom, list);
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2009-10-10 20:46:04 +08:00
|
|
|
/*
|
2014-05-13 09:38:21 +08:00
|
|
|
* We do not guarantee that the wakeup event happens when the
|
|
|
|
* task is out of the run queue; it may also happen when the task is
|
|
|
|
* on the run queue and the wakeup only changes ->state to TASK_RUNNING,
|
|
|
|
* in which case we should not set ->wake_up_time when waking up a
|
|
|
|
* task which is already on the run queue.
|
|
|
|
*
|
2009-10-10 20:46:04 +08:00
|
|
|
* You WILL be missing events if you've recorded only
|
|
|
|
* one CPU, or are looking at only one, so don't
|
2014-05-13 09:38:21 +08:00
|
|
|
* skip in this case.
|
2009-10-10 20:46:04 +08:00
|
|
|
*/
|
2012-09-12 04:29:27 +08:00
|
|
|
if (sched->profile_cpu == -1 && atom->state != THREAD_SLEEPING)
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_ok;
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->nr_timestamps++;
|
2009-09-14 00:15:54 +08:00
|
|
|
if (atom->sched_out_time > timestamp) {
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->nr_unordered_timestamps++;
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_ok;
|
2009-09-14 00:15:54 +08:00
|
|
|
}
|
perf tools: Fix processing of randomly serialized sched traces
Currently it's possible to meet such too high latency results
with 'perf sched latency'.
-----------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms |
-----------------------------------------------------------------------------------
xfce4-panel | 0.222 ms | 2 | avg: 4718.345 ms | max: 9436.493 ms |
scsi_eh_3 | 3.962 ms | 36 | avg: 55.957 ms | max: 1977.829 ms |
The origin is on traces that are sometimes badly serialized across cpus.
For example the raw traces that raised such results for xfce4-panel:
(1) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(2) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(3) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]
The traces are processed in the order they arrive. Then in (2),
xfce4-panel sleeps; it is first woken up in (3) and eventually
scheduled in (5).
The latency reported by perf sched is then 1504 - 1495 = 9 secs.
But this is wrong: we trusted the traces to be nicely serialized
when we should actually have trusted the timestamps.
If we reorder by timestamps we get:
(1) Xorg-4276 [001] 1494.663860125: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(2) [init]-0 [000] 1494.663899990: sched_switch: task swapper:0 [140] (R) ==> xfce4-panel:4569 [120]
(3) xfce4-panel-4569 [000] 1494.663928373: sched_switch: task xfce4-panel:4569 [120] (S) ==> swapper:0 [140]
(4) Xorg-4276 [001] 1504.098252756: sched_wakeup: task xfce4-panel:4569 [120] success=1 [000]
(5) perf-5219 [000] 1504.100353302: sched_switch: task perf:5219 [120] (S) ==> xfce4-panel:4569 [120]
Now the traces make more sense: xfce4-panel is sleeping. It is
woken up in (1) and scheduled in (2).
It goes to sleep in (3), is woken up in (4) and scheduled in (5).
Now the latency captured between (1) and (2) is 39 us,
and between (4) and (5) it is 2.1 ms.
Such a pattern of bad serialization is the origin of the high latencies
reported by perf sched.
Basically, we need to check whether the wake-up time is higher than the
schedule-out time. If it is not, we need to tag the current
work atom as invalid.
Besides that, we may later need to work on a better ordering of the
traces given by the kernel.
After this patch:
xfce4-session | 0.221 ms | 1 | avg: 0.538 ms | max: 0.538 ms |
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-14 09:01:12 +08:00
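Condensed from the surrounding handler, a small sketch of the check this
commit describes (abbreviated and illustrative; the real code jumps to a
label instead of returning directly):

  /* A wakeup whose timestamp is older than the atom's last sched-out time
   * was badly serialized across CPUs: count it and skip it instead of
   * recording a bogus ->wake_up_time. */
  if (atom->sched_out_time > timestamp) {
          sched->nr_unordered_timestamps++;
          return 0;               /* do not touch the work atom */
  }

  atom->state = THREAD_WAIT_CPU;  /* only now is the wakeup trusted */
  atom->wake_up_time = timestamp;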
|
|
|
|
2009-09-11 18:12:54 +08:00
|
|
|
atom->state = THREAD_WAIT_CPU;
|
|
|
|
atom->wake_up_time = timestamp;
|
2015-04-07 07:43:22 +08:00
|
|
|
out_ok:
|
|
|
|
err = 0;
|
|
|
|
out_put:
|
|
|
|
thread__put(wakee);
|
|
|
|
return err;
|
2009-09-12 14:06:14 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
static int latency_migrate_task_event(struct perf_sched *sched,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
2009-10-10 20:46:04 +08:00
|
|
|
{
|
2012-09-12 06:29:17 +08:00
|
|
|
const u32 pid = perf_evsel__intval(evsel, sample, "pid");
|
2012-08-07 22:33:42 +08:00
|
|
|
u64 timestamp = sample->time;
|
2009-10-10 20:46:04 +08:00
|
|
|
struct work_atoms *atoms;
|
|
|
|
struct work_atom *atom;
|
|
|
|
struct thread *migrant;
|
2015-04-07 07:43:22 +08:00
|
|
|
int err = -1;
|
2009-10-10 20:46:04 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Only need to worry about migration when profiling one CPU.
|
|
|
|
*/
|
2012-09-12 04:29:27 +08:00
|
|
|
if (sched->profile_cpu == -1)
|
2012-09-09 09:53:06 +08:00
|
|
|
return 0;
|
2009-10-10 20:46:04 +08:00
|
|
|
|
2014-07-14 18:02:25 +08:00
|
|
|
migrant = machine__findnew_thread(machine, -1, pid);
|
2015-04-07 07:43:22 +08:00
|
|
|
if (migrant == NULL)
|
|
|
|
return -1;
|
2012-09-12 04:29:27 +08:00
|
|
|
atoms = thread_atoms_search(&sched->atom_root, migrant, &sched->cmp_pid);
|
2009-10-10 20:46:04 +08:00
|
|
|
if (!atoms) {
|
2012-09-12 04:29:27 +08:00
|
|
|
if (thread_atoms_insert(sched, migrant))
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2013-09-11 20:46:56 +08:00
|
|
|
register_pid(sched, migrant->tid, thread__comm_str(migrant));
|
2012-09-12 04:29:27 +08:00
|
|
|
atoms = thread_atoms_search(&sched->atom_root, migrant, &sched->cmp_pid);
|
2012-09-09 09:53:06 +08:00
|
|
|
if (!atoms) {
|
2012-09-12 10:11:06 +08:00
|
|
|
pr_err("migration-event: Internal tree error");
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2012-09-09 09:53:06 +08:00
|
|
|
}
|
|
|
|
if (add_sched_out_event(atoms, 'R', timestamp))
|
2015-04-07 07:43:22 +08:00
|
|
|
goto out_put;
|
2009-10-10 20:46:04 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
BUG_ON(list_empty(&atoms->work_list));
|
|
|
|
|
|
|
|
atom = list_entry(atoms->work_list.prev, struct work_atom, list);
|
|
|
|
atom->sched_in_time = atom->sched_out_time = atom->wake_up_time = timestamp;
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->nr_timestamps++;
|
2009-10-10 20:46:04 +08:00
|
|
|
|
|
|
|
if (atom->sched_out_time > timestamp)
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->nr_unordered_timestamps++;
|
2015-04-07 07:43:22 +08:00
|
|
|
err = 0;
|
|
|
|
out_put:
|
|
|
|
thread__put(migrant);
|
|
|
|
return err;
|
2009-10-10 20:46:04 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void output_lat_thread(struct perf_sched *sched, struct work_atoms *work_list)
|
2009-09-12 14:06:14 +08:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
int ret;
|
2009-09-13 07:56:25 +08:00
|
|
|
u64 avg;
|
perf tools: Introduce timestamp__scnprintf_usec()
Joonwoo reported that there's a mismatch between timestamps in script
and sched commands. This was because of a difference in how the
timestamp was printed. Factor out the code and share it so that they
stay in sync. I also found that sched map has a similar problem; fix it too.
Committer notes:
Fixed the max_lat_at bug introduced by Namhyung's original patch, as
pointed out by Joonwoo, and made it a function following the scnprintf()
model, i.e. returning the number of bytes formatted and receiving as
the first parameter the object from which the formatted data is
obtained, renaming it from:
char *timestamp_in_usec(char *bf, size_t size, u64 timestamp)
to
int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
Reported-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-24 10:02:45 +08:00
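A plausible sketch of the renamed helper, assuming nanosecond timestamps
and the usual NSEC_PER_SEC/NSEC_PER_USEC constants (the body is not shown
in this message, so treat this as illustrative rather than the exact code):

  int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
  {
          u64 sec  = timestamp / NSEC_PER_SEC;
          u64 usec = (timestamp % NSEC_PER_SEC) / NSEC_PER_USEC;

          /* scnprintf() returns the number of bytes actually written */
          return scnprintf(bf, size, "%" PRIu64 ".%06" PRIu64, sec, usec);
  }

Callers such as output_lat_thread() below then format into a local buffer,
e.g. timestamp__scnprintf_usec(work_list->max_lat_at, max_lat_at, sizeof(max_lat_at)).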
|
|
|
char max_lat_at[32];
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
if (!work_list->nb_atoms)
|
2009-09-12 14:06:14 +08:00
|
|
|
return;
|
2009-09-14 00:15:54 +08:00
|
|
|
/*
|
|
|
|
* Ignore idle threads:
|
|
|
|
*/
|
2013-09-11 20:46:56 +08:00
|
|
|
if (!strcmp(thread__comm_str(work_list->thread), "swapper"))
|
2009-09-14 00:15:54 +08:00
|
|
|
return;
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->all_runtime += work_list->total_runtime;
|
|
|
|
sched->all_count += work_list->nb_atoms;
|
2009-09-13 07:56:25 +08:00
|
|
|
|
2015-05-22 21:18:40 +08:00
|
|
|
if (work_list->num_merged > 1)
|
|
|
|
ret = printf(" %s:(%d) ", thread__comm_str(work_list->thread), work_list->num_merged);
|
|
|
|
else
|
|
|
|
ret = printf(" %s:%d ", thread__comm_str(work_list->thread), work_list->thread->tid);
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2009-09-15 00:30:44 +08:00
|
|
|
for (i = 0; i < 24 - ret; i++)
|
2009-09-12 14:06:14 +08:00
|
|
|
printf(" ");
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
avg = work_list->total_lat / work_list->nb_atoms;
|
2016-10-24 10:02:45 +08:00
|
|
|
timestamp__scnprintf_usec(work_list->max_lat_at, max_lat_at, sizeof(max_lat_at));
|
2009-09-12 14:06:14 +08:00
|
|
|
|
2016-10-24 10:02:45 +08:00
|
|
|
printf("|%11.3f ms |%9" PRIu64 " | avg:%9.3f ms | max:%9.3f ms | max at: %13s s\n",
|
2016-08-08 23:23:49 +08:00
|
|
|
(double)work_list->total_runtime / NSEC_PER_MSEC,
|
|
|
|
work_list->nb_atoms, (double)avg / NSEC_PER_MSEC,
|
|
|
|
(double)work_list->max_lat / NSEC_PER_MSEC,
|
2016-10-24 10:02:45 +08:00
|
|
|
max_lat_at);
|
2009-09-12 14:06:14 +08:00
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
static int pid_cmp(struct work_atoms *l, struct work_atoms *r)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
2015-11-02 19:10:25 +08:00
|
|
|
if (l->thread == r->thread)
|
|
|
|
return 0;
|
2013-07-04 21:20:31 +08:00
|
|
|
if (l->thread->tid < r->thread->tid)
|
2009-09-13 09:36:29 +08:00
|
|
|
return -1;
|
2013-07-04 21:20:31 +08:00
|
|
|
if (l->thread->tid > r->thread->tid)
|
2009-09-13 09:36:29 +08:00
|
|
|
return 1;
|
2015-11-02 19:10:25 +08:00
|
|
|
return (int)(l->thread - r->thread);
|
2009-09-13 09:36:29 +08:00
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
static int avg_cmp(struct work_atoms *l, struct work_atoms *r)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
|
|
|
u64 avgl, avgr;
|
|
|
|
|
|
|
|
if (!l->nb_atoms)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
if (!r->nb_atoms)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
avgl = l->total_lat / l->nb_atoms;
|
|
|
|
avgr = r->total_lat / r->nb_atoms;
|
|
|
|
|
|
|
|
if (avgl < avgr)
|
|
|
|
return -1;
|
|
|
|
if (avgl > avgr)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
static int max_cmp(struct work_atoms *l, struct work_atoms *r)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
|
|
|
if (l->max_lat < r->max_lat)
|
|
|
|
return -1;
|
|
|
|
if (l->max_lat > r->max_lat)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
static int switch_cmp(struct work_atoms *l, struct work_atoms *r)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
|
|
|
if (l->nb_atoms < r->nb_atoms)
|
|
|
|
return -1;
|
|
|
|
if (l->nb_atoms > r->nb_atoms)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-09-15 02:04:48 +08:00
|
|
|
static int runtime_cmp(struct work_atoms *l, struct work_atoms *r)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
|
|
|
if (l->total_runtime < r->total_runtime)
|
|
|
|
return -1;
|
|
|
|
if (l->total_runtime > r->total_runtime)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-10-06 04:17:29 +08:00
|
|
|
static int sort_dimension__add(const char *tok, struct list_head *list)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
size_t i;
|
|
|
|
static struct sort_dimension avg_sort_dimension = {
|
|
|
|
.name = "avg",
|
|
|
|
.cmp = avg_cmp,
|
|
|
|
};
|
|
|
|
static struct sort_dimension max_sort_dimension = {
|
|
|
|
.name = "max",
|
|
|
|
.cmp = max_cmp,
|
|
|
|
};
|
|
|
|
static struct sort_dimension pid_sort_dimension = {
|
|
|
|
.name = "pid",
|
|
|
|
.cmp = pid_cmp,
|
|
|
|
};
|
|
|
|
static struct sort_dimension runtime_sort_dimension = {
|
|
|
|
.name = "runtime",
|
|
|
|
.cmp = runtime_cmp,
|
|
|
|
};
|
|
|
|
static struct sort_dimension switch_sort_dimension = {
|
|
|
|
.name = "switch",
|
|
|
|
.cmp = switch_cmp,
|
|
|
|
};
|
|
|
|
struct sort_dimension *available_sorts[] = {
|
|
|
|
&pid_sort_dimension,
|
|
|
|
&avg_sort_dimension,
|
|
|
|
&max_sort_dimension,
|
|
|
|
&switch_sort_dimension,
|
|
|
|
&runtime_sort_dimension,
|
|
|
|
};
|
2009-09-13 09:36:29 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
for (i = 0; i < ARRAY_SIZE(available_sorts); i++) {
|
2009-09-13 09:36:29 +08:00
|
|
|
if (!strcmp(available_sorts[i]->name, tok)) {
|
|
|
|
list_add_tail(&available_sorts[i]->list, list);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static void perf_sched__sort_lat(struct perf_sched *sched)
|
2009-09-13 09:36:29 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
2018-12-07 03:18:19 +08:00
|
|
|
struct rb_root_cached *root = &sched->atom_root;
|
2015-05-22 21:18:40 +08:00
|
|
|
again:
|
2009-09-13 09:36:29 +08:00
|
|
|
for (;;) {
|
2009-09-15 02:04:48 +08:00
|
|
|
struct work_atoms *data;
|
2018-12-07 03:18:19 +08:00
|
|
|
node = rb_first_cached(root);
|
2009-09-13 09:36:29 +08:00
|
|
|
if (!node)
|
|
|
|
break;
|
|
|
|
|
2018-12-07 03:18:19 +08:00
|
|
|
rb_erase_cached(node, root);
|
2009-09-15 02:04:48 +08:00
|
|
|
data = rb_entry(node, struct work_atoms, node);
|
2012-09-12 04:29:27 +08:00
|
|
|
__thread_latency_insert(&sched->sorted_atom_root, data, &sched->sort_list);
|
2009-09-13 09:36:29 +08:00
|
|
|
}
|
2015-05-22 21:18:40 +08:00
|
|
|
if (root == &sched->atom_root) {
|
|
|
|
root = &sched->merged_atom_root;
|
|
|
|
goto again;
|
|
|
|
}
|
2009-09-13 09:36:29 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static int process_sched_wakeup_event(struct perf_tool *tool,
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case, which was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
It also only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel,
|
2012-09-11 06:15:03 +08:00
|
|
|
struct perf_sample *sample,
|
2012-09-12 00:18:47 +08:00
|
|
|
struct machine *machine)
|
2009-09-12 09:59:01 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2009-09-12 09:59:01 +08:00
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->tp_handler->wakeup_event)
|
|
|
|
return sched->tp_handler->wakeup_event(sched, evsel, sample, machine);
|
2012-09-09 09:53:06 +08:00
|
|
|
|
2012-09-12 06:29:17 +08:00
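A minimal sketch of the accessor pattern this change adopts, using the perf_evsel__intval()/perf_evsel__strval() helpers seen in the handlers below; the field names are the sched_wakeup tracepoint's and the handler body is hypothetical, not the verbatim code:
	/* Decode tracepoint fields on demand; common fields (tid, cpu, time)
	 * are already available in struct perf_sample. */
	const char *comm = perf_evsel__strval(evsel, sample, "comm");
	const u32 pid	 = perf_evsel__intval(evsel, sample, "pid");
	const u32 prio	 = perf_evsel__intval(evsel, sample, "prio");
	pr_debug("wakeup: %s/%d prio %d at %" PRIu64 "\n", comm, pid, prio, sample->time);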
|
|
|
return 0;
|
2009-09-12 09:59:01 +08:00
|
|
|
}
|
|
|
|
|
2016-04-12 21:29:29 +08:00
|
|
|
union map_priv {
|
|
|
|
void *ptr;
|
|
|
|
bool color;
|
|
|
|
};
|
|
|
|
|
|
|
|
static bool thread__has_color(struct thread *thread)
|
|
|
|
{
|
|
|
|
union map_priv priv = {
|
|
|
|
.ptr = thread__priv(thread),
|
|
|
|
};
|
|
|
|
|
|
|
|
return priv.color;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct thread*
|
|
|
|
map__findnew_thread(struct perf_sched *sched, struct machine *machine, pid_t pid, pid_t tid)
|
|
|
|
{
|
|
|
|
struct thread *thread = machine__findnew_thread(machine, pid, tid);
|
|
|
|
union map_priv priv = {
|
|
|
|
.color = false,
|
|
|
|
};
|
|
|
|
|
|
|
|
if (!sched->map.color_pids || !thread || thread__priv(thread))
|
|
|
|
return thread;
|
|
|
|
|
|
|
|
if (thread_map__has(sched->map.color_pids, tid))
|
|
|
|
priv.color = true;
|
|
|
|
|
|
|
|
thread__set_priv(thread, priv.ptr);
|
|
|
|
return thread;
|
|
|
|
}
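The two helpers above stash a per-thread color flag in the thread's existing priv pointer by punning it through union map_priv, so no extra allocation is needed. A compressed, illustrative view of the round trip using the same names:
	union map_priv priv = { .color = true };
	thread__set_priv(thread, priv.ptr);	/* store: the flag rides inside the void * priv */
	priv.ptr = thread__priv(thread);	/* later: read the pointer back ... */
	bool is_colored = priv.color;		/* ... and reinterpret it as the flag */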
|
|
|
|
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
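With that change the dispatchers below pre-decode nothing; each handler pulls a field at its point of use. A condensed sketch under that assumption (hypothetical helper, fields from the sched_stat_runtime tracepoint):
	static void example_report_runtime(struct perf_evsel *evsel, struct perf_sample *sample)
	{
		/* looked up here, at the consumer, not in the dispatcher */
		const u32 pid	  = perf_evsel__intval(evsel, sample, "pid");
		const u64 runtime = perf_evsel__intval(evsel, sample, "runtime");

		pr_debug("pid %d ran for %" PRIu64 " ns\n", pid, runtime);
	}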
|
|
|
static int map_switch_event(struct perf_sched *sched, struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample, struct machine *machine)
|
2009-09-16 23:40:48 +08:00
|
|
|
{
|
2014-05-16 13:37:05 +08:00
|
|
|
const u32 next_pid = perf_evsel__intval(evsel, sample, "next_pid");
|
|
|
|
struct thread *sched_in;
|
2018-03-06 11:37:36 +08:00
|
|
|
struct thread_runtime *tr;
|
2009-09-16 23:40:48 +08:00
|
|
|
int new_shortname;
|
2012-08-07 22:33:42 +08:00
|
|
|
u64 timestamp0, timestamp = sample->time;
|
2009-09-16 23:40:48 +08:00
|
|
|
s64 delta;
|
2016-04-12 21:29:26 +08:00
|
|
|
int i, this_cpu = sample->cpu;
|
|
|
|
int cpus_nr;
|
|
|
|
bool new_cpu = false;
|
2016-04-12 21:29:27 +08:00
|
|
|
const char *color = PERF_COLOR_NORMAL;
|
perf tools: Introduce timestamp__scnprintf_usec()
Joonwoo reported that there's a mismatch between timestamps in script
and sched commands. This was because of a difference in printing the
timestamp. Factor out the code and share it so that they can be in
sync. Also I found that sched map has a similar problem; fix it too.
Committer notes:
Fixed the max_lat_at bug introduced by Namhyung's original patch, as
pointed out by Joonwoo, and made it a function following the scnprintf()
model, i.e. returning the number of bytes formatted, and receiving as
the first parameter the object from which the data for the formatting is
obtained, renaming it from:
char *timestamp_in_usec(char *bf, size_t size, u64 timestamp)
to
int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
Reported-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-24 10:02:45 +08:00
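A plausible body for the renamed helper, following the scnprintf() convention described above (nanosecond input printed with microsecond resolution; NSEC_PER_SEC/NSEC_PER_USEC and the exact format are assumptions, not the verbatim implementation):
	int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
	{
		u64 sec  = timestamp / NSEC_PER_SEC;
		u64 usec = (timestamp % NSEC_PER_SEC) / NSEC_PER_USEC;

		return scnprintf(bf, size, "%" PRIu64 ".%06" PRIu64, sec, usec);
	}
The map code below calls it with a local char stimestamp[32] buffer and prints the result with a fixed-width "%12s".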
|
|
|
char stimestamp[32];
|
2009-09-16 23:40:48 +08:00
|
|
|
|
|
|
|
BUG_ON(this_cpu >= MAX_CPUS || this_cpu < 0);
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
if (this_cpu > sched->max_cpu)
|
|
|
|
sched->max_cpu = this_cpu;
|
2009-09-16 23:40:48 +08:00
|
|
|
|
2016-04-12 21:29:26 +08:00
|
|
|
if (sched->map.comp) {
|
|
|
|
cpus_nr = bitmap_weight(sched->map.comp_cpus_mask, MAX_CPUS);
|
|
|
|
if (!test_and_set_bit(this_cpu, sched->map.comp_cpus_mask)) {
|
|
|
|
sched->map.comp_cpus[cpus_nr++] = this_cpu;
|
|
|
|
new_cpu = true;
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
cpus_nr = sched->max_cpu;
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
timestamp0 = sched->cpu_last_switched[this_cpu];
|
|
|
|
sched->cpu_last_switched[this_cpu] = timestamp;
|
2009-09-16 23:40:48 +08:00
|
|
|
if (timestamp0)
|
|
|
|
delta = timestamp - timestamp0;
|
|
|
|
else
|
|
|
|
delta = 0;
|
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
if (delta < 0) {
|
2012-09-12 10:11:06 +08:00
|
|
|
pr_err("hm, delta: %" PRIu64 " < 0 ?\n", delta);
|
2012-09-09 09:53:06 +08:00
|
|
|
return -1;
|
|
|
|
}
|
2009-09-16 23:40:48 +08:00
|
|
|
|
2016-04-12 21:29:29 +08:00
|
|
|
sched_in = map__findnew_thread(sched, machine, -1, next_pid);
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads against concurrent access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.
So the rule is the same for refcounts + protected trees in the kernel:
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-07 07:43:22 +08:00
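The pairing rule spelled out above, condensed into a sketch (the helper name is hypothetical; the concrete instance in this function is the map__findnew_thread()/thread__put() pair):
	static int touch_thread(struct machine *machine, pid_t pid, pid_t tid)
	{
		struct thread *t = machine__findnew_thread(machine, pid, tid); /* reference taken under the lock */

		if (t == NULL)
			return -1;

		pr_debug("comm: %s\n", thread__comm_str(t));	/* safe while the reference is held */
		thread__put(t);	/* drop it, unless t was stored in a longer-lived structure */
		return 0;
	}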
|
|
|
if (sched_in == NULL)
|
|
|
|
return -1;
|
2009-09-16 23:40:48 +08:00
|
|
|
|
2018-03-06 11:37:36 +08:00
|
|
|
tr = thread__get_runtime(sched_in);
|
|
|
|
if (tr == NULL) {
|
|
|
|
thread__put(sched_in);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads against concurrent access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.
So the rule is the same for refcounts + protected trees in the kernel:
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-07 07:43:22 +08:00
|
|
|
sched->curr_thread[this_cpu] = thread__get(sched_in);
|
2009-09-16 23:40:48 +08:00
|
|
|
|
|
|
|
printf(" ");
|
|
|
|
|
|
|
|
new_shortname = 0;
|
2018-03-06 11:37:36 +08:00
|
|
|
if (!tr->shortname[0]) {
|
2014-05-06 13:39:01 +08:00
|
|
|
if (!strcmp(thread__comm_str(sched_in), "swapper")) {
|
|
|
|
/*
|
|
|
|
* Don't allocate a letter-number for swapper:0
|
|
|
|
* as a shortname. Instead, we use '.' for it.
|
|
|
|
*/
|
2018-03-06 11:37:36 +08:00
|
|
|
tr->shortname[0] = '.';
|
|
|
|
tr->shortname[1] = ' ';
|
2009-09-16 23:40:48 +08:00
|
|
|
} else {
|
2018-03-06 11:37:36 +08:00
|
|
|
tr->shortname[0] = sched->next_shortname1;
|
|
|
|
tr->shortname[1] = sched->next_shortname2;
|
2014-05-06 13:39:01 +08:00
|
|
|
|
|
|
|
if (sched->next_shortname1 < 'Z') {
|
|
|
|
sched->next_shortname1++;
|
2009-09-16 23:40:48 +08:00
|
|
|
} else {
|
2014-05-06 13:39:01 +08:00
|
|
|
sched->next_shortname1 = 'A';
|
|
|
|
if (sched->next_shortname2 < '9')
|
|
|
|
sched->next_shortname2++;
|
|
|
|
else
|
|
|
|
sched->next_shortname2 = '0';
|
2009-09-16 23:40:48 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
new_shortname = 1;
|
|
|
|
}
|
|
|
|
|
2016-04-12 21:29:26 +08:00
|
|
|
for (i = 0; i < cpus_nr; i++) {
|
|
|
|
int cpu = sched->map.comp ? sched->map.comp_cpus[i] : i;
|
2016-04-12 21:29:29 +08:00
|
|
|
struct thread *curr_thread = sched->curr_thread[cpu];
|
2018-03-06 11:37:36 +08:00
|
|
|
struct thread_runtime *curr_tr;
|
2016-04-12 21:29:29 +08:00
|
|
|
const char *pid_color = color;
|
2016-04-12 21:29:30 +08:00
|
|
|
const char *cpu_color = color;
|
2016-04-12 21:29:29 +08:00
|
|
|
|
|
|
|
if (curr_thread && thread__has_color(curr_thread))
|
|
|
|
pid_color = COLOR_PIDS;
|
2016-04-12 21:29:26 +08:00
|
|
|
|
2016-04-12 21:29:31 +08:00
|
|
|
if (sched->map.cpus && !cpu_map__has(sched->map.cpus, cpu))
|
|
|
|
continue;
|
|
|
|
|
2016-04-12 21:29:30 +08:00
|
|
|
if (sched->map.color_cpus && cpu_map__has(sched->map.color_cpus, cpu))
|
|
|
|
cpu_color = COLOR_CPUS;
|
|
|
|
|
2009-09-16 23:40:48 +08:00
|
|
|
if (cpu != this_cpu)
|
2016-10-24 10:02:43 +08:00
|
|
|
color_fprintf(stdout, color, " ");
|
2009-09-16 23:40:48 +08:00
|
|
|
else
|
2016-04-12 21:29:30 +08:00
|
|
|
color_fprintf(stdout, cpu_color, "*");
|
2009-09-16 23:40:48 +08:00
|
|
|
|
2018-03-06 11:37:36 +08:00
|
|
|
if (sched->curr_thread[cpu]) {
|
|
|
|
curr_tr = thread__get_runtime(sched->curr_thread[cpu]);
|
|
|
|
if (curr_tr == NULL) {
|
|
|
|
thread__put(sched_in);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
color_fprintf(stdout, pid_color, "%2s ", curr_tr->shortname);
|
|
|
|
} else
|
2016-04-12 21:29:27 +08:00
|
|
|
color_fprintf(stdout, color, " ");
|
2009-09-16 23:40:48 +08:00
|
|
|
}
|
|
|
|
|
2016-04-12 21:29:31 +08:00
|
|
|
if (sched->map.cpus && !cpu_map__has(sched->map.cpus, this_cpu))
|
|
|
|
goto out;
|
|
|
|
|
perf tools: Introduce timestamp__scnprintf_usec()
Joonwoo reported that there's a mismatch between timestamps in script
and sched commands. This was because of a difference in printing the
timestamp. Factor out the code and share it so that they can be in
sync. Also I found that sched map has a similar problem; fix it too.
Committer notes:
Fixed the max_lat_at bug introduced by Namhyung's original patch, as
pointed out by Joonwoo, and made it a function following the scnprintf()
model, i.e. returning the number of bytes formatted, and receiving as
the first parameter the object from which the data for the formatting is
obtained, renaming it from:
char *timestamp_in_usec(char *bf, size_t size, u64 timestamp)
to
int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
Reported-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-24 10:02:45 +08:00
|
|
|
timestamp__scnprintf_usec(timestamp, stimestamp, sizeof(stimestamp));
|
|
|
|
color_fprintf(stdout, color, " %12s secs ", stimestamp);
|
2018-03-06 11:37:37 +08:00
|
|
|
if (new_shortname || tr->comm_changed || (verbose > 0 && sched_in->tid)) {
|
2016-04-12 21:29:29 +08:00
|
|
|
const char *pid_color = color;
|
|
|
|
|
|
|
|
if (thread__has_color(sched_in))
|
|
|
|
pid_color = COLOR_PIDS;
|
|
|
|
|
|
|
|
color_fprintf(stdout, pid_color, "%s => %s:%d",
|
2018-03-06 11:37:36 +08:00
|
|
|
tr->shortname, thread__comm_str(sched_in), sched_in->tid);
|
2018-03-06 11:37:37 +08:00
|
|
|
tr->comm_changed = false;
|
2009-09-16 23:40:48 +08:00
|
|
|
}
|
2012-09-09 09:53:06 +08:00
|
|
|
|
2016-04-12 21:29:26 +08:00
|
|
|
if (sched->map.comp && new_cpu)
|
2016-04-12 21:29:27 +08:00
|
|
|
color_fprintf(stdout, color, " (CPU %d)", this_cpu);
|
2016-04-12 21:29:26 +08:00
|
|
|
|
2016-04-12 21:29:31 +08:00
|
|
|
out:
|
2016-04-12 21:29:27 +08:00
|
|
|
color_fprintf(stdout, color, "\n");
|
2016-04-12 21:29:26 +08:00
|
|
|
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads against concurrent access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.
So the rule is the same for refcounts + protected trees in the kernel:
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-07 07:43:22 +08:00
|
|
|
thread__put(sched_in);
|
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
return 0;
|
2009-09-16 23:40:48 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static int process_sched_switch_event(struct perf_tool *tool,
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel,
|
2012-09-11 06:15:03 +08:00
|
|
|
struct perf_sample *sample,
|
2012-09-12 00:18:47 +08:00
|
|
|
struct machine *machine)
|
2009-09-12 09:59:01 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2012-09-09 09:53:06 +08:00
|
|
|
int this_cpu = sample->cpu, err = 0;
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
u32 prev_pid = perf_evsel__intval(evsel, sample, "prev_pid"),
|
|
|
|
next_pid = perf_evsel__intval(evsel, sample, "next_pid");
|
2009-09-12 09:59:01 +08:00
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
if (sched->curr_pid[this_cpu] != (u32)-1) {
|
2009-09-16 20:07:00 +08:00
|
|
|
/*
|
|
|
|
* Are we trying to switch away a PID that is
|
|
|
|
* not current?
|
|
|
|
*/
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->curr_pid[this_cpu] != prev_pid)
|
2012-09-12 04:29:27 +08:00
|
|
|
sched->nr_context_switch_bugs++;
|
2009-09-16 20:07:00 +08:00
|
|
|
}
|
|
|
|
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->tp_handler->switch_event)
|
|
|
|
err = sched->tp_handler->switch_event(sched, evsel, sample, machine);
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
|
|
|
|
sched->curr_pid[this_cpu] = next_pid;
|
2012-09-09 09:53:06 +08:00
|
|
|
return err;
|
2009-09-12 09:59:01 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static int process_sched_runtime_event(struct perf_tool *tool,
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel,
|
2012-09-11 06:15:03 +08:00
|
|
|
struct perf_sample *sample,
|
2012-09-12 00:18:47 +08:00
|
|
|
struct machine *machine)
|
2009-09-15 02:04:48 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2009-09-15 02:04:48 +08:00
|
|
|
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->tp_handler->runtime_event)
|
|
|
|
return sched->tp_handler->runtime_event(sched, evsel, sample, machine);
|
2012-09-09 09:53:06 +08:00
|
|
|
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
return 0;
|
2009-09-15 02:04:48 +08:00
|
|
|
}
|
|
|
|
|
2013-08-08 10:50:47 +08:00
|
|
|
static int perf_sched__process_fork_event(struct perf_tool *tool,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
perf sched: Implement the scheduling workload replay engine
Integrate the schedbench.c bits with the raw trace events
that we get from the perf machinery, and activate the
workload replayer/simulator.
Example of a captured 'make -j' workload:
$ perf sched
run measurement overhead: 90 nsecs
sleep measurement overhead: 2724743 nsecs
the run test took 1000081 nsecs
the sleep test took 2981111 nsecs
version = 0.5
...
nr_run_events: 70
nr_sleep_events: 66
nr_wakeup_events: 9
target-less wakeups: 71
multi-target wakeups: 47
run events optimized: 139
task 0 ( perf: 6607), nr_events: 2
task 1 ( perf: 6608), nr_events: 6
task 2 ( : 0), nr_events: 1
task 3 ( make: 6609), nr_events: 5
task 4 ( sh: 6610), nr_events: 4
task 5 ( make: 6611), nr_events: 6
task 6 ( sh: 6612), nr_events: 4
task 7 ( make: 6613), nr_events: 5
task 8 ( migration/11: 25), nr_events: 1
task 9 ( migration/13: 29), nr_events: 1
task 10 ( migration/15: 33), nr_events: 1
task 11 ( migration/9: 21), nr_events: 1
task 12 ( sh: 6614), nr_events: 4
task 13 ( make: 6615), nr_events: 5
task 14 ( sh: 6616), nr_events: 4
task 15 ( make: 6617), nr_events: 7
task 16 ( migration/3: 9), nr_events: 1
task 17 ( migration/5: 13), nr_events: 1
task 18 ( migration/7: 17), nr_events: 1
task 19 ( migration/1: 5), nr_events: 1
task 20 ( sh: 6618), nr_events: 4
task 21 ( make: 6619), nr_events: 5
task 22 ( sh: 6620), nr_events: 4
task 23 ( make: 6621), nr_events: 10
task 24 ( sh: 6623), nr_events: 3
task 25 ( gcc: 6624), nr_events: 4
task 26 ( gcc: 6625), nr_events: 4
task 27 ( gcc: 6626), nr_events: 5
task 28 ( collect2: 6627), nr_events: 5
task 29 ( sh: 6622), nr_events: 1
task 30 ( make: 6628), nr_events: 7
task 31 ( sh: 6630), nr_events: 4
task 32 ( gcc: 6631), nr_events: 4
task 33 ( sh: 6629), nr_events: 1
task 34 ( gcc: 6632), nr_events: 4
task 35 ( gcc: 6633), nr_events: 4
task 36 ( collect2: 6634), nr_events: 4
task 37 ( make: 6635), nr_events: 8
task 38 ( sh: 6637), nr_events: 4
task 39 ( sh: 6636), nr_events: 1
task 40 ( gcc: 6638), nr_events: 4
task 41 ( gcc: 6639), nr_events: 4
task 42 ( gcc: 6640), nr_events: 4
task 43 ( collect2: 6641), nr_events: 4
task 44 ( make: 6642), nr_events: 6
task 45 ( sh: 6643), nr_events: 5
task 46 ( sh: 6644), nr_events: 3
task 47 ( sh: 6645), nr_events: 4
task 48 ( make: 6646), nr_events: 6
task 49 ( sh: 6647), nr_events: 3
task 50 ( make: 6648), nr_events: 5
task 51 ( sh: 6649), nr_events: 5
task 52 ( sh: 6650), nr_events: 6
task 53 ( make: 6651), nr_events: 4
task 54 ( make: 6652), nr_events: 5
task 55 ( make: 6653), nr_events: 4
task 56 ( make: 6654), nr_events: 4
task 57 ( make: 6655), nr_events: 5
task 58 ( sh: 6656), nr_events: 4
task 59 ( gcc: 6657), nr_events: 9
task 60 ( ksoftirqd/3: 10), nr_events: 1
task 61 ( gcc: 6658), nr_events: 4
task 62 ( make: 6659), nr_events: 5
task 63 ( sh: 6660), nr_events: 3
task 64 ( gcc: 6661), nr_events: 5
task 65 ( collect2: 6662), nr_events: 4
------------------------------------------------------------
#1 : 256.745, ravg: 256.74, cpu: 0.00 / 0.00
#2 : 439.372, ravg: 275.01, cpu: 0.00 / 0.00
#3 : 411.971, ravg: 288.70, cpu: 0.00 / 0.00
#4 : 385.500, ravg: 298.38, cpu: 0.00 / 0.00
#5 : 366.526, ravg: 305.20, cpu: 0.00 / 0.00
#6 : 381.281, ravg: 312.81, cpu: 0.00 / 0.00
#7 : 410.756, ravg: 322.60, cpu: 0.00 / 0.00
#8 : 368.009, ravg: 327.14, cpu: 0.00 / 0.00
#9 : 408.098, ravg: 335.24, cpu: 0.00 / 0.00
#10 : 368.582, ravg: 338.57, cpu: 0.00 / 0.00
I.e. we successfully analyzed the trace, replayed it
via real threads and measured the replayed workload's
scheduling properties.
This is how it looked in 'top' output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7164 mingo 20 0 1434m 8080 888 R 57.0 0.1 0:02.04 :perf
7165 mingo 20 0 1434m 8080 888 R 41.8 0.1 0:01.52 :perf
7228 mingo 20 0 1434m 8080 888 R 39.8 0.1 0:01.44 :gcc
7225 mingo 20 0 1434m 8080 888 R 33.8 0.1 0:01.26 :gcc
7202 mingo 20 0 1434m 8080 888 R 31.2 0.1 0:01.16 :sh
7222 mingo 20 0 1434m 8080 888 R 25.2 0.1 0:00.96 :sh
7211 mingo 20 0 1434m 8080 888 R 21.9 0.1 0:00.82 :sh
7213 mingo 20 0 1434m 8080 888 D 19.2 0.1 0:00.74 :sh
7194 mingo 20 0 1434m 8080 888 D 18.6 0.1 0:00.72 :make
There are still various kinks in it - more patches to come.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2009-09-12 08:43:45 +08:00
|
|
|
|
2013-08-08 10:50:47 +08:00
|
|
|
/* run the fork event through the perf machinery */
|
|
|
|
perf_event__process_fork(tool, event, sample, machine);
|
|
|
|
|
|
|
|
/* and then run additional processing needed for this command */
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->tp_handler->fork_event)
|
2013-08-08 10:50:47 +08:00
|
|
|
return sched->tp_handler->fork_event(sched, event, machine);
|
2012-09-09 09:53:06 +08:00
|
|
|
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
return 0;
|
perf sched: Implement the scheduling workload replay engine
Integrate the schedbench.c bits with the raw trace events
that we get from the perf machinery, and activate the
workload replayer/simulator.
Example of a captured 'make -j' workload:
$ perf sched
run measurement overhead: 90 nsecs
sleep measurement overhead: 2724743 nsecs
the run test took 1000081 nsecs
the sleep test took 2981111 nsecs
version = 0.5
...
nr_run_events: 70
nr_sleep_events: 66
nr_wakeup_events: 9
target-less wakeups: 71
multi-target wakeups: 47
run events optimized: 139
task 0 ( perf: 6607), nr_events: 2
task 1 ( perf: 6608), nr_events: 6
task 2 ( : 0), nr_events: 1
task 3 ( make: 6609), nr_events: 5
task 4 ( sh: 6610), nr_events: 4
task 5 ( make: 6611), nr_events: 6
task 6 ( sh: 6612), nr_events: 4
task 7 ( make: 6613), nr_events: 5
task 8 ( migration/11: 25), nr_events: 1
task 9 ( migration/13: 29), nr_events: 1
task 10 ( migration/15: 33), nr_events: 1
task 11 ( migration/9: 21), nr_events: 1
task 12 ( sh: 6614), nr_events: 4
task 13 ( make: 6615), nr_events: 5
task 14 ( sh: 6616), nr_events: 4
task 15 ( make: 6617), nr_events: 7
task 16 ( migration/3: 9), nr_events: 1
task 17 ( migration/5: 13), nr_events: 1
task 18 ( migration/7: 17), nr_events: 1
task 19 ( migration/1: 5), nr_events: 1
task 20 ( sh: 6618), nr_events: 4
task 21 ( make: 6619), nr_events: 5
task 22 ( sh: 6620), nr_events: 4
task 23 ( make: 6621), nr_events: 10
task 24 ( sh: 6623), nr_events: 3
task 25 ( gcc: 6624), nr_events: 4
task 26 ( gcc: 6625), nr_events: 4
task 27 ( gcc: 6626), nr_events: 5
task 28 ( collect2: 6627), nr_events: 5
task 29 ( sh: 6622), nr_events: 1
task 30 ( make: 6628), nr_events: 7
task 31 ( sh: 6630), nr_events: 4
task 32 ( gcc: 6631), nr_events: 4
task 33 ( sh: 6629), nr_events: 1
task 34 ( gcc: 6632), nr_events: 4
task 35 ( gcc: 6633), nr_events: 4
task 36 ( collect2: 6634), nr_events: 4
task 37 ( make: 6635), nr_events: 8
task 38 ( sh: 6637), nr_events: 4
task 39 ( sh: 6636), nr_events: 1
task 40 ( gcc: 6638), nr_events: 4
task 41 ( gcc: 6639), nr_events: 4
task 42 ( gcc: 6640), nr_events: 4
task 43 ( collect2: 6641), nr_events: 4
task 44 ( make: 6642), nr_events: 6
task 45 ( sh: 6643), nr_events: 5
task 46 ( sh: 6644), nr_events: 3
task 47 ( sh: 6645), nr_events: 4
task 48 ( make: 6646), nr_events: 6
task 49 ( sh: 6647), nr_events: 3
task 50 ( make: 6648), nr_events: 5
task 51 ( sh: 6649), nr_events: 5
task 52 ( sh: 6650), nr_events: 6
task 53 ( make: 6651), nr_events: 4
task 54 ( make: 6652), nr_events: 5
task 55 ( make: 6653), nr_events: 4
task 56 ( make: 6654), nr_events: 4
task 57 ( make: 6655), nr_events: 5
task 58 ( sh: 6656), nr_events: 4
task 59 ( gcc: 6657), nr_events: 9
task 60 ( ksoftirqd/3: 10), nr_events: 1
task 61 ( gcc: 6658), nr_events: 4
task 62 ( make: 6659), nr_events: 5
task 63 ( sh: 6660), nr_events: 3
task 64 ( gcc: 6661), nr_events: 5
task 65 ( collect2: 6662), nr_events: 4
------------------------------------------------------------
#1 : 256.745, ravg: 256.74, cpu: 0.00 / 0.00
#2 : 439.372, ravg: 275.01, cpu: 0.00 / 0.00
#3 : 411.971, ravg: 288.70, cpu: 0.00 / 0.00
#4 : 385.500, ravg: 298.38, cpu: 0.00 / 0.00
#5 : 366.526, ravg: 305.20, cpu: 0.00 / 0.00
#6 : 381.281, ravg: 312.81, cpu: 0.00 / 0.00
#7 : 410.756, ravg: 322.60, cpu: 0.00 / 0.00
#8 : 368.009, ravg: 327.14, cpu: 0.00 / 0.00
#9 : 408.098, ravg: 335.24, cpu: 0.00 / 0.00
#10 : 368.582, ravg: 338.57, cpu: 0.00 / 0.00
I.e. we successfully analyzed the trace, replayed it
via real threads and measured the replayed workload's
scheduling properties.
This is how it looked in 'top' output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7164 mingo 20 0 1434m 8080 888 R 57.0 0.1 0:02.04 :perf
7165 mingo 20 0 1434m 8080 888 R 41.8 0.1 0:01.52 :perf
7228 mingo 20 0 1434m 8080 888 R 39.8 0.1 0:01.44 :gcc
7225 mingo 20 0 1434m 8080 888 R 33.8 0.1 0:01.26 :gcc
7202 mingo 20 0 1434m 8080 888 R 31.2 0.1 0:01.16 :sh
7222 mingo 20 0 1434m 8080 888 R 25.2 0.1 0:00.96 :sh
7211 mingo 20 0 1434m 8080 888 R 21.9 0.1 0:00.82 :sh
7213 mingo 20 0 1434m 8080 888 D 19.2 0.1 0:00.74 :sh
7194 mingo 20 0 1434m 8080 888 D 18.6 0.1 0:00.72 :make
There are still various kinks in it - more patches to come.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-11 18:12:54 +08:00
|
|
|
}
|
|
|
|
|
2012-09-12 04:29:27 +08:00
|
|
|
static int process_sched_migrate_task_event(struct perf_tool *tool,
|
perf sched: Use perf_evsel__{int,str}val
This patch also stops reading the common fields, as they were not being used except
for one ->common_pid case that was replaced by sample->tid, i.e. the info is already
in the perf_sample struct.
Also it only fills the _event structures when there is a handler.
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
129.117838 task-clock # 0.994 CPUs utilized ( +- 0.28% )
14 context-switches # 0.111 K/sec ( +- 2.10% )
0 cpu-migrations # 0.002 K/sec ( +- 66.67% )
7,654 page-faults # 0.059 M/sec ( +- 0.67% )
438,121,661 cycles # 3.393 GHz ( +- 0.06% ) [83.06%]
150,808,605 stalled-cycles-frontend # 34.42% frontend cycles idle ( +- 0.14% ) [83.10%]
80,748,941 stalled-cycles-backend # 18.43% backend cycles idle ( +- 0.64% ) [66.73%]
758,605,879 instructions # 1.73 insns per cycle
# 0.20 stalled cycles per insn ( +- 0.08% ) [83.54%]
162,164,321 branches # 1255.940 M/sec ( +- 0.10% ) [83.70%]
1,609,903 branch-misses # 0.99% of all branches ( +- 0.08% ) [83.62%]
0.129949153 seconds time elapsed ( +- 0.28% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-weu9t63zkrfrazkn0gxj48xy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel,
|
2012-09-11 06:15:03 +08:00
|
|
|
struct perf_sample *sample,
|
2012-09-12 00:18:47 +08:00
|
|
|
struct machine *machine)
|
2009-10-10 20:46:04 +08:00
|
|
|
{
|
2012-09-12 04:29:27 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2009-10-10 20:46:04 +08:00
|
|
|
|
perf sched: Don't read all tracepoint variables in advance
Do it just at the actual consumer of these fields; that way we avoid
needless lookups:
[root@sandy ~]# perf sched record sleep 30s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 8.585 MB perf.data (~375063 samples) ]
Before:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
103.592215 task-clock # 0.993 CPUs utilized ( +- 0.33% )
12 context-switches # 0.114 K/sec ( +- 3.29% )
0 cpu-migrations # 0.000 K/sec
7,605 page-faults # 0.073 M/sec ( +- 0.00% )
345,796,112 cycles # 3.338 GHz ( +- 0.07% ) [82.90%]
106,876,796 stalled-cycles-frontend # 30.91% frontend cycles idle ( +- 0.38% ) [83.23%]
62,060,877 stalled-cycles-backend # 17.95% backend cycles idle ( +- 0.80% ) [67.14%]
628,246,586 instructions # 1.82 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.64%]
134,962,057 branches # 1302.820 M/sec ( +- 0.10% ) [83.64%]
1,233,037 branch-misses # 0.91% of all branches ( +- 0.29% ) [83.41%]
0.104333272 seconds time elapsed ( +- 0.33% )
After:
[root@sandy ~]# perf stat -r 10 perf sched lat > /dev/null
Performance counter stats for 'perf sched lat' (10 runs):
98.848272 task-clock # 0.993 CPUs utilized ( +- 0.48% )
11 context-switches # 0.112 K/sec ( +- 2.83% )
0 cpu-migrations # 0.003 K/sec ( +- 50.92% )
7,604 page-faults # 0.077 M/sec ( +- 0.00% )
332,216,085 cycles # 3.361 GHz ( +- 0.14% ) [82.87%]
100,623,710 stalled-cycles-frontend # 30.29% frontend cycles idle ( +- 0.53% ) [82.95%]
58,788,692 stalled-cycles-backend # 17.70% backend cycles idle ( +- 0.59% ) [67.15%]
609,402,433 instructions # 1.83 insns per cycle
# 0.17 stalled cycles per insn ( +- 0.04% ) [83.76%]
131,277,138 branches # 1328.067 M/sec ( +- 0.06% ) [83.77%]
1,117,871 branch-misses # 0.85% of all branches ( +- 0.32% ) [83.51%]
0.099580430 seconds time elapsed ( +- 0.48% )
[root@sandy ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kracdpw8wqlr0xjh75uk8g11@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-09-12 06:29:17 +08:00
|
|
|
if (sched->tp_handler->migrate_task_event)
|
|
|
|
return sched->tp_handler->migrate_task_event(sched, evsel, sample, machine);
|
2012-09-09 09:53:06 +08:00
|
|
|
|
2012-09-12 06:29:17 +08:00
|
|
|
return 0;
|
2009-10-10 20:46:04 +08:00
|
|
|
}
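A minimal sketch of the field-access pattern the "perf sched: Use perf_evsel__{int,str}val" commit message above describes, assuming the perf_evsel__intval()/perf_evsel__strval() helpers of that era; the function name and the printed fields are hypothetical, not the upstream code. The point is that a handler pulls only the tracepoint fields it actually consumes straight from the sample, instead of copying every common_* field into a local _event structure up front.
/* Hedged sketch: read tracepoint fields lazily, only at the consumer. */
static int sketch_migrate_task_event(struct perf_sched *sched __maybe_unused,
				     struct perf_evsel *evsel,
				     struct perf_sample *sample,
				     struct machine *machine __maybe_unused)
{
	/* field lookups happen here, for just the fields this handler needs */
	const u32 pid	 = perf_evsel__intval(evsel, sample, "pid");
	const u32 dcpu	 = perf_evsel__intval(evsel, sample, "dest_cpu");
	const char *comm = perf_evsel__strval(evsel, sample, "comm");

	pr_debug("migrate: %s[%u] -> cpu %u at %" PRIu64 "\n",
		 comm, pid, dcpu, sample->time);
	return 0;
}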
|
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
typedef int (*tracepoint_handler)(struct perf_tool *tool,
|
2012-09-12 06:29:17 +08:00
|
|
|
struct perf_evsel *evsel,
|
2012-09-09 09:53:06 +08:00
|
|
|
struct perf_sample *sample,
|
2012-09-12 00:18:47 +08:00
|
|
|
struct machine *machine);
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-11 06:15:03 +08:00
|
|
|
static int perf_sched__process_tracepoint_sample(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event __maybe_unused,
|
perf tools: Save some loops using perf_evlist__id2evsel
Since we already ask for PERF_SAMPLE_ID and use it to quickly find the
associated evsel, add handler func + data to struct perf_evsel to avoid
using chains of if(strcmp(event_name)) and also to avoid all the linear
list searches via trace_event_find.
To demonstrate the technique, convert 'perf sched' to it:
# perf sched record sleep 5m
And then:
Performance counter stats for '/tmp/oldperf sched lat':
646.929438 task-clock # 0.999 CPUs utilized
9 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
20,901 page-faults # 0.032 M/sec
1,290,144,450 cycles # 1.994 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,606,158,439 instructions # 1.24 insns per cycle
339,088,395 branches # 524.151 M/sec
4,550,735 branch-misses # 1.34% of all branches
0.647524759 seconds time elapsed
Versus:
Performance counter stats for 'perf sched lat':
473.564691 task-clock # 0.999 CPUs utilized
9 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
20,903 page-faults # 0.044 M/sec
944,367,984 cycles # 1.994 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,442,385,571 instructions # 1.53 insns per cycle
308,383,106 branches # 651.195 M/sec
4,481,784 branch-misses # 1.45% of all branches
0.474215751 seconds time elapsed
[root@emilia ~]#
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-1kbzpl74lwi6lavpqke2u2p3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-11-29 03:57:40 +08:00
|
|
|
struct perf_sample *sample,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct machine *machine)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2012-09-09 09:53:06 +08:00
|
|
|
int err = 0;
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2013-11-06 21:17:38 +08:00
|
|
|
if (evsel->handler != NULL) {
|
|
|
|
tracepoint_handler f = evsel->handler;
|
2012-09-12 06:29:17 +08:00
|
|
|
err = f(tool, evsel, sample, machine);
|
2011-11-29 03:57:40 +08:00
|
|
|
}
|
2009-09-11 18:12:54 +08:00
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
return err;
|
2009-09-11 18:12:54 +08:00
|
|
|
}
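A condensed, hypothetical sketch gathering the pieces of the technique the "perf tools: Save some loops using perf_evlist__id2evsel" commit describes (the real table lives in perf_sched__read_events() further down, and the real dispatch is the function just above): handlers are bound to each tracepoint evsel once at setup, so per-sample processing is a single indirect call through evsel->handler rather than a chain of strcmp()s on the event name.
/* Sketch only: bind handlers once, then dispatch per sample via evsel->handler. */
static const struct perf_evsel_str_handler sketch_handlers[] = {
	{ "sched:sched_switch", process_sched_switch_event, },
	{ "sched:sched_wakeup", process_sched_wakeup_event, },
};

static int sketch_setup(struct perf_session *session)
{
	/* attaches each handler to the matching evsel's ->handler pointer */
	return perf_session__set_tracepoints_handlers(session, sketch_handlers);
}

static int sketch_dispatch(struct perf_tool *tool, struct perf_evsel *evsel,
			   struct perf_sample *sample, struct machine *machine)
{
	tracepoint_handler f = evsel->handler;

	return f ? f(tool, evsel, sample, machine) : 0;
}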
|
|
|
|
|
2018-03-06 11:37:37 +08:00
|
|
|
static int perf_sched__process_comm(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
|
|
|
{
|
|
|
|
struct thread *thread;
|
|
|
|
struct thread_runtime *tr;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = perf_event__process_comm(tool, event, sample, machine);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
thread = machine__find_thread(machine, sample->pid, sample->tid);
|
|
|
|
if (!thread) {
|
|
|
|
pr_err("Internal error: can't find thread\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
tr = thread__get_runtime(thread);
|
|
|
|
if (tr == NULL) {
|
|
|
|
thread__put(thread);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
tr->comm_changed = true;
|
|
|
|
thread__put(thread);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-03-03 09:28:41 +08:00
|
|
|
static int perf_sched__read_events(struct perf_sched *sched)
|
2009-09-11 18:12:54 +08:00
|
|
|
{
|
2011-11-29 03:57:40 +08:00
|
|
|
const struct perf_evsel_str_handler handlers[] = {
|
|
|
|
{ "sched:sched_switch", process_sched_switch_event, },
|
|
|
|
{ "sched:sched_stat_runtime", process_sched_runtime_event, },
|
|
|
|
{ "sched:sched_wakeup", process_sched_wakeup_event, },
|
|
|
|
{ "sched:sched_wakeup_new", process_sched_wakeup_event, },
|
|
|
|
{ "sched:sched_migrate_task", process_sched_migrate_task_event, },
|
|
|
|
};
|
2012-06-28 00:08:42 +08:00
|
|
|
struct perf_session *session;
|
2017-01-24 05:07:59 +08:00
|
|
|
struct perf_data data = {
|
2019-02-21 17:41:30 +08:00
|
|
|
.path = input_name,
|
|
|
|
.mode = PERF_DATA_MODE_READ,
|
|
|
|
.force = sched->force,
|
2013-10-15 22:27:32 +08:00
|
|
|
};
|
2015-03-03 09:28:41 +08:00
|
|
|
int rc = -1;
|
2012-06-28 00:08:42 +08:00
|
|
|
|
2017-01-24 05:07:59 +08:00
|
|
|
session = perf_session__new(&data, false, &sched->tool);
|
2012-09-09 09:53:06 +08:00
|
|
|
if (session == NULL) {
|
|
|
|
pr_debug("No Memory for session\n");
|
|
|
|
return -1;
|
|
|
|
}
|
2009-12-12 07:24:02 +08:00
|
|
|
|
2014-08-12 14:40:45 +08:00
|
|
|
symbol__init(&session->header.env);
|
2014-08-12 14:40:41 +08:00
|
|
|
|
2012-09-09 09:53:06 +08:00
|
|
|
if (perf_session__set_tracepoints_handlers(session, handlers))
|
|
|
|
goto out_delete;
|
2011-11-29 03:57:40 +08:00
|
|
|
|
2010-05-15 00:16:55 +08:00
|
|
|
if (perf_session__has_traces(session, "record -R")) {
|
2015-03-03 22:58:45 +08:00
|
|
|
int err = perf_session__process_events(session);
|
2012-09-09 09:53:06 +08:00
|
|
|
if (err) {
|
|
|
|
pr_err("Failed to process events, error %d", err);
|
|
|
|
goto out_delete;
|
|
|
|
}
|
2011-08-09 05:03:34 +08:00
|
|
|
|
2015-02-15 01:50:11 +08:00
|
|
|
sched->nr_events = session->evlist->stats.nr_events[0];
|
|
|
|
sched->nr_lost_events = session->evlist->stats.total_lost;
|
|
|
|
sched->nr_lost_chunks = session->evlist->stats.nr_events[PERF_RECORD_LOST];
|
2010-05-15 00:16:55 +08:00
|
|
|
}
|
2009-12-28 07:37:02 +08:00
|
|
|
|
2015-03-03 09:28:41 +08:00
|
|
|
rc = 0;
|
2012-09-09 09:53:06 +08:00
|
|
|
out_delete:
|
|
|
|
perf_session__delete(session);
|
2015-03-03 09:28:41 +08:00
|
|
|
return rc;
|
2009-09-11 18:12:54 +08:00
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
/*
|
|
|
|
* scheduling times are printed as msec.usec
|
|
|
|
*/
|
|
|
|
static inline void print_sched_time(unsigned long long nsecs, int width)
|
|
|
|
{
|
|
|
|
unsigned long msecs;
|
|
|
|
unsigned long usecs;
|
|
|
|
|
|
|
|
msecs = nsecs / NSEC_PER_MSEC;
|
|
|
|
nsecs -= msecs * NSEC_PER_MSEC;
|
|
|
|
usecs = nsecs / NSEC_PER_USEC;
|
|
|
|
printf("%*lu.%03lu ", width, msecs, usecs);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* returns runtime data for event, allocating memory for it the
|
|
|
|
* first time it is used.
|
|
|
|
*/
|
|
|
|
static struct evsel_runtime *perf_evsel__get_runtime(struct perf_evsel *evsel)
|
|
|
|
{
|
|
|
|
struct evsel_runtime *r = evsel->priv;
|
|
|
|
|
|
|
|
if (r == NULL) {
|
|
|
|
r = zalloc(sizeof(struct evsel_runtime));
|
|
|
|
evsel->priv = r;
|
|
|
|
}
|
|
|
|
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* save last time event was seen per cpu
|
|
|
|
*/
|
|
|
|
static void perf_evsel__save_time(struct perf_evsel *evsel,
|
|
|
|
u64 timestamp, u32 cpu)
|
|
|
|
{
|
|
|
|
struct evsel_runtime *r = perf_evsel__get_runtime(evsel);
|
|
|
|
|
|
|
|
if (r == NULL)
|
|
|
|
return;
|
|
|
|
|
|
|
|
if ((cpu >= r->ncpu) || (r->last_time == NULL)) {
|
|
|
|
int i, n = __roundup_pow_of_two(cpu+1);
|
|
|
|
void *p = r->last_time;
|
|
|
|
|
|
|
|
p = realloc(r->last_time, n * sizeof(u64));
|
|
|
|
if (!p)
|
|
|
|
return;
|
|
|
|
|
|
|
|
r->last_time = p;
|
|
|
|
for (i = r->ncpu; i < n; ++i)
|
|
|
|
r->last_time[i] = (u64) 0;
|
|
|
|
|
|
|
|
r->ncpu = n;
|
|
|
|
}
|
|
|
|
|
|
|
|
r->last_time[cpu] = timestamp;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* returns last time this event was seen on the given cpu */
|
|
|
|
static u64 perf_evsel__get_time(struct perf_evsel *evsel, u32 cpu)
|
|
|
|
{
|
|
|
|
struct evsel_runtime *r = perf_evsel__get_runtime(evsel);
|
|
|
|
|
|
|
|
if ((r == NULL) || (r->last_time == NULL) || (cpu >= r->ncpu))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return r->last_time[cpu];
|
|
|
|
}
|
|
|
|
|
2016-12-22 14:03:48 +08:00
|
|
|
static int comm_width = 30;
|
2016-11-16 14:06:29 +08:00
|
|
|
|
|
|
|
static char *timehist_get_commstr(struct thread *thread)
|
|
|
|
{
|
|
|
|
static char str[32];
|
|
|
|
const char *comm = thread__comm_str(thread);
|
|
|
|
pid_t tid = thread->tid;
|
|
|
|
pid_t pid = thread->pid_;
|
|
|
|
int n;
|
|
|
|
|
|
|
|
if (pid == 0)
|
|
|
|
n = scnprintf(str, sizeof(str), "%s", comm);
|
|
|
|
|
|
|
|
else if (tid != pid)
|
|
|
|
n = scnprintf(str, sizeof(str), "%s[%d/%d]", comm, tid, pid);
|
|
|
|
|
|
|
|
else
|
|
|
|
n = scnprintf(str, sizeof(str), "%s[%d]", comm, tid);
|
|
|
|
|
|
|
|
if (n > comm_width)
|
|
|
|
comm_width = n;
|
|
|
|
|
|
|
|
return str;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:33 +08:00
|
|
|
static void timehist_header(struct perf_sched *sched)
|
2016-11-16 14:06:29 +08:00
|
|
|
{
|
2016-11-16 14:06:33 +08:00
|
|
|
u32 ncpus = sched->max_cpu + 1;
|
|
|
|
u32 i, j;
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
printf("%15s %6s ", "time", "cpu");
|
|
|
|
|
2016-11-16 14:06:33 +08:00
|
|
|
if (sched->show_cpu_visual) {
|
|
|
|
printf(" ");
|
|
|
|
for (i = 0, j = 0; i < ncpus; ++i) {
|
|
|
|
printf("%x", j++);
|
|
|
|
if (j > 15)
|
|
|
|
j = 0;
|
|
|
|
}
|
|
|
|
printf(" ");
|
|
|
|
}
|
|
|
|
|
2016-12-22 14:03:48 +08:00
|
|
|
printf(" %-*s %9s %9s %9s", comm_width,
|
2016-11-16 14:06:29 +08:00
|
|
|
"task name", "wait time", "sch delay", "run time");
|
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
if (sched->show_state)
|
|
|
|
printf(" %s", "state");
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
printf("\n");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* units row
|
|
|
|
*/
|
|
|
|
printf("%15s %-6s ", "", "");
|
|
|
|
|
2016-11-16 14:06:33 +08:00
|
|
|
if (sched->show_cpu_visual)
|
|
|
|
printf(" %*s ", ncpus, "");
|
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
printf(" %-*s %9s %9s %9s", comm_width,
|
2016-12-22 14:03:48 +08:00
|
|
|
"[tid/pid]", "(msec)", "(msec)", "(msec)");
|
2016-11-16 14:06:29 +08:00
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
if (sched->show_state)
|
|
|
|
printf(" %5s", "");
|
|
|
|
|
|
|
|
printf("\n");
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
/*
|
|
|
|
* separator
|
|
|
|
*/
|
|
|
|
printf("%.15s %.6s ", graph_dotted_line, graph_dotted_line);
|
|
|
|
|
2016-11-16 14:06:33 +08:00
|
|
|
if (sched->show_cpu_visual)
|
|
|
|
printf(" %.*s ", ncpus, graph_dotted_line);
|
|
|
|
|
2016-12-22 14:03:48 +08:00
|
|
|
printf(" %.*s %.9s %.9s %.9s", comm_width,
|
2016-11-16 14:06:29 +08:00
|
|
|
graph_dotted_line, graph_dotted_line, graph_dotted_line,
|
|
|
|
graph_dotted_line);
|
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
if (sched->show_state)
|
|
|
|
printf(" %.5s", graph_dotted_line);
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
static char task_state_char(struct thread *thread, int state)
|
|
|
|
{
|
|
|
|
static const char state_to_char[] = TASK_STATE_TO_CHAR_STR;
|
|
|
|
unsigned bit = state ? ffs(state) : 0;
|
|
|
|
|
|
|
|
/* 'I' for idle */
|
|
|
|
if (thread->tid == 0)
|
|
|
|
return 'I';
|
|
|
|
|
|
|
|
return bit < sizeof(state_to_char) - 1 ? state_to_char[bit] : '?';
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:31 +08:00
|
|
|
static void timehist_print_sample(struct perf_sched *sched,
|
2017-03-14 09:56:29 +08:00
|
|
|
struct perf_evsel *evsel,
|
2016-11-16 14:06:31 +08:00
|
|
|
struct perf_sample *sample,
|
2016-11-16 14:06:32 +08:00
|
|
|
struct addr_location *al,
|
2016-11-30 01:15:44 +08:00
|
|
|
struct thread *thread,
|
2017-01-13 18:45:22 +08:00
|
|
|
u64 t, int state)
|
2016-11-16 14:06:29 +08:00
|
|
|
{
|
|
|
|
struct thread_runtime *tr = thread__priv(thread);
|
2017-03-14 09:56:29 +08:00
|
|
|
const char *next_comm = perf_evsel__strval(evsel, sample, "next_comm");
|
|
|
|
const u32 next_pid = perf_evsel__intval(evsel, sample, "next_pid");
|
2016-11-16 14:06:33 +08:00
|
|
|
u32 max_cpus = sched->max_cpu + 1;
|
2016-11-16 14:06:29 +08:00
|
|
|
char tstr[64];
|
2017-03-14 09:56:29 +08:00
|
|
|
char nstr[30];
|
2017-01-13 18:45:21 +08:00
|
|
|
u64 wait_time;
|
2016-11-16 14:06:29 +08:00
|
|
|
|
2016-11-30 01:15:44 +08:00
|
|
|
timestamp__scnprintf_usec(t, tstr, sizeof(tstr));
|
2016-11-16 14:06:29 +08:00
|
|
|
printf("%15s [%04d] ", tstr, sample->cpu);
|
|
|
|
|
2016-11-16 14:06:33 +08:00
|
|
|
if (sched->show_cpu_visual) {
|
|
|
|
u32 i;
|
|
|
|
char c;
|
|
|
|
|
|
|
|
printf(" ");
|
|
|
|
for (i = 0; i < max_cpus; ++i) {
|
|
|
|
/* flag idle times with 'i'; others are sched events */
|
|
|
|
if (i == sample->cpu)
|
|
|
|
c = (thread->tid == 0) ? 'i' : 's';
|
|
|
|
else
|
|
|
|
c = ' ';
|
|
|
|
printf("%c", c);
|
|
|
|
}
|
|
|
|
printf(" ");
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
printf(" %-*s ", comm_width, timehist_get_commstr(thread));
|
|
|
|
|
2017-01-13 18:45:21 +08:00
|
|
|
wait_time = tr->dt_sleep + tr->dt_iowait + tr->dt_preempt;
|
|
|
|
print_sched_time(wait_time, 6);
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
print_sched_time(tr->dt_delay, 6);
|
|
|
|
print_sched_time(tr->dt_run, 6);
|
2016-11-16 14:06:31 +08:00
|
|
|
|
2017-01-13 18:45:22 +08:00
|
|
|
if (sched->show_state)
|
|
|
|
printf(" %5c ", task_state_char(thread, state));
|
|
|
|
|
2017-03-14 09:56:29 +08:00
|
|
|
if (sched->show_next) {
|
|
|
|
snprintf(nstr, sizeof(nstr), "next: %s[%d]", next_comm, next_pid);
|
|
|
|
printf(" %-*s", comm_width, nstr);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sched->show_wakeups && !sched->show_next)
|
2016-11-16 14:06:31 +08:00
|
|
|
printf(" %-*s", comm_width, "");
|
|
|
|
|
2016-11-16 14:06:32 +08:00
|
|
|
if (thread->tid == 0)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (sched->show_callchain)
|
|
|
|
printf(" ");
|
|
|
|
|
|
|
|
sample__fprintf_sym(sample, al, 0,
|
|
|
|
EVSEL__PRINT_SYM | EVSEL__PRINT_ONELINE |
|
2016-11-24 09:11:13 +08:00
|
|
|
EVSEL__PRINT_CALLCHAIN_ARROW |
|
|
|
|
EVSEL__PRINT_SKIP_IGNORED,
|
2016-11-16 14:06:32 +08:00
|
|
|
&callchain_cursor, stdout);
|
|
|
|
|
|
|
|
out:
|
2016-11-16 14:06:29 +08:00
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Explanation of delta-time stats:
|
|
|
|
*
|
|
|
|
* t = time of current schedule out event
|
|
|
|
* tprev = time of previous sched out event
|
|
|
|
* also time of schedule-in event for current task
|
|
|
|
* last_time = time of last sched change event for current task
|
|
|
|
* (i.e., time process was last scheduled out)
|
|
|
|
* ready_to_run = time of wakeup for current task
|
|
|
|
*
|
|
|
|
* -----|------------|------------|------------|------
|
|
|
|
* last ready tprev t
|
|
|
|
* time to run
|
|
|
|
*
|
|
|
|
* |-------- dt_wait --------|
|
|
|
|
* |- dt_delay -|-- dt_run --|
|
|
|
|
*
|
|
|
|
* dt_run = run time of current task
|
|
|
|
* dt_wait = time between last schedule out event for task and tprev
|
|
|
|
* represents time spent off the cpu
|
|
|
|
* dt_delay = time between wakeup and schedule-in of task
|
|
|
|
*/
|
|
|
|
|
|
|
|
static void timehist_update_runtime_stats(struct thread_runtime *r,
|
|
|
|
u64 t, u64 tprev)
|
|
|
|
{
|
|
|
|
r->dt_delay = 0;
|
2017-01-13 18:45:21 +08:00
|
|
|
r->dt_sleep = 0;
|
|
|
|
r->dt_iowait = 0;
|
|
|
|
r->dt_preempt = 0;
|
2016-11-16 14:06:29 +08:00
|
|
|
r->dt_run = 0;
|
2017-01-13 18:45:21 +08:00
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
if (tprev) {
|
|
|
|
r->dt_run = t - tprev;
|
|
|
|
if (r->ready_to_run) {
|
|
|
|
if (r->ready_to_run > tprev)
|
|
|
|
pr_debug("time travel: wakeup time for task > previous sched_switch event\n");
|
|
|
|
else
|
|
|
|
r->dt_delay = tprev - r->ready_to_run;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (r->last_time > tprev)
|
|
|
|
pr_debug("time travel: last sched out time for task > previous sched_switch event\n");
|
2017-01-13 18:45:21 +08:00
|
|
|
else if (r->last_time) {
|
|
|
|
u64 dt_wait = tprev - r->last_time;
|
|
|
|
|
|
|
|
if (r->last_state == TASK_RUNNING)
|
|
|
|
r->dt_preempt = dt_wait;
|
|
|
|
else if (r->last_state == TASK_UNINTERRUPTIBLE)
|
|
|
|
r->dt_iowait = dt_wait;
|
|
|
|
else
|
|
|
|
r->dt_sleep = dt_wait;
|
|
|
|
}
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
update_stats(&r->run_stats, r->dt_run);
|
2017-01-13 18:45:23 +08:00
|
|
|
|
|
|
|
r->total_run_time += r->dt_run;
|
|
|
|
r->total_delay_time += r->dt_delay;
|
|
|
|
r->total_sleep_time += r->dt_sleep;
|
|
|
|
r->total_iowait_time += r->dt_iowait;
|
|
|
|
r->total_preempt_time += r->dt_preempt;
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
|
2016-12-08 22:47:50 +08:00
|
|
|
static bool is_idle_sample(struct perf_sample *sample,
|
|
|
|
struct perf_evsel *evsel)
|
2016-11-16 14:06:29 +08:00
|
|
|
{
|
|
|
|
/* pid 0 == swapper == idle task */
|
2016-12-08 22:47:50 +08:00
|
|
|
if (strcmp(perf_evsel__name(evsel), "sched:sched_switch") == 0)
|
|
|
|
return perf_evsel__intval(evsel, sample, "prev_pid") == 0;
|
2016-11-16 14:06:29 +08:00
|
|
|
|
2016-12-08 22:47:50 +08:00
|
|
|
return sample->pid == 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void save_task_callchain(struct perf_sched *sched,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct machine *machine)
|
|
|
|
{
|
|
|
|
struct callchain_cursor *cursor = &callchain_cursor;
|
|
|
|
struct thread *thread;
|
2016-11-16 14:06:32 +08:00
|
|
|
|
|
|
|
/* want main thread for process - has maps */
|
|
|
|
thread = machine__findnew_thread(machine, sample->pid, sample->pid);
|
|
|
|
if (thread == NULL) {
|
|
|
|
pr_debug("Failed to get thread for pid %d.\n", sample->pid);
|
2016-12-08 22:47:50 +08:00
|
|
|
return;
|
2016-11-16 14:06:32 +08:00
|
|
|
}
|
|
|
|
|
2018-05-29 03:07:56 +08:00
|
|
|
if (!sched->show_callchain || sample->callchain == NULL)
|
2016-12-08 22:47:50 +08:00
|
|
|
return;
|
2016-11-16 14:06:32 +08:00
|
|
|
|
|
|
|
if (thread__resolve_callchain(thread, cursor, evsel, sample,
|
2016-11-24 09:11:14 +08:00
|
|
|
NULL, NULL, sched->max_stack + 2) != 0) {
|
2017-02-17 16:17:38 +08:00
|
|
|
if (verbose > 0)
|
2017-06-27 22:22:31 +08:00
|
|
|
pr_err("Failed to resolve callchain. Skipping\n");
|
2016-11-16 14:06:32 +08:00
|
|
|
|
2016-12-08 22:47:50 +08:00
|
|
|
return;
|
2016-11-16 14:06:32 +08:00
|
|
|
}
|
2016-11-24 09:11:12 +08:00
|
|
|
|
2016-11-16 14:06:32 +08:00
|
|
|
callchain_cursor_commit(cursor);
|
2016-11-24 09:11:12 +08:00
|
|
|
|
|
|
|
while (true) {
|
|
|
|
struct callchain_cursor_node *node;
|
|
|
|
struct symbol *sym;
|
|
|
|
|
|
|
|
node = callchain_cursor_current(cursor);
|
|
|
|
if (node == NULL)
|
|
|
|
break;
|
|
|
|
|
|
|
|
sym = node->sym;
|
2017-02-14 03:52:15 +08:00
|
|
|
if (sym) {
|
2016-11-24 09:11:12 +08:00
|
|
|
if (!strcmp(sym->name, "schedule") ||
|
|
|
|
!strcmp(sym->name, "__schedule") ||
|
|
|
|
!strcmp(sym->name, "preempt_schedule"))
|
|
|
|
sym->ignore = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
callchain_cursor_advance(cursor);
|
|
|
|
}
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
|
2016-12-08 22:47:51 +08:00
|
|
|
static int init_idle_thread(struct thread *thread)
|
|
|
|
{
|
|
|
|
struct idle_thread_runtime *itr;
|
|
|
|
|
|
|
|
thread__set_comm(thread, idle_comm, 0);
|
|
|
|
|
|
|
|
itr = zalloc(sizeof(*itr));
|
|
|
|
if (itr == NULL)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
init_stats(&itr->tr.run_stats);
|
|
|
|
callchain_init(&itr->callchain);
|
|
|
|
callchain_cursor_reset(&itr->cursor);
|
|
|
|
thread__set_priv(thread, itr);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
/*
|
|
|
|
* Track idle stats per cpu by maintaining a local thread
|
|
|
|
* struct for the idle task on each cpu.
|
|
|
|
*/
|
|
|
|
static int init_idle_threads(int ncpu)
|
|
|
|
{
|
2016-12-08 22:47:51 +08:00
|
|
|
int i, ret;
|
2016-11-16 14:06:29 +08:00
|
|
|
|
|
|
|
idle_threads = zalloc(ncpu * sizeof(struct thread *));
|
|
|
|
if (!idle_threads)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2016-12-06 11:40:05 +08:00
|
|
|
idle_max_cpu = ncpu;
|
2016-11-16 14:06:29 +08:00
|
|
|
|
|
|
|
/* allocate the actual thread struct if needed */
|
|
|
|
for (i = 0; i < ncpu; ++i) {
|
|
|
|
idle_threads[i] = thread__new(0, 0);
|
|
|
|
if (idle_threads[i] == NULL)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2016-12-08 22:47:51 +08:00
|
|
|
ret = init_idle_thread(idle_threads[i]);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void free_idle_threads(void)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (idle_threads == NULL)
|
|
|
|
return;
|
|
|
|
|
2016-12-06 11:40:05 +08:00
|
|
|
for (i = 0; i < idle_max_cpu; ++i) {
|
2016-11-16 14:06:29 +08:00
|
|
|
if ((idle_threads[i]))
|
|
|
|
thread__delete(idle_threads[i]);
|
|
|
|
}
|
|
|
|
|
|
|
|
free(idle_threads);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct thread *get_idle_thread(int cpu)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* expand/allocate array of pointers to local thread
|
|
|
|
* structs if needed
|
|
|
|
*/
|
|
|
|
if ((cpu >= idle_max_cpu) || (idle_threads == NULL)) {
|
|
|
|
int i, j = __roundup_pow_of_two(cpu+1);
|
|
|
|
void *p;
|
|
|
|
|
|
|
|
p = realloc(idle_threads, j * sizeof(struct thread *));
|
|
|
|
if (!p)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
idle_threads = (struct thread **) p;
|
2016-12-06 11:40:05 +08:00
|
|
|
for (i = idle_max_cpu; i < j; ++i)
|
2016-11-16 14:06:29 +08:00
|
|
|
idle_threads[i] = NULL;
|
|
|
|
|
|
|
|
idle_max_cpu = j;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* allocate a new thread struct if needed */
|
|
|
|
if (idle_threads[cpu] == NULL) {
|
|
|
|
idle_threads[cpu] = thread__new(0, 0);
|
|
|
|
|
|
|
|
if (idle_threads[cpu]) {
|
2016-12-08 22:47:51 +08:00
|
|
|
if (init_idle_thread(idle_threads[cpu]) < 0)
|
|
|
|
return NULL;
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return idle_threads[cpu];
|
|
|
|
}
|
|
|
|
|
2018-05-29 03:07:56 +08:00
|
|
|
static void save_idle_callchain(struct perf_sched *sched,
|
|
|
|
struct idle_thread_runtime *itr,
|
2016-12-08 22:47:52 +08:00
|
|
|
struct perf_sample *sample)
|
|
|
|
{
|
2018-05-29 03:07:56 +08:00
|
|
|
if (!sched->show_callchain || sample->callchain == NULL)
|
2016-12-08 22:47:52 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
callchain_cursor__copy(&itr->cursor, &callchain_cursor);
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:32 +08:00
|
|
|
static struct thread *timehist_get_thread(struct perf_sched *sched,
|
|
|
|
struct perf_sample *sample,
|
2016-11-16 14:06:29 +08:00
|
|
|
struct machine *machine,
|
|
|
|
struct perf_evsel *evsel)
|
|
|
|
{
|
|
|
|
struct thread *thread;
|
|
|
|
|
2016-12-08 22:47:50 +08:00
|
|
|
if (is_idle_sample(sample, evsel)) {
|
2016-11-16 14:06:29 +08:00
|
|
|
thread = get_idle_thread(sample->cpu);
|
|
|
|
if (thread == NULL)
|
|
|
|
pr_err("Failed to get idle thread for cpu %d.\n", sample->cpu);
|
|
|
|
|
|
|
|
} else {
|
2016-12-06 11:40:03 +08:00
|
|
|
/* there were samples with tid 0 but non-zero pid */
|
|
|
|
thread = machine__findnew_thread(machine, sample->pid,
|
|
|
|
sample->tid ?: sample->pid);
|
2016-11-16 14:06:29 +08:00
|
|
|
if (thread == NULL) {
|
|
|
|
pr_debug("Failed to get thread for tid %d. skipping sample.\n",
|
|
|
|
sample->tid);
|
|
|
|
}
|
2016-12-08 22:47:50 +08:00
|
|
|
|
|
|
|
save_task_callchain(sched, sample, evsel, machine);
|
2016-12-08 22:47:52 +08:00
|
|
|
if (sched->idle_hist) {
|
|
|
|
struct thread *idle;
|
|
|
|
struct idle_thread_runtime *itr;
|
|
|
|
|
|
|
|
idle = get_idle_thread(sample->cpu);
|
|
|
|
if (idle == NULL) {
|
|
|
|
pr_err("Failed to get idle thread for cpu %d.\n", sample->cpu);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
itr = thread__priv(idle);
|
|
|
|
if (itr == NULL)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
itr->last_thread = thread;
|
|
|
|
|
|
|
|
/* copy task callchain when entering to idle */
|
|
|
|
if (perf_evsel__intval(evsel, sample, "next_pid") == 0)
|
2018-05-29 03:07:56 +08:00
|
|
|
save_idle_callchain(sched, itr, sample);
|
2016-12-08 22:47:52 +08:00
|
|
|
}
|
2016-11-16 14:06:29 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return thread;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
static bool timehist_skip_sample(struct perf_sched *sched,
|
2016-12-08 22:47:53 +08:00
|
|
|
struct thread *thread,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample)
|
2016-11-16 14:06:29 +08:00
|
|
|
{
|
|
|
|
bool rc = false;
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
if (thread__is_filtered(thread)) {
|
2016-11-16 14:06:29 +08:00
|
|
|
rc = true;
|
2016-11-16 14:06:30 +08:00
|
|
|
sched->skipped_samples++;
|
|
|
|
}
|
2016-11-16 14:06:29 +08:00
|
|
|
|
2016-12-08 22:47:53 +08:00
|
|
|
if (sched->idle_hist) {
|
|
|
|
if (strcmp(perf_evsel__name(evsel), "sched:sched_switch"))
|
|
|
|
rc = true;
|
|
|
|
else if (perf_evsel__intval(evsel, sample, "prev_pid") != 0 &&
|
|
|
|
perf_evsel__intval(evsel, sample, "next_pid") != 0)
|
|
|
|
rc = true;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:31 +08:00
|
|
|
static void timehist_print_wakeup_event(struct perf_sched *sched,
|
2016-12-08 22:47:53 +08:00
|
|
|
struct perf_evsel *evsel,
|
2016-11-16 14:06:31 +08:00
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine,
|
|
|
|
struct thread *awakened)
|
|
|
|
{
|
|
|
|
struct thread *thread;
|
|
|
|
char tstr[64];
|
|
|
|
|
|
|
|
thread = machine__findnew_thread(machine, sample->pid, sample->tid);
|
|
|
|
if (thread == NULL)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* show wakeup unless both awakee and awaker are filtered */
|
2016-12-08 22:47:53 +08:00
|
|
|
if (timehist_skip_sample(sched, thread, evsel, sample) &&
|
|
|
|
timehist_skip_sample(sched, awakened, evsel, sample)) {
|
2016-11-16 14:06:31 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
timestamp__scnprintf_usec(sample->time, tstr, sizeof(tstr));
|
|
|
|
printf("%15s [%04d] ", tstr, sample->cpu);
|
2016-11-16 14:06:33 +08:00
|
|
|
if (sched->show_cpu_visual)
|
|
|
|
printf(" %*s ", sched->max_cpu + 1, "");
|
2016-11-16 14:06:31 +08:00
|
|
|
|
|
|
|
printf(" %-*s ", comm_width, timehist_get_commstr(thread));
|
|
|
|
|
|
|
|
/* dt spacer */
|
|
|
|
printf(" %9s %9s %9s ", "", "", "");
|
|
|
|
|
|
|
|
printf("awakened: %s", timehist_get_commstr(awakened));
|
|
|
|
|
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
static int timehist_sched_wakeup_event(struct perf_tool *tool,
|
2016-11-16 14:06:29 +08:00
|
|
|
union perf_event *event __maybe_unused,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
|
|
|
{
|
2016-11-16 14:06:31 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2016-11-16 14:06:29 +08:00
|
|
|
struct thread *thread;
|
|
|
|
struct thread_runtime *tr = NULL;
|
|
|
|
/* want pid of awakened task not pid in sample */
|
|
|
|
const u32 pid = perf_evsel__intval(evsel, sample, "pid");
|
|
|
|
|
|
|
|
thread = machine__findnew_thread(machine, 0, pid);
|
|
|
|
if (thread == NULL)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
tr = thread__get_runtime(thread);
|
|
|
|
if (tr == NULL)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
if (tr->ready_to_run == 0)
|
|
|
|
tr->ready_to_run = sample->time;
|
|
|
|
|
2016-11-16 14:06:31 +08:00
|
|
|
/* show wakeups if requested */
|
2016-11-30 01:15:44 +08:00
|
|
|
if (sched->show_wakeups &&
|
|
|
|
!perf_time__skip_sample(&sched->ptime, sample->time))
|
2016-12-08 22:47:53 +08:00
|
|
|
timehist_print_wakeup_event(sched, evsel, sample, machine, thread);
|
2016-11-16 14:06:31 +08:00
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-11-26 00:28:41 +08:00
|
|
|
static void timehist_print_migration_event(struct perf_sched *sched,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine,
|
|
|
|
struct thread *migrated)
|
|
|
|
{
|
|
|
|
struct thread *thread;
|
|
|
|
char tstr[64];
|
|
|
|
u32 max_cpus = sched->max_cpu + 1;
|
|
|
|
u32 ocpu, dcpu;
|
|
|
|
|
|
|
|
if (sched->summary_only)
|
|
|
|
return;
|
|
|
|
|
|
|
|
max_cpus = sched->max_cpu + 1;
|
|
|
|
ocpu = perf_evsel__intval(evsel, sample, "orig_cpu");
|
|
|
|
dcpu = perf_evsel__intval(evsel, sample, "dest_cpu");
|
|
|
|
|
|
|
|
thread = machine__findnew_thread(machine, sample->pid, sample->tid);
|
|
|
|
if (thread == NULL)
|
|
|
|
return;
|
|
|
|
|
2016-12-08 22:47:53 +08:00
|
|
|
if (timehist_skip_sample(sched, thread, evsel, sample) &&
|
|
|
|
timehist_skip_sample(sched, migrated, evsel, sample)) {
|
2016-11-26 00:28:41 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
timestamp__scnprintf_usec(sample->time, tstr, sizeof(tstr));
|
|
|
|
printf("%15s [%04d] ", tstr, sample->cpu);
|
|
|
|
|
|
|
|
if (sched->show_cpu_visual) {
|
|
|
|
u32 i;
|
|
|
|
char c;
|
|
|
|
|
|
|
|
printf(" ");
|
|
|
|
for (i = 0; i < max_cpus; ++i) {
|
|
|
|
c = (i == sample->cpu) ? 'm' : ' ';
|
|
|
|
printf("%c", c);
|
|
|
|
}
|
|
|
|
printf(" ");
|
|
|
|
}
|
|
|
|
|
|
|
|
printf(" %-*s ", comm_width, timehist_get_commstr(thread));
|
|
|
|
|
|
|
|
/* dt spacer */
|
|
|
|
printf(" %9s %9s %9s ", "", "", "");
|
|
|
|
|
|
|
|
printf("migrated: %s", timehist_get_commstr(migrated));
|
|
|
|
printf(" cpu %d => %d", ocpu, dcpu);
|
|
|
|
|
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
static int timehist_migrate_task_event(struct perf_tool *tool,
|
|
|
|
union perf_event *event __maybe_unused,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
|
|
|
{
|
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
|
|
|
struct thread *thread;
|
|
|
|
struct thread_runtime *tr = NULL;
|
|
|
|
/* want pid of migrated task not pid in sample */
|
|
|
|
const u32 pid = perf_evsel__intval(evsel, sample, "pid");
|
|
|
|
|
|
|
|
thread = machine__findnew_thread(machine, 0, pid);
|
|
|
|
if (thread == NULL)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
tr = thread__get_runtime(thread);
|
|
|
|
if (tr == NULL)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
tr->migrations++;
|
|
|
|
|
|
|
|
/* show migrations if requested */
|
|
|
|
timehist_print_migration_event(sched, evsel, sample, machine, thread);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
static int timehist_sched_change_event(struct perf_tool *tool,
|
2016-11-16 14:06:29 +08:00
|
|
|
union perf_event *event,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine)
|
|
|
|
{
|
2016-11-16 14:06:31 +08:00
|
|
|
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
|
2016-11-30 01:15:44 +08:00
|
|
|
struct perf_time_interval *ptime = &sched->ptime;
|
2016-11-16 14:06:29 +08:00
|
|
|
struct addr_location al;
|
|
|
|
struct thread *thread;
|
|
|
|
struct thread_runtime *tr = NULL;
|
2016-11-30 01:15:44 +08:00
|
|
|
u64 tprev, t = sample->time;
|
2016-11-16 14:06:29 +08:00
|
|
|
int rc = 0;
|
2017-01-13 18:45:22 +08:00
|
|
|
int state = perf_evsel__intval(evsel, sample, "prev_state");
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
|
|
|
|
if (machine__resolve(machine, &al, sample) < 0) {
|
|
|
|
pr_err("problem processing %d event. skipping it\n",
|
|
|
|
event->header.type);
|
|
|
|
rc = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:32 +08:00
|
|
|
thread = timehist_get_thread(sched, sample, machine, evsel);
|
2016-11-16 14:06:29 +08:00
|
|
|
if (thread == NULL) {
|
|
|
|
rc = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-12-08 22:47:53 +08:00
|
|
|
if (timehist_skip_sample(sched, thread, evsel, sample))
|
2016-11-16 14:06:29 +08:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
tr = thread__get_runtime(thread);
|
|
|
|
if (tr == NULL) {
|
|
|
|
rc = -1;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
tprev = perf_evsel__get_time(evsel, sample->cpu);
|
|
|
|
|
2016-11-30 01:15:44 +08:00
|
|
|
/*
|
|
|
|
* If start time given:
|
|
|
|
* - sample time is under window user cares about - skip sample
|
|
|
|
* - tprev is under window user cares about - reset to start of window
|
|
|
|
*/
|
|
|
|
if (ptime->start && ptime->start > t)
|
|
|
|
goto out;
|
|
|
|
|
2016-12-22 14:03:49 +08:00
|
|
|
if (tprev && ptime->start > tprev)
|
2016-11-30 01:15:44 +08:00
|
|
|
tprev = ptime->start;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If end time given:
|
|
|
|
* - previous sched event is out of window - we are done
|
|
|
|
* - sample time is beyond window user cares about - reset it
|
|
|
|
* to close out stats for time window interest
|
|
|
|
*/
|
|
|
|
if (ptime->end) {
|
|
|
|
if (tprev > ptime->end)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (t > ptime->end)
|
|
|
|
t = ptime->end;
|
|
|
|
}
|
|
|
|
|
2016-12-08 22:47:54 +08:00
|
|
|
if (!sched->idle_hist || thread->tid == 0) {
|
|
|
|
timehist_update_runtime_stats(tr, t, tprev);
|
|
|
|
|
|
|
|
if (sched->idle_hist) {
|
|
|
|
struct idle_thread_runtime *itr = (void *)tr;
|
|
|
|
struct thread_runtime *last_tr;
|
|
|
|
|
|
|
|
BUG_ON(thread->tid != 0);
|
|
|
|
|
|
|
|
if (itr->last_thread == NULL)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
/* add current idle time as last thread's runtime */
|
|
|
|
last_tr = thread__get_runtime(itr->last_thread);
|
|
|
|
if (last_tr == NULL)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
timehist_update_runtime_stats(last_tr, t, tprev);
|
|
|
|
/*
|
|
|
|
* remove delta time of last thread as it's not updated
|
|
|
|
* and otherwise it will show an invalid value next
|
|
|
|
* time. we only care about total run time and run stats.
|
|
|
|
*/
|
|
|
|
last_tr->dt_run = 0;
|
|
|
|
last_tr->dt_delay = 0;
|
2017-01-13 18:45:21 +08:00
|
|
|
last_tr->dt_sleep = 0;
|
|
|
|
last_tr->dt_iowait = 0;
|
|
|
|
last_tr->dt_preempt = 0;
|
2016-12-08 22:47:54 +08:00
|
|
|
|
2016-12-08 22:47:55 +08:00
|
|
|
if (itr->cursor.nr)
|
|
|
|
callchain_append(&itr->callchain, &itr->cursor, t - tprev);
|
|
|
|
|
2016-12-08 22:47:54 +08:00
|
|
|
itr->last_thread = NULL;
|
|
|
|
}
|
|
|
|
}
|
2016-11-30 01:15:44 +08:00
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
if (!sched->summary_only)
|
2017-03-14 09:56:29 +08:00
|
|
|
timehist_print_sample(sched, evsel, sample, &al, thread, t, state);
|
2016-11-16 14:06:29 +08:00
|
|
|
|
|
|
|
out:
|
2016-12-22 14:03:50 +08:00
|
|
|
if (sched->hist_time.start == 0 && t >= ptime->start)
|
|
|
|
sched->hist_time.start = t;
|
|
|
|
if (ptime->end == 0 || t <= ptime->end)
|
|
|
|
sched->hist_time.end = t;
|
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
if (tr) {
|
|
|
|
/* time of this sched_switch event becomes last time task seen */
|
|
|
|
tr->last_time = sample->time;
|
|
|
|
|
2017-01-13 18:45:21 +08:00
|
|
|
/* last state is used to determine where to account wait time */
|
2017-01-13 18:45:22 +08:00
|
|
|
tr->last_state = state;
|
2017-01-13 18:45:21 +08:00
|
|
|
|
2016-11-16 14:06:29 +08:00
|
|
|
/* sched out event for task so reset ready to run time */
|
|
|
|
tr->ready_to_run = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
perf_evsel__save_time(evsel, sample->time, sample->cpu);
|
|
|
|
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int timehist_sched_switch_event(struct perf_tool *tool,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_evsel *evsel,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine __maybe_unused)
|
|
|
|
{
|
|
|
|
return timehist_sched_change_event(tool, event, evsel, sample, machine);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int process_lost(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine __maybe_unused)
|
|
|
|
{
|
|
|
|
char tstr[64];
|
|
|
|
|
|
|
|
timestamp__scnprintf_usec(sample->time, tstr, sizeof(tstr));
|
|
|
|
printf("%15s ", tstr);
|
|
|
|
printf("lost %" PRIu64 " events on cpu %d\n", event->lost.lost, sample->cpu);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
static void print_thread_runtime(struct thread *t,
|
|
|
|
struct thread_runtime *r)
|
|
|
|
{
|
|
|
|
double mean = avg_stats(&r->run_stats);
|
|
|
|
float stddev;
|
|
|
|
|
|
|
|
printf("%*s %5d %9" PRIu64 " ",
|
|
|
|
comm_width, timehist_get_commstr(t), t->ppid,
|
|
|
|
(u64) r->run_stats.n);
|
|
|
|
|
|
|
|
print_sched_time(r->total_run_time, 8);
|
|
|
|
stddev = rel_stddev_stats(stddev_stats(&r->run_stats), mean);
|
|
|
|
print_sched_time(r->run_stats.min, 6);
|
|
|
|
printf(" ");
|
|
|
|
print_sched_time((u64) mean, 6);
|
|
|
|
printf(" ");
|
|
|
|
print_sched_time(r->run_stats.max, 6);
|
|
|
|
printf(" ");
|
|
|
|
printf("%5.2f", stddev);
|
2016-11-26 00:28:41 +08:00
|
|
|
printf(" %5" PRIu64, r->migrations);
|
2016-11-16 14:06:30 +08:00
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
2017-01-13 18:45:23 +08:00
|
|
|
static void print_thread_waittime(struct thread *t,
|
|
|
|
struct thread_runtime *r)
|
|
|
|
{
|
|
|
|
printf("%*s %5d %9" PRIu64 " ",
|
|
|
|
comm_width, timehist_get_commstr(t), t->ppid,
|
|
|
|
(u64) r->run_stats.n);
|
|
|
|
|
|
|
|
print_sched_time(r->total_run_time, 8);
|
|
|
|
print_sched_time(r->total_sleep_time, 6);
|
|
|
|
printf(" ");
|
|
|
|
print_sched_time(r->total_iowait_time, 6);
|
|
|
|
printf(" ");
|
|
|
|
print_sched_time(r->total_preempt_time, 6);
|
|
|
|
printf(" ");
|
|
|
|
print_sched_time(r->total_delay_time, 6);
|
|
|
|
printf("\n");
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
struct total_run_stats {
|
2017-01-13 18:45:23 +08:00
|
|
|
struct perf_sched *sched;
|
2016-11-16 14:06:30 +08:00
|
|
|
u64 sched_count;
|
|
|
|
u64 task_count;
|
|
|
|
u64 total_run_time;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int __show_thread_runtime(struct thread *t, void *priv)
|
|
|
|
{
|
|
|
|
struct total_run_stats *stats = priv;
|
|
|
|
struct thread_runtime *r;
|
|
|
|
|
|
|
|
if (thread__is_filtered(t))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
r = thread__priv(t);
|
|
|
|
if (r && r->run_stats.n) {
|
|
|
|
stats->task_count++;
|
|
|
|
stats->sched_count += r->run_stats.n;
|
|
|
|
stats->total_run_time += r->total_run_time;
|
2017-01-13 18:45:23 +08:00
|
|
|
|
|
|
|
if (stats->sched->show_state)
|
|
|
|
print_thread_waittime(t, r);
|
|
|
|
else
|
|
|
|
print_thread_runtime(t, r);
|
2016-11-16 14:06:30 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int show_thread_runtime(struct thread *t, void *priv)
|
|
|
|
{
|
|
|
|
if (t->dead)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return __show_thread_runtime(t, priv);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int show_deadthread_runtime(struct thread *t, void *priv)
|
|
|
|
{
|
|
|
|
if (!t->dead)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return __show_thread_runtime(t, priv);
|
|
|
|
}
|
|
|
|
|
2016-12-08 22:47:55 +08:00
|
|
|
static size_t callchain__fprintf_folded(FILE *fp, struct callchain_node *node)
|
|
|
|
{
|
|
|
|
const char *sep = " <- ";
|
|
|
|
struct callchain_list *chain;
|
|
|
|
size_t ret = 0;
|
|
|
|
char bf[1024];
|
|
|
|
bool first;
|
|
|
|
|
|
|
|
if (node == NULL)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
ret = callchain__fprintf_folded(fp, node->parent);
|
|
|
|
first = (ret == 0);
|
|
|
|
|
|
|
|
list_for_each_entry(chain, &node->val, list) {
|
|
|
|
if (chain->ip >= PERF_CONTEXT_MAX)
|
|
|
|
continue;
|
|
|
|
if (chain->ms.sym && chain->ms.sym->ignore)
|
|
|
|
continue;
|
|
|
|
ret += fprintf(fp, "%s%s", first ? "" : sep,
|
|
|
|
callchain_list__sym_name(chain, bf, sizeof(bf),
|
|
|
|
false));
|
|
|
|
first = false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-12-07 03:18:19 +08:00
|
|
|
static size_t timehist_print_idlehist_callchain(struct rb_root_cached *root)
|
2016-12-08 22:47:55 +08:00
|
|
|
{
|
|
|
|
size_t ret = 0;
|
|
|
|
FILE *fp = stdout;
|
|
|
|
struct callchain_node *chain;
|
2018-12-07 03:18:19 +08:00
|
|
|
struct rb_node *rb_node = rb_first_cached(root);
|
2016-12-08 22:47:55 +08:00
|
|
|
|
|
|
|
printf(" %16s %8s %s\n", "Idle time (msec)", "Count", "Callchains");
|
|
|
|
printf(" %.16s %.8s %.50s\n", graph_dotted_line, graph_dotted_line,
|
|
|
|
graph_dotted_line);
|
|
|
|
|
|
|
|
while (rb_node) {
|
|
|
|
chain = rb_entry(rb_node, struct callchain_node, rb_node);
|
|
|
|
rb_node = rb_next(rb_node);
|
|
|
|
|
|
|
|
ret += fprintf(fp, " ");
|
|
|
|
print_sched_time(chain->hit, 12);
|
|
|
|
ret += 16; /* print_sched_time returns 2nd arg + 4 */
|
|
|
|
ret += fprintf(fp, " %8d ", chain->count);
|
|
|
|
ret += callchain__fprintf_folded(fp, chain);
|
|
|
|
ret += fprintf(fp, "\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-11-16 14:06:30 +08:00
|
|
|
static void timehist_print_summary(struct perf_sched *sched,
				   struct perf_session *session)
{
	struct machine *m = &session->machines.host;
	struct total_run_stats totals;
	u64 task_count;
	struct thread *t;
	struct thread_runtime *r;
	int i;
	u64 hist_time = sched->hist_time.end - sched->hist_time.start;

	memset(&totals, 0, sizeof(totals));
	totals.sched = sched;

	if (sched->idle_hist) {
		printf("\nIdle-time summary\n");
		printf("%*s parent sched-out ", comm_width, "comm");
		printf(" idle-time min-idle avg-idle max-idle stddev migrations\n");
	} else if (sched->show_state) {
		printf("\nWait-time summary\n");
		printf("%*s parent sched-in ", comm_width, "comm");
		printf(" run-time sleep iowait preempt delay\n");
	} else {
		printf("\nRuntime summary\n");
		printf("%*s parent sched-in ", comm_width, "comm");
		printf(" run-time min-run avg-run max-run stddev migrations\n");
	}
	printf("%*s (count) ", comm_width, "");
	printf(" (msec) (msec) (msec) (msec) %s\n",
	       sched->show_state ? "(msec)" : "%");
	printf("%.117s\n", graph_dotted_line);

	machine__for_each_thread(m, show_thread_runtime, &totals);
	task_count = totals.task_count;
	if (!task_count)
		printf("<no still running tasks>\n");

	printf("\nTerminated tasks:\n");
	machine__for_each_thread(m, show_deadthread_runtime, &totals);
	if (task_count == totals.task_count)
		printf("<no terminated tasks>\n");

	/* CPU idle stats not tracked when samples were skipped */
	if (sched->skipped_samples && !sched->idle_hist)
		return;

	printf("\nIdle stats:\n");
	for (i = 0; i < idle_max_cpu; ++i) {
		t = idle_threads[i];
		if (!t)
			continue;

		r = thread__priv(t);
		if (r && r->run_stats.n) {
			totals.sched_count += r->run_stats.n;
			printf(" CPU %2d idle for ", i);
			print_sched_time(r->total_run_time, 6);
			printf(" msec (%6.2f%%)\n", 100.0 * r->total_run_time / hist_time);
		} else
			printf(" CPU %2d idle entire time window\n", i);
	}

	if (sched->idle_hist && sched->show_callchain) {
		callchain_param.mode = CHAIN_FOLDED;
		callchain_param.value = CCVAL_PERIOD;

		callchain_register_param(&callchain_param);

		printf("\nIdle stats by callchain:\n");
		for (i = 0; i < idle_max_cpu; ++i) {
			struct idle_thread_runtime *itr;

			t = idle_threads[i];
			if (!t)
				continue;

			itr = thread__priv(t);
			if (itr == NULL)
				continue;

			callchain_param.sort(&itr->sorted_root.rb_root, &itr->callchain,
					     0, &callchain_param);

			printf(" CPU %2d:", i);
			print_sched_time(itr->tr.total_run_time, 6);
			printf(" msec\n");
			timehist_print_idlehist_callchain(&itr->sorted_root);
			printf("\n");
		}
	}

	printf("\n"
	       " Total number of unique tasks: %" PRIu64 "\n"
	       "Total number of context switches: %" PRIu64 "\n",
	       totals.task_count, totals.sched_count);

	printf(" Total run time (msec): ");
	print_sched_time(totals.total_run_time, 2);
	printf("\n");

	printf(" Total scheduling time (msec): ");
	print_sched_time(hist_time, 2);
	printf(" (x %d)\n", sched->max_cpu);
}

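/*
 * timehist_print_summary() produces the end-of-run report: per-task
 * statistics (run-time, wait-time or idle-time depending on the mode),
 * terminated tasks, per-CPU idle time, optional idle callchains and the
 * overall totals.  Per-CPU idle stats are omitted when samples were
 * skipped, since they would be misleading.
 */
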
typedef int (*sched_handler)(struct perf_tool *tool,
			     union perf_event *event,
			     struct perf_evsel *evsel,
			     struct perf_sample *sample,
			     struct machine *machine);

static int perf_timehist__process_sample(struct perf_tool *tool,
					 union perf_event *event,
					 struct perf_sample *sample,
					 struct perf_evsel *evsel,
					 struct machine *machine)
{
	struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
	int err = 0;
	int this_cpu = sample->cpu;

	if (this_cpu > sched->max_cpu)
		sched->max_cpu = this_cpu;

	if (evsel->handler != NULL) {
		sched_handler f = evsel->handler;

		err = f(tool, event, evsel, sample, machine);
	}

	return err;
}

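/*
 * perf_timehist__process_sample() is the generic sample callback: it tracks
 * the highest CPU number seen and dispatches to the per-event handler
 * installed on the evsel (sched_switch, sched_wakeup, ...).
 */
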
static int timehist_check_attr(struct perf_sched *sched,
			       struct perf_evlist *evlist)
{
	struct perf_evsel *evsel;
	struct evsel_runtime *er;

	list_for_each_entry(evsel, &evlist->entries, node) {
		er = perf_evsel__get_runtime(evsel);
		if (er == NULL) {
			pr_err("Failed to allocate memory for evsel runtime data\n");
			return -1;
		}

		if (sched->show_callchain && !evsel__has_callchain(evsel)) {
			pr_info("Samples do not have callchains.\n");
			sched->show_callchain = 0;
			symbol_conf.use_callchain = 0;
		}
	}

	return 0;
}

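/*
 * timehist_check_attr() allocates the per-evsel runtime bookkeeping and
 * quietly disables callchain display when the samples were recorded without
 * call graphs.
 */
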
static int perf_sched__timehist(struct perf_sched *sched)
{
	const struct perf_evsel_str_handler handlers[] = {
		{ "sched:sched_switch",     timehist_sched_switch_event, },
		{ "sched:sched_wakeup",     timehist_sched_wakeup_event, },
		{ "sched:sched_wakeup_new", timehist_sched_wakeup_event, },
	};
	const struct perf_evsel_str_handler migrate_handlers[] = {
		{ "sched:sched_migrate_task", timehist_migrate_task_event, },
	};
	struct perf_data data = {
		.path  = input_name,
		.mode  = PERF_DATA_MODE_READ,
		.force = sched->force,
	};

	struct perf_session *session;
	struct perf_evlist *evlist;
	int err = -1;

	/*
	 * event handlers for timehist option
	 */
	sched->tool.sample = perf_timehist__process_sample;
	sched->tool.mmap = perf_event__process_mmap;
	sched->tool.comm = perf_event__process_comm;
	sched->tool.exit = perf_event__process_exit;
	sched->tool.fork = perf_event__process_fork;
	sched->tool.lost = process_lost;
	sched->tool.attr = perf_event__process_attr;
	sched->tool.tracing_data = perf_event__process_tracing_data;
	sched->tool.build_id = perf_event__process_build_id;

	sched->tool.ordered_events = true;
	sched->tool.ordering_requires_timestamps = true;

	symbol_conf.use_callchain = sched->show_callchain;

	session = perf_session__new(&data, false, &sched->tool);
	if (session == NULL)
		return -ENOMEM;

	evlist = session->evlist;

	symbol__init(&session->header.env);

	if (perf_time__parse_str(&sched->ptime, sched->time_str) != 0) {
		pr_err("Invalid time string\n");
		return -EINVAL;
	}

	if (timehist_check_attr(sched, evlist) != 0)
		goto out;

	setup_pager();

	/* setup per-evsel handlers */
	if (perf_session__set_tracepoints_handlers(session, handlers))
		goto out;

	/* sched_switch event at a minimum needs to exist */
	if (!perf_evlist__find_tracepoint_by_name(session->evlist,
						  "sched:sched_switch")) {
		pr_err("No sched_switch events found. Have you run 'perf sched record'?\n");
		goto out;
	}

	if (sched->show_migrations &&
	    perf_session__set_tracepoints_handlers(session, migrate_handlers))
		goto out;

	/* pre-allocate struct for per-CPU idle stats */
	sched->max_cpu = session->header.env.nr_cpus_online;
	if (sched->max_cpu == 0)
		sched->max_cpu = 4;
	if (init_idle_threads(sched->max_cpu))
		goto out;

	/* summary_only implies summary option, but don't overwrite summary if set */
	if (sched->summary_only)
		sched->summary = sched->summary_only;

	if (!sched->summary_only)
		timehist_header(sched);

	err = perf_session__process_events(session);
	if (err) {
		pr_err("Failed to process events, error %d", err);
		goto out;
	}

	sched->nr_events = evlist->stats.nr_events[0];
	sched->nr_lost_events = evlist->stats.total_lost;
	sched->nr_lost_chunks = evlist->stats.nr_events[PERF_RECORD_LOST];

	if (sched->summary)
		timehist_print_summary(sched, session);

out:
	free_idle_threads();
	perf_session__delete(session);

	return err;
}

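/*
 * perf_sched__timehist() is the 'perf sched timehist' entry point: open the
 * data file, wire up the tool callbacks above, require at least the
 * sched:sched_switch tracepoint, pre-allocate per-CPU idle thread state,
 * process the events and print the summary if requested.
 *
 * Illustrative usage (assumes a prior record step):
 *
 *   perf sched record -- sleep 1
 *   perf sched timehist -s
 */
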
static void print_bad_events(struct perf_sched *sched)
{
	if (sched->nr_unordered_timestamps && sched->nr_timestamps) {
		printf(" INFO: %.3f%% unordered timestamps (%ld out of %ld)\n",
			(double)sched->nr_unordered_timestamps/(double)sched->nr_timestamps*100.0,
			sched->nr_unordered_timestamps, sched->nr_timestamps);
	}
	if (sched->nr_lost_events && sched->nr_events) {
		printf(" INFO: %.3f%% lost events (%ld out of %ld, in %ld chunks)\n",
			(double)sched->nr_lost_events/(double)sched->nr_events * 100.0,
			sched->nr_lost_events, sched->nr_events, sched->nr_lost_chunks);
	}
	if (sched->nr_context_switch_bugs && sched->nr_timestamps) {
		printf(" INFO: %.3f%% context switch bugs (%ld out of %ld)",
			(double)sched->nr_context_switch_bugs/(double)sched->nr_timestamps*100.0,
			sched->nr_context_switch_bugs, sched->nr_timestamps);
		if (sched->nr_lost_events)
			printf(" (due to lost events?)");
		printf("\n");
	}
}

static void __merge_work_atoms(struct rb_root_cached *root, struct work_atoms *data)
{
	struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
	struct work_atoms *this;
	const char *comm = thread__comm_str(data->thread), *this_comm;
	bool leftmost = true;

	while (*new) {
		int cmp;

		this = container_of(*new, struct work_atoms, node);
		parent = *new;

		this_comm = thread__comm_str(this->thread);
		cmp = strcmp(comm, this_comm);
		if (cmp > 0) {
			new = &((*new)->rb_left);
		} else if (cmp < 0) {
			new = &((*new)->rb_right);
			leftmost = false;
		} else {
			this->num_merged++;
			this->total_runtime += data->total_runtime;
			this->nb_atoms += data->nb_atoms;
			this->total_lat += data->total_lat;
			list_splice(&data->work_list, &this->work_list);
			if (this->max_lat < data->max_lat) {
				this->max_lat = data->max_lat;
				this->max_lat_at = data->max_lat_at;
			}
			zfree(&data);
			return;
		}
	}

	data->num_merged++;
	rb_link_node(&data->node, parent, new);
	rb_insert_color_cached(&data->node, root, leftmost);
}

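/*
 * __merge_work_atoms() folds a work_atoms entry into a tree keyed by the
 * task's comm: if a node with the same comm already exists, its counters are
 * merged and the duplicate freed; otherwise the entry is linked into the
 * cached rbtree (the leftmost hint keeps rb_first_cached() cheap).
 */
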
static void perf_sched__merge_lat(struct perf_sched *sched)
{
	struct work_atoms *data;
	struct rb_node *node;

	if (sched->skip_merge)
		return;

	while ((node = rb_first_cached(&sched->atom_root))) {
		rb_erase_cached(node, &sched->atom_root);
		data = rb_entry(node, struct work_atoms, node);
		__merge_work_atoms(&sched->merged_atom_root, data);
	}
}

static int perf_sched__lat(struct perf_sched *sched)
{
	struct rb_node *next;

	setup_pager();

	if (perf_sched__read_events(sched))
		return -1;

	perf_sched__merge_lat(sched);
	perf_sched__sort_lat(sched);

	printf("\n -----------------------------------------------------------------------------------------------------------------\n");
	printf(" Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |\n");
	printf(" -----------------------------------------------------------------------------------------------------------------\n");

	next = rb_first_cached(&sched->sorted_atom_root);

	while (next) {
		struct work_atoms *work_list;

		work_list = rb_entry(next, struct work_atoms, node);
		output_lat_thread(sched, work_list);
		next = rb_next(next);
		thread__zput(work_list->thread);
	}

	printf(" -----------------------------------------------------------------------------------------------------------------\n");
	printf(" TOTAL: |%11.3f ms |%9" PRIu64 " |\n",
		(double)sched->all_runtime / NSEC_PER_MSEC, sched->all_count);

	printf(" ---------------------------------------------------\n");

	print_bad_events(sched);
	printf("\n");

	return 0;
}

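/*
 * perf_sched__lat() is the 'perf sched latency' entry point: read the
 * recorded events, merge per-thread atoms into per-comm entries (unless
 * -p/--pids was given), sort by the --sort keys and print the latency table
 * followed by the bad-event statistics.
 */
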
static int setup_map_cpus(struct perf_sched *sched)
{
	struct cpu_map *map;

	sched->max_cpu = sysconf(_SC_NPROCESSORS_CONF);

	if (sched->map.comp) {
		sched->map.comp_cpus = zalloc(sched->max_cpu * sizeof(int));
		if (!sched->map.comp_cpus)
			return -1;
	}

	if (!sched->map.cpus_str)
		return 0;

	map = cpu_map__new(sched->map.cpus_str);
	if (!map) {
		pr_err("failed to get cpus map from %s\n", sched->map.cpus_str);
		return -1;
	}

	sched->map.cpus = map;
	return 0;
}

static int setup_color_pids(struct perf_sched *sched)
{
	struct thread_map *map;

	if (!sched->map.color_pids_str)
		return 0;

	map = thread_map__new_by_tid_str(sched->map.color_pids_str);
	if (!map) {
		pr_err("failed to get thread map from %s\n", sched->map.color_pids_str);
		return -1;
	}

	sched->map.color_pids = map;
	return 0;
}

static int setup_color_cpus(struct perf_sched *sched)
{
	struct cpu_map *map;

	if (!sched->map.color_cpus_str)
		return 0;

	map = cpu_map__new(sched->map.color_cpus_str);
	if (!map) {
		pr_err("failed to get cpus map from %s\n", sched->map.color_cpus_str);
		return -1;
	}

	sched->map.color_cpus = map;
	return 0;
}

static int perf_sched__map(struct perf_sched *sched)
{
	if (setup_map_cpus(sched))
		return -1;

	if (setup_color_pids(sched))
		return -1;

	if (setup_color_cpus(sched))
		return -1;

	setup_pager();
	if (perf_sched__read_events(sched))
		return -1;
	print_bad_events(sched);
	return 0;
}

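/*
 * perf_sched__map() drives 'perf sched map': the setup_* helpers above turn
 * the --cpus, --color-pids and --color-cpus strings into cpu/thread maps
 * before the switch events are rendered as a per-CPU text map.
 * Illustrative invocation:
 *
 *   perf sched map --compact --color-pids 1234
 */
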
static int perf_sched__replay(struct perf_sched *sched)
{
	unsigned long i;

	calibrate_run_measurement_overhead(sched);
	calibrate_sleep_measurement_overhead(sched);

	test_calibrations(sched);

	if (perf_sched__read_events(sched))
		return -1;

	printf("nr_run_events: %ld\n", sched->nr_run_events);
	printf("nr_sleep_events: %ld\n", sched->nr_sleep_events);
	printf("nr_wakeup_events: %ld\n", sched->nr_wakeup_events);

	if (sched->targetless_wakeups)
		printf("target-less wakeups: %ld\n", sched->targetless_wakeups);
	if (sched->multitarget_wakeups)
		printf("multi-target wakeups: %ld\n", sched->multitarget_wakeups);
	if (sched->nr_run_events_optimized)
		printf("run atoms optimized: %ld\n",
			sched->nr_run_events_optimized);

	print_task_traces(sched);
	add_cross_task_wakeups(sched);

	create_tasks(sched);
	printf("------------------------------------------------------------\n");
	for (i = 0; i < sched->replay_repeat; i++)
		run_one_test(sched);

	return 0;
}

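/*
 * perf_sched__replay() drives 'perf sched replay': calibrate the run/sleep
 * measurement overhead, read the recorded events, print basic statistics,
 * create the recorded tasks and re-run the captured scheduling pattern
 * --repeat times.
 */
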
static void setup_sorting(struct perf_sched *sched, const struct option *options,
			  const char * const usage_msg[])
{
	char *tmp, *tok, *str = strdup(sched->sort_order);

	for (tok = strtok_r(str, ", ", &tmp);
			tok; tok = strtok_r(NULL, ", ", &tmp)) {
		if (sort_dimension__add(tok, &sched->sort_list) < 0) {
			usage_with_options_msg(usage_msg, options,
					"Unknown --sort key: `%s'", tok);
		}
	}

	free(str);

	sort_dimension__add("pid", &sched->cmp_pid);
}

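/*
 * setup_sorting() parses the comma/space separated --sort string into sort
 * dimensions for the latency output; a "pid" dimension is always added to
 * the separate cmp_pid list used to match threads.
 */
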
static int __cmd_record(int argc, const char **argv)
{
	unsigned int rec_argc, i, j;
	const char **rec_argv;
	const char * const record_args[] = {
		"record",
		"-a",
		"-R",
		"-m", "1024",
		"-c", "1",
		"-e", "sched:sched_switch",
		"-e", "sched:sched_stat_wait",
		"-e", "sched:sched_stat_sleep",
		"-e", "sched:sched_stat_iowait",
		"-e", "sched:sched_stat_runtime",
		"-e", "sched:sched_process_fork",
		"-e", "sched:sched_wakeup",
		"-e", "sched:sched_wakeup_new",
		"-e", "sched:sched_migrate_task",
	};

	rec_argc = ARRAY_SIZE(record_args) + argc - 1;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));

	if (rec_argv == NULL)
		return -ENOMEM;

	for (i = 0; i < ARRAY_SIZE(record_args); i++)
		rec_argv[i] = strdup(record_args[i]);

	for (j = 1; j < (unsigned int)argc; j++, i++)
		rec_argv[i] = argv[j];

	BUG_ON(i != rec_argc);

	return cmd_record(i, rec_argv);
}

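/*
 * 'perf sched record' (handled by __cmd_record() above) just re-invokes
 * cmd_record() with the scheduler tracepoints pre-selected and any user
 * arguments appended.  Roughly:
 *
 *   perf sched record -- sleep 1
 * becomes
 *   perf record -a -R -m 1024 -c 1 -e sched:sched_switch ... \
 *               -e sched:sched_migrate_task -- sleep 1
 */
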
int cmd_sched(int argc, const char **argv)
{
	static const char default_sort_order[] = "avg, max, switch, runtime";
	struct perf_sched sched = {
		.tool = {
			.sample = perf_sched__process_tracepoint_sample,
			.comm = perf_sched__process_comm,
			.namespaces = perf_event__process_namespaces,
			.lost = perf_event__process_lost,
			.fork = perf_sched__process_fork_event,
			.ordered_events = true,
		},
		.cmp_pid = LIST_HEAD_INIT(sched.cmp_pid),
		.sort_list = LIST_HEAD_INIT(sched.sort_list),
		.start_work_mutex = PTHREAD_MUTEX_INITIALIZER,
		.work_done_wait_mutex = PTHREAD_MUTEX_INITIALIZER,
		.sort_order = default_sort_order,
		.replay_repeat = 10,
		.profile_cpu = -1,
		.next_shortname1 = 'A',
		.next_shortname2 = '0',
		.skip_merge = 0,
		.show_callchain = 1,
		.max_stack = 5,
	};
	const struct option sched_options[] = {
		OPT_STRING('i', "input", &input_name, "file",
			   "input file name"),
		OPT_INCR('v', "verbose", &verbose,
			 "be more verbose (show symbol address, etc)"),
		OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
			    "dump raw trace in ASCII"),
		OPT_BOOLEAN('f', "force", &sched.force, "don't complain, do it"),
		OPT_END()
	};
	const struct option latency_options[] = {
		OPT_STRING('s', "sort", &sched.sort_order, "key[,key2...]",
			   "sort by key(s): runtime, switch, avg, max"),
		OPT_INTEGER('C', "CPU", &sched.profile_cpu,
			    "CPU to profile on"),
		OPT_BOOLEAN('p', "pids", &sched.skip_merge,
			    "latency stats per pid instead of per comm"),
		OPT_PARENT(sched_options)
	};
	const struct option replay_options[] = {
		OPT_UINTEGER('r', "repeat", &sched.replay_repeat,
			     "repeat the workload replay N times (-1: infinite)"),
		OPT_PARENT(sched_options)
	};
	const struct option map_options[] = {
		OPT_BOOLEAN(0, "compact", &sched.map.comp,
			    "map output in compact mode"),
		OPT_STRING(0, "color-pids", &sched.map.color_pids_str, "pids",
			   "highlight given pids in map"),
		OPT_STRING(0, "color-cpus", &sched.map.color_cpus_str, "cpus",
			   "highlight given CPUs in map"),
		OPT_STRING(0, "cpus", &sched.map.cpus_str, "cpus",
			   "display given CPUs in map"),
		OPT_PARENT(sched_options)
	};
	const struct option timehist_options[] = {
		OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
			   "file", "vmlinux pathname"),
		OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name,
			   "file", "kallsyms pathname"),
		OPT_BOOLEAN('g', "call-graph", &sched.show_callchain,
			    "Display call chains if present (default on)"),
		OPT_UINTEGER(0, "max-stack", &sched.max_stack,
			     "Maximum number of functions to display backtrace."),
		OPT_STRING(0, "symfs", &symbol_conf.symfs, "directory",
			   "Look for files with symbols relative to this directory"),
		OPT_BOOLEAN('s', "summary", &sched.summary_only,
			    "Show only the summary with statistics"),
		OPT_BOOLEAN('S', "with-summary", &sched.summary,
			    "Show all events and the summary with statistics"),
		OPT_BOOLEAN('w', "wakeups", &sched.show_wakeups, "Show wakeup events"),
		OPT_BOOLEAN('n', "next", &sched.show_next, "Show next task"),
		OPT_BOOLEAN('M', "migrations", &sched.show_migrations, "Show migration events"),
		OPT_BOOLEAN('V', "cpu-visual", &sched.show_cpu_visual, "Add CPU visual"),
		OPT_BOOLEAN('I', "idle-hist", &sched.idle_hist, "Show idle events only"),
		OPT_STRING(0, "time", &sched.time_str, "str",
			   "Time span for analysis (start,stop)"),
		OPT_BOOLEAN(0, "state", &sched.show_state, "Show task state when sched-out"),
		OPT_STRING('p', "pid", &symbol_conf.pid_list_str, "pid[,pid...]",
			   "analyze events only for given process id(s)"),
		OPT_STRING('t', "tid", &symbol_conf.tid_list_str, "tid[,tid...]",
			   "analyze events only for given thread id(s)"),
		OPT_PARENT(sched_options)
	};

	const char * const latency_usage[] = {
		"perf sched latency [<options>]",
		NULL
	};
	const char * const replay_usage[] = {
		"perf sched replay [<options>]",
		NULL
	};
	const char * const map_usage[] = {
		"perf sched map [<options>]",
		NULL
	};
	const char * const timehist_usage[] = {
		"perf sched timehist [<options>]",
		NULL
	};
	const char *const sched_subcommands[] = { "record", "latency", "map",
						  "replay", "script",
						  "timehist", NULL };
	const char *sched_usage[] = {
		NULL,
		NULL
	};
	struct trace_sched_handler lat_ops = {
		.wakeup_event = latency_wakeup_event,
		.switch_event = latency_switch_event,
		.runtime_event = latency_runtime_event,
		.migrate_task_event = latency_migrate_task_event,
	};
	struct trace_sched_handler map_ops = {
		.switch_event = map_switch_event,
	};
	struct trace_sched_handler replay_ops = {
		.wakeup_event = replay_wakeup_event,
		.switch_event = replay_switch_event,
		.fork_event = replay_fork_event,
	};
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(sched.curr_pid); i++)
		sched.curr_pid[i] = -1;

	argc = parse_options_subcommand(argc, argv, sched_options, sched_subcommands,
					sched_usage, PARSE_OPT_STOP_AT_NON_OPTION);
	if (!argc)
		usage_with_options(sched_usage, sched_options);

	/*
	 * Aliased to 'perf script' for now:
	 */
	if (!strcmp(argv[0], "script"))
		return cmd_script(argc, argv);

	if (!strncmp(argv[0], "rec", 3)) {
		return __cmd_record(argc, argv);
	} else if (!strncmp(argv[0], "lat", 3)) {
		sched.tp_handler = &lat_ops;
		if (argc > 1) {
			argc = parse_options(argc, argv, latency_options, latency_usage, 0);
			if (argc)
				usage_with_options(latency_usage, latency_options);
		}
		setup_sorting(&sched, latency_options, latency_usage);
		return perf_sched__lat(&sched);
	} else if (!strcmp(argv[0], "map")) {
		if (argc) {
			argc = parse_options(argc, argv, map_options, map_usage, 0);
			if (argc)
				usage_with_options(map_usage, map_options);
		}
		sched.tp_handler = &map_ops;
		setup_sorting(&sched, latency_options, latency_usage);
		return perf_sched__map(&sched);
	} else if (!strncmp(argv[0], "rep", 3)) {
		sched.tp_handler = &replay_ops;
		if (argc) {
			argc = parse_options(argc, argv, replay_options, replay_usage, 0);
			if (argc)
				usage_with_options(replay_usage, replay_options);
		}
		return perf_sched__replay(&sched);
	} else if (!strcmp(argv[0], "timehist")) {
		if (argc) {
			argc = parse_options(argc, argv, timehist_options,
					     timehist_usage, 0);
			if (argc)
				usage_with_options(timehist_usage, timehist_options);
		}

		if ((sched.show_wakeups || sched.show_next) &&
		    sched.summary_only) {
			pr_err(" Error: -s and -[n|w] are mutually exclusive.\n");
			parse_options_usage(timehist_usage, timehist_options, "s", true);
			if (sched.show_wakeups)
				parse_options_usage(NULL, timehist_options, "w", true);
			if (sched.show_next)
				parse_options_usage(NULL, timehist_options, "n", true);
			return -EINVAL;
		}

		return perf_sched__timehist(&sched);
	} else {
		usage_with_options(sched_usage, sched_options);
	}

	return 0;
}