Separate the option parsing cleanly and add two variants:
- 'perf sched latency' (can be abbreviated via 'perf sched lat')
- 'perf sched replay' (can be abbreviated via 'perf sched rep')
Also add a repeat count option to replay and add a separate
set of options for replay.
Do the sorting setup only in the latency sub-command.
Display separate help screens for 'perf sched' and
'perf sched replay -h' - i.e. further separation of the
sub-commands.
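For illustration, the dispatch could look roughly like the sketch
below, assuming perf's parse-options helpers (parse_options(),
usage_with_options(), OPT_INTEGER()); the option tables, usage
strings and __cmd_*() helpers are illustrative names, not
necessarily the final code:

	static unsigned int replay_repeat = 10;	/* assumed default */

	static const struct option replay_options[] = {
		OPT_INTEGER('r', "repeat", &replay_repeat,
			    "repeat the workload replay N times"),
		OPT_END()
	};

	int cmd_sched(int argc, const char **argv, const char *prefix)
	{
		argc = parse_options(argc, argv, sched_options, sched_usage,
				     PARSE_OPT_STOP_AT_NON_OPTION);
		if (!argc)
			usage_with_options(sched_usage, sched_options);

		if (!strncmp(argv[0], "lat", 3)) {
			/* 'perf sched latency' / 'perf sched lat' */
			setup_sorting();	/* sorting setup only here */
			__cmd_lat();
		} else if (!strncmp(argv[0], "rep", 3)) {
			/* 'perf sched replay' / 'perf sched rep' */
			argc = parse_options(argc, argv, replay_options,
					     replay_usage, 0);
			__cmd_replay();
		} else {
			usage_with_options(sched_usage, sched_options);
		}

		return 0;
	}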
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Implement multidimensional sorting in perf sched so that you
can sort by number of switches, latency average, latency
maximum, or runtime.
perf sched -l -s avg,max (this is the default)
-----------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms |
-----------------------------------------------------------------------------------
gnome-power-man | 0.113 ms | 1 | avg: 4998.531 ms | max: 4998.531 ms |
xfdesktop | 1.190 ms | 7 | avg: 136.475 ms | max: 940.933 ms |
xfce-mcs-manage | 2.194 ms | 22 | avg: 38.534 ms | max: 735.174 ms |
notification-da | 2.749 ms | 31 | avg: 27.436 ms | max: 731.791 ms |
xfce4-session | 3.343 ms | 28 | avg: 26.796 ms | max: 734.891 ms |
xfwm4 | 3.159 ms | 22 | avg: 12.406 ms | max: 241.333 ms |
xchat | 42.789 ms | 214 | avg: 11.886 ms | max: 100.349 ms |
xfce4-terminal | 5.386 ms | 22 | avg: 11.414 ms | max: 241.611 ms |
firefox | 151.992 ms | 123 | avg: 9.543 ms | max: 153.717 ms |
xfce4-panel | 24.324 ms | 47 | avg: 8.189 ms | max: 242.352 ms |
:5090 | 6.932 ms | 111 | avg: 8.131 ms | max: 102.665 ms |
events/0 | 0.758 ms | 12 | avg: 1.964 ms | max: 21.879 ms |
Xorg | 280.558 ms | 340 | avg: 1.864 ms | max: 99.526 ms |
geany | 63.391 ms | 295 | avg: 1.099 ms | max: 9.334 ms |
reiserfs/0 | 0.039 ms | 2 | avg: 0.854 ms | max: 1.487 ms |
kondemand/0 | 8.251 ms | 245 | avg: 0.691 ms | max: 34.372 ms |
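The key chaining could be sketched like this (illustrative field
and function names; it assumes kernel-style list helpers and a
per-task aggregate struct):

	struct task_stats {			/* illustrative per-task aggregates */
		u64	nb_atoms;		/* number of switches      */
		u64	total_lat;		/* sum of latencies, in ns */
		u64	max_lat;		/* worst latency, in ns    */
		u64	total_runtime;		/* sum of runtime, in ns   */
	};

	struct sort_dimension {
		const char		*name;	/* "avg", "max", "switch", "runtime" */
		int			(*cmp)(struct task_stats *l, struct task_stats *r);
		struct list_head	list;
	};

	static int avg_cmp(struct task_stats *l, struct task_stats *r)
	{
		u64 avgl = l->nb_atoms ? l->total_lat / l->nb_atoms : 0;
		u64 avgr = r->nb_atoms ? r->total_lat / r->nb_atoms : 0;

		if (avgl < avgr)
			return -1;
		if (avgl > avgr)
			return 1;
		return 0;
	}

	/* -s avg,max builds a list of dimensions; a tie on one key falls through to the next: */
	static int task_cmp(struct list_head *sort_list,
			    struct task_stats *l, struct task_stats *r)
	{
		struct sort_dimension *sort;
		int ret;

		list_for_each_entry(sort, sort_list, list) {
			ret = sort->cmp(l, r);
			if (ret)
				return ret;
		}
		return 0;
	}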
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We are dividing a time in ns by 1e9. This is a nsec to sec
conversion. What we want is msecs. Fix it by dividing by 1e6.
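For illustration (the field name is illustrative), the printed
value changes like this:

	printf("%11.3f ms ", (double)atoms->total_runtime / 1e9);	/* wrong: ns -> s  */
	printf("%11.3f ms ", (double)atoms->total_runtime / 1e6);	/* right: ns -> ms */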
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add fields to the thread atom list that keep track of the
total and max latencies and also the total runtime. This makes
the output faster and also prepares for sorting.
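A sketch of the per-thread atom list with the new aggregate
fields (field names are illustrative):

	struct task_atoms {
		struct list_head	atom_list;	/* the individual work atoms       */
		struct thread		*thread;	/* the task these atoms belong to  */
		struct rb_node		node;		/* per-task node in the result tree */
		u64			max_lat;	/* worst latency seen so far       */
		u64			total_lat;	/* sum of all latencies            */
		u64			nb_atoms;	/* how many atoms were accumulated */
		u64			total_runtime;	/* sum of the runtime samples      */
	};

With these aggregates updated as atoms are inserted, the display
pass no longer has to walk every atom per task, and the sort keys
are directly available.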
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently in perf sched, we are measuring the scheduler wakeup
latencies.
Now we also want to measure the time a task waits to be
scheduled after it gets preempted.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
To measure the latencies, we capture the sched atom data in a
specific structure named struct lat_snapshot.
As this structure can be used for other scheduler profiling
purposes and mirrors what happens in a thread work atom, let's
rename it to struct work_atom and propagate this renaming to
other function and structure names to keep things coherent.
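After the rename, the per-snapshot record could look like this
(a sketch; the field set follows the description above):

	enum thread_state {
		THREAD_SLEEPING,
		THREAD_WAKED_UP,
		THREAD_SCHED_IN,
	};

	struct work_atom {
		struct list_head	list;		/* linked into the per-thread atom list */
		enum thread_state	state;
		u64			wake_up_time;	/* when the task was woken up           */
		u64			sched_in_time;	/* when it finally got a CPU            */
		u64			runtime;	/* how long it then ran                 */
	};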
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Extend the latency tracking structure with scheduling atom
runtime info - and sum it up during per task display.
(Also clean up a few details.)
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
After:
-----------------------------------------------------------------------------------
Task | runtime ms | switches | average delay ms | maximum delay ms |
-----------------------------------------------------------------------------------
migration/0 | 0.000 ms | 1 | avg: 0.047 ms | max: 0.047 ms |
ksoftirqd/0 | 0.000 ms | 1 | avg: 0.039 ms | max: 0.039 ms |
migration/1 | 0.000 ms | 3 | avg: 0.013 ms | max: 0.016 ms |
migration/3 | 0.000 ms | 2 | avg: 0.003 ms | max: 0.004 ms |
migration/4 | 0.000 ms | 1 | avg: 0.022 ms | max: 0.022 ms |
distccd | 0.000 ms | 1 | avg: 0.004 ms | max: 0.004 ms |
distccd | 0.000 ms | 1 | avg: 0.014 ms | max: 0.014 ms |
distccd | 0.000 ms | 2 | avg: 0.000 ms | max: 0.000 ms |
distccd | 0.000 ms | 2 | avg: 0.012 ms | max: 0.019 ms |
distccd | 0.000 ms | 1 | avg: 0.002 ms | max: 0.002 ms |
as | 0.000 ms | 2 | avg: 0.019 ms | max: 0.019 ms |
as | 0.000 ms | 3 | avg: 0.015 ms | max: 0.017 ms |
as | 0.000 ms | 1 | avg: 0.009 ms | max: 0.009 ms |
perf | 0.000 ms | 1 | avg: 0.001 ms | max: 0.001 ms |
gcc | 0.000 ms | 1 | avg: 0.021 ms | max: 0.021 ms |
run-mozilla.sh | 0.000 ms | 2 | avg: 0.010 ms | max: 0.017 ms |
mozilla-plugin- | 0.000 ms | 1 | avg: 0.006 ms | max: 0.006 ms |
gcc | 0.000 ms | 2 | avg: 0.013 ms | max: 0.013 ms |
-----------------------------------------------------------------------------------
(The runtime ms column is not filled in yet.)
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- Separate the latency and the replay commands more cleanly
- Use consistent naming
- Display a help page on 'perf sched' outlining the commands,
instead of aborting
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add the -l/--latency option that reports statistics about the
scheduler latencies.
For now, the latencies are measured over the following sequence:
- task A is sleeping (D or S state)
- task B wakes up A
^
|
|
latency timeframe
|
|
v
- task A is scheduled in
Start by recording all scheduler events:
perf record -e sched:*
and then fetch the results:
perf sched -l
Tasks count total avg max
migration/0 2 39849 19924 28826
ksoftirqd/0 7 756383 108054 373014
migration/1 5 45391 9078 10452
ksoftirqd/1 2 399055 199527 359130
events/0 8 4780110 597513 4500250
events/1 9 6353057 705895 2986012
kblockd/0 42 37805097 900121 5077684
The snapshots are in nanoseconds.
- Count: number of snapshots taken for the given task
- Total: total latency, in nanoseconds
- Avg  : average latency between wake-up and sched-in
- Max  : maximum snapshot latency
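For illustration, folding one snapshot into these per-task
statistics could look like this (illustrative names; the Avg
column is then total/count at display time):

	struct lat_stats {
		u64	count;		/* snapshots taken for this task */
		u64	total;		/* sum of latencies, in ns       */
		u64	max;		/* worst latency, in ns          */
	};

	static void account_latency(struct lat_stats *stats,
				    u64 wake_up_time, u64 sched_in_time)
	{
		u64 delta = sched_in_time - wake_up_time;	/* the latency timeframe */

		stats->count++;
		stats->total += delta;
		if (delta > stats->max)
			stats->max = delta;
	}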
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Create a sched event handler structure into which the various
sched event readers can plug their own callbacks.
This makes it easier to add new perf sched sub-commands.
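A sketch of such a handler table (simplified signatures; the
callback names shown are illustrative, each sub-command would
provide its own):

	struct trace_sched_handler {
		void (*wakeup_event)(void *event_data, int cpu, u64 timestamp,
				     struct thread *thread);
		void (*switch_event)(void *event_data, int cpu, u64 timestamp,
				     struct thread *thread);
		void (*fork_event)(void *event_data, int cpu, u64 timestamp,
				   struct thread *thread);
	};

	/* e.g. the latency sub-command plugs in its own readers: */
	static struct trace_sched_handler lat_ops = {
		.wakeup_event	= latency_wakeup_event,
		.switch_event	= latency_switch_event,
	};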
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf sched raises the following error when it meets a sched
switch event:
perf: builtin-sched.c:286: register_pid: Assertion `!(pid >= 65536)' failed.
Abandon
Currently in x86-64, the sched switch events have a hole in the
middle of the structure:
u16 common_type;
u8 common_flags;
u8 common_preempt_count;
u32 common_pid;
u32 common_tgid;
char prev_comm[16];
u32 prev_pid;
u32 prev_prio;
<--- there
u64 prev_state;
char next_comm[16];
u32 next_pid;
u32 next_prio;
GCC inserts a 4-byte hole there so that prev_state is u64
aligned. And the events are exported to userspace with this
hole.
But in userspace, perf sched fetches it using a structure that
has a new field at the beginning: u32 size. This is because our
trace is exported with its size as a field. With this new field,
the hole in the middle disappears, because it makes prev_state
well aligned. And since we use a pointer to the raw trace cast
to this struct, instead of reading prev_state we are reading
the hole.
We could fix it by keeping the size separate from the struct,
but there are actually a lot of other potential problems: some
fields may be saved as a long on a 64-bit system and later read
as a long on a 32-bit system. Also, this direct cast doesn't take
into account the endianness differences between the traced
machine and the machine on which we do the post-processing.
So instead of using such dangerous direct casts, fetch the
values using the trace parsing API that already takes care of
all these problems.
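For illustration, with a by-name accessor along the lines of
perf's raw_field_value() (the exact helper name is an assumption
here), the switch event would be filled field by field instead of
being cast:

	struct trace_switch_event switch_event;

	/* each value is located by name from the event format, so holes,
	   word size and endianness are handled by the parsing layer */
	switch_event.prev_pid   = raw_field_value(event, "prev_pid", data);
	switch_event.prev_prio  = raw_field_value(event, "prev_prio", data);
	switch_event.prev_state = raw_field_value(event, "prev_state", data);
	switch_event.next_pid   = raw_field_value(event, "next_pid", data);
	switch_event.next_prio  = raw_field_value(event, "next_prio", data);
	/* string fields such as prev_comm/next_comm are copied out separately */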
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Various small cleanups - removal of debug printks and dead
functions, etc.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Import the schedbench.c tool that I wrote some time ago to
simulate scheduler behavior but never finished. It's a good
basis for perf sched nevertheless.
Most of its guts are not hooked up to the perf event loop
yet - that will be done in the patches to come.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This turn-key tool allows scheduler measurements to be
conducted and the results to be displayed numerically.
First baby step towards that goal: clone the new command off of
perf trace.
Fix a few other details along the way:
- add (minimal) perf trace documentation
- reorder a few places
- list perf trace in the mainporcelain list as well,
as it's a very useful utility.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>