Commit Graph

359 Commits

Author SHA1 Message Date
Jiri Olsa 2bcf73069b perf annotate: Loop group events directly in annotation__calc_percent()
We need to bring in the 'struct hists' object, and for that we need the
'struct perf_evsel' object in scope.

Switch the group data loop over to the evsel group loop.  It does the
same thing, but it brings the evsel object into scope, which we can
later use to get the 'struct hists' object.
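
A minimal sketch of the idea, with simplified stand-in types rather than
the actual perf source (the real code iterates the group with perf's
evsel helpers): looping the group's evsels directly hands us the evsel,
from which the matching 'struct hists' can later be reached.

  #include <stdio.h>

  /* Simplified stand-in for perf's event structure (illustration only). */
  struct evsel {
      const char *name;
      struct evsel *group_next;   /* next member of the event group */
  };

  /* Loop the group's evsels directly: each iteration hands us the evsel
   * itself, instead of indexing per-event sample data by position. */
  static void calc_percent_group(struct evsel *leader)
  {
      struct evsel *evsel;

      for (evsel = leader; evsel; evsel = evsel->group_next)
          printf("computing percentages for event %s\n", evsel->name);
  }

  int main(void)
  {
      struct evsel branches = { "branches", NULL };
      struct evsel cycles   = { "cycles", &branches };

      calc_percent_group(&cycles);
      return 0;
  }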

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:47 -03:00
Jiri Olsa 48a1e4f238 perf annotate: Rename hist to sym_hist in annotation__calc_percent
We will need to bring a 'struct hists' variable into this scope, so it's
better to do this rename first.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:47 -03:00
Jiri Olsa 0440af74dc perf annotate: Rename local sample variables to data
Following the previous rename, also change the local variable names to
match.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:47 -03:00
Jiri Olsa c2f938ba5a perf annotate: Rename struct annotation_line::samples* to data*
The name 'samples*' is a little confusing, because we have a nested
'struct sym_hist_entry' under the annotation_line struct, which holds
'nr_samples' as well.

Also, the holding struct is named 'annotation_data', so the 'data' name
fits better.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-5-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:46 -03:00
Jiri Olsa 0683d13c1a perf annotate: Get rid of annotation__scnprintf_samples_period()
We have a more current function to get the title for annotation, which
is hists__scnprintf_title().  They both produce the same output as far
as the annotation's header line goes.

They differ in the counting of nr_samples: hists__scnprintf_title()
provides a more accurate number, based on the setting of the
symbol_conf.filter_relative variable.

It also displays any uid/thread/dso/socket filters/zooms if any are
set, which annotation__scnprintf_samples_period() does not.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:46 -03:00
Jiri Olsa 5ecf7d30eb perf annotate: Make annotation_line__max_percent static
There's no outside user of it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:45 -03:00
Jiri Olsa 7a3e71e0d8 perf annotate: Make symbol__annotate_fprintf2() local
There's no outside user of it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lkml.kernel.org/r/20180804130521.11408-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-08 15:55:45 -03:00
Arnaldo Carvalho de Melo 8d628d26b9 perf annotate: Make __symbol__inc_addr_samples handle src->histograms == NULL
Making it a bit more robust: this took place when a sample appeared
right after:

  ffffffff8a925000 D __nosave_end

and before the next considered symbol.  Using kallsyms makes us
over-guess the size of __nosave_end, and then the sequence:

  hist_entry__inc_addr_samples ->
    symbol__inc_addr_samples ->
      symbol__hists ->
        annotated_source__alloc_histograms

ends up trying to allocate gigabytes of RAM for the annotation
histograms...

This will be alleviated by considering BSS symbols, which we should but
so far don't, and then we should investigate those samples further.

The testcase was to run:

   perf top -e cycles/call-graph=fp/,cache-misses/call-graph=dwarf/,instructions

for a while until it segfaulted trying to access the NULL
notes->src->histograms.
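
A minimal sketch of the defensive check, with stand-in types rather than
the actual perf code: bail out when the histograms were never allocated
for this symbol instead of dereferencing a NULL pointer.

  #include <stddef.h>

  struct sym_hist { unsigned long long nr_samples; };

  struct annotated_source {
      struct sym_hist *histograms;   /* NULL until allocated */
      size_t nr_entries;
  };

  /* Account one sample at 'offset', or return an error instead of
   * crashing when no histograms exist (e.g. a stray sample past a
   * symbol whose size was over-guessed from kallsyms). */
  static int inc_addr_samples(struct annotated_source *src, size_t offset)
  {
      if (src == NULL || src->histograms == NULL)
          return -1;
      if (offset >= src->nr_entries)
          return -1;
      src->histograms[offset].nr_samples++;
      return 0;
  }

  int main(void)
  {
      struct annotated_source src = { NULL, 0 };

      return inc_addr_samples(&src, 4) == -1 ? 0 : 1;  /* survives NULL histograms */
  }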

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ndfjtpiop3tdcnyjgp320ra8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-06 12:52:08 -03:00
Arnaldo Carvalho de Melo f178fd2d49 perf annotate: Move objdump_path to struct annotation_options
One more step in grouping annotation options.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-sogzdhugoavm6fyw60jnb0vs@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:53 -03:00
Arnaldo Carvalho de Melo a47e843edc perf annotate: Move disassembler_style global to annotation_options
Continuing to group annotation specific stuff into a struct.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-p3cdhltj58jt0byjzg3g7obx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:53 -03:00
Arnaldo Carvalho de Melo 1eddd9e410 perf annotate: Adopt annotation options from symbol_conf
Continuing to group annotation options in an annotation specific struct.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-astei92tzxp4yccag5pxb2h7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:53 -03:00
Arnaldo Carvalho de Melo 380195e2b0 perf annotate: Pass annotation_options to symbol__annotate()
Now all callers of symbol__disassemble() can hand it the per-tool
annotation_options, which will allow us to remove lots of stuff from
symbol_conf, the kitchen sink of perf configs, reducing its size and
grouping annotation-specific stuff together.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-vpr7ys7ggvs2fzpg8wbjcw7e@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:53 -03:00
Arnaldo Carvalho de Melo 982d410bc6 perf annotate stdio: Use annotation_options consistently
Across all the routines; this way we can eventually have a consistent
set of defaults for all UIs.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-6qgtixurjgdk5u0n3rw78ges@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:52 -03:00
Arnaldo Carvalho de Melo 14c8dde170 perf annotate: Replace symbol__alloc_hists() with symbol__hists()
It's a bit shorter, so ditch the old symbol__alloc_hists() function.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-m7tienxk7dijh5ln62yln1m9@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:52 -03:00
Arnaldo Carvalho de Melo 0693f7588a perf annotate: Stop using symbol_conf.nr_events global in symbol__hists()
Since we now have evsel->evlist->nr_entries available at the single
place calling this function, use it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-9mgosbqa977h39j4i9ys8t75@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:52 -03:00
Arnaldo Carvalho de Melo c6b635eece perf annotate: Introduce symbol__cycle_hists()
In this case we want just notes->src->cycles_hist, allocating it if needed.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-pqj81aneunhftlntm66tmhz0@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo e8ea922a7e perf annotate: Introduce symbol__hists()
In this case we want just notes->src->histograms, allocating it if needed.
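
A minimal sketch of the lazy-allocation accessor, with stand-in types
and names rather than the actual perf code: hand back the histograms,
allocating one per event on first use.

  #include <stdlib.h>

  struct sym_hist { unsigned long long nr_samples; };

  struct annotated_source {
      struct sym_hist *histograms;   /* one per event, NULL until needed */
      int nr_histograms;
  };

  /* Return notes->src->histograms, allocating it if needed. */
  static struct sym_hist *source__hists(struct annotated_source *src, int nr_events)
  {
      if (src->histograms == NULL) {
          src->histograms = calloc(nr_events, sizeof(*src->histograms));
          src->nr_histograms = src->histograms ? nr_events : 0;
      }
      return src->histograms;
  }

  int main(void)
  {
      struct annotated_source src = { NULL, 0 };
      int ok = source__hists(&src, 2) != NULL;

      free(src.histograms);
      return ok ? 0 : 1;
  }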

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-4iatualjskia7sojmdb65cmm@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo e1a91a834d perf annotate: __symbol__inc_addr_samples() needs just annotated_source
It only operates on the histograms, so no need for the encompassing
'struct annotation'.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-2se2v7rrjil0kwqywks04ey2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo be3e26d99c perf annotate: Introduce annotated_source__alloc_histograms
So that we can call it independently, in contexts where we know we
already have notes->src allocated.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-f5fn7tr1asey6g013wavpn4c@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo ca39650309 perf annotate: Introduce constructor/destructor for annotated_source
More stuff will go in there: all the parts that are not needed when a
symbol has no samples and that were mistakenly added to 'struct
annotation'.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-u4761kyzhixw9ydk6kib3f0o@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo 116c626b9a perf annotate: Split allocation of annotated_source struct
So that we can allocate just notes->src->cyc_hist, which, unlike
notes->src->histograms, is not per event; in the paths where we need to
lazily allocate notes->src->cyc_hist we don't have the number of events
handy to also allocate ->histograms.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-tsx7dhxzpi0criyx0sio3pz3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo f40dd6d1b4 perf annotate: __symbol__account_cycles doesn't need notes
It only operates on the notes->src->cyc_hist, just pass that to it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-zd1cu4zwmu21k0cxlr83y6vr@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:51 -03:00
Arnaldo Carvalho de Melo e345f3bd9b perf annotate: Pass perf_evsel instead of just evsel->idx
The code gets shorter and we'll be able to use evsel->evlist in a
followup patch.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-t0s7vy19wq5kak74kavm8swf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-06-04 10:28:50 -03:00
Jin Yao 787e4da9f9 perf annotate: Show group event string for stdio
When group display is enabled, the first line of the tui/stdio2 output
includes the group event string, while for stdio it shows only one
event.

For example,

perf record -e cycles,branches ./div
perf annotate --group --stdio

    Percent |      Source code & Disassembly of div for cycles (44407 samples)
    ......

The first line doesn't include the event 'branches'.

With this patch, it will show the correct group event string.

perf annotate --group --stdio

    Percent |      Source code & Disassembly of div for cycles, branches (44407 samples)
    ......
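
A minimal sketch of how such a header string can be built, with
hypothetical names rather than the perf code itself: join the names of
every event in the group with ", ".

  #include <stdio.h>
  #include <string.h>

  /* Build the "cycles, branches" style event string for the header line. */
  static void group_event_string(char *buf, size_t size, const char **names, int n)
  {
      buf[0] = '\0';
      for (int i = 0; i < n; i++) {
          if (i)
              strncat(buf, ", ", size - strlen(buf) - 1);
          strncat(buf, names[i], size - strlen(buf) - 1);
      }
  }

  int main(void)
  {
      const char *names[] = { "cycles", "branches" };
      char buf[64];

      group_event_string(buf, sizeof(buf), names, 2);
      printf("Source code & Disassembly of div for %s\n", buf);
      return 0;
  }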

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Suggested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1526989115-14435-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-23 10:26:40 -03:00
Jin Yao 3e71fc0319 perf annotate: Create hotkey 'c' to show min/max cycles
In the 'perf annotate' view, a new hotkey 'c' is created for showing the
min/max cycles.

For example, when press 'c', the annotate view is:

  Percent│ IPC     Cycle(min/max)
         │
         │
         │                             Disassembly of section .text:
         │
         │                             000000000003aab0 <random@@GLIBC_2.2.5>:
    8.22 │3.92                           sub    $0x18,%rsp
         │3.92                           mov    $0x1,%esi
         │3.92                           xor    %eax,%eax
         │3.92                           cmpl   $0x0,argp_program_version_hook@@G
         │3.92             1(2/1)      ↓ je     20
         │                               lock   cmpxchg %esi,__abort_msg@@GLIBC_P
         │                             ↓ jne    29
         │                             ↓ jmp    43
         │1.10                     20:   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+
    8.93 │1.10             1(5/1)      ↓ je     43

When press 'c' again, the annotate view is switched back:

  Percent│ IPC Cycle
         │
         │
         │                Disassembly of section .text:
         │
         │                000000000003aab0 <random@@GLIBC_2.2.5>:
    8.22 │3.92              sub    $0x18,%rsp
         │3.92              mov    $0x1,%esi
         │3.92              xor    %eax,%eax
         │3.92              cmpl   $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x
         │3.92     1      ↓ je     20
         │                  lock   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
         │                ↓ jne    29
         │                ↓ jmp    43
         │1.10        20:   cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0
    8.93 │1.10     1      ↓ je     43

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1526569118-14217-3-git-send-email-yao.jin@linux.intel.com
[ Rename all maxmin to minmax ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-19 06:42:49 -03:00
Jin Yao 48659ebf37 perf annotate: Record the min/max cycles
Currently perf has a feature to account cycles for LBRs.

For example, on skylake:

  perf record -b ...
  perf report or perf annotate

And then browsing the annotate browser gives average cycle counts for
program blocks.

For some analysis it would be useful if we could know not only the
average cycles but also the min and max cycles.

This patch records the min and max cycles.
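
A minimal sketch of the accounting, using stand-in field names (the real
perf structure differs): fold each LBR cycles value into the block's
running statistics, keeping min and max next to the total used for the
average.

  #include <stdint.h>
  #include <stdio.h>

  struct cyc_hist {
      uint64_t cycles;       /* running total, for the average */
      uint64_t cycles_min;
      uint64_t cycles_max;
      uint32_t num;
  };

  static void cyc_hist__account(struct cyc_hist *ch, uint64_t cycles)
  {
      if (ch->num == 0 || cycles < ch->cycles_min)
          ch->cycles_min = cycles;
      if (cycles > ch->cycles_max)
          ch->cycles_max = cycles;
      ch->cycles += cycles;
      ch->num++;
  }

  int main(void)
  {
      struct cyc_hist ch = { 0, 0, 0, 0 };
      uint64_t samples[] = { 2, 5, 1 };

      for (int i = 0; i < 3; i++)
          cyc_hist__account(&ch, samples[i]);
      printf("avg %llu min %llu max %llu\n",
             (unsigned long long)(ch.cycles / ch.num),
             (unsigned long long)ch.cycles_min,
             (unsigned long long)ch.cycles_max);
      return 0;
  }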

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1526569118-14217-2-git-send-email-yao.jin@linux.intel.com
[ Switch from max/min to min/max ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-18 16:31:41 -03:00
Jin Yao 04d2600ab6 perf annotate: Display all available events on --stdio
When we perform the following command lines:

  $ perf record -e "{cycles,branches}" ./div
  $ perf annotate main --stdio

The output shows only the first event, "cycles", and the display format
is not correct.

   Percent         |      Source code & Disassembly of div for cycles (44550 samples)
  -----------------------------------------------------------------------------------
                   :
                   :
                   :
                   :            Disassembly of section .text:
                   :
                   :            00000000004004b0 <main>:
                   :            main():
                   :
                   :                    return i;
                   :            }
                   :
                   :            int main(void)
                   :            {
      0.00 :   4004b0:       push   %rbx
                   :                    int i;
                   :                    int flag;
                   :                    volatile double x = 1212121212, y = 121212;
                   :
                   :                    s_randseed = time(0);
      0.00 :   4004b1:       xor    %edi,%edi
                   :                    srand(s_randseed);
      0.00 :   4004b3:       mov    $0x77359400,%ebx
                   :
                   :                    return i;
                   :            }

The issue is that the value of the 'nr_percent' variable is hardcoded to
1.  This patch fixes it.
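
A minimal sketch of the corrected behaviour, with hypothetical names
rather than the actual perf code: the number of percentage columns has
to follow the number of events in the group instead of being hardcoded
to one.

  #include <stdio.h>

  /* Print one percentage column per event instead of a single hardcoded one. */
  static void print_percent_columns(const double *percent, int nr_events)
  {
      for (int i = 0; i < nr_events; i++)
          printf(" %6.2f", percent[i]);
      printf(" :   ...\n");
  }

  int main(void)
  {
      double percent[] = { 0.00, 0.00 };   /* cycles, branches */

      print_percent_columns(percent, 2);   /* nr_percent == nr_events, not 1 */
      return 0;
  }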

With this patch, the output is:

   Percent         |      Source code & Disassembly of div for cycles (44550 samples)
  -----------------------------------------------------------------------------------
                   :
                   :
                   :
                   :            Disassembly of section .text:
                   :
                   :            00000000004004b0 <main>:
                   :            main():
                   :
                   :                    return i;
                   :            }
                   :
                   :            int main(void)
                   :            {
      0.00    0.00 :   4004b0:       push   %rbx
                   :                    int i;
                   :                    int flag;
                   :                    volatile double x = 1212121212, y = 121212;
                   :
                   :                    s_randseed = time(0);
      0.00    0.00 :   4004b1:       xor    %edi,%edi
                   :                    srand(s_randseed);
      0.00    0.00 :   4004b3:       mov    $0x77359400,%ebx
                   :
                   :                    return i;
                   :            }

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: f681d593d1 ("perf annotate: Remove disasm__calc_percent() from disasm_line__print()")
Link: http://lkml.kernel.org/r/1525881435-4092-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-05-10 15:19:30 -03:00
Arnaldo Carvalho de Melo 43c4023152 perf annotate: Allow setting the offset level in .perfconfig
The default is 1 (jump_target):

  # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
  Samples: 3K of event 'cycles:ppp', 3000 Hz, Event count (approx.): 2766398574
  _raw_spin_lock_irqsave() /proc/kcore
    0.26        nop
    4.61        push   %rbx
   19.33        pushfq
    7.97        pop    %rax
    0.32        nop
    0.06        mov    %rax,%rbx
   14.63        cli
    0.06        nop
                xor    %eax,%eax
                mov    $0x1,%edx
   49.94        lock   cmpxchg %edx,(%rdi)
    0.16        test   %eax,%eax
              ↓ jne    2b
    2.66        mov    %rbx,%rax
                pop    %rbx
              ← retq
          2b:   mov    %eax,%esi
              → callq  *ffffffffb30eaed0
                mov    %rbx,%rax
                pop    %rbx
              ← retq
  #

But one can ask for offsets to also be shown for call instructions by
raising the offset level, getting:

  # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
  Samples: 3K of event 'cycles:ppp', 3000 Hz, Event count (approx.): 2766398574
  _raw_spin_lock_irqsave() /proc/kcore
    0.26        nop
    4.61        push   %rbx
   19.33        pushfq
    7.97        pop    %rax
    0.32        nop
    0.06        mov    %rax,%rbx
   14.63        cli
    0.06        nop
                xor    %eax,%eax
                mov    $0x1,%edx
   49.94        lock   cmpxchg %edx,(%rdi)
    0.16        test   %eax,%eax
              ↓ jne    2b
    2.66        mov    %rbx,%rax
                pop    %rbx
              ← retq
          2b:   mov    %eax,%esi
          2d: → callq  *ffffffffb30eaed0
                mov    %rbx,%rax
                pop    %rbx
              ← retq
  #

Or using a big value to ask for all offsets to be shown:

  # cat ~/.perfconfig
  [annotate]

	offset_level = 100

	hide_src_code = true
  # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
  Samples: 3K of event 'cycles:ppp', 3000 Hz, Event count (approx.): 2766398574
  _raw_spin_lock_irqsave() /proc/kcore
    0.26   0:   nop
    4.61   5:   push   %rbx
   19.33   6:   pushfq
    7.97   7:   pop    %rax
    0.32   8:   nop
    0.06   d:   mov    %rax,%rbx
   14.63  10:   cli
    0.06  11:   nop
          17:   xor    %eax,%eax
          19:   mov    $0x1,%edx
   49.94  1e:   lock   cmpxchg %edx,(%rdi)
    0.16  22:   test   %eax,%eax
          24: ↓ jne    2b
    2.66  26:   mov    %rbx,%rax
          29:   pop    %rbx
          2a: ← retq
          2b:   mov    %eax,%esi
          2d: → callq  *ffffffffb30eaed0
          32:   mov    %rbx,%rax
          35:   pop    %rbx
          36: ← retq
   #

This also affects the TUI, i.e. the default 'perf annotate' and 'perf
top/report' -> A hotkey -> annotate interfaces, when slang-devel is present
in the build, i.e.:

  # perf version --build-options | grep slang
              libslang: [ on  ]  # HAVE_SLANG_SUPPORT
  #
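
A minimal sketch of the level check implied above (constant names are
illustrative, not perf's; the real knob is 'annotate.offset_level' in
.perfconfig): level 1 shows offsets only on jump targets, a higher level
adds call instructions, and a big value shows them on every line.

  enum {
      OFFSET_LEVEL_JUMP_TARGETS = 1,   /* the default */
      OFFSET_LEVEL_CALLS        = 2,
      OFFSET_LEVEL_ALL          = 3,
  };

  static int should_show_offset(int offset_level, int is_jump_target, int is_call)
  {
      if (offset_level >= OFFSET_LEVEL_ALL)
          return 1;                               /* e.g. offset_level = 100 */
      if (is_call && offset_level >= OFFSET_LEVEL_CALLS)
          return 1;
      return is_jump_target && offset_level >= OFFSET_LEVEL_JUMP_TARGETS;
  }

  int main(void)
  {
      /* With the default level, only jump targets get offsets. */
      return should_show_offset(1, 0, 1) == 0 ? 0 : 1;
  }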

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-venm6x5zrt40eu8hxdsmqxz6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-13 10:00:05 -03:00
Arnaldo Carvalho de Melo 592c10e217 perf annotate: Allow showing offsets in more than just jump targets
Jesper wanted to see offsets at callq sites when doing some performance
investigation related to retpolines, so save him some time by providing
a 'struct annotation_options' to control where offsets should appear:
just on jump targets? That plus call instructions? All?

This puts in place the logic to show the offsets; now we need to wire
this up in the TUI browser (next patch) and in the 'perf annotate
--stdio2' interface, where we need a more general mechanism to set up
the 'annotation_options' struct from the command line.

Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-m3jc9c3swobye9tj08gnh5i7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-12 10:32:39 -03:00
Arnaldo Carvalho de Melo c0459a0925 perf annotate: Show group details on the title line
To match what is shown in the main 'perf report/top' title lines, i.e.
if a group is being shown, either a real group (recorded with
"-e '{a,b,c}'") or a forced group (using 'perf report --group' for a
perf.data file recorded without {}), we will show multiple columns, one
per event, but we were failing to show the group details, so, for:

 # perf report --header-only | grep cmdline
 # cmdline : /home/acme/bin/perf record -e {cycles,instructions,cache-misses}
 # perf report --group

The first line was showing just "cycles", now it shows the correct line,
which is:

  Samples: 578  of events 'anon group { cycles, instructions, cache-misses }', 4000 Hz, Event count (approx.): 487421794
  syscall_return_via_sysret  /lib/modules/4.16.0-rc7/build/vmlinux
    0.22   2.97   0.00 │    ↓ jmp    6c
                       │      mov    %cr3,%rdi
    1.33  10.89   4.00 │    ↓ jmp    62
                       │      mov    %rdi,%rax
<SNIP>

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 6920e2854e ("perf annotate browser: Show extra title line with event information")
Link: https://lkml.kernel.org/n/tip-i41tqh17c2dabnyzjh99r1oz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-05 11:18:39 -03:00
Arnaldo Carvalho de Melo 520d3f01ea perf annotate stdio2: Print more descriptive event information header
To match the recently added event header information to --tui, e.g.:

  # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
  Samples: 128  of event 'cycles:ppp', 4000 Hz, Event count (approx.): 48617682
  _raw_spin_lock_irqsave() /proc/kcore
    0.78        nop
    7.03        push   %rbx
    3.12        pushfq
    6.25        pop    %rax
                nop
                mov    %rax,%rbx
    3.12        cli
                nop
                xor    %eax,%eax
                mov    $0x1,%edx
   79.69        lock   cmpxchg %edx,(%rdi)
                test   %eax,%eax
              ↓ jne    2b
                mov    %rbx,%rax
                pop    %rbx
              ← retq
          2b:   mov    %eax,%esi
              → callq  *ffffffffb30eaed0
                mov    %rbx,%rax
                pop    %rbx
              ← retq
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ujy46x7cldyhyxelyf2b9quy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-03 16:05:13 -03:00
Arnaldo Carvalho de Melo b213eac245 perf annotate: Introduce annotation__scnprintf_samples_period() method
To print a string using the total period (nr_events) and the number of
samples for a given annotation, i.e. for a given symbol; the counterpart
to hists__scnprintf_samples_period(), which is for all the samples in a
session (be it a live session, think 'perf top', or a perf.data file,
think 'perf report').

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196935
Link: https://lkml.kernel.org/n/tip-goj2wu4fxutc8vd46mw3yg14@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-03 15:22:55 -03:00
Arnaldo Carvalho de Melo 980b68ec06 perf annotate: Use absolute addresses to calculate jump target offsets
These types of jumps were confusing the annotate browser:

entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f78dc/build/vmlinux
  Percent│ffffffff81a00020:   swapgs
  <SNIP>
         │ffffffff81a00128: ↓ jae    ffffffff81a00139 <syscall_return_via_sysret+0x53>
  <SNIP>
         │ffffffff81a00155: → jmpq   *0x825d2d(%rip)   # ffffffff82225e88 <pv_cpu_ops+0xe8>

I.e. the syscall_return_via_sysret function is actually "inside" the
entry_SYSCALL_64 function, and the offsets in jumps like these (+0x53)
are relative to syscall_return_via_sysret, not to entry_SYSCALL_64.

Or this may be some artifact in how the assembler marks the start and
end of a function and how this ends up in the ELF symtab for vmlinux,
i.e. syscall_return_via_sysret() isn't "inside" entry_SYSCALL_64, but
just right after it.

From readelf -sw vmlinux:

 80267: ffffffff81a00020   315 NOTYPE  GLOBAL DEFAULT    1 entry_SYSCALL_64
   316: ffffffff81a000e6     0 NOTYPE  LOCAL  DEFAULT    1 syscall_return_via_sysret

 0xffffffff81a00020 + 315 > 0xffffffff81a000e6

So instead of looking for offsets after that last '+' sign, calculate
offsets for jump target addresses that are inside the function being
disassembled from the absolute address, 0xffffffff81a00139 in this case,
subtracting from it the objdump address for the start of the function
being disassembled, entry_SYSCALL_64() in this case.
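
A minimal sketch of that computation (hypothetical helper, using the
addresses from the readelf output above): the offset shown is the
absolute jump target minus the objdump start address of the function
being disassembled, provided the target falls inside it.

  #include <stdio.h>
  #include <inttypes.h>

  /* Offset of a jump target inside the function being disassembled,
   * computed from absolute addresses instead of the "+0x53" suffix,
   * which may be relative to another (nested) symbol. */
  static int64_t jump_target_offset(uint64_t target, uint64_t start, uint64_t end)
  {
      if (target < start || target >= end)
          return -1;                     /* target is outside this function */
      return (int64_t)(target - start);
  }

  int main(void)
  {
      uint64_t start  = 0xffffffff81a00020ULL;   /* entry_SYSCALL_64 */
      uint64_t end    = start + 315;             /* size from readelf */
      uint64_t target = 0xffffffff81a00139ULL;   /* the 'jae' destination */

      /* Prints "offset = 119", matching the fixed listing above. */
      printf("offset = %" PRIx64 "\n", (uint64_t)jump_target_offset(target, start, end));
      return 0;
  }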

So, before this patch:

entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f78dc/build/vmlinux
Percent│       pop    %r10
       │       pop    %r9
       │       pop    %r8
       │       pop    %rax
       │       pop    %rsi
       │       pop    %rdx
       │       pop    %rsi
       │       mov    %rsp,%rdi
       │       mov    %gs:0x5004,%rsp
       │       pushq  0x28(%rdi)
       │       pushq  (%rdi)
       │       push   %rax
       │     ↑ jmp    6c
       │       mov    %cr3,%rdi
       │     ↑ jmp    62
       │       mov    %rdi,%rax
       │       and    $0x7ff,%rdi
       │       bt     %rdi,%gs:0x2219a
       │     ↑ jae    53
       │       btr    %rdi,%gs:0x2219a
       │       mov    %rax,%rdi
       │     ↑ jmp    5b

After:

entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f78dc/build/vmlinux
  0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode
       │       pop    %r10
       │       pop    %r9
       │       pop    %r8
       │       pop    %rax
       │       pop    %rsi
       │       pop    %rdx
       │       pop    %rsi
       │       mov    %rsp,%rdi
       │       mov    %gs:0x5004,%rsp
       │       pushq  0x28(%rdi)
       │       pushq  (%rdi)
       │       push   %rax
       │     ↓ jmp    132
       │       mov    %cr3,%rdi
       │    ┌──jmp    128
       │    │  mov    %rdi,%rax
       │    │  and    $0x7ff,%rdi
       │    │  bt     %rdi,%gs:0x2219a
       │    │↓ jae    119
       │    │  btr    %rdi,%gs:0x2219a
       │    │  mov    %rax,%rdi
       │    │↓ jmp    121
       │119:│  mov    %rax,%rdi
       │    │  bts    $0x3f,%rdi
       │121:│  or     $0x800,%rdi
       │128:└─→or     $0x1000,%rdi
       │       mov    %rdi,%cr3
       │132:   pop    %rax
       │       pop    %rdi
       │       pop    %rsp
       │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>

With those at least navigating to the right destination, an improvement
for these cases seems to be to somehow mark those inner functions, which
in this case could be:

entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f78dc/build/vmlinux
       │syscall_return_via_sysret:
       │       pop    %r15
       │       pop    %r14
       │       pop    %r13
       │       pop    %r12
       │       pop    %rbp
       │       pop    %rbx
       │       pop    %rsi
       │       pop    %r10
       │       pop    %r9
       │       pop    %r8
       │       pop    %rax
       │       pop    %rsi
       │       pop    %rdx
       │       pop    %rsi
       │       mov    %rsp,%rdi
       │       mov    %gs:0x5004,%rsp
       │       pushq  0x28(%rdi)
       │       pushq  (%rdi)
       │       push   %rax
       │     ↓ jmp    132
       │       mov    %cr3,%rdi
       │    ┌──jmp    128
       │    │  mov    %rdi,%rax
       │    │  and    $0x7ff,%rdi
       │    │  bt     %rdi,%gs:0x2219a
       │    │↓ jae    119
       │    │  btr    %rdi,%gs:0x2219a
       │    │  mov    %rax,%rdi
       │    │↓ jmp    121
       │119:│  mov    %rax,%rdi
       │    │  bts    $0x3f,%rdi
       │121:│  or     $0x800,%rdi
       │128:└─→or     $0x1000,%rdi
       │       mov    %rdi,%cr3
       │132:   pop    %rax
       │       pop    %rdi
       │       pop    %rsp
       │     → jmpq   *0x825d2d(%rip)        # ffffffff82225e88 <pv_cpu_ops+0xe8>

This all gets much better viewed if one uses 'perf report --ignore-vmlinux'
forcing the usage of /proc/kcore + /proc/kallsyms, when the above
actually gets down to:

  # perf report --ignore-vmlinux
  ## do '/64', will show the function names containing '64',
  ## navigate to /entry_SYSCALL_64_after_hwframe.annotation,
  ## press 'A' to annotate, then 'P' to print that annotation
  ## to a file
  ## From another xterm (or see on screen, this 'P' thing is for
  ## getting rid of those right side scroll bars/spaces):
  # cat /entry_SYSCALL_64_after_hwframe.annotation
  entry_SYSCALL_64_after_hwframe() /proc/kcore
  Event: cycles:ppp

  Percent
              Disassembly of section load0:

              ffffffff9aa00044 <load0>:
   11.97        push   %rax
    4.85        push   %rdi
                push   %rsi
    2.59        push   %rdx
    2.27        push   %rcx
    0.32        pushq  $0xffffffffffffffda
    1.29        push   %r8
                xor    %r8d,%r8d
    1.62        push   %r9
    0.65        xor    %r9d,%r9d
    1.62        push   %r10
                xor    %r10d,%r10d
    5.50        push   %r11
                xor    %r11d,%r11d
    3.56        push   %rbx
                xor    %ebx,%ebx
    4.21        push   %rbp
                xor    %ebp,%ebp
    2.59        push   %r12
    0.97        xor    %r12d,%r12d
    3.24        push   %r13
                xor    %r13d,%r13d
    2.27        push   %r14
                xor    %r14d,%r14d
    4.21        push   %r15
                xor    %r15d,%r15d
    0.97        mov    %rsp,%rdi
    5.50      → callq  do_syscall_64
   14.56        mov    0x58(%rsp),%rcx
    7.44        mov    0x80(%rsp),%r11
    0.32        cmp    %rcx,%r11
              → jne    swapgs_restore_regs_and_return_to_usermode
    0.32        shl    $0x10,%rcx
    0.32        sar    $0x10,%rcx
    3.24        cmp    %rcx,%r11
              → jne    swapgs_restore_regs_and_return_to_usermode
    2.27        cmpq   $0x33,0x88(%rsp)
    1.29      → jne    swapgs_restore_regs_and_return_to_usermode
                mov    0x30(%rsp),%r11
    8.74        cmp    %r11,0x90(%rsp)
              → jne    swapgs_restore_regs_and_return_to_usermode
    0.32        test   $0x10100,%r11
              → jne    swapgs_restore_regs_and_return_to_usermode
    0.32        cmpq   $0x2b,0xa0(%rsp)
    0.65      → jne    swapgs_restore_regs_and_return_to_usermode

I.e. using kallsyms makes the function start/end be determined
differently than when using what is in the vmlinux ELF symtab, and the
hits actually go to entry_SYSCALL_64_after_hwframe, which is a GLOBAL()
after the start of entry_SYSCALL_64:

  ENTRY(entry_SYSCALL_64)
          UNWIND_HINT_EMPTY
  <SNIP>
          pushq   $__USER_CS                      /* pt_regs->cs */
          pushq   %rcx                            /* pt_regs->ip */
  GLOBAL(entry_SYSCALL_64_after_hwframe)
          pushq   %rax                            /* pt_regs->orig_ax */

          PUSH_AND_CLEAR_REGS rax=$-ENOSYS

And it goes and ends at:

          cmpq    $__USER_DS, SS(%rsp)            /* SS must match SYSRET */
          jne     swapgs_restore_regs_and_return_to_usermode

          /*
           * We win! This label is here just for ease of understanding
           * perf profiles. Nothing jumps here.
           */
  syscall_return_via_sysret:
          /* rcx and r11 are already restored (see code above) */
          UNWIND_HINT_EMPTY
          POP_REGS pop_rdi=0 skip_r11rcx=1

So perhaps some people should really just play with '--ignore-vmlinux'
to force /proc/kcore + kallsyms.

One idea is to do both, i.e. have a vmlinux annotation and a
kcore+kallsyms one, when possible, and even show the patched location,
etc.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-r11knxv8voesav31xokjiuo6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-23 16:46:53 -03:00
Arnaldo Carvalho de Melo c448234cfe perf annotate: Defer searching for comma in raw line till it is needed
That strchr() in jump__scnprintf() needs to be nuked somehow, as, IIRC,
it is already done in jump__parse() and, if needed at scnprintf() time,
should be stashed in the struct filled in at parse() time.

For now just defer it to just before where it is used.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-j0t5hagnphoz9xw07bh3ha3g@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-23 16:46:19 -03:00
Arnaldo Carvalho de Melo e4cc91b802 perf annotate: Support jumping from one function to another
For instance:

  entry_SYSCALL_64  /lib/modules/4.16.0-rc5-00086-gdf09348f78dc/build/vmlinux
    5.50 │     → callq  do_syscall_64
   14.56 │       mov    0x58(%rsp),%rcx
    7.44 │       mov    0x80(%rsp),%r11
    0.32 │       cmp    %rcx,%r11
         │     → jne    swapgs_restore_regs_and_return_to_usermode
    0.32 │       shl    $0x10,%rcx
    0.32 │       sar    $0x10,%rcx
    3.24 │       cmp    %rcx,%r11
         │     → jne    swapgs_restore_regs_and_return_to_usermode
    2.27 │       cmpq   $0x33,0x88(%rsp)
    1.29 │     → jne    swapgs_restore_regs_and_return_to_usermode
         │       mov    0x30(%rsp),%r11
    8.74 │       cmp    %r11,0x90(%rsp)
         │     → jne    swapgs_restore_regs_and_return_to_usermode
    0.32 │       test   $0x10100,%r11
         │     → jne    swapgs_restore_regs_and_return_to_usermode
    0.32 │       cmpq   $0x2b,0xa0(%rsp)
    0.65 │     → jne    swapgs_restore_regs_and_return_to_usermode

It'll behave just like a "call" instruction, i.e. press enter or right
arrow over one such line and the browser will navigate to the annotated
disassembly of that function, which when exited, via left arrow or esc,
will come back to the calling function.

Now to support jumping to an offset in a different function...

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-78o508mqvr8inhj63ddtw7mo@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-23 16:46:18 -03:00
Arnaldo Carvalho de Melo 2eff061162 perf annotate: Add "_local" to jump/offset validation routines
Because they all really check if we can access data structures/visual
constructs where a "jump" instruction targets code in the same function,
i.e. things like:

  __pthread_mutex_lock  /usr/lib64/libpthread-2.26.so
  1.95 │       mov    __pthread_force_elision,%ecx
       │    ┌──test   %ecx,%ecx
  0.07 │    ├──je     60
       │    │  test   $0x300,%esi
       │    │↓ jne    60
       │    │  or     $0x100,%esi
       │    │  mov    %esi,0x10(%rdi)
       │ 42:│  mov    %esi,%edx
       │    │  lea    0x16(%r8),%rsi
       │    │  mov    %r8,%rdi
       │    │  and    $0x80,%edx
       │    │  add    $0x8,%rsp
       │    │→ jmpq   __lll_lock_elision
       │    │  nop
  0.29 │ 60:└─→and    $0x80,%esi
  0.07 │       mov    $0x1,%edi
  0.29 │       xor    %eax,%eax
  2.53 │       lock   cmpxchg %edi,(%r8)

And not things like that "jmpq __lll_lock_elision", which instead should
behave like a "call" instruction and "jump" to the disassembly of
"__lll_lock_elision".

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-23 16:46:16 -03:00
Arnaldo Carvalho de Melo 751b1783da perf annotate: Mark jumps to other functions with the call arrow
Things like this in _cpp_lex_token (gcc's cc1 program):

     cpp_named_operator2name@@Base+0xa72

Point to a place that is after the cpp_named_operator2name boundaries,
i.e. in the ELF symbol table for cc1, cpp_named_operator2name is marked
as being 32 bytes long, but it is in fact much larger than that, so we
seem to need a symbols__find() routine that looks for >= current->start
and < next_symbol->start, possibly just for C++ objects?

For now let's just make some progress by marking jumps to outside the
current function as call-like.

Actual navigation will come next, with further understanding of how the
symbol searching and disassembly should be done.
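
A minimal sketch of the kind of range lookup suggested above
(hypothetical helper and illustrative addresses, not perf's
symbols__find()): pick the symbol whose start is <= the address and
below the next symbol's start, instead of trusting a too-small ELF size.

  #include <stdint.h>
  #include <stdio.h>

  struct symbol { const char *name; uint64_t start; };

  /* 'syms' must be sorted by start address. */
  static const struct symbol *find_by_range(const struct symbol *syms, int n, uint64_t addr)
  {
      for (int i = 0; i < n; i++) {
          uint64_t next_start = (i + 1 < n) ? syms[i + 1].start : UINT64_MAX;

          if (addr >= syms[i].start && addr < next_start)
              return &syms[i];
      }
      return NULL;
  }

  int main(void)
  {
      /* The ELF size says 32 bytes, but targets like +0xa72 still belong
       * to the symbol until the next one starts (addresses made up). */
      struct symbol syms[] = {
          { "cpp_named_operator2name", 0xc45f4c },
          { "some_next_symbol",        0xc47000 },
      };
      const struct symbol *s = find_by_range(syms, 2, 0xc469be);

      printf("%s\n", s ? s->name : "?");
      return 0;
  }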

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-aiys0a0bsgm3e00hbi6fg7yy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 16:19:55 -03:00
Arnaldo Carvalho de Melo 85a84e4f81 perf annotate: Pass function descriptor to its instruction parsing routines
We need that to figure out if jumps have targets in a different
function.

E.g. _cpp_lex_token(), in /usr/libexec/gcc/x86_64-redhat-linux/5.3.1/cc1
has a line like this:

  jne    c469be <cpp_named_operator2name@@Base+0xa72>

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ris0ioziyp469pofpzix2atb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 16:19:41 -03:00
Arnaldo Carvalho de Melo 425859ff0d perf annotate: No need to calculate notes->start twice
Since we already set notes->start to map__rip_2objdump(map, sym->start)
in symbol__annotate2(), no need to calculate that address again in
symbol__calc_lines(), just use notes->start.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ycxlg8mm5ueuj21w6gi62l7g@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:43 -03:00
Arnaldo Carvalho de Melo d9bd766584 perf annotate browser: Add 'P' hotkey to dump annotation to file
Just like we have in the histograms browser used as the main screen for
'perf top --tui' and 'perf report --tui': print the current annotation
to a file with a name composed of the symbol name and the ".annotation"
suffix.

Here is one example of pressing 'A' on 'perf top' to live annotate a
kernel function and then pressing 'P' to dump that annotation; the
resulting file:

  # cat _raw_spin_lock_irqsave.annotation
  _raw_spin_lock_irqsave() /proc/kcore
  Event: cycles:ppp

    7.14        nop
   21.43        push   %rbx
    7.14        pushfq
                pop    %rax
                nop
                mov    %rax,%rbx
                cli
                nop
                xor    %eax,%eax
                mov    $0x1,%edx
   64.29        lock   cmpxchg %edx,(%rdi)
                test   %eax,%eax
              ↓ jne    2b
                mov    %rbx,%rax
                pop    %rbx
              ← retq
          2b:   mov    %eax,%esi
              → callq  queued_spin_lock_slowpath
                mov    %rbx,%rax
                pop    %rbx
              ← retq
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-zzmnrwugb5vtk7bvg0rbx150@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:43 -03:00
Arnaldo Carvalho de Melo 864298f224 perf annotate: Add function header to --stdio2
# perf annotate --stdio2 _raw_spin_lock_irqsave
  _raw_spin_lock_irqsave() /lib/modules/4.16.0-rc4/build/vmlinux
  Event: anon group { cycles, instructions }

    0.00   3.17      → callq  __fentry__
    0.00   7.94        push   %rbx
    7.69  36.51      → callq  __page_file_index
                       mov    %rax,%rbx
    7.69   3.17      → callq  *ffffffff82225cd0
                       xor    %eax,%eax
                       mov    $0x1,%edx
   80.77  49.21        lock   cmpxchg %edx,(%rdi)
                       test   %eax,%eax
                     ↓ jne    2b
    3.85   0.00        mov    %rbx,%rax
                       pop    %rbx
                     ← retq
                 2b:   mov    %eax,%esi
                     → callq  queued_spin_lock_slowpath
                       mov    %rbx,%rax
                       pop    %rbx
                     ← retq
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-i86yfyzl8m194ioxgj1jo32f@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:41 -03:00
Arnaldo Carvalho de Melo 3563289208 perf annotate: Use the default annotation options for --stdio2
With an empty '[annotate]' section in ~/.perfconfig:

  # perf record -a --all-kernel -e '{cycles,instructions}:P' sleep 5
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 2.243 MB perf.data (5513 samples) ]
  # perf annotate --stdio2 _raw_spin_lock | head -20

                     Disassembly of section .text:

                     ffffffff81868790 <_raw_spin_lock>:
                     _raw_spin_lock():
                     EXPORT_SYMBOL(_raw_spin_trylock_bh);
                     #endif

                     #ifndef CONFIG_INLINE_SPIN_LOCK
                     void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
                     {
                     → callq  __fentry__
                     atomic_cmpxchg():
                             return xadd(&v->counter, -i);
                     }

                     static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
                     {
  # perf annotate --stdio2 _raw_spin_lock | head -20
                     → callq  __fentry__
                       xor    %eax,%eax
                       mov    $0x1,%edx
   87.50 100.00        lock   cmpxchg %edx,(%rdi)
    6.25   0.00        test   %eax,%eax
                     ↓ jne    16
    6.25   0.00        repz   retq
                 16:   mov    %eax,%esi
                     ↑ jmpq   ffffffff810e96b0 <queued_spin_lock_slowpath>
  #
  # cat ~/.perfconfig
  [annotate]

    hide_src_code = false
    show_linenr = true
  # perf annotate --stdio2 _raw_spin_lock | head -20

                 3   Disassembly of section .text:

                 5   ffffffff81868790 <_raw_spin_lock>:
                 6   _raw_spin_lock():
                 143 EXPORT_SYMBOL(_raw_spin_trylock_bh);
                 144 #endif

                 146 #ifndef CONFIG_INLINE_SPIN_LOCK
                 147 void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
                 148 {
                     → callq  __fentry__
                 150 atomic_cmpxchg():
                 187         return xadd(&v->counter, -i);
                 188 }

                 190 static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
                 191 {
  #
  # cat ~/.perfconfig
  [annotate]

    hide_src_code = true
    show_total_period = true
  # perf annotate --stdio2 _raw_spin_lock | head -20
                               → callq  __fentry__
                                 xor    %eax,%eax
                                 mov    $0x1,%edx
      1411316      152339        lock   cmpxchg %edx,(%rdi)
       344694           0        test   %eax,%eax
                               ↓ jne    16
        80806           0        repz   retq
                           16:   mov    %eax,%esi
                               ↑ jmpq   ffffffff810e96b0 <queued_spin_lock_slowpath>
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-nu4rxg5zkdtgs1b2gc40p7v7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:41 -03:00
Arnaldo Carvalho de Melo 7f0b6fde31 perf annotate: Move the default annotate options to the library
One more thing that moves from the TUI code to be used more widely; for
instance, it'll affect the default options used by:

  perf annotate --stdio2

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-0nsz0dm0akdbo30vgja2a10e@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:40 -03:00
Arnaldo Carvalho de Melo befd2a38a6 perf annotate: Introduce the --stdio2 output mode
This uses the TUI augmented formatting routines, modulo interactivity.

  # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave
  _raw_spin_lock_irqsave() /proc/kcore
  Event: cycles:ppp

  Percent

              Disassembly of section load0:

              ffffffff9a8734b0 <load0>:
                nop
                push   %rbx
   50.00        pushfq
                pop    %rax
                nop
                mov    %rax,%rbx
                cli
                nop
                xor    %eax,%eax
                mov    $0x1,%edx
   50.00        lock   cmpxchg %edx,(%rdi)
                test   %eax,%eax
              ↓ jne    2b
                mov    %rbx,%rax
                pop    %rbx
              ← retq
          2b:   mov    %eax,%esi
              → callq  queued_spin_lock_slowpath
                mov    %rbx,%rax
                pop    %rbx
              ← retq

Tested-by: Jin Yao <yao.jin@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-6cte5o8z84mbivbvqlg14uh1@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-21 12:53:26 -03:00
Arnaldo Carvalho de Melo c298304bd7 perf annotate: Use a ops table for annotation_line__write()
To simplify the passing of arguments, the --stdio2 code will have to set
all the fields with operations printing to stdout.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-pcs3c7vdy9ucygxflo4nl1o7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 15:36:18 -03:00
Arnaldo Carvalho de Melo a1e9b74cc2 perf annotate: Finish the generalization of annotate_browser__write()
We pass some more callbacks and all of annotate_browser__write() seems
to be free of TUI code (except for some arrow constants, will fix).

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-5uo6yvwnxtsbe8y6v0ysaakf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:30 -03:00
Arnaldo Carvalho de Melo 2ba5eca104 perf annotate: Introduce annotation_line__print_start() out of TUI code
For the --tui and --stdio2 cases, using callbacks for print() and
set_percent_color() ends up being the easiest path; a real GUI remains
as an exercise.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-1o7az1ng55g2g6ppr2jpeuct@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:30 -03:00
Arnaldo Carvalho de Melo 2f025ea0ba perf annotate: Introduce annotation_line__max_percent()
Out of the annotate_browser__write() routine, to be used in the --stdio2
mode.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-0he0wyy4haswqi1qb35x37do@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:30 -03:00
Arnaldo Carvalho de Melo ecda45bd6c perf annotate: Introduce symbol__annotate2 method
That does all the extended boilerplate the TUI browser did, leaving the
symbol__annotate() function to be used by the old --stdio output mode.

Now the upcoming --stdio2 output mode should just use this one to set
things up.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-e2x8wuf6gvdhzdryo229vj4i@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:30 -03:00
Arnaldo Carvalho de Melo b8b0d81985 perf annotate: Introduce init_column_widths() method out of TUI code
More non-TUI stuff goes to the UI-agnostic library

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-hngv7rpqvtta69ouj7ne770q@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:29 -03:00
Arnaldo Carvalho de Melo 7232bf7a89 perf annotate: Move update_column_widths() to the generic lib
The previous patch left it where it was to ease review; now move it to
its right place.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ikdjr014p7k5kachgyjrgiey@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:29 -03:00
Arnaldo Carvalho de Melo 5bc49f6120 perf annotate: Introduce set_offsets() method out of TUI code
More non-strictly TUI code being moved to the UI neutral annotation
library, to be used in the upcoming --stdio2 output mode.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-ek20dnd8z2y5v54pcepihybz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:29 -03:00
Arnaldo Carvalho de Melo 0db45bcfac perf annotate: Move mark_jump_targets from the TUI to the annotation library
This also is not TUI specific, should be used in the upcoming --stdio2
mode.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-v827xec8z3hxrmgp7bwa6ohs@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:29 -03:00
Arnaldo Carvalho de Melo f56c083bc4 perf annotate: Move compute_ipc() to annotation library
Out of the TUI code, as it has nothing specific to that UI and should be
used in the other output modes as well.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-0jahghvqdodb8vu2591pkv3d@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-20 13:19:28 -03:00
Arnaldo Carvalho de Melo 4c9cb2c2b4 perf annotate: Use ops->target.name when available for unresolved call targets
There is a bug where, when using 'perf annotate timerqueue_add', the
target of its only routine called with the 'callq' instruction,
'rb_insert_color', doesn't get resolved from its address when parsing
that 'callq' instruction.

That symbol resolution works when using 'perf report --tui' and then
doing annotation for 'timerqueue_add' from there; coming straight from
'perf annotate', the vmlinux dso->symbols rb_tree somehow gets into a
state where we can't find that address, a bug that has to be further
investigated.

But since the raw objdump disassembled line for that 'callq' already
carries the target function name, use it as a fallback when the address
can't be resolved.

So, before:

  # perf annotate timerqueue_add

              │      mov    %rbx,%rdi
              │      mov    %rbx,(%rdx)
              │    → callq  *ffffffff8184dc80
              │      mov    0x8(%rbp),%rdx
              │      test   %rdx,%rdx
              │    ↓ je     67

  # perf report

              │      mov    %rbx,%rdi
              │      mov    %rbx,(%rdx)
              │    → callq  rb_insert_color
              │      mov    0x8(%rbp),%rdx
              │      test   %rdx,%rdx
              │    ↓ je     67

And after both look the same:

  # perf annotate timerqueue_add

              │      mov    %rbx,%rdi
              │      mov    %rbx,(%rdx)
              │    → callq  rb_insert_color
              │      mov    0x8(%rbp),%rdx
              │      test   %rdx,%rdx
              │    ↓ je     67

From 'perf report' one can annotate and navigate to that 'rb_insert_color'
function, but not directly from 'perf annotate timerqueue_add', that
remains to be investigated and fixed.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-nkktz6355rhqtq7o8atr8f8r@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-19 13:51:52 -03:00
Arnaldo Carvalho de Melo 6810158d52 perf annotate: Use asprintf when formatting objdump command line
We were using a local buffer with an arbitrary size that would have to
get increased to avoid truncation, as warned by gcc 8:

  util/annotate.c: In function 'symbol__disassemble':
  util/annotate.c:1488:4: error: '%s' directive output may be truncated writing up to 4095 bytes into a region of size between 3966 and 8086 [-Werror=format-truncation=]
      "%s %s%s --start-address=0x%016" PRIx64
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  util/annotate.c:1498:20:
      symfs_filename, symfs_filename);
                      ~~~~~~~~~~~~~~
  util/annotate.c:1490:50: note: format string is defined here
      " -l -d %s %s -C \"%s\" 2>/dev/null|grep -v \"%s:\"|expand",
                                                  ^~
  In file included from /usr/include/stdio.h:861,
                   from util/color.h:5,
                   from util/sort.h:8,
                   from util/annotate.c:14:
  /usr/include/bits/stdio2.h:67:10: note: '__builtin___snprintf_chk' output 116 or more bytes (assuming 8331) into a destination of size 8192
     return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          __bos (__s), __fmt, __va_arg_pack ());
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So switch to asprintf(), which will make sure enough space is available.
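
For reference, a minimal sketch of the asprintf() pattern (hypothetical
helper name and a much shorter command line than the real objdump
invocation perf builds):

  #define _GNU_SOURCE            /* for asprintf() */
  #include <stdio.h>
  #include <stdlib.h>

  static char *build_objdump_command(unsigned long long start, const char *file)
  {
          char *command = NULL;

          /* asprintf() allocates a buffer big enough for the formatted string */
          if (asprintf(&command, "objdump --start-address=0x%016llx -d \"%s\"",
                       start, file) < 0)
                  return NULL;   /* on failure, 'command' contents are undefined */

          return command;        /* caller must free() it */
  }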

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-qagoy2dmbjpc9gdnaj0r3mml@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-16 13:56:38 -03:00
Thomas Richter 0b58a77ca8 perf annotate: Fix s390 target function disassembly
'perf annotate' displays function call assembler instructions with a
right arrow. Hitting enter on this line/instruction causes the browser
to disassemble this target function and show it on the screen.  On s390
this results in an error message 'The called function was not found.'

The function call assembly line parsing does not handle the s390 bras
and brasl instructions. Function call__parse expects the target as first
operand:

	callq	e9140 <__fxstat>

S390 has a register number as first operand:

	brasl	%r14,41d60 <abort>

Therefore the target addresses on s390 are always zero which is an
invalid address.

Introduce a s390 specific call parsing function which skips the first
operand on s390.
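
A simplified sketch of that difference (illustrative only, not the
actual perf parser): on s390 the parser first skips the register operand
up to the comma, then hands the rest to the usual target parsing.

  #include <string.h>

  /* e.g. ops = "%r14,41d60 <abort>" on s390, "e9140 <__fxstat>" on x86 */
  static const char *s390_skip_register_operand(const char *ops)
  {
          const char *comma = strchr(ops, ',');

          return comma ? comma + 1 : ops;   /* now points at "41d60 <abort>" */
  }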

Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/20180307134325.96106-1-tmricht@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-08 10:05:59 -03:00
Arnaldo Carvalho de Melo 696703af37 perf annotate: Find 'call' instruction target symbol at parsing time
So that we do it just once, not every time we press enter or -> on a
'call' instruction line.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-uysyojl1e6nm94amzzzs08tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-05 09:58:45 -03:00
Jin Yao 935f5a9d45 perf report: Fix a wrong offset issue when using /proc/kcore
When a valid vmlinux is not found, 'perf report' falls back to looking
at /proc/kcore. In this case, it reports impossibly large offsets.

For example:

  # perf record -b -e cycles:k find /etc/ > /dev/null
  # perf report --stdio --branch-history

    22.77%  _vm_normal_page+18446603336221188162
            |
            ---page_remove_rmap +18446603336221188324
               page_remove_rmap +18446603336221188487 (cycles:5)
               unlock_page_memcg +18446603336221188096
               page_remove_rmap +18446603336221188327 (cycles:1)

The issue is that the value passed to the 'addr' parameter of
__get_srcline() is the objdump address, so it's not correct to calculate
the offset as 'addr - sym->start'.

This patch adds a new parameter 'ip' to __get_srcline(), which is not
converted to an objdump address.
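
A minimal sketch of the idea, with simplified types and a hypothetical
helper name (the real __get_srcline() takes more arguments):

  #include <stdint.h>

  /* 'addr' is the objdump-adjusted address and must not be used here;
   * the offset shown to the user comes from the original sample 'ip'. */
  static uint64_t srcline_sym_offset(uint64_t sym_start, uint64_t ip)
  {
          return ip - sym_start;
  }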

With this patch, the perf report output is:

    22.77%  _vm_normal_page+66
            |
            ---page_remove_rmap +228
               page_remove_rmap +391 (cycles:5)
               unlock_page_memcg +0
               page_remove_rmap +231 (cycles:1)
               page_remove_rmap +236

Committer testing:

Make sure you get any valid vmlinux out of the way, using '-v' on the
'perf report' case and deleting it from places where perf searches them,
like your kernel build dir and the build-id cache, in ~/.debug/.

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1514564812-17344-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-08 11:11:57 -03:00
Arnaldo Carvalho de Melo 4e8fbc1c97 perf env: Adopt perf_env__arch() from the annotate code
And use it in the libunwind case, with both passing a valid perf_env to
extract the arch to be normalized from and passing NULL with the same
semantic as in the annotate code: to get it from uname() uts.machine.

Now the code to generate per arch errno translation tables (int/string)
can use it to decode perf.data files recorded in a different arch than
that where 'perf trace' (or any other analysis tool) runs.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-p2epffgash69w38kvj3ntpc9@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-27 12:15:52 -03:00
Arnaldo Carvalho de Melo 3285debaf5 perf annotate: Use perf_env when obtaining the arch name
Paving the way to reuse these routines in other areas, like when
generating errno tables.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-rh1qv051vb8gfdcswskrn53h@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-27 12:15:51 -03:00
Arnaldo Carvalho de Melo 5449f13c55 perf annotate: Get the cpuid from evsel->evlist->env in symbol__annotate()
To reduce its function signature, since we get this from 'evsel' which
is already one of its arguments.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-070eap7t6uicg9c3w086xy2z@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-27 12:15:51 -03:00
Thomas Richter 35a8a148d8 perf annotate: Fix objdump comment parsing for Intel mov dissassembly
The command 'perf annotate' parses the output of objdump and also
investigates the comments produced by objdump. For example, on x86,
objdump produces:

23eee:  4c 8b 3d 13 01 21 00 mov 0x210113(%rip),%r15
                                # 234008 <stderr@@GLIBC_2.2.5+0x9a8>

and the function mov__parse() is called to investigate the complete
line. Mov__parse() breaks this line into several parts and finally
calls function comment__symbol() to parse the data after the comment
character '#'. Comment__symbol() expects a hexadecimal address followed
by a symbol in '<' and '>' brackets.

However the 2nd parameter given to function comment__symbol()
always points to the comment character '#'. The address parsing
always returns 0 because the character '#' is not a digit and
strtoull() fails without being noticed.

Fix this by advancing the second parameter to function comment__symbol()
by one byte before invocation, and by adding an error check after
strtoull() has been called.
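
The fixed parsing, sketched with a hypothetical helper (simplified from
the real mov__parse()/comment__symbol() pair):

  #include <stdlib.h>

  static int parse_comment_target(const char *comment, unsigned long long *addrp)
  {
          char *endptr;

          if (*comment == '#')
                  comment++;               /* step past the comment character */
          while (*comment == ' ')
                  comment++;

          *addrp = strtoull(comment, &endptr, 16);
          if (endptr == comment)
                  return -1;               /* strtoull() consumed no hex digits */

          return 0;
  }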

Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Acked-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Fixes: 6de783b6f5 ("perf annotate: Resolve symbols using objdump comment")
Link: http://lkml.kernel.org/r/20171128075632.72182-1-tmricht@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-05 10:24:30 -03:00
Jiri Olsa 05d3f1a1d5 perf tools: Move symbol__calc_percent() call to outside symbol__disassemble()
We need to call symbol__calc_percent() periodically for 'perf top', so
it's no longer convenient to keep it in symbol__disassemble().

Let's split the work: symbol__disassemble() allocates and inits the
symbol annotation structs, while symbol__calc_percent() computes the
line percentages based on the symbol hists data.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-gtnp8t4tb00q6lag07psn5nq@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-17 12:16:26 -03:00
Jiri Olsa 9e4e0a9d2e perf tools: Change (symbol|annotation)__calc_percent return type to void
There's no need for the symbol__calc_percent and annotation__calc_percent
functions to return any value, since it's always zero. Change both
functions to return void.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-z0gs28hh24m4gia1t1ctraye@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-17 12:16:25 -03:00
Ravi Bangoria 648388ae68 perf annotate: Do not truncate instruction names at 6 chars
There are many instructions, especially on PowerPC, whose mnemonics are
longer than 6 characters. Using a precision limit causes truncation of
such mnemonics.

Fix this by removing the precision limit. Note that 'width' is still 6,
so alignment won't be affected for lengths <= 6.

Before:

   li     r11,-1
   xscvdp vs1,vs1
   add.   r10,r10,r11

After:

  li     r11,-1
  xscvdpsxds vs1,vs1
  add.   r10,r10,r11
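
The underlying difference is just printf precision versus field width; a
tiny standalone illustration (not perf code):

  #include <stdio.h>

  int main(void)
  {
          /* width 6 and precision 6: long mnemonics are cut to 6 chars */
          printf("%-6.6s vs1,vs1\n", "xscvdpsxds");   /* "xscvdp vs1,vs1" */
          /* width 6 only: short mnemonics stay aligned, long ones are kept whole */
          printf("%-6s vs1,vs1\n", "xscvdpsxds");     /* "xscvdpsxds vs1,vs1" */
          return 0;
  }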

Reported-by: Donald Stence <dstence@us.ibm.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Link: http://lkml.kernel.org/r/20171114032540.4564-1-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:50:00 -03:00
Jiri Olsa f48e7c4070 perf annotate: Align source and offset lines
Align source with offset lines, which are more advanced, because of the
address column.

  Before:
         :      static void *worker_thread(void *__tdata)
         :      {
    0.00 :        48a971:       push   %rbp
    0.00 :        48a972:       mov    %rsp,%rbp
    0.00 :        48a975:       sub    $0x30,%rsp
    0.00 :        48a979:       mov    %rdi,-0x28(%rbp)
    0.00 :        48a97d:       mov    %fs:0x28,%rax
    0.00 :        48a986:       mov    %rax,-0x8(%rbp)
    0.00 :        48a98a:       xor    %eax,%eax
         :              struct thread_data *td = __tdata;
    0.00 :        48a98c:       mov    -0x28(%rbp),%rax
    0.00 :        48a990:       mov    %rax,-0x10(%rbp)
         :              int m = 0, i;
    0.00 :        48a994:       movl   $0x0,-0x1c(%rbp)
         :              int ret;
         :
         :              for (i = 0; i < loops; i++) {
    0.00 :        48a99b:       movl   $0x0,-0x18(%rbp)

  After:
         :              static void *worker_thread(void *__tdata)
         :              {
    0.00 :       48a971:       push   %rbp
    0.00 :       48a972:       mov    %rsp,%rbp
    0.00 :       48a975:       sub    $0x30,%rsp
    0.00 :       48a979:       mov    %rdi,-0x28(%rbp)
    0.00 :       48a97d:       mov    %fs:0x28,%rax
    0.00 :       48a986:       mov    %rax,-0x8(%rbp)
    0.00 :       48a98a:       xor    %eax,%eax
         :                      struct thread_data *td = __tdata;
    0.00 :       48a98c:       mov    -0x28(%rbp),%rax
    0.00 :       48a990:       mov    %rax,-0x10(%rbp)
         :                      int m = 0, i;
    0.00 :       48a994:       movl   $0x0,-0x1c(%rbp)
         :                      int ret;
         :
         :                      for (i = 0; i < loops; i++) {
    0.00 :       48a99b:       movl   $0x0,-0x18(%rbp)

It makes a bigger difference when displaying script sources, where the
comment lines look oddly shifted from the lines which actually hold the
code. I'll send the script support separately.

Committer note:

Do not use a fixed column width for the addresses, as kernel ones use
more than 10 columns; look at the last offset and get the right width.
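
One way to express the committer note's idea, as a sketch (illustrative,
not the exact perf code): derive the address column width from the
largest offset rather than hard-coding it.

  #include <inttypes.h>
  #include <stdio.h>

  static int addr_column_width(uint64_t last_offset)
  {
          /* snprintf(NULL, 0, ...) returns the length the text would need */
          return snprintf(NULL, 0, "%" PRIx64, last_offset);
  }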

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-36-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:49:50 -03:00
Jiri Olsa 29971f9a82 perf annotate: Factor annotation_line__print from disasm_line__print
Move generic annotation line display code into annotation_line__print
function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-25-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:49:44 -03:00
Jiri Olsa 8f25b8197d perf annotate: Add annotation_line__print function
Separate out a struct annotation_line display function; it will hold the
generic line display code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-24-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:49:44 -03:00
Jiri Olsa 81e436a0b3 perf annotate: Remove disasm__calc_percent function
Remove disasm__calc_percent() function, because it's no longer needed.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-22-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:46:14 -03:00
Jiri Olsa f681d593d1 perf annotate: Remove disasm__calc_percent() from disasm_line__print()
Remove disasm__calc_percent() from disasm_line__print(), because we
already have the data calculated in struct annotation_line.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-20-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:41:04 -03:00
Jiri Olsa 8b4c74dc5c perf annotate: Add symbol__calc_lines function
Replace symbol__get_source_line() with symbol__calc_lines(), which
calculates the source line tree over the struct annotation_line.

This will allow us to remove redundant struct source_line in following
patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-19-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:37:49 -03:00
Jiri Olsa 073ae601ed perf annotate: Add symbol__calc_percent function
Add the symbol__calc_percent function, which calculates annotation data
for a symbol and puts the data in the struct annotation_line::samples
array.

Committer notes:

Made symbol__calc_percent non static to be used in the next two patches,
which will get some fixups from jolsa, doing it this way to keep this
bisectable.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-18-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-16 14:37:49 -03:00
Jiri Olsa 7e304557ea perf annotate: Add samples into struct annotation_line
Add samples array into struct annotation_line to hold the annotation
data. The data is populated in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-17-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:40:00 -03:00
Jiri Olsa f8eb37bd7c perf annotate: Add annotated_source__purge function
Move disasm__purge() to annotated_source__purge() to make it work over a
generic struct annotation_line.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-16-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:40:00 -03:00
Jiri Olsa c835e1914c perf annotate: Add annotation_line__(new|delete) functions
Changing the way the annotation lines are allocated and adding
annotation_line__(new|delete) functions to deal with this.

Before the allocation schema was as follows:

  -----------------------------------------------------------
  struct disasm_line | struct annotation_line | private space
  -----------------------------------------------------------

Where the private space is used in TUI code to store computed
annotation data for events. The stdio code computes the data
on the fly.

The goal is to compute and store annotation line's data directly
in the struct annotation_line itself, so this patch changes the
line allocation schema as follows:

  ------------------------------------------------------------
  privsize space | struct disasm_line | struct annotation_line
  ------------------------------------------------------------

Moving struct annotation_line to the end, because in the following
changes we will move the non-fixed-length events' data here.
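
A toy sketch of the new layout (stand-in types, not the real
annotation_line__new()): allocate the private space first and return a
pointer just past it, so callers keep seeing the line structs.

  #include <stdlib.h>

  struct line_stub { int dummy; };   /* stand-in for disasm_line/annotation_line */

  static struct line_stub *line_new(size_t privsize)
  {
          char *ptr = calloc(1, privsize + sizeof(struct line_stub));

          if (ptr == NULL)
                  return NULL;

          /* the privsize bytes live just before the returned pointer,
           * so freeing has to go back by privsize as well */
          return (struct line_stub *)(ptr + privsize);
  }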

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-15-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:59 -03:00
Jiri Olsa 82b9d7ff09 perf annotate: Add annotation_line__add function
Rename disasm__add() into annotation_line__add() to make it work over a
generic struct annotation_line.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-13-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:59 -03:00
Jiri Olsa c4c724364d perf annotate: Add annotation_line__next function
Rename disasm__get_next_ip_line() to annotation_line__next() to make it
work over a generic struct annotation_line.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-12-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:59 -03:00
Jiri Olsa d03a686ea6 perf annotate: Add evsel into struct annotate_args
Add evsel into struct annotate_args to reduce the number of arguments
that need to travel all the way to line allocation.

This change also allows us to move the arch name initialization under
the symbol__annotate function.

Link: http://lkml.kernel.org/n/tip-a9ok53rrgt1s5e8uglyvy6qt@git.kernel.org
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-11-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:59 -03:00
Jiri Olsa 4748834f96 perf annotate: Add offset/line/line_nr into struct annotate_args
Add offset/line/line_nr into struct annotate_args to reduce the number
of arguments that need to travel all the way to line allocation.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-10-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:58 -03:00
Jiri Olsa 1a04db70dc perf annotate: Add map into struct annotate_args
Add map into struct annotate_args to reduce the number of arguments
that need to travel all the way to line allocation.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-9-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:58 -03:00
Jiri Olsa 24fe7b8893 perf annotate: Add arch into struct annotate_args
Add arch into struct annotate_args to reduce the number of arguments
that need to travel all the way to line allocation.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:58 -03:00
Jiri Olsa ea07c5aaed perf annotate: Add struct annotate_args
Adding struct annotate_args to reduce the number of arguments that need
to travel all the way to line allocation. This makes the code easier to
read and eases up the changes in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:58 -03:00
Jiri Olsa c34df25b40 perf annotate: Add symbol__annotate function
Add the symbol__annotate function as a generic annotation function to be
called for all annotation sources.

It calls the generic annotation init and then the specific annotation
data retrieval function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:58 -03:00
Jiri Olsa d5490b9647 perf annotate: Move line/offset into annotation_line struct
Move the line/line_nr/offset members to the annotation_line struct to be
used as generic members for any annotation source.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:57 -03:00
Jiri Olsa a17c4ca0dd perf annotate: Add annotation_line struct
In order to make the annotation support generic, add 'struct
annotation_line', which will hold generic data common to annotation
sources (such as the one for python scripts, coming in upcoming
patches).

Having this, we can add different annotation line support other than
objdump disasm.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-11-13 09:39:57 -03:00
Ingo Molnar 15bcdc9477 Merge branch 'linus' into perf/core, to fix conflicts
Conflicts:
	tools/perf/arch/arm/annotate/instructions.c
	tools/perf/arch/arm64/annotate/instructions.c
	tools/perf/arch/powerpc/annotate/instructions.c
	tools/perf/arch/s390/annotate/instructions.c
	tools/perf/arch/x86/tests/intel-cqm.c
	tools/perf/ui/tui/progress.c
	tools/perf/util/zlib.c

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-11-07 10:30:18 +01:00
Ravi Bangoria 331c7cb307 perf symbols: Fix memory corruption because of zero length symbols
'perf top' often crashes at random locations on powerpc.  After
investigating, I found the crash only happens when the sample is from a
zero length symbol. The powerpc kernel has many such symbols which do
not contain length details in the vmlinux binary, and thus the start and
end addresses of such symbols are the same.

Structure

  struct sym_hist {
        u64                   nr_samples;
        u64                   period;
        struct sym_hist_entry addr[0];
  };

has the last member 'addr[]' of size zero. 'addr[]' is an array of
addresses that belong to one symbol (function). If the function consists
of 100 instructions, 'addr' points to an array of 100 'struct
sym_hist_entry' elements. For a zero length symbol, it points to an
*empty* array, i.e. there are no members in the array, and thus offset 0
is also invalid for such an array.

  static int __symbol__inc_addr_samples(...)
  {
        ...
        offset = addr - sym->start;
        h = annotation__histogram(notes, evidx);
        h->nr_samples++;
        h->addr[offset].nr_samples++;
        h->period += sample->period;
        h->addr[offset].period += sample->period;
        ...
  }

Here, when 'addr' is the same as 'sym->start', 'offset' becomes 0, which
is valid for normal symbols but *invalid* for zero length symbols, and
thus updating h->addr[offset] causes memory corruption.

Fix this by adding one dummy element for zero length symbols.
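
The shape of the fix, sketched (not the exact perf allocation code):
size the per-symbol histogram with at least one addr[] slot, so offset 0
stays valid even when sym->end == sym->start.

  /* 'size' stands for symbol__size(sym), i.e. sym->end - sym->start */
  size_t nr_entries = size ? size : 1;   /* zero length symbol: one dummy entry */
  size_t alloc = sizeof(struct sym_hist) +
                 nr_entries * sizeof(struct sym_hist_entry);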

Link: https://lkml.org/lkml/2016/10/10/148
Fixes: edee44be59 ("perf annotate: Don't throw error for zero length symbols")
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Link: http://lkml.kernel.org/r/1508854806-10542-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-10-25 13:01:09 -03:00
Jiri Olsa 696e2457e9 perf annotate: Remove arch::cpuid_parse callback
There's no need for extra cpuid_parse arch callback, it can be handled
directly in init callback.

Adding the init function to x86 to cover the cpuid initialization.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-10-23 11:20:54 -03:00
Taeung Song 1ac39372e0 perf annotate stdio: Support --show-nr-samples option
Add --show-nr-samples option to "perf annotate" so that it matches "perf
report".

Committer note:

Note that it can't be used together with --show-total-period, which
seems like a silly limitation, that can be lifted at some point.

Made it bail out if not on --stdio.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1503046008-5511-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-08-18 10:31:53 -03:00
Arnaldo Carvalho de Melo 48cc330852 perf annotate: Fix storing per line sym_hist_entry
The existing loop incremented the offset while using it as the array
index. When we went to an array of sym_hist_entry instances, we should
have moved the increment outside of the array element reference. Oops,
fix it.
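
A generic illustration of this class of bug (not the exact perf hunk):

  /* before: the post-increment inside the index advances 'offset' even
   * though the rest of the iteration still refers to that entry */
  h->addr[offset++].nr_samples++;

  /* after: index first, advance the offset separately at the end */
  h->addr[offset].nr_samples++;
  offset++;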

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 461c17f00f ("perf annotate: Store the sample period in each histogram bucket")
Link: http://lkml.kernel.org/n/tip-s3dm6uyrazlpag3f0psfia07@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-28 12:53:05 -03:00
Arnaldo Carvalho de Melo ce9ee4a2de perf annotate stdio: Set enough columns for --show-total-period
Now that we set the first column header according to whether
--show-total-period is being used, we need to size it accordingly.

Based-on-a-patch-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-pu504ffnit4m334k09hxcbs3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-26 17:16:46 -03:00
Taeung Song 38d2dcd0cc perf annotate stdio: Fix column header when using --show-total-period
Currently the first column header is always "Percent", fix it to show
correct column name based on given options, i.e. if using
--show-total-period, show "Event count" as a first column.

Reported-by: Milian Wolff <milian.wolff@kdab.com>
Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/c3c902e7-95bc-16d4-366f-12eb034c5c8d@gmail.com
[ Extracted from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-25 22:46:37 -03:00
Taeung Song 585d93c5ff perf annotate stdio: Fix --show-total-period
We were showing the total number of samples, not the total period as
asked by the user, fix it.

Reported-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Martin Liška <mliska@suse.cz>
Cc: Milian Wolff <milian.wolff@kdab.com>
Link: http://lkml.kernel.org/n/tip-lh2nh89rtqn5x5vbfthw6qml@git.kernel.org
Fixes: 0c4a5bcea4 ("perf annotate: Display total number of samples with --show-total-period")
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-25 11:23:36 -03:00
Taeung Song 461c17f00f perf annotate: Store the sample period in each histogram bucket
We'll use it soon, when fixing --show-total-period.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1500500215-16646-1-git-send-email-treeze.taeung@gmail.com
[ split from a larger patch, do the math in __symbol__inc_addr_samples() ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-21 12:02:38 -03:00
Taeung Song bab89f6aed perf hists: Pass perf_sample to __symbol__inc_addr_samples()
To pave the way to use perf_sample fields in the annotate code, storing
sample->period in sym_hist->addr->period and its sum in
sym_hist->period.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1500500215-16646-1-git-send-email-treeze.taeung@gmail.com
[ split and adjusted from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-21 08:23:50 -03:00
Taeung Song 8158683da3 perf annotate: Rename 'sum' to 'nr_samples' in struct sym_hist
To make it more clear that it is the sum of all the nr_samples fields in the
addr[] entries, i.e.:

  sym_hist->nr_samples = sum(sym_hist->addr[0 ..  symbol__size(sym)]->nr_samples)

Committer notes:

Taeung had renamed it to total_samples, but using nr_samples, as in the
added explanation above, looks clearer and establishes the direct
connection, making clear it is about the _number_ of samples.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1500500211-16599-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-21 08:23:49 -03:00
Taeung Song 896bccd3cb perf annotate: Introduce struct sym_hist_entry
struct sym_hist has addr[], but it should hold not only the number of
samples but also the sample period.  So use the new struct
sym_hist_entry to pave the way to have that.

Committer notes:

This initial patch will only introduce the struct sym_hist_entry and use
only the nr_samples member, which makes the code clearer and paves the
way to save the period as well.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Suggested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1500500205-16553-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-21 08:23:38 -03:00
Krister Johansen d2396999c9 perf buildid-cache: Cache debuginfo
If a stripped binary is placed in the cache, the user is in a situation
where there's a cached elf file present, but it doesn't have any symtab
to use for name resolution.  Grab the debuginfo for binaries that don't
end in .ko.  This yields a better chance of resolving symbols from older
traces.

Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1499305693-1599-7-git-send-email-kjlx@templeofstupid.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:14:11 -03:00
Jin Yao 7e63a13a26 perf annotate: Implement visual marker for macro fusion
To mark fused instructions clearly, this patch adds a line before the
first instruction of the pair and joins it with the arrow of the jump to
its target.

For example, when "je" is selected in annotate view, the line before
cmpl is displayed and joins the arrow of "je".

       │   ┌──cmpl   $0x0,argp_program_version_hook
 81.93 │   ├──je     20
       │   │  lock   cmpxchg %esi,0x38a9a4(%rip)
       │   │↓ jne    29
       │   │↓ jmp    43
 11.47 │20:└─→cmpxch %esi,0x38a999(%rip)

That means the cmpl+je is a fused instruction pair and they should be
considered together.

Changelog:

v3: Use Arnaldo's fix to improve the arrow origin rendering.  To get the
    evsel->evlist->env->cpuid, save the evsel in annotate_browser.

v2: new function "ins__is_fused" to check if the instructions are fused.

Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1499403995-19857-3-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:13:49 -03:00
Jin Yao 69fb09f6cc perf annotate: Check for fused instructions
Macro fusion merges two instructions into a single micro-op. Intel Core
platforms perform this hardware optimization under limited
circumstances.

For example, CMP + JCC can be "fused" and executed/retired together.
With sampling this can result in the sample sometimes being on the JCC
and sometimes on the CMP.  So the instructions in a fused pair should be
considered together.

On Nehalem, fused instruction pairs:

  cmp/test + jcc.

On other new CPU:

  cmp/test/add/sub/and/inc/dec + jcc.

This patch adds an x86-specific function which checks if 2 instructions
are in a "fused" pair. For non-x86 arch, the function is just NULL.

Changelog:

v4: Move the CPU model checking to symbol__disassemble and save the CPU
    family/model in arch structure.

    It avoids checking every time when jump arrow printed.

v3: Add checking for Nehalem (CMP, TEST). For other newer Intel CPUs
    just check it by default (CMP, TEST, ADD, SUB, AND, INC, DEC).

v2: Remove the original weak function. Arnaldo points out that doing it
    as a weak function that will be overridden by the host arch doesn't
    work. So now it's implemented as an arch-specific function.

Committer fix:

Do not access evsel->evlist->env->cpuid, ->env can be null, introduce
perf_evsel__env_cpuid(), just like perf_evsel__env_arch(), also used in
this function call.

The original patch was segfaulting 'perf top' + annotation.

But this essentially disables this fused instructions augmentation in
'perf top', the right thing is to get the cpuid from the running kernel,
left for a later patch tho.

Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1499403995-19857-2-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:11:25 -03:00
Jin Yao dcaa394807 perf annotate: Return arch from symbol__disassemble() and save it in browser
In the annotate browser, we will add support to check fused
instructions. Since this is an x86-specific feature, the annotate
browser needs to know which arch it runs on.

symbol__disassemble() has already figured out the arch. This patch just
lets symbol__disassemble() return the arch and saves it in the annotate
browser.

Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1497840958-4759-2-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-19 15:27:09 -03:00
Namhyung Kim 3c84fd5304 perf annotate: Use dso__decompress_kmodule_path()
Convert open-coded decompress routine to use the function.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: kernel-team@lge.com
Link: http://lkml.kernel.org/r/20170608073109.30699-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-08 15:39:02 -03:00
Namhyung Kim 3619ef76b3 perf annotate: Fix symbolic link of build-id cache
Commit 6ebd2547dd ("perf annotate: Fix a bug following symbolic link of
a build-id file") changed the code to use dirname to follow the symlink.
But it only considers new-style build-id cache names, so old names fail
on readlink() and force use of the system path, which might not be
available.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: kernel-team@lge.com
Fixes: 6ebd2547dd ("perf annotate: Fix a bug following symbolic link of a build-id file")
Link: http://lkml.kernel.org/r/20170608073109.30699-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-08 15:38:41 -03:00
Kim Phillips b13bbeee5e perf annotate: Fix branch instruction with multiple operands
'perf annotate' is dropping the cr* fields from branch instructions.

Fix it by adding support to display branch instructions having
multiple operands.

Power Arch objdump of int_sqrt:

 20.36 | c0000000004d2694:   subf   r10,r10,r3
       | c0000000004d2698: v bgt    cr6,c0000000004d26a0 <int_sqrt+0x40>
  1.82 | c0000000004d269c:   mr     r3,r10
 29.18 | c0000000004d26a0:   mr     r10,r8
       | c0000000004d26a4: v bgt    cr7,c0000000004d26ac <int_sqrt+0x4c>
       | c0000000004d26a8:   mr     r10,r7

Power Arch Before Patch:

 20.36 |       subf   r10,r10,r3
       |     v bgt    40
  1.82 |       mr     r3,r10
 29.18 | 40:   mr     r10,r8
       |     v bgt    4c
       |       mr     r10,r7

Power Arch After patch:

 20.36 |       subf   r10,r10,r3
       |     v bgt    cr6,40
  1.82 |       mr     r3,r10
 29.18 | 40:   mr     r10,r8
       |     v bgt    cr7,4c
       |       mr     r10,r7

Also support AArch64 conditional branch instructions, which can
have up to three operands:

Aarch64 Non-simplified (raw objdump) view:

       │ffff0000083cd11c: ↑ cbz    w0, ffff0000083cd100 <security_fil▒
...
  4.44 │ffff000│083cd134: ↓ tbnz   w0, #26, ffff0000083cd190 <securit▒
...
  1.37 │ffff000│083cd144: ↓ tbnz   w22, #5, ffff0000083cd1a4 <securit▒
       │ffff000│083cd148:   mov    w19, #0x20000                   //▒
  1.02 │ffff000│083cd14c: ↓ tbz    w22, #2, ffff0000083cd1ac <securit▒
...
  0.68 │ffff000└──3cd16c: ↑ cbnz   w0, ffff0000083cd120 <security_fil▒

Aarch64 Simplified, before this patch:

       │    ↑ cbz    40
...
  4.44 │   │↓ tbnz   w0, #26, ffff0000083cd190 <security_file_permiss▒
...
  1.37 │   │↓ tbnz   w22, #5, ffff0000083cd1a4 <security_file_permiss▒
       │   │  mov    w19, #0x20000                   // #131072
  1.02 │   │↓ tbz    w22, #2, ffff0000083cd1ac <security_file_permiss▒
...
  0.68 │   └──cbnz   60

the cbz operand is missing, and the tbz doesn't get simplified processing
at all because the parsing function failed to match an address.

Aarch64 Simplified, After this patch applied:

       │    ↑ cbz    w0, 40
...
  4.44 │   │↓ tbnz   w0, #26, d0
...
  1.37 │   │↓ tbnz   w22, #5, e4
       │   │  mov    w19, #0x20000                   // #131072
  1.02 │   │↓ tbz    w22, #2, ec
...
  0.68 │   └──cbnz   w0, 60

Originally-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Reported-by: Anton Blanchard <anton@samba.org>
Reported-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Link: http://lkml.kernel.org/r/20170601092959.f60d98912e8a1b66fd1e4c0e@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-01 14:48:36 -03:00
Ravi Bangoria 7b4500bc51 perf annotate: Fix failure when filename has special chars
When the filename contains special chars, perf annotate fails with an
error:

  $ perf annotate --vmlinux ./vmlinux\(test\) --stdio native_safe_halt
    sh: -c: line 0: syntax error near unexpected token `('
    sh: -c: line 0: `objdump  --start-address=0xffffffff8184e840
    --stop-address=0xffffffff8184e848 -l -d --no-show-raw -S -C
    ./vmlinux(test) 2>/dev/null|grep -v ./vmlinux(test):|expand'

Fix it by surrounding the filename in double quotes.
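
A sketch of the quoting (hypothetical snippet, much shorter than the
real command perf builds): the path goes inside escaped double quotes in
the string handed to the shell.

  #include <stdio.h>

  static int build_command(char *buf, size_t size, const char *filename)
  {
          /* the \"...\" keep the shell from interpreting '(' and ')' in the path */
          return snprintf(buf, size, "objdump -d \"%s\" 2>/dev/null", filename);
  }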

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Adam Stylinski <adam.stylinski@etegent.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Link: http://lkml.kernel.org/r/20170505101417.2117-1-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-05-27 10:10:18 -03:00
Arnaldo Carvalho de Melo a43783aeec perf tools: Include errno.h where needed
Removing it from util.h, part of an effort to disentangle the includes
hell, that makes changes to util.h or something included by it to cause
a complete rebuild of the tools.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ztrjy52q1rqcchuy3rubfgt2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-19 13:01:51 -03:00
Arnaldo Carvalho de Melo a067558e2f perf tools: Move extra string util functions to util/string2.h
Moving them from util.h, where they don't belong. Since libc already
have string.h, name it slightly differently, as string2.h.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-eh3vz5sqxsrdd8lodoro4jrw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-19 13:01:51 -03:00
Arnaldo Carvalho de Melo 3d689ed609 perf tools: Move sane ctype stuff from util.h to sane_ctype.h
More stuff that came from git, out of the hodge-podge that is util.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-e3lana4gctz3ub4hn4y29hkw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-19 13:01:48 -03:00
Arnaldo Carvalho de Melo fd20e8111c perf tools: Including missing inttypes.h header
Needed to use the PRI[xu](32,64) formatting macros.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-wkbho8kaw24q67dd11q0j39f@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-19 13:01:46 -03:00
Arnaldo Carvalho de Melo 877a7a1105 perf tools: Add include <linux/kernel.h> where ARRAY_SIZE() is used
To pave the way for further cleanups where linux/kernel.h may stop being
included in some header.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-qqxan6tfsl6qx3l0v3nwgjvk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-19 13:01:44 -03:00
Taeung Song 986a5bc028 perf annotate: Use stripped line instead of raw disassemble line
When parsing disassemble lines for the source line number, use a
stripped line instead of the raw line.

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1491612748-1605-3-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-11 16:22:23 -03:00
Taeung Song 4597cf0664 perf annotate: Refactor the code to parse disassemble lines with {l,r}trim()
When parsing disassemble lines, use ltrim() and rtrim() to strip them,
instead of open-coded while loops with isspace().

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1491612748-1605-2-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-11 16:22:22 -03:00
Christian Borntraeger d9f8dfa9ba perf annotate s390: Implement jump types for perf annotate
Implement simple detection for all kinds of jumps and branches.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Andreas Krebbel <krebbel@linux.vnet.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-s390 <linux-s390@vger.kernel.org>
Cc: stable@kernel.org # v4.10+
Link: http://lkml.kernel.org/r/1491465112-45819-3-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-11 15:23:40 -03:00
Christian Borntraeger e77852b32d perf annotate s390: Fix perf annotate error -95 (4.10 regression)
Since 4.10, perf annotate exits on s390 with an "unknown error -95".
Turns out that commit 786c1b5184 ("perf annotate: Start supporting
cross arch annotation") added a hard requirement for architecture
support when objdump is used, but only provided x86 and arm support.
Meanwhile power was added, so let's add s390 as well.

While at it, make sure to implement the branch and jump types.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Andreas Krebbel <krebbel@linux.vnet.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-s390 <linux-s390@vger.kernel.org>
Cc: stable@kernel.org # v4.10+
Fixes: 786c1b5184 "perf annotate: Start supporting cross arch annotation"
Link: http://lkml.kernel.org/r/1491465112-45819-2-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-11 15:23:39 -03:00
Taeung Song 99094a5e94 perf annotate: Fix missing number of samples for source_line_samples
The option '--show-total-period' works fine without the option '-l'.  But
when running 'perf annotate --stdio -l --show-total-period', you can see
a problem: only zero '0' is shown for the number of samples.

Before:
    $ perf annotate --stdio -l --show-total-period
...
       0 :        400816:       push   %rbp
       0 :        400817:       mov    %rsp,%rbp
       0 :        40081a:       mov    %edi,-0x24(%rbp)
       0 :        40081d:       mov    %rsi,-0x30(%rbp)
       0 :        400821:       mov    -0x24(%rbp),%eax
       0 :        400824:       mov    -0x30(%rbp),%rdx
       0 :        400828:       mov    (%rdx),%esi
       0 :        40082a:       mov    $0x0,%edx
...

The reason is that the number of samples of source_line_samples was
never set, so set it properly.

After:
    $ perf annotate --stdio -l --show-total-period
...
       3 :        400816:       push   %rbp
       4 :        400817:       mov    %rsp,%rbp
       0 :        40081a:       mov    %edi,-0x24(%rbp)
       0 :        40081d:       mov    %rsi,-0x30(%rbp)
       1 :        400821:       mov    -0x24(%rbp),%eax
       2 :        400824:       mov    -0x30(%rbp),%rdx
       0 :        400828:       mov    (%rdx),%esi
       1 :        40082a:       mov    $0x0,%edx
...

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin Liska <mliska@suse.cz>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 0c4a5bcea4 ("perf annotate: Display total number of samples with --show-total-period")
Link: http://lkml.kernel.org/r/1490703125-13643-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-04 21:08:00 -03:00
Taeung Song 2e933b1274 perf annotate: Fix a bug of division by zero when calculating percent
Currently 'perf annotate' with --print-line can print
-nan(0x8000000000000) because of a division by zero when calculating
percentages. The division by zero happens when the sum of samples is
zero in symbol__get_source_line(), so fix it.

For example:

After running 'perf record' like below,

    $ perf record -e "{cycles,page-faults,branch-misses}" ./a.out

Before:

    $ perf annotate --stdio -l

  Sorted summary for file /home/taeung/workspace/a.out
  ----------------------------------------------

   32.89    -nan    7.04 a.c:38
   25.14    -nan    0.00 a.c:34
   16.26    -nan   56.34 a.c:31
   15.88    -nan    1.41 a.c:37
    5.67    -nan    0.00 a.c:39
    1.13    -nan   35.21 a.c:26
    0.95    -nan    0.00 a.c:44
    0.57    -nan    0.00 a.c:32
   Percent                 |      Source code & Disassembly of a.out for cycles (529 samples)
  -----------------------------------------------------------------------------------------
                         :
  ...

   a.c:26    0.57    -nan    4.23 :         40081a:       mov    %edi,-0x24(%rbp)
   a.c:26    0.00    -nan    9.86 :         40081d:       mov    %rsi,-0x30(%rbp)

  ...

However, if the sum of samples is zero (e.g. for 'page-faults'), skip
calculating the percentage.

After:

    $ perf annotate --stdio -l

  Sorted summary for file /home/taeung/workspace/a.out
  ----------------------------------------------

   32.89    0.00    7.04 a.c:38
   25.14    0.00    0.00 a.c:34
   16.26    0.00   56.34 a.c:31
   15.88    0.00    1.41 a.c:37
    5.67    0.00    0.00 a.c:39
    1.13    0.00   35.21 a.c:26
    0.95    0.00    0.00 a.c:44
    0.57    0.00    0.00 a.c:32
   Percent                 |      Source code & Disassembly of old for cycles (529 samples)
  -----------------------------------------------------------------------------------------
                         :
  ...

  a.c:26    0.57    0.00    4.23 :         40081a:       mov    %edi,-0x24(%rbp)
  a.c:26    0.00    0.00    9.86 :         40081d:       mov    %rsi,-0x30(%rbp)

  ...
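
A minimal sketch of the kind of guard this amounts to (names are
illustrative, not the exact ones used in symbol__get_source_line()):

  static double percent_of(unsigned long nr_samples, unsigned long total)
  {
          /* avoid 0/0 turning into -nan: report 0.0 when there are no samples */
          if (total == 0)
                  return 0.0;
          return 100.0 * nr_samples / total;
  }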

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1490598638-13947-3-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-27 15:04:56 -03:00
Taeung Song 6ebd2547dd perf annotate: Fix a bug following symbolic link of a build-id file
It is wrong to read the link name from a build-id file, because the
build-id file is no longer a symbolic link; the build-id directory
containing it is the symbolic link, so fix it.

For example, if the build-id file name obtained from
dso__build_id_filename() is as below,

  /root/.debug/.build-id/4f/75c7d197c951659d1c1b8b5fd49bcdf8f3f8b1/elf

To correctly read the build-id link name, use the build-id directory
path, which is a symbolic link, instead of the above build-id file name,
like below:

  /root/.debug/.build-id/4f/75c7d197c951659d1c1b8b5fd49bcdf8f3f8b1
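
A sketch of the idea behind the fix, reading the link target of the
build-id directory rather than of the 'elf' file inside it (buffer
handling and names are illustrative):

  #include <limits.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static int read_buildid_link(const char *elf_path, char *buf, size_t sz)
  {
          char dir[PATH_MAX];
          ssize_t n;
          char *p;

          /* strip the trailing "/elf" so we readlink() the symlinked dir */
          snprintf(dir, sizeof(dir), "%s", elf_path);
          p = strrchr(dir, '/');
          if (p)
                  *p = '\0';

          n = readlink(dir, buf, sz - 1);
          if (n < 0)
                  return -1;
          buf[n] = '\0';
          return 0;
  }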

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1490598638-13947-2-git-send-email-treeze.taeung@gmail.com
Fixes: 01412261d9 ("perf buildid-cache: Use path/to/bin/buildid/elf instead of path/to/bin/buildid")
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-27 14:58:20 -03:00
Milian Wolff 5dfa210e40 perf report: Enable sorting by srcline as key
Often it is interesting to know how costly a given source line is in
total. Previously, one had to build these sums manually based on all
addresses that pointed to the same source line. This patch introduces
srcline as a sort key, which will do the aggregation for us.

Paired with the recent addition of showing inline frames, this makes
perf report much more useful for many C++ workloads.

The following shows the new feature in action. First, let's show the
status quo output when we sort by address. The result contains many hist
entries that generate the same output:

  ~~~~~~~~~~~~~~~~
  $ perf report --stdio --inline -g address
  # Children      Self  Command       Shared Object        Symbol
  # ........  ........  ............  ...................  .........................................
  #
      99.89%    35.34%  cpp-inlining  cpp-inlining         [.] main
            |
            |--64.55%--main complex:655
            |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
            |          /usr/include/c++/6.3.1/complex:664 (inline)
            |          |
            |          |--60.31%--hypot +20
            |          |          |
            |          |          |--8.52%--__hypot_finite +273
            |          |          |
            |          |          |--7.32%--__hypot_finite +411
...
             --35.34%--_start +4194346
                       __libc_start_main +241
                       |
                       |--6.65%--main random.tcc:3326
                       |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1809 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1818 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:185 (inline)
                       |
                       |--2.70%--main random.tcc:3326
                       |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1809 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1818 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:185 (inline)
                       |
                       |--1.69%--main random.tcc:3326
                       |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1809 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1818 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:185 (inline)
  ...
  ~~~~~~~~~~~~~~~~

With this patch and `-g srcline` we instead get the following output:

  ~~~~~~~~~~~~~~~~
  $ perf report --stdio --inline -g srcline
  # Children      Self  Command       Shared Object        Symbol
  # ........  ........  ............  ...................  .........................................
  #
      99.89%    35.34%  cpp-inlining  cpp-inlining         [.] main
            |
            |--64.55%--main complex:655
            |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
            |          /usr/include/c++/6.3.1/complex:664 (inline)
            |          |
            |          |--64.02%--hypot
            |          |          |
            |          |           --59.81%--__hypot_finite
            |          |
            |           --0.53%--cabs
            |
             --35.34%--_start
                       __libc_start_main
                       |
                       |--12.48%--main random.tcc:3326
                       |          /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1809 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:1818 (inline)
                       |          /usr/include/c++/6.3.1/bits/random.h:185 (inline)
  ...
  ~~~~~~~~~~~~~~~~

Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20170318214928.9047-1-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-27 12:13:28 -03:00
Arnaldo Carvalho de Melo ed7b339fb5 perf annotate: Add comment clarifying how the source code line is parsed
The source code line number (lineno) needs to be kept across calls to
symbol__parse_objdump_line() when parsing the output of 'objdump -l
-dS', so that it can be associated with the instructions until the next
source line.

See disasm_line__new() and struct disasm_line::line_nr.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-7hpx8f8ybdpiujceysaj229w@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 16:00:50 -03:00
Taeung Song e7cb9de211 perf annotate: More exactly grep -v of the objdump command
The 'grep -v "filename"' applied to the objdump command output cause a
side effect eliminating filename:linenr of output of 'objdump -l' if the
object file name and source file name are the same, fix it.

E.g. the output of the following objdump command in symbol__disassemble():

    $ objdump -l -d -S -C /home/taeung/hello --start-address=...

    /home/taeung/hello:     file format elf64-x86-64

    Disassembly of section .text:

    0000000000400526 <main>:
    main():
    /home/taeung/hello.c:4

    void main()
    {
      400526:	55                   	push   %rbp
      400527:	48 89 e5             	mov    %rsp,%rbp
    /home/taeung/hello.c:5
    ...

But it uses grep -v "filename" e.g. "/home/taeung/hello" in the objdump
command to remove the first line containing file name and file format
("/home/taeung/hello:     file format elf64-x86-64"):

Before:

    $ objdump -l -d -S -C /home/taeung/hello | grep -v /home/taeung/hello

But this causes a side effect, removing filename:linenr too, because the
object file and the source file have the same base name, e.g.
"/home/taeung/hello" and "/home/taeung/hello.c".

So do a more exact match with grep -v, as below, to correctly remove
only that first line:

    "/home/taeung/hello:     file format elf64-x86-64"

After:

    $ objdump -l -d -S -C /home/taeung/hello | grep -v /home/taeung/hello:

Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1489978617-31396-5-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-21 15:42:25 -03:00
Namhyung Kim bb963e1650 perf utils: Check verbose flag properly
The verbose flag can now have a negative value, to suppress messages
entirely, so check that it is positive.
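
A tiny illustration of the difference (message text made up):

  #include <stdio.h>

  static int verbose = -1;  /* negative now means: suppress messages entirely */

  static void report(const char *msg)
  {
          if (verbose > 0)  /* was: if (verbose), which also fired for -1 */
                  fprintf(stderr, "%s\n", msg);
  }

  int main(void)
  {
          report("only shown when verbose is positive");
          return 0;
  }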

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: kernel-team@lge.com
Link: http://lkml.kernel.org/r/20170217081742.17417-3-namhyung@kernel.org
[ Adjust fuzz on tools/perf/util/pmu.c, add > 0 checks in many other places ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-20 11:35:54 -03:00
Ravi Bangoria edee44be59 perf annotate: Don't throw error for zero length symbols
'perf report --tui' exits with an error when it finds a sample of a
zero-length symbol (i.e. addr == sym->start == sym->end). These are
actually valid samples, so don't exit the TUI; show the report with such
symbols.
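
A sketch of the idea (not the exact code): a sample whose address equals
both ends of a zero-length symbol should not be treated as out of range.

  #include <stdbool.h>
  #include <stdint.h>

  static bool addr_out_of_range(uint64_t addr, uint64_t start, uint64_t end)
  {
          if (addr == start && start == end)
                  return false;  /* zero-length symbol, still a valid sample */
          return addr < start || addr >= end;
  }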

Reported-and-Tested-by: Anton Blanchard <anton@samba.org>
Link: https://lkml.org/lkml/2016/10/8/189
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@kernel.org # v4.9+
Link: http://lkml.kernel.org/r/1479804050-5028-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-20 12:00:32 -03:00
Ravi Bangoria e216874cc1 perf annotate: Fix jump target outside of function address range
If the jump target is outside of the function range, perf is not
handling it correctly. Especially when the target address is less than
the function start address, the target offset will be negative. But the
target address is declared unsigned, so the negative number is converted
into its 2's complement representation. See the example below: here the
target of the 'jmpq' instruction at 34cf8 is 34ac0, which is less than
the function start address (34cf0).

        34ac0 - 34cf0 = -0x230 = 0xfffffffffffffdd0

Objdump output:

  0000000000034cf0 <__sigaction>:
  __GI___sigaction():
    34cf0: lea    -0x20(%rdi),%eax
    34cf3: cmp    -bashx1,%eax
    34cf6: jbe    34d00 <__sigaction+0x10>
    34cf8: jmpq   34ac0 <__GI___libc_sigaction>
    34cfd: nopl   (%rax)
    34d00: mov    0x386161(%rip),%rax        # 3bae68 <_DYNAMIC+0x2e8>
    34d07: movl   -bashx16,%fs:(%rax)
    34d0e: mov    -bashxffffffff,%eax
    34d13: retq

perf annotate before applying patch:

  __GI___sigaction  /usr/lib64/libc-2.22.so
           lea    -0x20(%rdi),%eax
           cmp    -bashx1,%eax
        v  jbe    10
        v  jmpq   fffffffffffffdd0
           nop
    10:    mov    _DYNAMIC+0x2e8,%rax
           movl   -bashx16,%fs:(%rax)
           mov    -bashxffffffff,%eax
           retq

perf annotate after applying patch:

  __GI___sigaction  /usr/lib64/libc-2.22.so
           lea    -0x20(%rdi),%eax
           cmp    -bashx1,%eax
        v  jbe    10
        ^  jmpq   34ac0 <__GI___libc_sigaction>
           nop
    10:    mov    _DYNAMIC+0x2e8,%rax
           movl   -bashx16,%fs:(%rax)
           mov    -bashxffffffff,%eax
           retq

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/1480953407-7605-3-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:46 -03:00
Ravi Bangoria 3ee2eb6da2 perf annotate: Support jump instruction with target as second operand
Architectures like PowerPC have jump instructions that include a target
address as the second operand. For example, 'bne cr7,0xc0000000000f6154'.
Add support for such instructions in perf annotate.
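
A rough sketch of the parsing idea (simplified, not the actual
jump__parse() implementation): when the operand list has a comma, try
the token after it as the target address.

  #include <stdlib.h>
  #include <string.h>

  /* ops is e.g. "cr7,0xc0000000000f6154" or just "0xc0000000000f6154" */
  static unsigned long long parse_jump_target(const char *ops)
  {
          const char *c = strchr(ops, ',');

          /* fall back to the full string when there is no condition field */
          return strtoull(c ? c + 1 : ops, NULL, 16);
  }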

objdump o/p:
  c0000000000f6140:   ld     r9,1032(r31)
  c0000000000f6144:   cmpdi  cr7,r9,0
  c0000000000f6148:   bne    cr7,0xc0000000000f6154
  c0000000000f614c:   ld     r9,2312(r30)
  c0000000000f6150:   std    r9,1032(r31)
  c0000000000f6154:   ld     r9,88(r31)

Corresponding perf annotate o/p:

Before patch:
         ld     r9,1032(r31)
         cmpdi  cr7,r9,0
      v  bne    3ffffffffff09f2c
         ld     r9,2312(r30)
         std    r9,1032(r31)
  74:    ld     r9,88(r31)

After patch:
         ld     r9,1032(r31)
         cmpdi  cr7,r9,0
      v  bne    74
         ld     r9,2312(r30)
         std    r9,1032(r31)
  74:    ld     r9,88(r31)

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/1480953407-7605-2-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:46 -03:00
Ravi Bangoria bec60e50af perf annotate: Show raw form for jump instruction with indirect target
For jump instructions that do not include the target address as a direct
operand, show the original disassembled line for them. This is needed for
certain powerpc jump instructions that take the target address from a
register (such as bctr, btar, ...).

Before:
     ld     r12,32088(r12)
     mtctr  r12
  v  bctr   ffffffffffffca2c
     std    r2,24(r1)
     addis  r12,r2,-1

After:
     ld     r12,32088(r12)
     mtctr  r12
  v  bctr
     std    r2,24(r1)
     addis  r12,r2,-1

Committer notes:

Testing it using a perf.data file and vmlinux for powerpc64,
cross-annotating it on a x86_64 workstation:

Before:

  .__bpf_prog_run  vmlinux.powerpc
         │        std    r10,512(r9)                      ▒
         │        lbz    r9,0(r31)                        ▒
         │        rldicr r9,r9,3,60                       ▒
         │        ldx    r9,r30,r9                        ▒
         │        mtctr  r9                               ▒
  100.00 │      ↓ bctr   3fffffffffe01510                 ▒
         │        lwa    r10,4(r31)                       ▒
         │        lwz    r9,0(r31)                        ▒
  <SNIP>
  Invalid jump offset: 3fffffffffe01510

After:

  .__bpf_prog_run  vmlinux.powerpc
         │        std    r10,512(r9)                      ▒
         │        lbz    r9,0(r31)                        ▒
         │        rldicr r9,r9,3,60                       ▒
         │        ldx    r9,r30,r9                        ▒
         │        mtctr  r9                               ▒
  100.00 │      ↓ bctr                                    ▒
         │        lwa    r10,4(r31)                       ▒
         │        lwz    r9,0(r31)                        ▒
  <SNIP>
  Invalid jump offset: 3fffffffffe01510

This, in turn, uncovers another problem with jumps without operands, the
ENTER/-> operation, to jump to the target, still continues using the bogus
target :-)

BTW, this was the file used for the above tests:

  [acme@jouet ravi_bangoria]$ perf report --header-only -i perf.data.f22vm.powerdev
  # ========
  # captured on: Thu Nov 24 12:40:38 2016
  # hostname : pdev-f22-qemu
  # os release : 4.4.10-200.fc22.ppc64
  # perf version : 4.9.rc1.g6298ce
  # arch : ppc64
  # nrcpus online : 48
  # nrcpus avail : 48
  # cpudesc : POWER7 (architected), altivec supported
  # cpuid : 74,513
  # total memory : 4158976 kB
  # cmdline : /home/ravi/Workspace/linux/tools/perf/perf record -a
  # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CPU|PERIOD, disabled = 1, inherit = 1, mmap = 1, c
  # HEADER_CPU_TOPOLOGY info available, use -I to display
  # HEADER_NUMA_TOPOLOGY info available, use -I to display
  # pmu mappings: cpu = 4, software = 1, tracepoint = 2, breakpoint = 5
  # missing features: HEADER_TRACING_DATA HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT HEADER_CACHE
  # ========
  #
  [acme@jouet ravi_bangoria]$

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/1480953407-7605-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-05 17:21:57 -03:00
Kim Phillips 0fcb1da4ab perf annotate: AArch64 support
This is a regex-converted version of the original:

	https://lkml.org/lkml/2016/5/19/461

Add basic support to recognise AArch64 assembly. This allows perf to
identify AArch64 instructions that branch to other parts within the
same function, thereby properly annotating them.
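
For flavour, a minimal sketch of classifying branch mnemonics with a
POSIX regex (the pattern is a simplified stand-in, not the one used by
the arm64 init code):

  #include <regex.h>
  #include <stdbool.h>

  static regex_t branch_re;

  static bool arm64_regex_init(void)
  {
          /* b, b.cond, bl, cbz/cbnz, tbz/tbnz - simplified example pattern */
          return regcomp(&branch_re, "^(b|b\\.[a-z]+|bl|cbn?z|tbn?z)$",
                         REG_EXTENDED) == 0;
  }

  static bool is_branch_insn(const char *name)
  {
          return regexec(&branch_re, name, 0, NULL, 0) == 0;
  }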

Rebased onto new cross-arch annotation bits:

	https://lkml.org/lkml/2016/11/25/546

Sample output:

security_file_permission  vmlinux
  5.80 │    ← ret                                                  ▒
       │70:   ldr    w0, [x21,#68]                                 ▒
  4.44 │    ↓ tbnz   d0                                            ▒
       │      mov    w0, #0x24                       // #36        ▒
  1.37 │      ands   w0, w22, w0                                   ▒
       │    ↑ b.eq   60                                            ▒
  1.37 │    ↓ tbnz   e4                                            ▒
       │      mov    w19, #0x20000                   // #131072    ▒
  1.02 │    ↓ tbz    ec                                            ▒
       │90:┌─→ldr    x3, [x21,#24]                                 ▒
  1.37 │   │  add    x21, x21, #0x10                               ▒
       │   │  mov    w2, w19                                       ▒
  1.02 │   │  mov    x0, x21                                       ▒
       │   │  mov    x1, x3                                        ▒
  1.71 │   │  ldr    x20, [x3,#48]                                 ▒
       │   │→ bl     __fsnotify_parent                             ▒
  0.68 │   │↑ cbnz   60                                            ▒
       │   │  mov    x2, x21                                       ▒
  1.37 │   │  mov    w1, w19                                       ▒
       │   │  mov    x0, x20                                       ▒
  0.68 │   │  mov    w5, #0x0                        // #0         ▒
       │   │  mov    x4, #0x0                        // #0         ▒
  1.71 │   │  mov    w3, #0x1                        // #1         ▒
       │   │→ bl     fsnotify                                      ▒
  1.37 │   │↑ b      60                                            ▒
       │d0:│  mov    w0, #0x0                        // #0         ▒
       │   │  ldp    x19, x20, [sp,#16]                            ▒
       │   │  ldp    x21, x22, [sp,#32]                            ▒
       │   │  ldp    x29, x30, [sp],#48                            ▒
       │   │← ret                                                  ▒
       │e4:│  mov    w19, #0x10000                   // #65536     ▒
       │   └──b      90                                            ◆
       │ec:   brk    #0x800                                        ▒
Press 'h' for help on key bindings

Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Signed-off-by: Chris Ryder <chris.ryder@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20161130092344.012e18e3e623bea395162f95@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-01 13:03:19 -03:00
Kim Phillips 859afa6ca9 perf annotate: Use arch->objdump.comment_char in dec__parse()
Presumably neglected in commit 786c1b5 ("perf annotate: Start supporting
cross arch annotation").  This doesn't fix a bug, since none of the
affected arches support parsing dec/inc instructions yet.

Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Ryder <chris.ryder@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20161130092333.1cca5dd2c77e1790d61c1e9c@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-01 13:03:18 -03:00
Ravi Bangoria dbdebdc538 perf annotate: Initial PowerPC support
Support the PowerPC architecture using the ins_ops association
method.

Committer notes:

Testing it with a perf.data file collected on a PowerPC machine and
cross-annotated on a x86_64 workstation, using the associated vmlinux
file:

$ perf report -i perf.data.f22vm.powerdev --vmlinux vmlinux.powerpc
  .ktime_get  vmlinux.powerpc
        │      clrldi r9,r28,63
   8.57 │   ┌──bne    e0                   <- TUI cursor positioned here
        │54:│  lwsync
   2.86 │   │  std    r2,40(r1)
        │   │  ld     r9,144(r31)
        │   │  ld     r3,136(r31)
        │   │  ld     r30,184(r31)
        │   │  ld     r10,0(r9)
        │   │  mtctr  r10
        │   │  ld     r2,8(r9)
   8.57 │   │→ bctrl
        │   │  ld     r2,40(r1)
        │   │  ld     r10,160(r31)
        │   │  ld     r5,152(r31)
        │   │  lwz    r7,168(r31)
        │   │  ld     r9,176(r31)
   8.57 │   │  lwz    r6,172(r31)
        │   │  lwsync
   2.86 │   │  lwz    r8,128(r31)
        │   │  cmpw   cr7,r8,r28
   2.86 │   │↑ bne    48
        │   │  subf   r10,r10,r3
        │   │  mr     r3,r29
        │   │  and    r10,r10,r5
   2.86 │   │  mulld  r10,r10,r7
        │   │  add    r9,r10,r9
        │   │  srd    r9,r9,r6
        │   │  add    r9,r9,r30
        │   │  std    r9,0(r29)
        │   │  addi   r1,r1,144
        │   │  ld     r0,16(r1)
        │   │  ld     r28,-32(r1)
        │   │  ld     r29,-24(r1)
        │   │  ld     r30,-16(r1)
        │   │  mtlr   r0
        │   │  ld     r31,-8(r1)
        │   │← blr
   5.71 │e0:└─→mr     r1,r1
  11.43 │      mr     r2,r2
  11.43 │      lwz    r28,128(r31)
  Press 'h' for help on key bindings

  $ perf report -i perf.data.f22vm.powerdev --header-only
  # ========
  # captured on: Thu Nov 24 12:40:38 2016
  # hostname : pdev-f22-qemu
  # os release : 4.4.10-200.fc22.ppc64
  # perf version : 4.9.rc1.g6298ce
  # arch : ppc64
  # nrcpus online : 48
  # nrcpus avail : 48
  # cpudesc : POWER7 (architected), altivec supported
  # cpuid : 74,513
  # total memory : 4158976 kB
  # cmdline : /home/ravi/Workspace/linux/tools/perf/perf record -a
  # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CPU|PERIOD, disabled = 1, inherit = 1, mmap = 1, comm = 1, freq = 1, task = 1, precise_ip = 3, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
  # HEADER_CPU_TOPOLOGY info available, use -I to display
  # HEADER_NUMA_TOPOLOGY info available, use -I to display
  # pmu mappings: cpu = 4, software = 1, tracepoint = 2, breakpoint = 5
  # missing features: HEADER_TRACING_DATA HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT HEADER_CACHE
  # ========
  #
  $

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Link: http://lkml.kernel.org/n/tip-tbjnp40ddoxxl474uvhwi6g4@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:38:56 -03:00
Arnaldo Carvalho de Melo acc9bfb5fa perf annotate: Improve support for ARM
By using arch->init() to set up some regular expressions to associate
ins_ops to ARM instructions, ditching that old table that has
instructions not present on ARM.

Take advantage of having an arch->init() to hide more ARM-specific stuff
from the common code, like the objdump details.

The regular expressions come from a patch written by Kim Phillips.

Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-77m7lufz9ajjimkrebtg5ead@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:38:56 -03:00
Arnaldo Carvalho de Melo 0781ea9234 perf annotate: Allow arches to have a init routine and a priv area
Arches like ARM will want to use regular expressions when deciding which
ins_ops to associate with which instructions, so provide infrastructure
for that.

Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-7dmnk9el2ipu3nxog092k9z5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:38:55 -03:00
Arnaldo Carvalho de Melo 2a1ff812c4 perf annotate: Introduce alternative method of keeping instructions table
Some arches may want to dynamically populate the table using regular
expressions on the instruction names to associate them with a set of
parsing/formatting/etc functions (struct ins_ops), so provide a fallback
for when the ins__find() method fails.

That fallback will be able to resize arch->instructions, setting
arch->nr_instructions appropriately. Helper functions to associate an
ins_ops with an instruction name, growing arch->instructions if needed
and resorting it, are provided; all the arch-specific callback needs to
do is decide whether the missing instruction should be added to
arch->instructions with an ins_ops association.
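
A condensed sketch of the mechanism (structure and callback names are
approximations, not the exact ones in util/annotate.c):

  #include <stdlib.h>
  #include <string.h>

  struct ins_ops;                       /* parse/format callbacks */

  struct ins {
          const char     *name;
          struct ins_ops *ops;
  };

  struct arch {
          struct ins     *instructions;
          int             nr_instructions;
          /* arch decides whether/how to add a missing instruction */
          struct ins_ops *(*associate_instruction_ops)(struct arch *arch,
                                                        const char *name);
  };

  static int ins__cmp(const void *a, const void *b)
  {
          const struct ins *ia = a, *ib = b;

          return strcmp(ia->name, ib->name);
  }

  static struct ins_ops *ins__find(struct arch *arch, const char *name)
  {
          struct ins key = { .name = name }, *ins;

          ins = bsearch(&key, arch->instructions, arch->nr_instructions,
                        sizeof(key), ins__cmp);
          if (ins)
                  return ins->ops;

          /* miss: the fallback may grow arch->instructions and resort it */
          if (arch->associate_instruction_ops)
                  return arch->associate_instruction_ops(arch, name);

          return NULL;
  }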

Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-auu13yradxf7g5dgtpnzt97a@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:38:44 -03:00
Arnaldo Carvalho de Melo 75b49202d8 perf annotate: Remove duplicate 'name' field from disasm_line
The disasm_line::name field is always equal to ins::name, being used
just to locate the instruction's ins_ops from the per-arch instructions
table.

Eliminate this duplication by nuking that field: instead make
ins__find() return an ins_ops, store it in disasm_line::ins.ops, and
keep in disasm_line::ins.name what was in disasm_line::name. This way we
end up not keeping a reference to entries in the per-arch instructions
table.

This in turn will help support multiple ways of managing the per-arch
instructions table, allowing that array to be resorted, for instance,
which moves entries after references to their addresses were made. The
same problem is avoided when one grows the array with realloc.

So architectures simply keeping a constant array will work as well as
architectures building the table using regular expressions or other
logic that involves resorting the table.

Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-vr899azvabnw9gtuepuqfd9t@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:24:16 -03:00
Arnaldo Carvalho de Melo 763d8960a1 perf annotate: Add per arch instructions annotate handlers
Another step in supporting cross annotation.

The arch specific tables are put in:

   tools/perf/arch/$ARCH/annotation/instructions.c

which, so far, just plug instructions into a bunch of
parsers/formatters, but may have more as the need arises.
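
For illustration, such a table boils down to an array mapping
instruction names to their ins_ops (the entries and ops names below are
examples, not the full x86 table):

  struct ins_ops { int dummy; };        /* stand-in for the real callbacks */

  static struct ins_ops call_ops, jump_ops, mov_ops;

  struct ins {
          const char     *name;
          struct ins_ops *ops;
  };

  static struct ins instructions[] = {
          { .name = "call",  .ops = &call_ops, },
          { .name = "callq", .ops = &call_ops, },
          { .name = "jmp",   .ops = &jump_ops, },
          { .name = "jmpq",  .ops = &jump_ops, },
          { .name = "mov",   .ops = &mov_ops,  },
  };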

This is an alternative implementation to a previous attempt made by Ravi
Bangoria.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-g3wt282lfa51j4qd0813e3az@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-17 17:31:59 -03:00
Arnaldo Carvalho de Melo 9c2fb451bd perf annotate: Allow arches to specify functions to skip
This is to cope with an ARM-specific kludge introduced in the original
patch supporting ARM annotation, cfef25b8da ("perf annotate: ARM
support"), that made functions with a '+' in their name be skipped when
processing call instructions.

With this patchkit it should be possible to collect a perf.data file on
an ARM machine, then annotate it on an x86 workstation and have those
ARM kludges used.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-2fi3sy7q3sssdi7m7cbe07gy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-17 17:12:56 -03:00
Arnaldo Carvalho de Melo 786c1b5184 perf annotate: Start supporting cross arch annotation
Introduce a 'struct arch', where arch-specific stuff will live, starting
with objdump's choice of comment delimitation character, which is '#' on
x86 and ';' on arm.
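
A rough sketch of the starting point (fields are illustrative; the real
struct has grown more members since):

  struct arch {
          const char      *name;
          struct {
                  char    comment_char;   /* '#' on x86, ';' on arm */
          } objdump;
  };

  static struct arch architectures[] = {
          { .name = "x86", .objdump = { .comment_char = '#' } },
          { .name = "arm", .objdump = { .comment_char = ';' } },
  };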

This has some bits and pieces from a patch submitted by Ravi.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-f337tzjjcl8vtapgvjxmhrbx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-17 17:12:50 -03:00
Arnaldo Carvalho de Melo 5f62d4fd35 perf annotate: Resolve 'call' operands to function names
Before this patch the '_raw_spin_lock_irqsave' and 'update_rq_clock' operands
were appearing just as hexadecimal numbers:

  update_blocked_averages  /proc/kcore
       │       push   %r12
       │       push   %rbx
       │       and    $0xfffffffffffffff0,%rsp
       │       sub    $0x40,%rsp
       │       add    -0x662cac00(,%rdi,8),%rax
       │       mov    %rax,%rbx
       │       mov    %rax,%rdi
       │       mov    %rax,0x38(%rsp)
       │     → callq  _raw_spin_lock_irqsave
       │       mov    %rbx,%rdi
       │       mov    %rax,0x30(%rsp)
       │     → callq  update_rq_clock
       │       mov    0x8d0(%rbx),%rax
       │       lea    0x8d0(%rbx),%r11

To check that all is right, one can always use the 'o' hotkey and see
the original objdump -dS output, which for this case is:

  update_blocked_averages  /proc/kcore
       │ffffffff990d5489:   push   %r12
       │ffffffff990d548b:   push   %rbx
       │ffffffff990d548c:   and    $0xfffffffffffffff0,%rsp
       │ffffffff990d5490:   sub    $0x40,%rsp
       │ffffffff990d5494:   add    -0x662cac00(,%rdi,8),%rax
       │ffffffff990d549c:   mov    %rax,%rbx
       │ffffffff990d549f:   mov    %rax,%rdi
       │ffffffff990d54a2:   mov    %rax,0x38(%rsp)
       │ffffffff990d54a7: → callq  0xffffffff997eb7a0
       │ffffffff990d54ac:   mov    %rbx,%rdi
       │ffffffff990d54af:   mov    %rax,0x30(%rsp)
       │ffffffff990d54b4: → callq  0xffffffff990c7720
       │ffffffff990d54b9:   mov    0x8d0(%rbx),%rax
       │ffffffff990d54c0:   lea    0x8d0(%rbx),%r11

Use the 'h' hotkey to see a list of available hotkeys.

More work is needed to cover operands for other instructions, such as
'mov', that can resolve variable names, etc.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-xqgtw9mzmzcjgwkis9kiiv1p@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-20 12:28:30 -03:00
Arnaldo Carvalho de Melo bff5c30613 perf annotate: Pass the symbol's map/dso to the instruction parsers
So that things like:

       → callq  0xffffffff993e3230

found while disassembling /proc/kcore can be beautified by later
patches, that will resolve that address to a function, looking it up in
/proc/kallsyms.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-p76myuke4j7gplg54amaklxk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-20 12:28:29 -03:00
Ravi Bangoria 88a7fcf961 perf annotate: Do not ignore call instruction with indirect target
Do not ignore a call instruction with an indirect target when it is
already identified as a call. This is an extension of commit e8ea156195
("perf annotate: Use raw form for register indirect call instructions")
to generalize annotation for all instructions with indirect calls.
generalize annotation for all instructions with indirect calls.

This is needed for certain powerpc call instructions that take the
address from a register (such as bctrl, btarl, ...).

Apart from that, when kcore is used to disassemble a function, all call
instructions were being ignored. This patch fixes that as a side effect
by not ignoring them. For example,

Before (with kcore):
       mov    %r13,%rdi
       callq  0xffffffff811a7e70
     ^ jmpq   64
       mov    %gs:0x7ef41a6e(%rip),%al

After (with kcore):
       mov    %r13,%rdi
     > callq  0xffffffff811a7e70
     ^ jmpq   64
       mov    %gs:0x7ef41a6e(%rip),%al

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
[Suggested about 'bctrl' instruction]
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Chris Riyder <chris.ryder@arm.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Taeung Song <treeze.taeung@gmail.com>
Link: http://lkml.kernel.org/r/1471611578-11255-5-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-20 12:28:29 -03:00
Peter Zijlstra 70fbe05745 perf annotate: Add branch stack / basic block
I wanted to know the hottest path through a function and figured the
branch-stack (LBR) information should be able to help out with that.

The below uses the branch-stack to create basic blocks and generate
statistics from them.

        from    to              branch_i
        * ----> *
                |
                | block
                v
                * ----> *
                from    to      branch_i+1

The blocks are broken down into non-overlapping ranges, while tracking
if the start of each range is an entry point and/or the end of a range
is a branch.

Each block iterates all ranges it covers (while splitting where required
to exactly match the block) and increments the 'coverage' count.

For the range including the branch we increment the taken counter, as
well as the pred counter if flags.predicted.

Using these numbers we can find if an instruction:

 - had coverage; given by:

        br->coverage / br->sym->max_coverage

   This metric ensures each symbol has a 100% spot, which reflects the
   observation that each symbol must have a most covered/hottest
   block.

 - is a branch target: br->is_target && br->start == addr

 - for targets, how much of a branch's coverage comes from it:

	target->entry / branch->coverage

 - is a branch: br->is_branch && br->end == addr

 - for branches, how often it was taken:

        br->taken / br->coverage

   after all, all execution that didn't take the branch would have
   incremented the coverage and continued onward to a later branch.

 - for branches, how often it was predicted:

        br->pred / br->taken

The coverage percentage is used to color the address and asm sections;
for low (<1%) coverage we use NORMAL (uncolored), indicating that these
instructions are not 'important'. For high coverage (>75%) we color the
address RED.

For each branch, we add an asm comment after the instruction with
information on how often it was taken and predicted.
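
As a compact restatement of the arithmetic above, a sketch of how the
per-range counters turn into the reported figures (field names and types
follow the description rather than the source):

  struct block_range {
          int     is_target, is_branch;
          double  coverage, taken, pred, entry;
  };

  /* share of the symbol's hottest block that this range sees */
  static double coverage_pct(const struct block_range *br, double max_coverage)
  {
          return 100.0 * br->coverage / max_coverage;
  }

  /* how often the branch ending this range was taken, and predicted */
  static double taken_pct(const struct block_range *br)
  {
          return br->coverage ? 100.0 * br->taken / br->coverage : 0.0;
  }

  static double pred_pct(const struct block_range *br)
  {
          return br->taken ? 100.0 * br->pred / br->taken : 0.0;
  }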

Output looks like (sans color, which does lose a lot of the
information :/)

$ perf record --branch-filter u,any -e cycles:p ./branches 27
$ perf annotate branches

 Percent |	Source code & Disassembly of branches for cycles:pu (217 samples)
---------------------------------------------------------------------------------
         :	branches():
    0.00 :	  40057a:       push   %rbp
    0.00 :	  40057b:       mov    %rsp,%rbp
    0.00 :	  40057e:       sub    $0x20,%rsp
    0.00 :	  400582:       mov    %rdi,-0x18(%rbp)
    0.00 :	  400586:       mov    %rsi,-0x20(%rbp)
    0.00 :	  40058a:       mov    -0x18(%rbp),%rax
    0.00 :	  40058e:       mov    %rax,-0x10(%rbp)
    0.00 :	  400592:       movq   $0x0,-0x8(%rbp)
    0.00 :	  40059a:       jmpq   400656 <branches+0xdc>
    1.84 :	  40059f:       mov    -0x10(%rbp),%rax	# +100.00%
    3.23 :	  4005a3:       and    $0x1,%eax
    1.84 :	  4005a6:       test   %rax,%rax
    0.00 :	  4005a9:       je     4005bf <branches+0x45>	# -54.50% (p:42.00%)
    0.46 :	  4005ab:       mov    0x200bbe(%rip),%rax        # 601170 <acc>
   12.90 :	  4005b2:       add    $0x1,%rax
    2.30 :	  4005b6:       mov    %rax,0x200bb3(%rip)        # 601170 <acc>
    0.46 :	  4005bd:       jmp    4005d1 <branches+0x57>	# -100.00% (p:100.00%)
    0.92 :	  4005bf:       mov    0x200baa(%rip),%rax        # 601170 <acc>	# +49.54%
   13.82 :	  4005c6:       sub    $0x1,%rax
    0.46 :	  4005ca:       mov    %rax,0x200b9f(%rip)        # 601170 <acc>
    2.30 :	  4005d1:       mov    -0x10(%rbp),%rax	# +50.46%
    0.46 :	  4005d5:       mov    %rax,%rdi
    0.46 :	  4005d8:       callq  400526 <lfsr>	# -100.00% (p:100.00%)
    0.00 :	  4005dd:       mov    %rax,-0x10(%rbp)	# +100.00%
    0.92 :	  4005e1:       mov    -0x18(%rbp),%rax
    0.00 :	  4005e5:       and    $0x1,%eax
    0.00 :	  4005e8:       test   %rax,%rax
    0.00 :	  4005eb:       je     4005ff <branches+0x85>	# -100.00% (p:100.00%)
    0.00 :	  4005ed:       mov    0x200b7c(%rip),%rax        # 601170 <acc>
    0.00 :	  4005f4:       shr    $0x2,%rax
    0.00 :	  4005f8:       mov    %rax,0x200b71(%rip)        # 601170 <acc>
    0.00 :	  4005ff:       mov    -0x10(%rbp),%rax	# +100.00%
    7.37 :	  400603:       and    $0x1,%eax
    3.69 :	  400606:       test   %rax,%rax
    0.00 :	  400609:       jne    400612 <branches+0x98>	# -59.25% (p:42.99%)
    1.84 :	  40060b:       mov    $0x1,%eax
   14.29 :	  400610:       jmp    400617 <branches+0x9d>	# -100.00% (p:100.00%)
    1.38 :	  400612:       mov    $0x0,%eax	# +57.65%
   10.14 :	  400617:       test   %al,%al	# +42.35%
    0.00 :	  400619:       je     40062f <branches+0xb5>	# -57.65% (p:100.00%)
    0.46 :	  40061b:       mov    0x200b4e(%rip),%rax        # 601170 <acc>
    2.76 :	  400622:       sub    $0x1,%rax
    0.00 :	  400626:       mov    %rax,0x200b43(%rip)        # 601170 <acc>
    0.46 :	  40062d:       jmp    400641 <branches+0xc7>	# -100.00% (p:100.00%)
    0.92 :	  40062f:       mov    0x200b3a(%rip),%rax        # 601170 <acc>	# +56.13%
    2.30 :	  400636:       add    $0x1,%rax
    0.92 :	  40063a:       mov    %rax,0x200b2f(%rip)        # 601170 <acc>
    0.92 :	  400641:       mov    -0x10(%rbp),%rax	# +43.87%
    2.30 :	  400645:       mov    %rax,%rdi
    0.00 :	  400648:       callq  400526 <lfsr>	# -100.00% (p:100.00%)
    0.00 :	  40064d:       mov    %rax,-0x10(%rbp)	# +100.00%
    1.84 :	  400651:       addq   $0x1,-0x8(%rbp)
    0.92 :	  400656:       mov    -0x8(%rbp),%rax
    5.07 :	  40065a:       cmp    -0x20(%rbp),%rax
    0.00 :	  40065e:       jb     40059f <branches+0x25>	# -100.00% (p:100.00%)
    0.00 :	  400664:       nop
    0.00 :	  400665:       leaveq
    0.00 :	  400666:       retq

(Note: the --branch-filter u,any was used to avoid spurious target and
branch points due to interrupts/faults; they show up as very small -/+
annotations at 'weird' locations)

Committer note:

Please take a look at:

  http://vger.kernel.org/~acme/perf/annotate_basic_blocks.png

To see the colors.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: David Carrillo-Cisneros <davidcc@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
[ Moved sym->max_coverage to 'struct annotate', aka symbol__annotate(sym) ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-08 13:44:03 -03:00
Arnaldo Carvalho de Melo be39db9f29 perf symbols: Remove symbol_filter_t machinery
We're not using it anymore; few users were, and we really could do
without it. Simplify lots of functions by removing it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-1zng8wdznn00iiz08bb7q3vn@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-05 11:14:50 -03:00
Arnaldo Carvalho de Melo b01141f4f5 perf annotate: Initialize the priv are in symbol__new()
We need to initialize some fields (right now just a mutex) when we
allocate the per-symbol annotation struct, so do it in the symbol
constructor instead of (ab)using the filter mechanism for that.

This way we remove one of the few cases we have for that symbol filter,
which will eventually lead to removing it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-cvz34avlz1lez888lob95390@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-30 10:56:34 -03:00
Arnaldo Carvalho de Melo 05ed3ac941 perf disassemble: Extract logic to find file to pass to objdump to a separate function
Disentangling this a bit further, more to come.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-7bjv2xazuyzs0xw01mlwosn5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-23 15:37:33 -03:00
Arnaldo Carvalho de Melo 3caee094d1 perf disassemble: Simplify logic for picking the filename to disassemble
Lots of changes to support kcore, compressed modules and build-id files
left us with some spaghetti code; simplify it a bit, more to come.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-h70p7x451li3f2fhs44vzmm8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-23 15:37:33 -03:00
Arnaldo Carvalho de Melo c12944f7fa perf disassemble: Move check for kallsyms + !kcore
We don't need to do all that filename logic to then just have to test
something unrelated and bail out, move it to the start of the function.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-lk1v4srtsktonnyp6t1o0uhx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-23 15:37:33 -03:00
Arnaldo Carvalho de Melo c17c17e8c2 perf annotate: Plug filename string leak
If dso__build_id_filename(..., NULL, ...) returns !NULL, it's because it
allocated it, so, when reaching the 'if (dso__is_kcore())' test, we had
already checked that and were just falling back to using dso->long_name,
but without freeing filename, thus leaking it.

Fix it by adding the dso__is_kcore() test to the 'or' group just after
it, the one containing the full fallback code, including freeing the
filename.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: ee205503f2 ("perf tools: Fix annotation with kcore")
Link: http://lkml.kernel.org/n/tip-qi4rpjq8yo6myvg99kkgt0xz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-01 18:49:13 -03:00
Arnaldo Carvalho de Melo ee51d85139 perf annotate: Introduce strerror for handling symbol__disassemble() errors
We were just using pr_error(), which makes it difficult for non-stdio
UIs to present errors using their widgets, as they need to somehow catch
what was passed to pr_error().

Fix it by introducing a __strerror() interface like the ones used
elsewhere, for instance target__strerror().

This is just the initial step, more work will be done, but first some
error handling bugs noticed while working on this need to be dealt with.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-dgd22zl2xg7x4vcnoa83jxfb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-01 18:18:16 -03:00
Arnaldo Carvalho de Melo 5cb725a972 perf annotate: Rename symbol__annotate() to symbol__disassemble()
This function will not annotate anything, it will just disassemble the
given map->dso and symbol.

It currently does this by parsing the output of 'objdump --disassemble',
but this could conceivably be done using a library or an offshoot of
the kernel's instruction decoder (arch/x86/lib/inat.c), etc.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-2xpfl4bfnrd6x584b390qok7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-01 17:06:46 -03:00
Arnaldo Carvalho de Melo 9955d0be16 perf annotate: Use pipe + fork instead of popen
We will need to redirect stderr as well, so open-code popen() as
a starting point.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-k0zt9svg4bswiglem7ornts4@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-29 11:12:39 -03:00
Peter Zijlstra (Intel) 135cce1bf1 perf annotate: Add number of samples to the header
Staring at annotations of large functions is useless if there are only
a few samples in them. Report the number of samples in the header to
make this easier to determine.

Committer note:

The change amounts to:

  - Percent | Source code & Disassembly of perf-vdso.so for cycles:u
  ------------------------------------------------------------------
  + Percent | Source code & Disassembly of perf-vdso.so for cycles:u (3278 samples)
  +--------------------------------------------------------------------------------

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20160630082955.GA30921@twins.programming.kicks-ass.net
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-06-30 18:27:42 -03:00