Commit Graph

22 Commits

Author SHA1 Message Date
Srinivasa D S ef53d9c5e4 kprobes: improve kretprobe scalability with hashed locking
Currently, the list of kretprobe instances is stored in the kretprobe
object (as used_instances, free_instances) and in the kretprobe hash
table.  We have one global kretprobe lock to serialise access to these
lists.  This allows only one kretprobe handler to execute at a time,
which hurts system performance, particularly on SMP systems and when
return probes are set on a lot of functions (such as all system calls).

The solution proposed here introduces fine-grained locks that perform
better on SMP systems than the present kretprobe implementation.

Solution:

 1) Instead of having one global lock to protect the kretprobe
    instances present in the kretprobe object and in the kretprobe
    hash table, we will have two locks: one lock protecting the
    kretprobe hash table and another lock for the kretprobe object.

 2) We hold the lock present in the kretprobe object while we modify
    a kretprobe instance in the kretprobe object, and we hold the
    per-hash-list lock while modifying kretprobe instances present in
    that hash list (a sketch of this scheme follows the list).  To
    prevent deadlock, we never grab a per-hash-list lock while holding
    a kretprobe lock.

 3) We can remove used_instances from struct kretprobe, as we can
    track the used instances of a kretprobe via the kretprobe hash
    table.
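
A minimal sketch of the per-hash-list locking described in (1) and (2)
above; the names mirror kernel/kprobes.c, but treat this as an
illustration rather than the patch itself:
====
#include <linux/hash.h>
#include <linux/kprobes.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

#define KPROBE_HASH_BITS	6
#define KPROBE_TABLE_SIZE	(1 << KPROBE_HASH_BITS)

/* One instance list and one lock per hash bucket, instead of one
 * global kretprobe lock serialising every handler.  (Both arrays are
 * initialised at kprobes init time.) */
static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];

/* Instances are hashed on the probed task, so all instances for one
 * task share a bucket and its lock. */
void kretprobe_hash_lock(struct task_struct *tsk,
			 struct hlist_head **head, unsigned long *flags)
{
	unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);

	*head = &kretprobe_inst_table[hash];
	spin_lock_irqsave(&kretprobe_table_locks[hash], *flags);
}

void kretprobe_hash_unlock(struct task_struct *tsk, unsigned long *flags)
{
	unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);

	spin_unlock_irqrestore(&kretprobe_table_locks[hash], *flags);
}
====
With this, handlers running on different tasks contend only when the
tasks hash to the same bucket.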

Time taken for kernel compilation ("make -j 8") on an 8-way ppc64
system with return probes set on all system calls:

===============================================================================
        cacheline-aligned    non-cacheline-aligned    un-patched
        patch                patch                    kernel
===============================================================================
real    9m46.784s            9m54.412s                10m2.450s
user    40m5.715s            40m7.142s                40m4.273s
sys     2m57.754s            2m58.583s                3m17.430s
===============================================================================

Time taken for kernel compilation ("make -j 8") on the same system,
when the kernel is not probed:
=========================
real    9m26.389s
user    40m8.775s
sys     2m7.283s
=========================

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-25 10:53:30 -07:00
Peter Zijlstra d54191b85e Kprobe smoke test lockdep warning
On Mon, 2008-04-21 at 18:54 -0400, Masami Hiramatsu wrote:
> Thank you for reporting.
>
> Actually, kprobes tries to fix up the thread's flags in post_kprobe_handler
> (which is called from kprobe_exceptions_notify) by
> trace_hardirqs_fixup_flags(pt_regs->flags). However, even if the irq flag
> is set in pt_regs->flags, the real hardirq state is still off until
> returning from do_debug. Thus, lockdep assumes that hardirqs are off
> without annotation.
>
> IMHO, one possible solution is to fix the hardirq flags right after
> notify_die in do_debug instead of in post_kprobe_handler.

My reply to BZ 10489:

> [    2.707509] Kprobe smoke test started
> [    2.709300] ------------[ cut here ]------------
> [    2.709420] WARNING: at kernel/lockdep.c:2658 check_flags+0x4d/0x12c()
> [    2.709541] Modules linked in:
> [    2.709588] Pid: 1, comm: swapper Not tainted 2.6.25.jml.057 #1
> [    2.709588]  [<c0126acc>] warn_on_slowpath+0x41/0x51
> [    2.709588]  [<c010bafc>] ? save_stack_trace+0x1d/0x3b
> [    2.709588]  [<c0140a83>] ? save_trace+0x37/0x89
> [    2.709588]  [<c011987d>] ? kernel_map_pages+0x103/0x11c
> [    2.709588]  [<c0109803>] ? native_sched_clock+0xca/0xea
> [    2.709588]  [<c0142958>] ? mark_held_locks+0x41/0x5c
> [    2.709588]  [<c0382580>] ? kprobe_exceptions_notify+0x322/0x3af
> [    2.709588]  [<c0142aff>] ? trace_hardirqs_on+0xf1/0x119
> [    2.709588]  [<c03825b3>] ? kprobe_exceptions_notify+0x355/0x3af
> [    2.709588]  [<c0140823>] check_flags+0x4d/0x12c
> [    2.709588]  [<c0143c9d>] lock_release+0x58/0x195
> [    2.709588]  [<c038347c>] ? __atomic_notifier_call_chain+0x0/0x80
> [    2.709588]  [<c03834d6>] __atomic_notifier_call_chain+0x5a/0x80
> [    2.709588]  [<c0383508>] atomic_notifier_call_chain+0xc/0xe
> [    2.709588]  [<c013b6d4>] notify_die+0x2d/0x2f
> [    2.709588]  [<c038168a>] do_debug+0x67/0xfe
> [    2.709588]  [<c0381287>] debug_stack_correct+0x27/0x30
> [    2.709588]  [<c01564c0>] ? kprobe_target+0x1/0x34
> [    2.709588]  [<c0156572>] ? init_test_probes+0x50/0x186
> [    2.709588]  [<c04fae48>] init_kprobes+0x85/0x8c
> [    2.709588]  [<c04e947b>] kernel_init+0x13d/0x298
> [    2.709588]  [<c04e933e>] ? kernel_init+0x0/0x298
> [    2.709588]  [<c04e933e>] ? kernel_init+0x0/0x298
> [    2.709588]  [<c0105ef7>] kernel_thread_helper+0x7/0x10
> [    2.709588]  =======================
> [    2.709588] ---[ end trace 778e504de7e3b1e3 ]---
> [    2.709588] possible reason: unannotated irqs-off.
> [    2.709588] irq event stamp: 370065
> [    2.709588] hardirqs last  enabled at (370065): [<c0382580>] kprobe_exceptions_notify+0x322/0x3af
> [    2.709588] hardirqs last disabled at (370064): [<c0381bb7>] do_int3+0x1d/0x7d
> [    2.709588] softirqs last  enabled at (370050): [<c012b464>] __do_softirq+0xfa/0x100
> [    2.709588] softirqs last disabled at (370045): [<c0107438>] do_softirq+0x74/0xd9
> [    2.714751] Kprobe smoke test passed successfully

how I love this stuff...

Ok, do_debug() is a trap; this can happen at any time regardless of the
machine's IRQ state.  So the first thing we do is fix up the IRQ state.
Then we call this die notifier stuff; and return with a messed-up IRQ
state... YAY.

So, kprobes fudges it..

  notify_die(DIE_DEBUG)
    kprobe_exceptions_notify()
      post_kprobe_handler()
        modify regs->flags
        trace_hardirqs_fixup_flags(regs->flags);  <--- must be it

So what's the use of modifying flags if they're not meant to take effect
at some point?

/me tries to reproduce issue; enable kprobes test thingy && boot

OK, that reproduces..

So the below makes it work - but I'm not getting this code; at the time
I wrote that stuff I CC'ed each and every kprobe maintainer listed in
the usual places but got no response - can someone please explain this
stuff to me?

Are the saved flags only for the TF bit, or are they given full effect
later (and if so, where)?
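
For reference, a simplified sketch of the flag juggling in question,
based on how the kprobes code of that era armed single-stepping (an
illustration, not the patch itself):
====
/* Arming a probe hit (sketch): save the caller's TF and IF bits,
 * then force TF on / IF off for the single-step.
 * post_kprobe_handler() later ORs the saved bits back into
 * regs->flags -- the value lockdep sees when the trap returns. */
static void set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
			       struct kprobe_ctlblk *kcb)
{
	__get_cpu_var(current_kprobe) = p;
	kcb->kprobe_saved_flags = kcb->kprobe_old_flags =
		regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF);
}

static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
	regs->flags |= X86_EFLAGS_TF;	/* trap after one instruction */
	regs->flags &= ~X86_EFLAGS_IF;	/* no interrupts mid-step */
	regs->ip = (unsigned long)p->ainsn.insn; /* step the copied insn */
}
====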

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-15 11:18:03 +02:00
gorcunov@gmail.com a5c15d419d x86: replace most VM86 flags with flags from processor-flags.h
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:33 +02:00
Yakov Lerner acb5b8a2dd x86, kprobes: correct post-eip value in post_handler()
I was trying to get the address of the instruction to be executed
next after the kprobed instruction.  But regs->eip in post_handler()
contains a value which is useless to the user: the pre-corrected value.
This value is difficult to use without access to resume_execution(),
which is not exported anyway.
I moved the invocation of post_handler() to *after* resume_execution().
Now regs->eip contains a meaningful value in post_handler().

I do not think this change breaks any backward compatibility.
To make sense of the old value, post_handler() would need access to
resume_execution(), which is not exported.  I find it hard to believe
that the previous, uncorrected regs->eip could be meaningfully used in
post_handler().
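
A sketch of the reordered function, assuming the surrounding code of
the time (re-entrancy and error paths omitted):
====
static int post_kprobe_handler(struct pt_regs *regs)
{
	struct kprobe *cur = kprobe_running();
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

	if (!cur)
		return 0;

	resume_execution(cur, regs, kcb);	/* corrects regs->ip */
	regs->flags |= kcb->kprobe_saved_flags;

	/* The post_handler now runs after the fixup, so it sees the
	 * address of the instruction that will execute next. */
	if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
		kcb->kprobe_status = KPROBE_HIT_SSDONE;
		cur->post_handler(cur, regs, 0);
	}

	reset_current_kprobe();
	preempt_enable_no_resched();
	return 1;
}
====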

Signed-off-by: Yakov Lerner <iler.ml@gmail.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:13 +02:00
Jan Beulich 5b0e508415 x86: prevent unconditional writes to DebugCtl MSR
Otherwise, enabling (or rather, subsequently disabling) single
stepping would cause a kernel oops on CPUs not having this MSR.

The patch could instead have added a conditional to the MSR write in
user_disable_single_step(), but centralizing the updates seems safer
and (looking forward) easier to manage.
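
The centralized update could look like this (a sketch consistent with
the description above; the family check is an assumption about which
CPUs lack the MSR):
====
#include <asm/msr.h>
#include <asm/processor.h>

static inline void update_debugctlmsr(unsigned long debugctlmsr)
{
#ifndef CONFIG_X86_DEBUGCTLMSR
	if (boot_cpu_data.x86 < 6)
		return;		/* this CPU has no DebugCtl MSR */
#endif
	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
}
====
Every DebugCtl write then funnels through one place, so the
capability check lives in a single spot.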

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Markus Metzger <markus.t.metzger@intel.com>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:40:58 +02:00
Harvey Harrison f1452d424d x86, kprobes: remove sparse warnings from x86
arch/x86/kernel/kprobes.c:584:16: warning: symbol 'kretprobe_trampoline_holder' was not declared. Should it be static?
arch/x86/kernel/kprobes.c:676:6: warning: symbol 'trampoline_handler' was not declared. Should it be static?

Make them static and add the __used attribute, an approach taken from
the arm kprobes implementation.

kretprobe_trampoline_holder uses inline assembly to define the global
symbol kretprobe_trampoline, but nothing ever calls the holder
explicitly.

trampoline_handler is only called from inline assembly in the same
file, so mark it __used and static.
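
A condensed sketch of the pattern (bodies simplified):
====
/* Never called by name: exists only so the inline assembly can emit
 * the global kretprobe_trampoline symbol.  __used keeps the compiler
 * from discarding the "unreferenced" static function. */
static void __used kretprobe_trampoline_holder(void)
{
	asm volatile(".global kretprobe_trampoline\n"
		     "kretprobe_trampoline:\n"
		     "	ret\n");
}

/* Called only from the trampoline's assembly, never from C, so it
 * likewise needs both static and __used. */
static __used void *trampoline_handler(struct pt_regs *regs)
{
	return NULL;	/* the real body recovers the return address */
}
====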

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-02-19 16:18:28 +01:00
Jan Engelhardt ade1af7712 x86: remove unneeded casts

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:33:23 +01:00
Abhishek Sagar fb8830e72d x86: fix singlestep handling in reenter_kprobe
Highlight peculiar cases in single-step kprobe handling.

In reenter_kprobe(), a breakpoint in the KPROBE_HIT_SS case can only
occur when single-stepping a breakpoint on which a probe was installed.
Since such probes are single-stepped inline, identifying these cases is
unambiguous.  All other cases leading up to KPROBE_HIT_SS are possible
bugs.  Identify such cases and WARN_ON them.
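
A sketch of the resulting case analysis, condensed from the
description above (not the literal patch):
====
static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
			  struct kprobe_ctlblk *kcb)
{
	switch (kcb->kprobe_status) {
	case KPROBE_HIT_SSDONE:
	case KPROBE_HIT_ACTIVE:
		/* ordinary re-entrant hit: remember the old probe and
		 * single-step the new one */
		save_previous_kprobe(kcb);
		set_current_kprobe(p, regs, kcb);
		kprobes_inc_nmissed_count(p);
		prepare_singlestep(p, regs);
		kcb->kprobe_status = KPROBE_REENTER;
		break;
	case KPROBE_HIT_SS:
		if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
			/* single-stepping, inline, a breakpoint on
			 * which a probe is installed: unambiguous */
			regs->flags &= ~X86_EFLAGS_TF;
			regs->flags |= kcb->kprobe_saved_flags;
		} else {
			WARN_ON(1);	/* any other path is a bug */
		}
		break;
	default:
		WARN_ON(1);	/* impossible case */
	}
	return 1;
}
====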

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:33:13 +01:00
Harvey Harrison f13bd3e793 x86: use wrmsrl in kprobes.c, step.c
Where x86_32 passed zero in the high 32 bits, use wrmsrl(), which
will zero-extend for us.  This allows the 32/64-bit ifdefs to be
eliminated.

Eliminate the ifdef in step.c.  A similar cleanup was done when
unifying kprobes_32|64.c, and wrmsr() was chosen there over wrmsrl().
This patch changes these to wrmsrl().
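
A before/after sketch of the pattern (using MSR_IA32_DEBUGCTLMSR as an
example register; illustrative only):
====
/* Before: 32-bit and 64-bit needed different forms. */
#ifdef CONFIG_X86_32
	wrmsr(MSR_IA32_DEBUGCTLMSR, debugctl, 0);  /* high half passed as 0 */
#else
	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
#endif

/* After: wrmsrl() zero-extends a 32-bit value, so one form works on
 * both and the ifdef disappears. */
	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
====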

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:33:12 +01:00
Harvey Harrison 1017579a8c x86: trivial whitespace in kprobes.c
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:33:01 +01:00
Abhishek Sagar f315decbd0 x86: kprobes change kprobe_handler flow
Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:50 +01:00
Quentin Barnes b506a9d08b x86: code clarification patch to Kprobes arch code
When developing the Kprobes arch code for ARM, I ran across some code
found in x86 and s390 Kprobes arch code which I didn't consider as
good as it could be.

Once I figured out what the code was doing, I changed the code
for ARM Kprobes to work the way I felt was more appropriate.
I've tested the code this way in ARM for about a year and would
like to push the same change to the other affected architectures.

The code in question is in kprobe_exceptions_notify() which
does:
====
          /* kprobe_running() needs smp_processor_id() */
          preempt_disable();
          if (kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
          preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/
preempt_enable() pair in it.

The problem is that kprobe_running() needs to call smp_processor_id()
which will assert if preemption is enabled.  That sanity check by
smp_processor_id() makes perfect sense since calling it with preemption
enabled would return an unreliable result.

But the function kprobe_exceptions_notify() can be called from a
context where preemption could be enabled.  If that happens, the
assertion in smp_processor_id() fires and we're dead.  So what
the original author did (speculation on my part!) was put in the
preempt_disable()/preempt_enable() pair simply to defeat the check.

Once I figured out what was going on, I considered this an
inappropriate approach.  If kprobe_exceptions_notify() is called
from a preemptible context, we can't be in a kprobe processing
context at that time anyway, since kprobes requires preemption to
already be disabled.  So just check whether preemption is enabled
and, if so, bail out before ever calling kprobe_running().  I wrote
the ARM kprobe code like this:
====
          /* To be potentially processing a kprobe fault and to
           * trust the result from kprobe_running(), we have to
           * be non-preemptible. */
          if (!preemptible() && kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year.
So I changed the x86 code (2.6.24-rc6) to be the same way and ran
the Systemtap tests on that kernel.  As on ARM, Systemtap on x86
comes up with the same test results either way, so it's a neutral
external functional change (as expected).

This issue has been discussed previously on the linux-arm-kernel and
Systemtap mailing lists.  Pointers to the two discussions:
http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
2008-01-30 13:32:32 +01:00
Harvey Harrison b976015637 x86: kprobes change kprobe_handler flow
Make the control flow of kprobe_handler more obvious.

Collapse the separate if blocks/gotos into if/else blocks; this
unifies the duplicated check for a breakpoint-instruction race with
another CPU.

Create two jump targets:
	preempt_out: re-enables preemption before returning ret
	out: only returns ret

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:19 +01:00
Harvey Harrison 31f80e45ea x86: kprobes remove fix_riprel #ifdef
Move the #ifdef around the function definition into the function and
unconditionally return on X86_32.  This saves an ifdef at the one
call site.
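
Roughly (an illustrative before/after, not the patch text):
====
/* Before: the one call site carried the #ifdef. */
#ifdef CONFIG_X86_64
	fix_riprel(p);
#endif

/* After: the #ifdef moves inside the function; on X86_32 the body
 * compiles to an unconditional return and the call site is clean. */
static void fix_riprel(struct kprobe *p)
{
#ifdef CONFIG_X86_64
	/* adjust the %rip-relative displacement of the copied insn */
#endif
}
====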

[ mingo@elte.hu: minor cleanup. ]

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:16 +01:00
Harvey Harrison 9930927f36 x86: introduce REX prefix helper for kprobes
Fold some small ifdefs into a helper function.
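
The helper presumably looks something like this (a sketch; REX
prefixes occupy the 0x40-0x4f byte range and exist only in 64-bit
mode):
====
static int is_REX_prefix(kprobe_opcode_t *insn)
{
#ifdef CONFIG_X86_64
	if ((*insn & 0xf0) == 0x40)
		return 1;	/* 0x40..0x4f: REX prefix byte */
#endif
	return 0;	/* 32-bit has no REX prefixes */
}
====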

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:14 +01:00
Masami Hiramatsu 59e87cdcd2 x86: move deeply indented code to reenter_kprobe
Move some deeply indented code related to re-entrance processing
from kprobe_handler() to reenter_kprobe().

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:02 +01:00
Harvey Harrison 40102d4a41 x86: add reenter_kprobe helper
[ mhiramat@redhat.com: updated it to latest x86.git ]

Factor the common X86_32/X86_64 kprobe reenter logic out of a deeply
indented section into a helper function.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
2008-01-30 13:32:02 +01:00
Masami Hiramatsu ddc66df876 x86: fix kprobe_handler reenable preemption
Fix a preemption bug in kprobe_handler(). It has to call preempt_enable()
before returning.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:01 +01:00
Harvey Harrison e7b5e11eaa x86: kprobes leftover cleanups
Eliminate __always_inline; all of these static functions are only
called once.  Minor whitespace cleanup.  Eliminate one superfluous
return at the end of a void function.  Change the one #ifndef to
#ifdef to match the sense of the rest of the config tests.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:43 +01:00
Harvey Harrison 6d48583ba9 x86: unify extable_{32|64}.c
Introduce fixup_exception() on 64-bit and use it in kprobes to
eliminate an #ifdef.

Only 64-bit needs search_extable() due to a stepping bug.
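
The 64-bit fixup_exception() would be along these lines (a sketch,
assuming it mirrors the 32-bit version):
====
int fixup_exception(struct pt_regs *regs)
{
	const struct exception_table_entry *fixup;

	fixup = search_exception_tables(regs->ip);
	if (fixup) {
		/* resume at the recovery address from the extable */
		regs->ip = fixup->fixup;
		return 1;
	}
	return 0;
}
====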

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:41 +01:00
Glauber de Oliveira Costa 053de04441 x86: get rid of _MASK flags
There's no need for the *_MASK flags (TF_MASK, IF_MASK, etc.) found in
processor.h (both _32 and _64).  They have a one-to-one mapping to the
EFLAGS values.  This patch removes the definitions and uses the already
existing X86_EFLAGS_* versions where applicable.
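
In other words (an illustrative before/after):
====
/* Before: per-arch private macro. */
regs->flags |= TF_MASK;

/* After: the identical value under its processor-flags.h name. */
regs->flags |= X86_EFLAGS_TF;
====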

[ roland@redhat.com: KVM build fixes. ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:27 +01:00
Masami Hiramatsu d6be29b871 x86: kprobes code for x86 unification
This patch unifies the kprobes code.

- Unify kprobes_*.h into kprobes.h.
- Unify kprobes_*.c into kprobes.c.
  (Differences are separated by ifdefs.)
  - Most differences are related to the REX prefix and rip-relative
    addressing.
  - One difference is in kprobe_handler().
  - Two inline assembly blocks are different.
  - One exception-fixup code path is different, but it will be unified
    once mm/extable_*.c are unified.
- Merge the history logs into arch/x86/kernel/kprobes.c.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:21 +01:00