Commit Graph

11 Commits

Author SHA1 Message Date
Quentin Barnes b506a9d08b x86: code clarification patch to Kprobes arch code
When developing the Kprobes arch code for ARM, I ran across some code
in the x86 and s390 Kprobes arch code that I didn't consider as
good as it could be.

Once I figured out what the code was doing, I changed the code
for ARM Kprobes to work the way I felt was more appropriate.
I've tested the code this way in ARM for about a year and would
like to push the same change to the other affected architectures.

The code in question is in kprobe_exceptions_notify() which
does:
====
          /* kprobe_running() needs smp_processor_id() */
          preempt_disable();
          if (kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
          preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/
preempt_enable() pair in it.

The problem is that kprobe_running() needs to call smp_processor_id()
which will assert if preemption is enabled.  That sanity check by
smp_processor_id() makes perfect sense since calling it with preemption
enabled would return an unreliable result.

But the function kprobe_exceptions_notify() can be called from a
context where preemption could be enabled.  If that happens, the
assertion in smp_processor_id() happens and we're dead.  So what
the original author did (speculation on my part!) is put in the
preempt_disable()/preempt_enable() pair to simply defeat the check.

Once I figured out what was going on, I considered this an
inappropriate approach.  If kprobe_exceptions_notify() is called
from a preemptible context, we can't be in a kprobe processing
context at that time anyway, since kprobes requires preemption to
already be disabled, so just check for preemption enabled, and if
so, blow out before ever calling kprobe_running().  I wrote the ARM
kprobe code like this:
====
          /* To be potentially processing a kprobe fault and to
           * trust the result from kprobe_running(), we have to
           * be non-preemptible. */
          if (!preemptible() && kprobe_running() &&
              kprobe_fault_handler(args->regs, args->trapnr))
                  ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year.
So I changed the x86 code (2.6.24-rc6) to be the same way and ran
the Systemtap tests on that kernel.  As on ARM, Systemtap on x86
comes up with the same test results either way, so externally the
change is functionally neutral (as expected).

This issue has been discussed previously on linux-arm-kernel and the
Systemtap mailing lists.  Pointers to the two discussions:
http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
2008-01-30 13:32:32 +01:00
Harvey Harrison b976015637 x86: kprobes change kprobe_handler flow
Make the control flow of kprobe_handler more obvious.

Collapse the separate if blocks/gotos into if/else blocks; this
unifies the duplicated check for a breakpoint-instruction race
with another cpu.

Create two jump targets:
	preempt_out: re-enables preemption before returning ret
	out: only returns ret
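
A minimal sketch of the resulting shape (illustrative only; the two
labels come from the description above, the surrounding logic is
condensed):
====
	if (kprobe_running()) {
		/* re-entry, or a breakpoint race with another cpu,
		 * now checked in one place */
		...
	} else {
		...
	}
	...
preempt_out:
	preempt_enable_no_resched();	/* re-enable preemption, then fall through */
out:
	return ret;
====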

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:19 +01:00
Harvey Harrison 31f80e45ea x86: kprobes remove fix_riprel #ifdef
Move the #ifdef around the function definition into the function and
unconditionally return on X86_32.  Saves an ifdef at the one
call site.
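
Roughly, the shape after the change is something like this (sketch
only, not the verbatim patch):
====
static void fix_riprel(struct kprobe *p)
{
#ifdef CONFIG_X86_64
	/* ... adjust %rip-relative operands in the copied instruction ... */
#endif
	/* on X86_32 the function is simply empty */
}
====

The single call site can then call fix_riprel() unconditionally.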

[ mingo@elte.hu: minor cleanup. ]

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:16 +01:00
Harvey Harrison 9930927f36 x86: introduce REX prefix helper for kprobes
Fold some small ifdefs into a helper function.
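
The helper presumably looks something like this (sketch; REX prefixes
are the 0x40-0x4f opcode bytes and only exist on 64-bit):
====
static int is_REX_prefix(kprobe_opcode_t *insn)
{
#ifdef CONFIG_X86_64
	return (*insn & 0xf0) == 0x40;
#else
	return 0;	/* no REX prefixes on X86_32 */
#endif
}
====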

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:14 +01:00
Masami Hiramatsu 59e87cdcd2 x86: move deeply indented code to reenter_kprobe
Move some deeply indented code related to re-entrance processing
from kprobe_handler() to reenter_kprobe().

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:02 +01:00
Harvey Harrison 40102d4a41 x86: add reenter_kprobe helper
[ mhiramat@redhat.com: updated it to latest x86.git ]

Factor the common X86_32/X86_64 kprobe reenter logic out of a deeply
indented section into a helper function.
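
In sketch form (not the verbatim patch), the deeply nested block in
kprobe_handler() collapses to a single helper call:
====
static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
			  struct kprobe_ctlblk *kcb)
{
	/* ... handle hitting a breakpoint while already in a kprobe ... */
	return 1;	/* non-zero: the event was consumed */
}
====

kprobe_handler() then only needs something like
"if (kprobe_running() && reenter_kprobe(p, regs, kcb)) return 1;"
instead of the nested block.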

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
2008-01-30 13:32:02 +01:00
Masami Hiramatsu ddc66df876 x86: fix kprobe_handler reenable preemption
Fix a preemption bug in kprobe_handler(). It has to call preempt_enable()
before returning.
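
In sketch form (illustrative only): kprobe_handler() disables
preemption on entry, so any path that bails out early must restore it:
====
	preempt_disable();
	...
	if (!p) {			/* not a kprobe breakpoint after all */
		preempt_enable();	/* the fix: re-enable before returning */
		return ret;
	}
====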

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:32:01 +01:00
Harvey Harrison e7b5e11eaa x86: kprobes leftover cleanups
Eliminate __always_inline; all of these static functions are
only called once.  Minor whitespace cleanup.  Eliminate one
superfluous return at the end of a void function.  Change the one
#ifndef to #ifdef to match the sense of the rest of the config
tests.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:43 +01:00
Harvey Harrison 6d48583ba9 x86: unify extable_{32|64}.c
Introduce fixup_exception() on 64-bit and use it in kprobes to
eliminate an #ifdef.

Only 64-bit needs search_extable() due to a stepping bug.
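
Sketch of what this buys in the kprobes fault path (illustrative):
both 32- and 64-bit can now call the same helper instead of hiding one
branch behind an #ifdef:
====
	if (fixup_exception(regs))
		return 1;	/* an exception table entry fixed it up */
====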

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:41 +01:00
Glauber de Oliveira Costa 053de04441 x86: get rid of _MASK flags
There's no need for the *_MASK flags (TF_MASK, IF_MASK, etc.) found in
processor.h (both _32 and _64).  They map one-to-one onto the
EFLAGS bits.  This patch removes the definitions and uses the already
existing X86_EFLAGS_ versions where applicable.
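
For example (sketch; the register field name here is illustrative):
====
	/* before */	regs->flags &= ~TF_MASK;
	/* after  */	regs->flags &= ~X86_EFLAGS_TF;	/* trap flag, bit 8 */
====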

[ roland@redhat.com: KVM build fixes. ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:27 +01:00
Masami Hiramatsu d6be29b871 x86: kprobes code for x86 unification
This patch unifies kprobes code.

- Unify kprobes_*.h to kprobes.h
- Unify kprobes_*.c to kprobes.c
  (Differences are separated by ifdefs.)
  - Most differences are related to the REX prefix and rip-relative
    addressing.
  - Two inline assembly blocks are different.
  - One difference is in kprobe_handler().
  - One fixup exception path is different, but it will be unified
    once mm/extable_*.c are unified.
- Merge history logs into arch/x86/kernel/kprobes.c.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30 13:31:21 +01:00