/*
 *  Kernel Probes (KProbes)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright (C) IBM Corporation, 2002, 2004
 *
 * 2002-Oct	Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
 *		Probes initial implementation (includes contributions from
 *		Rusty Russell).
 * 2004-July	Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
 *		interface to access function arguments.
 * 2004-Oct	Jim Keniston <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> adapted for x86_64 from i386.
 * 2005-Mar	Roland McGrath <roland@redhat.com>
 *		Fixed to handle %rip-relative addressing mode correctly.
 * 2005-May	Hien Nguyen <hien@us.ibm.com>, Jim Keniston
 *		<jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> added function-return probes.
 * 2005-May	Rusty Lynch <rusty.lynch@intel.com>
 *		Added function return probes functionality
 * 2006-Feb	Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> added
 *		kprobe-booster and kretprobe-booster for i386.
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com> added kprobe-booster
 *		and kretprobe-booster for x86-64
 * 2007-Dec	Masami Hiramatsu <mhiramat@redhat.com>, Arjan van de Ven
 *		<arjan@infradead.org> and Jim Keniston <jkenisto@us.ibm.com>
 *		unified x86 kprobes code.
 */
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/hardirq.h>
#include <linux/preempt.h>
#include <linux/sched/debug.h>
#include <linux/extable.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
#include <linux/ftrace.h>
#include <linux/frame.h>
#include <linux/kasan.h>
#include <linux/moduleloader.h>

#include <asm/text-patching.h>
#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
#include <linux/uaccess.h>
#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/debugreg.h>
#include <asm/set_memory.h>

#include "common.h"
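
/*
 * Per-CPU kprobe state: current_kprobe points to the kprobe being handled
 * on this CPU (if any), and kprobe_ctlblk holds the saved flags and status
 * needed to single-step and resume the probed instruction.
 */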
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

#define stack_addr(regs) ((unsigned long *)kernel_stack_pointer(regs))

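/*
 * W() packs sixteen 0/1 flags into one 16-entry row of the opcode bitmap
 * below; (row % 32) shifts the row into the low or high half of the u32
 * element, so each array element covers two adjacent rows.
 */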
#define W(row, b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf)\
	(((b0##UL << 0x0)|(b1##UL << 0x1)|(b2##UL << 0x2)|(b3##UL << 0x3) |   \
	  (b4##UL << 0x4)|(b5##UL << 0x5)|(b6##UL << 0x6)|(b7##UL << 0x7) |   \
	  (b8##UL << 0x8)|(b9##UL << 0x9)|(ba##UL << 0xa)|(bb##UL << 0xb) |   \
	  (bc##UL << 0xc)|(bd##UL << 0xd)|(be##UL << 0xe)|(bf##UL << 0xf))    \
	 << (row % 32))
/*
 * Undefined/reserved opcodes, conditional jump, Opcode Extension
 * Groups, and some special opcodes can not boost.
 * This is non-const and volatile to keep gcc from statically
 * optimizing it out, as variable_test_bit makes gcc think only
 * *(unsigned long*) is used.
 */
static volatile u32 twobyte_is_boostable[256 / 32] = {
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
	/*      ----------------------------------------------          */
	W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1) , /* 10 */
	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
	W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
	W(0x50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 50 */
	W(0x60, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1) | /* 60 */
	W(0x70, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1) , /* 70 */
	W(0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 80 */
	W(0x90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) , /* 90 */
	W(0xa0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* a0 */
	W(0xb0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1) , /* b0 */
	W(0xc0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1) | /* c0 */
	W(0xd0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) , /* d0 */
	W(0xe0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1) | /* e0 */
	W(0xf0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0)   /* f0 */
	/*      -----------------------------------------------         */
	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
};
#undef W

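/*
 * Functions that kretprobes must not be placed on.  A kretprobe instance
 * is looked up by the task that entered the function; __switch_to()
 * changes the current task before returning, so that lookup would no
 * longer match on exit.
 */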
struct kretprobe_blackpoint kretprobe_blacklist[] = {
	{"__switch_to", }, /* This function switches only current task, but
			      doesn't switch kernel stack.*/
	{NULL, NULL}	/* Terminator */
};

const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);

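/*
 * Write a 5-byte relative jump/call at @dest: the one-byte opcode @op
 * followed by a 32-bit displacement computed from the end of the
 * instruction at @from to the target @to.
 */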
static nokprobe_inline void
__synthesize_relative_insn(void *dest, void *from, void *to, u8 op)
{
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __packed *insn;

	insn = (struct __arch_relative_insn *)dest;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
}

/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
void synthesize_reljump(void *dest, void *from, void *to)
{
	__synthesize_relative_insn(dest, from, to, RELATIVEJUMP_OPCODE);
}
NOKPROBE_SYMBOL(synthesize_reljump);

/* Insert a call instruction at address 'from', which calls address 'to'.*/
void synthesize_relcall(void *dest, void *from, void *to)
{
	__synthesize_relative_insn(dest, from, to, RELATIVECALL_OPCODE);
}
NOKPROBE_SYMBOL(synthesize_relcall);

/*
 * Skip the prefixes of the instruction.
 */
static kprobe_opcode_t *skip_prefixes(kprobe_opcode_t *insn)
{
	insn_attr_t attr;

	attr = inat_get_opcode_attribute((insn_byte_t)*insn);
	while (inat_is_legacy_prefix(attr)) {
		insn++;
		attr = inat_get_opcode_attribute((insn_byte_t)*insn);
	}
#ifdef CONFIG_X86_64
	if (inat_is_rex_prefix(attr))
		insn++;
#endif
	return insn;
}
NOKPROBE_SYMBOL(skip_prefixes);

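/*
 * A "boosted" kprobe executes the copied instruction out of line and then
 * returns to the original code with a synthesized relative jump, avoiding
 * the single-step trap.  Only instructions that behave the same when run
 * from the out-of-line buffer (after any RIP-relative fixup) can be
 * boosted.
 */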
/*
 * Returns non-zero if INSN is boostable.
 * RIP-relative instructions are adjusted at copying time in 64-bit mode.
 */
int can_boost(struct insn *insn, void *addr)
{
	kprobe_opcode_t opcode;

	if (search_exception_tables((unsigned long)addr))
		return 0;	/* Page fault may occur on this address. */

	/* 2nd-byte opcode */
	if (insn->opcode.nbytes == 2)
		return test_bit(insn->opcode.bytes[1],
				(unsigned long *)twobyte_is_boostable);

	if (insn->opcode.nbytes != 1)
		return 0;

	/* Can't boost Address-size override prefix */
	if (unlikely(inat_is_address_size_prefix(insn->attr)))
		return 0;

	opcode = insn->opcode.bytes[0];

	switch (opcode & 0xf0) {
	case 0x60:
		/* can't boost "bound" */
		return (opcode != 0x62);
	case 0x70:
		return 0; /* can't boost conditional jump */
	case 0x90:
		return opcode != 0x9a;	/* can't boost call far */
	case 0xc0:
		/* can't boost software-interruptions */
		return (0xc1 < opcode && opcode < 0xcc) || opcode == 0xcf;
	case 0xd0:
		/* can boost AA* and XLAT */
		return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
	case 0xe0:
		/* can boost in/out and absolute jmps */
		return ((opcode & 0x04) || opcode == 0xea);
	case 0xf0:
		/* clear and set flags are boostable */
		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
	default:
		/* CS override prefix and call are not boostable */
		return (opcode != 0x2e && opcode != 0x9a);
	}
}

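/*
 * Return an address from which the original bytes of the instruction at
 * @addr can be read: @addr itself if the text is unmodified, otherwise
 * @buf with the int3/ftrace bytes replaced by the saved original.
 * Returns 0 on failure.
 */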
static unsigned long
__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
{
	struct kprobe *kp;
	unsigned long faddr;

	kp = get_kprobe((void *)addr);
	faddr = ftrace_location(addr);
	/*
	 * Addresses inside the ftrace location are refused by
	 * arch_check_ftrace_location(). Something went terribly wrong
	 * if such an address is checked here.
	 */
	if (WARN_ON(faddr && faddr != addr))
		return 0UL;
	/*
	 * Use the current code if it is not modified by Kprobe
	 * and it cannot be modified by ftrace.
	 */
	if (!kp && !faddr)
		return addr;

	/*
	 * Basically, kp->ainsn.insn has the original instruction.
	 * However, a RIP-relative instruction cannot be single-stepped at a
	 * different place, so __copy_instruction() tweaks the displacement
	 * of that instruction. In that case, we can't recover the
	 * instruction from kp->ainsn.insn.
	 *
	 * On the other hand, in case of a normal Kprobe, kp->opcode has a
	 * copy of the first byte of the probed instruction, which is
	 * overwritten by int3. And since the instruction at kp->addr is not
	 * modified by kprobes except for the first byte, we can recover the
	 * original instruction from it and kp->opcode.
	 *
	 * In case of Kprobes using ftrace, we do not have a copy of
	 * the original instruction. In fact, the ftrace location might
	 * be modified at any time and could even be in an inconsistent
	 * state. Fortunately, we know that the original code is the ideal
	 * 5-byte long NOP.
	 */
	if (probe_kernel_read(buf, (void *)addr,
		MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
		return 0UL;

	if (faddr)
		memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
	else
		buf[0] = kp->opcode;
	return (unsigned long)buf;
}

/*
 * Recover the probed instruction at addr for further analysis.
 * Caller must lock kprobes by kprobe_mutex, or disable preemption,
 * to prevent the kprobes it references from being released.
 * Returns zero if the instruction cannot be recovered (or access failed).
 */
unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
{
	unsigned long __addr;

	__addr = __recover_optprobed_insn(buf, addr);
	if (__addr != addr)
		return __addr;

	return __recover_probed_insn(buf, addr);
}

/* Check if paddr is at an instruction boundary */
static int can_probe(unsigned long paddr)
{
	unsigned long addr, __addr, offset = 0;
	struct insn insn;
	kprobe_opcode_t buf[MAX_INSN_SIZE];

	if (!kallsyms_lookup_size_offset(paddr, NULL, &offset))
		return 0;

	/* Decode instructions */
	addr = paddr - offset;
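	/*
	 * x86 instructions are variable length, so the only way to know
	 * whether paddr falls on an instruction boundary is to decode
	 * forward from the start of the symbol until we reach or pass it.
	 */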
	while (addr < paddr) {
		/*
		 * Check if the instruction has been modified by another
		 * kprobe, in which case we replace the breakpoint by the
		 * original instruction in our buffer.
		 * Also, jump optimization will change the breakpoint to
		 * relative-jump. Since the relative-jump itself is
		 * normally used, we just go through if there is no kprobe.
		 */
		__addr = recover_probed_instruction(buf, addr);
		if (!__addr)
			return 0;
		kernel_insn_init(&insn, (void *)__addr, MAX_INSN_SIZE);
		insn_get_length(&insn);

		/*
		 * Another debugging subsystem might insert this breakpoint.
		 * In that case, we can't recover it.
		 */
		if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
			return 0;
		addr += insn.length;
	}

	return (addr == paddr);
}

/*
 * Returns non-zero if opcode modifies the interrupt flag.
 */
static int is_IF_modifier(kprobe_opcode_t *insn)
{
	/* Skip prefixes */
	insn = skip_prefixes(insn);

	switch (*insn) {
	case 0xfa:		/* cli */
	case 0xfb:		/* sti */
	case 0xcf:		/* iret/iretd */
	case 0x9d:		/* popf/popfd */
		return 1;
	}

	return 0;
}

/*
 * Copy an instruction, recovering it first if it has been modified by
 * kprobes, and adjust the displacement if the instruction uses the
 * %rip-relative addressing mode. Note that since @real will be the final
 * place of the copied instruction, the displacement must be adjusted
 * relative to @real, not @dest.
 * This returns the length of the copied instruction, or 0 on error.
 */
int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
{
	kprobe_opcode_t buf[MAX_INSN_SIZE];
	unsigned long recovered_insn =
		recover_probed_instruction(buf, (unsigned long)src);

	if (!recovered_insn || !insn)
		return 0;

	/* This can access kernel text if given address is not recovered */
	if (probe_kernel_read(dest, (void *)recovered_insn, MAX_INSN_SIZE))
return 0;
|
2017-03-29 13:03:56 +08:00
|
|
|
|
2017-03-29 13:05:06 +08:00
|
|
|
kernel_insn_init(insn, dest, MAX_INSN_SIZE);
|
|
|
|
insn_get_length(insn);
|
|
|
|
|
|
|
|
/* Another subsystem has put a breakpoint there; we failed to recover */
|
|
|
|
if (insn->opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
|
2017-03-29 13:03:56 +08:00
|
|
|
return 0;
|
2010-02-25 21:34:46 +08:00
|
|
|
|
2018-05-09 20:58:15 +08:00
|
|
|
/* We should not singlestep on the exception masking instructions */
|
|
|
|
if (insn_masking_exception(insn))
|
|
|
|
return 0;
|
|
|
|
|
2010-02-25 21:34:46 +08:00
|
|
|
#ifdef CONFIG_X86_64
|
2017-03-29 12:58:06 +08:00
|
|
|
/* Only x86_64 has RIP relative instructions */
|
2017-03-29 13:05:06 +08:00
|
|
|
if (insn_rip_relative(insn)) {
|
2009-08-14 04:34:36 +08:00
|
|
|
s64 newdisp;
|
|
|
|
u8 *disp;
|
|
|
|
/*
|
|
|
|
* The copied instruction uses the %rip-relative addressing
|
|
|
|
* mode. Adjust the displacement for the difference between
|
|
|
|
* the original location of this instruction and the location
|
|
|
|
* of the copy that will actually be run. The tricky bit here
|
|
|
|
* is making sure that the sign extension happens correctly in
|
|
|
|
* this calculation, since we need a signed 32-bit result to
|
|
|
|
* be sign-extended to 64 bits when it's added to the %rip
|
|
|
|
* value and yield the same 64-bit result that the sign-
|
|
|
|
* extension of the original signed 32-bit displacement would
|
|
|
|
* have given.
|
|
|
|
*/
|
2017-03-29 13:05:06 +08:00
|
|
|
newdisp = (u8 *) src + (s64) insn->displacement.value
|
2017-08-18 16:24:00 +08:00
|
|
|
- (u8 *) real;
|
2013-04-04 18:42:30 +08:00
|
|
|
if ((s64) (s32) newdisp != newdisp) {
|
|
|
|
pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
|
|
|
|
return 0;
|
|
|
|
}
|
2017-03-29 13:05:06 +08:00
|
|
|
disp = (u8 *) dest + insn_offset_displacement(insn);
|
2009-08-14 04:34:36 +08:00
|
|
|
*(s32 *) disp = (s32) newdisp;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2008-01-30 20:31:21 +08:00
|
|
|
#endif
|
2017-03-29 13:05:06 +08:00
|
|
|
return insn->length;
|
2008-01-30 20:32:16 +08:00
|
|
|
}
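As a concrete, made-up illustration of the displacement rewrite above (the
addresses are hypothetical, chosen only so the arithmetic is easy to check):
/*
 * Worked example of newdisp = src + disp - real:
 *
 *   src  = 0xffffffff81000000   original location of the instruction
 *   real = 0xffffffffa0000000   location of the copy that will run
 *   disp = 0x1000               original signed 32-bit displacement
 *
 *   newdisp = 0x1000 + (0xffffffff81000000 - 0xffffffffa0000000)
 *           = 0x1000 - 0x1f000000
 *           = -0x1efff000
 *
 * That still fits in an s32, so the check passes, and because both the
 * original and the copy add their displacement to the address of the
 * *next* instruction (same length in both places), the copy resolves to
 * exactly the same absolute target as the original did.
 */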
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2017-03-29 13:00:25 +08:00
|
|
|
/* Prepare reljump right after instruction to boost */
|
2017-08-18 16:24:00 +08:00
|
|
|
static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p,
|
|
|
|
struct insn *insn)
|
2017-03-29 13:00:25 +08:00
|
|
|
{
|
2017-08-18 16:24:00 +08:00
|
|
|
int len = insn->length;
|
|
|
|
|
2017-03-29 13:05:06 +08:00
|
|
|
if (can_boost(insn, p->addr) &&
|
2017-08-18 16:24:00 +08:00
|
|
|
MAX_INSN_SIZE - len >= RELATIVEJUMP_SIZE) {
|
2017-03-29 13:00:25 +08:00
|
|
|
/*
|
|
|
|
* Such instructions can be executed directly, provided execution
|
|
|
|
* then jumps back to the correct address.
|
|
|
|
*/
|
2017-08-18 16:24:00 +08:00
|
|
|
synthesize_reljump(buf + len, p->ainsn.insn + len,
|
2017-03-29 13:05:06 +08:00
|
|
|
p->addr + insn->length);
|
2017-08-18 16:24:00 +08:00
|
|
|
len += RELATIVEJUMP_SIZE;
|
2017-03-29 13:01:35 +08:00
|
|
|
p->ainsn.boostable = true;
|
2017-03-29 13:00:25 +08:00
|
|
|
} else {
|
2017-03-29 13:01:35 +08:00
|
|
|
p->ainsn.boostable = false;
|
2017-03-29 13:00:25 +08:00
|
|
|
}
|
2017-08-18 16:24:00 +08:00
|
|
|
|
|
|
|
return len;
|
|
|
|
}
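To make the boosted layout concrete, here is an illustrative view of the
per-probe slot for the 9-byte mov from the example commit message above
(offsets and the jump encoding are hypothetical):
/*
 * p->ainsn.insn + 0:  65 48 8b 04 25 40 b8 00 00   copied original insn
 * p->ainsn.insn + 9:  e9 <rel32>                    jmp back to p->addr + 9
 *
 * With this in place, the boosted path can point regs->ip straight at the
 * slot: the copied instruction runs natively and the appended jump returns
 * to the original instruction stream, so no single-step trap is needed.
 */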
|
|
|
|
|
|
|
|
/* Make the page read-only when allocating it */
|
|
|
|
void *alloc_insn_page(void)
|
|
|
|
{
|
|
|
|
void *page;
|
|
|
|
|
|
|
|
page = module_alloc(PAGE_SIZE);
|
2019-04-26 08:11:30 +08:00
|
|
|
if (!page)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* First make the page read-only, and only then make it executable to
|
|
|
|
* prevent it from being W+X in between.
|
|
|
|
*/
|
|
|
|
set_memory_ro((unsigned long)page, 1);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* TODO: Once additional kernel code protection mechanisms are set, ensure
|
|
|
|
* that the page was not maliciously altered and it is still zeroed.
|
|
|
|
*/
|
|
|
|
set_memory_x((unsigned long)page, 1);
|
2017-08-18 16:24:00 +08:00
|
|
|
|
|
|
|
return page;
|
2017-03-29 13:00:25 +08:00
|
|
|
}
|
|
|
|
|
2017-05-25 18:38:17 +08:00
|
|
|
/* Restore the page to RW mode before releasing it */
|
|
|
|
void free_insn_page(void *page)
|
|
|
|
{
|
2019-04-26 08:11:30 +08:00
|
|
|
/*
|
|
|
|
* First make the page non-executable, and only then make it writable to
|
|
|
|
* prevent it from being W+X in between.
|
|
|
|
*/
|
|
|
|
set_memory_nx((unsigned long)page, 1);
|
|
|
|
set_memory_rw((unsigned long)page, 1);
|
2017-05-25 18:38:17 +08:00
|
|
|
module_memfree(page);
|
|
|
|
}
|
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
static int arch_copy_kprobe(struct kprobe *p)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2017-03-29 13:05:06 +08:00
|
|
|
struct insn insn;
|
2017-08-18 16:24:00 +08:00
|
|
|
kprobe_opcode_t buf[MAX_INSN_SIZE];
|
2017-03-29 13:00:25 +08:00
|
|
|
int len;
|
2013-06-05 11:12:16 +08:00
|
|
|
|
2012-03-05 21:32:16 +08:00
|
|
|
/* Copy the instruction, recovering it if another optprobe has modified it. */
|
2017-08-18 16:24:00 +08:00
|
|
|
len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn);
|
2017-03-29 13:00:25 +08:00
|
|
|
if (!len)
|
2013-06-05 11:12:16 +08:00
|
|
|
return -EINVAL;
|
2012-03-05 21:32:16 +08:00
|
|
|
|
2010-02-25 21:34:46 +08:00
|
|
|
/*
|
2012-03-05 21:32:16 +08:00
|
|
|
* __copy_instruction can modify the displacement of the instruction,
|
|
|
|
* but that does not affect the boostable check.
|
2010-02-25 21:34:46 +08:00
|
|
|
*/
|
2017-08-18 16:24:00 +08:00
|
|
|
len = prepare_boost(buf, p, &insn);
|
2017-03-29 13:02:46 +08:00
|
|
|
|
2013-03-14 19:52:43 +08:00
|
|
|
/* Check whether the instruction modifies Interrupt Flag or not */
|
2017-08-18 16:24:00 +08:00
|
|
|
p->ainsn.if_modifier = is_IF_modifier(buf);
|
2013-03-14 19:52:43 +08:00
|
|
|
|
2012-03-05 21:32:16 +08:00
|
|
|
/* Also, displacement change doesn't affect the first byte */
|
2017-08-18 16:24:00 +08:00
|
|
|
p->opcode = buf[0];
|
|
|
|
|
|
|
|
/* OK, write back the instruction(s) into ROX insn buffer */
|
|
|
|
text_poke(p->ainsn.insn, buf, len);
|
2013-06-05 11:12:16 +08:00
|
|
|
|
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
int arch_prepare_kprobe(struct kprobe *p)
|
2008-01-30 20:31:21 +08:00
|
|
|
{
|
2017-07-21 22:45:52 +08:00
|
|
|
int ret;
|
|
|
|
|
2010-02-03 05:49:18 +08:00
|
|
|
if (alternatives_text_reserved(p->addr, p->addr))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2009-08-14 04:34:28 +08:00
|
|
|
if (!can_probe((unsigned long)p->addr))
|
|
|
|
return -EILSEQ;
|
2008-01-30 20:31:21 +08:00
|
|
|
/* insn: must be on special executable page on x86. */
|
|
|
|
p->ainsn.insn = get_insn_slot();
|
|
|
|
if (!p->ainsn.insn)
|
|
|
|
return -ENOMEM;
|
2013-06-05 11:12:16 +08:00
|
|
|
|
2017-07-21 22:45:52 +08:00
|
|
|
ret = arch_copy_kprobe(p);
|
|
|
|
if (ret) {
|
|
|
|
free_insn_slot(p->ainsn.insn, 0);
|
|
|
|
p->ainsn.insn = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
2008-01-30 20:31:21 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
void arch_arm_kprobe(struct kprobe *p)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2007-07-22 17:12:31 +08:00
|
|
|
text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
void arch_disarm_kprobe(struct kprobe *p)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2007-07-22 17:12:31 +08:00
|
|
|
text_poke(p->addr, &p->opcode, 1);
|
2005-06-23 15:09:25 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
void arch_remove_kprobe(struct kprobe *p)
|
2005-06-23 15:09:25 +08:00
|
|
|
{
|
2009-01-07 06:41:50 +08:00
|
|
|
if (p->ainsn.insn) {
|
2017-03-29 13:01:35 +08:00
|
|
|
free_insn_slot(p->ainsn.insn, p->ainsn.boostable);
|
2009-01-07 06:41:50 +08:00
|
|
|
p->ainsn.insn = NULL;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static nokprobe_inline void
|
|
|
|
save_previous_kprobe(struct kprobe_ctlblk *kcb)
|
2005-06-23 15:09:37 +08:00
|
|
|
{
|
2005-11-07 17:00:12 +08:00
|
|
|
kcb->prev_kprobe.kp = kprobe_running();
|
|
|
|
kcb->prev_kprobe.status = kcb->kprobe_status;
|
2008-01-30 20:31:21 +08:00
|
|
|
kcb->prev_kprobe.old_flags = kcb->kprobe_old_flags;
|
|
|
|
kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags;
|
2005-06-23 15:09:37 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static nokprobe_inline void
|
|
|
|
restore_previous_kprobe(struct kprobe_ctlblk *kcb)
|
2005-06-23 15:09:37 +08:00
|
|
|
{
|
2010-12-07 01:16:25 +08:00
|
|
|
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
|
2005-11-07 17:00:12 +08:00
|
|
|
kcb->kprobe_status = kcb->prev_kprobe.status;
|
2008-01-30 20:31:21 +08:00
|
|
|
kcb->kprobe_old_flags = kcb->prev_kprobe.old_flags;
|
|
|
|
kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags;
|
2005-06-23 15:09:37 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static nokprobe_inline void
|
|
|
|
set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
|
|
|
|
struct kprobe_ctlblk *kcb)
|
2005-06-23 15:09:37 +08:00
|
|
|
{
|
2010-12-07 01:16:25 +08:00
|
|
|
__this_cpu_write(current_kprobe, p);
|
2008-01-30 20:31:21 +08:00
|
|
|
kcb->kprobe_saved_flags = kcb->kprobe_old_flags
|
2008-01-30 20:31:27 +08:00
|
|
|
= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
|
2013-03-14 19:52:43 +08:00
|
|
|
if (p->ainsn.if_modifier)
|
2008-01-30 20:31:27 +08:00
|
|
|
kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
|
2005-06-23 15:09:37 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static nokprobe_inline void clear_btf(void)
|
2008-01-30 20:30:54 +08:00
|
|
|
{
|
2010-03-25 21:51:51 +08:00
|
|
|
if (test_thread_flag(TIF_BLOCKSTEP)) {
|
|
|
|
unsigned long debugctl = get_debugctlmsr();
|
|
|
|
|
|
|
|
debugctl &= ~DEBUGCTLMSR_BTF;
|
|
|
|
update_debugctlmsr(debugctl);
|
|
|
|
}
|
2008-01-30 20:30:54 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static nokprobe_inline void restore_btf(void)
|
2008-01-30 20:30:54 +08:00
|
|
|
{
|
2010-03-25 21:51:51 +08:00
|
|
|
if (test_thread_flag(TIF_BLOCKSTEP)) {
|
|
|
|
unsigned long debugctl = get_debugctlmsr();
|
|
|
|
|
|
|
|
debugctl |= DEBUGCTLMSR_BTF;
|
|
|
|
update_debugctlmsr(debugctl);
|
|
|
|
}
|
2008-01-30 20:30:54 +08:00
|
|
|
}
|
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
|
[PATCH] x86_64 specific function return probes
The following patch adds the x86_64 architecture specific implementation
for function return probes.
Function return probes is a mechanism built on top of kprobes that allows
a caller to register a handler to be called when a given function exits.
For example, to instrument the return path of sys_mkdir:
static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
{
printk("sys_mkdir exited\n");
return 0;
}
static struct kretprobe return_probe = {
.handler = sys_mkdir_exit,
};
<inside setup function>
return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
if (register_kretprobe(&return_probe)) {
printk(KERN_DEBUG "Unable to register return probe!\n");
/* do error path */
}
<inside cleanup function>
unregister_kretprobe(&return_probe);
The way this works is that:
* At system initialization time, kernel/kprobes.c installs a kprobe
on a function called kretprobe_trampoline() that is implemented in
the arch/x86_64/kernel/kprobes.c (More on this later)
* When a return probe is registered using register_kretprobe(),
kernel/kprobes.c will install a kprobe on the first instruction of the
targeted function with the pre handler set to arch_prepare_kretprobe()
which is implemented in arch/x86_64/kernel/kprobes.c.
* arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
- nodes for hanging this instance in an empty or free list
- a pointer to the return probe
- the original return address
- a pointer to the stack address
With all this stowed away, arch_prepare_kretprobe() then sets the return
address for the targeted function to a special trampoline function called
kretprobe_trampoline() implemented in arch/x86_64/kernel/kprobes.c
* The kprobe completes as normal, with control passing back to the target
function that executes as normal, and eventually returns to our trampoline
function.
* Since a kprobe was installed on kretprobe_trampoline() during system
initialization, control passes back to kprobes via the architecture
specific function trampoline_probe_handler(), which will look up the
instance in an hlist maintained by kernel/kprobes.c, and then call
the handler function.
* When trampoline_probe_handler() is done, the kprobes infrastructure
single-steps the original instruction (in this case just a nop), and
then calls trampoline_post_handler(). trampoline_post_handler() then
looks up the instance again, puts the instance back on the free list,
and then makes a long jump back to the original return instruction.
So to recap, to instrument the exit path of a function this implementation
will cause four interruptions:
- A breakpoint at the very beginning of the function allowing us to
switch out the return address
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
- A breakpoint in the trampoline function where our instrumented function
returned to
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 15:09:23 +08:00
|
|
|
{
|
2008-01-30 20:31:21 +08:00
|
|
|
unsigned long *sara = stack_addr(regs);
|
2005-06-28 06:17:10 +08:00
|
|
|
|
2007-05-08 15:34:14 +08:00
|
|
|
ri->ret_addr = (kprobe_opcode_t *) *sara;
|
2019-02-24 00:49:52 +08:00
|
|
|
ri->fp = sara;
|
2008-01-30 20:31:21 +08:00
|
|
|
|
2007-05-08 15:34:14 +08:00
|
|
|
/* Replace the return addr with trampoline addr */
|
|
|
|
*sara = (unsigned long) &kretprobe_trampoline;
|
2005-06-23 15:09:23 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(arch_prepare_kretprobe);
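To visualize the effect of arch_prepare_kretprobe() (the saved address below
is made up):
/*
 * Stack slot holding the return address at function entry:
 *
 *   before:  [sara] -> 0xffffffff812345ab        real caller (saved in ri)
 *   after:   [sara] -> &kretprobe_trampoline
 *
 * When the probed function executes 'ret', control lands in
 * kretprobe_trampoline, which calls trampoline_handler() to run the
 * registered kretprobe handlers and then restores the real return address.
 */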
|
2008-01-30 20:32:50 +08:00
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
|
|
|
|
struct kprobe_ctlblk *kcb, int reenter)
|
2008-01-30 20:32:50 +08:00
|
|
|
{
|
2010-02-25 21:34:46 +08:00
|
|
|
if (setup_detour_execution(p, regs, reenter))
|
|
|
|
return;
|
|
|
|
|
2010-02-03 05:49:04 +08:00
|
|
|
#if !defined(CONFIG_PREEMPT)
|
2017-03-29 13:01:35 +08:00
|
|
|
if (p->ainsn.boostable && !p->post_handler) {
|
2008-01-30 20:32:50 +08:00
|
|
|
/* Boost up -- we can execute copied instructions directly */
|
2010-02-25 21:34:23 +08:00
|
|
|
if (!reenter)
|
|
|
|
reset_current_kprobe();
|
|
|
|
/*
|
|
|
|
* Reentering boosted probe doesn't reset current_kprobe,
|
|
|
|
* nor set current_kprobe, because it doesn't use single
|
|
|
|
* stepping.
|
|
|
|
*/
|
2008-01-30 20:32:50 +08:00
|
|
|
regs->ip = (unsigned long)p->ainsn.insn;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
#endif
|
2010-02-25 21:34:23 +08:00
|
|
|
if (reenter) {
|
|
|
|
save_previous_kprobe(kcb);
|
|
|
|
set_current_kprobe(p, regs, kcb);
|
|
|
|
kcb->kprobe_status = KPROBE_REENTER;
|
|
|
|
} else
|
|
|
|
kcb->kprobe_status = KPROBE_HIT_SS;
|
|
|
|
/* Prepare real single stepping */
|
|
|
|
clear_btf();
|
|
|
|
regs->flags |= X86_EFLAGS_TF;
|
|
|
|
regs->flags &= ~X86_EFLAGS_IF;
|
|
|
|
/* single step inline if the instruction is an int3 */
|
|
|
|
if (p->opcode == BREAKPOINT_INSTRUCTION)
|
|
|
|
regs->ip = (unsigned long)p->addr;
|
|
|
|
else
|
|
|
|
regs->ip = (unsigned long)p->ainsn.insn;
|
2008-01-30 20:32:50 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(setup_singlestep);
|
2008-01-30 20:32:50 +08:00
|
|
|
|
2008-01-30 20:32:02 +08:00
|
|
|
/*
|
|
|
|
* We have reentered the kprobe_handler(), since another probe was hit while
|
|
|
|
* within the handler. We save the original kprobes variables and just single
|
|
|
|
* step on the instruction of the new probe without calling any user handlers.
|
|
|
|
*/
|
2014-04-17 16:18:14 +08:00
|
|
|
static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
|
|
|
|
struct kprobe_ctlblk *kcb)
|
2008-01-30 20:32:02 +08:00
|
|
|
{
|
2008-01-30 20:32:50 +08:00
|
|
|
switch (kcb->kprobe_status) {
|
|
|
|
case KPROBE_HIT_SSDONE:
|
|
|
|
case KPROBE_HIT_ACTIVE:
|
2014-04-17 16:16:51 +08:00
|
|
|
case KPROBE_HIT_SS:
|
2008-01-30 20:33:13 +08:00
|
|
|
kprobes_inc_nmissed_count(p);
|
2010-02-25 21:34:23 +08:00
|
|
|
setup_singlestep(p, regs, kcb, 1);
|
2008-01-30 20:32:50 +08:00
|
|
|
break;
|
2014-04-17 16:16:51 +08:00
|
|
|
case KPROBE_REENTER:
|
2009-08-28 01:22:58 +08:00
|
|
|
/* A probe has been hit in the codepath leading up to, or just
|
|
|
|
* after, single-stepping of a probed instruction. This entire
|
|
|
|
* codepath should strictly reside in .kprobes.text section.
|
|
|
|
* Raise a BUG or we'll continue in an endless reentering loop
|
|
|
|
* and eventually a stack overflow.
|
|
|
|
*/
|
2018-04-28 20:37:03 +08:00
|
|
|
pr_err("Unrecoverable kprobe detected.\n");
|
2009-08-28 01:22:58 +08:00
|
|
|
dump_kprobe(p);
|
|
|
|
BUG();
|
2008-01-30 20:32:50 +08:00
|
|
|
default:
|
|
|
|
/* impossible cases */
|
|
|
|
WARN_ON(1);
|
2008-01-30 20:33:13 +08:00
|
|
|
return 0;
|
2008-01-30 20:32:02 +08:00
|
|
|
}
|
2008-01-30 20:32:50 +08:00
|
|
|
|
2008-01-30 20:32:02 +08:00
|
|
|
return 1;
|
2008-01-30 20:32:02 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(reenter_kprobe);
|
2005-06-23 15:09:23 +08:00
|
|
|
|
2008-01-30 20:31:21 +08:00
|
|
|
/*
|
|
|
|
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
|
tree-wide: fix assorted typos all over the place
That is "success", "unknown", "through", "performance", "[re|un]mapping"
, "access", "default", "reasonable", "[con]currently", "temperature"
, "channel", "[un]used", "application", "example","hierarchy", "therefore"
, "[over|under]flow", "contiguous", "threshold", "enough" and others.
Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2009-11-14 23:09:05 +08:00
|
|
|
* remain disabled throughout this function.
|
2008-01-30 20:31:21 +08:00
|
|
|
*/
|
2014-04-17 16:18:14 +08:00
|
|
|
int kprobe_int3_handler(struct pt_regs *regs)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:31:21 +08:00
|
|
|
kprobe_opcode_t *addr;
|
2008-01-30 20:32:50 +08:00
|
|
|
struct kprobe *p;
|
2005-11-07 17:00:14 +08:00
|
|
|
struct kprobe_ctlblk *kcb;
|
|
|
|
|
2015-03-19 09:33:33 +08:00
|
|
|
if (user_mode(regs))
|
2014-07-12 01:27:01 +08:00
|
|
|
return 0;
|
|
|
|
|
2008-01-30 20:31:21 +08:00
|
|
|
addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
|
2005-11-07 17:00:14 +08:00
|
|
|
/*
|
2018-06-20 00:16:17 +08:00
|
|
|
* We don't want to be preempted for the entire duration of kprobe
|
|
|
|
* processing. Since the int3 and debug traps disable irqs and we clear
|
|
|
|
* IF while single-stepping, it must not be preemptible.
|
2005-11-07 17:00:14 +08:00
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:32:50 +08:00
|
|
|
kcb = get_kprobe_ctlblk();
|
2008-01-30 20:32:19 +08:00
|
|
|
p = get_kprobe(addr);
|
2008-01-30 20:32:50 +08:00
|
|
|
|
2008-01-30 20:32:19 +08:00
|
|
|
if (p) {
|
|
|
|
if (kprobe_running()) {
|
2008-01-30 20:32:50 +08:00
|
|
|
if (reenter_kprobe(p, regs, kcb))
|
|
|
|
return 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
} else {
|
2008-01-30 20:32:19 +08:00
|
|
|
set_current_kprobe(p, regs, kcb);
|
|
|
|
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
|
2008-01-30 20:32:50 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2008-01-30 20:32:50 +08:00
|
|
|
* If we have no pre-handler or it returned 0, we
|
|
|
|
* continue with normal processing. If we have a
|
2018-06-20 00:05:35 +08:00
|
|
|
* pre-handler and it returned non-zero, that means
|
|
|
|
* the user handler set up registers to exit to another
|
|
|
|
* instruction, so we must skip the single-stepping.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2008-01-30 20:32:50 +08:00
|
|
|
if (!p->pre_handler || !p->pre_handler(p, regs))
|
2010-02-25 21:34:23 +08:00
|
|
|
setup_singlestep(p, regs, kcb, 0);
|
2018-06-20 00:16:17 +08:00
|
|
|
else
|
2018-06-20 00:15:45 +08:00
|
|
|
reset_current_kprobe();
|
2008-01-30 20:32:50 +08:00
|
|
|
return 1;
|
2008-01-30 20:32:19 +08:00
|
|
|
}
|
2010-04-28 06:33:49 +08:00
|
|
|
} else if (*addr != BREAKPOINT_INSTRUCTION) {
|
|
|
|
/*
|
|
|
|
* The breakpoint instruction was removed right
|
|
|
|
* after we hit it. Another cpu has removed
|
|
|
|
* either a probepoint or a debugger breakpoint
|
|
|
|
* at this address. In either case, no further
|
|
|
|
* handling of this interrupt is appropriate.
|
|
|
|
* Back up over the (now missing) int3 and run
|
|
|
|
* the original instruction.
|
|
|
|
*/
|
|
|
|
regs->ip = (unsigned long)addr;
|
|
|
|
return 1;
|
2008-01-30 20:32:50 +08:00
|
|
|
} /* else: not a kprobe fault; let the kernel handle it */
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:32:50 +08:00
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(kprobe_int3_handler);
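As a usage-level illustration of the pre-handler contract enforced above
(returning 0 lets the handler continue with single-stepping, non-zero means
the pre-handler already redirected regs->ip), a minimal kprobe module might
look like the sketch below; the target symbol is only an example.
#include <linux/module.h>
#include <linux/kprobes.h>

/* Pre-handler: returning 0 continues with normal single-stepping. */
static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %pS\n", p->addr);
	return 0;
}

static struct kprobe example_kp = {
	.symbol_name	= "_do_fork",	/* hypothetical target symbol */
	.pre_handler	= example_pre,
};

static int __init example_init(void)
{
	return register_kprobe(&example_kp);
}

static void __exit example_exit(void)
{
	unregister_kprobe(&example_kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");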
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2005-06-23 15:09:23 +08:00
|
|
|
/*
|
2008-01-30 20:31:21 +08:00
|
|
|
* When a retprobed function returns, this code saves registers and
|
|
|
|
* calls trampoline_handler(), which in turn calls the kretprobe's handler.
|
2005-06-23 15:09:23 +08:00
|
|
|
*/
|
2016-01-22 06:49:28 +08:00
|
|
|
asm(
|
|
|
|
".global kretprobe_trampoline\n"
|
|
|
|
".type kretprobe_trampoline, @function\n"
|
|
|
|
"kretprobe_trampoline:\n"
|
2008-01-30 20:31:21 +08:00
|
|
|
#ifdef CONFIG_X86_64
|
2016-01-22 06:49:28 +08:00
|
|
|
/* We don't bother saving the ss register */
|
|
|
|
" pushq %rsp\n"
|
|
|
|
" pushfq\n"
|
|
|
|
SAVE_REGS_STRING
|
|
|
|
" movq %rsp, %rdi\n"
|
|
|
|
" call trampoline_handler\n"
|
|
|
|
/* Replace saved sp with true return address. */
|
|
|
|
" movq %rax, 152(%rsp)\n"
|
|
|
|
RESTORE_REGS_STRING
|
|
|
|
" popfq\n"
|
2008-01-30 20:31:21 +08:00
|
|
|
#else
|
2016-01-22 06:49:28 +08:00
|
|
|
" pushf\n"
|
|
|
|
SAVE_REGS_STRING
|
|
|
|
" movl %esp, %eax\n"
|
|
|
|
" call trampoline_handler\n"
|
|
|
|
/* Move flags to cs */
|
|
|
|
" movl 56(%esp), %edx\n"
|
|
|
|
" movl %edx, 52(%esp)\n"
|
|
|
|
/* Replace saved flags with true return address. */
|
|
|
|
" movl %eax, 56(%esp)\n"
|
|
|
|
RESTORE_REGS_STRING
|
|
|
|
" popf\n"
|
2008-01-30 20:31:21 +08:00
|
|
|
#endif
|
2016-01-22 06:49:28 +08:00
|
|
|
" ret\n"
|
|
|
|
".size kretprobe_trampoline, .-kretprobe_trampoline\n"
|
|
|
|
);
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(kretprobe_trampoline);
|
2016-02-29 12:22:40 +08:00
|
|
|
STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
|
2005-06-23 15:09:23 +08:00
|
|
|
|
2019-02-24 00:50:49 +08:00
|
|
|
static struct kprobe kretprobe_kprobe = {
|
|
|
|
.addr = (void *)kretprobe_trampoline,
|
|
|
|
};
|
|
|
|
|
2005-06-23 15:09:23 +08:00
|
|
|
/*
|
2008-01-30 20:31:21 +08:00
|
|
|
* Called from kretprobe_trampoline
|
2005-06-23 15:09:23 +08:00
|
|
|
*/
|
2018-12-08 03:38:09 +08:00
|
|
|
static __used void *trampoline_handler(struct pt_regs *regs)
|
2005-06-23 15:09:23 +08:00
|
|
|
{
|
2019-02-24 00:50:49 +08:00
|
|
|
struct kprobe_ctlblk *kcb;
|
2006-10-02 17:17:33 +08:00
|
|
|
struct kretprobe_instance *ri = NULL;
|
2006-10-02 17:17:35 +08:00
|
|
|
struct hlist_head *head, empty_rp;
|
hlist: drop the node parameter from iterators
I'm not sure why, but the hlist for each entry iterators were conceived
list_for_each_entry(pos, head, member)
The hlist ones were greedy and wanted an extra parameter:
hlist_for_each_entry(tpos, pos, head, member)
Why did they need an extra pos parameter? I'm not quite sure. Not only
they don't really need it, it also prevents the iterator from looking
exactly like the list iterator, which is unfortunate.
Besides the semantic patch, there was some manual work required:
- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small amount of places were using the 'node' parameter, this
was modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator
properly, so those had to be fixed up manually.
The semantic patch which is mostly the work of Peter Senna Tschudin is here:
@@
iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
type T;
expression a,c,d,e;
identifier b;
statement S;
@@
-T b;
<+... when != b
(
hlist_for_each_entry(a,
- b,
c, d) S
|
hlist_for_each_entry_continue(a,
- b,
c) S
|
hlist_for_each_entry_from(a,
- b,
c) S
|
hlist_for_each_entry_rcu(a,
- b,
c, d) S
|
hlist_for_each_entry_rcu_bh(a,
- b,
c, d) S
|
hlist_for_each_entry_continue_rcu_bh(a,
- b,
c) S
|
for_each_busy_worker(a, c,
- b,
d) S
|
ax25_uid_for_each(a,
- b,
c) S
|
ax25_for_each(a,
- b,
c) S
|
inet_bind_bucket_for_each(a,
- b,
c) S
|
sctp_for_each_hentry(a,
- b,
c) S
|
sk_for_each(a,
- b,
c) S
|
sk_for_each_rcu(a,
- b,
c) S
|
sk_for_each_from
-(a, b)
+(a)
S
+ sk_for_each_from(a) S
|
sk_for_each_safe(a,
- b,
c, d) S
|
sk_for_each_bound(a,
- b,
c) S
|
hlist_for_each_entry_safe(a,
- b,
c, d, e) S
|
hlist_for_each_entry_continue_rcu(a,
- b,
c) S
|
nr_neigh_for_each(a,
- b,
c) S
|
nr_neigh_for_each_safe(a,
- b,
c, d) S
|
nr_node_for_each(a,
- b,
c) S
|
nr_node_for_each_safe(a,
- b,
c, d) S
|
- for_each_gfn_sp(a, c, d, b) S
+ for_each_gfn_sp(a, c, d) S
|
- for_each_gfn_indirect_valid_sp(a, c, d, b) S
+ for_each_gfn_indirect_valid_sp(a, c, d) S
|
for_each_host(a,
- b,
c) S
|
for_each_host_safe(a,
- b,
c, d) S
|
for_each_mesh_entry(a,
- b,
c, d) S
)
...+>
[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foudnation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-28 09:06:00 +08:00
|
|
|
struct hlist_node *tmp;
|
2005-11-07 17:00:14 +08:00
|
|
|
unsigned long flags, orig_ret_address = 0;
|
2008-01-30 20:31:21 +08:00
|
|
|
unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
|
2010-08-15 14:18:04 +08:00
|
|
|
kprobe_opcode_t *correct_ret_addr = NULL;
|
2019-02-24 00:49:52 +08:00
|
|
|
void *frame_pointer;
|
|
|
|
bool skipped = false;
|
[PATCH] x86_64 specific function return probes
The following patch adds the x86_64 architecture specific implementation
for function return probes.
Function return probes is a mechanism built on top of kprobes that allows
a caller to register a handler to be called when a given function exits.
For example, to instrument the return path of sys_mkdir:
static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
{
printk("sys_mkdir exited\n");
return 0;
}
static struct kretprobe return_probe = {
.handler = sys_mkdir_exit,
};
<inside setup function>
return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
if (register_kretprobe(&return_probe)) {
printk(KERN_DEBUG "Unable to register return probe!\n");
/* do error path */
}
<inside cleanup function>
unregister_kretprobe(&return_probe);
The way this works is that:
* At system initialization time, kernel/kprobes.c installs a kprobe
on a function called kretprobe_trampoline() that is implemented in
the arch/x86_64/kernel/kprobes.c (More on this later)
* When a return probe is registered using register_kretprobe(),
kernel/kprobes.c will install a kprobe on the first instruction of the
targeted function with the pre handler set to arch_prepare_kretprobe()
which is implemented in arch/x86_64/kernel/kprobes.c.
* arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
- nodes for hanging this instance in an empty or free list
- a pointer to the return probe
- the original return address
- a pointer to the stack address
With all this stowed away, arch_prepare_kretprobe() then sets the return
address for the targeted function to a special trampoline function called
kretprobe_trampoline() implemented in arch/x86_64/kernel/kprobes.c
* The kprobe completes as normal, with control passing back to the target
function that executes as normal, and eventually returns to our trampoline
function.
* Since a kprobe was installed on kretprobe_trampoline() during system
initialization, control passes back to kprobes via the architecture
specific function trampoline_probe_handler(), which will look up the
instance in an hlist maintained by kernel/kprobes.c, and then call
the handler function.
* When trampoline_probe_handler() is done, the kprobes infrastructure
single-steps the original instruction (in this case just a nop), and
then calls trampoline_post_handler(). trampoline_post_handler() then
looks up the instance again, puts the instance back on the free list,
and then makes a long jump back to the original return instruction.
So to recap, to instrument the exit path of a function this implementation
will cause four interruptions:
- A breakpoint at the very beginning of the function allowing us to
switch out the return address
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
- A breakpoint in the trampoline function where our instrumented function
returned to
- A single step interruption to execute the original instruction that
we replaced with the break instruction (normal kprobe flow)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 15:09:23 +08:00
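As a rough, modern-module illustration of the registration flow this changelog describes (a hedged sketch only: the probed symbol do_mkdir, the maxactive value, and the use of .kp.symbol_name instead of the kallsyms_lookup_name() call shown above are illustrative assumptions, not part of this patch):

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

/* Runs when the probed function returns through the kretprobe trampoline. */
static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
        pr_info("probed function returned %ld\n", regs_return_value(regs));
        return 0;
}

static struct kretprobe my_kretprobe = {
        .handler        = my_ret_handler,
        .kp.symbol_name = "do_mkdir",   /* assumed target, purely illustrative */
        .maxactive      = 16,           /* instances kept on the per-probe free list */
};

static int __init kret_example_init(void)
{
        return register_kretprobe(&my_kretprobe);
}

static void __exit kret_example_exit(void)
{
        unregister_kretprobe(&my_kretprobe);
}

module_init(kret_example_init);
module_exit(kret_example_exit);
MODULE_LICENSE("GPL");

Loading such a module and running mkdir(1) should then log one line per return, via the trampoline mechanism described above.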
|
|
|
|
2019-02-24 00:50:49 +08:00
|
|
|
preempt_disable();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set a dummy kprobe to avoid kretprobe recursion.
|
|
|
|
* Since a kretprobe never runs in a kprobe handler, a kprobe must not
|
|
|
|
* be running at this point.
|
|
|
|
*/
|
|
|
|
kcb = get_kprobe_ctlblk();
|
|
|
|
__this_cpu_write(current_kprobe, &kretprobe_kprobe);
|
|
|
|
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
|
|
|
|
|
2006-10-02 17:17:35 +08:00
|
|
|
INIT_HLIST_HEAD(&empty_rp);
|
2008-07-25 16:46:04 +08:00
|
|
|
kretprobe_hash_lock(current, &head, &flags);
|
2008-01-30 20:31:21 +08:00
|
|
|
/* fixup registers */
|
2008-01-30 20:31:21 +08:00
|
|
|
#ifdef CONFIG_X86_64
|
2008-01-30 20:31:21 +08:00
|
|
|
regs->cs = __KERNEL_CS;
|
2019-02-24 00:49:52 +08:00
|
|
|
/* On x86-64, we use pt_regs->sp as the return address holder. */
|
|
|
|
frame_pointer = &regs->sp;
|
2008-01-30 20:31:21 +08:00
|
|
|
#else
|
|
|
|
regs->cs = __KERNEL_CS | get_kernel_rpl();
|
2009-03-23 22:14:52 +08:00
|
|
|
regs->gs = 0;
|
2019-02-24 00:49:52 +08:00
|
|
|
/* On x86-32, we use pt_regs->flags as the return address holder. */
|
|
|
|
frame_pointer = &regs->flags;
|
2008-01-30 20:31:21 +08:00
|
|
|
#endif
|
2008-01-30 20:31:21 +08:00
|
|
|
regs->ip = trampoline_address;
|
2008-01-30 20:31:21 +08:00
|
|
|
regs->orig_ax = ~0UL;
|
2005-06-23 15:09:23 +08:00
|
|
|
|
2005-06-28 06:17:10 +08:00
|
|
|
/*
|
|
|
|
* It is possible to have multiple instances associated with a given
|
2008-01-30 20:31:21 +08:00
|
|
|
* task either because multiple functions in the call path have
|
2008-10-17 01:02:37 +08:00
|
|
|
* return probes installed on them, or because more than one
|
2005-06-28 06:17:10 +08:00
|
|
|
* return probe was registered for a target function.
|
|
|
|
*
|
|
|
|
* We can handle this because:
|
2008-01-30 20:31:21 +08:00
|
|
|
* - instances are always pushed into the head of the list
|
2005-06-28 06:17:10 +08:00
|
|
|
* - when multiple return probes are registered for the same
|
2008-01-30 20:31:21 +08:00
|
|
|
* function, the (chronologically) first instance's ret_addr
|
|
|
|
* will be the real return address, and all the rest will
|
|
|
|
* point to kretprobe_trampoline.
|
2005-06-28 06:17:10 +08:00
|
|
|
*/
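A small user-space sketch of the scan this comment justifies (the addresses and the two hand-built entries are made up; only the break condition mirrors the first loop below):

#include <stdio.h>

#define TRAMPOLINE 0xdeadbeefUL        /* stands in for &kretprobe_trampoline */

struct instance { unsigned long ret_addr; };

int main(void)
{
        /*
         * Two return probes on the same function: the instance pushed last
         * (nearest the list head) recorded the already-planted trampoline,
         * while the instance pushed first holds the caller's real address.
         */
        struct instance head_first[] = {
                { .ret_addr = TRAMPOLINE },
                { .ret_addr = 0x400123UL },
        };
        unsigned long orig_ret = 0;

        for (unsigned int i = 0; i < 2; i++) {
                orig_ret = head_first[i].ret_addr;
                if (orig_ret != TRAMPOLINE)
                        break;  /* real return address of the current frame */
        }
        printf("orig_ret_address = %#lx\n", orig_ret);  /* prints 0x400123 */
        return 0;
}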
|
2017-02-06 17:55:43 +08:00
|
|
|
hlist_for_each_entry(ri, head, hlist) {
|
2006-10-02 17:17:33 +08:00
|
|
|
if (ri->task != current)
|
2005-06-28 06:17:10 +08:00
|
|
|
/* another task is sharing our hash bucket */
|
2006-10-02 17:17:33 +08:00
|
|
|
continue;
|
2019-02-24 00:49:52 +08:00
|
|
|
/*
|
|
|
|
* Return probes must be pushed onto this hash list in the correct
|
|
|
|
* order (same as return order) so that they can be popped
|
|
|
|
* correctly. However, if we find one pushed in the wrong
|
|
|
|
* order, it means we hit a function which should not have been
|
|
|
|
* probed, because the out-of-order entry was pushed while
|
|
|
|
* another kretprobe was itself being processed.
|
|
|
|
*/
|
|
|
|
if (ri->fp != frame_pointer) {
|
|
|
|
if (!skipped)
|
|
|
|
pr_warn("kretprobe is stacked incorrectly. Trying to fixup.\n");
|
|
|
|
skipped = true;
|
|
|
|
continue;
|
|
|
|
}
|
2005-06-28 06:17:10 +08:00
|
|
|
|
2010-08-15 14:18:04 +08:00
|
|
|
orig_ret_address = (unsigned long)ri->ret_addr;
|
2019-02-24 00:49:52 +08:00
|
|
|
if (skipped)
|
|
|
|
pr_warn("%ps must be blacklisted because of incorrect kretprobe order\n",
|
|
|
|
ri->rp->kp.addr);
|
2010-08-15 14:18:04 +08:00
|
|
|
|
|
|
|
if (orig_ret_address != trampoline_address)
|
|
|
|
/*
|
|
|
|
* This is the real return address. Any other
|
|
|
|
* instances associated with this task are for
|
|
|
|
* other calls deeper on the call stack
|
|
|
|
*/
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
kretprobe_assert(ri, orig_ret_address, trampoline_address);
|
|
|
|
|
|
|
|
correct_ret_addr = ri->ret_addr;
|
2013-02-28 09:06:00 +08:00
|
|
|
hlist_for_each_entry_safe(ri, tmp, head, hlist) {
|
2010-08-15 14:18:04 +08:00
|
|
|
if (ri->task != current)
|
|
|
|
/* another task is sharing our hash bucket */
|
|
|
|
continue;
|
2019-02-24 00:49:52 +08:00
|
|
|
if (ri->fp != frame_pointer)
|
|
|
|
continue;
|
2010-08-15 14:18:04 +08:00
|
|
|
|
|
|
|
orig_ret_address = (unsigned long)ri->ret_addr;
|
2008-01-30 20:31:21 +08:00
|
|
|
if (ri->rp && ri->rp->handler) {
|
2010-12-07 01:16:25 +08:00
|
|
|
__this_cpu_write(current_kprobe, &ri->rp->kp);
|
2010-08-15 14:18:04 +08:00
|
|
|
ri->ret_addr = correct_ret_addr;
|
2005-06-28 06:17:10 +08:00
|
|
|
ri->rp->handler(ri, regs);
|
2019-02-24 00:50:49 +08:00
|
|
|
__this_cpu_write(current_kprobe, &kretprobe_kprobe);
|
2008-01-30 20:31:21 +08:00
|
|
|
}
|
2005-06-28 06:17:10 +08:00
|
|
|
|
2006-10-02 17:17:35 +08:00
|
|
|
recycle_rp_inst(ri, &empty_rp);
|
2005-06-28 06:17:10 +08:00
|
|
|
|
|
|
|
if (orig_ret_address != trampoline_address)
|
|
|
|
/*
|
|
|
|
* This is the real return address. Any other
|
|
|
|
* instances associated with this task are for
|
|
|
|
* other calls deeper on the call stack
|
|
|
|
*/
|
|
|
|
break;
|
2005-06-23 15:09:23 +08:00
|
|
|
}
|
2005-06-28 06:17:10 +08:00
|
|
|
|
2008-07-25 16:46:04 +08:00
|
|
|
kretprobe_hash_unlock(current, &flags);
|
2005-06-28 06:17:10 +08:00
|
|
|
|
2019-02-24 00:50:49 +08:00
|
|
|
__this_cpu_write(current_kprobe, NULL);
|
|
|
|
preempt_enable();
|
|
|
|
|
2013-02-28 09:06:00 +08:00
|
|
|
hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
|
2006-10-02 17:17:35 +08:00
|
|
|
hlist_del(&ri->hlist);
|
|
|
|
kfree(ri);
|
|
|
|
}
|
2008-01-30 20:31:21 +08:00
|
|
|
return (void *)orig_ret_address;
|
2005-06-23 15:09:23 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(trampoline_handler);
|
2005-06-23 15:09:23 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Called after single-stepping. p->addr is the address of the
|
|
|
|
* instruction whose first byte has been replaced by the "int 3"
|
|
|
|
* instruction. To avoid the SMP problems that can occur when we
|
|
|
|
* temporarily put back the original opcode to single-step, we
|
|
|
|
* single-stepped a copy of the instruction. The address of this
|
|
|
|
* copy is p->ainsn.insn.
|
|
|
|
*
|
|
|
|
* This function prepares to return from the post-single-step
|
|
|
|
* interrupt. We have to fix up the stack as follows:
|
|
|
|
*
|
|
|
|
* 0) Except in the case of absolute or indirect jump or call instructions,
|
2008-01-30 20:30:56 +08:00
|
|
|
* the new ip is relative to the copied instruction. We need to make
|
2005-04-17 06:20:36 +08:00
|
|
|
* it relative to the original instruction.
|
|
|
|
*
|
|
|
|
* 1) If the single-stepped instruction was pushfl, then the TF and IF
|
2008-01-30 20:30:56 +08:00
|
|
|
* flags are set in the just-pushed flags, and may need to be cleared.
|
2005-04-17 06:20:36 +08:00
|
|
|
*
|
|
|
|
* 2) If the single-stepped instruction was a call, the return address
|
|
|
|
* that is atop the stack is the address following the copied instruction.
|
|
|
|
* We need to make it the address following the original instruction.
|
2008-01-30 20:31:21 +08:00
|
|
|
*
|
|
|
|
* If this is the first time we've single-stepped the instruction at
|
|
|
|
* this probepoint, and the instruction is boostable, boost it: add a
|
|
|
|
* jump instruction after the copied instruction, that jumps to the next
|
|
|
|
* instruction after the probepoint.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2014-04-17 16:18:14 +08:00
|
|
|
static void resume_execution(struct kprobe *p, struct pt_regs *regs,
|
|
|
|
struct kprobe_ctlblk *kcb)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:31:21 +08:00
|
|
|
unsigned long *tos = stack_addr(regs);
|
|
|
|
unsigned long copy_ip = (unsigned long)p->ainsn.insn;
|
|
|
|
unsigned long orig_ip = (unsigned long)p->addr;
|
2005-04-17 06:20:36 +08:00
|
|
|
kprobe_opcode_t *insn = p->ainsn.insn;
|
|
|
|
|
2010-06-29 13:53:50 +08:00
|
|
|
/* Skip prefixes */
|
|
|
|
insn = skip_prefixes(insn);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:31:27 +08:00
|
|
|
regs->flags &= ~X86_EFLAGS_TF;
|
2005-04-17 06:20:36 +08:00
|
|
|
switch (*insn) {
|
2007-12-19 01:05:58 +08:00
|
|
|
case 0x9c: /* pushfl */
|
2008-01-30 20:31:27 +08:00
|
|
|
*tos &= ~(X86_EFLAGS_TF | X86_EFLAGS_IF);
|
2008-01-30 20:31:21 +08:00
|
|
|
*tos |= kcb->kprobe_old_flags;
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2007-12-19 01:05:58 +08:00
|
|
|
case 0xc2: /* iret/ret/lret */
|
|
|
|
case 0xc3:
|
2005-05-06 07:15:40 +08:00
|
|
|
case 0xca:
|
2007-12-19 01:05:58 +08:00
|
|
|
case 0xcb:
|
|
|
|
case 0xcf:
|
|
|
|
case 0xea: /* jmp absolute -- ip is correct */
|
|
|
|
/* ip is already adjusted, no more changes required */
|
2017-03-29 13:01:35 +08:00
|
|
|
p->ainsn.boostable = true;
|
2007-12-19 01:05:58 +08:00
|
|
|
goto no_change;
|
|
|
|
case 0xe8: /* call relative - Fix return addr */
|
2008-01-30 20:31:21 +08:00
|
|
|
*tos = orig_ip + (*tos - copy_ip);
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2008-01-30 20:31:43 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
2008-01-30 20:31:21 +08:00
|
|
|
case 0x9a: /* call absolute -- same as call absolute, indirect */
|
|
|
|
*tos = orig_ip + (*tos - copy_ip);
|
|
|
|
goto no_change;
|
|
|
|
#endif
|
2005-04-17 06:20:36 +08:00
|
|
|
case 0xff:
|
2006-05-21 06:00:21 +08:00
|
|
|
if ((insn[1] & 0x30) == 0x10) {
|
2008-01-30 20:31:21 +08:00
|
|
|
/*
|
|
|
|
* call absolute, indirect
|
|
|
|
* Fix return addr; ip is correct.
|
|
|
|
* But this is not boostable
|
|
|
|
*/
|
|
|
|
*tos = orig_ip + (*tos - copy_ip);
|
2007-12-19 01:05:58 +08:00
|
|
|
goto no_change;
|
2008-01-30 20:31:21 +08:00
|
|
|
} else if (((insn[1] & 0x31) == 0x20) ||
|
|
|
|
((insn[1] & 0x31) == 0x21)) {
|
|
|
|
/*
|
|
|
|
* jmp near and far, absolute indirect
|
|
|
|
* ip is correct. And this is boostable
|
|
|
|
*/
|
2017-03-29 13:01:35 +08:00
|
|
|
p->ainsn.boostable = true;
|
2007-12-19 01:05:58 +08:00
|
|
|
goto no_change;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2008-01-30 20:31:21 +08:00
|
|
|
regs->ip += orig_ip - copy_ip;
|
2008-01-30 20:30:56 +08:00
|
|
|
|
2007-12-19 01:05:58 +08:00
|
|
|
no_change:
|
2008-01-30 20:30:54 +08:00
|
|
|
restore_btf();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(resume_execution);
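For a concrete check of the fixup arithmetic in resume_execution() above, here is a stand-alone sketch with made-up addresses showing how both the pushed return address (case 0xe8) and the saved ip are mapped from the single-stepped copy back to the original probe location:

#include <stdio.h>

int main(void)
{
        unsigned long orig_ip = 0xffffffff81000100UL;   /* p->addr (probe point) */
        unsigned long copy_ip = 0xffffffffa0002000UL;   /* p->ainsn.insn (copy)  */

        /*
         * After single-stepping a 5-byte "call rel32" taken from the copy,
         * the saved ip and the return address pushed on the stack both
         * point just past the *copied* instruction.
         */
        unsigned long ip  = copy_ip + 5;
        unsigned long tos = copy_ip + 5;

        tos = orig_ip + (tos - copy_ip);        /* case 0xe8: fix return addr */
        ip += orig_ip - copy_ip;                /* common ip relocation       */

        printf("ip  -> %#lx\n", ip);            /* orig_ip + 5 */
        printf("tos -> %#lx\n", tos);           /* orig_ip + 5 */
        return 0;
}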
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:31:21 +08:00
|
|
|
/*
|
|
|
|
* Interrupts are disabled on entry as trap1 is an interrupt gate and they
|
tree-wide: fix assorted typos all over the place
That is "success", "unknown", "through", "performance", "[re|un]mapping"
, "access", "default", "reasonable", "[con]currently", "temperature"
, "channel", "[un]used", "application", "example","hierarchy", "therefore"
, "[over|under]flow", "contiguous", "threshold", "enough" and others.
Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2009-11-14 23:09:05 +08:00
|
|
|
* remain disabled throughout this function.
|
2008-01-30 20:31:21 +08:00
|
|
|
*/
|
2014-04-17 16:18:14 +08:00
|
|
|
int kprobe_debug_handler(struct pt_regs *regs)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2005-11-07 17:00:12 +08:00
|
|
|
struct kprobe *cur = kprobe_running();
|
|
|
|
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
|
|
|
|
|
|
|
|
if (!cur)
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
|
2008-03-16 16:21:21 +08:00
|
|
|
resume_execution(cur, regs, kcb);
|
|
|
|
regs->flags |= kcb->kprobe_saved_flags;
|
|
|
|
|
2005-11-07 17:00:12 +08:00
|
|
|
if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
|
|
|
|
kcb->kprobe_status = KPROBE_HIT_SSDONE;
|
|
|
|
cur->post_handler(cur, regs, 0);
|
2005-06-23 15:09:37 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:31:21 +08:00
|
|
|
/* Restore back the original saved kprobes variables and continue. */
|
2005-11-07 17:00:12 +08:00
|
|
|
if (kcb->kprobe_status == KPROBE_REENTER) {
|
|
|
|
restore_previous_kprobe(kcb);
|
2005-06-23 15:09:37 +08:00
|
|
|
goto out;
|
|
|
|
}
|
2005-11-07 17:00:12 +08:00
|
|
|
reset_current_kprobe();
|
2005-06-23 15:09:37 +08:00
|
|
|
out:
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2008-01-30 20:30:56 +08:00
|
|
|
* if somebody else is single-stepping across a probe point, flags
|
2005-04-17 06:20:36 +08:00
|
|
|
* will have TF set, in which case, continue the remaining processing
|
|
|
|
* of do_debug, as if this is not a probe hit.
|
|
|
|
*/
|
2008-01-30 20:31:27 +08:00
|
|
|
if (regs->flags & X86_EFLAGS_TF)
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
return 1;
|
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(kprobe_debug_handler);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2014-04-17 16:18:14 +08:00
|
|
|
int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2005-11-07 17:00:12 +08:00
|
|
|
struct kprobe *cur = kprobe_running();
|
|
|
|
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
|
|
|
|
|
2014-04-17 16:16:44 +08:00
|
|
|
if (unlikely(regs->ip == (unsigned long)cur->ainsn.insn)) {
|
|
|
|
/* This must happen on single-stepping */
|
|
|
|
WARN_ON(kcb->kprobe_status != KPROBE_HIT_SS &&
|
|
|
|
kcb->kprobe_status != KPROBE_REENTER);
|
2006-03-26 17:38:23 +08:00
|
|
|
/*
|
|
|
|
* We are here because the instruction being single
|
|
|
|
* stepped caused a page fault. We reset the current
|
2008-01-30 20:30:56 +08:00
|
|
|
* kprobe and the ip points back to the probe address
|
2006-03-26 17:38:23 +08:00
|
|
|
* and allow the page fault handler to continue as a
|
|
|
|
* normal page fault.
|
|
|
|
*/
|
2008-01-30 20:30:56 +08:00
|
|
|
regs->ip = (unsigned long)cur->addr;
|
2016-06-11 22:06:53 +08:00
|
|
|
/*
|
|
|
|
* Trap flag (TF) has been set here because this fault
|
|
|
|
* happened where the single stepping will be done.
|
|
|
|
* So clear it by resetting the current kprobe:
|
|
|
|
*/
|
|
|
|
regs->flags &= ~X86_EFLAGS_TF;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the TF flag was set before the kprobe hit,
|
|
|
|
* don't touch it:
|
|
|
|
*/
|
2008-01-30 20:31:21 +08:00
|
|
|
regs->flags |= kcb->kprobe_old_flags;
|
2016-06-11 22:06:53 +08:00
|
|
|
|
2006-03-26 17:38:23 +08:00
|
|
|
if (kcb->kprobe_status == KPROBE_REENTER)
|
|
|
|
restore_previous_kprobe(kcb);
|
|
|
|
else
|
|
|
|
reset_current_kprobe();
|
2014-04-17 16:16:44 +08:00
|
|
|
} else if (kcb->kprobe_status == KPROBE_HIT_ACTIVE ||
|
|
|
|
kcb->kprobe_status == KPROBE_HIT_SSDONE) {
|
2006-03-26 17:38:23 +08:00
|
|
|
/*
|
|
|
|
* We increment the nmissed count for accounting,
|
2008-01-30 20:31:21 +08:00
|
|
|
* and we could also use the npre/npostfault counts for accounting
|
2006-03-26 17:38:23 +08:00
|
|
|
* these specific fault cases.
|
|
|
|
*/
|
|
|
|
kprobes_inc_nmissed_count(cur);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We come here because instructions in the pre/post
|
|
|
|
* handler caused the page_fault, this could happen
|
|
|
|
* if handler tries to access user space by
|
|
|
|
* copy_from_user(), get_user() etc. Let the
|
|
|
|
* user-specified handler try to fix it first.
|
|
|
|
*/
|
|
|
|
if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
|
|
|
|
return 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2014-04-17 16:16:44 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
}
|
2014-04-17 16:18:14 +08:00
|
|
|
NOKPROBE_SYMBOL(kprobe_fault_handler);
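The pre, post and fault callbacks dispatched by kprobe_debug_handler() and kprobe_fault_handler() above are all optional members of struct kprobe. A minimal hedged module sketch wiring up all three (the target symbol do_mkdir is an illustrative assumption) could look like this:

#include <linux/module.h>
#include <linux/kprobes.h>

static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("pre: hit %pS\n", p->addr);
        return 0;                       /* 0: continue and single-step the copy */
}

static void my_post(struct kprobe *p, struct pt_regs *regs, unsigned long flags)
{
        pr_info("post: single step of %pS done\n", p->addr);
}

static int my_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
        /* Returning 1 would tell kprobe_fault_handler() the fault is handled. */
        return 0;
}

static struct kprobe my_kp = {
        .symbol_name    = "do_mkdir",   /* assumed target, purely illustrative */
        .pre_handler    = my_pre,
        .post_handler   = my_post,
        .fault_handler  = my_fault,
};

static int __init kp_example_init(void)
{
        return register_kprobe(&my_kp);
}

static void __exit kp_example_exit(void)
{
        unregister_kprobe(&my_kp);
}

module_init(kp_example_init);
module_exit(kp_example_exit);
MODULE_LICENSE("GPL");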
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-12-17 16:21:24 +08:00
|
|
|
int __init arch_populate_kprobe_blacklist(void)
|
|
|
|
{
|
2019-02-13 00:12:44 +08:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
|
|
|
|
(unsigned long)__irqentry_text_end);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
2018-12-17 16:21:24 +08:00
|
|
|
return kprobe_add_area_blacklist((unsigned long)__entry_text_start,
|
|
|
|
(unsigned long)__entry_text_end);
|
|
|
|
}
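One practical consequence of this blacklist, shown as a hedged sketch (the symbol name and the exact error value are assumptions based on current kernels): register_kprobe() refuses probes placed inside these ranges, typically with -EINVAL.

#include <linux/module.h>
#include <linux/kprobes.h>

/* entry_SYSCALL_64 lives in the __entry_text range blacklisted above. */
static struct kprobe blacklisted_kp = {
        .symbol_name = "entry_SYSCALL_64",
};

static int __init blacklist_demo_init(void)
{
        int ret = register_kprobe(&blacklisted_kp);

        pr_info("register_kprobe() on entry text returned %d\n", ret);
        return ret;     /* expected to be negative, so the module never loads */
}

static void __exit blacklist_demo_exit(void)
{
        /* Registration is expected to fail; nothing to unregister. */
}

module_init(blacklist_demo_init);
module_exit(blacklist_demo_exit);
MODULE_LICENSE("GPL");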
|
|
|
|
|
2005-07-06 09:54:50 +08:00
|
|
|
int __init arch_init_kprobes(void)
|
2005-06-28 06:17:10 +08:00
|
|
|
{
|
2013-07-18 19:47:50 +08:00
|
|
|
return 0;
|
2005-06-28 06:17:10 +08:00
|
|
|
}
|
2007-05-08 15:34:16 +08:00
|
|
|
|
2014-04-17 16:17:47 +08:00
|
|
|
int arch_trampoline_kprobe(struct kprobe *p)
|
2007-05-08 15:34:16 +08:00
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|