kprobes: Avoid false KASAN reports during stack copy
Kprobes save and restore raw stack chunks with memcpy(). With KASAN these chunks can contain poisoned stack redzones, and as a result the memcpy() interceptor produces false stack-out-of-bounds reports.

Use __memcpy() instead of memcpy() for stack copying. __memcpy() is not instrumented by KASAN and does not lead to such false reports.

Currently there is a spew of KASAN reports during boot if CONFIG_KPROBES_SANITY_TEST is enabled:

[ ] Kprobe smoke test: started
[ ] ==================================================================
[ ] BUG: KASAN: stack-out-of-bounds in setjmp_pre_handler+0x17c/0x280 at addr ffff88085259fba8
[ ] Read of size 64 by task swapper/0/1
[ ] page:ffffea00214967c0 count:0 mapcount:0 mapping: (null) index:0x0
[ ] flags: 0x2fffff80000000()
[ ] page dumped because: kasan: bad access detected
[...]

Reported-by: CAI Qian <caiqian@redhat.com>
Tested-by: CAI Qian <caiqian@redhat.com>
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kasan-dev@googlegroups.com
[ Improved various details. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 9254139ad0 (parent 1001354ca3)
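For readers who want to see the effect outside the kernel, below is a minimal userspace analogy of the change, not the kprobes code itself: under ASan, an instrumented memcpy() over a raw stack span that crosses a local variable's redzones is reported, while a copy helper marked with the compiler's no_sanitize_address attribute is not. This mirrors why the patch switches to the uninstrumented __memcpy(). The helper name raw_stack_copy() and the buffer sizes are illustrative assumptions; build with -fsanitize=address to observe the difference.

/*
 * Userspace sketch only -- not the kprobes code. It mimics saving a raw
 * stack chunk (redzones included), the way setjmp_pre_handler() does.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * ASan skips shadow checks inside this function, analogous to KASAN not
 * instrumenting __memcpy() in the kernel.
 */
__attribute__((no_sanitize_address))
static void raw_stack_copy(void *dst, const void *src, size_t len)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	while (len--)
		*d++ = *s++;
}

int main(void)
{
	char local[16];			/* ASan surrounds this with poisoned redzones */
	static char saved[128];

	memset(local, 0, sizeof(local));

	/*
	 * Deliberately copy more than sizeof(local): a raw stack span that
	 * includes the redzones, the way a MIN_STACK_SIZE() span can.
	 *
	 * memcpy(saved, local, sizeof(saved));   -- instrumented, reported by
	 *                                           ASan as stack-buffer-overflow
	 */
	raw_stack_copy(saved, local, sizeof(saved));	/* uninstrumented, silent */

	printf("saved %zu raw stack bytes\n", sizeof(saved));
	return 0;
}

Swapping the raw_stack_copy() call for the commented-out memcpy() makes the report reappear, which is the userspace counterpart of the KASAN splat quoted above.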
@@ -1057,9 +1057,10 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	 * tailcall optimization. So, to be absolutely safe
 	 * we also save and restore enough stack bytes to cover
 	 * the argument area.
+	 * Use __memcpy() to avoid KASAN stack out-of-bounds reports as we copy
+	 * raw stack chunk with redzones:
 	 */
-	memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr,
-	       MIN_STACK_SIZE(addr));
+	__memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr, MIN_STACK_SIZE(addr));
 	regs->flags &= ~X86_EFLAGS_IF;
 	trace_hardirqs_off();
 	regs->ip = (unsigned long)(jp->entry);

@@ -1118,7 +1119,7 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	/* It's OK to start function graph tracing again */
 	unpause_graph_tracing();
 	*regs = kcb->jprobe_saved_regs;
-	memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
+	__memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
 	preempt_enable_no_resched();
 	return 1;
 }