#ifndef _LINUX_FTRACE_IRQ_H
#define _LINUX_FTRACE_IRQ_H

ring-buffer: add NMI protection for spinlocks

Impact: prevent deadlock in NMI

The ring buffers are not yet totally lockless when writing to the
buffer. When a writer crosses a page, it grabs a per-cpu spinlock to
protect against a reader. The spinlocks taken by a writer are not to
protect against other writers, since a writer can only write to its own
per-cpu buffer. The spinlocks protect against readers, which can touch
any cpu buffer. The writers are made reentrant by having the spinlocks
disable interrupts.

The problem arises when an NMI writes to the buffer and that write
crosses a page boundary. If it grabs a spinlock, it can race with
another writer (since disabling interrupts does not protect against
NMIs) or with a reader on the same CPU. Luckily, most users are not
reentrant and so protect against this issue. But if a user of the ring
buffer becomes reentrant (which the ring buffers do allow) and the NMI
also writes to the ring buffer, we risk a deadlock.

This patch moves the ftrace_nmi_enter() called by nmi_enter() into the
ring buffer code. It renames the current ftrace_nmi_enter() used by
arch-specific code to arch_ftrace_nmi_enter() and updates the Kconfig
to handle it.

When an NMI fires, it sets a per-cpu variable in the ring buffer code
and clears it when the NMI exits. If a write to the ring buffer crosses
a page boundary inside an NMI, a trylock is used on the spinlock
instead. If the spinlock fails to be acquired, the entry is discarded
(see the sketch below).

This bug appeared in the ftrace work in the RT tree, where event
tracing is reentrant. This workaround solved the deadlocks that
appeared there.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
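
A minimal sketch of the per-cpu flag and trylock fallback described
above. The names (rb_in_nmi, rb_nmi_enter, rb_reader_lock) are
hypothetical stand-ins for the actual identifiers in
kernel/trace/ring_buffer.c:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_PER_CPU(int, rb_in_nmi);

/* Set and cleared by the ring buffer's NMI entry/exit hooks. */
static void rb_nmi_enter(void) { __this_cpu_write(rb_in_nmi, 1); }
static void rb_nmi_exit(void)  { __this_cpu_write(rb_in_nmi, 0); }

/*
 * Returns true if the reader lock was taken.  Outside NMI context,
 * block on the lock as before; inside an NMI, only trylock, and let
 * the caller discard the entry on failure.
 */
static bool rb_reader_lock(raw_spinlock_t *lock)
{
	if (!__this_cpu_read(rb_in_nmi)) {
		raw_spin_lock(lock);
		return true;
	}
	return raw_spin_trylock(lock);
}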

#ifdef CONFIG_FTRACE_NMI_ENTER
extern void arch_ftrace_nmi_enter(void);
extern void arch_ftrace_nmi_exit(void);
#else
static inline void arch_ftrace_nmi_enter(void) { }
static inline void arch_ftrace_nmi_exit(void) { }
#endif
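
/*
 * Arch-side sketch, loosely modeled on the old x86 dynamic-ftrace
 * code-patching synchronization.  The counter name nmi_running and
 * the scheme itself are assumptions for illustration, not part of
 * this header: an arch that patches kernel text can use these hooks
 * to know when an NMI might be executing code under modification.
 */
#include <linux/atomic.h>

static atomic_t nmi_running = ATOMIC_INIT(0);

void arch_ftrace_nmi_enter(void)
{
	atomic_inc(&nmi_running);	/* an NMI is now in flight */
	smp_mb__after_atomic();
}

void arch_ftrace_nmi_exit(void)
{
	smp_mb__before_atomic();
	atomic_dec(&nmi_running);	/* NMI done; patcher may proceed */
}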

#ifdef CONFIG_HWLAT_TRACER
extern bool trace_hwlat_callback_enabled;
extern void trace_hwlat_callback(bool enter);
#endif
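
/*
 * Tracer-side sketch of what the hwlat callback might do with these
 * hooks (the variables nmi_ts_start, nmi_total_ts, and nmi_count are
 * assumptions; the real code lives in kernel/trace/trace_hwlat.c):
 * timestamp NMI entry and exit so that time spent in NMIs is not
 * misreported as hardware-induced latency.
 */
#include <linux/trace_clock.h>
#include <linux/types.h>

static u64 nmi_ts_start, nmi_total_ts;
static int nmi_count;

void trace_hwlat_callback(bool enter)
{
	if (enter) {
		nmi_ts_start = trace_clock_local();
		nmi_count++;
	} else {
		nmi_total_ts += trace_clock_local() - nmi_ts_start;
	}
}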

/*
 * Called by nmi_enter(): let the hwlat tracer timestamp the NMI,
 * then run the arch hook.
 */
static inline void ftrace_nmi_enter(void)
{
#ifdef CONFIG_HWLAT_TRACER
	if (trace_hwlat_callback_enabled)
		trace_hwlat_callback(true);
#endif
	arch_ftrace_nmi_enter();
}

/*
 * Called by nmi_exit(): run the arch hook first, then let the hwlat
 * tracer account the time spent in the NMI.
 */
static inline void ftrace_nmi_exit(void)
{
	arch_ftrace_nmi_exit();
#ifdef CONFIG_HWLAT_TRACER
	if (trace_hwlat_callback_enabled)
		trace_hwlat_callback(false);
#endif
}
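
/*
 * Usage sketch (hypothetical handler, for illustration only): the
 * nmi_enter()/nmi_exit() macros in <linux/hardirq.h> invoke these
 * hooks, so every NMI is bracketed roughly like this:
 */
static void example_nmi_handler(void)
{
	ftrace_nmi_enter();	/* hwlat timestamp + arch hook */
	/* ... handle the NMI source ... */
	ftrace_nmi_exit();	/* arch hook + hwlat accounting */
}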
#endif /* _LINUX_FTRACE_IRQ_H */