/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* internal.h - printk internal definitions
*/
#include <linux/percpu.h>
#ifdef CONFIG_PRINTK
/* Flags for a single printk record. */
enum printk_info_flags {
	LOG_NEWLINE	= 2,	/* text ended with a newline */
	LOG_CONT	= 8,	/* text is a fragment of a continuation line */
};
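/*
 * Format a printk() message and store it in the kernel log buffer.
 * This does not attempt any console output; the caller (vprintk_emit())
 * handles flushing to the consoles separately.
 */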
__printf(4, 0)
int vprintk_store(int facility, int level,
		  const struct dev_printk_info *dev_info,
		  const char *fmt, va_list args);
__printf(1, 0) int vprintk_default(const char *fmt, va_list args);
__printf(1, 0) int vprintk_deferred(const char *fmt, va_list args);
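/*
 * Returns true once the per-CPU areas are initialized. Deferred printing
 * relies on per-CPU irq_work, so queueing that work (e.g. the klogd
 * wake-up) before this returns true would be an illegal per-CPU access.
 */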
bool printk_percpu_data_ready(void);
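/*
 * Wrap code that may recurse into printk() (scheduler wake-ups from
 * console_unlock(), lock debugging, etc.) in a printk-safe section:
 * while the per-CPU printk context bit is set, printk() is redirected
 * to a per-CPU buffer and the messages are flushed back to the main
 * log buffer later, avoiding deadlocks on locks such as sem->lock.
 */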
#define printk_safe_enter_irqsave(flags)	\
	do {					\
		local_irq_save(flags);		\
		__printk_safe_enter();		\
	} while (0)

#define printk_safe_exit_irqrestore(flags)	\
	do {					\
		__printk_safe_exit();		\
		local_irq_restore(flags);	\
	} while (0)
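/*
 * Usage sketch (illustration only, following the example in the
 * printk_safe changelog): any printk() issued inside the section ends up
 * in the per-CPU safe buffer and is flushed to the log buffer later.
 *
 *	unsigned long flags;
 *
 *	printk_safe_enter_irqsave(flags);
 *	printk("stored in the per-CPU safe buffer, flushed later\n");
 *	printk_safe_exit_irqrestore(flags);
 */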
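/*
 * Request a deferred console flush/klogd wake-up via irq_work instead
 * of printing immediately, for contexts where calling directly into
 * console or scheduler code could recurse or deadlock.
 */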
void defer_console_output(void);
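/*
 * Parse a KERN_<LEVEL>/KERN_CONT prefix from @text. Roughly: for a
 * message beginning with KERN_ERR ("\0013"), *level is set to 3 (when
 * still at the default level) and the consumed prefix length (2 bytes)
 * is returned; a KERN_CONT prefix sets LOG_CONT in *flags instead.
 */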
u16 printk_parse_prefix(const char *text, int *level,
			enum printk_info_flags *flags);
#else
/*
 * In !PRINTK builds we still export the console_sem semaphore and some
 * of the console functions (console_unlock() etc.), so printk-safe must
 * preserve the existing local IRQ guarantees.
*/
#define printk_safe_enter_irqsave(flags) local_irq_save(flags)
#define printk_safe_exit_irqrestore(flags) local_irq_restore(flags)
static inline bool printk_percpu_data_ready(void) { return false; }
#endif /* CONFIG_PRINTK */