From e11da6799fe0b1e49668cec68ab646b46712c57c Mon Sep 17 00:00:00 2001
From: Chris Down
Date: Wed, 30 Mar 2022 15:32:20 +0100
Subject: [PATCH 01/18] MAINTAINERS: Add printk indexing maintainers on
 mention of printk_index

This will primarily catch new and changed printk_index_subsys_emit
calls, but it's also worth catching changes to other printk indexing
infrastructure outside of kernel/printk/index.c. This avoids churn due
to missing ccs when adding new printk indexes, as was the case recently
for the first round of the XFS printk indexing patches.

Signed-off-by: Chris Down
Cc: Petr Mladek
Signed-off-by: Petr Mladek
Link: https://lore.kernel.org/r/YkRp9IhToTmTnkl7@chrisdown.name
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c10fbd13080a..e1e6007a724d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15586,6 +15586,7 @@ PRINTK INDEXING
 R: Chris Down
 S: Maintained
 F: kernel/printk/index.c
+K: printk_index

 PROC FILESYSTEM
 L: linux-kernel@vger.kernel.org

From a5c7a39f508ae1fd3288493b96dd26079bae41bf Mon Sep 17 00:00:00 2001
From: Petr Mladek
Date: Tue, 5 Apr 2022 13:48:29 +0200
Subject: [PATCH 02/18] printk/index: Printk index feature documentation

Document the printk index feature. The primary motivation is to explain
that it is not creating KABI from particular printk() calls.

Acked-by: Sergey Senozhatsky
Reviewed-by: Chris Down
Signed-off-by: Petr Mladek
---
 Documentation/core-api/index.rst        |   1 +
 Documentation/core-api/printk-index.rst | 137 ++++++++++++++++++++++++
 MAINTAINERS                             |   1 +
 3 files changed, 139 insertions(+)
 create mode 100644 Documentation/core-api/printk-index.rst

diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 972d46a5ddf6..8a3fce1fe582 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -20,6 +20,7 @@ it.
    workqueue
    printk-basics
    printk-formats
+   printk-index
    symbol-namespaces

 Data structures and low-level utilities

diff --git a/Documentation/core-api/printk-index.rst b/Documentation/core-api/printk-index.rst
new file mode 100644
index 000000000000..3062f37d119b
--- /dev/null
+++ b/Documentation/core-api/printk-index.rst
@@ -0,0 +1,137 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============
+Printk Index
+============
+
+There are many ways to monitor the state of the system. One important
+source of information is the system log. It provides a lot of
+information, including more or less important warnings and error
+messages.
+
+There are monitoring tools that filter and take action based on messages
+logged.
+
+The kernel messages are evolving together with the code. As a result,
+particular kernel messages are not KABI and never will be!
+
+This is a huge challenge for maintaining system log monitors. It
+requires knowing what messages were updated in a particular kernel
+version and why. Finding these changes in the sources would require
+non-trivial parsers. Also, it would require matching the sources with
+the binary kernel, which is not always trivial. Various changes might
+be backported. Various kernel versions might be used on different
+monitored systems.
+
+This is where the printk index feature becomes useful. It provides
+a dump of the printk formats used throughout the source code of the
+kernel and the modules on the running system. It is accessible at
+runtime via debugfs.
+
+The printk index helps to find changes in the message formats. It also
+helps to track the strings back to the kernel sources and the related
+commit.
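+
+For example, a log monitoring tool might dump the whole index with a
+trivial userspace helper. The following is a minimal editorial sketch
+(not part of the original document); it assumes debugfs is mounted at
+the usual /sys/kernel/debug location::
+
+   #include <stdio.h>
+
+   int main(void)
+   {
+           /* Dump the printk index for the core kernel image. */
+           FILE *f = fopen("/sys/kernel/debug/printk/index/vmlinux", "r");
+           char line[1024];
+
+           if (!f) {
+                   perror("fopen");
+                   return 1;
+           }
+           while (fgets(line, sizeof(line), f))
+                   fputs(line, stdout);
+           fclose(f);
+           return 0;
+   }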
+
+
+User Interface
+==============
+
+The index of printk formats is split into separate files. The files
+are named according to the binaries where the printk formats are built
+in. There is always "vmlinux" and optionally also modules, for
+example::
+
+   /sys/kernel/debug/printk/index/vmlinux
+   /sys/kernel/debug/printk/index/ext4
+   /sys/kernel/debug/printk/index/scsi_mod
+
+Note that only loaded modules are shown. Also, printk formats from a
+module might appear in "vmlinux" when the module is built in.
+
+The content is inspired by the dynamic debug interface and looks
+like::
+
+   $> cd /sys/kernel/debug/printk/index
+   $> head -1 vmlinux; shuf -n 5 vmlinux
+   # <level[,flags]> filename:line function "format"
+   <5> block/blk-settings.c:661 disk_stack_limits "%s: Warning: Device %s is misaligned\n"
+   <4> kernel/trace/trace.c:8296 trace_create_file "Could not create tracefs '%s' entry\n"
+   <6> arch/x86/kernel/hpet.c:144 _hpet_print_config "hpet: %s(%d):\n"
+   <6> init/do_mounts.c:605 prepare_namespace "Waiting for root device %s...\n"
+   <6> drivers/acpi/osl.c:1410 acpi_no_auto_serialize_setup "ACPI: auto-serialization disabled\n"
+
+The fields have the following meaning:
+
+  - :level: log level value: 0-7 for a particular severity, -1 as
+    default, 'c' for a continuation line without an explicit log level
+  - :flags: optional flags: currently only 'c' for KERN_CONT
+  - :filename\:line: source filename and line number of the related
+    printk() call. Note that there are many wrappers, for example,
+    pr_warn(), pr_warn_once(), dev_warn().
+  - :function: function name where the printk() call is used.
+  - :format: format string
+
+The extra information makes it a bit harder to find differences
+between various kernels. Especially the line number might change very
+often. On the other hand, it helps a lot to confirm that it is the
+same string or to find the commit that is responsible for eventual
+changes.
+
+
+printk() Is Not a Stable KABI
+=============================
+
+Several developers are afraid that exporting all these implementation
+details to user space will transform particular printk() calls into
+KABI.
+
+But it is exactly the opposite. printk() calls must _not_ be KABI. And
+the printk index helps user space tools to deal with this.
+
+
+Subsystem specific printk wrappers
+==================================
+
+The printk index is generated using extra metadata that are stored in
+a dedicated ".printk_index" ELF section. The metadata are emitted by
+macro wrappers that call __printk_index_emit() together with the real
+printk() call. The same technique is also used for the metadata of the
+dynamic debug feature.
+
+The metadata are stored for a particular message only when it is
+printed using these special wrappers. They are implemented for the
+commonly used printk() calls, including, for example, pr_warn() or
+pr_once().
+
+Additional changes are necessary for various subsystem specific
+wrappers that call the original printk() via a common helper function.
+These need their own wrappers that add __printk_index_emit().
+
+Only a few subsystem specific wrappers have been updated so far, for
+example, dev_printk(). As a result, the printk formats from some
+subsystems can be missing from the printk index.
+
+
+Subsystem specific prefix
+=========================
+
+The pr_fmt() macro can be used to define a prefix that is printed
+before the string generated by the related printk() calls.
+
+Subsystem specific wrappers usually add even more complicated
+prefixes, as sketched in the example below.
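+
+An illustrative sketch of such a wrapper (an editorial addition; the
+foo_warn() macro and its device structure are hypothetical)::
+
+   /*
+    * The "foo %s: " prefix depends on runtime data, so only the
+    * constant part of the format can be recorded in the printk index.
+    */
+   #define foo_warn(fdev, fmt, ...) \
+           pr_warn("foo %s: " fmt, (fdev)->name, ##__VA_ARGS__)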
+
+These prefixes can be stored into the printk index metadata by an
+optional parameter of __printk_index_emit(). The debugfs interface
+might then show the printk formats including these prefixes. For
+example, drivers/acpi/osl.c contains::
+
+   #define pr_fmt(fmt) "ACPI: OSL: " fmt
+
+   static int __init acpi_no_auto_serialize_setup(char *str)
+   {
+           acpi_gbl_auto_serialize_methods = FALSE;
+           pr_info("Auto-serialization disabled\n");
+
+           return 1;
+   }
+
+This results in the following printk index entry::
+
+   <6> drivers/acpi/osl.c:1410 acpi_no_auto_serialize_setup "ACPI: OSL: Auto-serialization disabled\n"
+
+This helps to match messages from the real log with the printk index.
+The source file name, line number, and function name can then be used
+to match the string with the source code.

diff --git a/MAINTAINERS b/MAINTAINERS
index e1e6007a724d..86490b48963a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15585,6 +15585,7 @@ F: kernel/printk/
 PRINTK INDEXING
 R: Chris Down
 S: Maintained
+F: Documentation/core-api/printk-index.rst
 F: kernel/printk/index.c
 K: printk_index

From faebd693c59387b7b765fab64b543855e15a91b4 Mon Sep 17 00:00:00 2001
From: John Ogness
Date: Thu, 21 Apr 2022 23:28:36 +0206
Subject: [PATCH 03/18] printk: rename cpulock functions

Since the printk cpulock is CPU-reentrant and since it is used in all
contexts, its usage must be carefully considered and most likely will
require programming locklessly. To avoid mistaking the printk cpulock
as a typical lock, rename it to cpu_sync. The main functions then
become:

  printk_cpu_sync_get_irqsave(flags);
  printk_cpu_sync_put_irqrestore(flags);

Add extra notes of caution in the function description to help
developers understand the requirements for correct usage.

Signed-off-by: John Ogness
Reviewed-by: Petr Mladek
Signed-off-by: Petr Mladek
Link: https://lore.kernel.org/r/20220421212250.565456-2-john.ogness@linutronix.de
---
 include/linux/printk.h | 54 +++++++++++++++++++-------------
 kernel/printk/printk.c | 71 +++++++++++++++++++++---------------------
 lib/dump_stack.c       |  4 +--
 lib/nmi_backtrace.c    |  4 +--
 4 files changed, 73 insertions(+), 60 deletions(-)

diff --git a/include/linux/printk.h b/include/linux/printk.h
index 1522df223c0f..859323a52985 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -277,43 +277,55 @@ static inline void printk_trigger_flush(void)
 #endif

 #ifdef CONFIG_SMP
-extern int __printk_cpu_trylock(void);
-extern void __printk_wait_on_cpu_lock(void);
-extern void __printk_cpu_unlock(void);
+extern int __printk_cpu_sync_try_get(void);
+extern void __printk_cpu_sync_wait(void);
+extern void __printk_cpu_sync_put(void);

 /**
- * printk_cpu_lock_irqsave() - Acquire the printk cpu-reentrant spinning
- *                             lock and disable interrupts.
+ * printk_cpu_sync_get_irqsave() - Acquire the printk cpu-reentrant spinning
+ *                                 lock and disable interrupts.
  * @flags: Stack-allocated storage for saving local interrupt state,
- *         to be passed to printk_cpu_unlock_irqrestore().
+ *         to be passed to printk_cpu_sync_put_irqrestore().
  *
  * If the lock is owned by another CPU, spin until it becomes available.
  * Interrupts are restored while spinning.
+ *
+ * CAUTION: This function must be used carefully. It does not behave like a
+ * typical lock. Here are important things to watch out for...
+ *
+ * * This function is reentrant on the same CPU. Therefore the calling
+ *   code must not assume exclusive access to data if code accessing the
+ *   data can run reentrant or within NMI context on the same CPU.
+ * + * * If there exists usage of this function from NMI context, it becomes + * unsafe to perform any type of locking or spinning to wait for other + * CPUs after calling this function from any context. This includes + * using spinlocks or any other busy-waiting synchronization methods. */ -#define printk_cpu_lock_irqsave(flags) \ - for (;;) { \ - local_irq_save(flags); \ - if (__printk_cpu_trylock()) \ - break; \ - local_irq_restore(flags); \ - __printk_wait_on_cpu_lock(); \ +#define printk_cpu_sync_get_irqsave(flags) \ + for (;;) { \ + local_irq_save(flags); \ + if (__printk_cpu_sync_try_get()) \ + break; \ + local_irq_restore(flags); \ + __printk_cpu_sync_wait(); \ } /** - * printk_cpu_unlock_irqrestore() - Release the printk cpu-reentrant spinning - * lock and restore interrupts. - * @flags: Caller's saved interrupt state, from printk_cpu_lock_irqsave(). + * printk_cpu_sync_put_irqrestore() - Release the printk cpu-reentrant spinning + * lock and restore interrupts. + * @flags: Caller's saved interrupt state, from printk_cpu_sync_get_irqsave(). */ -#define printk_cpu_unlock_irqrestore(flags) \ +#define printk_cpu_sync_put_irqrestore(flags) \ do { \ - __printk_cpu_unlock(); \ + __printk_cpu_sync_put(); \ local_irq_restore(flags); \ - } while (0) \ + } while (0) #else -#define printk_cpu_lock_irqsave(flags) ((void)flags) -#define printk_cpu_unlock_irqrestore(flags) ((void)flags) +#define printk_cpu_sync_get_irqsave(flags) ((void)flags) +#define printk_cpu_sync_put_irqrestore(flags) ((void)flags) #endif /* CONFIG_SMP */ diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index da03c15ecc89..13a1eebe72af 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -3667,26 +3667,26 @@ EXPORT_SYMBOL_GPL(kmsg_dump_rewind); #endif #ifdef CONFIG_SMP -static atomic_t printk_cpulock_owner = ATOMIC_INIT(-1); -static atomic_t printk_cpulock_nested = ATOMIC_INIT(0); +static atomic_t printk_cpu_sync_owner = ATOMIC_INIT(-1); +static atomic_t printk_cpu_sync_nested = ATOMIC_INIT(0); /** - * __printk_wait_on_cpu_lock() - Busy wait until the printk cpu-reentrant - * spinning lock is not owned by any CPU. + * __printk_cpu_sync_wait() - Busy wait until the printk cpu-reentrant + * spinning lock is not owned by any CPU. * * Context: Any context. */ -void __printk_wait_on_cpu_lock(void) +void __printk_cpu_sync_wait(void) { do { cpu_relax(); - } while (atomic_read(&printk_cpulock_owner) != -1); + } while (atomic_read(&printk_cpu_sync_owner) != -1); } -EXPORT_SYMBOL(__printk_wait_on_cpu_lock); +EXPORT_SYMBOL(__printk_cpu_sync_wait); /** - * __printk_cpu_trylock() - Try to acquire the printk cpu-reentrant - * spinning lock. + * __printk_cpu_sync_try_get() - Try to acquire the printk cpu-reentrant + * spinning lock. * * If no processor has the lock, the calling processor takes the lock and * becomes the owner. If the calling processor is already the owner of the @@ -3695,7 +3695,7 @@ EXPORT_SYMBOL(__printk_wait_on_cpu_lock); * Context: Any context. Expects interrupts to be disabled. * Return: 1 on success, otherwise 0. */ -int __printk_cpu_trylock(void) +int __printk_cpu_sync_try_get(void) { int cpu; int old; @@ -3705,79 +3705,80 @@ int __printk_cpu_trylock(void) /* * Guarantee loads and stores from this CPU when it is the lock owner * are _not_ visible to the previous lock owner. This pairs with - * __printk_cpu_unlock:B. + * __printk_cpu_sync_put:B. 
* * Memory barrier involvement: * - * If __printk_cpu_trylock:A reads from __printk_cpu_unlock:B, then - * __printk_cpu_unlock:A can never read from __printk_cpu_trylock:B. + * If __printk_cpu_sync_try_get:A reads from __printk_cpu_sync_put:B, + * then __printk_cpu_sync_put:A can never read from + * __printk_cpu_sync_try_get:B. * * Relies on: * - * RELEASE from __printk_cpu_unlock:A to __printk_cpu_unlock:B + * RELEASE from __printk_cpu_sync_put:A to __printk_cpu_sync_put:B * of the previous CPU * matching - * ACQUIRE from __printk_cpu_trylock:A to __printk_cpu_trylock:B - * of this CPU + * ACQUIRE from __printk_cpu_sync_try_get:A to + * __printk_cpu_sync_try_get:B of this CPU */ - old = atomic_cmpxchg_acquire(&printk_cpulock_owner, -1, - cpu); /* LMM(__printk_cpu_trylock:A) */ + old = atomic_cmpxchg_acquire(&printk_cpu_sync_owner, -1, + cpu); /* LMM(__printk_cpu_sync_try_get:A) */ if (old == -1) { /* * This CPU is now the owner and begins loading/storing - * data: LMM(__printk_cpu_trylock:B) + * data: LMM(__printk_cpu_sync_try_get:B) */ return 1; } else if (old == cpu) { /* This CPU is already the owner. */ - atomic_inc(&printk_cpulock_nested); + atomic_inc(&printk_cpu_sync_nested); return 1; } return 0; } -EXPORT_SYMBOL(__printk_cpu_trylock); +EXPORT_SYMBOL(__printk_cpu_sync_try_get); /** - * __printk_cpu_unlock() - Release the printk cpu-reentrant spinning lock. + * __printk_cpu_sync_put() - Release the printk cpu-reentrant spinning lock. * * The calling processor must be the owner of the lock. * * Context: Any context. Expects interrupts to be disabled. */ -void __printk_cpu_unlock(void) +void __printk_cpu_sync_put(void) { - if (atomic_read(&printk_cpulock_nested)) { - atomic_dec(&printk_cpulock_nested); + if (atomic_read(&printk_cpu_sync_nested)) { + atomic_dec(&printk_cpu_sync_nested); return; } /* * This CPU is finished loading/storing data: - * LMM(__printk_cpu_unlock:A) + * LMM(__printk_cpu_sync_put:A) */ /* * Guarantee loads and stores from this CPU when it was the * lock owner are visible to the next lock owner. This pairs - * with __printk_cpu_trylock:A. + * with __printk_cpu_sync_try_get:A. * * Memory barrier involvement: * - * If __printk_cpu_trylock:A reads from __printk_cpu_unlock:B, - * then __printk_cpu_trylock:B reads from __printk_cpu_unlock:A. + * If __printk_cpu_sync_try_get:A reads from __printk_cpu_sync_put:B, + * then __printk_cpu_sync_try_get:B reads from __printk_cpu_sync_put:A. 
* * Relies on: * - * RELEASE from __printk_cpu_unlock:A to __printk_cpu_unlock:B + * RELEASE from __printk_cpu_sync_put:A to __printk_cpu_sync_put:B * of this CPU * matching - * ACQUIRE from __printk_cpu_trylock:A to __printk_cpu_trylock:B - * of the next CPU + * ACQUIRE from __printk_cpu_sync_try_get:A to + * __printk_cpu_sync_try_get:B of the next CPU */ - atomic_set_release(&printk_cpulock_owner, - -1); /* LMM(__printk_cpu_unlock:B) */ + atomic_set_release(&printk_cpu_sync_owner, + -1); /* LMM(__printk_cpu_sync_put:B) */ } -EXPORT_SYMBOL(__printk_cpu_unlock); +EXPORT_SYMBOL(__printk_cpu_sync_put); #endif /* CONFIG_SMP */ diff --git a/lib/dump_stack.c b/lib/dump_stack.c index 6b7f1bf6715d..83471e81501a 100644 --- a/lib/dump_stack.c +++ b/lib/dump_stack.c @@ -102,9 +102,9 @@ asmlinkage __visible void dump_stack_lvl(const char *log_lvl) * Permit this cpu to perform nested stack dumps while serialising * against other CPUs */ - printk_cpu_lock_irqsave(flags); + printk_cpu_sync_get_irqsave(flags); __dump_stack(log_lvl); - printk_cpu_unlock_irqrestore(flags); + printk_cpu_sync_put_irqrestore(flags); } EXPORT_SYMBOL(dump_stack_lvl); diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c index 199ab201d501..d01aec6ae15c 100644 --- a/lib/nmi_backtrace.c +++ b/lib/nmi_backtrace.c @@ -99,7 +99,7 @@ bool nmi_cpu_backtrace(struct pt_regs *regs) * Allow nested NMI backtraces while serializing * against other CPUs. */ - printk_cpu_lock_irqsave(flags); + printk_cpu_sync_get_irqsave(flags); if (!READ_ONCE(backtrace_idle) && regs && cpu_in_idle(instruction_pointer(regs))) { pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n", cpu, (void *)instruction_pointer(regs)); @@ -110,7 +110,7 @@ bool nmi_cpu_backtrace(struct pt_regs *regs) else dump_stack(); } - printk_cpu_unlock_irqrestore(flags); + printk_cpu_sync_put_irqrestore(flags); cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask)); return true; } From f5343321b71ac0a1112adeab0ff90b239bad3a83 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:37 +0206 Subject: [PATCH 04/18] printk: cpu sync always disable interrupts The CPU sync functions are a NOP for !CONFIG_SMP. But for !CONFIG_SMP they still need to disable interrupts in order to preserve context within the CPU sync sections. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Reviewed-by: Sergey Senozhatsky Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-3-john.ogness@linutronix.de --- include/linux/printk.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/include/linux/printk.h b/include/linux/printk.h index 859323a52985..b70a42f94031 100644 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@ -281,9 +281,16 @@ extern int __printk_cpu_sync_try_get(void); extern void __printk_cpu_sync_wait(void); extern void __printk_cpu_sync_put(void); +#else + +#define __printk_cpu_sync_try_get() true +#define __printk_cpu_sync_wait() +#define __printk_cpu_sync_put() +#endif /* CONFIG_SMP */ + /** - * printk_cpu_sync_get_irqsave() - Acquire the printk cpu-reentrant spinning - * lock and disable interrupts. + * printk_cpu_sync_get_irqsave() - Disable interrupts and acquire the printk + * cpu-reentrant spinning lock. * @flags: Stack-allocated storage for saving local interrupt state, * to be passed to printk_cpu_sync_put_irqrestore(). 
* @@ -322,13 +329,6 @@ extern void __printk_cpu_sync_put(void); local_irq_restore(flags); \ } while (0) -#else - -#define printk_cpu_sync_get_irqsave(flags) ((void)flags) -#define printk_cpu_sync_put_irqrestore(flags) ((void)flags) - -#endif /* CONFIG_SMP */ - extern int kptr_restrict; /** From 1f5d783094cf28b4905f51cad846eb5d1db6673e Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:38 +0206 Subject: [PATCH 05/18] printk: add missing memory barrier to wake_up_klogd() It is important that any new records are visible to preparing waiters before the waker checks if the wait queue is empty. Otherwise it is possible that: - there are new records available - the waker sees an empty wait queue and does not wake - the preparing waiter sees no new records and begins to wait This is exactly the problem that the function description of waitqueue_active() warns about. Use wq_has_sleeper() instead of waitqueue_active() because it includes the necessary full memory barrier. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-4-john.ogness@linutronix.de --- kernel/printk/printk.c | 39 ++++++++++++++++++++++++++++++++++++--- 1 file changed, 36 insertions(+), 3 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 13a1eebe72af..f817dfb4852d 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -746,8 +746,19 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf, goto out; } + /* + * Guarantee this task is visible on the waitqueue before + * checking the wake condition. + * + * The full memory barrier within set_current_state() of + * prepare_to_wait_event() pairs with the full memory barrier + * within wq_has_sleeper(). + * + * This pairs with wake_up_klogd:A. + */ ret = wait_event_interruptible(log_wait, - prb_read_valid(prb, atomic64_read(&user->seq), r)); + prb_read_valid(prb, + atomic64_read(&user->seq), r)); /* LMM(devkmsg_read:A) */ if (ret) goto out; } @@ -1513,7 +1524,18 @@ static int syslog_print(char __user *buf, int size) seq = syslog_seq; mutex_unlock(&syslog_lock); - len = wait_event_interruptible(log_wait, prb_read_valid(prb, seq, NULL)); + /* + * Guarantee this task is visible on the waitqueue before + * checking the wake condition. + * + * The full memory barrier within set_current_state() of + * prepare_to_wait_event() pairs with the full memory barrier + * within wq_has_sleeper(). + * + * This pairs with wake_up_klogd:A. + */ + len = wait_event_interruptible(log_wait, + prb_read_valid(prb, seq, NULL)); /* LMM(syslog_print:A) */ mutex_lock(&syslog_lock); if (len) @@ -3316,7 +3338,18 @@ void wake_up_klogd(void) return; preempt_disable(); - if (waitqueue_active(&log_wait)) { + /* + * Guarantee any new records can be seen by tasks preparing to wait + * before this context checks if the wait queue is empty. + * + * The full memory barrier within wq_has_sleeper() pairs with the full + * memory barrier within set_current_state() of + * prepare_to_wait_event(), which is called after ___wait_event() adds + * the waiter but before it has checked the wait condition. + * + * This pairs with devkmsg_read:A and syslog_print:A. 
+ */ + if (wq_has_sleeper(&log_wait)) { /* LMM(wake_up_klogd:A) */ this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP); irq_work_queue(this_cpu_ptr(&wake_up_klogd_work)); } From 938ba4084abcf6fdd21d9078513c52f8fb9b00d0 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:39 +0206 Subject: [PATCH 06/18] printk: wake up all waiters There can be multiple tasks waiting for new records. They should all be woken. Use wake_up_interruptible_all() instead of wake_up_interruptible(). Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-5-john.ogness@linutronix.de --- kernel/printk/printk.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index f817dfb4852d..e23357002648 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -3326,7 +3326,7 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work) } if (pending & PRINTK_PENDING_WAKEUP) - wake_up_interruptible(&log_wait); + wake_up_interruptible_all(&log_wait); } static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = From 5341b93dea8c39d7612f7a227015d4b1d5cf30db Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:40 +0206 Subject: [PATCH 07/18] printk: wake waiters for safe and NMI contexts When printk() is called from safe or NMI contexts, it will directly store the record (vprintk_store()) and then defer the console output. However, defer_console_output() only causes console printing and does not wake any waiters of new records. Wake waiters from defer_console_output() so that they also are aware of the new records from safe and NMI contexts. Fixes: 03fc7f9c99c1 ("printk/nmi: Prevent deadlock when accessing the main log buffer in NMI") Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-6-john.ogness@linutronix.de --- kernel/printk/printk.c | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index e23357002648..7bb148a1debb 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -754,7 +754,7 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf, * prepare_to_wait_event() pairs with the full memory barrier * within wq_has_sleeper(). * - * This pairs with wake_up_klogd:A. + * This pairs with __wake_up_klogd:A. */ ret = wait_event_interruptible(log_wait, prb_read_valid(prb, @@ -1532,7 +1532,7 @@ static int syslog_print(char __user *buf, int size) * prepare_to_wait_event() pairs with the full memory barrier * within wq_has_sleeper(). * - * This pairs with wake_up_klogd:A. + * This pairs with __wake_up_klogd:A. */ len = wait_event_interruptible(log_wait, prb_read_valid(prb, seq, NULL)); /* LMM(syslog_print:A) */ @@ -3332,7 +3332,7 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work) static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = IRQ_WORK_INIT_LAZY(wake_up_klogd_work_func); -void wake_up_klogd(void) +static void __wake_up_klogd(int val) { if (!printk_percpu_data_ready()) return; @@ -3349,22 +3349,26 @@ void wake_up_klogd(void) * * This pairs with devkmsg_read:A and syslog_print:A. 
*/ - if (wq_has_sleeper(&log_wait)) { /* LMM(wake_up_klogd:A) */ - this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP); + if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */ + (val & PRINTK_PENDING_OUTPUT)) { + this_cpu_or(printk_pending, val); irq_work_queue(this_cpu_ptr(&wake_up_klogd_work)); } preempt_enable(); } +void wake_up_klogd(void) +{ + __wake_up_klogd(PRINTK_PENDING_WAKEUP); +} + void defer_console_output(void) { - if (!printk_percpu_data_ready()) - return; - - preempt_disable(); - this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT); - irq_work_queue(this_cpu_ptr(&wake_up_klogd_work)); - preempt_enable(); + /* + * New messages may have been added directly to the ringbuffer + * using vprintk_store(), so wake any waiters as well. + */ + __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT); } void printk_trigger_flush(void) From 9f0844de49cf0557d5c359131004acbb179c174e Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:41 +0206 Subject: [PATCH 08/18] printk: get caller_id/timestamp after migration disable Currently the local CPU timestamp and caller_id for the record are collected while migration is enabled. Since this information is CPU-specific, it should be collected with migration disabled. Migration is disabled immediately after collecting this information anyway, so just move the information collection to after the migration disabling. Signed-off-by: John Ogness Reviewed-by: Sergey Senozhatsky Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-7-john.ogness@linutronix.de --- kernel/printk/printk.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 7bb148a1debb..82ad3d3d0d4a 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -2063,7 +2063,7 @@ static inline void printk_delay(void) static inline u32 printk_caller_id(void) { return in_task() ? task_pid_nr(current) : - 0x80000000 + raw_smp_processor_id(); + 0x80000000 + smp_processor_id(); } /** @@ -2145,7 +2145,6 @@ int vprintk_store(int facility, int level, const struct dev_printk_info *dev_info, const char *fmt, va_list args) { - const u32 caller_id = printk_caller_id(); struct prb_reserved_entry e; enum printk_info_flags flags = 0; struct printk_record r; @@ -2155,10 +2154,14 @@ int vprintk_store(int facility, int level, u8 *recursion_ptr; u16 reserve_size; va_list args2; + u32 caller_id; u16 text_len; int ret = 0; u64 ts_nsec; + if (!printk_enter_irqsave(recursion_ptr, irqflags)) + return 0; + /* * Since the duration of printk() can vary depending on the message * and state of the ringbuffer, grab the timestamp now so that it is @@ -2167,8 +2170,7 @@ int vprintk_store(int facility, int level, */ ts_nsec = local_clock(); - if (!printk_enter_irqsave(recursion_ptr, irqflags)) - return 0; + caller_id = printk_caller_id(); /* * The sprintf needs to come first since the syslog prefix might be From 1f47e8af45fdd754c5538f8aeb7857a6c087fd6a Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:42 +0206 Subject: [PATCH 09/18] printk: call boot_delay_msec() in printk_delay() boot_delay_msec() is always called immediately before printk_delay() so just call it from within printk_delay(). 
Signed-off-by: John Ogness Reviewed-by: Sergey Senozhatsky Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-8-john.ogness@linutronix.de --- kernel/printk/printk.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 82ad3d3d0d4a..2f99e0b383b9 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -2048,8 +2048,10 @@ static u8 *__printk_recursion_counter(void) int printk_delay_msec __read_mostly; -static inline void printk_delay(void) +static inline void printk_delay(int level) { + boot_delay_msec(level); + if (unlikely(printk_delay_msec)) { int m = printk_delay_msec; @@ -2274,8 +2276,7 @@ asmlinkage int vprintk_emit(int facility, int level, in_sched = true; } - boot_delay_msec(level); - printk_delay(); + printk_delay(level); printed_len = vprintk_store(facility, level, dev_info, fmt, args); From 1fc0ca9e0db61882208650b3603071e9f4b5cfee Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:43 +0206 Subject: [PATCH 10/18] printk: add con_printk() macro for console details It is useful to generate log messages that include details about the related console. Rather than duplicate the code to assemble the details, put that code into a macro con_printk(). Once console printers become threaded, this macro will find more users. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-9-john.ogness@linutronix.de --- kernel/printk/printk.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 2f99e0b383b9..e36d3ed41afa 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -3015,6 +3015,11 @@ static void try_enable_default_console(struct console *newcon) newcon->flags |= CON_CONSDEV; } +#define con_printk(lvl, con, fmt, ...) \ + printk(lvl pr_fmt("%sconsole [%s%d] " fmt), \ + (con->flags & CON_BOOT) ? "boot" : "", \ + con->name, con->index, ##__VA_ARGS__) + /* * The console driver calls this routine during kernel initialization * to register the console printing procedure with printk() and to @@ -3153,9 +3158,7 @@ void register_console(struct console *newcon) * users know there might be something in the kernel's log buffer that * went to the bootconsole (that they do not see on the real console) */ - pr_info("%sconsole [%s%d] enabled\n", - (newcon->flags & CON_BOOT) ? "boot" : "" , - newcon->name, newcon->index); + con_printk(KERN_INFO, newcon, "enabled\n"); if (bootcon_enabled && ((newcon->flags & (CON_CONSDEV | CON_BOOT)) == CON_CONSDEV) && !keep_bootcon) { @@ -3174,9 +3177,7 @@ int unregister_console(struct console *console) struct console *con; int res; - pr_info("%sconsole [%s%d] disabled\n", - (console->flags & CON_BOOT) ? "boot" : "" , - console->name, console->index); + con_printk(KERN_INFO, console, "disabled\n"); res = _braille_unregister_console(console); if (res < 0) From a699449bb13b70b8bd10dc03ad7327ea3993221e Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:44 +0206 Subject: [PATCH 11/18] printk: refactor and rework printing logic Refactor/rework printing logic in order to prepare for moving to threaded console printing. - Move @console_seq into struct console so that the current "position" of each console can be tracked individually. 
- Move @console_dropped into struct console so that the current drop count of each console can be tracked individually. - Modify printing logic so that each console independently loads, prepares, and prints its next record. - Remove exclusive_console logic. Since console positions are handled independently, replaying past records occurs naturally. - Update the comments explaining why preemption is disabled while printing from printk() context. With these changes, there is a change in behavior: the console replaying the log (formerly exclusive console) will no longer block other consoles. New messages appear on the other consoles while the newly added console is still replaying. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-10-john.ogness@linutronix.de --- include/linux/console.h | 2 + kernel/printk/printk.c | 443 +++++++++++++++++++++------------------- 2 files changed, 231 insertions(+), 214 deletions(-) diff --git a/include/linux/console.h b/include/linux/console.h index 7cd758a4f44e..8c1686e2c233 100644 --- a/include/linux/console.h +++ b/include/linux/console.h @@ -151,6 +151,8 @@ struct console { int cflag; uint ispeed; uint ospeed; + u64 seq; + unsigned long dropped; void *data; struct console *next; }; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index e36d3ed41afa..3dea8bbaf402 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -280,11 +280,6 @@ static bool panic_in_progress(void) */ static int console_locked, console_suspended; -/* - * If exclusive_console is non-NULL then only this console is to be printed to. - */ -static struct console *exclusive_console; - /* * Array of consoles built from command line options (console=) */ @@ -374,12 +369,6 @@ static u64 syslog_seq; static size_t syslog_partial; static bool syslog_time; -/* All 3 protected by @console_sem. */ -/* the next printk record to write to the console */ -static u64 console_seq; -static u64 exclusive_console_stop_seq; -static unsigned long console_dropped; - struct latched_seq { seqcount_latch_t latch; u64 val[2]; @@ -1933,47 +1922,26 @@ static int console_trylock_spinning(void) } /* - * Call the console drivers, asking them to write out - * log_buf[start] to log_buf[end - 1]. - * The console_lock must be held. + * Call the specified console driver, asking it to write out the specified + * text and length. For non-extended consoles, if any records have been + * dropped, a dropped message will be written out first. 
*/ -static void call_console_drivers(const char *ext_text, size_t ext_len, - const char *text, size_t len) +static void call_console_driver(struct console *con, const char *text, size_t len) { static char dropped_text[64]; - size_t dropped_len = 0; - struct console *con; + size_t dropped_len; trace_console_rcuidle(text, len); - if (!console_drivers) - return; - - if (console_dropped) { + if (con->dropped && !(con->flags & CON_EXTENDED)) { dropped_len = snprintf(dropped_text, sizeof(dropped_text), "** %lu printk messages dropped **\n", - console_dropped); - console_dropped = 0; + con->dropped); + con->dropped = 0; + con->write(con, dropped_text, dropped_len); } - for_each_console(con) { - if (exclusive_console && con != exclusive_console) - continue; - if (!(con->flags & CON_ENABLED)) - continue; - if (!con->write) - continue; - if (!cpu_online(smp_processor_id()) && - !(con->flags & CON_ANYTIME)) - continue; - if (con->flags & CON_EXTENDED) - con->write(con, ext_text, ext_len); - else { - if (dropped_len) - con->write(con, dropped_text, dropped_len); - con->write(con, text, len); - } - } + con->write(con, text, len); } /* @@ -2283,15 +2251,18 @@ asmlinkage int vprintk_emit(int facility, int level, /* If called from the scheduler, we can not call up(). */ if (!in_sched) { /* - * Disable preemption to avoid being preempted while holding - * console_sem which would prevent anyone from printing to - * console + * The caller may be holding system-critical or + * timing-sensitive locks. Disable preemption during + * printing of all remaining records to all consoles so that + * this context can return as soon as possible. Hopefully + * another printk() caller will take over the printing. */ preempt_disable(); /* * Try to acquire and then immediately release the console - * semaphore. The release will print out buffers and wake up - * /dev/kmsg and syslog() users. + * semaphore. The release will print out buffers. With the + * spinning variant, this context tries to take over the + * printing from another printing context. */ if (console_trylock_spinning()) console_unlock(); @@ -2329,11 +2300,9 @@ EXPORT_SYMBOL(_printk); #define prb_read_valid(rb, seq, r) false #define prb_first_valid_seq(rb) 0 +#define prb_next_seq(rb) 0 static u64 syslog_seq; -static u64 console_seq; -static u64 exclusive_console_stop_seq; -static unsigned long console_dropped; static size_t record_print_text(const struct printk_record *r, bool syslog, bool time) @@ -2350,8 +2319,7 @@ static ssize_t msg_print_ext_body(char *buf, size_t size, struct dev_printk_info *dev_info) { return 0; } static void console_lock_spinning_enable(void) { } static int console_lock_spinning_disable_and_check(void) { return 0; } -static void call_console_drivers(const char *ext_text, size_t ext_len, - const char *text, size_t len) {} +static void call_console_driver(struct console *con, const char *text, size_t len) { } static bool suppress_message_printing(int level) { return false; } #endif /* CONFIG_PRINTK */ @@ -2621,22 +2589,6 @@ int is_console_locked(void) } EXPORT_SYMBOL(is_console_locked); -/* - * Check if we have any console that is capable of printing while cpu is - * booting or shutting down. Requires console_sem. - */ -static int have_callable_console(void) -{ - struct console *con; - - for_each_console(con) - if ((con->flags & CON_ENABLED) && - (con->flags & CON_ANYTIME)) - return 1; - - return 0; -} - /* * Return true when this CPU should unlock console_sem without pushing all * messages to the console. 
This reduces the chance that the console is @@ -2657,15 +2609,182 @@ static bool abandon_console_lock_in_panic(void) } /* - * Can we actually use the console at this time on this cpu? + * Check if the given console is currently capable and allowed to print + * records. * - * Console drivers may assume that per-cpu resources have been allocated. So - * unless they're explicitly marked as being able to cope (CON_ANYTIME) don't - * call them until this CPU is officially up. + * Requires the console_lock. */ -static inline int can_use_console(void) +static inline bool console_is_usable(struct console *con) { - return cpu_online(raw_smp_processor_id()) || have_callable_console(); + if (!(con->flags & CON_ENABLED)) + return false; + + if (!con->write) + return false; + + /* + * Console drivers may assume that per-cpu resources have been + * allocated. So unless they're explicitly marked as being able to + * cope (CON_ANYTIME) don't call them until this CPU is officially up. + */ + if (!cpu_online(raw_smp_processor_id()) && + !(con->flags & CON_ANYTIME)) + return false; + + return true; +} + +static void __console_unlock(void) +{ + console_locked = 0; + up_console_sem(); +} + +/* + * Print one record for the given console. The record printed is whatever + * record is the next available record for the given console. + * + * @handover will be set to true if a printk waiter has taken over the + * console_lock, in which case the caller is no longer holding the + * console_lock. Otherwise it is set to false. + * + * Returns false if the given console has no next record to print, otherwise + * true. + * + * Requires the console_lock. + */ +static bool console_emit_next_record(struct console *con, bool *handover) +{ + static char ext_text[CONSOLE_EXT_LOG_MAX]; + static char text[CONSOLE_LOG_MAX]; + static int panic_console_dropped; + struct printk_info info; + struct printk_record r; + unsigned long flags; + char *write_text; + size_t len; + + prb_rec_init_rd(&r, &info, text, sizeof(text)); + + *handover = false; + + if (!prb_read_valid(prb, con->seq, &r)) + return false; + + if (con->seq != r.info->seq) { + con->dropped += r.info->seq - con->seq; + con->seq = r.info->seq; + if (panic_in_progress() && panic_console_dropped++ > 10) { + suppress_panic_printk = 1; + pr_warn_once("Too many dropped messages. Suppress messages on non-panic CPUs to prevent livelock.\n"); + } + } + + /* Skip record that has level above the console loglevel. */ + if (suppress_message_printing(r.info->level)) { + con->seq++; + goto skip; + } + + if (con->flags & CON_EXTENDED) { + write_text = &ext_text[0]; + len = info_print_ext_header(ext_text, sizeof(ext_text), r.info); + len += msg_print_ext_body(ext_text + len, sizeof(ext_text) - len, + &r.text_buf[0], r.info->text_len, &r.info->dev_info); + } else { + write_text = &text[0]; + len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time); + } + + /* + * While actively printing out messages, if another printk() + * were to occur on another CPU, it may wait for this one to + * finish. This task can not be preempted if there is a + * waiter waiting to take over. + * + * Interrupts are disabled because the hand over to a waiter + * must not be interrupted until the hand over is completed + * (@console_waiter is cleared). 
+ */ + printk_safe_enter_irqsave(flags); + console_lock_spinning_enable(); + + stop_critical_timings(); /* don't trace print latency */ + call_console_driver(con, write_text, len); + start_critical_timings(); + + con->seq++; + + *handover = console_lock_spinning_disable_and_check(); + printk_safe_exit_irqrestore(flags); +skip: + return true; +} + +/* + * Print out all remaining records to all consoles. + * + * @do_cond_resched is set by the caller. It can be true only in schedulable + * context. + * + * @next_seq is set to the sequence number after the last available record. + * The value is valid only when this function returns true. It means that all + * usable consoles are completely flushed. + * + * @handover will be set to true if a printk waiter has taken over the + * console_lock, in which case the caller is no longer holding the + * console_lock. Otherwise it is set to false. + * + * Returns true when there was at least one usable console and all messages + * were flushed to all usable consoles. A returned false informs the caller + * that everything was not flushed (either there were no usable consoles or + * another context has taken over printing or it is a panic situation and this + * is not the panic CPU). Regardless the reason, the caller should assume it + * is not useful to immediately try again. + * + * Requires the console_lock. + */ +static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handover) +{ + bool any_usable = false; + struct console *con; + bool any_progress; + + *next_seq = 0; + *handover = false; + + do { + any_progress = false; + + for_each_console(con) { + bool progress; + + if (!console_is_usable(con)) + continue; + any_usable = true; + + progress = console_emit_next_record(con, handover); + if (*handover) + return false; + + /* Track the next of the highest seq flushed. */ + if (con->seq > *next_seq) + *next_seq = con->seq; + + if (!progress) + continue; + any_progress = true; + + /* Allow panic_cpu to take over the consoles safely. */ + if (abandon_console_lock_in_panic()) + return false; + + if (do_cond_resched) + cond_resched(); + } + } while (any_progress); + + return any_usable; } /** @@ -2678,28 +2797,20 @@ static inline int can_use_console(void) * by printk(). If this is the case, console_unlock(); emits * the output prior to releasing the lock. * - * If there is output waiting, we wake /dev/kmsg and syslog() users. - * * console_unlock(); may be called from any context. */ void console_unlock(void) { - static char ext_text[CONSOLE_EXT_LOG_MAX]; - static char text[CONSOLE_LOG_MAX]; - static int panic_console_dropped; - unsigned long flags; - bool do_cond_resched, retry; - struct printk_info info; - struct printk_record r; - u64 __maybe_unused next_seq; + bool do_cond_resched; + bool handover; + bool flushed; + u64 next_seq; if (console_suspended) { up_console_sem(); return; } - prb_rec_init_rd(&r, &info, text, sizeof(text)); - /* * Console drivers are called with interrupts disabled, so * @console_may_schedule should be cleared before; however, we may @@ -2708,125 +2819,34 @@ void console_unlock(void) * between lines if allowable. Not doing so can cause a very long * scheduling stall on a slow console leading to RCU stall and * softlockup warnings which exacerbate the issue with more - * messages practically incapacitating the system. - * - * console_trylock() is not able to detect the preemptive - * context reliably. Therefore the value must be stored before - * and cleared after the "again" goto label. 
+ * messages practically incapacitating the system. Therefore, create + * a local to use for the printing loop. */ do_cond_resched = console_may_schedule; -again: - console_may_schedule = 0; - /* - * We released the console_sem lock, so we need to recheck if - * cpu is online and (if not) is there at least one CON_ANYTIME - * console. - */ - if (!can_use_console()) { - console_locked = 0; - up_console_sem(); - return; - } + do { + console_may_schedule = 0; - for (;;) { - size_t ext_len = 0; - int handover; - size_t len; - -skip: - if (!prb_read_valid(prb, console_seq, &r)) - break; - - if (console_seq != r.info->seq) { - console_dropped += r.info->seq - console_seq; - console_seq = r.info->seq; - if (panic_in_progress() && panic_console_dropped++ > 10) { - suppress_panic_printk = 1; - pr_warn_once("Too many dropped messages. Suppress messages on non-panic CPUs to prevent livelock.\n"); - } - } - - if (suppress_message_printing(r.info->level)) { - /* - * Skip record we have buffered and already printed - * directly to the console when we received it, and - * record that has level above the console loglevel. - */ - console_seq++; - goto skip; - } - - /* Output to all consoles once old messages replayed. */ - if (unlikely(exclusive_console && - console_seq >= exclusive_console_stop_seq)) { - exclusive_console = NULL; - } + flushed = console_flush_all(do_cond_resched, &next_seq, &handover); + if (!handover) + __console_unlock(); /* - * Handle extended console text first because later - * record_print_text() will modify the record buffer in-place. + * Abort if there was a failure to flush all messages to all + * usable consoles. Either it is not possible to flush (in + * which case it would be an infinite loop of retrying) or + * another context has taken over printing. */ - if (nr_ext_console_drivers) { - ext_len = info_print_ext_header(ext_text, - sizeof(ext_text), - r.info); - ext_len += msg_print_ext_body(ext_text + ext_len, - sizeof(ext_text) - ext_len, - &r.text_buf[0], - r.info->text_len, - &r.info->dev_info); - } - len = record_print_text(&r, - console_msg_format & MSG_FORMAT_SYSLOG, - printk_time); - console_seq++; - - /* - * While actively printing out messages, if another printk() - * were to occur on another CPU, it may wait for this one to - * finish. This task can not be preempted if there is a - * waiter waiting to take over. - * - * Interrupts are disabled because the hand over to a waiter - * must not be interrupted until the hand over is completed - * (@console_waiter is cleared). - */ - printk_safe_enter_irqsave(flags); - console_lock_spinning_enable(); - - stop_critical_timings(); /* don't trace print latency */ - call_console_drivers(ext_text, ext_len, text, len); - start_critical_timings(); - - handover = console_lock_spinning_disable_and_check(); - printk_safe_exit_irqrestore(flags); - if (handover) - return; - - /* Allow panic_cpu to take over the consoles safely */ - if (abandon_console_lock_in_panic()) + if (!flushed) break; - if (do_cond_resched) - cond_resched(); - } - - /* Get consistent value of the next-to-be-used sequence number. */ - next_seq = console_seq; - - console_locked = 0; - up_console_sem(); - - /* - * Someone could have filled up the buffer again, so re-check if there's - * something to flush. In case we cannot trylock the console_sem again, - * there's a new owner and the console_unlock() from them will do the - * flush, no worries. 
- */ - retry = prb_read_valid(prb, next_seq, NULL); - if (retry && !abandon_console_lock_in_panic() && console_trylock()) - goto again; + /* + * Some context may have added new records after + * console_flush_all() but before unlocking the console. + * Re-check if there is a new record to flush. If the trylock + * fails, another context is already handling the printing. + */ + } while (prb_read_valid(prb, next_seq, NULL) && console_trylock()); } EXPORT_SYMBOL(console_unlock); @@ -2886,8 +2906,14 @@ void console_flush_on_panic(enum con_flush_mode mode) console_trylock(); console_may_schedule = 0; - if (mode == CONSOLE_REPLAY_ALL) - console_seq = prb_first_valid_seq(prb); + if (mode == CONSOLE_REPLAY_ALL) { + struct console *c; + u64 seq; + + seq = prb_first_valid_seq(prb); + for_each_console(c) + c->seq = seq; + } console_unlock(); } @@ -3127,26 +3153,15 @@ void register_console(struct console *newcon) if (newcon->flags & CON_EXTENDED) nr_ext_console_drivers++; + newcon->dropped = 0; if (newcon->flags & CON_PRINTBUFFER) { - /* - * console_unlock(); will print out the buffered messages - * for us. - * - * We're about to replay the log buffer. Only do this to the - * just-registered console to avoid excessive message spam to - * the already-registered consoles. - * - * Set exclusive_console with disabled interrupts to reduce - * race window with eventual console_flush_on_panic() that - * ignores console_lock. - */ - exclusive_console = newcon; - exclusive_console_stop_seq = console_seq; - /* Get a consistent copy of @syslog_seq. */ mutex_lock(&syslog_lock); - console_seq = syslog_seq; + newcon->seq = syslog_seq; mutex_unlock(&syslog_lock); + } else { + /* Begin with next message. */ + newcon->seq = prb_next_seq(prb); } console_unlock(); console_sysfs_notify(); From 03a749e628fdbc665d7f9712637f880a79da8b78 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:45 +0206 Subject: [PATCH 12/18] printk: move buffer definitions into console_emit_next_record() caller Extended consoles print extended messages and do not print messages about dropped records. Non-extended consoles print "normal" messages as well as extra messages about dropped records. Currently the buffers for these various message types are defined within the functions that might use them and their usage is based upon the CON_EXTENDED flag. This will be a problem when moving to kthread printers because each printer must be able to provide its own buffers. Move all the message buffer definitions outside of console_emit_next_record(). The caller knows if extended or dropped messages should be printed and can specify the appropriate buffers to use. The console_emit_next_record() and call_console_driver() functions can know what to print based on whether specified buffers are non-NULL. With this change, buffer definition/allocation/specification is separated from the code that does the various types of string printing. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-11-john.ogness@linutronix.de --- kernel/printk/printk.c | 60 ++++++++++++++++++++++++++++++------------ 1 file changed, 43 insertions(+), 17 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 3dea8bbaf402..dec5355c5b5b 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -394,6 +394,9 @@ static struct latched_seq clear_seq = { /* the maximum size of a formatted record (i.e. 
with prefix added per line) */ #define CONSOLE_LOG_MAX 1024 +/* the maximum size for a dropped text message */ +#define DROPPED_TEXT_MAX 64 + /* the maximum size allowed to be reserved for a record */ #define LOG_LINE_MAX (CONSOLE_LOG_MAX - PREFIX_MAX) @@ -1923,18 +1926,18 @@ static int console_trylock_spinning(void) /* * Call the specified console driver, asking it to write out the specified - * text and length. For non-extended consoles, if any records have been + * text and length. If @dropped_text is non-NULL and any records have been * dropped, a dropped message will be written out first. */ -static void call_console_driver(struct console *con, const char *text, size_t len) +static void call_console_driver(struct console *con, const char *text, size_t len, + char *dropped_text) { - static char dropped_text[64]; size_t dropped_len; trace_console_rcuidle(text, len); - if (con->dropped && !(con->flags & CON_EXTENDED)) { - dropped_len = snprintf(dropped_text, sizeof(dropped_text), + if (con->dropped && dropped_text) { + dropped_len = snprintf(dropped_text, DROPPED_TEXT_MAX, "** %lu printk messages dropped **\n", con->dropped); con->dropped = 0; @@ -2296,6 +2299,7 @@ EXPORT_SYMBOL(_printk); #else /* CONFIG_PRINTK */ #define CONSOLE_LOG_MAX 0 +#define DROPPED_TEXT_MAX 0 #define printk_time false #define prb_read_valid(rb, seq, r) false @@ -2319,7 +2323,10 @@ static ssize_t msg_print_ext_body(char *buf, size_t size, struct dev_printk_info *dev_info) { return 0; } static void console_lock_spinning_enable(void) { } static int console_lock_spinning_disable_and_check(void) { return 0; } -static void call_console_driver(struct console *con, const char *text, size_t len) { } +static void call_console_driver(struct console *con, const char *text, size_t len, + char *dropped_text) +{ +} static bool suppress_message_printing(int level) { return false; } #endif /* CONFIG_PRINTK */ @@ -2644,6 +2651,14 @@ static void __console_unlock(void) * Print one record for the given console. The record printed is whatever * record is the next available record for the given console. * + * @text is a buffer of size CONSOLE_LOG_MAX. + * + * If extended messages should be printed, @ext_text is a buffer of size + * CONSOLE_EXT_LOG_MAX. Otherwise @ext_text must be NULL. + * + * If dropped messages should be printed, @dropped_text is a buffer of size + * DROPPED_TEXT_MAX. Otherwise @dropped_text must be NULL. + * * @handover will be set to true if a printk waiter has taken over the * console_lock, in which case the caller is no longer holding the * console_lock. Otherwise it is set to false. @@ -2653,10 +2668,9 @@ static void __console_unlock(void) * * Requires the console_lock. 
*/ -static bool console_emit_next_record(struct console *con, bool *handover) +static bool console_emit_next_record(struct console *con, char *text, char *ext_text, + char *dropped_text, bool *handover) { - static char ext_text[CONSOLE_EXT_LOG_MAX]; - static char text[CONSOLE_LOG_MAX]; static int panic_console_dropped; struct printk_info info; struct printk_record r; @@ -2664,7 +2678,7 @@ static bool console_emit_next_record(struct console *con, bool *handover) char *write_text; size_t len; - prb_rec_init_rd(&r, &info, text, sizeof(text)); + prb_rec_init_rd(&r, &info, text, CONSOLE_LOG_MAX); *handover = false; @@ -2686,13 +2700,13 @@ static bool console_emit_next_record(struct console *con, bool *handover) goto skip; } - if (con->flags & CON_EXTENDED) { - write_text = &ext_text[0]; - len = info_print_ext_header(ext_text, sizeof(ext_text), r.info); - len += msg_print_ext_body(ext_text + len, sizeof(ext_text) - len, + if (ext_text) { + write_text = ext_text; + len = info_print_ext_header(ext_text, CONSOLE_EXT_LOG_MAX, r.info); + len += msg_print_ext_body(ext_text + len, CONSOLE_EXT_LOG_MAX - len, &r.text_buf[0], r.info->text_len, &r.info->dev_info); } else { - write_text = &text[0]; + write_text = text; len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time); } @@ -2710,7 +2724,7 @@ static bool console_emit_next_record(struct console *con, bool *handover) console_lock_spinning_enable(); stop_critical_timings(); /* don't trace print latency */ - call_console_driver(con, write_text, len); + call_console_driver(con, write_text, len, dropped_text); start_critical_timings(); con->seq++; @@ -2746,6 +2760,9 @@ skip: */ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handover) { + static char dropped_text[DROPPED_TEXT_MAX]; + static char ext_text[CONSOLE_EXT_LOG_MAX]; + static char text[CONSOLE_LOG_MAX]; bool any_usable = false; struct console *con; bool any_progress; @@ -2763,7 +2780,16 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove continue; any_usable = true; - progress = console_emit_next_record(con, handover); + if (con->flags & CON_EXTENDED) { + /* Extended consoles do not print "dropped messages". */ + progress = console_emit_next_record(con, &text[0], + &ext_text[0], NULL, + handover); + } else { + progress = console_emit_next_record(con, &text[0], + NULL, &dropped_text[0], + handover); + } if (*handover) return false; From 3b604ca81202eea2a917eb6491e90f610fba0ec7 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:46 +0206 Subject: [PATCH 13/18] printk: add pr_flush() Provide a might-sleep function to allow waiting for console printers to catch up to the latest logged message. Use pr_flush() whenever it is desirable to get buffered messages printed before continuing: suspend_console(), resume_console(), console_stop(), console_start(), console_unblank(). 
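
A hedged usage sketch (an editorial illustration, not part of this
patch): callers that must not continue until buffered messages have
reached the consoles can wait with a timeout, as the call sites listed
above do:

  /*
   * Wait up to 1000 ms for all usable consoles to catch up with the
   * ringbuffer, resetting the timeout whenever forward progress is
   * observed.
   */
  if (!pr_flush(1000, true))
          pr_warn("console output may still be incomplete\n");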
Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-12-john.ogness@linutronix.de --- include/linux/printk.h | 7 ++++ kernel/printk/printk.c | 83 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 90 insertions(+) diff --git a/include/linux/printk.h b/include/linux/printk.h index b70a42f94031..091fba7283e1 100644 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@ -170,6 +170,8 @@ extern void __printk_safe_exit(void); #define printk_deferred_enter __printk_safe_enter #define printk_deferred_exit __printk_safe_exit +extern bool pr_flush(int timeout_ms, bool reset_on_progress); + /* * Please don't use printk_ratelimit(), because it shares ratelimiting state * with all other unrelated printk_ratelimit() callsites. Instead use @@ -220,6 +222,11 @@ static inline void printk_deferred_exit(void) { } +static inline bool pr_flush(int timeout_ms, bool reset_on_progress) +{ + return true; +} + static inline int printk_ratelimit(void) { return 0; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index dec5355c5b5b..a06999d55278 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -2296,6 +2296,8 @@ asmlinkage __visible int _printk(const char *fmt, ...) } EXPORT_SYMBOL(_printk); +static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress); + #else /* CONFIG_PRINTK */ #define CONSOLE_LOG_MAX 0 @@ -2328,6 +2330,7 @@ static void call_console_driver(struct console *con, const char *text, size_t le { } static bool suppress_message_printing(int level) { return false; } +static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; } #endif /* CONFIG_PRINTK */ @@ -2515,6 +2518,7 @@ void suspend_console(void) if (!console_suspend_enabled) return; pr_info("Suspending console(s) (use no_console_suspend to debug)\n"); + pr_flush(1000, true); console_lock(); console_suspended = 1; up_console_sem(); @@ -2527,6 +2531,7 @@ void resume_console(void) down_console_sem(); console_suspended = 0; console_unlock(); + pr_flush(1000, true); } /** @@ -2912,6 +2917,9 @@ void console_unblank(void) if ((c->flags & CON_ENABLED) && c->unblank) c->unblank(); console_unlock(); + + if (!oops_in_progress) + pr_flush(1000, true); } /** @@ -2970,6 +2978,7 @@ struct tty_driver *console_device(int *index) */ void console_stop(struct console *console) { + __pr_flush(console, 1000, true); console_lock(); console->flags &= ~CON_ENABLED; console_unlock(); @@ -2981,6 +2990,7 @@ void console_start(struct console *console) console_lock(); console->flags |= CON_ENABLED; console_unlock(); + __pr_flush(console, 1000, true); } EXPORT_SYMBOL(console_start); @@ -3352,6 +3362,79 @@ static int __init printk_late_init(void) late_initcall(printk_late_init); #if defined CONFIG_PRINTK +/* If @con is specified, only wait for that console. Otherwise wait for all. 
*/ +static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) +{ + int remaining = timeout_ms; + struct console *c; + u64 last_diff = 0; + u64 printk_seq; + u64 diff; + u64 seq; + + might_sleep(); + + seq = prb_next_seq(prb); + + for (;;) { + diff = 0; + + console_lock(); + for_each_console(c) { + if (con && con != c) + continue; + if (!console_is_usable(c)) + continue; + printk_seq = c->seq; + if (printk_seq < seq) + diff += seq - printk_seq; + } + console_unlock(); + + if (diff != last_diff && reset_on_progress) + remaining = timeout_ms; + + if (diff == 0 || remaining == 0) + break; + + if (remaining < 0) { + /* no timeout limit */ + msleep(100); + } else if (remaining < 100) { + msleep(remaining); + remaining = 0; + } else { + msleep(100); + remaining -= 100; + } + + last_diff = diff; + } + + return (diff == 0); +} + +/** + * pr_flush() - Wait for printing threads to catch up. + * + * @timeout_ms: The maximum time (in ms) to wait. + * @reset_on_progress: Reset the timeout if forward progress is seen. + * + * A value of 0 for @timeout_ms means no waiting will occur. A value of -1 + * represents infinite waiting. + * + * If @reset_on_progress is true, the timeout will be reset whenever any + * printer has been seen to make some forward progress. + * + * Context: Process context. May sleep while acquiring console lock. + * Return: true if all enabled printers are caught up. + */ +bool pr_flush(int timeout_ms, bool reset_on_progress) +{ + return __pr_flush(NULL, timeout_ms, reset_on_progress); +} +EXPORT_SYMBOL(pr_flush); + /* * Delayed printk version, for scheduler-internal messages: */ From 2bb2b7b57f81255c13f4395ea911d6bdc70c9fe2 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:47 +0206 Subject: [PATCH 14/18] printk: add functions to prefer direct printing Once kthread printing is available, console printing will no longer occur in the context of the printk caller. However, there are some special contexts where it is desirable for the printk caller to directly print out kernel messages. Using pr_flush() to wait for threaded printers is only possible if the caller is in a sleepable context and the kthreads are active. That is not always the case. Introduce printk_prefer_direct_enter() and printk_prefer_direct_exit() functions to explicitly (and globally) activate/deactivate preferred direct console printing. The term "direct console printing" refers to printing to all enabled consoles from the context of the printk caller. The term "prefer" is used because this type of printing is only best effort. If the console is currently locked or other printers are already actively printing, the printk caller will need to rely on the other contexts to handle the printing. This preferred direct printing is how all printing has been handled until now (unless it was explicitly deferred). When kthread printing is introduced, there may be some unanticipated problems due to kthreads being unable to flush important messages. In order to minimize such risks, preferred direct printing is activated for the primary important messages when the system experiences general types of major errors. These are: - emergency reboot/shutdown - cpu and rcu stalls - hard and soft lockups - hung tasks - warn - sysrq Note that since kthread printing does not yet exist, no behavior changes result from this commit. This is only implementing the counter and marking the various places where preferred direct printing is active. 
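The enter/exit pair is reference counted rather than a boolean so that the sections can nest, for example a WARN triggering inside an emergency shutdown path. A compilable userspace sketch of these counter semantics, with dec_if_positive() modeling the kernel's atomic_dec_if_positive(); all names here are stand-ins:

/* cc -std=c11 -o prefer_model prefer_model.c */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int prefer_direct;

static void prefer_direct_enter(void)
{
	atomic_fetch_add(&prefer_direct, 1);
}

/* Decrement only if the result stays >= 0; a negative return means no-op. */
static int dec_if_positive(atomic_int *v)
{
	int old = atomic_load(v);

	while (old > 0) {
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			break;
	}
	return old - 1;
}

static void prefer_direct_exit(void)
{
	/* An unbalanced exit is a caller bug; the kernel WARNs here. */
	assert(dec_if_positive(&prefer_direct) >= 0);
}

static bool direct_preferred(void)
{
	return atomic_load(&prefer_direct) > 0;
}

int main(void)
{
	prefer_direct_enter();			/* e.g. entering a lockup report */
	prefer_direct_enter();			/* nested, e.g. a WARN inside it */
	printf("preferred: %d\n", direct_preferred());
	prefer_direct_exit();
	prefer_direct_exit();
	printf("preferred: %d\n", direct_preferred());
	return 0;
}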
Signed-off-by: John Ogness Reviewed-by: Petr Mladek Acked-by: Paul E. McKenney # for RCU Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-13-john.ogness@linutronix.de --- drivers/tty/sysrq.c | 2 ++ include/linux/printk.h | 11 +++++++++++ kernel/hung_task.c | 11 ++++++++++- kernel/panic.c | 4 ++++ kernel/printk/printk.c | 28 ++++++++++++++++++++++++++++ kernel/rcu/tree_stall.h | 2 ++ kernel/reboot.c | 14 +++++++++++++- kernel/watchdog.c | 4 ++++ kernel/watchdog_hld.c | 4 ++++ 9 files changed, 78 insertions(+), 2 deletions(-) diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c index bbfd004449b5..2884cd638d64 100644 --- a/drivers/tty/sysrq.c +++ b/drivers/tty/sysrq.c @@ -578,6 +578,7 @@ void __handle_sysrq(int key, bool check_mask) rcu_sysrq_start(); rcu_read_lock(); + printk_prefer_direct_enter(); /* * Raise the apparent loglevel to maximum so that the sysrq header * is shown to provide the user with positive feedback. We do not @@ -619,6 +620,7 @@ void __handle_sysrq(int key, bool check_mask) pr_cont("\n"); console_loglevel = orig_log_level; } + printk_prefer_direct_exit(); rcu_read_unlock(); rcu_sysrq_end(); diff --git a/include/linux/printk.h b/include/linux/printk.h index 091fba7283e1..cd26aab0ab2a 100644 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@ -170,6 +170,9 @@ extern void __printk_safe_exit(void); #define printk_deferred_enter __printk_safe_enter #define printk_deferred_exit __printk_safe_exit +extern void printk_prefer_direct_enter(void); +extern void printk_prefer_direct_exit(void); + extern bool pr_flush(int timeout_ms, bool reset_on_progress); /* @@ -222,6 +225,14 @@ static inline void printk_deferred_exit(void) { } +static inline void printk_prefer_direct_enter(void) +{ +} + +static inline void printk_prefer_direct_exit(void) +{ +} + static inline bool pr_flush(int timeout_ms, bool reset_on_progress) { return true; diff --git a/kernel/hung_task.c b/kernel/hung_task.c index 52501e5f7655..02a65d554340 100644 --- a/kernel/hung_task.c +++ b/kernel/hung_task.c @@ -127,6 +127,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) * complain: */ if (sysctl_hung_task_warnings) { + printk_prefer_direct_enter(); + if (sysctl_hung_task_warnings > 0) sysctl_hung_task_warnings--; pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n", @@ -142,6 +144,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) if (sysctl_hung_task_all_cpu_backtrace) hung_task_show_all_bt = true; + + printk_prefer_direct_exit(); } touch_nmi_watchdog(); @@ -204,12 +208,17 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout) } unlock: rcu_read_unlock(); - if (hung_task_show_lock) + if (hung_task_show_lock) { + printk_prefer_direct_enter(); debug_show_all_locks(); + printk_prefer_direct_exit(); + } if (hung_task_show_all_bt) { hung_task_show_all_bt = false; + printk_prefer_direct_enter(); trigger_all_cpu_backtrace(); + printk_prefer_direct_exit(); } if (hung_task_call_panic) diff --git a/kernel/panic.c b/kernel/panic.c index 55b50e052ec3..7d422597403f 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -560,6 +560,8 @@ void __warn(const char *file, int line, void *caller, unsigned taint, { disable_trace_on_warning(); + printk_prefer_direct_enter(); + if (file) pr_warn("WARNING: CPU: %d PID: %d at %s:%d %pS\n", raw_smp_processor_id(), current->pid, file, line, @@ -597,6 +599,8 @@ void __warn(const char *file, int line, void *caller, unsigned taint, /* Just a warning, don't kill lockdep. 
*/ add_taint(taint, LOCKDEP_STILL_OK); + + printk_prefer_direct_exit(); } #ifndef __WARN_FLAGS diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index a06999d55278..ed7f738261cc 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -362,6 +362,34 @@ static int console_msg_format = MSG_FORMAT_DEFAULT; static DEFINE_MUTEX(syslog_lock); #ifdef CONFIG_PRINTK +static atomic_t printk_prefer_direct = ATOMIC_INIT(0); + +/** + * printk_prefer_direct_enter - cause printk() calls to attempt direct + * printing to all enabled consoles + * + * Since it is not possible to call into the console printing code from any + * context, there is no guarantee that direct printing will occur. + * + * This globally effects all printk() callers. + * + * Context: Any context. + */ +void printk_prefer_direct_enter(void) +{ + atomic_inc(&printk_prefer_direct); +} + +/** + * printk_prefer_direct_exit - restore printk() behavior + * + * Context: Any context. + */ +void printk_prefer_direct_exit(void) +{ + WARN_ON(atomic_dec_if_positive(&printk_prefer_direct) < 0); +} + DECLARE_WAIT_QUEUE_HEAD(log_wait); /* All 3 protected by @syslog_lock. */ /* the next printk record to read by syslog(READ) or /proc/kmsg */ diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h index 0c5d8516516a..d612707c2ed0 100644 --- a/kernel/rcu/tree_stall.h +++ b/kernel/rcu/tree_stall.h @@ -619,6 +619,7 @@ static void print_cpu_stall(unsigned long gps) * See Documentation/RCU/stallwarn.rst for info on how to debug * RCU CPU stall warnings. */ + printk_prefer_direct_enter(); trace_rcu_stall_warning(rcu_state.name, TPS("SelfDetected")); pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name); raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags); @@ -656,6 +657,7 @@ static void print_cpu_stall(unsigned long gps) */ set_tsk_need_resched(current); set_preempt_need_resched(); + printk_prefer_direct_exit(); } static void check_cpu_stall(struct rcu_data *rdp) diff --git a/kernel/reboot.c b/kernel/reboot.c index 6bcc5d6a6572..4177645e74d6 100644 --- a/kernel/reboot.c +++ b/kernel/reboot.c @@ -447,9 +447,11 @@ static int __orderly_reboot(void) ret = run_cmd(reboot_cmd); if (ret) { + printk_prefer_direct_enter(); pr_warn("Failed to start orderly reboot: forcing the issue\n"); emergency_sync(); kernel_restart(NULL); + printk_prefer_direct_exit(); } return ret; @@ -462,6 +464,7 @@ static int __orderly_poweroff(bool force) ret = run_cmd(poweroff_cmd); if (ret && force) { + printk_prefer_direct_enter(); pr_warn("Failed to start orderly shutdown: forcing the issue\n"); /* @@ -471,6 +474,7 @@ static int __orderly_poweroff(bool force) */ emergency_sync(); kernel_power_off(); + printk_prefer_direct_exit(); } return ret; @@ -528,6 +532,8 @@ EXPORT_SYMBOL_GPL(orderly_reboot); */ static void hw_failure_emergency_poweroff_func(struct work_struct *work) { + printk_prefer_direct_enter(); + /* * We have reached here after the emergency shutdown waiting period has * expired. This means orderly_poweroff has not been able to shut off @@ -544,6 +550,8 @@ static void hw_failure_emergency_poweroff_func(struct work_struct *work) */ pr_emerg("Hardware protection shutdown failed. 
Trying emergency restart\n"); emergency_restart(); + + printk_prefer_direct_exit(); } static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work, @@ -582,11 +590,13 @@ void hw_protection_shutdown(const char *reason, int ms_until_forced) { static atomic_t allow_proceed = ATOMIC_INIT(1); + printk_prefer_direct_enter(); + pr_emerg("HARDWARE PROTECTION shutdown (%s)\n", reason); /* Shutdown should be initiated only once. */ if (!atomic_dec_and_test(&allow_proceed)) - return; + goto out; /* * Queue a backup emergency shutdown in the event of @@ -594,6 +604,8 @@ void hw_protection_shutdown(const char *reason, int ms_until_forced) */ hw_failure_emergency_poweroff(ms_until_forced); orderly_poweroff(true); +out: + printk_prefer_direct_exit(); } EXPORT_SYMBOL_GPL(hw_protection_shutdown); diff --git a/kernel/watchdog.c b/kernel/watchdog.c index 9166220457bc..40024e03d422 100644 --- a/kernel/watchdog.c +++ b/kernel/watchdog.c @@ -424,6 +424,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) /* Start period for the next softlockup warning. */ update_report_ts(); + printk_prefer_direct_enter(); + pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", smp_processor_id(), duration, current->comm, task_pid_nr(current)); @@ -442,6 +444,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK); if (softlockup_panic) panic("softlockup: hung tasks"); + + printk_prefer_direct_exit(); } return HRTIMER_RESTART; diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c index 247bf0b1582c..701f35f0e2d4 100644 --- a/kernel/watchdog_hld.c +++ b/kernel/watchdog_hld.c @@ -135,6 +135,8 @@ static void watchdog_overflow_callback(struct perf_event *event, if (__this_cpu_read(hard_watchdog_warn) == true) return; + printk_prefer_direct_enter(); + pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n", this_cpu); print_modules(); @@ -155,6 +157,8 @@ static void watchdog_overflow_callback(struct perf_event *event, if (hardlockup_panic) nmi_panic(regs, "Hard LOCKUP"); + printk_prefer_direct_exit(); + __this_cpu_write(hard_watchdog_warn, true); return; } From 09c5ba0aa2fcfdadb17d045c3ee6f86d69270df7 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:48 +0206 Subject: [PATCH 15/18] printk: add kthread console printers Create a kthread for each console to perform console printing. During normal operation (@system_state == SYSTEM_RUNNING), the kthread printers are responsible for all printing on their respective consoles. During non-normal operation, console printing is done as it has been: within the context of the printk caller or within irqwork triggered by the printk caller, referred to as direct printing. Since threaded console printers are responsible for all printing during normal operation, this also includes messages generated via deferred printk calls. If direct printing is in effect during a deferred printk call, the queued irqwork will perform the direct printing. To make it clear that this is the only time that the irqwork will perform direct printing, rename the flag PRINTK_PENDING_OUTPUT to PRINTK_PENDING_DIRECT_OUTPUT. Threaded console printers synchronize against each other and against console lockers by taking the console lock for each message that is printed. Note that the kthread printers do not care about direct printing. They will always try to print if new records are available. They can be blocked by direct printing, but will be woken again once direct printing is finished. 
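The store-then-wake protocol between printk() callers and the printer threads can be sketched in userspace, with a condition variable standing in for log_wait. This is a model of the structure only, not kernel code, and all names are illustrative:

/* cc -std=c11 -pthread -o printer_model printer_model.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t log_wait = PTHREAD_COND_INITIALIZER;
static unsigned long long head_seq;	/* newest stored record */
static unsigned long long con_seq;	/* next record this console prints */
static bool stop;

/* Model of a per-console printing kthread. */
static void *printer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	for (;;) {
		/* Sleep until a record is available or we are told to stop. */
		while (con_seq == head_seq && !stop)
			pthread_cond_wait(&log_wait, &lock);
		if (stop)
			break;
		printf("printing record %llu\n", con_seq++);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Model of vprintk_emit(): store the record, then wake all printers. */
static void emit(void)
{
	pthread_mutex_lock(&lock);
	head_seq++;
	pthread_cond_broadcast(&log_wait);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t thd;

	pthread_create(&thd, NULL, printer, NULL);
	for (int i = 0; i < 3; i++)
		emit();
	sleep(1);

	pthread_mutex_lock(&lock);
	stop = true;
	pthread_cond_broadcast(&log_wait);
	pthread_mutex_unlock(&lock);
	pthread_join(thd, NULL);
	return 0;
}

The kernel side uses wait_event_interruptible() with explicit memory barriers instead of a mutex around the counters, but the wake-up condition plays the same role.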
Console unregistration is a bit tricky because the associated kthread printer cannot be stopped while the console lock is held. A policy is implemented that states: whichever task clears con->thread (under the console lock) is responsible for stopping the kthread. unregister_console() will clear con->thread while the console lock is held and then stop the kthread after releasing the console lock. For consoles that have implemented the exit() callback, the kthread is stopped before exit() is called. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-14-john.ogness@linutronix.de --- include/linux/console.h | 2 + kernel/printk/printk.c | 329 +++++++++++++++++++++++++++++++++++++--- 2 files changed, 309 insertions(+), 22 deletions(-) diff --git a/include/linux/console.h b/include/linux/console.h index 8c1686e2c233..9a251e70c090 100644 --- a/include/linux/console.h +++ b/include/linux/console.h @@ -153,6 +153,8 @@ struct console { uint ospeed; u64 seq; unsigned long dropped; + struct task_struct *thread; + void *data; struct console *next; }; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index ed7f738261cc..e4cdc424c826 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -361,6 +361,13 @@ static int console_msg_format = MSG_FORMAT_DEFAULT; /* syslog_lock protects syslog_* variables and write access to clear_seq. */ static DEFINE_MUTEX(syslog_lock); +/* + * A flag to signify if printk_activate_kthreads() has already started the + * kthread printers. If true, any later registered consoles must start their + * own kthread directly. The flag is write protected by the console_lock. + */ +static bool printk_kthreads_available; + #ifdef CONFIG_PRINTK static atomic_t printk_prefer_direct = ATOMIC_INIT(0); @@ -390,6 +397,39 @@ void printk_prefer_direct_exit(void) WARN_ON(atomic_dec_if_positive(&printk_prefer_direct) < 0); } +/* + * Calling printk() always wakes kthread printers so that they can + * flush the new message to their respective consoles. Also, if direct + * printing is allowed, printk() tries to flush the messages directly. + * + * Direct printing is allowed in situations when the kthreads + * are not available or the system is in a problematic state. + * + * See the implementation about possible races. + */ +static inline bool allow_direct_printing(void) +{ + /* + * Checking kthread availability is a possible race because the + * kthread printers can become permanently disabled during runtime. + * However, doing that requires holding the console_lock, so any + * pending messages will be direct printed by console_unlock(). + */ + if (!printk_kthreads_available) + return true; + + /* + * Prefer direct printing when the system is in a problematic state. + * The context that sets this state will always see the updated value. + * The other contexts do not care. Anyway, direct printing is just a + * best effort. The direct output is only possible when console_lock + * is not already taken and no kthread printers are actively printing. + */ + return (system_state > SYSTEM_RUNNING || + oops_in_progress || + atomic_read(&printk_prefer_direct)); +} + DECLARE_WAIT_QUEUE_HEAD(log_wait); /* All 3 protected by @syslog_lock. */ /* the next printk record to read by syslog(READ) or /proc/kmsg */ @@ -2280,10 +2320,10 @@ asmlinkage int vprintk_emit(int facility, int level, printed_len = vprintk_store(facility, level, dev_info, fmt, args); /* If called from the scheduler, we can not call up(). 
*/ - if (!in_sched) { + if (!in_sched && allow_direct_printing()) { /* * The caller may be holding system-critical or - * timing-sensitive locks. Disable preemption during + * timing-sensitive locks. Disable preemption during direct * printing of all remaining records to all consoles so that * this context can return as soon as possible. Hopefully * another printk() caller will take over the printing. @@ -2326,6 +2366,8 @@ EXPORT_SYMBOL(_printk); static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress); +static void printk_start_kthread(struct console *con); + #else /* CONFIG_PRINTK */ #define CONSOLE_LOG_MAX 0 @@ -2359,6 +2401,8 @@ static void call_console_driver(struct console *con, const char *text, size_t le } static bool suppress_message_printing(int level) { return false; } static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; } +static void printk_start_kthread(struct console *con) { } +static bool allow_direct_printing(void) { return true; } #endif /* CONFIG_PRINTK */ @@ -2559,6 +2603,13 @@ void resume_console(void) down_console_sem(); console_suspended = 0; console_unlock(); + + /* + * While suspended, new records may have been added to the + * ringbuffer. Wake up the kthread printers to print them. + */ + wake_up_klogd(); + pr_flush(1000, true); } @@ -2577,6 +2628,9 @@ static int console_cpu_notify(unsigned int cpu) /* If trylock fails, someone else is doing the printing */ if (console_trylock()) console_unlock(); + + /* Wake kthread printers. Some may have become usable. */ + wake_up_klogd(); } return 0; } @@ -2648,18 +2702,9 @@ static bool abandon_console_lock_in_panic(void) return atomic_read(&panic_cpu) != raw_smp_processor_id(); } -/* - * Check if the given console is currently capable and allowed to print - * records. - * - * Requires the console_lock. - */ -static inline bool console_is_usable(struct console *con) +static inline bool __console_is_usable(short flags) { - if (!(con->flags & CON_ENABLED)) - return false; - - if (!con->write) + if (!(flags & CON_ENABLED)) return false; /* @@ -2668,12 +2713,26 @@ static inline bool console_is_usable(struct console *con) * cope (CON_ANYTIME) don't call them until this CPU is officially up. */ if (!cpu_online(raw_smp_processor_id()) && - !(con->flags & CON_ANYTIME)) + !(flags & CON_ANYTIME)) return false; return true; } +/* + * Check if the given console is currently capable and allowed to print + * records. + * + * Requires the console_lock. + */ +static inline bool console_is_usable(struct console *con) +{ + if (!con->write) + return false; + + return __console_is_usable(con->flags); +} + static void __console_unlock(void) { console_locked = 0; @@ -2786,8 +2845,8 @@ skip: * were flushed to all usable consoles. A returned false informs the caller * that everything was not flushed (either there were no usable consoles or * another context has taken over printing or it is a panic situation and this - * is not the panic CPU). Regardless the reason, the caller should assume it - * is not useful to immediately try again. + * is not the panic CPU or direct printing is not preferred). Regardless the + * reason, the caller should assume it is not useful to immediately try again. * * Requires the console_lock. */ @@ -2804,6 +2863,10 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove *handover = false; do { + /* Let the kthread printers do the work if they can. 
*/ + if (!allow_direct_printing()) + return false; + any_progress = false; for_each_console(con) { @@ -3018,6 +3081,10 @@ void console_start(struct console *console) console_lock(); console->flags |= CON_ENABLED; console_unlock(); + + /* Wake the newly enabled kthread printer. */ + wake_up_klogd(); + __pr_flush(console, 1000, true); } EXPORT_SYMBOL(console_start); @@ -3218,6 +3285,8 @@ void register_console(struct console *newcon) nr_ext_console_drivers++; newcon->dropped = 0; + newcon->thread = NULL; + if (newcon->flags & CON_PRINTBUFFER) { /* Get a consistent copy of @syslog_seq. */ mutex_lock(&syslog_lock); @@ -3227,6 +3296,10 @@ void register_console(struct console *newcon) /* Begin with next message. */ newcon->seq = prb_next_seq(prb); } + + if (printk_kthreads_available) + printk_start_kthread(newcon); + console_unlock(); console_sysfs_notify(); @@ -3253,6 +3326,7 @@ EXPORT_SYMBOL(register_console); int unregister_console(struct console *console) { + struct task_struct *thd; struct console *con; int res; @@ -3293,7 +3367,20 @@ int unregister_console(struct console *console) console_drivers->flags |= CON_CONSDEV; console->flags &= ~CON_ENABLED; + + /* + * console->thread can only be cleared under the console lock. But + * stopping the thread must be done without the console lock. The + * task that clears @thread is the task that stops the kthread. + */ + thd = console->thread; + console->thread = NULL; + console_unlock(); + + if (thd) + kthread_stop(thd); + console_sysfs_notify(); if (console->exit) @@ -3389,6 +3476,20 @@ static int __init printk_late_init(void) } late_initcall(printk_late_init); +static int __init printk_activate_kthreads(void) +{ + struct console *con; + + console_lock(); + printk_kthreads_available = true; + for_each_console(con) + printk_start_kthread(con); + console_unlock(); + + return 0; +} +early_initcall(printk_activate_kthreads); + #if defined CONFIG_PRINTK /* If @con is specified, only wait for that console. Otherwise wait for all. */ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) @@ -3463,11 +3564,180 @@ bool pr_flush(int timeout_ms, bool reset_on_progress) } EXPORT_SYMBOL(pr_flush); +static void __printk_fallback_preferred_direct(void) +{ + printk_prefer_direct_enter(); + pr_err("falling back to preferred direct printing\n"); + printk_kthreads_available = false; +} + +/* + * Enter preferred direct printing, but never exit. Mark console threads as + * unavailable. The system is then forever in preferred direct printing and + * any printing threads will exit. + * + * Must *not* be called under console_lock. Use + * __printk_fallback_preferred_direct() if already holding console_lock. + */ +static void printk_fallback_preferred_direct(void) +{ + console_lock(); + __printk_fallback_preferred_direct(); + console_unlock(); +} + +static bool printer_should_wake(struct console *con, u64 seq) +{ + short flags; + + if (kthread_should_stop() || !printk_kthreads_available) + return true; + + if (console_suspended) + return false; + + /* + * This is an unsafe read from con->flags, but a false positive is + * not a problem. Worst case it would allow the printer to wake up + * although it is disabled. But the printer will notice that when + * attempting to print and instead go back to sleep. 
+ */ + flags = data_race(READ_ONCE(con->flags)); + + if (!__console_is_usable(flags)) + return false; + + return prb_read_valid(prb, seq, NULL); +} + +static int printk_kthread_func(void *data) +{ + struct console *con = data; + char *dropped_text = NULL; + char *ext_text = NULL; + bool handover; + u64 seq = 0; + char *text; + int error; + + text = kmalloc(CONSOLE_LOG_MAX, GFP_KERNEL); + if (!text) { + con_printk(KERN_ERR, con, "failed to allocate text buffer\n"); + printk_fallback_preferred_direct(); + goto out; + } + + if (con->flags & CON_EXTENDED) { + ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL); + if (!ext_text) { + con_printk(KERN_ERR, con, "failed to allocate ext_text buffer\n"); + printk_fallback_preferred_direct(); + goto out; + } + } else { + dropped_text = kmalloc(DROPPED_TEXT_MAX, GFP_KERNEL); + if (!dropped_text) { + con_printk(KERN_ERR, con, "failed to allocate dropped_text buffer\n"); + printk_fallback_preferred_direct(); + goto out; + } + } + + con_printk(KERN_INFO, con, "printing thread started\n"); + + for (;;) { + /* + * Guarantee this task is visible on the waitqueue before + * checking the wake condition. + * + * The full memory barrier within set_current_state() of + * prepare_to_wait_event() pairs with the full memory barrier + * within wq_has_sleeper(). + * + * This pairs with __wake_up_klogd:A. + */ + error = wait_event_interruptible(log_wait, + printer_should_wake(con, seq)); /* LMM(printk_kthread_func:A) */ + + if (kthread_should_stop() || !printk_kthreads_available) + break; + + if (error) + continue; + + console_lock(); + + if (console_suspended) { + up_console_sem(); + continue; + } + + if (!console_is_usable(con)) { + __console_unlock(); + continue; + } + + /* + * Even though the printk kthread is always preemptible, it is + * still not allowed to call cond_resched() from within + * console drivers. The task may become non-preemptible in the + * console driver call chain. For example, vt_console_print() + * takes a spinlock and then can call into fbcon_redraw(), + * which can conditionally invoke cond_resched(). + */ + console_may_schedule = 0; + console_emit_next_record(con, text, ext_text, dropped_text, &handover); + if (handover) + continue; + + seq = con->seq; + + __console_unlock(); + } + + con_printk(KERN_INFO, con, "printing thread stopped\n"); +out: + kfree(dropped_text); + kfree(ext_text); + kfree(text); + + console_lock(); + /* + * If this kthread is being stopped by another task, con->thread will + * already be NULL. That is fine. The important thing is that it is + * NULL after the kthread exits. + */ + con->thread = NULL; + console_unlock(); + + return 0; +} + +/* Must be called under console_lock. */ +static void printk_start_kthread(struct console *con) +{ + /* + * Do not start a kthread if there is no write() callback. The + * kthreads assume the write() callback exists. 
+ */ + if (!con->write) + return; + + con->thread = kthread_run(printk_kthread_func, con, + "pr/%s%d", con->name, con->index); + if (IS_ERR(con->thread)) { + con->thread = NULL; + con_printk(KERN_ERR, con, "unable to start printing thread\n"); + __printk_fallback_preferred_direct(); + return; + } +} + /* * Delayed printk version, for scheduler-internal messages: */ -#define PRINTK_PENDING_WAKEUP 0x01 -#define PRINTK_PENDING_OUTPUT 0x02 +#define PRINTK_PENDING_WAKEUP 0x01 +#define PRINTK_PENDING_DIRECT_OUTPUT 0x02 static DEFINE_PER_CPU(int, printk_pending); @@ -3475,10 +3745,14 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work) { int pending = this_cpu_xchg(printk_pending, 0); - if (pending & PRINTK_PENDING_OUTPUT) { + if (pending & PRINTK_PENDING_DIRECT_OUTPUT) { + printk_prefer_direct_enter(); + /* If trylock fails, someone else is doing the printing */ if (console_trylock()) console_unlock(); + + printk_prefer_direct_exit(); } if (pending & PRINTK_PENDING_WAKEUP) @@ -3503,10 +3777,11 @@ static void __wake_up_klogd(int val) * prepare_to_wait_event(), which is called after ___wait_event() adds * the waiter but before it has checked the wait condition. * - * This pairs with devkmsg_read:A and syslog_print:A. + * This pairs with devkmsg_read:A, syslog_print:A, and + * printk_kthread_func:A. */ if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */ - (val & PRINTK_PENDING_OUTPUT)) { + (val & PRINTK_PENDING_DIRECT_OUTPUT)) { this_cpu_or(printk_pending, val); irq_work_queue(this_cpu_ptr(&wake_up_klogd_work)); } @@ -3524,7 +3799,17 @@ void defer_console_output(void) * New messages may have been added directly to the ringbuffer * using vprintk_store(), so wake any waiters as well. */ - __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT); + int val = PRINTK_PENDING_WAKEUP; + + /* + * Make sure that some context will print the messages when direct + * printing is allowed. This happens in situations when the kthreads + * may not be as reliable or perhaps unusable. + */ + if (allow_direct_printing()) + val |= PRINTK_PENDING_DIRECT_OUTPUT; + + __wake_up_klogd(val); } void printk_trigger_flush(void) From 8e274732115f63c1d09136284431b3555bd5cc56 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Mon, 25 Apr 2022 23:04:28 +0206 Subject: [PATCH 16/18] printk: extend console_lock for per-console locking Currently threaded console printers synchronize against each other using console_lock(). However, different console drivers are unrelated and do not require any synchronization between each other. Removing the synchronization between the threaded console printers will allow each console to print at its own speed. But the threaded consoles printers do still need to synchronize against console_lock() callers. Introduce a per-console mutex and a new console boolean field @blocked to provide this synchronization. console_lock() is modified so that it must acquire the mutex of each console in order to set the @blocked field. Console printing threads will acquire their mutex while printing a record. If @blocked was set, the thread will go back to sleep instead of printing. The reason for the @blocked boolean field is so that console_lock() callers do not need to acquire multiple console mutexes simultaneously, which would introduce unnecessary complexity due to nested mutex locking. Also, a new field was chosen instead of adding a new @flags value so that the blocked status could be checked without concern of reading inconsistent values due to @flags updates from other contexts. 
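The @blocked handshake can be sketched in userspace as follows. The key property of the design is that a console_lock() caller only ever holds one console mutex at a time, and a printer only ever takes its own console's mutex, so no nested mutex locking arises. This is a model only; the struct layout and function names are stand-ins:

/* cc -std=c11 -pthread -o blocked_model blocked_model.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Model of one registered console. */
struct console {
	pthread_mutex_t lock;	/* the new per-console mutex */
	bool blocked;		/* set by console_lock() callers */
	const char *name;
};

static struct console cons[2] = {
	{ PTHREAD_MUTEX_INITIALIZER, false, "ttyS0" },
	{ PTHREAD_MUTEX_INITIALIZER, false, "tty0" },
};

/* console_lock()/console_unlock() side: one mutex at a time, never nested. */
static void consoles_set_blocked(bool blocked)
{
	for (int i = 0; i < 2; i++) {
		pthread_mutex_lock(&cons[i].lock);
		cons[i].blocked = blocked;
		pthread_mutex_unlock(&cons[i].lock);
	}
}

/* Printer side: print one record unless a console_lock() caller blocked us. */
static bool print_one(struct console *con)
{
	bool printed = false;

	pthread_mutex_lock(&con->lock);
	if (!con->blocked) {
		printf("%s: writing a record\n", con->name);
		printed = true;
	}
	pthread_mutex_unlock(&con->lock);
	return printed;		/* false: go back to sleep */
}

int main(void)
{
	print_one(&cons[0]);		/* prints */
	consoles_set_blocked(true);	/* console_lock() */
	print_one(&cons[0]);		/* refused; the printer would sleep */
	consoles_set_blocked(false);	/* console_unlock() */
	print_one(&cons[1]);		/* prints again */
	return 0;
}

(In the locking overview further below, the atomic_dec() in threaded_printer() operates on the same counter introduced for the console_trylock() case, console_kthreads_active.)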
Threaded console printers also need to synchronize against console_trylock() callers. Since console_trylock() may be called from any context, the per-console mutex cannot be used for this synchronization. (mutex_trylock() cannot be called from atomic contexts.) Introduce a global atomic counter to identify if any threaded printers are active. The threaded printers will also check the atomic counter to identify if the console has been locked by another task via console_trylock(). Note that @console_sem is still used to provide synchronization between console_lock() and console_trylock() callers. A locking overview for console_lock(), console_trylock(), and the threaded printers is as follows (pseudo code): console_lock() { down(&console_sem); for_each_console(con) { mutex_lock(&con->lock); con->blocked = true; mutex_unlock(&con->lock); } /* console_lock acquired */ } console_trylock() { if (down_trylock(&console_sem) == 0) { if (atomic_cmpxchg(&console_kthreads_active, 0, -1) == 0) { /* console_lock acquired */ } } } threaded_printer() { mutex_lock(&con->lock); if (!con->blocked) { /* console_lock() callers blocked */ if (atomic_inc_unless_negative(&console_kthreads_active)) { /* console_trylock() callers blocked */ con->write(); atomic_dec(&console_lock_count); } } mutex_unlock(&con->lock); } The console owner and waiter logic now only applies between contexts that have taken the console_lock via console_trylock(). Threaded printers never take the console_lock, so they do not have a console_lock to handover. Tasks that have used console_lock() will block the threaded printers using a mutex and if the console_lock is handed over to an atomic context, it would be unable to unblock the threaded printers. However, the console_trylock() case is really the only scenario that is interesting for handovers anyway. @panic_console_dropped must change to atomic_t since it is no longer protected exclusively by the console_lock. Since threaded printers remain asleep if they see that the console is locked, they now must be explicitly woken in __console_unlock(). This means wake_up_klogd() calls following a console_unlock() are no longer necessary and are removed. Also note that threaded printers no longer need to check @console_suspended. The check for the @blocked field implicitly covers the suspended console case. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/878rrs6ft7.fsf@jogness.linutronix.de --- include/linux/console.h | 15 +++ kernel/printk/printk.c | 261 +++++++++++++++++++++++++++++++--------- 2 files changed, 220 insertions(+), 56 deletions(-) diff --git a/include/linux/console.h b/include/linux/console.h index 9a251e70c090..143653090c48 100644 --- a/include/linux/console.h +++ b/include/linux/console.h @@ -16,6 +16,7 @@ #include #include +#include struct vc_data; struct console_font_op; @@ -154,6 +155,20 @@ struct console { u64 seq; unsigned long dropped; struct task_struct *thread; + bool blocked; + + /* + * The per-console lock is used by printing kthreads to synchronize + * this console with callers of console_lock(). This is necessary in + * order to allow printing kthreads to run in parallel to each other, + * while each safely accessing the @blocked field and synchronizing + * against direct printing via console_lock/console_unlock. + * + * Note: For synchronizing against direct printing via + * console_trylock/console_unlock, see the static global + * variable @console_kthreads_active. 
+ */ + struct mutex lock; void *data; struct console *next; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index e4cdc424c826..750d1229cc11 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -223,6 +223,33 @@ int devkmsg_sysctl_set_loglvl(struct ctl_table *table, int write, /* Number of registered extended console drivers. */ static int nr_ext_console_drivers; +/* + * Used to synchronize printing kthreads against direct printing via + * console_trylock/console_unlock. + * + * Values: + * -1 = console kthreads atomically blocked (via global trylock) + * 0 = no kthread printing, console not locked (via trylock) + * >0 = kthread(s) actively printing + * + * Note: For synchronizing against direct printing via + * console_lock/console_unlock, see the @lock variable in + * struct console. + */ +static atomic_t console_kthreads_active = ATOMIC_INIT(0); + +#define console_kthreads_atomic_tryblock() \ + (atomic_cmpxchg(&console_kthreads_active, 0, -1) == 0) +#define console_kthreads_atomic_unblock() \ + atomic_cmpxchg(&console_kthreads_active, -1, 0) +#define console_kthreads_atomically_blocked() \ + (atomic_read(&console_kthreads_active) == -1) + +#define console_kthread_printing_tryenter() \ + atomic_inc_unless_negative(&console_kthreads_active) +#define console_kthread_printing_exit() \ + atomic_dec(&console_kthreads_active) + /* * Helper macros to handle lockdep when locking/unlocking console_sem. We use * macros instead of functions so that _RET_IP_ contains useful information. @@ -270,6 +297,49 @@ static bool panic_in_progress(void) return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID); } +/* + * Tracks whether kthread printers are all blocked. A value of true implies + * that the console is locked via console_lock() or the console is suspended. + * Writing to this variable requires holding @console_sem. + */ +static bool console_kthreads_blocked; + +/* + * Block all kthread printers from a schedulable context. + * + * Requires holding @console_sem. + */ +static void console_kthreads_block(void) +{ + struct console *con; + + for_each_console(con) { + mutex_lock(&con->lock); + con->blocked = true; + mutex_unlock(&con->lock); + } + + console_kthreads_blocked = true; +} + +/* + * Unblock all kthread printers from a schedulable context. + * + * Requires holding @console_sem. + */ +static void console_kthreads_unblock(void) +{ + struct console *con; + + for_each_console(con) { + mutex_lock(&con->lock); + con->blocked = false; + mutex_unlock(&con->lock); + } + + console_kthreads_blocked = false; +} + /* * This is used for debugging the mess that is the VT code by * keeping track if we have the console semaphore held. It's @@ -2603,13 +2673,6 @@ void resume_console(void) down_console_sem(); console_suspended = 0; console_unlock(); - - /* - * While suspended, new records may have been added to the - * ringbuffer. Wake up the kthread printers to print them. - */ - wake_up_klogd(); - pr_flush(1000, true); } @@ -2628,9 +2691,14 @@ static int console_cpu_notify(unsigned int cpu) /* If trylock fails, someone else is doing the printing */ if (console_trylock()) console_unlock(); - - /* Wake kthread printers. Some may have become usable. */ - wake_up_klogd(); + else { + /* + * If a new CPU comes online, the conditions for + * printer_should_wake() may have changed for some + * kthread printer with !CON_ANYTIME. 
+ */ + wake_up_klogd(); + } } return 0; } @@ -2650,6 +2718,7 @@ void console_lock(void) down_console_sem(); if (console_suspended) return; + console_kthreads_block(); console_locked = 1; console_may_schedule = 1; } @@ -2671,6 +2740,10 @@ int console_trylock(void) up_console_sem(); return 0; } + if (!console_kthreads_atomic_tryblock()) { + up_console_sem(); + return 0; + } console_locked = 1; console_may_schedule = 0; return 1; @@ -2679,7 +2752,7 @@ EXPORT_SYMBOL(console_trylock); int is_console_locked(void) { - return console_locked; + return (console_locked || atomic_read(&console_kthreads_active)); } EXPORT_SYMBOL(is_console_locked); @@ -2723,7 +2796,7 @@ static inline bool __console_is_usable(short flags) * Check if the given console is currently capable and allowed to print * records. * - * Requires the console_lock. + * Requires holding the console_lock. */ static inline bool console_is_usable(struct console *con) { @@ -2736,6 +2809,22 @@ static inline bool console_is_usable(struct console *con) static void __console_unlock(void) { console_locked = 0; + + /* + * Depending on whether console_lock() or console_trylock() was used, + * appropriately allow the kthread printers to continue. + */ + if (console_kthreads_blocked) + console_kthreads_unblock(); + else + console_kthreads_atomic_unblock(); + + /* + * New records may have arrived while the console was locked. + * Wake the kthread printers to print them. + */ + wake_up_klogd(); + up_console_sem(); } @@ -2753,17 +2842,19 @@ static void __console_unlock(void) * * @handover will be set to true if a printk waiter has taken over the * console_lock, in which case the caller is no longer holding the - * console_lock. Otherwise it is set to false. + * console_lock. Otherwise it is set to false. A NULL pointer may be provided + * to disable allowing the console_lock to be taken over by a printk waiter. * * Returns false if the given console has no next record to print, otherwise * true. * - * Requires the console_lock. + * Requires the console_lock if @handover is non-NULL. + * Requires con->lock otherwise. */ -static bool console_emit_next_record(struct console *con, char *text, char *ext_text, - char *dropped_text, bool *handover) +static bool __console_emit_next_record(struct console *con, char *text, char *ext_text, + char *dropped_text, bool *handover) { - static int panic_console_dropped; + static atomic_t panic_console_dropped = ATOMIC_INIT(0); struct printk_info info; struct printk_record r; unsigned long flags; @@ -2772,7 +2863,8 @@ static bool console_emit_next_record(struct console *con, char *text, char *ext_ prb_rec_init_rd(&r, &info, text, CONSOLE_LOG_MAX); - *handover = false; + if (handover) + *handover = false; if (!prb_read_valid(prb, con->seq, &r)) return false; @@ -2780,7 +2872,8 @@ static bool console_emit_next_record(struct console *con, char *text, char *ext_ if (con->seq != r.info->seq) { con->dropped += r.info->seq - con->seq; con->seq = r.info->seq; - if (panic_in_progress() && panic_console_dropped++ > 10) { + if (panic_in_progress() && + atomic_fetch_inc_relaxed(&panic_console_dropped) > 10) { suppress_panic_printk = 1; pr_warn_once("Too many dropped messages. 
Suppress messages on non-panic CPUs to prevent livelock.\n"); } @@ -2802,31 +2895,61 @@ static bool console_emit_next_record(struct console *con, char *text, char *ext_ len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time); } - /* - * While actively printing out messages, if another printk() - * were to occur on another CPU, it may wait for this one to - * finish. This task can not be preempted if there is a - * waiter waiting to take over. - * - * Interrupts are disabled because the hand over to a waiter - * must not be interrupted until the hand over is completed - * (@console_waiter is cleared). - */ - printk_safe_enter_irqsave(flags); - console_lock_spinning_enable(); + if (handover) { + /* + * While actively printing out messages, if another printk() + * were to occur on another CPU, it may wait for this one to + * finish. This task can not be preempted if there is a + * waiter waiting to take over. + * + * Interrupts are disabled because the hand over to a waiter + * must not be interrupted until the hand over is completed + * (@console_waiter is cleared). + */ + printk_safe_enter_irqsave(flags); + console_lock_spinning_enable(); + + /* don't trace irqsoff print latency */ + stop_critical_timings(); + } - stop_critical_timings(); /* don't trace print latency */ call_console_driver(con, write_text, len, dropped_text); - start_critical_timings(); con->seq++; - *handover = console_lock_spinning_disable_and_check(); - printk_safe_exit_irqrestore(flags); + if (handover) { + start_critical_timings(); + *handover = console_lock_spinning_disable_and_check(); + printk_safe_exit_irqrestore(flags); + } skip: return true; } +/* + * Print a record for a given console, but allow another printk() caller to + * take over the console_lock and continue printing. + * + * Requires the console_lock, but depending on @handover after the call, the + * caller may no longer have the console_lock. + * + * See __console_emit_next_record() for argument and return details. + */ +static bool console_emit_next_record_transferable(struct console *con, char *text, char *ext_text, + char *dropped_text, bool *handover) +{ + /* + * Handovers are only supported if threaded printers are atomically + * blocked. The context taking over the console_lock may be atomic. + */ + if (!console_kthreads_atomically_blocked()) { + *handover = false; + handover = NULL; + } + + return __console_emit_next_record(con, text, ext_text, dropped_text, handover); +} + /* * Print out all remaining records to all consoles. * @@ -2878,13 +3001,11 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove if (con->flags & CON_EXTENDED) { /* Extended consoles do not print "dropped messages". 
*/ - progress = console_emit_next_record(con, &text[0], - &ext_text[0], NULL, - handover); + progress = console_emit_next_record_transferable(con, &text[0], + &ext_text[0], NULL, handover); } else { - progress = console_emit_next_record(con, &text[0], - NULL, &dropped_text[0], - handover); + progress = console_emit_next_record_transferable(con, &text[0], + NULL, &dropped_text[0], handover); } if (*handover) return false; @@ -2999,6 +3120,10 @@ void console_unblank(void) if (oops_in_progress) { if (down_trylock_console_sem() != 0) return; + if (!console_kthreads_atomic_tryblock()) { + up_console_sem(); + return; + } } else console_lock(); @@ -3081,10 +3206,6 @@ void console_start(struct console *console) console_lock(); console->flags |= CON_ENABLED; console_unlock(); - - /* Wake the newly enabled kthread printer. */ - wake_up_klogd(); - __pr_flush(console, 1000, true); } EXPORT_SYMBOL(console_start); @@ -3286,6 +3407,8 @@ void register_console(struct console *newcon) newcon->dropped = 0; newcon->thread = NULL; + newcon->blocked = true; + mutex_init(&newcon->lock); if (newcon->flags & CON_PRINTBUFFER) { /* Get a consistent copy of @syslog_seq. */ @@ -3586,6 +3709,19 @@ static void printk_fallback_preferred_direct(void) console_unlock(); } +/* + * Print a record for a given console, not allowing another printk() caller + * to take over. This is appropriate for contexts that do not have the + * console_lock. + * + * See __console_emit_next_record() for argument and return details. + */ +static bool console_emit_next_record(struct console *con, char *text, char *ext_text, + char *dropped_text) +{ + return __console_emit_next_record(con, text, ext_text, dropped_text, NULL); +} + static bool printer_should_wake(struct console *con, u64 seq) { short flags; @@ -3593,8 +3729,10 @@ static bool printer_should_wake(struct console *con, u64 seq) if (kthread_should_stop() || !printk_kthreads_available) return true; - if (console_suspended) + if (con->blocked || + console_kthreads_atomically_blocked()) { return false; + } /* * This is an unsafe read from con->flags, but a false positive is @@ -3615,7 +3753,6 @@ static int printk_kthread_func(void *data) struct console *con = data; char *dropped_text = NULL; char *ext_text = NULL; - bool handover; u64 seq = 0; char *text; int error; @@ -3665,15 +3802,27 @@ static int printk_kthread_func(void *data) if (error) continue; - console_lock(); + error = mutex_lock_interruptible(&con->lock); + if (error) + continue; - if (console_suspended) { - up_console_sem(); + if (con->blocked || + !console_kthread_printing_tryenter()) { + /* Another context has locked the console_lock. */ + mutex_unlock(&con->lock); continue; } - if (!console_is_usable(con)) { - __console_unlock(); + /* + * Although this context has not locked the console_lock, it + * is known that the console_lock is not locked and it is not + * possible for any other context to lock the console_lock. + * Therefore it is safe to read con->flags. + */ + + if (!__console_is_usable(con->flags)) { + console_kthread_printing_exit(); + mutex_unlock(&con->lock); continue; } @@ -3686,13 +3835,13 @@ static int printk_kthread_func(void *data) * which can conditionally invoke cond_resched(). 
*/ console_may_schedule = 0; - console_emit_next_record(con, text, ext_text, dropped_text, &handover); - if (handover) - continue; + console_emit_next_record(con, text, ext_text, dropped_text); seq = con->seq; - __console_unlock(); + console_kthread_printing_exit(); + + mutex_unlock(&con->lock); } con_printk(KERN_INFO, con, "printing thread stopped\n"); From ab406816fca009349b89cbde885daf68a8c77e33 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Thu, 21 Apr 2022 23:28:50 +0206 Subject: [PATCH 17/18] printk: remove @console_locked The static global variable @console_locked is used to help debug VT code to make sure that certain code paths are running with the console_lock held. However, this information is also available with the static global variable @console_kthreads_blocked (for locking via console_lock()), and the static global variable @console_kthreads_active (for locking via console_trylock()). Remove @console_locked and update is_console_locked() to use the alternative variables. Signed-off-by: John Ogness Reviewed-by: Petr Mladek Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220421212250.565456-16-john.ogness@linutronix.de --- kernel/printk/printk.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 750d1229cc11..f66d6e72a642 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -340,15 +340,7 @@ static void console_kthreads_unblock(void) console_kthreads_blocked = false; } -/* - * This is used for debugging the mess that is the VT code by - * keeping track if we have the console semaphore held. It's - * definitely not the perfect debug tool (we don't know if _WE_ - * hold it and are racing, but it helps tracking those weird code - * paths in the console code where we end up in places I want - * locked without the console semaphore held). - */ -static int console_locked, console_suspended; +static int console_suspended; /* * Array of consoles built from command line options (console=) @@ -2719,7 +2711,6 @@ void console_lock(void) if (console_suspended) return; console_kthreads_block(); - console_locked = 1; console_may_schedule = 1; } EXPORT_SYMBOL(console_lock); @@ -2744,15 +2735,26 @@ int console_trylock(void) up_console_sem(); return 0; } - console_locked = 1; console_may_schedule = 0; return 1; } EXPORT_SYMBOL(console_trylock); +/* + * This is used to help to make sure that certain paths within the VT code are + * running with the console lock held. It is definitely not the perfect debug + * tool (it is not known if the VT code is the task holding the console lock), + * but it helps tracking those weird code paths in the console code such as + * when the console is suspended: where the console is not locked but no + * console printing may occur. + * + * Note: This returns true when the console is suspended but is not locked. + * This is intentional because the VT code must consider that situation + * the same as if the console was locked. + */ int is_console_locked(void) { - return (console_locked || atomic_read(&console_kthreads_active)); + return (console_kthreads_blocked || atomic_read(&console_kthreads_active)); } EXPORT_SYMBOL(is_console_locked); @@ -2808,8 +2810,6 @@ static inline bool console_is_usable(struct console *con) static void __console_unlock(void) { - console_locked = 0; - /* * Depending on whether console_lock() or console_trylock() was used, * appropriately allow the kthread printers to continue. 
@@ -3127,7 +3127,6 @@ void console_unblank(void) } else console_lock(); - console_locked = 1; console_may_schedule = 0; for_each_console(c) if ((c->flags & CON_ENABLED) && c->unblank) From 701850dc0c31bfadf75a0a74af7d2c97859945ec Mon Sep 17 00:00:00 2001 From: Marco Elver Date: Tue, 3 May 2022 09:38:44 +0200 Subject: [PATCH 18/18] printk, tracing: fix console tracepoint The original intent of the 'console' tracepoint per the commit 95100358491a ("printk/tracing: Add console output tracing") had been to "[...] record any printk messages into the trace, regardless of the current console loglevel. This can help correlate (existing) printk debugging with other tracing." Petr points out [1] that calling trace_console_rcuidle() in call_console_driver() had been the wrong thing for a while, because "printk() always used console_trylock() and the message was flushed to the console only when the trylock succeeded. And it was always deferred in NMI or when printed via printk_deferred()." With the commit 09c5ba0aa2fc ("printk: add kthread console printers"), things only got worse, and calls to call_console_driver() no longer happen with typical printk() calls but always appear deferred [2]. As such, the tracepoint can no longer serve its purpose to clearly correlate printk() calls and other tracing, as well as breaks usecases that expect every printk() call to result in a callback of the console tracepoint. Notably, the KFENCE and KCSAN test suites, which want to capture console output and assume a printk() immediately gives us a callback to the console tracepoint. Fix the console tracepoint by moving it into printk_sprint() [3]. One notable difference is that by moving tracing into printk_sprint(), the 'text' will no longer include the "header" (loglevel and timestamp), but only the raw message. Arguably this is less of a problem now that the console tracepoint happens on the printk() call and isn't delayed. Link: https://lore.kernel.org/all/Ym+WqKStCg%2FEHfh3@alley/ [1] Link: https://lore.kernel.org/all/CA+G9fYu2kS0wR4WqMRsj2rePKV9XLgOU1PiXnMvpT+Z=c2ucHA@mail.gmail.com/ [2] Link: https://lore.kernel.org/all/87fslup9dx.fsf@jogness.linutronix.de/ [3] Reported-by: Linux Kernel Functional Testing Signed-off-by: Marco Elver Cc: John Ogness Cc: Petr Mladek Reviewed-by: Petr Mladek Acked-by: John Ogness Acked-by: Steven Rostedt (Google) Signed-off-by: Petr Mladek Link: https://lore.kernel.org/r/20220503073844.4148944-1-elver@google.com --- kernel/printk/printk.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index f66d6e72a642..a3e1035929b0 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -2064,8 +2064,6 @@ static void call_console_driver(struct console *con, const char *text, size_t le { size_t dropped_len; - trace_console_rcuidle(text, len); - if (con->dropped && dropped_text) { dropped_len = snprintf(dropped_text, DROPPED_TEXT_MAX, "** %lu printk messages dropped **\n", @@ -2240,6 +2238,8 @@ static u16 printk_sprint(char *text, u16 size, int facility, } } + trace_console_rcuidle(text, text_len); + return text_len; }
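To make the observable difference concrete: after this change the tracepoint fires when the message is stored, on the raw text, while the loglevel and timestamp header is only produced when a record is later flushed to a console. A self-contained userspace sketch of that split; the function names and the fixed fake header are stand-ins, not kernel code:

/* cc -o trace_model trace_model.c */
#include <stdio.h>
#include <string.h>

/* Stand-in for the 'console' tracepoint callback. */
static void trace_console(const char *text, size_t len)
{
	printf("trace: \"%.*s\"\n", (int)len, text);
}

/* Model of printk_sprint(): the hook fires at store time, on the raw text. */
static size_t store_record(char *buf, size_t size, const char *msg)
{
	size_t len = strlen(msg);

	if (len >= size)
		len = size - 1;
	memcpy(buf, msg, len);
	buf[len] = '\0';

	/* Every printk() is observed here, even if console flushing is deferred. */
	trace_console(buf, len);
	return len;
}

/* Model of record_print_text(): headers are added only when flushing. */
static void flush_record(const char *buf)
{
	printf("console: <6>[    0.000000] %s\n", buf);
}

int main(void)
{
	char buf[128];

	store_record(buf, sizeof(buf), "hello");
	flush_record(buf);	/* may happen much later, in another context */
	return 0;
}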