/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_WAIT_H
#define _LINUX_WAIT_H
/*
 * Linux wait queue related types and methods
 */
#include <linux/list.h>
#include <linux/stddef.h>
#include <linux/spinlock.h>

#include <asm/current.h>
#include <uapi/linux/wait.h>

typedef struct wait_queue_entry wait_queue_entry_t;

typedef int (*wait_queue_func_t)(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);
int default_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);

/* wait_queue_entry::flags */
#define WQ_FLAG_EXCLUSIVE	0x01
#define WQ_FLAG_WOKEN		0x02
#define WQ_FLAG_BOOKMARK	0x04
#define WQ_FLAG_CUSTOM		0x08
#define WQ_FLAG_DONE		0x10
#define WQ_FLAG_PRIORITY	0x20

/*
 * A single wait-queue entry structure:
 */
struct wait_queue_entry {
	unsigned int		flags;
	void			*private;
	wait_queue_func_t	func;
	struct list_head	entry;
};

struct wait_queue_head {
	spinlock_t		lock;
	struct list_head	head;
};
typedef struct wait_queue_head wait_queue_head_t;

struct task_struct;

/*
 * Macros for declaration and initialisation of the datatypes
 */

#define __WAITQUEUE_INITIALIZER(name, tsk) {					\
	.private	= tsk,							\
	.func		= default_wake_function,				\
	.entry		= { NULL, NULL } }

#define DECLARE_WAITQUEUE(name, tsk)						\
	struct wait_queue_entry name = __WAITQUEUE_INITIALIZER(name, tsk)

#define __WAIT_QUEUE_HEAD_INITIALIZER(name) {					\
	.lock		= __SPIN_LOCK_UNLOCKED(name.lock),			\
	.head		= LIST_HEAD_INIT(name.head) }

#define DECLARE_WAIT_QUEUE_HEAD(name) \
	struct wait_queue_head name = __WAIT_QUEUE_HEAD_INITIALIZER(name)

extern void __init_waitqueue_head(struct wait_queue_head *wq_head, const char *name, struct lock_class_key *);

#define init_waitqueue_head(wq_head)						\
	do {									\
		static struct lock_class_key __key;				\
										\
		__init_waitqueue_head((wq_head), #wq_head, &__key);		\
	} while (0)

#ifdef CONFIG_LOCKDEP
# define __WAIT_QUEUE_HEAD_INIT_ONSTACK(name) \
	({ init_waitqueue_head(&name); name; })
# define DECLARE_WAIT_QUEUE_HEAD_ONSTACK(name) \
	struct wait_queue_head name = __WAIT_QUEUE_HEAD_INIT_ONSTACK(name)
#else
# define DECLARE_WAIT_QUEUE_HEAD_ONSTACK(name) DECLARE_WAIT_QUEUE_HEAD(name)
#endif
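
/*
 * Usage sketch (illustrative only; 'my_waitqueue' and 'struct my_device' are
 * hypothetical names): a file-scope wait queue head can be declared with
 * DECLARE_WAIT_QUEUE_HEAD(), while one embedded in a dynamically allocated
 * object must be initialised with init_waitqueue_head() before use:
 *
 *	static DECLARE_WAIT_QUEUE_HEAD(my_waitqueue);
 *
 *	struct my_device {
 *		wait_queue_head_t	wq;
 *	};
 *
 *	static void my_device_setup(struct my_device *dev)
 *	{
 *		init_waitqueue_head(&dev->wq);
 *	}
 */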

static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
{
	wq_entry->flags		= 0;
	wq_entry->private	= p;
	wq_entry->func		= default_wake_function;
}

static inline void
init_waitqueue_func_entry(struct wait_queue_entry *wq_entry, wait_queue_func_t func)
{
	wq_entry->flags		= 0;
	wq_entry->private	= NULL;
	wq_entry->func		= func;
}
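
/*
 * Usage sketch (illustrative only; 'my_wake_fn' and 'struct my_ctx' are
 * hypothetical): init_waitqueue_func_entry() lets a waiter be notified via a
 * callback instead of being a sleeping task, the pattern used by poll/epoll
 * style waiters:
 *
 *	static int my_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
 *			      int flags, void *key)
 *	{
 *		struct my_ctx *ctx = container_of(wq_entry, struct my_ctx, wait);
 *
 *		// deliver the notification for ctx here, then report that
 *		// this entry handled the wakeup
 *		return 1;
 *	}
 *
 *	init_waitqueue_func_entry(&ctx->wait, my_wake_fn);
 *	add_wait_queue(&dev->wq, &ctx->wait);
 */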

/**
 * waitqueue_active -- locklessly test for waiters on the queue
 * @wq_head: the waitqueue to test for waiters
 *
 * returns true if the wait list is not empty
 *
 * NOTE: this function is lockless and requires care, incorrect usage _will_
 * lead to sporadic and non-obvious failure.
 *
 * Use either while holding wait_queue_head::lock or when used for wakeups
 * with an extra smp_mb() like::
 *
 *      CPU0 - waker                    CPU1 - waiter
 *
 *                                      for (;;) {
 *      @cond = true;                     prepare_to_wait(&wq_head, &wait, state);
 *      smp_mb();                         // smp_mb() from set_current_state()
 *      if (waitqueue_active(wq_head))         if (@cond)
 *        wake_up(wq_head);                      break;
 *                                        schedule();
 *                                      }
 *                                      finish_wait(&wq_head, &wait);
 *
 * Because without the explicit smp_mb() it's possible for the
 * waitqueue_active() load to get hoisted over the @cond store such that we'll
 * observe an empty wait list while the waiter might not observe @cond.
 *
 * Also note that this 'optimization' trades a spin_lock() for an smp_mb(),
 * which (when the lock is uncontended) are of roughly equal cost.
 */
static inline int waitqueue_active(struct wait_queue_head *wq_head)
{
	return !list_empty(&wq_head->head);
}

/**
 * wq_has_single_sleeper - check if there is only one sleeper
 * @wq_head: wait queue head
 *
 * Returns true if wq_head has only one sleeper on the list.
 *
 * Please refer to the comment for waitqueue_active.
 */
static inline bool wq_has_single_sleeper(struct wait_queue_head *wq_head)
{
	return list_is_singular(&wq_head->head);
}

/**
 * wq_has_sleeper - check if there are any waiting processes
 * @wq_head: wait queue head
 *
 * Returns true if wq_head has waiting processes.
 *
 * Please refer to the comment for waitqueue_active.
 */
static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
{
	/*
	 * We need to be sure we are in sync with the
	 * add_wait_queue modifications to the wait queue.
	 *
	 * This memory barrier should be paired with one on the
	 * waiting side.
	 */
	smp_mb();
	return waitqueue_active(wq_head);
}
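
/*
 * Usage sketch (illustrative only; 'dev' and 'data_ready' are hypothetical):
 * on the waker side, wq_has_sleeper() supplies the barrier that pairs with
 * the waiter's set_current_state(), so the wakeup cannot be missed while the
 * wake_up() call is still skipped when nobody is waiting:
 *
 *	dev->data_ready = true;
 *	if (wq_has_sleeper(&dev->wq))
 *		wake_up_interruptible(&dev->wq);
 */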

extern void add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void add_wait_queue_priority(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);

static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	struct list_head *head = &wq_head->head;
	struct wait_queue_entry *wq;

	list_for_each_entry(wq, &wq_head->head, entry) {
		if (!(wq->flags & WQ_FLAG_PRIORITY))
			break;
		head = &wq->entry;
	}
	list_add(&wq_entry->entry, head);
}
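
/*
 * Note: __add_wait_queue() skips over entries that have WQ_FLAG_PRIORITY set,
 * so new (non-priority) entries are queued behind them. Priority entries,
 * added via add_wait_queue_priority(), therefore stay at the front of the
 * list and are considered first on wakeup.
 */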

/*
 * Used for wake-one threads:
 */
static inline void
__add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
	__add_wait_queue(wq_head, wq_entry);
}

static inline void __add_wait_queue_entry_tail(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	list_add_tail(&wq_entry->entry, &wq_head->head);
}

static inline void
__add_wait_queue_entry_tail_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
	__add_wait_queue_entry_tail(wq_head, wq_entry);
}

static inline void
__remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	list_del(&wq_entry->entry);
}

void __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked_key_bookmark(struct wait_queue_head *wq_head,
		unsigned int mode, void *key, wait_queue_entry_t *bookmark);
void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);
void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode);

#define wake_up(x)			__wake_up(x, TASK_NORMAL, 1, NULL)
#define wake_up_nr(x, nr)		__wake_up(x, TASK_NORMAL, nr, NULL)
#define wake_up_all(x)			__wake_up(x, TASK_NORMAL, 0, NULL)
#define wake_up_locked(x)		__wake_up_locked((x), TASK_NORMAL, 1)
#define wake_up_all_locked(x)		__wake_up_locked((x), TASK_NORMAL, 0)

#define wake_up_interruptible(x)	__wake_up(x, TASK_INTERRUPTIBLE, 1, NULL)
#define wake_up_interruptible_nr(x, nr)	__wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
#define wake_up_interruptible_all(x)	__wake_up(x, TASK_INTERRUPTIBLE, 0, NULL)
#define wake_up_interruptible_sync(x)	__wake_up_sync((x), TASK_INTERRUPTIBLE)

/*
 * Wakeup macros to be used to report events to the targets.
 */
#define poll_to_key(m) ((void *)(__force uintptr_t)(__poll_t)(m))
#define key_to_poll(m) ((__force __poll_t)(uintptr_t)(void *)(m))
#define wake_up_poll(x, m)							\
	__wake_up(x, TASK_NORMAL, 1, poll_to_key(m))
#define wake_up_locked_poll(x, m)						\
	__wake_up_locked_key((x), TASK_NORMAL, poll_to_key(m))
#define wake_up_interruptible_poll(x, m)					\
	__wake_up(x, TASK_INTERRUPTIBLE, 1, poll_to_key(m))
#define wake_up_interruptible_sync_poll(x, m)					\
	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m))
#define wake_up_interruptible_sync_poll_locked(x, m)				\
	__wake_up_locked_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m))
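
/*
 * Usage sketch (illustrative only; 'my_poll', 'struct my_device' and
 * 'data_ready' are hypothetical): a driver's poll() method registers on the
 * queue with poll_wait(), and the producer path reports which events became
 * ready via the poll-flavoured wakeups so poll/epoll waiters receive a key:
 *
 *	static __poll_t my_poll(struct file *file, poll_table *wait)
 *	{
 *		struct my_device *dev = file->private_data;
 *
 *		poll_wait(file, &dev->wq, wait);
 *		return dev->data_ready ? EPOLLIN | EPOLLRDNORM : 0;
 *	}
 *
 *	// producer side
 *	dev->data_ready = true;
 *	wake_up_interruptible_poll(&dev->wq, EPOLLIN | EPOLLRDNORM);
 */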

#define ___wait_cond_timeout(condition)						\
({										\
	bool __cond = (condition);						\
	if (__cond && !__ret)							\
		__ret = 1;							\
	__cond || !__ret;							\
})

#define ___wait_is_interruptible(state)						\
	(!__builtin_constant_p(state) ||					\
	 state == TASK_INTERRUPTIBLE || state == TASK_KILLABLE)		\

extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);

/*
 * The below macro ___wait_event() has an explicit shadow of the __ret
 * variable when used from the wait_event_*() macros.
 *
 * This is so that both can use the ___wait_cond_timeout() construct
 * to wrap the condition.
 *
 * The type inconsistency of the wait_event_*() __ret variable is also
 * on purpose; we use long where we can return timeout values and int
 * otherwise.
 */

#define ___wait_event(wq_head, condition, state, exclusive, ret, cmd)		\
({										\
	__label__ __out;							\
	struct wait_queue_entry __wq_entry;					\
	long __ret = ret;	/* explicit shadow */				\
										\
	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0);	\
	for (;;) {								\
		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
										\
		if (condition)							\
			break;							\
										\
		if (___wait_is_interruptible(state) && __int) {			\
			__ret = __int;						\
			goto __out;						\
		}								\
										\
		cmd;								\
	}									\
	finish_wait(&wq_head, &__wq_entry);					\
__out:	__ret;									\
})

#define __wait_event(wq_head, condition)					\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    schedule())

/**
 * wait_event - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event(wq_head, condition)						\
do {										\
	might_sleep();								\
	if (condition)								\
		break;								\
	__wait_event(wq_head, condition);					\
} while (0)
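
/*
 * Usage sketch (illustrative only; 'dev' and 'data_ready' are hypothetical):
 * the classic pairing of wait_event() with wake_up(). The condition is
 * re-evaluated after every wakeup, so spurious wakeups are harmless:
 *
 *	// consumer
 *	wait_event(dev->wq, dev->data_ready);
 *
 *	// producer
 *	dev->data_ready = true;
 *	wake_up(&dev->wq);
 */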

#define __io_wait_event(wq_head, condition)					\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    io_schedule())

/*
 * io_wait_event() -- like wait_event() but with io_schedule()
 */
#define io_wait_event(wq_head, condition)					\
do {										\
	might_sleep();								\
	if (condition)								\
		break;								\
	__io_wait_event(wq_head, condition);					\
} while (0)

#define __wait_event_freezable(wq_head, condition)				\
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,		\
			freezable_schedule())

/**
 * wait_event_freezable - sleep (or freeze) until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE -- so as not to contribute
 * to system load) until the @condition evaluates to true. The
 * @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_freezable(wq_head, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_freezable(wq_head, condition);		\
	__ret;									\
})

#define __wait_event_timeout(wq_head, condition, timeout)			\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_UNINTERRUPTIBLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_timeout(wq_head, condition, timeout)				\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_timeout(wq_head, condition, timeout);	\
	__ret;									\
})
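
/*
 * Usage sketch (illustrative only; 'dev' and 'done' are hypothetical): the
 * return value distinguishes a timeout from the condition becoming true, so
 * the caller does not need to track the remaining time itself:
 *
 *	long left = wait_event_timeout(dev->wq, dev->done, msecs_to_jiffies(500));
 *	if (!left)
 *		return -ETIMEDOUT;	// condition still false after 500ms
 *	// otherwise 'left' is the number of jiffies that were still remaining
 */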

#define __wait_event_freezable_timeout(wq_head, condition, timeout)		\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_INTERRUPTIBLE, 0, timeout,				\
		      __ret = freezable_schedule_timeout(__ret))

/*
 * like wait_event_timeout() -- except it uses TASK_INTERRUPTIBLE to avoid
 * increasing load and is freezable.
 */
#define wait_event_freezable_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_freezable_timeout(wq_head, condition, timeout); \
	__ret;									\
})

#define __wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2)		\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 1, 0,	\
			    cmd1; schedule(); cmd2)
/*
 * Just like wait_event_cmd(), except it sets exclusive flag
 */
#define wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2)		\
do {										\
	if (condition)								\
		break;								\
	__wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2);		\
} while (0)

#define __wait_event_cmd(wq_head, condition, cmd1, cmd2)			\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    cmd1; schedule(); cmd2)

/**
 * wait_event_cmd - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @cmd1: the command to be executed before sleep
 * @cmd2: the command to be executed after sleep
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_cmd(wq_head, condition, cmd1, cmd2)				\
do {										\
	if (condition)								\
		break;								\
	__wait_event_cmd(wq_head, condition, cmd1, cmd2);			\
} while (0)

#define __wait_event_interruptible(wq_head, condition)				\
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,		\
		      schedule())

/**
 * wait_event_interruptible - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible(wq_head, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_interruptible(wq_head, condition);	\
	__ret;									\
})
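
/*
 * Usage sketch (illustrative only; 'my_read', 'struct my_device' and
 * 'data_ready' are hypothetical): interruptible waits must propagate
 * -ERESTARTSYS so the pending signal can be delivered (or the syscall
 * transparently restarted):
 *
 *	static ssize_t my_read(struct file *file, char __user *buf,
 *			       size_t len, loff_t *ppos)
 *	{
 *		struct my_device *dev = file->private_data;
 *		int err;
 *
 *		err = wait_event_interruptible(dev->wq, dev->data_ready);
 *		if (err)
 *			return err;	// -ERESTARTSYS: a signal arrived first
 *		...
 *	}
 */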

#define __wait_event_interruptible_timeout(wq_head, condition, timeout)	\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_INTERRUPTIBLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was
 * interrupted by a signal.
 */
#define wait_event_interruptible_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_interruptible_timeout(wq_head,		\
						condition, timeout);		\
	__ret;									\
})

#define __wait_event_hrtimeout(wq_head, condition, timeout, state)		\
({										\
	int __ret = 0;								\
	struct hrtimer_sleeper __t;						\
										\
	hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC,			\
				      HRTIMER_MODE_REL);			\
	if ((timeout) != KTIME_MAX)						\
		hrtimer_start_range_ns(&__t.timer, timeout,			\
				       current->timer_slack_ns,			\
				       HRTIMER_MODE_REL);			\
										\
	__ret = ___wait_event(wq_head, condition, state, 0, 0,			\
		if (!__t.task) {						\
			__ret = -ETIME;						\
			break;							\
		}								\
		schedule());							\
										\
	hrtimer_cancel(&__t.timer);						\
	destroy_hrtimer_on_stack(&__t.timer);					\
	__ret;									\
})

/**
 * wait_event_hrtimeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, as a ktime_t
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true or the timeout elapses.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function returns 0 if @condition became true, or -ETIME if the timeout
 * elapsed.
 */
#define wait_event_hrtimeout(wq_head, condition, timeout)			\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_hrtimeout(wq_head, condition, timeout,	\
					       TASK_UNINTERRUPTIBLE);		\
	__ret;									\
})

/**
 * wait_event_interruptible_hrtimeout - sleep until a condition gets true or a timeout elapses
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, as a ktime_t
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function returns 0 if @condition became true, -ERESTARTSYS if it was
 * interrupted by a signal, or -ETIME if the timeout elapsed.
 */
#define wait_event_interruptible_hrtimeout(wq, condition, timeout)		\
({										\
	long __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_hrtimeout(wq, condition, timeout,		\
					       TASK_INTERRUPTIBLE);		\
	__ret;									\
})

#define __wait_event_interruptible_exclusive(wq, condition)			\
	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,			\
		      schedule())

#define wait_event_interruptible_exclusive(wq, condition)			\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_interruptible_exclusive(wq, condition);	\
	__ret;									\
})
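
/*
 * Usage sketch (illustrative only; 'pool' and 'work_available()' are
 * hypothetical): exclusive waiters are queued with WQ_FLAG_EXCLUSIVE, so a
 * plain wake_up() wakes at most one of them, which is the usual way to avoid
 * a thundering herd of worker threads:
 *
 *	// each worker
 *	err = wait_event_interruptible_exclusive(pool->wq, work_available(pool));
 *
 *	// producer: wake exactly one worker per queued item
 *	wake_up(&pool->wq);
 */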

#define __wait_event_killable_exclusive(wq, condition)				\
	___wait_event(wq, condition, TASK_KILLABLE, 1, 0,			\
		      schedule())

#define wait_event_killable_exclusive(wq, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_killable_exclusive(wq, condition);	\
	__ret;									\
})


#define __wait_event_freezable_exclusive(wq, condition)			\
	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,			\
			freezable_schedule())

#define wait_event_freezable_exclusive(wq, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_freezable_exclusive(wq, condition);	\
	__ret;									\
})

/**
 * wait_event_idle - wait for a condition without contributing to system load
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_idle(wq_head, condition)					\
do {										\
	might_sleep();								\
	if (!(condition))							\
		___wait_event(wq_head, condition, TASK_IDLE, 0, 0, schedule());	\
} while (0)
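
/*
 * Usage sketch (illustrative only; 'my_thread_fn', 'struct my_kthread_data'
 * and 'process_work()' are hypothetical): TASK_IDLE waits are uninterruptible
 * but, unlike TASK_UNINTERRUPTIBLE, do not count towards the load average,
 * which suits kernel threads that may legitimately sleep for a long time:
 *
 *	static int my_thread_fn(void *data)
 *	{
 *		struct my_kthread_data *kd = data;
 *
 *		while (!kthread_should_stop()) {
 *			wait_event_idle(kd->wq, kd->have_work || kthread_should_stop());
 *			if (kd->have_work)
 *				process_work(kd);
 *		}
 *		return 0;
 *	}
 */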
/**
 * wait_event_idle_exclusive - wait for a condition without contributing to system load
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes are waiting on the same list, further
 * processes are not considered once this one has been woken.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_idle_exclusive(wq_head, condition)				\
do {										\
	might_sleep();								\
	if (!(condition))							\
		___wait_event(wq_head, condition, TASK_IDLE, 1, 0, schedule()); \
} while (0)
#define __wait_event_idle_timeout(wq_head, condition, timeout)			\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_IDLE, 0, timeout,					\
		      __ret = schedule_timeout(__ret))
/**
 * wait_event_idle_timeout - sleep without load until a condition becomes true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_idle_timeout(wq_head, condition, timeout)			\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_idle_timeout(wq_head, condition, timeout); \
	__ret;									\
})
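/*
 * Illustrative sketch (assumption) of handling the possible return values;
 * 'dev_wq' and 'dev_ready' are hypothetical names.
 *
 *	long left = wait_event_idle_timeout(dev_wq, dev_ready, HZ);
 *	if (!left)
 *		return -ETIMEDOUT;	/- condition still false after 1 second -/
 *	/- left >= 1: condition became true with 'left' jiffies remaining -/
 */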
#define __wait_event_idle_exclusive_timeout(wq_head, condition, timeout)	\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_IDLE, 1, timeout,					\
		      __ret = schedule_timeout(__ret))
/**
 * wait_event_idle_exclusive_timeout - sleep without load until a condition becomes true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes are waiting on the same list, further
 * processes are not considered once this one has been woken.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_idle_exclusive_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_idle_exclusive_timeout(wq_head, condition, timeout);\
	__ret;									\
})
extern int do_wait_intr(wait_queue_head_t *, wait_queue_entry_t *);
extern int do_wait_intr_irq(wait_queue_head_t *, wait_queue_entry_t *);
#define __wait_event_interruptible_locked(wq, condition, exclusive, fn)	\
({										\
	int __ret;								\
	DEFINE_WAIT(__wait);							\
	if (exclusive)								\
		__wait.flags |= WQ_FLAG_EXCLUSIVE;				\
	do {									\
		__ret = fn(&(wq), &__wait);					\
		if (__ret)							\
			break;							\
	} while (!(condition));							\
	__remove_wait_queue(&(wq), &__wait);					\
	__set_current_state(TASK_RUNNING);					\
	__ret;									\
})
/**
 * wait_event_interruptible_locked - sleep until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock being held.  This spinlock is
 * unlocked while sleeping but @condition testing is done while lock
 * is held and when this macro exits the lock is held.
 *
 * The lock is locked/unlocked using spin_lock()/spin_unlock()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_locked(wq, condition)				\
	((condition)								\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 0, do_wait_intr))
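/*
 * Illustrative sketch (assumption): the wait is entered with wq.lock held and
 * the same lock protects the condition.  The 'ctx' structure with its 'wqh'
 * and 'count' members is hypothetical.
 *
 *	spin_lock(&ctx->wqh.lock);
 *	ret = wait_event_interruptible_locked(ctx->wqh, ctx->count > 0);
 *	if (!ret)
 *		ctx->count--;	/- condition checked and consumed under the lock -/
 *	spin_unlock(&ctx->wqh.lock);
 */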
/**
 * wait_event_interruptible_locked_irq - sleep until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock being held.  This spinlock is
 * unlocked while sleeping but @condition testing is done while lock
 * is held and when this macro exits the lock is held.
 *
 * The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_locked_irq(wq, condition)			\
	((condition)								\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 0, do_wait_intr_irq))
/**
 * wait_event_interruptible_exclusive_locked - sleep exclusively until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock being held.  This spinlock is
 * unlocked while sleeping but @condition testing is done while lock
 * is held and when this macro exits the lock is held.
 *
 * The lock is locked/unlocked using spin_lock()/spin_unlock()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes are waiting on the same list, further
 * processes are not considered once this one has been woken.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_exclusive_locked(wq, condition)		\
	((condition)								\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 1, do_wait_intr))
/**
 * wait_event_interruptible_exclusive_locked_irq - sleep exclusively until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock being held.  This spinlock is
 * unlocked while sleeping but @condition testing is done while lock
 * is held and when this macro exits the lock is held.
 *
 * The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes are waiting on the same list, further
 * processes are not considered once this one has been woken.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_exclusive_locked_irq(wq, condition)		\
	((condition)								\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 1, do_wait_intr_irq))
#define __wait_event_killable(wq, condition)					\
	___wait_event(wq, condition, TASK_KILLABLE, 0, 0, schedule())
/**
 * wait_event_killable - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_KILLABLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_killable(wq_head, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_killable(wq_head, condition);		\
	__ret;									\
})
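/*
 * Illustrative sketch (assumption): like wait_event_interruptible(), but only
 * fatal signals wake the sleeper, which is useful when returning -ERESTARTSYS
 * for every signal would be hard to unwind.  'reply_wq' and 'reply_received'
 * are hypothetical names.
 *
 *	if (wait_event_killable(reply_wq, reply_received))
 *		return -ERESTARTSYS;	/- a fatal signal is pending -/
 */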
#define __wait_event_killable_timeout(wq_head, condition, timeout)		\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_KILLABLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret))
/**
 * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_KILLABLE) until the
 * @condition evaluates to true or a kill signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was
 * interrupted by a kill signal.
 *
 * Only kill signals interrupt this process.
 */
#define wait_event_killable_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_killable_timeout(wq_head,			\
						condition, timeout);		\
	__ret;									\
})
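/*
 * Illustrative sketch (assumption) of handling the possible return values;
 * 'fw_wq' and 'fw_loaded' are hypothetical names.
 *
 *	long ret = wait_event_killable_timeout(fw_wq, fw_loaded, 10 * HZ);
 *	if (ret == -ERESTARTSYS)
 *		return ret;		/- killed while waiting -/
 *	if (!ret)
 *		return -ETIMEDOUT;	/- 10 s elapsed, condition still false -/
 *	/- ret >= 1: condition became true with 'ret' jiffies to spare -/
 */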
#define __wait_event_lock_irq(wq_head, condition, lock, cmd)			\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    spin_unlock_irq(&lock);				\
			    cmd;						\
			    schedule();						\
			    spin_lock_irq(&lock))
/**
 * wait_event_lock_irq_cmd - sleep until a condition gets true. The
 *			     condition is checked under the lock. This
 *			     is expected to be called with the lock
 *			     taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before cmd
 *	  and schedule() and reacquired afterwards.
 * @cmd: a command which is invoked outside the critical section before
 *	 sleep
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before invoking the cmd and going to sleep and is reacquired
 * afterwards.
 */
#define wait_event_lock_irq_cmd(wq_head, condition, lock, cmd)			\
do {										\
	if (condition)								\
		break;								\
	__wait_event_lock_irq(wq_head, condition, lock, cmd);			\
} while (0)
/**
 * wait_event_lock_irq - sleep until a condition gets true. The
 *			 condition is checked under the lock. This
 *			 is expected to be called with the lock
 *			 taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 */
#define wait_event_lock_irq(wq_head, condition, lock)				\
do {										\
	if (condition)								\
		break;								\
	__wait_event_lock_irq(wq_head, condition, lock, );			\
} while (0)
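/*
 * Illustrative sketch (assumption): the spinlock protecting the condition is
 * passed by name (not by pointer) and is dropped only across schedule().
 * 'dev->lock', 'dev->wq' and 'dev->idle' are hypothetical names.
 *
 *	spin_lock_irq(&dev->lock);
 *	wait_event_lock_irq(dev->wq, dev->idle, dev->lock);
 *	/- dev->lock is held again here and dev->idle was true under it -/
 *	spin_unlock_irq(&dev->lock);
 */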
#define __wait_event_interruptible_lock_irq(wq_head, condition, lock, cmd)	\
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,		\
		      spin_unlock_irq(&lock);					\
		      cmd;							\
		      schedule();						\
		      spin_lock_irq(&lock))
/**
 * wait_event_interruptible_lock_irq_cmd - sleep until a condition gets true.
 *		The condition is checked under the lock. This is expected to
 *		be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before cmd and
 *	  schedule() and reacquired afterwards.
 * @cmd: a command which is invoked outside the critical section before
 *	 sleep
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before invoking the cmd and going to sleep and is reacquired
 * afterwards.
 *
 * The macro will return -ERESTARTSYS if it was interrupted by a signal
 * and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_lock_irq_cmd(wq_head, condition, lock, cmd)	\
({										\
	int __ret = 0;								\
	if (!(condition))							\
		__ret = __wait_event_interruptible_lock_irq(wq_head,		\
						condition, lock, cmd);		\
	__ret;									\
})
/**
 * wait_event_interruptible_lock_irq - sleep until a condition gets true.
 *		The condition is checked under the lock. This is expected
 *		to be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 *
 * The macro will return -ERESTARTSYS if it was interrupted by a signal
 * and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_lock_irq(wq_head, condition, lock)		\
({										\
	int __ret = 0;								\
	if (!(condition))							\
		__ret = __wait_event_interruptible_lock_irq(wq_head,		\
						condition, lock,);		\
	__ret;									\
})
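/*
 * Illustrative sketch (assumption): 'queue->lock', 'queue->wq', queue_empty()
 * and dequeue_locked() are hypothetical names.
 *
 *	spin_lock_irq(&queue->lock);
 *	ret = wait_event_interruptible_lock_irq(queue->wq,
 *						!queue_empty(queue),
 *						queue->lock);
 *	if (!ret)
 *		item = dequeue_locked(queue);	/- still under queue->lock -/
 *	spin_unlock_irq(&queue->lock);
 */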
#define __wait_event_lock_irq_timeout(wq_head, condition, lock, timeout, state) \
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      state, 0, timeout,					\
		      spin_unlock_irq(&lock);					\
		      __ret = schedule_timeout(__ret);				\
		      spin_lock_irq(&lock));
/**
 * wait_event_interruptible_lock_irq_timeout - sleep until a condition gets
 *		true or a timeout elapses. The condition is checked under
 *		the lock. This is expected to be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 *
 * The function returns 0 if the @timeout elapsed, -ERESTARTSYS if it
 * was interrupted by a signal, and the remaining jiffies otherwise
 * if the condition evaluated to true before the timeout elapsed.
 */
#define wait_event_interruptible_lock_irq_timeout(wq_head, condition, lock,	\
						  timeout)			\
({										\
	long __ret = timeout;							\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_lock_irq_timeout(				\
					wq_head, condition, lock, timeout,	\
					TASK_INTERRUPTIBLE);			\
	__ret;									\
})
#define wait_event_lock_irq_timeout(wq_head, condition, lock, timeout)		\
({										\
	long __ret = timeout;							\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_lock_irq_timeout(				\
					wq_head, condition, lock, timeout,	\
					TASK_UNINTERRUPTIBLE);			\
	__ret;									\
})
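/*
 * Illustrative sketch (assumption) for the interruptible timeout variant;
 * 'port->lock', 'port->delta_msr_wait' and 'msr_changed' are hypothetical
 * names.
 *
 *	spin_lock_irq(&port->lock);
 *	ret = wait_event_interruptible_lock_irq_timeout(port->delta_msr_wait,
 *							msr_changed,
 *							port->lock, 30 * HZ);
 *	spin_unlock_irq(&port->lock);
 *	if (ret == -ERESTARTSYS)
 *		return ret;		/- interrupted by a signal -/
 *	if (!ret)
 *		return -ETIMEDOUT;	/- 30 s elapsed without a change -/
 */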
/*
 * Waitqueues which are removed from the waitqueue_head at wakeup time
 */
void prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
bool prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout);
int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
int autoremove_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
#define DEFINE_WAIT_FUNC(name, function)					\
	struct wait_queue_entry name = {					\
		.private	= current,					\
		.func		= function,					\
		.entry		= LIST_HEAD_INIT((name).entry),			\
	}
#define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function)
#define init_wait(wait)								\
	do {									\
		(wait)->private = current;					\
		(wait)->func = autoremove_wake_function;			\
		INIT_LIST_HEAD(&(wait)->entry);					\
		(wait)->flags = 0;						\
	} while (0)
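/*
 * Illustrative sketch (assumption): the low-level interface above can be used
 * to open-code a wait loop when none of the wait_event*() macros fit, for
 * example when the condition check has to drop another lock.  'my_wq' and
 * my_cond() are hypothetical names.
 *
 *	DEFINE_WAIT(wait);
 *
 *	for (;;) {
 *		prepare_to_wait(&my_wq, &wait, TASK_UNINTERRUPTIBLE);
 *		if (my_cond())
 *			break;
 *		schedule();
 *	}
 *	finish_wait(&my_wq, &wait);
 */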
bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg);
#endif /* _LINUX_WAIT_H */