commit 9b9412dc70
Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU changes from Paul E. McKenney:

 - The combination of tree geometry-initialization simplifications and
   OS-jitter-reduction changes to expedited grace periods.  These two are
   stacked due to the large number of conflicts that would otherwise result.

   [ With one addition, a temporary commit to silence a lockdep false
     positive.  Additional changes to the expedited grace-period primitives
     (queued for 4.4) remove the cause of this false positive, and therefore
     include a revert of this temporary commit. ]

 - Documentation updates.

 - Torture-test updates.

 - Miscellaneous fixes.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
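Note on the documentation diffs below: the bulk of the memory-barriers.txt churn is a mechanical conversion from ACCESS_ONCE() to the separate READ_ONCE()/WRITE_ONCE() accessors.  A minimal before/after sketch of that pattern (illustrative only; the variable name is made up and not taken from the diff):

	/* Old style: one macro used for both loads and stores. */
	tmp = ACCESS_ONCE(shared_counter);
	ACCESS_ONCE(shared_counter) = tmp + 1;

	/* New style: distinct read and write wrappers. */
	tmp = READ_ONCE(shared_counter);
	WRITE_ONCE(shared_counter, tmp + 1);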
@@ -28,7 +28,7 @@ o	You must use one of the rcu_dereference() family of primitives
 o	Avoid cancellation when using the "+" and "-" infix arithmetic
	operators.  For example, for a given variable "x", avoid
	"(x-x)".  There are similar arithmetic pitfalls from other
-	arithmetic operatiors, such as "(x*0)", "(x/(x+1))" or "(x%1)".
+	arithmetic operators, such as "(x*0)", "(x/(x+1))" or "(x%1)".
	The compiler is within its rights to substitute zero for all of
	these expressions, so that subsequent accesses no longer depend
	on the rcu_dereference(), again possibly resulting in bugs due
@@ -26,12 +26,6 @@ CONFIG_RCU_CPU_STALL_TIMEOUT
 Stall-warning messages may be enabled and disabled completely via
 /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
 
-CONFIG_RCU_CPU_STALL_INFO
-
-This kernel configuration parameter causes the stall warning to
-print out additional per-CPU diagnostic information, including
-information on scheduling-clock ticks and RCU's idle-CPU tracking.
-
 RCU_STALL_DELAY_DELTA
 
 Although the lockdep facility is extremely useful, it does add
@@ -101,15 +95,13 @@ interact.  Please note that it is not possible to entirely eliminate this
 sort of false positive without resorting to things like stop_machine(),
 which is overkill for this sort of problem.
 
-If the CONFIG_RCU_CPU_STALL_INFO kernel configuration parameter is set,
-more information is printed with the stall-warning message, for example:
+Recent kernels will print a long form of the stall-warning message:
 
	INFO: rcu_preempt detected stall on CPU
	0: (63959 ticks this GP) idle=241/3fffffffffffffff/0 softirq=82/543
	   (t=65000 jiffies)
 
-In kernels with CONFIG_RCU_FAST_NO_HZ, even more information is
-printed:
+In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed:
 
	INFO: rcu_preempt detected stall on CPU
	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 nonlazy_posted: 25 .D
@@ -171,6 +163,23 @@ message will be about three times the interval between the beginning
 of the stall and the first message.
 
 
+Stall Warnings for Expedited Grace Periods
+
+If an expedited grace period detects a stall, it will place a message
+like the following in dmesg:
+
+	INFO: rcu_sched detected expedited stalls on CPUs: { 1 2 6 } 26009 jiffies s: 1043
+
+This indicates that CPUs 1, 2, and 6 have failed to respond to a
+reschedule IPI, that the expedited grace period has been going on for
+26,009 jiffies, and that the expedited grace-period sequence counter is
+1043.  The fact that this last value is odd indicates that an expedited
+grace period is in flight.
+
+It is entirely possible to see stall warnings from normal and from
+expedited grace periods at about the same time from the same run.
+
+
 What Causes RCU CPU Stall Warnings?
 
 So your kernel printed an RCU CPU stall warning.  The next question is
@@ -237,42 +237,26 @@ o	"ktl" is the low-order 16 bits (in hexadecimal) of the count of
 
 The output of "cat rcu/rcu_preempt/rcuexp" looks as follows:
 
-s=21872 d=21872 w=0 tf=0 wd1=0 wd2=0 n=0 sc=21872 dt=21872 dl=0 dx=21872
+s=21872 wd0=0 wd1=0 wd2=0 wd3=5 n=0 enq=0 sc=21872
 
 These fields are as follows:
 
-o	"s" is the starting sequence number.
+o	"s" is the sequence number, with an odd number indicating that
+	an expedited grace period is in progress.
 
-o	"d" is the ending sequence number.  When the starting and ending
-	numbers differ, there is an expedited grace period in progress.
-
-o	"w" is the number of times that the sequence numbers have been
-	in danger of wrapping.
-
-o	"tf" is the number of times that contention has resulted in a
-	failure to begin an expedited grace period.
-
-o	"wd1" and "wd2" are the number of times that an attempt to
-	start an expedited grace period found that someone else had
-	completed an expedited grace period that satisfies the
+o	"wd0", "wd1", "wd2", and "wd3" are the number of times that an
+	attempt to start an expedited grace period found that someone
+	else had completed an expedited grace period that satisfies the
	attempted request.  "Our work is done."
 
-o	"n" is number of times that contention was so great that
-	the request was demoted from an expedited grace period to
-	a normal grace period.
+o	"n" is number of times that a concurrent CPU-hotplug operation
+	forced a fallback to a normal grace period.
+
+o	"enq" is the number of quiescent states still outstanding.
 
 o	"sc" is the number of times that the attempt to start a
	new expedited grace period succeeded.
 
-o	"dt" is the number of times that we attempted to update
-	the "d" counter.
-
-o	"dl" is the number of times that we failed to update the "d"
-	counter.
-
-o	"dx" is the number of times that we succeeded in updating
-	the "d" counter.
-
 The output of "cat rcu/rcu_preempt/rcugp" looks as follows:
 
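As a worked reading of the new-format line quoted above, "s=21872 wd0=0 wd1=0 wd2=0 wd3=5 n=0 enq=0 sc=21872" shows an even sequence number, so no expedited grace period is in flight at the moment of the snapshot; a hypothetical "s=21873" would indicate one in progress, per the odd/even convention described for "s".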
@@ -883,7 +883,7 @@ All:  lockdep-checked RCU-protected pointer access
 
	rcu_access_pointer
	rcu_dereference_raw
-	rcu_lockdep_assert
+	RCU_LOCKDEP_WARN
	rcu_sleep_check
	RCU_NONIDLE
 
@@ -3135,22 +3135,35 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			in a given burst of a callback-flood test.
 
	rcutorture.fqs_duration= [KNL]
-			Set duration of force_quiescent_state bursts.
+			Set duration of force_quiescent_state bursts
+			in microseconds.
 
	rcutorture.fqs_holdoff= [KNL]
-			Set holdoff time within force_quiescent_state bursts.
+			Set holdoff time within force_quiescent_state bursts
+			in microseconds.
 
	rcutorture.fqs_stutter= [KNL]
-			Set wait time between force_quiescent_state bursts.
+			Set wait time between force_quiescent_state bursts
+			in seconds.
+
+	rcutorture.gp_cond= [KNL]
+			Use conditional/asynchronous update-side
+			primitives, if available.
 
	rcutorture.gp_exp= [KNL]
-			Use expedited update-side primitives.
+			Use expedited update-side primitives, if available.
 
	rcutorture.gp_normal= [KNL]
-			Use normal (non-expedited) update-side primitives.
-			If both gp_exp and gp_normal are set, do both.
-			If neither gp_exp nor gp_normal are set, still
-			do both.
+			Use normal (non-expedited) asynchronous
+			update-side primitives, if available.
+
+	rcutorture.gp_sync= [KNL]
+			Use normal (non-expedited) synchronous
+			update-side primitives, if available.  If all
+			of rcutorture.gp_cond=, rcutorture.gp_exp=,
+			rcutorture.gp_normal=, and rcutorture.gp_sync=
+			are zero, rcutorture acts as if is interpreted
+			they are all non-zero.
 
	rcutorture.n_barrier_cbs= [KNL]
			Set callbacks/threads for rcu_barrier() testing.
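Taken together, the gp_cond/gp_exp/gp_normal/gp_sync knobs select which update-side primitives rcutorture exercises.  A hypothetical boot-command-line fragment (illustrative only, values chosen for the example) that restricts a run to conditional and expedited primitives might look like:

	rcutorture.gp_cond=1 rcutorture.gp_exp=1 rcutorture.gp_normal=0 rcutorture.gp_sync=0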
@@ -3177,9 +3190,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			Set time (s) between CPU-hotplug operations, or
			zero to disable CPU-hotplug testing.
 
-	rcutorture.torture_runnable= [BOOT]
-			Start rcutorture running at boot time.
-
	rcutorture.shuffle_interval= [KNL]
			Set task-shuffle interval (s).  Shuffling tasks
			allows some CPUs to go into dyntick-idle mode
@@ -3220,6 +3230,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			Test RCU's dyntick-idle handling.  See also the
			rcutorture.shuffle_interval parameter.
 
+	rcutorture.torture_runnable= [BOOT]
+			Start rcutorture running at boot time.
+
	rcutorture.torture_type= [KNL]
			Specify the RCU implementation to test.
 
@@ -194,22 +194,22 @@ There are some minimal guarantees that may be expected of a CPU:
 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:
 
-	ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
+	WRITE_ONCE(Q, P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
 
     the CPU will issue the following memory operations:
 
	Q = LOAD P, D = LOAD *Q
 
     and always in that order.  On most systems, smp_read_barrier_depends()
-    does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
-    is required to prevent compiler mischief.  Please note that you
-    should normally use something like rcu_dereference() instead of
-    open-coding smp_read_barrier_depends().
+    does nothing, but it is required for DEC Alpha.  The READ_ONCE()
+    and WRITE_ONCE() are required to prevent compiler mischief.  Please
+    note that you should normally use something like rcu_dereference()
+    instead of open-coding smp_read_barrier_depends().
 
 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:
 
-	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
+	a = READ_ONCE(*X); WRITE_ONCE(*X, b);
 
     the CPU will only issue the following sequence of memory operations:
 
@@ -217,7 +217,7 @@ There are some minimal guarantees that may be expected of a CPU:
 
 And for:
 
-	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
+	WRITE_ONCE(*X, c); d = READ_ONCE(*X);
 
 the CPU will only issue:
 
@@ -228,11 +228,11 @@ There are some minimal guarantees that may be expected of a CPU:
 
 And there are a number of things that _must_ or _must_not_ be assumed:
 
-(*) It _must_not_ be assumed that the compiler will do what you want with
-    memory references that are not protected by ACCESS_ONCE().  Without
-    ACCESS_ONCE(), the compiler is within its rights to do all sorts
-    of "creative" transformations, which are covered in the Compiler
-    Barrier section.
+(*) It _must_not_ be assumed that the compiler will do what you want
+    with memory references that are not protected by READ_ONCE() and
+    WRITE_ONCE().  Without them, the compiler is within its rights to
+    do all sorts of "creative" transformations, which are covered in
+    the Compiler Barrier section.
 
 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:
 
@@ -520,8 +520,8 @@ following sequence of events:
	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
-	ACCESS_ONCE(P) = &B
-	Q = ACCESS_ONCE(P);
+	WRITE_ONCE(P, &B)
+	Q = READ_ONCE(P);
	D = *Q;
 
 There's a clear data dependency here, and it would seem that by the end of the
@@ -547,8 +547,8 @@ between the address load and the data load:
	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
-	ACCESS_ONCE(P) = &B
-	Q = ACCESS_ONCE(P);
+	WRITE_ONCE(P, &B);
+	Q = READ_ONCE(P);
	<data dependency barrier>
	D = *Q;
 
@@ -574,8 +574,8 @@ access:
	{ M[0] == 1, M[1] == 2, M[3] = 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
-	ACCESS_ONCE(P) = 1
-	Q = ACCESS_ONCE(P);
+	WRITE_ONCE(P, 1);
+	Q = READ_ONCE(P);
	<data dependency barrier>
	D = M[Q];
 
@@ -596,10 +596,10 @@ A load-load control dependency requires a full read memory barrier, not
 simply a data dependency barrier to make it work correctly.  Consider the
 following bit of code:
 
-	q = ACCESS_ONCE(a);
+	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
-		p = ACCESS_ONCE(b);
+		p = READ_ONCE(b);
	}
 
 This will not have the desired effect because there is no actual data
@@ -608,10 +608,10 @@ by attempting to predict the outcome in advance, so that other CPUs see
 the load from b as having happened before the load from a.  In such a
 case what's actually required is:
 
-	q = ACCESS_ONCE(a);
+	q = READ_ONCE(a);
	if (q) {
		<read barrier>
-		p = ACCESS_ONCE(b);
+		p = READ_ONCE(b);
	}
 
 However, stores are not speculated.  This means that ordering -is- provided
@@ -619,7 +619,7 @@ for load-store control dependencies, as in the following example:
 
	q = READ_ONCE_CTRL(a);
	if (q) {
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
	}
 
 Control dependencies pair normally with other types of barriers.  That
@@ -647,11 +647,11 @@ branches of the "if" statement as follows:
	q = READ_ONCE_CTRL(a);
	if (q) {
		barrier();
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
		do_something();
	} else {
		barrier();
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
		do_something_else();
	}
 
@@ -660,12 +660,12 @@ optimization levels:
 
	q = READ_ONCE_CTRL(a);
	barrier();
-	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
+	WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
-		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
+		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something();
	} else {
-		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
+		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something_else();
	}
 
@@ -676,7 +676,7 @@ assembly code even after all compiler optimizations have been applied.
 Therefore, if you need ordering in this example, you need explicit
 memory barriers, for example, smp_store_release():
 
-	q = ACCESS_ONCE(a);
+	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
@@ -690,10 +690,10 @@ ordering is guaranteed only when the stores differ, for example:
 
	q = READ_ONCE_CTRL(a);
	if (q) {
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
		do_something();
	} else {
-		ACCESS_ONCE(b) = r;
+		WRITE_ONCE(b, r);
		do_something_else();
	}
 
@@ -706,10 +706,10 @@ the needed conditional.  For example:
 
	q = READ_ONCE_CTRL(a);
	if (q % MAX) {
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
		do_something();
	} else {
-		ACCESS_ONCE(b) = r;
+		WRITE_ONCE(b, r);
		do_something_else();
	}
 
@@ -718,7 +718,7 @@ equal to zero, in which case the compiler is within its rights to
 transform the above code into the following:
 
	q = READ_ONCE_CTRL(a);
-	ACCESS_ONCE(b) = p;
+	WRITE_ONCE(b, p);
	do_something_else();
 
 Given this transformation, the CPU is not required to respect the ordering
@@ -731,10 +731,10 @@ one, perhaps as follows:
	q = READ_ONCE_CTRL(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
-		ACCESS_ONCE(b) = p;
+		WRITE_ONCE(b, p);
		do_something();
	} else {
-		ACCESS_ONCE(b) = r;
+		WRITE_ONCE(b, r);
		do_something_else();
	}
 
@@ -746,18 +746,18 @@ You must also be careful not to rely too much on boolean short-circuit
 evaluation.  Consider this example:
 
	q = READ_ONCE_CTRL(a);
-	if (a || 1 > 0)
-		ACCESS_ONCE(b) = 1;
+	if (q || 1 > 0)
+		WRITE_ONCE(b, 1);
 
 Because the first condition cannot fault and the second condition is
 always true, the compiler can transform this example as following,
 defeating control dependency:
 
	q = READ_ONCE_CTRL(a);
-	ACCESS_ONCE(b) = 1;
+	WRITE_ONCE(b, 1);
 
 This example underscores the need to ensure that the compiler cannot
-out-guess your code.  More generally, although ACCESS_ONCE() does force
+out-guess your code.  More generally, although READ_ONCE() does force
 the compiler to actually emit code for a given load, it does not force
 the compiler to use the results.
 
@@ -769,7 +769,7 @@ x and y both being zero:
	======================= =======================
	r1 = READ_ONCE_CTRL(x); r2 = READ_ONCE_CTRL(y);
	if (r1 > 0) if (r2 > 0)
-	ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1;
+	WRITE_ONCE(y, 1); WRITE_ONCE(x, 1);
 
	assert(!(r1 == 1 && r2 == 1));
 
@@ -779,7 +779,7 @@ then adding the following CPU would guarantee a related assertion:
 
	CPU 2
	=====================
-	ACCESS_ONCE(x) = 2;
+	WRITE_ONCE(x, 2);
 
	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
 
@@ -798,8 +798,7 @@ In summary:
 
  (*) Control dependencies must be headed by READ_ONCE_CTRL().
      Or, as a much less preferable alternative, interpose
-     be headed by READ_ONCE() or an ACCESS_ONCE() read and must
-     have smp_read_barrier_depends() between this read and the
+     smp_read_barrier_depends() between a READ_ONCE() and the
      control-dependent write.
 
  (*) Control dependencies can order prior loads against later stores.
 
@@ -815,15 +814,16 @@ In summary:
 
  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
-     conditional must involve the prior load.  If the compiler
-     is able to optimize the conditional away, it will have also
-     optimized away the ordering.  Careful use of ACCESS_ONCE() can
-     help to preserve the needed conditional.
+     conditional must involve the prior load.  If the compiler is able
+     to optimize the conditional away, it will have also optimized
+     away the ordering.  Careful use of READ_ONCE_CTRL() READ_ONCE(),
+     and WRITE_ONCE() can help to preserve the needed conditional.
 
  (*) Control dependencies require that the compiler avoid reordering the
-     dependency into nonexistence.  Careful use of ACCESS_ONCE() or
-     barrier() can help to preserve your control dependency.  Please
-     see the Compiler Barrier section for more information.
+     dependency into nonexistence.  Careful use of READ_ONCE_CTRL()
+     or smp_read_barrier_depends() can help to preserve your control
+     dependency.  Please see the Compiler Barrier section for more
+     information.
 
  (*) Control dependencies pair normally with other types of barriers.
 
@@ -848,11 +848,11 @@ barrier, an acquire barrier, a release barrier, or a general barrier:
 
	CPU 1 CPU 2
	=============== ===============
-	ACCESS_ONCE(a) = 1;
+	WRITE_ONCE(a, 1);
	<write barrier>
-	ACCESS_ONCE(b) = 2; x = ACCESS_ONCE(b);
+	WRITE_ONCE(b, 2); x = READ_ONCE(b);
	<read barrier>
-	y = ACCESS_ONCE(a);
+	y = READ_ONCE(a);
 
 Or:
 
@@ -860,7 +860,7 @@ Or:
	=============== ===============================
	a = 1;
	<write barrier>
-	ACCESS_ONCE(b) = &a; x = ACCESS_ONCE(b);
+	WRITE_ONCE(b, &a); x = READ_ONCE(b);
	<data dependency barrier>
	y = *x;
 
@@ -868,11 +868,11 @@ Or even:
 
	CPU 1 CPU 2
	=============== ===============================
-	r1 = ACCESS_ONCE(y);
+	r1 = READ_ONCE(y);
	<general barrier>
-	ACCESS_ONCE(y) = 1; if (r2 = ACCESS_ONCE(x)) {
+	WRITE_ONCE(y, 1); if (r2 = READ_ONCE(x)) {
	<implicit control dependency>
-	ACCESS_ONCE(y) = 1;
+	WRITE_ONCE(y, 1);
	}
 
	assert(r1 == 0 || r2 == 0);
 
@@ -886,11 +886,11 @@ versa:
 
	CPU 1 CPU 2
	=================== ===================
-	ACCESS_ONCE(a) = 1; }---- --->{ v = ACCESS_ONCE(c);
-	ACCESS_ONCE(b) = 2; } \ / { w = ACCESS_ONCE(d);
+	WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
+	WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d);
	<write barrier> \ <read barrier>
-	ACCESS_ONCE(c) = 3; } / \ { x = ACCESS_ONCE(a);
-	ACCESS_ONCE(d) = 4; }---- --->{ y = ACCESS_ONCE(b);
+	WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a);
+	WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
 
 
 EXAMPLES OF MEMORY BARRIER SEQUENCES
 
@@ -1340,10 +1340,10 @@ compiler from moving the memory accesses either side of it to the other side:
 
	barrier();
 
-This is a general barrier -- there are no read-read or write-write variants
-of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
-for barrier() that affects only the specific accesses flagged by the
-ACCESS_ONCE().
+This is a general barrier -- there are no read-read or write-write
+variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
+thought of as weak forms of barrier() that affect only the specific
+accesses flagged by the READ_ONCE() or WRITE_ONCE().
 
 The barrier() function has the following effects:
 
@@ -1355,9 +1355,10 @@ The barrier() function has the following effects:
 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
 
-The ACCESS_ONCE() function can prevent any number of optimizations that,
-while perfectly safe in single-threaded code, can be fatal in concurrent
-code.  Here are some examples of these sorts of optimizations:
+The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
+optimizations that, while perfectly safe in single-threaded code, can
+be fatal in concurrent code.  Here are some examples of these sorts
+of optimizations:
 
 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
 
@@ -1370,11 +1371,11 @@ code.  Here are some examples of these sorts of optimizations:
     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:
 
-	a[0] = ACCESS_ONCE(x);
-	a[1] = ACCESS_ONCE(x);
+	a[0] = READ_ONCE(x);
+	a[1] = READ_ONCE(x);
 
-    In short, ACCESS_ONCE() provides cache coherence for accesses from
-    multiple CPUs to a single variable.
+    In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
+    accesses from multiple CPUs to a single variable.
 
 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
 
@@ -1391,9 +1392,9 @@ code.  Here are some examples of these sorts of optimizations:
	for (;;)
		do_something_with(tmp);
 
-    Use ACCESS_ONCE() to prevent the compiler from doing this to you:
+    Use READ_ONCE() to prevent the compiler from doing this to you:
 
-	while (tmp = ACCESS_ONCE(a))
+	while (tmp = READ_ONCE(a))
		do_something_with(tmp);
 
 (*) The compiler is within its rights to reload a variable, for example,
 
@@ -1415,9 +1416,9 @@ code.  Here are some examples of these sorts of optimizations:
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().
 
-    Again, use ACCESS_ONCE() to prevent the compiler from doing this:
+    Again, use READ_ONCE() to prevent the compiler from doing this:
 
-	while (tmp = ACCESS_ONCE(a))
+	while (tmp = READ_ONCE(a))
		do_something_with(tmp);
 
     Note that if the compiler runs short of registers, it might save
 
@@ -1437,21 +1438,21 @@ code.  Here are some examples of these sorts of optimizations:
 
	do { } while (0);
 
-    This transformation is a win for single-threaded code because it gets
-    rid of a load and a branch.  The problem is that the compiler will
-    carry out its proof assuming that the current CPU is the only one
-    updating variable 'a'.  If variable 'a' is shared, then the compiler's
-    proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
-    that it doesn't know as much as it thinks it does:
+    This transformation is a win for single-threaded code because it
+    gets rid of a load and a branch.  The problem is that the compiler
+    will carry out its proof assuming that the current CPU is the only
+    one updating variable 'a'.  If variable 'a' is shared, then the
+    compiler's proof will be erroneous.  Use READ_ONCE() to tell the
+    compiler that it doesn't know as much as it thinks it does:
 
-	while (tmp = ACCESS_ONCE(a))
+	while (tmp = READ_ONCE(a))
		do_something_with(tmp);
 
     But please note that the compiler is also closely watching what you
-    do with the value after the ACCESS_ONCE().  For example, suppose you
+    do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:
 
-	while ((tmp = ACCESS_ONCE(a)) % MAX)
+	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);
 
     Then the compiler knows that the result of the "%" operator applied
 
@@ -1475,12 +1476,12 @@ code.  Here are some examples of these sorts of optimizations:
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.
 
-    Use ACCESS_ONCE() to prevent the compiler from making this sort of
+    Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:
 
-	ACCESS_ONCE(a) = 0;
+	WRITE_ONCE(a, 0);
	/* Code that does not store to variable a. */
-	ACCESS_ONCE(a) = 0;
+	WRITE_ONCE(a, 0);
 
 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
 
@@ -1509,40 +1510,43 @@ code.  Here are some examples of these sorts of optimizations:
	}
 
     If the interrupt occurs between these two statement, then
-    interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
+    interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:
 
	void process_level(void)
	{
-		ACCESS_ONCE(msg) = get_message();
-		ACCESS_ONCE(flag) = true;
+		WRITE_ONCE(msg, get_message());
+		WRITE_ONCE(flag, true);
	}
 
	void interrupt_handler(void)
	{
-		if (ACCESS_ONCE(flag))
-			process_message(ACCESS_ONCE(msg));
+		if (READ_ONCE(flag))
+			process_message(READ_ONCE(msg));
	}
 
-    Note that the ACCESS_ONCE() wrappers in interrupt_handler()
-    are needed if this interrupt handler can itself be interrupted
-    by something that also accesses 'flag' and 'msg', for example,
-    a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
-    needed in interrupt_handler() other than for documentation purposes.
-    (Note also that nested interrupts do not typically occur in modern
-    Linux kernels, in fact, if an interrupt handler returns with
-    interrupts enabled, you will get a WARN_ONCE() splat.)
+    Note that the READ_ONCE() and WRITE_ONCE() wrappers in
+    interrupt_handler() are needed if this interrupt handler can itself
+    be interrupted by something that also accesses 'flag' and 'msg',
+    for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
+    and WRITE_ONCE() are not needed in interrupt_handler() other than
+    for documentation purposes.  (Note also that nested interrupts
+    do not typically occur in modern Linux kernels, in fact, if an
+    interrupt handler returns with interrupts enabled, you will get a
+    WARN_ONCE() splat.)
 
-    You should assume that the compiler can move ACCESS_ONCE() past
-    code not containing ACCESS_ONCE(), barrier(), or similar primitives.
+    You should assume that the compiler can move READ_ONCE() and
+    WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
+    barrier(), or similar primitives.
 
-    This effect could also be achieved using barrier(), but ACCESS_ONCE()
-    is more selective:  With ACCESS_ONCE(), the compiler need only forget
-    the contents of the indicated memory locations, while with barrier()
-    the compiler must discard the value of all memory locations that
-    it has currented cached in any machine registers.  Of course,
-    the compiler must also respect the order in which the ACCESS_ONCE()s
-    occur, though the CPU of course need not do so.
+    This effect could also be achieved using barrier(), but READ_ONCE()
+    and WRITE_ONCE() are more selective:  With READ_ONCE() and
+    WRITE_ONCE(), the compiler need only forget the contents of the
+    indicated memory locations, while with barrier() the compiler must
+    discard the value of all memory locations that it has currented
+    cached in any machine registers.  Of course, the compiler must also
+    respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
+    though the CPU of course need not do so.
 
 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:
 
@@ -1562,16 +1566,16 @@ code.  Here are some examples of these sorts of optimizations:
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
-    Use ACCESS_ONCE() to prevent this as follows:
+    Use WRITE_ONCE() to prevent this as follows:
 
	if (a)
-		ACCESS_ONCE(b) = a;
+		WRITE_ONCE(b, a);
	else
-		ACCESS_ONCE(b) = 42;
+		WRITE_ONCE(b, 42);
 
     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
-    poor performance and scalability.  Use ACCESS_ONCE() to prevent
+    poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.
 
 (*) For aligned memory locations whose size allows them to be accessed
 
@@ -1590,9 +1594,9 @@ code.  Here are some examples of these sorts of optimizations:
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
-    use of ACCESS_ONCE() prevents store tearing in the following example:
+    use of WRITE_ONCE() prevents store tearing in the following example:
 
-	ACCESS_ONCE(p) = 0x00010002;
+	WRITE_ONCE(p, 0x00010002);
 
     Use of packed structures can also result in load and store tearing,
     as in this example:
 
@@ -1609,22 +1613,23 @@ code.  Here are some examples of these sorts of optimizations:
	foo2.b = foo1.b;
	foo2.c = foo1.c;
 
-    Because there are no ACCESS_ONCE() wrappers and no volatile markings,
-    the compiler would be well within its rights to implement these three
-    assignment statements as a pair of 32-bit loads followed by a pair
-    of 32-bit stores.  This would result in load tearing on 'foo1.b'
-    and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
-    in this example:
+    Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
+    volatile markings, the compiler would be well within its rights to
+    implement these three assignment statements as a pair of 32-bit
+    loads followed by a pair of 32-bit stores.  This would result in
+    load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
+    and WRITE_ONCE() again prevent tearing in this example:
 
	foo2.a = foo1.a;
-	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
+	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;
 
-All that aside, it is never necessary to use ACCESS_ONCE() on a variable
-that has been marked volatile.  For example, because 'jiffies' is marked
-volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
-for this is that ACCESS_ONCE() is implemented as a volatile cast, which
-has no effect when its argument is already marked volatile.
+All that aside, it is never necessary to use READ_ONCE() and
+WRITE_ONCE() on a variable that has been marked volatile.  For example,
+because 'jiffies' is marked volatile, it is never necessary to
+say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
+WRITE_ONCE() are implemented as volatile casts, which has no effect when
+its argument is already marked volatile.
 
 Please note that these compiler barriers have no direct effect on the CPU,
 which may then reorder things however it wishes.
 
@@ -1646,14 +1651,15 @@ The Linux kernel has eight basic CPU memory barriers:
 All memory barriers except the data dependency barriers imply a compiler
 barrier.  Data dependencies do not impose any additional compiler ordering.
 
-Aside: In the case of data dependencies, the compiler would be expected to
-issue the loads in the correct order (eg. `a[b]` would have to load the value
-of b before loading a[b]), however there is no guarantee in the C specification
-that the compiler may not speculate the value of b (eg. is equal to 1) and load
-a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).  There is also the
-problem of a compiler reloading b after having loaded a[b], thus having a newer
-copy of b than a[b].  A consensus has not yet been reached about these problems,
-however the ACCESS_ONCE macro is a good place to start looking.
+Aside: In the case of data dependencies, the compiler would be expected
+to issue the loads in the correct order (eg. `a[b]` would have to load
+the value of b before loading a[b]), however there is no guarantee in
+the C specification that the compiler may not speculate the value of b
+(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
+tmp = a[b]; ).  There is also the problem of a compiler reloading b after
+having loaded a[b], thus having a newer copy of b than a[b].  A consensus
+has not yet been reached about these problems, however the READ_ONCE()
+macro is a good place to start looking.
 
 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
 systems because it is assumed that a CPU will appear to be self-consistent,
 
@@ -1852,11 +1858,12 @@ Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
 imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
 pair to produce a full barrier, the ACQUIRE can be followed by an
 smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
-if either (a) the RELEASE and the ACQUIRE are executed by the same
-CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
-The smp_mb__after_unlock_lock() primitive is free on many architectures.
-Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
-sections corresponding to the RELEASE and the ACQUIRE can cross, so that:
+(including transitivity) if either (a) the RELEASE and the ACQUIRE are
+executed by the same CPU or task, or (b) the RELEASE and ACQUIRE act on
+the same variable.  The smp_mb__after_unlock_lock() primitive is free
+on many architectures.  Without smp_mb__after_unlock_lock(), the CPU's
+execution of the critical sections corresponding to the RELEASE and the
+ACQUIRE can cross, so that:
 
	*A = a;
	RELEASE M
|
||||||
|
|
||||||
CPU 1 CPU 2
|
CPU 1 CPU 2
|
||||||
=============================== ===============================
|
=============================== ===============================
|
||||||
ACCESS_ONCE(*A) = a; ACCESS_ONCE(*E) = e;
|
WRITE_ONCE(*A, a); WRITE_ONCE(*E, e);
|
||||||
ACQUIRE M ACQUIRE Q
|
ACQUIRE M ACQUIRE Q
|
||||||
ACCESS_ONCE(*B) = b; ACCESS_ONCE(*F) = f;
|
WRITE_ONCE(*B, b); WRITE_ONCE(*F, f);
|
||||||
ACCESS_ONCE(*C) = c; ACCESS_ONCE(*G) = g;
|
WRITE_ONCE(*C, c); WRITE_ONCE(*G, g);
|
||||||
RELEASE M RELEASE Q
|
RELEASE M RELEASE Q
|
||||||
ACCESS_ONCE(*D) = d; ACCESS_ONCE(*H) = h;
|
WRITE_ONCE(*D, d); WRITE_ONCE(*H, h);
|
||||||
|
|
||||||
Then there is no guarantee as to what order CPU 3 will see the accesses to *A
|
Then there is no guarantee as to what order CPU 3 will see the accesses to *A
|
||||||
through *H occur in, other than the constraints imposed by the separate locks
|
through *H occur in, other than the constraints imposed by the separate locks
|
||||||
|
@ -2151,18 +2158,18 @@ However, if the following occurs:
|
||||||
|
|
||||||
CPU 1 CPU 2
|
CPU 1 CPU 2
|
||||||
=============================== ===============================
|
=============================== ===============================
|
||||||
ACCESS_ONCE(*A) = a;
|
WRITE_ONCE(*A, a);
|
||||||
ACQUIRE M [1]
|
ACQUIRE M [1]
|
||||||
ACCESS_ONCE(*B) = b;
|
WRITE_ONCE(*B, b);
|
||||||
ACCESS_ONCE(*C) = c;
|
WRITE_ONCE(*C, c);
|
||||||
RELEASE M [1]
|
RELEASE M [1]
|
||||||
ACCESS_ONCE(*D) = d; ACCESS_ONCE(*E) = e;
|
WRITE_ONCE(*D, d); WRITE_ONCE(*E, e);
|
||||||
ACQUIRE M [2]
|
ACQUIRE M [2]
|
||||||
smp_mb__after_unlock_lock();
|
smp_mb__after_unlock_lock();
|
||||||
ACCESS_ONCE(*F) = f;
|
WRITE_ONCE(*F, f);
|
||||||
ACCESS_ONCE(*G) = g;
|
WRITE_ONCE(*G, g);
|
||||||
RELEASE M [2]
|
RELEASE M [2]
|
||||||
ACCESS_ONCE(*H) = h;
|
WRITE_ONCE(*H, h);
|
||||||
|
|
||||||
CPU 3 might see:
|
CPU 3 might see:
|
||||||
|
|
||||||
|
@ -2881,11 +2888,11 @@ A programmer might take it for granted that the CPU will perform memory
|
||||||
operations in exactly the order specified, so that if the CPU is, for example,
|
operations in exactly the order specified, so that if the CPU is, for example,
|
||||||
given the following piece of code to execute:
|
given the following piece of code to execute:
|
||||||
|
|
||||||
a = ACCESS_ONCE(*A);
|
a = READ_ONCE(*A);
|
||||||
ACCESS_ONCE(*B) = b;
|
WRITE_ONCE(*B, b);
|
||||||
c = ACCESS_ONCE(*C);
|
c = READ_ONCE(*C);
|
||||||
d = ACCESS_ONCE(*D);
|
d = READ_ONCE(*D);
|
||||||
ACCESS_ONCE(*E) = e;
|
WRITE_ONCE(*E, e);
|
||||||
|
|
||||||
they would then expect that the CPU will complete the memory operation for each
|
they would then expect that the CPU will complete the memory operation for each
|
||||||
instruction before moving on to the next one, leading to a definite sequence of
|
instruction before moving on to the next one, leading to a definite sequence of
|
||||||
|
@ -2932,12 +2939,12 @@ However, it is guaranteed that a CPU will be self-consistent: it will see its
|
||||||
_own_ accesses appear to be correctly ordered, without the need for a memory
|
_own_ accesses appear to be correctly ordered, without the need for a memory
|
||||||
barrier. For instance with the following code:
|
barrier. For instance with the following code:
|
||||||
|
|
||||||
U = ACCESS_ONCE(*A);
|
U = READ_ONCE(*A);
|
||||||
ACCESS_ONCE(*A) = V;
|
WRITE_ONCE(*A, V);
|
||||||
ACCESS_ONCE(*A) = W;
|
WRITE_ONCE(*A, W);
|
||||||
X = ACCESS_ONCE(*A);
|
X = READ_ONCE(*A);
|
||||||
ACCESS_ONCE(*A) = Y;
|
WRITE_ONCE(*A, Y);
|
||||||
Z = ACCESS_ONCE(*A);
|
Z = READ_ONCE(*A);
|
||||||
|
|
||||||
and assuming no intervention by an external influence, it can be assumed that
|
and assuming no intervention by an external influence, it can be assumed that
|
||||||
the final result will appear to be:
|
the final result will appear to be:
|
||||||
|
@ -2953,13 +2960,14 @@ accesses:
|
||||||
U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
|
U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
|
||||||
|
|
||||||
in that order, but, without intervention, the sequence may have almost any
|
in that order, but, without intervention, the sequence may have almost any
|
||||||
combination of elements combined or discarded, provided the program's view of
|
combination of elements combined or discarded, provided the program's view
|
||||||
the world remains consistent. Note that ACCESS_ONCE() is -not- optional
|
of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE()
|
||||||
in the above example, as there are architectures where a given CPU might
|
are -not- optional in the above example, as there are architectures
|
||||||
reorder successive loads to the same location. On such architectures,
|
where a given CPU might reorder successive loads to the same location.
|
||||||
ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
|
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
|
||||||
Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
|
necessary to prevent this, for example, on Itanium the volatile casts
|
||||||
special ld.acq and st.rel instructions that prevent such reordering.
|
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
|
||||||
|
and st.rel instructions (respectively) that prevent such reordering.
|
||||||
|
|
||||||
The compiler may also combine, discard or defer elements of the sequence before
|
The compiler may also combine, discard or defer elements of the sequence before
|
||||||
the CPU even sees them.
|
the CPU even sees them.
|
||||||
|
@ -2973,13 +2981,14 @@ may be reduced to:
|
||||||
|
|
||||||
*A = W;
|
*A = W;
|
||||||
|
|
||||||
since, without either a write barrier or an ACCESS_ONCE(), it can be
|
since, without either a write barrier or an WRITE_ONCE(), it can be
|
||||||
assumed that the effect of the storage of V to *A is lost. Similarly:
|
assumed that the effect of the storage of V to *A is lost. Similarly:
|
||||||
|
|
||||||
*A = Y;
|
*A = Y;
|
||||||
Z = *A;
|
Z = *A;
|
||||||
|
|
||||||
may, without a memory barrier or an ACCESS_ONCE(), be reduced to:
|
may, without a memory barrier or an READ_ONCE() and WRITE_ONCE(), be
|
||||||
|
reduced to:
|
||||||
|
|
||||||
*A = Y;
|
*A = Y;
|
||||||
Z = Y;
|
Z = Y;
|
||||||
|
|
|
@ -8472,7 +8472,7 @@ M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
|
||||||
M: Josh Triplett <josh@joshtriplett.org>
|
M: Josh Triplett <josh@joshtriplett.org>
|
||||||
R: Steven Rostedt <rostedt@goodmis.org>
|
R: Steven Rostedt <rostedt@goodmis.org>
|
||||||
R: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
|
R: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
|
||||||
R: Lai Jiangshan <laijs@cn.fujitsu.com>
|
R: Lai Jiangshan <jiangshanlai@gmail.com>
|
||||||
L: linux-kernel@vger.kernel.org
|
L: linux-kernel@vger.kernel.org
|
||||||
S: Supported
|
S: Supported
|
||||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
|
T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
|
||||||
|
@ -8499,7 +8499,7 @@ M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
|
||||||
M: Josh Triplett <josh@joshtriplett.org>
|
M: Josh Triplett <josh@joshtriplett.org>
|
||||||
R: Steven Rostedt <rostedt@goodmis.org>
|
R: Steven Rostedt <rostedt@goodmis.org>
|
||||||
R: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
|
R: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
|
||||||
R: Lai Jiangshan <laijs@cn.fujitsu.com>
|
R: Lai Jiangshan <jiangshanlai@gmail.com>
|
||||||
L: linux-kernel@vger.kernel.org
|
L: linux-kernel@vger.kernel.org
|
||||||
W: http://www.rdrop.com/users/paulmck/RCU/
|
W: http://www.rdrop.com/users/paulmck/RCU/
|
||||||
S: Supported
|
S: Supported
|
||||||
|
@ -9367,7 +9367,7 @@ F: include/linux/sl?b*.h
|
||||||
F: mm/sl?b*
|
F: mm/sl?b*
|
||||||
|
|
||||||
SLEEPABLE READ-COPY UPDATE (SRCU)
|
SLEEPABLE READ-COPY UPDATE (SRCU)
|
||||||
M: Lai Jiangshan <laijs@cn.fujitsu.com>
|
M: Lai Jiangshan <jiangshanlai@gmail.com>
|
||||||
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
|
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
|
||||||
M: Josh Triplett <josh@joshtriplett.org>
|
M: Josh Triplett <josh@joshtriplett.org>
|
||||||
R: Steven Rostedt <rostedt@goodmis.org>
|
R: Steven Rostedt <rostedt@goodmis.org>
|
||||||
|
|
|
@@ -54,9 +54,9 @@ static DEFINE_MUTEX(mce_chrdev_read_mutex);

 #define rcu_dereference_check_mce(p) \
 ({ \
-	rcu_lockdep_assert(rcu_read_lock_sched_held() || \
-			   lockdep_is_held(&mce_chrdev_read_mutex), \
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() && \
+			 !lockdep_is_held(&mce_chrdev_read_mutex), \
 			 "suspicious rcu_dereference_check_mce() usage"); \
 	smp_load_acquire(&(p)); \
 })
@@ -136,7 +136,7 @@ enum ctx_state ist_enter(struct pt_regs *regs)
 	preempt_count_add(HARDIRQ_OFFSET);

 	/* This code is a bit fragile.  Test it. */
-	rcu_lockdep_assert(rcu_is_watching(), "ist_enter didn't work");
+	RCU_LOCKDEP_WARN(!rcu_is_watching(), "ist_enter didn't work");

 	return prev_state;
 }
@@ -110,8 +110,8 @@ static DEFINE_MUTEX(dev_opp_list_lock);

 #define opp_rcu_lockdep_assert() \
 do { \
-	rcu_lockdep_assert(rcu_read_lock_held() || \
-				lockdep_is_held(&dev_opp_list_lock), \
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() && \
+				!lockdep_is_held(&dev_opp_list_lock), \
			   "Missing rcu_read_lock() or " \
			   "dev_opp_list_lock protection"); \
 } while (0)
@@ -86,8 +86,8 @@ static inline struct file *__fcheck_files(struct files_struct *files, unsigned i

 static inline struct file *fcheck_files(struct files_struct *files, unsigned int fd)
 {
-	rcu_lockdep_assert(rcu_read_lock_held() ||
-			   lockdep_is_held(&files->file_lock),
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
+			 !lockdep_is_held(&files->file_lock),
			   "suspicious rcu_dereference_check() usage");
 	return __fcheck_files(files, fd);
 }
@@ -226,6 +226,37 @@ struct rcu_synchronize {
 };
 void wakeme_after_rcu(struct rcu_head *head);

+void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
+		   struct rcu_synchronize *rs_array);
+
+#define _wait_rcu_gp(checktiny, ...) \
+do {									\
+	call_rcu_func_t __crcu_array[] = { __VA_ARGS__ };		\
+	const int __n = ARRAY_SIZE(__crcu_array);			\
+	struct rcu_synchronize __rs_array[__n];				\
+									\
+	__wait_rcu_gp(checktiny, __n, __crcu_array, __rs_array);	\
+} while (0)
+
+#define wait_rcu_gp(...) _wait_rcu_gp(false, __VA_ARGS__)
+
+/**
+ * synchronize_rcu_mult - Wait concurrently for multiple grace periods
+ * @...: List of call_rcu() functions for the flavors to wait on.
+ *
+ * This macro waits concurrently for multiple flavors of RCU grace periods.
+ * For example, synchronize_rcu_mult(call_rcu, call_rcu_bh) would wait
+ * on concurrent RCU and RCU-bh grace periods.  Waiting on a give SRCU
+ * domain requires you to write a wrapper function for that SRCU domain's
+ * call_srcu() function, supplying the corresponding srcu_struct.
+ *
+ * If Tiny RCU, tell _wait_rcu_gp() not to bother waiting for RCU
+ * or RCU-bh, given that anywhere synchronize_rcu_mult() can be called
+ * is automatically a grace period.
+ */
+#define synchronize_rcu_mult(...) \
+	_wait_rcu_gp(IS_ENABLED(CONFIG_TINY_RCU), __VA_ARGS__)
+
 /**
  * call_rcu_tasks() - Queue an RCU for invocation task-based grace period
  * @head: structure to be used for queueing the RCU updates.
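The kernel/cpu.c hunk further below is the first user of this macro; as a quick illustration of the call pattern it enables (the surrounding function here is invented for illustration only), a caller that must wait for both a normal and an RCU-sched grace period can now do so in a single concurrent wait:

	static void hypothetical_unregister(void)
	{
		/* Waits for RCU and RCU-sched grace periods, concurrently. */
		synchronize_rcu_mult(call_rcu, call_rcu_sched);
	}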
@@ -309,7 +340,7 @@ static inline void rcu_sysrq_end(void)
 }
 #endif /* #else #ifdef CONFIG_RCU_STALL_COMMON */

-#ifdef CONFIG_RCU_USER_QS
+#ifdef CONFIG_NO_HZ_FULL
 void rcu_user_enter(void);
 void rcu_user_exit(void);
 #else
@@ -317,7 +348,7 @@ static inline void rcu_user_enter(void) { }
 static inline void rcu_user_exit(void) { }
 static inline void rcu_user_hooks_switch(struct task_struct *prev,
					 struct task_struct *next) { }
-#endif /* CONFIG_RCU_USER_QS */
+#endif /* CONFIG_NO_HZ_FULL */

 #ifdef CONFIG_RCU_NOCB_CPU
 void rcu_init_nohz(void);
@@ -392,10 +423,6 @@ bool __rcu_is_watching(void);
 * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
 */

-typedef void call_rcu_func_t(struct rcu_head *head,
-			     void (*func)(struct rcu_head *head));
-void wait_rcu_gp(call_rcu_func_t crf);
-
 #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
 #include <linux/rcutree.h>
 #elif defined(CONFIG_TINY_RCU)
@@ -469,46 +496,10 @@ int rcu_read_lock_bh_held(void);
 * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an
 * RCU-sched read-side critical section.  In absence of
 * CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU-sched read-side
- * critical section unless it can prove otherwise.  Note that disabling
- * of preemption (including disabling irqs) counts as an RCU-sched
- * read-side critical section.  This is useful for debug checks in functions
- * that required that they be called within an RCU-sched read-side
- * critical section.
- *
- * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
- * and while lockdep is disabled.
- *
- * Note that if the CPU is in the idle loop from an RCU point of
- * view (ie: that we are in the section between rcu_idle_enter() and
- * rcu_idle_exit()) then rcu_read_lock_held() returns false even if the CPU
- * did an rcu_read_lock().  The reason for this is that RCU ignores CPUs
- * that are in such a section, considering these as in extended quiescent
- * state, so such a CPU is effectively never in an RCU read-side critical
- * section regardless of what RCU primitives it invokes.  This state of
- * affairs is required --- we need to keep an RCU-free window in idle
- * where the CPU may possibly enter into low power mode.  This way we can
- * notice an extended quiescent state to other CPUs that started a grace
- * period.  Otherwise we would delay any grace period as long as we run in
- * the idle task.
- *
- * Similarly, we avoid claiming an SRCU read lock held if the current
- * CPU is offline.
+ * critical section unless it can prove otherwise.
 */
 #ifdef CONFIG_PREEMPT_COUNT
-static inline int rcu_read_lock_sched_held(void)
-{
-	int lockdep_opinion = 0;
-
-	if (!debug_lockdep_rcu_enabled())
-		return 1;
-	if (!rcu_is_watching())
-		return 0;
-	if (!rcu_lockdep_current_cpu_online())
-		return 0;
-	if (debug_locks)
-		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
-	return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
-}
+int rcu_read_lock_sched_held(void);
 #else /* #ifdef CONFIG_PREEMPT_COUNT */
 static inline int rcu_read_lock_sched_held(void)
 {
@@ -545,6 +536,11 @@ static inline int rcu_read_lock_sched_held(void)

 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */

+/* Deprecate rcu_lockdep_assert():  Use RCU_LOCKDEP_WARN() instead. */
+static inline void __attribute((deprecated)) deprecate_rcu_lockdep_assert(void)
+{
+}
+
 #ifdef CONFIG_PROVE_RCU

 /**
@@ -555,17 +551,32 @@ static inline int rcu_read_lock_sched_held(void)
 #define rcu_lockdep_assert(c, s) \
	do { \
		static bool __section(.data.unlikely) __warned; \
+		deprecate_rcu_lockdep_assert(); \
		if (debug_lockdep_rcu_enabled() && !__warned && !(c)) { \
			__warned = true; \
			lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
		} \
	} while (0)

+/**
+ * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
+ * @c: condition to check
+ * @s: informative message
+ */
+#define RCU_LOCKDEP_WARN(c, s) \
+	do { \
+		static bool __section(.data.unlikely) __warned; \
+		if (debug_lockdep_rcu_enabled() && !__warned && (c)) { \
+			__warned = true; \
+			lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
+		} \
+	} while (0)
+
 #if defined(CONFIG_PROVE_RCU) && !defined(CONFIG_PREEMPT_RCU)
 static inline void rcu_preempt_sleep_check(void)
 {
-	rcu_lockdep_assert(!lock_is_held(&rcu_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_lock_map),
			 "Illegal context switch in RCU read-side critical section");
 }
 #else /* #ifdef CONFIG_PROVE_RCU */
 static inline void rcu_preempt_sleep_check(void)
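The conversion pattern used throughout the rest of this series is mechanical: rcu_lockdep_assert() asserted that a condition held, whereas RCU_LOCKDEP_WARN() complains when a condition is met, so callers invert the expression (De Morgan's law for compound checks). A hedged before/after sketch, using a made-up my_lock purely for illustration:

	/* Old style: warn unless we hold the RCU read lock or my_lock. */
	rcu_lockdep_assert(rcu_read_lock_held() ||
			   lockdep_is_held(&my_lock),
			   "need rcu_read_lock() or my_lock");

	/* New style: warn if we hold neither. */
	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
			 !lockdep_is_held(&my_lock),
			 "need rcu_read_lock() or my_lock");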
@@ -576,15 +587,16 @@ static inline void rcu_preempt_sleep_check(void)
 #define rcu_sleep_check() \
	do { \
		rcu_preempt_sleep_check(); \
-		rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map), \
+		RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map), \
				 "Illegal context switch in RCU-bh read-side critical section"); \
-		rcu_lockdep_assert(!lock_is_held(&rcu_sched_lock_map), \
+		RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map), \
				 "Illegal context switch in RCU-sched read-side critical section"); \
	} while (0)

 #else /* #ifdef CONFIG_PROVE_RCU */

-#define rcu_lockdep_assert(c, s) do { } while (0)
+#define rcu_lockdep_assert(c, s) deprecate_rcu_lockdep_assert()
+#define RCU_LOCKDEP_WARN(c, s) do { } while (0)
 #define rcu_sleep_check() do { } while (0)

 #endif /* #else #ifdef CONFIG_PROVE_RCU */
@@ -615,13 +627,13 @@ static inline void rcu_preempt_sleep_check(void)
 ({ \
	/* Dependency order vs. p above. */ \
	typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
-	rcu_lockdep_assert(c, "suspicious rcu_dereference_check() usage"); \
+	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
	rcu_dereference_sparse(p, space); \
	((typeof(*p) __force __kernel *)(________p1)); \
 })
 #define __rcu_dereference_protected(p, c, space) \
 ({ \
-	rcu_lockdep_assert(c, "suspicious rcu_dereference_protected() usage"); \
+	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_protected() usage"); \
	rcu_dereference_sparse(p, space); \
	((typeof(*p) __force __kernel *)(p)); \
 })
@@ -845,8 +857,8 @@ static inline void rcu_read_lock(void)
	__rcu_read_lock();
	__acquire(RCU);
	rcu_lock_acquire(&rcu_lock_map);
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock() used illegally while idle");
 }

 /*
@@ -896,8 +908,8 @@ static inline void rcu_read_lock(void)
 */
 static inline void rcu_read_unlock(void)
 {
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock() used illegally while idle");
	__release(RCU);
	__rcu_read_unlock();
	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
@@ -925,8 +937,8 @@ static inline void rcu_read_lock_bh(void)
	local_bh_disable();
	__acquire(RCU_BH);
	rcu_lock_acquire(&rcu_bh_lock_map);
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_bh() used illegally while idle");
 }

 /*
@@ -936,8 +948,8 @@ static inline void rcu_read_lock_bh(void)
 */
 static inline void rcu_read_unlock_bh(void)
 {
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_bh() used illegally while idle");
	rcu_lock_release(&rcu_bh_lock_map);
	__release(RCU_BH);
	local_bh_enable();
@@ -961,8 +973,8 @@ static inline void rcu_read_lock_sched(void)
	preempt_disable();
	__acquire(RCU_SCHED);
	rcu_lock_acquire(&rcu_sched_lock_map);
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_sched() used illegally while idle");
 }

 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
@@ -979,8 +991,8 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
 */
 static inline void rcu_read_unlock_sched(void)
 {
-	rcu_lockdep_assert(rcu_is_watching(),
+	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_sched() used illegally while idle");
	rcu_lock_release(&rcu_sched_lock_map);
	__release(RCU_SCHED);
	preempt_enable();
@@ -1031,7 +1043,7 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define RCU_INIT_POINTER(p, v) \
	do { \
		rcu_dereference_sparse(p, __rcu); \
-		p = RCU_INITIALIZER(v); \
+		WRITE_ONCE(p, RCU_INITIALIZER(v)); \
	} while (0)

 /**
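Wrapping the assignment in WRITE_ONCE() keeps the initialization a single store even under aggressive compiler optimization; the semantics of RCU_INIT_POINTER() itself are unchanged. A minimal hedged usage sketch (the structure and pointer names are illustrative, not from the patch):

	struct foo {
		int a;
	};
	static struct foo __rcu *gp;	/* illustrative RCU-protected pointer */

	static void publish_without_ordering(struct foo *p)
	{
		p->a = 1;
		/* No concurrent readers yet, so no ordering beyond a plain store. */
		RCU_INIT_POINTER(gp, p);
	}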
@@ -37,6 +37,16 @@ static inline void cond_synchronize_rcu(unsigned long oldstate)
	might_sleep();
 }

+static inline unsigned long get_state_synchronize_sched(void)
+{
+	return 0;
+}
+
+static inline void cond_synchronize_sched(unsigned long oldstate)
+{
+	might_sleep();
+}
+
 static inline void rcu_barrier_bh(void)
 {
	wait_rcu_gp(call_rcu_bh);
@@ -76,6 +76,8 @@ void rcu_barrier_bh(void);
 void rcu_barrier_sched(void);
 unsigned long get_state_synchronize_rcu(void);
 void cond_synchronize_rcu(unsigned long oldstate);
+unsigned long get_state_synchronize_sched(void);
+void cond_synchronize_sched(unsigned long oldstate);

 extern unsigned long rcutorture_testseq;
 extern unsigned long rcutorture_vernum;
@@ -212,6 +212,9 @@ struct callback_head {
 };
 #define rcu_head callback_head

+typedef void (*rcu_callback_t)(struct rcu_head *head);
+typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func);
+
 /* clocksource cycle base type */
 typedef u64 cycle_t;
@@ -661,7 +661,6 @@ TRACE_EVENT(rcu_torture_read,
 * Tracepoint for _rcu_barrier() execution.  The string "s" describes
 * the _rcu_barrier phase:
 *	"Begin": _rcu_barrier() started.
-*	"Check": _rcu_barrier() checking for piggybacking.
 *	"EarlyExit": _rcu_barrier() piggybacked, thus early exit.
 *	"Inc1": _rcu_barrier() piggyback check counter incremented.
 *	"OfflineNoCB": _rcu_barrier() found callback on never-online CPU
init/Kconfig

@@ -538,15 +538,6 @@ config RCU_STALL_COMMON
 config CONTEXT_TRACKING
	bool

-config RCU_USER_QS
-	bool
-	help
-	  This option sets hooks on kernel / userspace boundaries and
-	  puts RCU in extended quiescent state when the CPU runs in
-	  userspace. It means that when a CPU runs in userspace, it is
-	  excluded from the global RCU state machine and thus doesn't
-	  try to keep the timer tick on for RCU.
-
 config CONTEXT_TRACKING_FORCE
	bool "Force context tracking"
	depends on CONTEXT_TRACKING
@@ -707,6 +698,7 @@ config RCU_BOOST_DELAY
 config RCU_NOCB_CPU
	bool "Offload RCU callback processing from boot-selected CPUs"
	depends on TREE_RCU || PREEMPT_RCU
+	depends on RCU_EXPERT || NO_HZ_FULL
	default n
	help
	  Use this option to reduce OS jitter for aggressive HPC or
@@ -107,8 +107,8 @@ static DEFINE_SPINLOCK(release_agent_path_lock);
 struct percpu_rw_semaphore cgroup_threadgroup_rwsem;

 #define cgroup_assert_mutex_or_rcu_locked() \
-	rcu_lockdep_assert(rcu_read_lock_held() || \
-			   lockdep_is_held(&cgroup_mutex), \
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() && \
+			   !lockdep_is_held(&cgroup_mutex), \
			   "cgroup_mutex or RCU read lock required");

 /*
kernel/cpu.c

@@ -381,14 +381,14 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
	 * will observe it.
	 *
	 * For CONFIG_PREEMPT we have preemptible RCU and its sync_rcu() might
-	 * not imply sync_sched(), so explicitly call both.
+	 * not imply sync_sched(), so wait for both.
	 *
	 * Do sync before park smpboot threads to take care the rcu boost case.
	 */
-#ifdef CONFIG_PREEMPT
-	synchronize_sched();
-#endif
-	synchronize_rcu();
+	if (IS_ENABLED(CONFIG_PREEMPT))
+		synchronize_rcu_mult(call_rcu, call_rcu_sched);
+	else
+		synchronize_rcu();

	smpboot_park_threads(cpu);
@@ -451,9 +451,8 @@ EXPORT_SYMBOL(pid_task);
 */
 struct task_struct *find_task_by_pid_ns(pid_t nr, struct pid_namespace *ns)
 {
-	rcu_lockdep_assert(rcu_read_lock_held(),
-			   "find_task_by_pid_ns() needs rcu_read_lock()"
-			   " protection");
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
+			 "find_task_by_pid_ns() needs rcu_read_lock() protection");
	return pid_task(find_pid_ns(nr, ns), PIDTYPE_PID);
 }
@@ -635,6 +635,8 @@ static struct rcu_torture_ops sched_ops = {
	.deferred_free	= rcu_sched_torture_deferred_free,
	.sync		= synchronize_sched,
	.exp_sync	= synchronize_sched_expedited,
+	.get_state	= get_state_synchronize_sched,
+	.cond_sync	= cond_synchronize_sched,
	.call		= call_rcu_sched,
	.cb_barrier	= rcu_barrier_sched,
	.fqs		= rcu_sched_force_quiescent_state,
@@ -684,10 +686,20 @@ static struct rcu_torture_ops tasks_ops = {

 #define RCUTORTURE_TASKS_OPS &tasks_ops,

+static bool __maybe_unused torturing_tasks(void)
+{
+	return cur_ops == &tasks_ops;
+}
+
 #else /* #ifdef CONFIG_TASKS_RCU */

 #define RCUTORTURE_TASKS_OPS

+static bool torturing_tasks(void)
+{
+	return false;
+}
+
 #endif /* #else #ifdef CONFIG_TASKS_RCU */

 /*
@@ -823,9 +835,7 @@ rcu_torture_cbflood(void *arg)
	}
	if (err) {
		VERBOSE_TOROUT_STRING("rcu_torture_cbflood disabled: Bad args or OOM");
-		while (!torture_must_stop())
-			schedule_timeout_interruptible(HZ);
-		return 0;
+		goto wait_for_stop;
	}
	VERBOSE_TOROUT_STRING("rcu_torture_cbflood task started");
	do {
@@ -844,6 +854,7 @@ rcu_torture_cbflood(void *arg)
		stutter_wait("rcu_torture_cbflood");
	} while (!torture_must_stop());
	vfree(rhp);
+wait_for_stop:
	torture_kthread_stopping("rcu_torture_cbflood");
	return 0;
 }
@@ -1088,7 +1099,8 @@ static void rcu_torture_timer(unsigned long unused)
	p = rcu_dereference_check(rcu_torture_current,
				  rcu_read_lock_bh_held() ||
				  rcu_read_lock_sched_held() ||
-				  srcu_read_lock_held(srcu_ctlp));
+				  srcu_read_lock_held(srcu_ctlp) ||
+				  torturing_tasks());
	if (p == NULL) {
		/* Leave because rcu_torture_writer is not yet underway */
		cur_ops->readunlock(idx);
@@ -1162,7 +1174,8 @@ rcu_torture_reader(void *arg)
	p = rcu_dereference_check(rcu_torture_current,
				  rcu_read_lock_bh_held() ||
				  rcu_read_lock_sched_held() ||
-				  srcu_read_lock_held(srcu_ctlp));
+				  srcu_read_lock_held(srcu_ctlp) ||
+				  torturing_tasks());
	if (p == NULL) {
		/* Wait for rcu_torture_writer to get underway */
		cur_ops->readunlock(idx);
@@ -1507,7 +1520,7 @@ static int rcu_torture_barrier_init(void)
	int i;
	int ret;

-	if (n_barrier_cbs == 0)
+	if (n_barrier_cbs <= 0)
		return 0;
	if (cur_ops->call == NULL || cur_ops->cb_barrier == NULL) {
		pr_alert("%s" TORTURE_FLAG
@@ -1786,12 +1799,15 @@ rcu_torture_init(void)
			       writer_task);
	if (firsterr)
		goto unwind;
-	fakewriter_tasks = kzalloc(nfakewriters * sizeof(fakewriter_tasks[0]),
-				   GFP_KERNEL);
-	if (fakewriter_tasks == NULL) {
-		VERBOSE_TOROUT_ERRSTRING("out of memory");
-		firsterr = -ENOMEM;
-		goto unwind;
+	if (nfakewriters > 0) {
+		fakewriter_tasks = kzalloc(nfakewriters *
+					   sizeof(fakewriter_tasks[0]),
+					   GFP_KERNEL);
+		if (fakewriter_tasks == NULL) {
+			VERBOSE_TOROUT_ERRSTRING("out of memory");
+			firsterr = -ENOMEM;
+			goto unwind;
+		}
	}
	for (i = 0; i < nfakewriters; i++) {
		firsterr = torture_create_kthread(rcu_torture_fakewriter,
@@ -1818,7 +1834,7 @@ rcu_torture_init(void)
		if (firsterr)
			goto unwind;
	}
-	if (test_no_idle_hz) {
+	if (test_no_idle_hz && shuffle_interval > 0) {
		firsterr = torture_shuffle_init(shuffle_interval * HZ);
		if (firsterr)
			goto unwind;
@@ -252,14 +252,15 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
 }

 /**
- * srcu_readers_active - returns approximate number of readers.
+ * srcu_readers_active - returns true if there are readers. and false
+ *                       otherwise
 * @sp: which srcu_struct to count active readers (holding srcu_read_lock).
 *
 * Note that this is not an atomic primitive, and can therefore suffer
 * severe errors when invoked on an active srcu_struct.  That said, it
 * can be useful as an error check at cleanup time.
 */
-static int srcu_readers_active(struct srcu_struct *sp)
+static bool srcu_readers_active(struct srcu_struct *sp)
 {
	int cpu;
	unsigned long sum = 0;
@@ -414,11 +415,11 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
	struct rcu_head *head = &rcu.head;
	bool done = false;

-	rcu_lockdep_assert(!lock_is_held(&sp->dep_map) &&
-			   !lock_is_held(&rcu_bh_lock_map) &&
-			   !lock_is_held(&rcu_lock_map) &&
-			   !lock_is_held(&rcu_sched_lock_map),
-			   "Illegal synchronize_srcu() in same-type SRCU (or RCU) read-side critical section");
+	RCU_LOCKDEP_WARN(lock_is_held(&sp->dep_map) ||
+			 lock_is_held(&rcu_bh_lock_map) ||
+			 lock_is_held(&rcu_lock_map) ||
+			 lock_is_held(&rcu_sched_lock_map),
+			 "Illegal synchronize_srcu() in same-type SRCU (or in RCU) read-side critical section");

	might_sleep();
	init_completion(&rcu.completion);
@@ -191,10 +191,10 @@ static void rcu_process_callbacks(struct softirq_action *unused)
 */
 void synchronize_sched(void)
 {
-	rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
-			   !lock_is_held(&rcu_lock_map) &&
-			   !lock_is_held(&rcu_sched_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
+			 lock_is_held(&rcu_lock_map) ||
+			 lock_is_held(&rcu_sched_lock_map),
			 "Illegal synchronize_sched() in RCU read-side critical section");
	cond_resched();
 }
 EXPORT_SYMBOL_GPL(synchronize_sched);
@@ -70,6 +70,8 @@ MODULE_ALIAS("rcutree");

 static struct lock_class_key rcu_node_class[RCU_NUM_LVLS];
 static struct lock_class_key rcu_fqs_class[RCU_NUM_LVLS];
+static struct lock_class_key rcu_exp_class[RCU_NUM_LVLS];
+static struct lock_class_key rcu_exp_sched_class[RCU_NUM_LVLS];

 /*
 * In order to export the rcu_state name to the tracing tools, it
@@ -124,13 +126,8 @@ module_param(rcu_fanout_exact, bool, 0444);
 static int rcu_fanout_leaf = RCU_FANOUT_LEAF;
 module_param(rcu_fanout_leaf, int, 0444);
 int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
-static int num_rcu_lvl[] = {  /* Number of rcu_nodes at specified level. */
-	NUM_RCU_LVL_0,
-	NUM_RCU_LVL_1,
-	NUM_RCU_LVL_2,
-	NUM_RCU_LVL_3,
-	NUM_RCU_LVL_4,
-};
+/* Number of rcu_nodes at specified level. */
+static int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
 int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */

 /*
@@ -649,12 +646,12 @@ static void rcu_eqs_enter_common(long long oldval, bool user)
	 * It is illegal to enter an extended quiescent state while
	 * in an RCU read-side critical section.
	 */
-	rcu_lockdep_assert(!lock_is_held(&rcu_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_lock_map),
			 "Illegal idle entry in RCU read-side critical section.");
-	rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),
			 "Illegal idle entry in RCU-bh read-side critical section.");
-	rcu_lockdep_assert(!lock_is_held(&rcu_sched_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map),
			 "Illegal idle entry in RCU-sched read-side critical section.");
 }

 /*
@@ -701,7 +698,7 @@ void rcu_idle_enter(void)
 }
 EXPORT_SYMBOL_GPL(rcu_idle_enter);

-#ifdef CONFIG_RCU_USER_QS
+#ifdef CONFIG_NO_HZ_FULL
 /**
 * rcu_user_enter - inform RCU that we are resuming userspace.
 *
@@ -714,7 +711,7 @@ void rcu_user_enter(void)
 {
	rcu_eqs_enter(1);
 }
-#endif /* CONFIG_RCU_USER_QS */
+#endif /* CONFIG_NO_HZ_FULL */

 /**
 * rcu_irq_exit - inform RCU that current CPU is exiting irq towards idle
@@ -828,7 +825,7 @@ void rcu_idle_exit(void)
 }
 EXPORT_SYMBOL_GPL(rcu_idle_exit);

-#ifdef CONFIG_RCU_USER_QS
+#ifdef CONFIG_NO_HZ_FULL
 /**
 * rcu_user_exit - inform RCU that we are exiting userspace.
 *
@@ -839,7 +836,7 @@ void rcu_user_exit(void)
 {
	rcu_eqs_exit(1);
 }
-#endif /* CONFIG_RCU_USER_QS */
+#endif /* CONFIG_NO_HZ_FULL */

 /**
 * rcu_irq_enter - inform RCU that current CPU is entering irq away from idle
@@ -978,9 +975,9 @@ bool notrace rcu_is_watching(void)
 {
	bool ret;

-	preempt_disable();
+	preempt_disable_notrace();
	ret = __rcu_is_watching();
-	preempt_enable();
+	preempt_enable_notrace();
	return ret;
 }
 EXPORT_SYMBOL_GPL(rcu_is_watching);
@@ -1178,9 +1175,11 @@ static void rcu_check_gp_kthread_starvation(struct rcu_state *rsp)
	j = jiffies;
	gpa = READ_ONCE(rsp->gp_activity);
	if (j - gpa > 2 * HZ)
-		pr_err("%s kthread starved for %ld jiffies! g%lu c%lu f%#x\n",
+		pr_err("%s kthread starved for %ld jiffies! g%lu c%lu f%#x s%d ->state=%#lx\n",
		       rsp->name, j - gpa,
-		       rsp->gpnum, rsp->completed, rsp->gp_flags);
+		       rsp->gpnum, rsp->completed,
+		       rsp->gp_flags, rsp->gp_state,
+		       rsp->gp_kthread ? rsp->gp_kthread->state : 0);
 }

 /*
@@ -1905,6 +1904,26 @@ static int rcu_gp_init(struct rcu_state *rsp)
	return 1;
 }

+/*
+ * Helper function for wait_event_interruptible_timeout() wakeup
+ * at force-quiescent-state time.
+ */
+static bool rcu_gp_fqs_check_wake(struct rcu_state *rsp, int *gfp)
+{
+	struct rcu_node *rnp = rcu_get_root(rsp);
+
+	/* Someone like call_rcu() requested a force-quiescent-state scan. */
+	*gfp = READ_ONCE(rsp->gp_flags);
+	if (*gfp & RCU_GP_FLAG_FQS)
+		return true;
+
+	/* The current grace period has completed. */
+	if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp))
+		return true;
+
+	return false;
+}
+
 /*
 * Do one round of quiescent-state forcing.
 */
@@ -2041,6 +2060,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
			wait_event_interruptible(rsp->gp_wq,
						 READ_ONCE(rsp->gp_flags) &
						 RCU_GP_FLAG_INIT);
+			rsp->gp_state = RCU_GP_DONE_GPS;
			/* Locking provides needed memory barrier. */
			if (rcu_gp_init(rsp))
				break;
@@ -2068,11 +2088,8 @@ static int __noreturn rcu_gp_kthread(void *arg)
					       TPS("fqswait"));
			rsp->gp_state = RCU_GP_WAIT_FQS;
			ret = wait_event_interruptible_timeout(rsp->gp_wq,
-					((gf = READ_ONCE(rsp->gp_flags)) &
-					 RCU_GP_FLAG_FQS) ||
-					(!READ_ONCE(rnp->qsmask) &&
-					 !rcu_preempt_blocked_readers_cgp(rnp)),
-					j);
+					rcu_gp_fqs_check_wake(rsp, &gf), j);
+			rsp->gp_state = RCU_GP_DOING_FQS;
			/* Locking provides needed memory barriers. */
			/* If grace period done, leave loop. */
			if (!READ_ONCE(rnp->qsmask) &&
@@ -2110,7 +2127,9 @@ static int __noreturn rcu_gp_kthread(void *arg)
		}

		/* Handle grace-period end. */
+		rsp->gp_state = RCU_GP_CLEANUP;
		rcu_gp_cleanup(rsp);
+		rsp->gp_state = RCU_GP_CLEANED;
	}
 }
@@ -3161,10 +3180,10 @@ static inline int rcu_blocking_is_gp(void)
 */
 void synchronize_sched(void)
 {
-	rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
-			   !lock_is_held(&rcu_lock_map) &&
-			   !lock_is_held(&rcu_sched_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
+			 lock_is_held(&rcu_lock_map) ||
+			 lock_is_held(&rcu_sched_lock_map),
			 "Illegal synchronize_sched() in RCU-sched read-side critical section");
	if (rcu_blocking_is_gp())
		return;
	if (rcu_gp_is_expedited())
@@ -3188,10 +3207,10 @@ EXPORT_SYMBOL_GPL(synchronize_sched);
 */
 void synchronize_rcu_bh(void)
 {
-	rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
-			   !lock_is_held(&rcu_lock_map) &&
-			   !lock_is_held(&rcu_sched_lock_map),
+	RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
+			 lock_is_held(&rcu_lock_map) ||
+			 lock_is_held(&rcu_sched_lock_map),
			 "Illegal synchronize_rcu_bh() in RCU-bh read-side critical section");
	if (rcu_blocking_is_gp())
		return;
	if (rcu_gp_is_expedited())
@@ -3253,23 +3272,247 @@ void cond_synchronize_rcu(unsigned long oldstate)
 }
 EXPORT_SYMBOL_GPL(cond_synchronize_rcu);

-static int synchronize_sched_expedited_cpu_stop(void *data)
+/**
+ * get_state_synchronize_sched - Snapshot current RCU-sched state
+ *
+ * Returns a cookie that is used by a later call to cond_synchronize_sched()
+ * to determine whether or not a full grace period has elapsed in the
+ * meantime.
+ */
+unsigned long get_state_synchronize_sched(void)
 {
	/*
-	 * There must be a full memory barrier on each affected CPU
-	 * between the time that try_stop_cpus() is called and the
-	 * time that it returns.
-	 *
-	 * In the current initial implementation of cpu_stop, the
-	 * above condition is already met when the control reaches
-	 * this point and the following smp_mb() is not strictly
-	 * necessary.  Do smp_mb() anyway for documentation and
-	 * robustness against future implementation changes.
+	 * Any prior manipulation of RCU-protected data must happen
+	 * before the load from ->gpnum.
	 */
-	smp_mb(); /* See above comment block. */
+	smp_mb();  /* ^^^ */
+
+	/*
+	 * Make sure this load happens before the purportedly
+	 * time-consuming work between get_state_synchronize_sched()
+	 * and cond_synchronize_sched().
+	 */
+	return smp_load_acquire(&rcu_sched_state.gpnum);
+}
+EXPORT_SYMBOL_GPL(get_state_synchronize_sched);
+
+/**
+ * cond_synchronize_sched - Conditionally wait for an RCU-sched grace period
+ *
+ * @oldstate: return value from earlier call to get_state_synchronize_sched()
+ *
+ * If a full RCU-sched grace period has elapsed since the earlier call to
+ * get_state_synchronize_sched(), just return.  Otherwise, invoke
+ * synchronize_sched() to wait for a full grace period.
+ *
+ * Yes, this function does not take counter wrap into account.  But
+ * counter wrap is harmless.  If the counter wraps, we have waited for
+ * more than 2 billion grace periods (and way more on a 64-bit system!),
+ * so waiting for one additional grace period should be just fine.
+ */
+void cond_synchronize_sched(unsigned long oldstate)
+{
+	unsigned long newstate;
+
+	/*
+	 * Ensure that this load happens before any RCU-destructive
+	 * actions the caller might carry out after we return.
+	 */
+	newstate = smp_load_acquire(&rcu_sched_state.completed);
+	if (ULONG_CMP_GE(oldstate, newstate))
+		synchronize_sched();
+}
+EXPORT_SYMBOL_GPL(cond_synchronize_sched);
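The new RCU-sched pair follows the same usage pattern as the existing get_state_synchronize_rcu()/cond_synchronize_rcu() interfaces. A hedged sketch of a caller (the helper names here are invented for illustration):

	static void hypothetical_update(void)
	{
		unsigned long gp_snap;

		gp_snap = get_state_synchronize_sched();  /* snapshot before slow work */
		do_something_slow();                      /* hypothetical time-consuming work */
		cond_synchronize_sched(gp_snap);          /* waits only if no GP elapsed meanwhile */
	}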
+
+/* Adjust sequence number for start of update-side operation. */
+static void rcu_seq_start(unsigned long *sp)
+{
+	WRITE_ONCE(*sp, *sp + 1);
+	smp_mb(); /* Ensure update-side operation after counter increment. */
+	WARN_ON_ONCE(!(*sp & 0x1));
+}
+
+/* Adjust sequence number for end of update-side operation. */
+static void rcu_seq_end(unsigned long *sp)
+{
+	smp_mb(); /* Ensure update-side operation before counter increment. */
+	WRITE_ONCE(*sp, *sp + 1);
+	WARN_ON_ONCE(*sp & 0x1);
+}
+
+/* Take a snapshot of the update side's sequence number. */
+static unsigned long rcu_seq_snap(unsigned long *sp)
+{
+	unsigned long s;
+
+	smp_mb(); /* Caller's modifications seen first by other CPUs. */
+	s = (READ_ONCE(*sp) + 3) & ~0x1;
+	smp_mb(); /* Above access must not bleed into critical section. */
+	return s;
+}
+
+/*
+ * Given a snapshot from rcu_seq_snap(), determine whether or not a
+ * full update-side operation has occurred.
+ */
+static bool rcu_seq_done(unsigned long *sp, unsigned long s)
+{
+	return ULONG_CMP_GE(READ_ONCE(*sp), s);
+}
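These helpers implement the "did somebody already do our work?" protocol that the rewritten synchronize_sched_expedited() relies on: the counter is odd while an update is in progress and even when idle, and a snapshot names the counter value that proves a full operation completed after the caller arrived. A simplified sketch of the pattern (not the kernel code; my_seq and my_mutex are stand-ins for the real sequence counter and funnel locking):

	static unsigned long my_seq;		/* even = idle, odd = update in progress */
	static DEFINE_MUTEX(my_mutex);		/* stand-in for the funnel locking */

	static void do_expedited_work(void)
	{
		unsigned long s = rcu_seq_snap(&my_seq);  /* "done" once counter reaches s */

		mutex_lock(&my_mutex);
		if (rcu_seq_done(&my_seq, s)) {	/* someone finished a full op while we waited */
			mutex_unlock(&my_mutex);
			return;
		}
		rcu_seq_start(&my_seq);		/* counter becomes odd */
		/* ... force quiescent states ... */
		rcu_seq_end(&my_seq);		/* counter becomes even again */
		mutex_unlock(&my_mutex);
	}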
+
+/* Wrapper functions for expedited grace periods.  */
+static void rcu_exp_gp_seq_start(struct rcu_state *rsp)
+{
+	rcu_seq_start(&rsp->expedited_sequence);
+}
+static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
+{
+	rcu_seq_end(&rsp->expedited_sequence);
+	smp_mb(); /* Ensure that consecutive grace periods serialize. */
+}
+static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
+{
+	return rcu_seq_snap(&rsp->expedited_sequence);
+}
+static bool rcu_exp_gp_seq_done(struct rcu_state *rsp, unsigned long s)
+{
+	return rcu_seq_done(&rsp->expedited_sequence, s);
+}
+
+/* Common code for synchronize_{rcu,sched}_expedited() work-done checking. */
+static bool sync_exp_work_done(struct rcu_state *rsp, struct rcu_node *rnp,
+			       struct rcu_data *rdp,
+			       atomic_long_t *stat, unsigned long s)
+{
+	if (rcu_exp_gp_seq_done(rsp, s)) {
+		if (rnp)
+			mutex_unlock(&rnp->exp_funnel_mutex);
+		else if (rdp)
+			mutex_unlock(&rdp->exp_funnel_mutex);
+		/* Ensure test happens before caller kfree(). */
+		smp_mb__before_atomic(); /* ^^^ */
+		atomic_long_inc(stat);
+		return true;
+	}
+	return false;
+}
+
+/*
+ * Funnel-lock acquisition for expedited grace periods.  Returns a
+ * pointer to the root rcu_node structure, or NULL if some other
+ * task did the expedited grace period for us.
+ */
+static struct rcu_node *exp_funnel_lock(struct rcu_state *rsp, unsigned long s)
+{
+	struct rcu_data *rdp;
+	struct rcu_node *rnp0;
+	struct rcu_node *rnp1 = NULL;
+
+	/*
+	 * First try directly acquiring the root lock in order to reduce
+	 * latency in the common case where expedited grace periods are
+	 * rare.  We check mutex_is_locked() to avoid pathological levels of
+	 * memory contention on ->exp_funnel_mutex in the heavy-load case.
+	 */
+	rnp0 = rcu_get_root(rsp);
+	if (!mutex_is_locked(&rnp0->exp_funnel_mutex)) {
+		if (mutex_trylock(&rnp0->exp_funnel_mutex)) {
+			if (sync_exp_work_done(rsp, rnp0, NULL,
+					       &rsp->expedited_workdone0, s))
+				return NULL;
+			return rnp0;
+		}
+	}
+
+	/*
+	 * Each pass through the following loop works its way
+	 * up the rcu_node tree, returning if others have done the
+	 * work or otherwise falls through holding the root rnp's
+	 * ->exp_funnel_mutex.  The mapping from CPU to rcu_node structure
+	 * can be inexact, as it is just promoting locality and is not
+	 * strictly needed for correctness.
+	 */
+	rdp = per_cpu_ptr(rsp->rda, raw_smp_processor_id());
+	if (sync_exp_work_done(rsp, NULL, NULL, &rsp->expedited_workdone1, s))
+		return NULL;
+	mutex_lock(&rdp->exp_funnel_mutex);
+	rnp0 = rdp->mynode;
+	for (; rnp0 != NULL; rnp0 = rnp0->parent) {
+		if (sync_exp_work_done(rsp, rnp1, rdp,
+				       &rsp->expedited_workdone2, s))
+			return NULL;
+		mutex_lock(&rnp0->exp_funnel_mutex);
+		if (rnp1)
+			mutex_unlock(&rnp1->exp_funnel_mutex);
+		else
+			mutex_unlock(&rdp->exp_funnel_mutex);
+		rnp1 = rnp0;
+	}
+	if (sync_exp_work_done(rsp, rnp1, rdp,
+			       &rsp->expedited_workdone3, s))
+		return NULL;
+	return rnp1;
+}
+
+/* Invoked on each online non-idle CPU for expedited quiescent state. */
+static int synchronize_sched_expedited_cpu_stop(void *data)
+{
+	struct rcu_data *rdp = data;
+	struct rcu_state *rsp = rdp->rsp;
+
+	/* We are here: If we are last, do the wakeup. */
+	rdp->exp_done = true;
+	if (atomic_dec_and_test(&rsp->expedited_need_qs))
+		wake_up(&rsp->expedited_wq);
	return 0;
 }

+static void synchronize_sched_expedited_wait(struct rcu_state *rsp)
+{
+	int cpu;
+	unsigned long jiffies_stall;
+	unsigned long jiffies_start;
+	struct rcu_data *rdp;
+	int ret;
+
+	jiffies_stall = rcu_jiffies_till_stall_check();
+	jiffies_start = jiffies;
+
+	for (;;) {
+		ret = wait_event_interruptible_timeout(
+				rsp->expedited_wq,
+				!atomic_read(&rsp->expedited_need_qs),
+				jiffies_stall);
+		if (ret > 0)
+			return;
+		if (ret < 0) {
+			/* Hit a signal, disable CPU stall warnings. */
+			wait_event(rsp->expedited_wq,
+				   !atomic_read(&rsp->expedited_need_qs));
+			return;
+		}
+		pr_err("INFO: %s detected expedited stalls on CPUs: {",
+		       rsp->name);
+		for_each_online_cpu(cpu) {
+			rdp = per_cpu_ptr(rsp->rda, cpu);
+
+			if (rdp->exp_done)
+				continue;
+			pr_cont(" %d", cpu);
+		}
+		pr_cont(" } %lu jiffies s: %lu\n",
+			jiffies - jiffies_start, rsp->expedited_sequence);
+		for_each_online_cpu(cpu) {
+			rdp = per_cpu_ptr(rsp->rda, cpu);
+
+			if (rdp->exp_done)
+				continue;
+			dump_cpu_task(cpu);
+		}
+		jiffies_stall = 3 * rcu_jiffies_till_stall_check() + 3;
+	}
+}
+
 /**
  * synchronize_sched_expedited - Brute-force RCU-sched grace period
  *
@@ -3281,58 +3524,21 @@ static int synchronize_sched_expedited_cpu_stop(void *data)
 * restructure your code to batch your updates, and then use a single
 * synchronize_sched() instead.
 *
- * This implementation can be thought of as an application of ticket
- * locking to RCU, with sync_sched_expedited_started and
- * sync_sched_expedited_done taking on the roles of the halves
- * of the ticket-lock word.  Each task atomically increments
- * sync_sched_expedited_started upon entry, snapshotting the old value,
- * then attempts to stop all the CPUs.  If this succeeds, then each
- * CPU will have executed a context switch, resulting in an RCU-sched
- * grace period.  We are then done, so we use atomic_cmpxchg() to
- * update sync_sched_expedited_done to match our snapshot -- but
- * only if someone else has not already advanced past our snapshot.
- *
- * On the other hand, if try_stop_cpus() fails, we check the value
- * of sync_sched_expedited_done.  If it has advanced past our
- * initial snapshot, then someone else must have forced a grace period
- * some time after we took our snapshot.  In this case, our work is
- * done for us, and we can simply return.  Otherwise, we try again,
- * but keep our initial snapshot for purposes of checking for someone
- * doing our work for us.
- *
- * If we fail too many times in a row, we fall back to synchronize_sched().
+ * This implementation can be thought of as an application of sequence
+ * locking to expedited grace periods, but using the sequence counter to
+ * determine when someone else has already done the work instead of for
+ * retrying readers.
 */
 void synchronize_sched_expedited(void)
 {
-	cpumask_var_t cm;
-	bool cma = false;
	int cpu;
-	long firstsnap, s, snap;
-	int trycount = 0;
+	unsigned long s;
+	struct rcu_node *rnp;
	struct rcu_state *rsp = &rcu_sched_state;

-	/*
-	 * If we are in danger of counter wrap, just do synchronize_sched().
-	 * By allowing sync_sched_expedited_started to advance no more than
-	 * ULONG_MAX/8 ahead of sync_sched_expedited_done, we are ensuring
-	 * that more than 3.5 billion CPUs would be required to force a
-	 * counter wrap on a 32-bit system.  Quite a few more CPUs would of
-	 * course be required on a 64-bit system.
-	 */
-	if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start),
-			 (ulong)atomic_long_read(&rsp->expedited_done) +
-			 ULONG_MAX / 8)) {
-		wait_rcu_gp(call_rcu_sched);
-		atomic_long_inc(&rsp->expedited_wrap);
-		return;
-	}
-
-	/*
-	 * Take a ticket.  Note that atomic_inc_return() implies a
-	 * full memory barrier.
-	 */
-	snap = atomic_long_inc_return(&rsp->expedited_start);
-	firstsnap = snap;
+	/* Take a snapshot of the sequence number.  */
+	s = rcu_exp_gp_seq_snap(rsp);
+
	if (!try_get_online_cpus()) {
		/* CPU hotplug operation in flight, fall back to normal GP. */
		wait_rcu_gp(call_rcu_sched);
@@ -3341,100 +3547,38 @@ void synchronize_sched_expedited(void)
	}
	WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));

-	/* Offline CPUs, idle CPUs, and any CPU we run on are quiescent. */
-	cma = zalloc_cpumask_var(&cm, GFP_KERNEL);
-	if (cma) {
-		cpumask_copy(cm, cpu_online_mask);
-		cpumask_clear_cpu(raw_smp_processor_id(), cm);
-		for_each_cpu(cpu, cm) {
-			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
-
-			if (!(atomic_add_return(0, &rdtp->dynticks) & 0x1))
-				cpumask_clear_cpu(cpu, cm);
-		}
-		if (cpumask_weight(cm) == 0)
-			goto all_cpus_idle;
-	}
-
-	/*
-	 * Each pass through the following loop attempts to force a
-	 * context switch on each CPU.
-	 */
-	while (try_stop_cpus(cma ? cm : cpu_online_mask,
-			     synchronize_sched_expedited_cpu_stop,
-			     NULL) == -EAGAIN) {
-		put_online_cpus();
-		atomic_long_inc(&rsp->expedited_tryfail);
-
-		/* Check to see if someone else did our work for us. */
-		s = atomic_long_read(&rsp->expedited_done);
-		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
-			/* ensure test happens before caller kfree */
-			smp_mb__before_atomic(); /* ^^^ */
-			atomic_long_inc(&rsp->expedited_workdone1);
-			free_cpumask_var(cm);
-			return;
-		}
-
-		/* No joy, try again later.  Or just synchronize_sched(). */
-		if (trycount++ < 10) {
-			udelay(trycount * num_online_cpus());
-		} else {
-			wait_rcu_gp(call_rcu_sched);
-			atomic_long_inc(&rsp->expedited_normal);
-			free_cpumask_var(cm);
-			return;
-		}
-
-		/* Recheck to see if someone else did our work for us. */
-		s = atomic_long_read(&rsp->expedited_done);
-		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
-			/* ensure test happens before caller kfree */
-			smp_mb__before_atomic(); /* ^^^ */
-			atomic_long_inc(&rsp->expedited_workdone2);
-			free_cpumask_var(cm);
-			return;
-		}
-
-		/*
-		 * Refetching sync_sched_expedited_started allows later
-		 * callers to piggyback on our grace period.  We retry
-		 * after they started, so our grace period works for them,
-		 * and they started after our first try, so their grace
-		 * period works for us.
-		 */
-		if (!try_get_online_cpus()) {
-			/* CPU hotplug operation in flight, use normal GP. */
-			wait_rcu_gp(call_rcu_sched);
-			atomic_long_inc(&rsp->expedited_normal);
-			free_cpumask_var(cm);
-			return;
-		}
-		snap = atomic_long_read(&rsp->expedited_start);
-		smp_mb(); /* ensure read is before try_stop_cpus(). */
-	}
-	atomic_long_inc(&rsp->expedited_stoppedcpus);
-
-all_cpus_idle:
-	free_cpumask_var(cm);
-
-	/*
-	 * Everyone up to our most recent fetch is covered by our grace
-	 * period.  Update the counter, but only if our work is still
-	 * relevant -- which it won't be if someone who started later
-	 * than we did already did their update.
-	 */
-	do {
-		atomic_long_inc(&rsp->expedited_done_tries);
-		s = atomic_long_read(&rsp->expedited_done);
-		if (ULONG_CMP_GE((ulong)s, (ulong)snap)) {
-			/* ensure test happens before caller kfree */
-			smp_mb__before_atomic(); /* ^^^ */
-			atomic_long_inc(&rsp->expedited_done_lost);
-			break;
-		}
-	} while (atomic_long_cmpxchg(&rsp->expedited_done, s, snap) != s);
-	atomic_long_inc(&rsp->expedited_done_exit);
+	rnp = exp_funnel_lock(rsp, s);
+	if (rnp == NULL) {
+		put_online_cpus();
+		return;  /* Someone else did our work for us. */
+	}
+
+	rcu_exp_gp_seq_start(rsp);
+
+	/* Stop each CPU that is online, non-idle, and not us. */
+	init_waitqueue_head(&rsp->expedited_wq);
+	atomic_set(&rsp->expedited_need_qs, 1); /* Extra count avoids race. */
+	for_each_online_cpu(cpu) {
+		struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
+		struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+		rdp->exp_done = false;
+
+		/* Skip our CPU and any idle CPUs. */
+		if (raw_smp_processor_id() == cpu ||
+		    !(atomic_add_return(0, &rdtp->dynticks) & 0x1))
+			continue;
+		atomic_inc(&rsp->expedited_need_qs);
+		stop_one_cpu_nowait(cpu, synchronize_sched_expedited_cpu_stop,
+				    rdp, &rdp->exp_stop_work);
+	}
+
+	/* Remove extra count and, if necessary, wait for CPUs to stop. */
+	if (!atomic_dec_and_test(&rsp->expedited_need_qs))
+		synchronize_sched_expedited_wait(rsp);
|
||||||
|
|
||||||
|
rcu_exp_gp_seq_end(rsp);
|
||||||
|
mutex_unlock(&rnp->exp_funnel_mutex);
|
||||||
|
|
||||||
put_online_cpus();
|
put_online_cpus();
|
||||||
}
|
}
|
||||||
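The new loop above relies on an extra-count trick: ->expedited_need_qs starts at 1, so the initiating CPU's final atomic_dec_and_test() cannot observe zero until every stop-work callback it queued has also decremented the counter. Below is a minimal user-space sketch of that pattern using C11 atomics and pthreads instead of the kernel's atomic_t and stop-machine APIs; all names in it are illustrative, not taken from the patch.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NWORKERS 4

    static atomic_int need_qs;

    static void *worker(void *arg)
    {
            /* Each worker reports its "quiescent state" exactly once. */
            (void)arg;
            atomic_fetch_sub(&need_qs, 1);
            return NULL;
    }

    int main(void)
    {
            pthread_t tid[NWORKERS];
            int i;

            /* Extra count: the initiator holds one reference of its own. */
            atomic_store(&need_qs, 1);

            for (i = 0; i < NWORKERS; i++) {
                    atomic_fetch_add(&need_qs, 1); /* one per worker waited on */
                    pthread_create(&tid[i], NULL, worker, NULL);
            }

            /* Drop our own count; only then can the total reach zero. */
            if (atomic_fetch_sub(&need_qs, 1) - 1 != 0) {
                    /* Workers still outstanding; the kernel sleeps on a
                     * waitqueue here, this sketch just spins. */
                    while (atomic_load(&need_qs) != 0)
                            ;
            }

            for (i = 0; i < NWORKERS; i++)
                    pthread_join(tid[i], NULL);
            printf("all workers checked in\n");
            return 0;
    }

Compile with -pthread; the point is only the counting discipline, not the waiting mechanism.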
@@ -3571,10 +3715,10 @@ static void rcu_barrier_callback(struct rcu_head *rhp)
     struct rcu_state *rsp = rdp->rsp;
 
     if (atomic_dec_and_test(&rsp->barrier_cpu_count)) {
-        _rcu_barrier_trace(rsp, "LastCB", -1, rsp->n_barrier_done);
+        _rcu_barrier_trace(rsp, "LastCB", -1, rsp->barrier_sequence);
         complete(&rsp->barrier_completion);
     } else {
-        _rcu_barrier_trace(rsp, "CB", -1, rsp->n_barrier_done);
+        _rcu_barrier_trace(rsp, "CB", -1, rsp->barrier_sequence);
     }
 }
@@ -3586,7 +3730,7 @@ static void rcu_barrier_func(void *type)
     struct rcu_state *rsp = type;
     struct rcu_data *rdp = raw_cpu_ptr(rsp->rda);
 
-    _rcu_barrier_trace(rsp, "IRQ", -1, rsp->n_barrier_done);
+    _rcu_barrier_trace(rsp, "IRQ", -1, rsp->barrier_sequence);
     atomic_inc(&rsp->barrier_cpu_count);
     rsp->call(&rdp->barrier_head, rcu_barrier_callback);
 }
@@ -3599,55 +3743,24 @@ static void _rcu_barrier(struct rcu_state *rsp)
 {
     int cpu;
     struct rcu_data *rdp;
-    unsigned long snap = READ_ONCE(rsp->n_barrier_done);
-    unsigned long snap_done;
+    unsigned long s = rcu_seq_snap(&rsp->barrier_sequence);
 
-    _rcu_barrier_trace(rsp, "Begin", -1, snap);
+    _rcu_barrier_trace(rsp, "Begin", -1, s);
 
     /* Take mutex to serialize concurrent rcu_barrier() requests. */
     mutex_lock(&rsp->barrier_mutex);
 
-    /*
-     * Ensure that all prior references, including to ->n_barrier_done,
-     * are ordered before the _rcu_barrier() machinery.
-     */
-    smp_mb();  /* See above block comment. */
-
-    /*
-     * Recheck ->n_barrier_done to see if others did our work for us.
-     * This means checking ->n_barrier_done for an even-to-odd-to-even
-     * transition. The "if" expression below therefore rounds the old
-     * value up to the next even number and adds two before comparing.
-     */
-    snap_done = rsp->n_barrier_done;
-    _rcu_barrier_trace(rsp, "Check", -1, snap_done);
-
-    /*
-     * If the value in snap is odd, we needed to wait for the current
-     * rcu_barrier() to complete, then wait for the next one, in other
-     * words, we need the value of snap_done to be three larger than
-     * the value of snap. On the other hand, if the value in snap is
-     * even, we only had to wait for the next rcu_barrier() to complete,
-     * in other words, we need the value of snap_done to be only two
-     * greater than the value of snap. The "(snap + 3) & ~0x1" computes
-     * this for us (thank you, Linus!).
-     */
-    if (ULONG_CMP_GE(snap_done, (snap + 3) & ~0x1)) {
-        _rcu_barrier_trace(rsp, "EarlyExit", -1, snap_done);
+    /* Did someone else do our work for us? */
+    if (rcu_seq_done(&rsp->barrier_sequence, s)) {
+        _rcu_barrier_trace(rsp, "EarlyExit", -1, rsp->barrier_sequence);
         smp_mb(); /* caller's subsequent code after above check. */
         mutex_unlock(&rsp->barrier_mutex);
         return;
     }
 
-    /*
-     * Increment ->n_barrier_done to avoid duplicate work. Use
-     * WRITE_ONCE() to prevent the compiler from speculating
-     * the increment to precede the early-exit check.
-     */
-    WRITE_ONCE(rsp->n_barrier_done, rsp->n_barrier_done + 1);
-    WARN_ON_ONCE((rsp->n_barrier_done & 0x1) != 1);
-    _rcu_barrier_trace(rsp, "Inc1", -1, rsp->n_barrier_done);
-    smp_mb(); /* Order ->n_barrier_done increment with below mechanism. */
+    /* Mark the start of the barrier operation. */
+    rcu_seq_start(&rsp->barrier_sequence);
+    _rcu_barrier_trace(rsp, "Inc1", -1, rsp->barrier_sequence);
 
     /*
      * Initialize the count to one rather than to zero in order to
@@ -3671,10 +3784,10 @@ static void _rcu_barrier(struct rcu_state *rsp)
         if (rcu_is_nocb_cpu(cpu)) {
             if (!rcu_nocb_cpu_needs_barrier(rsp, cpu)) {
                 _rcu_barrier_trace(rsp, "OfflineNoCB", cpu,
-                           rsp->n_barrier_done);
+                           rsp->barrier_sequence);
             } else {
                 _rcu_barrier_trace(rsp, "OnlineNoCB", cpu,
-                           rsp->n_barrier_done);
+                           rsp->barrier_sequence);
                 smp_mb__before_atomic();
                 atomic_inc(&rsp->barrier_cpu_count);
                 __call_rcu(&rdp->barrier_head,
@@ -3682,11 +3795,11 @@ static void _rcu_barrier(struct rcu_state *rsp)
             }
         } else if (READ_ONCE(rdp->qlen)) {
             _rcu_barrier_trace(rsp, "OnlineQ", cpu,
-                       rsp->n_barrier_done);
+                       rsp->barrier_sequence);
             smp_call_function_single(cpu, rcu_barrier_func, rsp, 1);
         } else {
             _rcu_barrier_trace(rsp, "OnlineNQ", cpu,
-                       rsp->n_barrier_done);
+                       rsp->barrier_sequence);
         }
     }
     put_online_cpus();
@@ -3698,16 +3811,13 @@ static void _rcu_barrier(struct rcu_state *rsp)
     if (atomic_dec_and_test(&rsp->barrier_cpu_count))
         complete(&rsp->barrier_completion);
 
-    /* Increment ->n_barrier_done to prevent duplicate work. */
-    smp_mb(); /* Keep increment after above mechanism. */
-    WRITE_ONCE(rsp->n_barrier_done, rsp->n_barrier_done + 1);
-    WARN_ON_ONCE((rsp->n_barrier_done & 0x1) != 0);
-    _rcu_barrier_trace(rsp, "Inc2", -1, rsp->n_barrier_done);
-    smp_mb(); /* Keep increment before caller's subsequent code. */
-
     /* Wait for all rcu_barrier_callback() callbacks to be invoked. */
     wait_for_completion(&rsp->barrier_completion);
 
+    /* Mark the end of the barrier operation. */
+    _rcu_barrier_trace(rsp, "Inc2", -1, rsp->barrier_sequence);
+    rcu_seq_end(&rsp->barrier_sequence);
+
     /* Other rcu_barrier() invocations can now safely proceed. */
     mutex_unlock(&rsp->barrier_mutex);
 }
@@ -3770,6 +3880,7 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
     WARN_ON_ONCE(atomic_read(&rdp->dynticks->dynticks) != 1);
     rdp->cpu = cpu;
     rdp->rsp = rsp;
+    mutex_init(&rdp->exp_funnel_mutex);
     rcu_boot_init_nocb_percpu_data(rdp);
     raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
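Both the expedited path and the new _rcu_barrier() above lean on the same even/odd sequence-counter idiom: odd means an operation is in flight, even means idle, a snapshot tells a latecomer which counter value guarantees a full operation has elapsed since it arrived. A small user-space sketch of that logic follows, modeled on the rcu_seq_*() helpers referenced above; the in-kernel versions additionally order memory accesses with smp_mb().

    #include <stdbool.h>
    #include <stdio.h>

    static void seq_start(unsigned long *sp)
    {
            (*sp)++;                /* now odd: operation in progress */
    }

    static void seq_end(unsigned long *sp)
    {
            (*sp)++;                /* now even: operation complete */
    }

    /* Counter value at which a full operation begun after this call
     * is guaranteed to have completed. */
    static unsigned long seq_snap(const unsigned long *sp)
    {
            return (*sp + 3) & ~0x1UL;
    }

    /* Has a full operation completed since the snapshot was taken? */
    static bool seq_done(const unsigned long *sp, unsigned long snap)
    {
            return (long)(*sp - snap) >= 0;
    }

    int main(void)
    {
            unsigned long seq = 0;
            unsigned long snap = seq_snap(&seq);    /* caller wants one full pass */

            printf("done before any pass? %d\n", seq_done(&seq, snap)); /* 0 */
            seq_start(&seq);
            printf("done while in flight? %d\n", seq_done(&seq, snap)); /* 0 */
            seq_end(&seq);
            printf("done after one pass?  %d\n", seq_done(&seq, snap)); /* 1 */
            return 0;
    }

The "+3, round down to even" in seq_snap() is what lets a caller that arrives in the middle of someone else's pass wait for one additional full pass rather than piggybacking on a partially observed one.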
@@ -3961,22 +4072,22 @@ void rcu_scheduler_starting(void)
  * Compute the per-level fanout, either using the exact fanout specified
  * or balancing the tree, depending on the rcu_fanout_exact boot parameter.
  */
-static void __init rcu_init_levelspread(struct rcu_state *rsp)
+static void __init rcu_init_levelspread(int *levelspread, const int *levelcnt)
 {
     int i;
 
     if (rcu_fanout_exact) {
-        rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf;
+        levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf;
         for (i = rcu_num_lvls - 2; i >= 0; i--)
-            rsp->levelspread[i] = RCU_FANOUT;
+            levelspread[i] = RCU_FANOUT;
     } else {
         int ccur;
         int cprv;
 
         cprv = nr_cpu_ids;
         for (i = rcu_num_lvls - 1; i >= 0; i--) {
-            ccur = rsp->levelcnt[i];
-            rsp->levelspread[i] = (cprv + ccur - 1) / ccur;
+            ccur = levelcnt[i];
+            levelspread[i] = (cprv + ccur - 1) / ccur;
             cprv = ccur;
         }
     }
@@ -3988,23 +4099,20 @@ static void __init rcu_init_levelspread(struct rcu_state *rsp)
 static void __init rcu_init_one(struct rcu_state *rsp,
         struct rcu_data __percpu *rda)
 {
-    static const char * const buf[] = {
-                      "rcu_node_0",
-                      "rcu_node_1",
-                      "rcu_node_2",
-                      "rcu_node_3" }; /* Match MAX_RCU_LVLS */
-    static const char * const fqs[] = {
-                      "rcu_node_fqs_0",
-                      "rcu_node_fqs_1",
-                      "rcu_node_fqs_2",
-                      "rcu_node_fqs_3" }; /* Match MAX_RCU_LVLS */
+    static const char * const buf[] = RCU_NODE_NAME_INIT;
+    static const char * const fqs[] = RCU_FQS_NAME_INIT;
+    static const char * const exp[] = RCU_EXP_NAME_INIT;
+    static const char * const exp_sched[] = RCU_EXP_SCHED_NAME_INIT;
     static u8 fl_mask = 0x1;
 
+    int levelcnt[RCU_NUM_LVLS];     /* # nodes in each level. */
+    int levelspread[RCU_NUM_LVLS];  /* kids/node in each level. */
     int cpustride = 1;
     int i;
     int j;
     struct rcu_node *rnp;
 
-    BUILD_BUG_ON(MAX_RCU_LVLS > ARRAY_SIZE(buf));  /* Fix buf[] init! */
+    BUILD_BUG_ON(RCU_NUM_LVLS > ARRAY_SIZE(buf));  /* Fix buf[] init! */
 
     /* Silence gcc 4.8 false positive about array index out of range. */
     if (rcu_num_lvls <= 0 || rcu_num_lvls > RCU_NUM_LVLS)
@@ -4013,19 +4121,19 @@ static void __init rcu_init_one(struct rcu_state *rsp,
     /* Initialize the level-tracking arrays. */
 
     for (i = 0; i < rcu_num_lvls; i++)
-        rsp->levelcnt[i] = num_rcu_lvl[i];
+        levelcnt[i] = num_rcu_lvl[i];
     for (i = 1; i < rcu_num_lvls; i++)
-        rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1];
-    rcu_init_levelspread(rsp);
+        rsp->level[i] = rsp->level[i - 1] + levelcnt[i - 1];
+    rcu_init_levelspread(levelspread, levelcnt);
     rsp->flavor_mask = fl_mask;
     fl_mask <<= 1;
 
     /* Initialize the elements themselves, starting from the leaves. */
 
     for (i = rcu_num_lvls - 1; i >= 0; i--) {
-        cpustride *= rsp->levelspread[i];
+        cpustride *= levelspread[i];
         rnp = rsp->level[i];
-        for (j = 0; j < rsp->levelcnt[i]; j++, rnp++) {
+        for (j = 0; j < levelcnt[i]; j++, rnp++) {
             raw_spin_lock_init(&rnp->lock);
             lockdep_set_class_and_name(&rnp->lock,
                            &rcu_node_class[i], buf[i]);
@@ -4045,14 +4153,23 @@ static void __init rcu_init_one(struct rcu_state *rsp,
                 rnp->grpmask = 0;
                 rnp->parent = NULL;
             } else {
-                rnp->grpnum = j % rsp->levelspread[i - 1];
+                rnp->grpnum = j % levelspread[i - 1];
                 rnp->grpmask = 1UL << rnp->grpnum;
                 rnp->parent = rsp->level[i - 1] +
-                          j / rsp->levelspread[i - 1];
+                          j / levelspread[i - 1];
             }
             rnp->level = i;
             INIT_LIST_HEAD(&rnp->blkd_tasks);
             rcu_init_one_nocb(rnp);
+            mutex_init(&rnp->exp_funnel_mutex);
+            if (rsp == &rcu_sched_state)
+                lockdep_set_class_and_name(
+                    &rnp->exp_funnel_mutex,
+                    &rcu_exp_sched_class[i], exp_sched[i]);
+            else
+                lockdep_set_class_and_name(
+                    &rnp->exp_funnel_mutex,
+                    &rcu_exp_class[i], exp[i]);
         }
     }
 
@@ -4076,9 +4193,7 @@ static void __init rcu_init_geometry(void)
 {
     ulong d;
     int i;
-    int j;
-    int n = nr_cpu_ids;
-    int rcu_capacity[MAX_RCU_LVLS + 1];
+    int rcu_capacity[RCU_NUM_LVLS];
 
     /*
      * Initialize any unspecified boot parameters.
@@ -4100,48 +4215,50 @@ static void __init rcu_init_geometry(void)
     pr_info("RCU: Adjusting geometry for rcu_fanout_leaf=%d, nr_cpu_ids=%d\n",
         rcu_fanout_leaf, nr_cpu_ids);
 
-    /*
-     * Compute number of nodes that can be handled an rcu_node tree
-     * with the given number of levels. Setting rcu_capacity[0] makes
-     * some of the arithmetic easier.
-     */
-    rcu_capacity[0] = 1;
-    rcu_capacity[1] = rcu_fanout_leaf;
-    for (i = 2; i <= MAX_RCU_LVLS; i++)
-        rcu_capacity[i] = rcu_capacity[i - 1] * RCU_FANOUT;
-
     /*
      * The boot-time rcu_fanout_leaf parameter is only permitted
      * to increase the leaf-level fanout, not decrease it. Of course,
      * the leaf-level fanout cannot exceed the number of bits in
-     * the rcu_node masks. Finally, the tree must be able to accommodate
-     * the configured number of CPUs. Complain and fall back to the
-     * compile-time values if these limits are exceeded.
+     * the rcu_node masks. Complain and fall back to the compile-
+     * time values if these limits are exceeded.
      */
     if (rcu_fanout_leaf < RCU_FANOUT_LEAF ||
-        rcu_fanout_leaf > sizeof(unsigned long) * 8 ||
-        n > rcu_capacity[MAX_RCU_LVLS]) {
+        rcu_fanout_leaf > sizeof(unsigned long) * 8) {
+        rcu_fanout_leaf = RCU_FANOUT_LEAF;
         WARN_ON(1);
         return;
     }
 
+    /*
+     * Compute number of nodes that can be handled an rcu_node tree
+     * with the given number of levels.
+     */
+    rcu_capacity[0] = rcu_fanout_leaf;
+    for (i = 1; i < RCU_NUM_LVLS; i++)
+        rcu_capacity[i] = rcu_capacity[i - 1] * RCU_FANOUT;
+
+    /*
+     * The tree must be able to accommodate the configured number of CPUs.
+     * If this limit is exceeded than we have a serious problem elsewhere.
+     */
+    if (nr_cpu_ids > rcu_capacity[RCU_NUM_LVLS - 1])
+        panic("rcu_init_geometry: rcu_capacity[] is too small");
+
+    /* Calculate the number of levels in the tree. */
+    for (i = 0; nr_cpu_ids > rcu_capacity[i]; i++) {
+    }
+    rcu_num_lvls = i + 1;
+
     /* Calculate the number of rcu_nodes at each level of the tree. */
-    for (i = 1; i <= MAX_RCU_LVLS; i++)
-        if (n <= rcu_capacity[i]) {
-            for (j = 0; j <= i; j++)
-                num_rcu_lvl[j] =
-                    DIV_ROUND_UP(n, rcu_capacity[i - j]);
-            rcu_num_lvls = i;
-            for (j = i + 1; j <= MAX_RCU_LVLS; j++)
-                num_rcu_lvl[j] = 0;
-            break;
-        }
+    for (i = 0; i < rcu_num_lvls; i++) {
+        int cap = rcu_capacity[(rcu_num_lvls - 1) - i];
+        num_rcu_lvl[i] = DIV_ROUND_UP(nr_cpu_ids, cap);
+    }
 
     /* Calculate the total number of rcu_node structures. */
     rcu_num_nodes = 0;
-    for (i = 0; i <= MAX_RCU_LVLS; i++)
+    for (i = 0; i < rcu_num_lvls; i++)
         rcu_num_nodes += num_rcu_lvl[i];
-    rcu_num_nodes -= n;
 }
 
 /*
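The simplified geometry code above is easier to follow with concrete numbers. The stand-alone rehearsal below reruns the same arithmetic for an illustrative configuration (RCU_FANOUT = 64, RCU_FANOUT_LEAF = 16, four possible levels, 96 CPUs); the constants are not taken from any particular .config.

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    #define RCU_FANOUT      64
    #define RCU_FANOUT_LEAF 16
    #define RCU_NUM_LVLS    4

    int main(void)
    {
            int rcu_capacity[RCU_NUM_LVLS];
            int num_rcu_lvl[RCU_NUM_LVLS] = { 0 };
            int nr_cpu_ids = 96;
            int rcu_num_lvls, rcu_num_nodes = 0;
            int i;

            /* CPUs coverable by a tree with i+1 levels. */
            rcu_capacity[0] = RCU_FANOUT_LEAF;
            for (i = 1; i < RCU_NUM_LVLS; i++)
                    rcu_capacity[i] = rcu_capacity[i - 1] * RCU_FANOUT;

            /* Number of levels actually needed. */
            for (i = 0; nr_cpu_ids > rcu_capacity[i]; i++)
                    ;
            rcu_num_lvls = i + 1;

            /* rcu_node structures at each level, root first. */
            for (i = 0; i < rcu_num_lvls; i++) {
                    int cap = rcu_capacity[(rcu_num_lvls - 1) - i];

                    num_rcu_lvl[i] = DIV_ROUND_UP(nr_cpu_ids, cap);
                    rcu_num_nodes += num_rcu_lvl[i];
                    printf("level %d: %d node(s)\n", i, num_rcu_lvl[i]);
            }
            printf("%d CPUs -> %d levels, %d rcu_node structures\n",
                   nr_cpu_ids, rcu_num_lvls, rcu_num_nodes);
            return 0;
    }

For these inputs it prints one root node and six leaf nodes, i.e. 2 levels and 7 rcu_node structures; note that the new code no longer subtracts nr_cpu_ids at the end because the leaf level now counts rcu_node structures rather than CPUs.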
@@ -27,6 +27,7 @@
 #include <linux/threads.h>
 #include <linux/cpumask.h>
 #include <linux/seqlock.h>
+#include <linux/stop_machine.h>
 
 /*
  * Define shape of hierarchy based on NR_CPUS, CONFIG_RCU_FANOUT, and
@@ -36,8 +37,6 @@
  * Of course, your mileage may vary.
  */
 
-#define MAX_RCU_LVLS 4
-
 #ifdef CONFIG_RCU_FANOUT
 #define RCU_FANOUT CONFIG_RCU_FANOUT
 #else /* #ifdef CONFIG_RCU_FANOUT */
@@ -66,38 +65,53 @@
 #if NR_CPUS <= RCU_FANOUT_1
 # define RCU_NUM_LVLS 1
 # define NUM_RCU_LVL_0 1
-# define NUM_RCU_LVL_1 (NR_CPUS)
-# define NUM_RCU_LVL_2 0
-# define NUM_RCU_LVL_3 0
-# define NUM_RCU_LVL_4 0
+# define NUM_RCU_NODES NUM_RCU_LVL_0
+# define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0 }
+# define RCU_NODE_NAME_INIT { "rcu_node_0" }
+# define RCU_FQS_NAME_INIT { "rcu_node_fqs_0" }
+# define RCU_EXP_NAME_INIT { "rcu_node_exp_0" }
+# define RCU_EXP_SCHED_NAME_INIT \
+    { "rcu_node_exp_sched_0" }
 #elif NR_CPUS <= RCU_FANOUT_2
 # define RCU_NUM_LVLS 2
 # define NUM_RCU_LVL_0 1
 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
-# define NUM_RCU_LVL_2 (NR_CPUS)
-# define NUM_RCU_LVL_3 0
-# define NUM_RCU_LVL_4 0
+# define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1)
+# define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1 }
+# define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1" }
+# define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1" }
+# define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1" }
+# define RCU_EXP_SCHED_NAME_INIT \
+    { "rcu_node_exp_sched_0", "rcu_node_exp_sched_1" }
 #elif NR_CPUS <= RCU_FANOUT_3
 # define RCU_NUM_LVLS 3
 # define NUM_RCU_LVL_0 1
 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
 # define NUM_RCU_LVL_2 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
-# define NUM_RCU_LVL_3 (NR_CPUS)
-# define NUM_RCU_LVL_4 0
+# define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2)
+# define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2 }
+# define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1", "rcu_node_2" }
+# define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2" }
+# define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2" }
+# define RCU_EXP_SCHED_NAME_INIT \
+    { "rcu_node_exp_sched_0", "rcu_node_exp_sched_1", "rcu_node_exp_sched_2" }
 #elif NR_CPUS <= RCU_FANOUT_4
 # define RCU_NUM_LVLS 4
 # define NUM_RCU_LVL_0 1
 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_3)
 # define NUM_RCU_LVL_2 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
 # define NUM_RCU_LVL_3 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
-# define NUM_RCU_LVL_4 (NR_CPUS)
+# define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3)
+# define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2, NUM_RCU_LVL_3 }
+# define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1", "rcu_node_2", "rcu_node_3" }
+# define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2", "rcu_node_fqs_3" }
+# define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2", "rcu_node_exp_3" }
+# define RCU_EXP_SCHED_NAME_INIT \
+    { "rcu_node_exp_sched_0", "rcu_node_exp_sched_1", "rcu_node_exp_sched_2", "rcu_node_exp_sched_3" }
 #else
 # error "CONFIG_RCU_FANOUT insufficient for NR_CPUS"
 #endif /* #if (NR_CPUS) <= RCU_FANOUT_1 */
 
-#define RCU_SUM (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3 + NUM_RCU_LVL_4)
-#define NUM_RCU_NODES (RCU_SUM - NR_CPUS)
-
 extern int rcu_num_lvls;
 extern int rcu_num_nodes;
 
@@ -236,6 +250,8 @@ struct rcu_node {
     int need_future_gp[2];
                 /* Counts of upcoming no-CB GP requests. */
     raw_spinlock_t fqslock ____cacheline_internodealigned_in_smp;
+
+    struct mutex exp_funnel_mutex ____cacheline_internodealigned_in_smp;
 } ____cacheline_internodealigned_in_smp;
 
 /*
@@ -287,12 +303,13 @@ struct rcu_data {
     bool gpwrap;            /* Possible gpnum/completed wrap. */
     struct rcu_node *mynode;    /* This CPU's leaf of hierarchy */
     unsigned long grpmask;      /* Mask to apply to leaf qsmask. */
-#ifdef CONFIG_RCU_CPU_STALL_INFO
     unsigned long ticks_this_gp;    /* The number of scheduling-clock */
                     /* ticks this CPU has handled */
                    /* during and after the last grace */
                    /* period it is aware of. */
-#endif /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
+    struct cpu_stop_work exp_stop_work;
+                    /* Expedited grace-period control */
+                    /* for CPU stopping. */
 
     /* 2) batch handling */
     /*
@@ -355,11 +372,13 @@ struct rcu_data {
     unsigned long n_rp_nocb_defer_wakeup;
     unsigned long n_rp_need_nothing;
 
-    /* 6) _rcu_barrier() and OOM callbacks. */
+    /* 6) _rcu_barrier(), OOM callbacks, and expediting. */
     struct rcu_head barrier_head;
 #ifdef CONFIG_RCU_FAST_NO_HZ
     struct rcu_head oom_head;
 #endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
+    struct mutex exp_funnel_mutex;
+    bool exp_done;          /* Expedited QS for this CPU? */
 
     /* 7) Callback offloading. */
 #ifdef CONFIG_RCU_NOCB_CPU
@@ -387,9 +406,7 @@ struct rcu_data {
 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */
 
     /* 8) RCU CPU stall data. */
-#ifdef CONFIG_RCU_CPU_STALL_INFO
     unsigned int softirq_snap;  /* Snapshot of softirq activity. */
-#endif /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
 
     int cpu;
     struct rcu_state *rsp;
@@ -442,9 +459,9 @@ do { \
  */
 struct rcu_state {
     struct rcu_node node[NUM_RCU_NODES];    /* Hierarchy. */
-    struct rcu_node *level[RCU_NUM_LVLS];   /* Hierarchy levels. */
-    u32 levelcnt[MAX_RCU_LVLS + 1];     /* # nodes in each level. */
-    u8 levelspread[RCU_NUM_LVLS];       /* kids/node in each level. */
+    struct rcu_node *level[RCU_NUM_LVLS + 1];
+                        /* Hierarchy levels (+1 to */
+                        /* shut bogus gcc warning) */
     u8 flavor_mask;             /* bit in flavor mask. */
     struct rcu_data __percpu *rda;      /* pointer of percu rcu_data. */
     void (*call)(struct rcu_head *head, /* call_rcu() flavor. */
@@ -479,21 +496,18 @@ struct rcu_state {
     struct mutex barrier_mutex;     /* Guards barrier fields. */
     atomic_t barrier_cpu_count;     /* # CPUs waiting on. */
     struct completion barrier_completion;  /* Wake at barrier end. */
-    unsigned long n_barrier_done;       /* ++ at start and end of */
+    unsigned long barrier_sequence;     /* ++ at start and end of */
                         /* _rcu_barrier(). */
     /* End of fields guarded by barrier_mutex. */
 
-    atomic_long_t expedited_start;      /* Starting ticket. */
-    atomic_long_t expedited_done;       /* Done ticket. */
-    atomic_long_t expedited_wrap;       /* # near-wrap incidents. */
-    atomic_long_t expedited_tryfail;    /* # acquisition failures. */
+    unsigned long expedited_sequence;   /* Take a ticket. */
+    atomic_long_t expedited_workdone0;  /* # done by others #0. */
     atomic_long_t expedited_workdone1;  /* # done by others #1. */
     atomic_long_t expedited_workdone2;  /* # done by others #2. */
+    atomic_long_t expedited_workdone3;  /* # done by others #3. */
     atomic_long_t expedited_normal;     /* # fallbacks to normal. */
-    atomic_long_t expedited_stoppedcpus;    /* # successful stop_cpus. */
-    atomic_long_t expedited_done_tries; /* # tries to update _done. */
-    atomic_long_t expedited_done_lost;  /* # times beaten to _done. */
-    atomic_long_t expedited_done_exit;  /* # times exited _done loop. */
+    atomic_t expedited_need_qs;     /* # CPUs left to check in. */
+    wait_queue_head_t expedited_wq;     /* Wait for check-ins. */
 
     unsigned long jiffies_force_qs;     /* Time at which to invoke */
                         /* force_quiescent_state(). */
@@ -527,7 +541,11 @@ struct rcu_state {
 /* Values for rcu_state structure's gp_flags field. */
 #define RCU_GP_WAIT_INIT 0  /* Initial state. */
 #define RCU_GP_WAIT_GPS  1  /* Wait for grace-period start. */
-#define RCU_GP_WAIT_FQS  2  /* Wait for force-quiescent-state time. */
+#define RCU_GP_DONE_GPS  2  /* Wait done for grace-period start. */
+#define RCU_GP_WAIT_FQS  3  /* Wait for force-quiescent-state time. */
+#define RCU_GP_DOING_FQS 4  /* Wait done for force-quiescent-state time. */
+#define RCU_GP_CLEANUP   5  /* Grace-period cleanup started. */
+#define RCU_GP_CLEANED   6  /* Grace-period cleanup complete. */
 
 extern struct list_head rcu_struct_flavors;
 
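The point of the new per-geometry initializer macros above is that every array sized by the tree depth now gets exactly RCU_NUM_LVLS entries, with no trailing dummy slots as in the old MAX_RCU_LVLS-sized tables. A small illustration of how a consumer like rcu_init_one() sees them for a two-level tree; the expansions are copied by hand and the { 1, 6 } level counts are an assumed example (96 CPUs, leaf fanout 16), not values from any real build.

    #include <stdio.h>

    #define RCU_NUM_LVLS        2
    #define NUM_RCU_LVL_INIT    { 1, 6 }
    #define RCU_NODE_NAME_INIT  { "rcu_node_0", "rcu_node_1" }
    #define RCU_FQS_NAME_INIT   { "rcu_node_fqs_0", "rcu_node_fqs_1" }

    int main(void)
    {
            static const char * const buf[] = RCU_NODE_NAME_INIT;
            static const char * const fqs[] = RCU_FQS_NAME_INIT;
            static const int levelcnt[] = NUM_RCU_LVL_INIT;
            int i;

            for (i = 0; i < RCU_NUM_LVLS; i++)
                    printf("level %d: %d node(s), lock class \"%s\", fqs class \"%s\"\n",
                           i, levelcnt[i], buf[i], fqs[i]);
            return 0;
    }

Because NUM_RCU_NODES is now the plain sum of the per-level counts, the old RCU_SUM / "subtract NR_CPUS" dance disappears along with NUM_RCU_LVL_4.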
@@ -82,10 +82,8 @@ static void __init rcu_bootup_announce_oddness(void)
         pr_info("\tRCU lockdep checking is enabled.\n");
     if (IS_ENABLED(CONFIG_RCU_TORTURE_TEST_RUNNABLE))
         pr_info("\tRCU torture testing starts during boot.\n");
-    if (IS_ENABLED(CONFIG_RCU_CPU_STALL_INFO))
-        pr_info("\tAdditional per-CPU info printed with stalls.\n");
-    if (NUM_RCU_LVL_4 != 0)
-        pr_info("\tFour-level hierarchy is enabled.\n");
+    if (RCU_NUM_LVLS >= 4)
+        pr_info("\tFour(or more)-level hierarchy is enabled.\n");
     if (RCU_FANOUT_LEAF != 16)
         pr_info("\tBuild-time adjustment of leaf fanout to %d.\n",
             RCU_FANOUT_LEAF);
@@ -418,8 +416,6 @@ static void rcu_print_detail_task_stall(struct rcu_state *rsp)
         rcu_print_detail_task_stall_rnp(rnp);
 }
 
-#ifdef CONFIG_RCU_CPU_STALL_INFO
-
 static void rcu_print_task_stall_begin(struct rcu_node *rnp)
 {
     pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
@@ -431,18 +427,6 @@ static void rcu_print_task_stall_end(void)
     pr_cont("\n");
 }
 
-#else /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
-
-static void rcu_print_task_stall_begin(struct rcu_node *rnp)
-{
-}
-
-static void rcu_print_task_stall_end(void)
-{
-}
-
-#endif /* #else #ifdef CONFIG_RCU_CPU_STALL_INFO */
-
 /*
  * Scan the current list of tasks blocked within RCU read-side critical
  * sections, printing out the tid of each.
@@ -538,10 +522,10 @@ EXPORT_SYMBOL_GPL(call_rcu);
  */
 void synchronize_rcu(void)
 {
-    rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
-               !lock_is_held(&rcu_lock_map) &&
-               !lock_is_held(&rcu_sched_lock_map),
+    RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) ||
+             lock_is_held(&rcu_lock_map) ||
+             lock_is_held(&rcu_sched_lock_map),
                "Illegal synchronize_rcu() in RCU read-side critical section");
     if (!rcu_scheduler_active)
         return;
     if (rcu_gp_is_expedited())
@@ -552,8 +536,6 @@ void synchronize_rcu(void)
 EXPORT_SYMBOL_GPL(synchronize_rcu);
 
 static DECLARE_WAIT_QUEUE_HEAD(sync_rcu_preempt_exp_wq);
-static unsigned long sync_rcu_preempt_exp_count;
-static DEFINE_MUTEX(sync_rcu_preempt_exp_mutex);
 
 /*
  * Return non-zero if there are any tasks in RCU read-side critical
@@ -573,7 +555,7 @@ static int rcu_preempted_readers_exp(struct rcu_node *rnp)
  * for the current expedited grace period. Works only for preemptible
  * RCU -- other RCU implementation use other means.
  *
- * Caller must hold sync_rcu_preempt_exp_mutex.
+ * Caller must hold the root rcu_node's exp_funnel_mutex.
  */
 static int sync_rcu_preempt_exp_done(struct rcu_node *rnp)
 {
@@ -589,7 +571,7 @@ static int sync_rcu_preempt_exp_done(struct rcu_node *rnp)
  * recursively up the tree. (Calm down, calm down, we do the recursion
  * iteratively!)
  *
- * Caller must hold sync_rcu_preempt_exp_mutex.
+ * Caller must hold the root rcu_node's exp_funnel_mutex.
  */
 static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
                    bool wake)
@@ -628,7 +610,7 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
  * set the ->expmask bits on the leaf rcu_node structures to tell phase 2
  * that work is needed here.
  *
- * Caller must hold sync_rcu_preempt_exp_mutex.
+ * Caller must hold the root rcu_node's exp_funnel_mutex.
  */
 static void
 sync_rcu_preempt_exp_init1(struct rcu_state *rsp, struct rcu_node *rnp)
@@ -671,7 +653,7 @@ sync_rcu_preempt_exp_init1(struct rcu_state *rsp, struct rcu_node *rnp)
  * invoke rcu_report_exp_rnp() to clear out the upper-level ->expmask bits,
  * enabling rcu_read_unlock_special() to do the bit-clearing.
  *
- * Caller must hold sync_rcu_preempt_exp_mutex.
+ * Caller must hold the root rcu_node's exp_funnel_mutex.
 */
 static void
 sync_rcu_preempt_exp_init2(struct rcu_state *rsp, struct rcu_node *rnp)
@@ -719,51 +701,17 @@ sync_rcu_preempt_exp_init2(struct rcu_state *rsp, struct rcu_node *rnp)
 void synchronize_rcu_expedited(void)
 {
     struct rcu_node *rnp;
+    struct rcu_node *rnp_unlock;
     struct rcu_state *rsp = rcu_state_p;
-    unsigned long snap;
-    int trycount = 0;
+    unsigned long s;
 
-    smp_mb(); /* Caller's modifications seen first by other CPUs. */
-    snap = READ_ONCE(sync_rcu_preempt_exp_count) + 1;
-    smp_mb(); /* Above access cannot bleed into critical section. */
+    s = rcu_exp_gp_seq_snap(rsp);
 
-    /*
-     * Block CPU-hotplug operations. This means that any CPU-hotplug
-     * operation that finds an rcu_node structure with tasks in the
-     * process of being boosted will know that all tasks blocking
-     * this expedited grace period will already be in the process of
-     * being boosted. This simplifies the process of moving tasks
-     * from leaf to root rcu_node structures.
-     */
-    if (!try_get_online_cpus()) {
-        /* CPU-hotplug operation in flight, fall back to normal GP. */
-        wait_rcu_gp(call_rcu);
-        return;
-    }
+    rnp_unlock = exp_funnel_lock(rsp, s);
+    if (rnp_unlock == NULL)
+        return; /* Someone else did our work for us. */
 
-    /*
-     * Acquire lock, falling back to synchronize_rcu() if too many
-     * lock-acquisition failures. Of course, if someone does the
-     * expedited grace period for us, just leave.
-     */
-    while (!mutex_trylock(&sync_rcu_preempt_exp_mutex)) {
-        if (ULONG_CMP_LT(snap,
-             READ_ONCE(sync_rcu_preempt_exp_count))) {
-            put_online_cpus();
-            goto mb_ret; /* Others did our work for us. */
-        }
-        if (trycount++ < 10) {
-            udelay(trycount * num_online_cpus());
-        } else {
-            put_online_cpus();
-            wait_rcu_gp(call_rcu);
-            return;
-        }
-    }
-    if (ULONG_CMP_LT(snap, READ_ONCE(sync_rcu_preempt_exp_count))) {
-        put_online_cpus();
-        goto unlock_mb_ret; /* Others did our work for us. */
-    }
+    rcu_exp_gp_seq_start(rsp);
 
     /* force all RCU readers onto ->blkd_tasks lists. */
     synchronize_sched_expedited();
@@ -779,20 +727,14 @@ void synchronize_rcu_expedited(void)
     rcu_for_each_leaf_node(rsp, rnp)
         sync_rcu_preempt_exp_init2(rsp, rnp);
 
-    put_online_cpus();
-
     /* Wait for snapshotted ->blkd_tasks lists to drain. */
     rnp = rcu_get_root(rsp);
     wait_event(sync_rcu_preempt_exp_wq,
            sync_rcu_preempt_exp_done(rnp));
 
     /* Clean up and exit. */
-    smp_mb(); /* ensure expedited GP seen before counter increment. */
-    WRITE_ONCE(sync_rcu_preempt_exp_count, sync_rcu_preempt_exp_count + 1);
-unlock_mb_ret:
-    mutex_unlock(&sync_rcu_preempt_exp_mutex);
-mb_ret:
-    smp_mb(); /* ensure subsequent action seen after grace period. */
+    rcu_exp_gp_seq_end(rsp);
+    mutex_unlock(&rnp_unlock->exp_funnel_mutex);
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
@@ -1061,8 +1003,7 @@ static int rcu_boost(struct rcu_node *rnp)
 }
 
 /*
- * Priority-boosting kthread. One per leaf rcu_node and one for the
- * root rcu_node.
+ * Priority-boosting kthread, one per leaf rcu_node.
  */
 static int rcu_boost_kthread(void *arg)
 {
@@ -1680,12 +1621,10 @@ static int rcu_oom_notify(struct notifier_block *self,
      */
     atomic_set(&oom_callback_count, 1);
 
-    get_online_cpus();
     for_each_online_cpu(cpu) {
         smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);
         cond_resched_rcu_qs();
     }
-    put_online_cpus();
 
     /* Unconditionally decrement: no need to wake ourselves up. */
     atomic_dec(&oom_callback_count);
@@ -1706,8 +1645,6 @@ early_initcall(rcu_register_oom_notifier);
 
 #endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */
 
-#ifdef CONFIG_RCU_CPU_STALL_INFO
-
 #ifdef CONFIG_RCU_FAST_NO_HZ
 
 static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
@@ -1796,33 +1733,6 @@ static void increment_cpu_stall_ticks(void)
         raw_cpu_inc(rsp->rda->ticks_this_gp);
 }
 
-#else /* #ifdef CONFIG_RCU_CPU_STALL_INFO */
-
-static void print_cpu_stall_info_begin(void)
-{
-    pr_cont(" {");
-}
-
-static void print_cpu_stall_info(struct rcu_state *rsp, int cpu)
-{
-    pr_cont(" %d", cpu);
-}
-
-static void print_cpu_stall_info_end(void)
-{
-    pr_cont("} ");
-}
-
-static void zero_cpu_stall_ticks(struct rcu_data *rdp)
-{
-}
-
-static void increment_cpu_stall_ticks(void)
-{
-}
-
-#endif /* #else #ifdef CONFIG_RCU_CPU_STALL_INFO */
-
 #ifdef CONFIG_RCU_NOCB_CPU
 
 /*
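Both expedited paths now funnel through exp_funnel_lock(), which is defined elsewhere in this series. The sketch below shows only the general shape of funnel locking, hand-over-hand acquisition of per-node mutexes from a leaf toward the root, rechecking at each step whether another caller's expedited grace period already covers us. It is a simplified model in user-space C, not the kernel's implementation; the node layout, the pthread mutexes, and the seq_done() helper are all assumptions of the sketch.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct node {
            struct node *parent;                    /* NULL at the root */
            pthread_mutex_t exp_funnel_mutex;
    };

    static unsigned long exp_gp_seq;                /* even/odd GP sequence */

    static bool seq_done(unsigned long snap)
    {
            return (long)(exp_gp_seq - snap) >= 0;
    }

    /* Returns the node whose mutex the caller now holds (always the root
     * in this sketch), or NULL if someone else already did our work. */
    struct node *funnel_lock(struct node *leaf, unsigned long snap)
    {
            struct node *rnp = leaf;
            struct node *parent;

            pthread_mutex_lock(&rnp->exp_funnel_mutex);
            for (;;) {
                    if (seq_done(snap)) {
                            pthread_mutex_unlock(&rnp->exp_funnel_mutex);
                            return NULL;    /* piggyback on earlier GP */
                    }
                    parent = rnp->parent;
                    if (parent == NULL)
                            return rnp;     /* root held: we run the GP */
                    /* Hand over hand: take the parent, drop the child. */
                    pthread_mutex_lock(&parent->exp_funnel_mutex);
                    pthread_mutex_unlock(&rnp->exp_funnel_mutex);
                    rnp = parent;
            }
    }

The benefit over the old global mutex plus trylock/udelay loop is that contending callers queue on different rcu_node mutexes, and most of them bail out early once the sequence counter shows their snapshot is already satisfied.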
@@ -81,9 +81,9 @@ static void r_stop(struct seq_file *m, void *v)
 static int show_rcubarrier(struct seq_file *m, void *v)
 {
     struct rcu_state *rsp = (struct rcu_state *)m->private;
-    seq_printf(m, "bcc: %d nbd: %lu\n",
+    seq_printf(m, "bcc: %d bseq: %lu\n",
            atomic_read(&rsp->barrier_cpu_count),
-           rsp->n_barrier_done);
+           rsp->barrier_sequence);
     return 0;
 }
 
@@ -185,18 +185,15 @@ static int show_rcuexp(struct seq_file *m, void *v)
 {
     struct rcu_state *rsp = (struct rcu_state *)m->private;
 
-    seq_printf(m, "s=%lu d=%lu w=%lu tf=%lu wd1=%lu wd2=%lu n=%lu sc=%lu dt=%lu dl=%lu dx=%lu\n",
-           atomic_long_read(&rsp->expedited_start),
-           atomic_long_read(&rsp->expedited_done),
-           atomic_long_read(&rsp->expedited_wrap),
-           atomic_long_read(&rsp->expedited_tryfail),
+    seq_printf(m, "s=%lu wd0=%lu wd1=%lu wd2=%lu wd3=%lu n=%lu enq=%d sc=%lu\n",
+           rsp->expedited_sequence,
+           atomic_long_read(&rsp->expedited_workdone0),
            atomic_long_read(&rsp->expedited_workdone1),
            atomic_long_read(&rsp->expedited_workdone2),
+           atomic_long_read(&rsp->expedited_workdone3),
            atomic_long_read(&rsp->expedited_normal),
-           atomic_long_read(&rsp->expedited_stoppedcpus),
-           atomic_long_read(&rsp->expedited_done_tries),
-           atomic_long_read(&rsp->expedited_done_lost),
-           atomic_long_read(&rsp->expedited_done_exit));
+           atomic_read(&rsp->expedited_need_qs),
+           rsp->expedited_sequence / 2);
     return 0;
 }
 
@@ -62,6 +62,55 @@ MODULE_ALIAS("rcupdate");
 
 module_param(rcu_expedited, int, 0);
 
+#if defined(CONFIG_DEBUG_LOCK_ALLOC) && defined(CONFIG_PREEMPT_COUNT)
+/**
+ * rcu_read_lock_sched_held() - might we be in RCU-sched read-side critical section?
+ *
+ * If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an
+ * RCU-sched read-side critical section. In absence of
+ * CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU-sched read-side
+ * critical section unless it can prove otherwise. Note that disabling
+ * of preemption (including disabling irqs) counts as an RCU-sched
+ * read-side critical section. This is useful for debug checks in functions
+ * that required that they be called within an RCU-sched read-side
+ * critical section.
+ *
+ * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
+ * and while lockdep is disabled.
+ *
+ * Note that if the CPU is in the idle loop from an RCU point of
+ * view (ie: that we are in the section between rcu_idle_enter() and
+ * rcu_idle_exit()) then rcu_read_lock_held() returns false even if the CPU
+ * did an rcu_read_lock(). The reason for this is that RCU ignores CPUs
+ * that are in such a section, considering these as in extended quiescent
+ * state, so such a CPU is effectively never in an RCU read-side critical
+ * section regardless of what RCU primitives it invokes. This state of
+ * affairs is required --- we need to keep an RCU-free window in idle
+ * where the CPU may possibly enter into low power mode. This way we can
+ * notice an extended quiescent state to other CPUs that started a grace
+ * period. Otherwise we would delay any grace period as long as we run in
+ * the idle task.
+ *
+ * Similarly, we avoid claiming an SRCU read lock held if the current
+ * CPU is offline.
+ */
+int rcu_read_lock_sched_held(void)
+{
+    int lockdep_opinion = 0;
+
+    if (!debug_lockdep_rcu_enabled())
+        return 1;
+    if (!rcu_is_watching())
+        return 0;
+    if (!rcu_lockdep_current_cpu_online())
+        return 0;
+    if (debug_locks)
+        lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
+    return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
+}
+EXPORT_SYMBOL(rcu_read_lock_sched_held);
+#endif
+
 #ifndef CONFIG_TINY_RCU
 
 static atomic_t rcu_expedited_nesting =
@@ -269,20 +318,37 @@ void wakeme_after_rcu(struct rcu_head *head)
     rcu = container_of(head, struct rcu_synchronize, head);
     complete(&rcu->completion);
 }
+EXPORT_SYMBOL_GPL(wakeme_after_rcu);
 
-void wait_rcu_gp(call_rcu_func_t crf)
+void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
+           struct rcu_synchronize *rs_array)
 {
-    struct rcu_synchronize rcu;
+    int i;
 
-    init_rcu_head_on_stack(&rcu.head);
-    init_completion(&rcu.completion);
-    /* Will wake me after RCU finished. */
-    crf(&rcu.head, wakeme_after_rcu);
-    /* Wait for it. */
-    wait_for_completion(&rcu.completion);
-    destroy_rcu_head_on_stack(&rcu.head);
+    /* Initialize and register callbacks for each flavor specified. */
+    for (i = 0; i < n; i++) {
+        if (checktiny &&
+            (crcu_array[i] == call_rcu ||
+             crcu_array[i] == call_rcu_bh)) {
+            might_sleep();
+            continue;
+        }
+        init_rcu_head_on_stack(&rs_array[i].head);
+        init_completion(&rs_array[i].completion);
+        (crcu_array[i])(&rs_array[i].head, wakeme_after_rcu);
+    }
+
+    /* Wait for all callbacks to be invoked. */
+    for (i = 0; i < n; i++) {
+        if (checktiny &&
+            (crcu_array[i] == call_rcu ||
+             crcu_array[i] == call_rcu_bh))
+            continue;
+        wait_for_completion(&rs_array[i].completion);
+        destroy_rcu_head_on_stack(&rs_array[i].head);
+    }
 }
-EXPORT_SYMBOL_GPL(wait_rcu_gp);
+EXPORT_SYMBOL_GPL(__wait_rcu_gp);
 
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
 void init_rcu_head(struct rcu_head *head)
@@ -523,8 +589,8 @@ EXPORT_SYMBOL_GPL(call_rcu_tasks);
 void synchronize_rcu_tasks(void)
 {
     /* Complain if the scheduler has not started. */
-    rcu_lockdep_assert(!rcu_scheduler_active,
-               "synchronize_rcu_tasks called too soon");
+    RCU_LOCKDEP_WARN(!rcu_scheduler_active,
+             "synchronize_rcu_tasks called too soon");
 
     /* Wait for the grace period. */
     wait_rcu_gp(call_rcu_tasks);
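The generalized __wait_rcu_gp() above registers one callback per requested RCU flavor and then sleeps once for all of them. A sketch of a caller, open-coding what the wrapper macros built on top of it (wait_rcu_gp(), synchronize_rcu_mult()) do; this is kernel-context code for illustration only, assuming the kernel environment and the signature visible in the hunk above rather than a standalone program.

    /* Wait for both an RCU-sched and an RCU-bh grace period with one sleep. */
    static void example_wait_sched_and_bh(void)
    {
            call_rcu_func_t crcu_array[] = { call_rcu_sched, call_rcu_bh };
            struct rcu_synchronize rs_array[ARRAY_SIZE(crcu_array)];

            /* checktiny=false: both flavors really post a callback and
             * wait on its completion. */
            __wait_rcu_gp(false, ARRAY_SIZE(crcu_array), crcu_array, rs_array);
    }

Passing checktiny=true is meant for the Tiny RCU case, where call_rcu() and call_rcu_bh() grace periods are implied by the fact that the caller can sleep at all, so the helper only does might_sleep() for those entries.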
@@ -2200,8 +2200,8 @@ unsigned long to_ratio(u64 period, u64 runtime)
 #ifdef CONFIG_SMP
 inline struct dl_bw *dl_bw_of(int i)
 {
-    rcu_lockdep_assert(rcu_read_lock_sched_held(),
-               "sched RCU must be held");
+    RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+             "sched RCU must be held");
     return &cpu_rq(i)->rd->dl_bw;
 }
 
@@ -2210,8 +2210,8 @@ static inline int dl_bw_cpus(int i)
     struct root_domain *rd = cpu_rq(i)->rd;
     int cpus = 0;
 
-    rcu_lockdep_assert(rcu_read_lock_sched_held(),
-               "sched RCU must be held");
+    RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+             "sched RCU must be held");
     for_each_cpu_and(i, rd->span, cpu_active_mask)
         cpus++;
 
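These conversions invert the sense of the condition: rcu_lockdep_assert(cond, msg) complained when cond was false, while RCU_LOCKDEP_WARN(cond, msg) complains when cond is true, which is why each call site gains or loses a leading "!". A minimal stand-alone model of the two spellings, for illustration only; the real kernel macros also check debug_lockdep_rcu_enabled() and report through lockdep rather than printing.

    #include <stdio.h>

    #define rcu_lockdep_assert(c, s)  do { if (!(c)) puts(s); } while (0)
    #define RCU_LOCKDEP_WARN(c, s)    do { if (c) puts(s); } while (0)

    int main(void)
    {
            int sched_rcu_held = 0;   /* pretend sched RCU is not held */

            /* Old and new spellings of the same check; both print here. */
            rcu_lockdep_assert(sched_rcu_held, "sched RCU must be held");
            RCU_LOCKDEP_WARN(!sched_rcu_held, "sched RCU must be held");
            return 0;
    }

The workqueue.c hunk further below applies the same inversion inside multi-clause conditions, which is why "||" between the clauses becomes "&&" (De Morgan) as well.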
@@ -92,12 +92,10 @@ config NO_HZ_FULL
     depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
     # We need at least one periodic CPU for timekeeping
     depends on SMP
-    # RCU_USER_QS dependency
     depends on HAVE_CONTEXT_TRACKING
     # VIRT_CPU_ACCOUNTING_GEN dependency
     depends on HAVE_VIRT_CPU_ACCOUNTING_GEN
     select NO_HZ_COMMON
-    select RCU_USER_QS
     select RCU_NOCB_CPU
     select VIRT_CPU_ACCOUNTING_GEN
     select IRQ_WORK
@@ -338,20 +338,20 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
 #include <trace/events/workqueue.h>
 
 #define assert_rcu_or_pool_mutex()                  \
-    rcu_lockdep_assert(rcu_read_lock_sched_held() ||        \
-               lockdep_is_held(&wq_pool_mutex),     \
-               "sched RCU or wq_pool_mutex should be held")
+    RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&      \
+             !lockdep_is_held(&wq_pool_mutex),      \
+             "sched RCU or wq_pool_mutex should be held")
 
 #define assert_rcu_or_wq_mutex(wq)                  \
-    rcu_lockdep_assert(rcu_read_lock_sched_held() ||        \
-               lockdep_is_held(&wq->mutex),         \
-               "sched RCU or wq->mutex should be held")
+    RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&      \
+             !lockdep_is_held(&wq->mutex),          \
+             "sched RCU or wq->mutex should be held")
 
 #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)            \
-    rcu_lockdep_assert(rcu_read_lock_sched_held() ||        \
-               lockdep_is_held(&wq->mutex) ||       \
-               lockdep_is_held(&wq_pool_mutex),     \
-               "sched RCU, wq->mutex or wq_pool_mutex should be held")
+    RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&      \
+             !lockdep_is_held(&wq->mutex) &&        \
+             !lockdep_is_held(&wq_pool_mutex),      \
+             "sched RCU, wq->mutex or wq_pool_mutex should be held")
 
 #define for_each_cpu_worker_pool(pool, cpu)             \
     for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0];       \
|
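The rcu_lockdep_assert() to RCU_LOCKDEP_WARN() conversions above invert the sense of the condition: the old macro complained when its condition was false, while the new one complains when its condition is true, so a "sched RCU or this mutex must be held" requirement becomes its De Morgan negation. A minimal userspace sketch of that polarity flip follows; the macro names match the kernel's, but the printf()-based bodies and the two helper predicates are stand-ins, not the kernel implementation:

#include <stdio.h>

/* Stand-in predicates; in the kernel these would be lockdep queries. */
static int rcu_read_lock_sched_held(void) { return 0; }
static int wq_pool_mutex_is_held(void) { return 0; }

/* Old style: complain unless the stated condition holds. */
#define rcu_lockdep_assert(c, s) \
	do { if (!(c)) printf("lockdep warning: %s\n", (s)); } while (0)

/* New style: complain if the stated condition holds. */
#define RCU_LOCKDEP_WARN(c, s) \
	do { if (c) printf("lockdep warning: %s\n", (s)); } while (0)

int main(void)
{
	/* Old form: "either sched RCU or wq_pool_mutex must be held". */
	rcu_lockdep_assert(rcu_read_lock_sched_held() ||
			   wq_pool_mutex_is_held(),
			   "sched RCU or wq_pool_mutex should be held");

	/* New form: warn when neither is held -- the same requirement, negated. */
	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&
			 !wq_pool_mutex_is_held(),
			 "sched RCU or wq_pool_mutex should be held");
	return 0;
}

With both predicates returning 0, both macros emit the warning, illustrating that the two forms express the same requirement.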
@@ -1353,20 +1353,6 @@ config RCU_CPU_STALL_TIMEOUT
 	  RCU grace period persists, additional CPU stall warnings are
 	  printed at more widely spaced intervals.
 
-config RCU_CPU_STALL_INFO
-	bool "Print additional diagnostics on RCU CPU stall"
-	depends on (TREE_RCU || PREEMPT_RCU) && DEBUG_KERNEL
-	default y
-	help
-	  For each stalled CPU that is aware of the current RCU grace
-	  period, print out additional per-CPU diagnostic information
-	  regarding scheduling-clock ticks, idle state, and,
-	  for RCU_FAST_NO_HZ kernels, idle-entry state.
-
-	  Say N if you are unsure.
-
-	  Say Y if you want to enable such diagnostics.
-
 config RCU_TRACE
 	bool "Enable tracing for RCU"
 	depends on DEBUG_KERNEL
@@ -1379,7 +1365,7 @@ config RCU_TRACE
 	  Say N if you are unsure.
 
 config RCU_EQS_DEBUG
-	bool "Use this when adding any sort of NO_HZ support to your arch"
+	bool "Provide debugging asserts for adding NO_HZ support to an arch"
 	depends on DEBUG_KERNEL
 	help
 	  This option provides consistency checks in RCU's handling of
@@ -5011,6 +5011,7 @@ sub process {
 			     "memory barrier without comment\n" . $herecurr);
 			}
 		}
+
# check for waitqueue_active without a comment.
 		if ($line =~ /\bwaitqueue_active\s*\(/) {
 			if (!ctx_has_comment($first_line, $linenr)) {
@@ -5018,6 +5019,24 @@ sub process {
 			     "waitqueue_active without comment\n" . $herecurr);
 			}
 		}
+
+# Check for expedited grace periods that interrupt non-idle non-nohz
+# online CPUs.  These expedited can therefore degrade real-time response
+# if used carelessly, and should be avoided where not absolutely
+# needed.  It is always OK to use synchronize_rcu_expedited() and
+# synchronize_sched_expedited() at boot time (before real-time applications
+# start) and in error situations where real-time response is compromised in
+# any case.  Note that synchronize_srcu_expedited() does -not- interrupt
+# other CPUs, so don't warn on uses of synchronize_srcu_expedited().
+# Of course, nothing comes for free, and srcu_read_lock() and
+# srcu_read_unlock() do contain full memory barriers in payment for
+# synchronize_srcu_expedited() non-interruption properties.
+		if ($line =~ /\b(synchronize_rcu_expedited|synchronize_sched_expedited)\(/) {
+			WARN("EXPEDITED_RCU_GRACE_PERIOD",
+			     "expedited RCU grace periods should be avoided where they can degrade real-time response\n" . $herecurr);
+
+		}
+
# check of hardware specific defines
 		if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ m@include/asm-@) {
 			CHK("ARCH_DEFINES",
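The new EXPEDITED_RCU_GRACE_PERIOD check simply pattern-matches source lines that call synchronize_rcu_expedited() or synchronize_sched_expedited(). As an illustration only (the two teardown functions below are made up; only the RCU APIs are real, and the fragment assumes a kernel build context rather than standalone compilation), this is the kind of code the warning targets, alongside the quieter alternative the comment block recommends outside of boot and error paths:

#include <linux/rcupdate.h>

/* Hypothetical driver teardown: checkpatch would flag this line, since the
 * expedited grace period can interrupt non-idle non-nohz online CPUs and so
 * degrade real-time response. */
static void example_teardown_expedited(void)
{
	synchronize_rcu_expedited();
}

/* Same teardown using a normal grace period: slower, but it does not disturb
 * other CPUs, so checkpatch stays quiet. */
static void example_teardown_normal(void)
{
	synchronize_rcu();
}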
@@ -400,9 +400,9 @@ static bool verify_new_ex(struct dev_cgroup *dev_cgroup,
 {
 	bool match = false;
 
-	rcu_lockdep_assert(rcu_read_lock_held() ||
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
 			   lockdep_is_held(&devcgroup_mutex),
 			   "device_cgroup:verify_new_ex called without proper synchronization");
 
 	if (dev_cgroup->behavior == DEVCG_DEFAULT_ALLOW) {
 		if (behavior == DEVCG_DEFAULT_ALLOW) {
@@ -5,6 +5,6 @@ CONFIG_PREEMPT_NONE=n
 CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 CONFIG_DEBUG_LOCK_ALLOC=y
-CONFIG_PROVE_LOCKING=n
-#CHECK#CONFIG_PROVE_RCU=n
+CONFIG_PROVE_LOCKING=y
+#CHECK#CONFIG_PROVE_RCU=y
 CONFIG_RCU_EXPERT=y
@@ -13,7 +13,6 @@ CONFIG_MAXSMP=y
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ZERO=y
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -17,7 +17,6 @@ CONFIG_RCU_FANOUT_LEAF=3
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -17,6 +17,5 @@ CONFIG_RCU_FANOUT_LEAF=3
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -13,7 +13,6 @@ CONFIG_RCU_FANOUT=2
 CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=y
 CONFIG_RCU_KTHREAD_PRIO=2
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -17,6 +17,5 @@ CONFIG_RCU_FANOUT=4
 CONFIG_RCU_FANOUT_LEAF=4
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -17,6 +17,5 @@ CONFIG_RCU_NOCB_CPU_NONE=y
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
 #CHECK#CONFIG_PROVE_RCU=y
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -18,6 +18,5 @@ CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
 #CHECK#CONFIG_PROVE_RCU=y
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
 CONFIG_RCU_EXPERT=y
@@ -17,6 +17,5 @@ CONFIG_RCU_FANOUT=2
 CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -19,7 +19,6 @@ CONFIG_RCU_NOCB_CPU_ALL=y
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_PROVE_LOCKING=y
 #CHECK#CONFIG_PROVE_RCU=y
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_RCU_EXPERT=y
@@ -17,6 +17,5 @@ CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ALL=y
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -13,7 +13,6 @@ CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_BOOST=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 #CHECK#CONFIG_RCU_EXPERT=n
@@ -16,7 +16,6 @@ CONFIG_PROVE_LOCKING -- Do several, covering CONFIG_DEBUG_LOCK_ALLOC=y and not.
 CONFIG_PROVE_RCU -- Hardwired to CONFIG_PROVE_LOCKING.
 CONFIG_RCU_BOOST -- one of PREEMPT_RCU.
 CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing.
-CONFIG_RCU_CPU_STALL_INFO -- Now default, avoid at least twice.
 CONFIG_RCU_FANOUT -- Cover hierarchy, but overlap with others.
 CONFIG_RCU_FANOUT_LEAF -- Do one non-default.
 CONFIG_RCU_FAST_NO_HZ -- Do one, but not with CONFIG_RCU_NOCB_CPU_ALL.