Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu
Pull RCU updates from Paul E. McKenney:

 - Documentation updates.

 - Changes permitting use of call_rcu() and friends very early in
   boot, for example, before rcu_init() is invoked.

 - Miscellaneous fixes.

 - Add in-kernel API to enable and disable expediting of normal RCU
   grace periods.

 - Improve RCU's handling of (hotplug-) outgoing CPUs.

   Note: ARM support is lagging a bit here, and these improved
   diagnostics might generate (harmless) splats.

 - NO_HZ_FULL_SYSIDLE fixes.

 - Tiny RCU updates to make it more tiny.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 4bfe186dbe
@@ -201,11 +201,11 @@ These routines add 1 and subtract 1, respectively, from the given
 atomic_t and return the new counter value after the operation is
 performed.

-Unlike the above routines, it is required that explicit memory
-barriers are performed before and after the operation.  It must be
-done such that all memory operations before and after the atomic
-operation calls are strongly ordered with respect to the atomic
-operation itself.
+Unlike the above routines, it is required that these primitives
+include explicit memory barriers that are performed before and after
+the operation.  It must be done such that all memory operations before
+and after the atomic operation calls are strongly ordered with respect
+to the atomic operation itself.

 For example, it should behave as if a smp_mb() call existed both
 before and after the atomic operation.

@@ -233,21 +233,21 @@ These two routines increment and decrement by 1, respectively, the
 given atomic counter.  They return a boolean indicating whether the
 resulting counter value was zero or not.

-It requires explicit memory barrier semantics around the operation as
-above.
+Again, these primitives provide explicit memory barrier semantics around
+the atomic operation.

 	int atomic_sub_and_test(int i, atomic_t *v);

 This is identical to atomic_dec_and_test() except that an explicit
-decrement is given instead of the implicit "1".  It requires explicit
-memory barrier semantics around the operation.
+decrement is given instead of the implicit "1".  This primitive must
+provide explicit memory barrier semantics around the operation.

 	int atomic_add_negative(int i, atomic_t *v);

-The given increment is added to the given atomic counter value.  A
-boolean is return which indicates whether the resulting counter value
-is negative.  It requires explicit memory barrier semantics around the
-operation.
+The given increment is added to the given atomic counter value.  A boolean
+is return which indicates whether the resulting counter value is negative.
+This primitive must provide explicit memory barrier semantics around
+the operation.

 Then:

@@ -257,7 +257,7 @@ This performs an atomic exchange operation on the atomic variable v, setting
 the given new value.  It returns the old value that the atomic variable v had
 just before the operation.

-atomic_xchg requires explicit memory barriers around the operation.
+atomic_xchg must provide explicit memory barriers around the operation.

 	int atomic_cmpxchg(atomic_t *v, int old, int new);

@@ -266,7 +266,7 @@ with the given old and new values. Like all atomic_xxx operations,
 atomic_cmpxchg will only satisfy its atomicity semantics as long as all
 other accesses of *v are performed through atomic_xxx operations.

-atomic_cmpxchg requires explicit memory barriers around the operation.
+atomic_cmpxchg must provide explicit memory barriers around the operation.

 The semantics for atomic_cmpxchg are the same as those defined for 'cas'
 below.

@@ -279,8 +279,8 @@ If the atomic value v is not equal to u, this function adds a to v, and
 returns non zero. If v is equal to u then it returns zero. This is done as
 an atomic operation.

-atomic_add_unless requires explicit memory barriers around the operation
-unless it fails (returns 0).
+atomic_add_unless must provide explicit memory barriers around the
+operation unless it fails (returns 0).

 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)

@@ -460,9 +460,9 @@ the return value into an int. There are other places where things
 like this occur as well.

 These routines, like the atomic_t counter operations returning values,
-require explicit memory barrier semantics around their execution.  All
-memory operations before the atomic bit operation call must be made
-visible globally before the atomic bit operation is made visible.
+must provide explicit memory barrier semantics around their execution.
+All memory operations before the atomic bit operation call must be
+made visible globally before the atomic bit operation is made visible.
 Likewise, the atomic bit operation must be visible globally before any
 subsequent memory operation is made visible.  For example:

@@ -536,8 +536,9 @@ except that two underscores are prefixed to the interface name.
 These non-atomic variants also do not require any special memory
 barrier semantics.

-The routines xchg() and cmpxchg() need the same exact memory barriers
-as the atomic and bit operations returning values.
+The routines xchg() and cmpxchg() must provide the same exact
+memory-barrier semantics as the atomic and bit operations returning
+values.

 Spinlocks and rwlocks have memory barrier expectations as well.
 The rule to follow is simple:
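The barrier requirements above are what make the classic reference-count-drop
pattern safe; a minimal sketch (the obj type and obj_put() helper are invented
for illustration, not part of this commit):

	struct obj {
		atomic_t refcount;
		/* ... payload ... */
	};

	void obj_put(struct obj *p)
	{
		/*
		 * atomic_dec_and_test() must act as if an smp_mb() were
		 * executed both before and after the decrement, so every
		 * prior store to *p is globally visible before the count
		 * can be observed to reach zero, making the free safe.
		 */
		if (atomic_dec_and_test(&p->refcount))
			kfree(p);
	}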
@@ -2968,6 +2968,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			Set maximum number of finished RCU callbacks to
			process in one batch.

+	rcutree.gp_init_delay=	[KNL]
+			Set the number of jiffies to delay each step of
+			RCU grace-period initialization.  This only has
+			effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT is
+			set.
+
	rcutree.rcu_fanout_leaf= [KNL]
			Increase the number of CPUs assigned to each
			leaf rcu_node structure.  Useful for very large

@@ -2991,11 +2997,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			value is one, and maximum value is HZ.

	rcutree.kthread_prio= 	 [KNL,BOOT]
-			Set the SCHED_FIFO priority of the RCU
-			per-CPU kthreads (rcuc/N). This value is also
-			used for the priority of the RCU boost threads
-			(rcub/N). Valid values are 1-99 and the default
-			is 1 (the least-favored priority).
+			Set the SCHED_FIFO priority of the RCU per-CPU
+			kthreads (rcuc/N). This value is also used for
+			the priority of the RCU boost threads (rcub/N)
+			and for the RCU grace-period kthreads (rcu_bh,
+			rcu_preempt, and rcu_sched).  If RCU_BOOST is
+			set, valid values are 1-99 and the default is 1
+			(the least-favored priority).  Otherwise, when
+			RCU_BOOST is not set, valid values are 0-99 and
+			the default is zero (non-realtime operation).

	rcutree.rcu_nocb_leader_stride= [KNL]
			Set the number of NOCB kthread groups, which
@@ -190,20 +190,24 @@ To reduce its OS jitter, do any of the following:
		on each CPU, including cs_dbs_timer() and od_dbs_timer().
		WARNING:  Please check your CPU specifications to
		make sure that this is safe on your particular system.
-	d.	It is not possible to entirely get rid of OS jitter
-		from vmstat_update() on CONFIG_SMP=y systems, but you
-		can decrease its frequency by writing a large value
-		to /proc/sys/vm/stat_interval.  The default value is
-		HZ, for an interval of one second.  Of course, larger
-		values will make your virtual-memory statistics update
-		more slowly.  Of course, you can also run your workload
-		at a real-time priority, thus preempting vmstat_update(),
+	d.	As of v3.18, Christoph Lameter's on-demand vmstat workers
+		commit prevents OS jitter due to vmstat_update() on
+		CONFIG_SMP=y systems.  Before v3.18, is not possible
+		to entirely get rid of the OS jitter, but you can
+		decrease its frequency by writing a large value to
+		/proc/sys/vm/stat_interval.  The default value is HZ,
+		for an interval of one second.  Of course, larger values
+		will make your virtual-memory statistics update more
+		slowly.  Of course, you can also run your workload at
+		a real-time priority, thus preempting vmstat_update(),
		but if your workload is CPU-bound, this is a bad idea.
		However, there is an RFC patch from Christoph Lameter
		(based on an earlier one from Gilad Ben-Yossef) that
		reduces or even eliminates vmstat overhead for some
		workloads at https://lkml.org/lkml/2013/9/4/379.
-	e.	If running on high-end powerpc servers, build with
+	e.	Boot with "elevator=noop" to avoid workqueue use by
+		the block layer.
+	f.	If running on high-end powerpc servers, build with
		CONFIG_PPC_RTAS_DAEMON=n.  This prevents the RTAS
		daemon from running on each CPU every second or so.
		(This will require editing Kconfig files and will defeat

@@ -211,12 +215,12 @@ To reduce its OS jitter, do any of the following:
		due to the rtas_event_scan() function.
		WARNING:  Please check your CPU specifications to
		make sure that this is safe on your particular system.
-	f.	If running on Cell Processor, build your kernel with
+	g.	If running on Cell Processor, build your kernel with
		CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
		spu_gov_work().
		WARNING:  Please check your CPU specifications to
		make sure that this is safe on your particular system.
-	g.	If running on PowerMAC, build your kernel with
+	h.	If running on PowerMAC, build your kernel with
		CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
		avoiding OS jitter from rackmeter_do_timer().

@@ -258,8 +262,12 @@ Purpose: Detect software lockups on each CPU.
 To reduce its OS jitter, do at least one of the following:
 1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
	kthreads from being created in the first place.
-2.	Echo a zero to /proc/sys/kernel/watchdog to disable the
+2.	Boot with "nosoftlockup=0", which will also prevent these kthreads
+	from being created.  Other related watchdog and softlockup boot
+	parameters may be found in Documentation/kernel-parameters.txt
+	and Documentation/watchdog/watchdog-parameters.txt.
+3.	Echo a zero to /proc/sys/kernel/watchdog to disable the
	watchdog timer.
-3.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
+4.	Echo a large number of /proc/sys/kernel/watchdog_thresh in
	order to reduce the frequency of OS jitter due to the watchdog
	timer down to a level that is acceptable for your workload.
@@ -592,9 +592,9 @@ See also the subsection on "Cache Coherency" for a more thorough example.
 CONTROL DEPENDENCIES
 --------------------

-A control dependency requires a full read memory barrier, not simply a data
-dependency barrier to make it work correctly.  Consider the following bit of
-code:
+A load-load control dependency requires a full read memory barrier, not
+simply a data dependency barrier to make it work correctly.  Consider the
+following bit of code:

	q = ACCESS_ONCE(a);
	if (q) {

@@ -615,14 +615,15 @@ case what's actually required is:
	}

 However, stores are not speculated.  This means that ordering -is- provided
-in the following example:
+for load-store control dependencies, as in the following example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
	}

-Please note that ACCESS_ONCE() is not optional!  Without the
+Control dependencies pair normally with other types of barriers.
+That said, please note that ACCESS_ONCE() is not optional!  Without the
 ACCESS_ONCE(), might combine the load from 'a' with other loads from
 'a', and the store to 'b' with other stores to 'b', with possible highly
 counterintuitive effects on ordering.

@@ -813,6 +814,8 @@ In summary:
      barrier() can help to preserve your control dependency.  Please
      see the Compiler Barrier section for more information.

+  (*) Control dependencies pair normally with other types of barriers.
+
  (*) Control dependencies do -not- provide transitivity.  If you
      need transitivity, use smp_mb().

@@ -823,14 +826,14 @@ SMP BARRIER PAIRING
 When dealing with CPU-CPU interactions, certain types of memory barrier should
 always be paired.  A lack of appropriate pairing is almost certainly an error.

-General barriers pair with each other, though they also pair with
-most other types of barriers, albeit without transitivity.  An acquire
-barrier pairs with a release barrier, but both may also pair with other
-barriers, including of course general barriers.  A write barrier pairs
-with a data dependency barrier, an acquire barrier, a release barrier,
-a read barrier, or a general barrier.  Similarly a read barrier or a
-data dependency barrier pairs with a write barrier, an acquire barrier,
-a release barrier, or a general barrier:
+General barriers pair with each other, though they also pair with most
+other types of barriers, albeit without transitivity.  An acquire barrier
+pairs with a release barrier, but both may also pair with other barriers,
+including of course general barriers.  A write barrier pairs with a data
+dependency barrier, a control dependency, an acquire barrier, a release
+barrier, a read barrier, or a general barrier.  Similarly a read barrier,
+control dependency, or a data dependency barrier pairs with a write
+barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1		      CPU 2
	===============	      ===============

@@ -850,6 +853,19 @@ Or:
			      <data dependency barrier>
			      y = *x;

+Or even:
+
+	CPU 1		      CPU 2
+	===============	      ===============================
+	r1 = ACCESS_ONCE(y);
+	<general barrier>
+	ACCESS_ONCE(y) = 1;   if (r2 = ACCESS_ONCE(x)) {
+			      <implicit control dependency>
+			      ACCESS_ONCE(y) = 1;
+			      }
+
+	assert(r1 == 0 || r2 == 0);
+
 Basically, the read barrier always has to be there, even though it can be of
 the "weaker" type.
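As a further illustration of load-store control dependencies, a sketch in the
style of the examples above (x, y, r1, and r2 are assumed to start at zero;
this example is added here for clarity and is not part of the commit):

	CPU 1			      CPU 2
	===============		      ===============
	r1 = ACCESS_ONCE(x);	      r2 = ACCESS_ONCE(y);
	if (r1)			      if (r2)
		ACCESS_ONCE(y) = 1;		ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

Each store is ordered after the load that its "if" depends on, so the two
control dependencies pair with each other and the cyclic outcome is forbidden.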
|
@ -158,13 +158,9 @@ not come for free:
|
||||||
to the need to inform kernel subsystems (such as RCU) about
|
to the need to inform kernel subsystems (such as RCU) about
|
||||||
the change in mode.
|
the change in mode.
|
||||||
|
|
||||||
3. POSIX CPU timers on adaptive-tick CPUs may miss their deadlines
|
3. POSIX CPU timers prevent CPUs from entering adaptive-tick mode.
|
||||||
(perhaps indefinitely) because they currently rely on
|
Real-time applications needing to take actions based on CPU time
|
||||||
scheduling-tick interrupts. This will likely be fixed in
|
consumption need to use other means of doing so.
|
||||||
one of two ways: (1) Prevent CPUs with POSIX CPU timers from
|
|
||||||
entering adaptive-tick mode, or (2) Use hrtimers or other
|
|
||||||
adaptive-ticks-immune mechanism to cause the POSIX CPU timer to
|
|
||||||
fire properly.
|
|
||||||
|
|
||||||
4. If there are more perf events pending than the hardware can
|
4. If there are more perf events pending than the hardware can
|
||||||
accommodate, they are normally round-robined so as to collect
|
accommodate, they are normally round-robined so as to collect
|
||||||
|
|
|
@@ -413,16 +413,14 @@ int __cpu_disable(void)
	return 0;
 }

-static DECLARE_COMPLETION(cpu_killed);
-
 int __cpu_die(unsigned int cpu)
 {
-	return wait_for_completion_timeout(&cpu_killed, 5000);
+	return cpu_wait_death(cpu, 5);
 }

 void cpu_die(void)
 {
-	complete(&cpu_killed);
+	(void)cpu_report_death();

	atomic_dec(&init_mm.mm_users);
	atomic_dec(&init_mm.mm_count);
@@ -261,7 +261,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 }

 #ifdef CONFIG_HOTPLUG_CPU
-static DECLARE_COMPLETION(cpu_killed);

 /*
  * __cpu_disable runs on the processor to be shutdown.

@@ -299,7 +298,7 @@ int __cpu_disable(void)
  */
 void __cpu_die(unsigned int cpu)
 {
-	if (!wait_for_completion_timeout(&cpu_killed, msecs_to_jiffies(1)))
+	if (!cpu_wait_death(cpu, 1))
		pr_err("CPU%u: unable to kill\n", cpu);
 }

@@ -314,7 +313,7 @@ void cpu_die(void)
	local_irq_disable();
	idle_task_exit();

-	complete(&cpu_killed);
+	(void)cpu_report_death();

	asm ("XOR	TXENABLE, D0Re0,D0Re0\n");
 }
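Both architecture conversions above follow the same shape; a hedged sketch of
what an arch's hotplug hooks look like once ported to the generic helpers
(error handling and arch-specific shutdown code elided):

	#include <linux/cpu.h>

	void __cpu_die(unsigned int cpu)
	{
		/* Surviving CPU: wait a few seconds for the outgoing CPU. */
		if (!cpu_wait_death(cpu, 5))
			pr_err("CPU%u: unable to kill\n", cpu);
	}

	void cpu_die(void)	/* Runs on the outgoing CPU. */
	{
		(void)cpu_report_death();	/* Replaces complete(&cpu_killed). */
		/* ... architecture-specific halt/low-power code follows ... */
	}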
@@ -34,8 +34,6 @@ extern int _debug_hotplug_cpu(int cpu, int action);
 #endif
 #endif

-DECLARE_PER_CPU(int, cpu_state);
-
 int mwait_usable(const struct cpuinfo_x86 *);

 #endif /* _ASM_X86_CPU_H */
@@ -150,12 +150,12 @@ static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
 }

 void cpu_disable_common(void);
-void cpu_die_common(unsigned int cpu);
 void native_smp_prepare_boot_cpu(void);
 void native_smp_prepare_cpus(unsigned int max_cpus);
 void native_smp_cpus_done(unsigned int max_cpus);
 int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
+int common_cpu_die(unsigned int cpu);
 void native_cpu_die(unsigned int cpu);
 void native_play_dead(void);
 void play_dead_common(void);
@@ -77,9 +77,6 @@
 #include <asm/realmode.h>
 #include <asm/misc.h>

-/* State of each CPU */
-DEFINE_PER_CPU(int, cpu_state) = { 0 };
-
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);

@@ -257,7 +254,7 @@ static void notrace start_secondary(void *unused)
	lock_vector_lock();
	set_cpu_online(smp_processor_id(), true);
	unlock_vector_lock();
-	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
+	cpu_set_state_online(smp_processor_id());
	x86_platform.nmi_init();

	/* enable local interrupts */

@@ -948,7 +945,10 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
	 */
	mtrr_save_state();

-	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
+	/* x86 CPUs take themselves offline, so delayed offline is OK. */
+	err = cpu_check_up_prepare(cpu);
+	if (err && err != -EBUSY)
+		return err;

	/* the FPU context is blank, nobody can own it */
	__cpu_disable_lazy_restore(cpu);

@@ -1191,7 +1191,7 @@ void __init native_smp_prepare_boot_cpu(void)
	switch_to_new_gdt(me);
	/* already set me in cpu_online_mask in boot_cpu_init() */
	cpumask_set_cpu(me, cpu_callout_mask);
-	per_cpu(cpu_state, me) = CPU_ONLINE;
+	cpu_set_state_online(me);
 }

 void __init native_smp_cpus_done(unsigned int max_cpus)

@@ -1318,14 +1318,10 @@ static void __ref remove_cpu_from_maps(int cpu)
	numa_remove_cpu(cpu);
 }

-static DEFINE_PER_CPU(struct completion, die_complete);
-
 void cpu_disable_common(void)
 {
	int cpu = smp_processor_id();

-	init_completion(&per_cpu(die_complete, smp_processor_id()));
-
	remove_siblinginfo(cpu);

	/* It's now safe to remove this processor from the online map */

@@ -1349,24 +1345,27 @@ int native_cpu_disable(void)
	return 0;
 }

-void cpu_die_common(unsigned int cpu)
+int common_cpu_die(unsigned int cpu)
 {
-	wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);
-}
+	int ret = 0;

-void native_cpu_die(unsigned int cpu)
-{
	/* We don't do anything here: idle task is faking death itself. */

-	cpu_die_common(cpu);
-
	/* They ack this in play_dead() by setting CPU_DEAD */
-	if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
+	if (cpu_wait_death(cpu, 5)) {
		if (system_state == SYSTEM_RUNNING)
			pr_info("CPU %u is now offline\n", cpu);
	} else {
		pr_err("CPU %u didn't die...\n", cpu);
+		ret = -1;
	}

+	return ret;
+}
+
+void native_cpu_die(unsigned int cpu)
+{
+	common_cpu_die(cpu);
 }

 void play_dead_common(void)

@@ -1375,10 +1374,8 @@ void play_dead_common(void)
	reset_lazy_tlbstate();
	amd_e400_remove_cpu(raw_smp_processor_id());

-	mb();
	/* Ack it */
-	__this_cpu_write(cpu_state, CPU_DEAD);
-	complete(&per_cpu(die_complete, smp_processor_id()));
+	(void)cpu_report_death();

	/*
	 * With physical CPU hotplug, we should halt the cpu
@@ -90,14 +90,10 @@ static void cpu_bringup(void)

	set_cpu_online(cpu, true);

-	this_cpu_write(cpu_state, CPU_ONLINE);
-
-	wmb();
+	cpu_set_state_online(cpu);  /* Implies full memory barrier. */

	/* We can take interrupts now: we're officially "up". */
	local_irq_enable();
-
-	wmb();			/* make sure everything is out */
 }

 /*

@@ -459,7 +455,13 @@ static int xen_cpu_up(unsigned int cpu, struct task_struct *idle)
	xen_setup_timer(cpu);
	xen_init_lock_cpu(cpu);

-	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
+	/*
+	 * PV VCPUs are always successfully taken down (see 'while' loop
+	 * in xen_cpu_die()), so -EBUSY is an error.
+	 */
+	rc = cpu_check_up_prepare(cpu);
+	if (rc)
+		return rc;

	/* make sure interrupts start blocked */
	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;

@@ -479,10 +481,8 @@ static int xen_cpu_up(unsigned int cpu, struct task_struct *idle)
	rc = HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
	BUG_ON(rc);

-	while(per_cpu(cpu_state, cpu) != CPU_ONLINE) {
+	while (cpu_report_state(cpu) != CPU_ONLINE)
		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
-		barrier();
-	}

	return 0;
 }

@@ -511,11 +511,11 @@ static void xen_cpu_die(unsigned int cpu)
		schedule_timeout(HZ/10);
	}

-	cpu_die_common(cpu);
-	xen_smp_intr_free(cpu);
-	xen_uninit_lock_cpu(cpu);
-	xen_teardown_timer(cpu);
+	if (common_cpu_die(cpu) == 0) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+		xen_teardown_timer(cpu);
+	}
 }

 static void xen_play_dead(void) /* used only with HOTPLUG_CPU */

@@ -747,6 +747,16 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 static int xen_hvm_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
	int rc;
+
+	/*
+	 * This can happen if CPU was offlined earlier and
+	 * offlining timed out in common_cpu_die().
+	 */
+	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+	}

	/*
	 * xen_smp_intr_init() needs to run before native_cpu_up()
	 * so that IPI vectors are set up on the booting CPU before

@@ -768,12 +778,6 @@ static int xen_hvm_cpu_up(unsigned int cpu, struct task_struct *tidle)
	return rc;
 }

-static void xen_hvm_cpu_die(unsigned int cpu)
-{
-	xen_cpu_die(cpu);
-	native_cpu_die(cpu);
-}
-
 void __init xen_hvm_smp_init(void)
 {
	if (!xen_have_vector_callback)

@@ -781,7 +785,7 @@ void __init xen_hvm_smp_init(void)
	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
	smp_ops.cpu_up = xen_hvm_cpu_up;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
+	smp_ops.cpu_die = xen_cpu_die;
	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
	smp_ops.smp_prepare_boot_cpu = xen_smp_prepare_boot_cpu;
@@ -95,6 +95,10 @@ enum {
					* Called on the new cpu, just before
					* enabling interrupts. Must not sleep,
					* must not fail */
+#define CPU_DYING_IDLE		0x000B /* CPU (unsigned)v dying, reached
+					* idle loop. */
+#define CPU_BROKEN		0x000C /* CPU (unsigned)v did not die properly,
+					* perhaps due to preemption. */

 /* Used for CPU hotplug events occurring while tasks are frozen due to a suspend
  * operation in progress

@@ -271,4 +275,14 @@ void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void arch_cpu_idle_dead(void);

+DECLARE_PER_CPU(bool, cpu_dead_idle);
+
+int cpu_report_state(int cpu);
+int cpu_check_up_prepare(int cpu);
+void cpu_set_state_online(int cpu);
+#ifdef CONFIG_HOTPLUG_CPU
+bool cpu_wait_death(unsigned int cpu, int seconds);
+bool cpu_report_death(void);
+#endif /* #ifdef CONFIG_HOTPLUG_CPU */
+
 #endif /* _LINUX_CPU_H_ */
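A sketch of where the new state accessors slot into an architecture's bring-up
path, modeled on the x86 hunks above (arch_cpu_up() and arch_secondary_start()
are illustrative names, not existing functions):

	int arch_cpu_up(unsigned int cpu, struct task_struct *tidle)
	{
		int err;

		/* Replaces the old per_cpu(cpu_state, cpu) = CPU_UP_PREPARE. */
		err = cpu_check_up_prepare(cpu);
		if (err && err != -EBUSY)
			return err;	/* -EBUSY may be tolerable if CPUs offline themselves. */

		/* ... kick the new CPU via trampoline/firmware ... */
		return 0;
	}

	/* On the freshly booted CPU, once it is ready to run: */
	static void arch_secondary_start(void)
	{
		cpu_set_state_online(smp_processor_id());	/* Implies full memory barrier. */
	}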
@@ -531,8 +531,13 @@ do { \
 # define might_lock_read(lock) do { } while (0)
 #endif

-#ifdef CONFIG_PROVE_RCU
+#ifdef CONFIG_LOCKDEP
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s);
+#else
+static inline void
+lockdep_rcu_suspicious(const char *file, const int line, const char *s)
+{
+}
 #endif

 #endif /* __LINUX_LOCKDEP_H */
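With the stub above, callers can invoke lockdep_rcu_suspicious() without their
own #ifdef; a small sketch of the calling pattern (the surrounding condition is
illustrative only):

	if (unlikely(!rcu_read_lock_held()))
		lockdep_rcu_suspicious(__FILE__, __LINE__,
				       "suspicious rcu_dereference_check() usage");
	/* The call becomes a no-op when CONFIG_LOCKDEP=n. */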
@@ -48,6 +48,26 @@

 extern int rcu_expedited; /* for sysctl */

+#ifdef CONFIG_TINY_RCU
+/* Tiny RCU doesn't expedite, as its purpose in life is instead to be tiny. */
+static inline bool rcu_gp_is_expedited(void)  /* Internal RCU use. */
+{
+	return false;
+}
+
+static inline void rcu_expedite_gp(void)
+{
+}
+
+static inline void rcu_unexpedite_gp(void)
+{
+}
+#else /* #ifdef CONFIG_TINY_RCU */
+bool rcu_gp_is_expedited(void);  /* Internal RCU use. */
+void rcu_expedite_gp(void);
+void rcu_unexpedite_gp(void);
+#endif /* #else #ifdef CONFIG_TINY_RCU */
+
 enum rcutorture_type {
	RCU_FLAVOR,
	RCU_BH_FLAVOR,

@@ -195,6 +215,15 @@ void call_rcu_sched(struct rcu_head *head,

 void synchronize_sched(void);

+/*
+ * Structure allowing asynchronous waiting on RCU.
+ */
+struct rcu_synchronize {
+	struct rcu_head head;
+	struct completion completion;
+};
+void wakeme_after_rcu(struct rcu_head *head);
+
 /**
  * call_rcu_tasks() - Queue an RCU for invocation task-based grace period
  * @head: structure to be used for queueing the RCU updates.

@@ -258,6 +287,7 @@ static inline int rcu_preempt_depth(void)

 /* Internal to kernel */
 void rcu_init(void);
+void rcu_end_inkernel_boot(void);
 void rcu_sched_qs(void);
 void rcu_bh_qs(void);
 void rcu_check_callbacks(int user);

@@ -266,6 +296,8 @@ void rcu_idle_enter(void);
 void rcu_idle_exit(void);
 void rcu_irq_enter(void);
 void rcu_irq_exit(void);
+int rcu_cpu_notify(struct notifier_block *self,
+		   unsigned long action, void *hcpu);

 #ifdef CONFIG_RCU_STALL_COMMON
 void rcu_sysrq_start(void);

@@ -720,7 +752,7 @@ static inline void rcu_preempt_sleep_check(void)
  * annotated as __rcu.
  */
 #define rcu_dereference_check(p, c) \
-	__rcu_dereference_check((p), rcu_read_lock_held() || (c), __rcu)
+	__rcu_dereference_check((p), (c) || rcu_read_lock_held(), __rcu)

 /**
  * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking

@@ -730,7 +762,7 @@ static inline void rcu_preempt_sleep_check(void)
  * This is the RCU-bh counterpart to rcu_dereference_check().
  */
 #define rcu_dereference_bh_check(p, c) \
-	__rcu_dereference_check((p), rcu_read_lock_bh_held() || (c), __rcu)
+	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)

 /**
  * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking

@@ -740,7 +772,7 @@ static inline void rcu_preempt_sleep_check(void)
  * This is the RCU-sched counterpart to rcu_dereference_check().
  */
 #define rcu_dereference_sched_check(p, c) \
-	__rcu_dereference_check((p), rcu_read_lock_sched_held() || (c), \
+	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
				__rcu)

 #define rcu_dereference_raw(p) rcu_dereference_check(p, 1) /*@@@ needed? @@@*/

@@ -933,9 +965,9 @@ static inline void rcu_read_unlock(void)
 {
	rcu_lockdep_assert(rcu_is_watching(),
			   "rcu_read_unlock() used illegally while idle");
-	rcu_lock_release(&rcu_lock_map);
	__release(RCU);
	__rcu_read_unlock();
+	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
 }

 /**
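The rcu_synchronize structure and wakeme_after_rcu() exported above are the
building blocks for turning an asynchronous call_rcu()-style primitive into a
synchronous wait; a minimal sketch of that pattern (the wait_for_one_gp()
wrapper is hypothetical, mirroring the code removed from the SRCU file further
below):

	static void wait_for_one_gp(void)
	{
		struct rcu_synchronize rcu;

		init_rcu_head_on_stack(&rcu.head);
		init_completion(&rcu.completion);
		/* wakeme_after_rcu() does complete(&rcu->completion). */
		call_rcu(&rcu.head, wakeme_after_rcu);
		wait_for_completion(&rcu.completion);
		destroy_rcu_head_on_stack(&rcu.head);
	}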
|
@ -182,7 +182,7 @@ static inline int srcu_read_lock_held(struct srcu_struct *sp)
|
||||||
* lockdep_is_held() calls.
|
* lockdep_is_held() calls.
|
||||||
*/
|
*/
|
||||||
#define srcu_dereference_check(p, sp, c) \
|
#define srcu_dereference_check(p, sp, c) \
|
||||||
__rcu_dereference_check((p), srcu_read_lock_held(sp) || (c), __rcu)
|
__rcu_dereference_check((p), (c) || srcu_read_lock_held(sp), __rcu)
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* srcu_dereference - fetch SRCU-protected pointer for later dereferencing
|
* srcu_dereference - fetch SRCU-protected pointer for later dereferencing
|
||||||
|
|
init/Kconfig
@@ -791,6 +791,19 @@ config RCU_NOCB_CPU_ALL

 endchoice

+config RCU_EXPEDITE_BOOT
+	bool
+	default n
+	help
+	  This option enables expedited grace periods at boot time,
+	  as if rcu_expedite_gp() had been invoked early in boot.
+	  The corresponding rcu_unexpedite_gp() is invoked from
+	  rcu_end_inkernel_boot(), which is intended to be invoked
+	  at the end of the kernel-only boot sequence, just before
+	  init is exec'ed.
+
+	  Accept the default if unsure.
+
 endmenu # "RCU Subsystem"

 config BUILD_BIN2C
@@ -408,8 +408,10 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
	 *
	 * Wait for the stop thread to go away.
	 */
-	while (!idle_cpu(cpu))
+	while (!per_cpu(cpu_dead_idle, cpu))
		cpu_relax();
+	smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
+	per_cpu(cpu_dead_idle, cpu) = false;

	/* This actually kills the CPU. */
	__cpu_die(cpu);
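The new wait loop above is one half of a handshake with the outgoing CPU; a
hedged sketch of the dying-side counterpart (its exact placement in the idle
loop is an assumption here and is not shown in this diff):

	/* On the outgoing CPU, from the idle loop once it is truly idle: */
	smp_mb();				/* All prior activity before announcing death. */
	this_cpu_write(cpu_dead_idle, true);	/* Pairs with the smp_mb() in _cpu_down(). */
	arch_cpu_idle_dead();			/* Does not return. */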
@@ -853,6 +853,8 @@ rcu_torture_fqs(void *arg)
 static int
 rcu_torture_writer(void *arg)
 {
+	bool can_expedite = !rcu_gp_is_expedited();
+	int expediting = 0;
	unsigned long gp_snap;
	bool gp_cond1 = gp_cond, gp_exp1 = gp_exp, gp_normal1 = gp_normal;
	bool gp_sync1 = gp_sync;

@@ -865,9 +867,15 @@ rcu_torture_writer(void *arg)
	int nsynctypes = 0;

	VERBOSE_TOROUT_STRING("rcu_torture_writer task started");
+	pr_alert("%s" TORTURE_FLAG
+		 " Grace periods expedited from boot/sysfs for %s,\n",
+		 torture_type, cur_ops->name);
+	pr_alert("%s" TORTURE_FLAG
+		 " Testing of dynamic grace-period expediting diabled.\n",
+		 torture_type);

	/* Initialize synctype[] array.  If none set, take default. */
-	if (!gp_cond1 && !gp_exp1 && !gp_normal1 && !gp_sync)
+	if (!gp_cond1 && !gp_exp1 && !gp_normal1 && !gp_sync1)
		gp_cond1 = gp_exp1 = gp_normal1 = gp_sync1 = true;
	if (gp_cond1 && cur_ops->get_state && cur_ops->cond_sync)
		synctype[nsynctypes++] = RTWS_COND_GET;

@@ -949,9 +957,26 @@ rcu_torture_writer(void *arg)
			}
		}
		rcutorture_record_progress(++rcu_torture_current_version);
+		/* Cycle through nesting levels of rcu_expedite_gp() calls. */
+		if (can_expedite &&
+		    !(torture_random(&rand) & 0xff & (!!expediting - 1))) {
+			WARN_ON_ONCE(expediting == 0 && rcu_gp_is_expedited());
+			if (expediting >= 0)
+				rcu_expedite_gp();
+			else
+				rcu_unexpedite_gp();
+			if (++expediting > 3)
+				expediting = -expediting;
+		}
		rcu_torture_writer_state = RTWS_STUTTER;
		stutter_wait("rcu_torture_writer");
	} while (!torture_must_stop());
+	/* Reset expediting back to unexpedited. */
+	if (expediting > 0)
+		expediting = -expediting;
+	while (can_expedite && expediting++ < 0)
+		rcu_unexpedite_gp();
+	WARN_ON_ONCE(can_expedite && rcu_gp_is_expedited());
	rcu_torture_writer_state = RTWS_STOPPING;
	torture_kthread_stopping("rcu_torture_writer");
	return 0;
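The torture loop above exercises the requirement that rcu_expedite_gp() and
rcu_unexpedite_gp() calls be balanced; a minimal sketch of the intended
in-kernel usage (the caller shown is hypothetical):

	void do_latency_sensitive_update(void)
	{
		rcu_expedite_gp();	/* Counted, so nesting is fine. */
		synchronize_rcu();	/* Normal waits now take the expedited path. */
		rcu_unexpedite_gp();	/* Must balance the rcu_expedite_gp() above. */
	}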
@@ -402,23 +402,6 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *head,
 }
 EXPORT_SYMBOL_GPL(call_srcu);

-struct rcu_synchronize {
-	struct rcu_head head;
-	struct completion completion;
-};
-
-/*
- * Awaken the corresponding synchronize_srcu() instance now that a
- * grace period has elapsed.
- */
-static void wakeme_after_rcu(struct rcu_head *head)
-{
-	struct rcu_synchronize *rcu;
-
-	rcu = container_of(head, struct rcu_synchronize, head);
-	complete(&rcu->completion);
-}
-
 static void srcu_advance_batches(struct srcu_struct *sp, int trycount);
 static void srcu_reschedule(struct srcu_struct *sp);

@@ -507,7 +490,7 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
  */
 void synchronize_srcu(struct srcu_struct *sp)
 {
-	__synchronize_srcu(sp, rcu_expedited
+	__synchronize_srcu(sp, rcu_gp_is_expedited()
			       ? SYNCHRONIZE_SRCU_EXP_TRYCOUNT
			       : SYNCHRONIZE_SRCU_TRYCOUNT);
 }
@@ -103,8 +103,7 @@ EXPORT_SYMBOL(__rcu_is_watching);
 static int rcu_qsctr_help(struct rcu_ctrlblk *rcp)
 {
	RCU_TRACE(reset_cpu_stall_ticks(rcp));
-	if (rcp->rcucblist != NULL &&
-	    rcp->donetail != rcp->curtail) {
+	if (rcp->donetail != rcp->curtail) {
		rcp->donetail = rcp->curtail;
		return 1;
	}

@@ -169,17 +168,6 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
	unsigned long flags;
	RCU_TRACE(int cb_count = 0);

-	/* If no RCU callbacks ready to invoke, just return. */
-	if (&rcp->rcucblist == rcp->donetail) {
-		RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, 0, -1));
-		RCU_TRACE(trace_rcu_batch_end(rcp->name, 0,
-					      !!ACCESS_ONCE(rcp->rcucblist),
-					      need_resched(),
-					      is_idle_task(current),
-					      false));
-		return;
-	}
-
	/* Move the ready-to-invoke callbacks to a local list. */
	local_irq_save(flags);
	RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
@ -91,8 +91,10 @@ static const char *tp_##sname##_varname __used __tracepoint_string = sname##_var
|
||||||
|
|
||||||
#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \
|
#define RCU_STATE_INITIALIZER(sname, sabbr, cr) \
|
||||||
DEFINE_RCU_TPS(sname) \
|
DEFINE_RCU_TPS(sname) \
|
||||||
|
DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data); \
|
||||||
struct rcu_state sname##_state = { \
|
struct rcu_state sname##_state = { \
|
||||||
.level = { &sname##_state.node[0] }, \
|
.level = { &sname##_state.node[0] }, \
|
||||||
|
.rda = &sname##_data, \
|
||||||
.call = cr, \
|
.call = cr, \
|
||||||
.fqs_state = RCU_GP_IDLE, \
|
.fqs_state = RCU_GP_IDLE, \
|
||||||
.gpnum = 0UL - 300UL, \
|
.gpnum = 0UL - 300UL, \
|
||||||
|
@ -101,11 +103,9 @@ struct rcu_state sname##_state = { \
|
||||||
.orphan_nxttail = &sname##_state.orphan_nxtlist, \
|
.orphan_nxttail = &sname##_state.orphan_nxtlist, \
|
||||||
.orphan_donetail = &sname##_state.orphan_donelist, \
|
.orphan_donetail = &sname##_state.orphan_donelist, \
|
||||||
.barrier_mutex = __MUTEX_INITIALIZER(sname##_state.barrier_mutex), \
|
.barrier_mutex = __MUTEX_INITIALIZER(sname##_state.barrier_mutex), \
|
||||||
.onoff_mutex = __MUTEX_INITIALIZER(sname##_state.onoff_mutex), \
|
|
||||||
.name = RCU_STATE_NAME(sname), \
|
.name = RCU_STATE_NAME(sname), \
|
||||||
.abbr = sabbr, \
|
.abbr = sabbr, \
|
||||||
}; \
|
}
|
||||||
DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data)
|
|
||||||
|
|
||||||
RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched);
|
RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched);
|
||||||
RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh);
|
RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh);
|
||||||
|
@ -152,6 +152,8 @@ EXPORT_SYMBOL_GPL(rcu_scheduler_active);
|
||||||
*/
|
*/
|
||||||
static int rcu_scheduler_fully_active __read_mostly;
|
static int rcu_scheduler_fully_active __read_mostly;
|
||||||
|
|
||||||
|
static void rcu_init_new_rnp(struct rcu_node *rnp_leaf);
|
||||||
|
static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf);
|
||||||
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu);
|
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu);
|
||||||
static void invoke_rcu_core(void);
|
static void invoke_rcu_core(void);
|
||||||
static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp);
|
static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp);
|
||||||
|
@ -160,6 +162,12 @@ static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp);
|
||||||
static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO;
|
static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO;
|
||||||
module_param(kthread_prio, int, 0644);
|
module_param(kthread_prio, int, 0644);
|
||||||
|
|
||||||
|
/* Delay in jiffies for grace-period initialization delays. */
|
||||||
|
static int gp_init_delay = IS_ENABLED(CONFIG_RCU_TORTURE_TEST_SLOW_INIT)
|
||||||
|
? CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY
|
||||||
|
: 0;
|
||||||
|
module_param(gp_init_delay, int, 0644);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Track the rcutorture test sequence number and the update version
|
* Track the rcutorture test sequence number and the update version
|
||||||
* number within a given test. The rcutorture_testseq is incremented
|
* number within a given test. The rcutorture_testseq is incremented
|
||||||
|
@ -172,6 +180,17 @@ module_param(kthread_prio, int, 0644);
|
||||||
unsigned long rcutorture_testseq;
|
unsigned long rcutorture_testseq;
|
||||||
unsigned long rcutorture_vernum;
|
unsigned long rcutorture_vernum;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Compute the mask of online CPUs for the specified rcu_node structure.
|
||||||
|
* This will not be stable unless the rcu_node structure's ->lock is
|
||||||
|
* held, but the bit corresponding to the current CPU will be stable
|
||||||
|
* in most contexts.
|
||||||
|
*/
|
||||||
|
unsigned long rcu_rnp_online_cpus(struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
return ACCESS_ONCE(rnp->qsmaskinitnext);
|
||||||
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Return true if an RCU grace period is in progress. The ACCESS_ONCE()s
|
* Return true if an RCU grace period is in progress. The ACCESS_ONCE()s
|
||||||
* permit this function to be invoked without holding the root rcu_node
|
* permit this function to be invoked without holding the root rcu_node
|
||||||
|
@ -292,10 +311,10 @@ void rcu_note_context_switch(void)
|
||||||
 EXPORT_SYMBOL_GPL(rcu_note_context_switch);
 
 /*
- * Register a quiesecent state for all RCU flavors.  If there is an
+ * Register a quiescent state for all RCU flavors.  If there is an
  * emergency, invoke rcu_momentary_dyntick_idle() to do a heavy-weight
  * dyntick-idle quiescent state visible to other CPUs (but only for those
- * RCU flavors in desparate need of a quiescent state, which will normally
+ * RCU flavors in desperate need of a quiescent state, which will normally
  * be none of them).  Either way, do a lightweight quiescent state for
  * all RCU flavors.
  */
@@ -409,6 +428,15 @@ void rcu_bh_force_quiescent_state(void)
 }
 EXPORT_SYMBOL_GPL(rcu_bh_force_quiescent_state);
 
+/*
+ * Force a quiescent state for RCU-sched.
+ */
+void rcu_sched_force_quiescent_state(void)
+{
+	force_quiescent_state(&rcu_sched_state);
+}
+EXPORT_SYMBOL_GPL(rcu_sched_force_quiescent_state);
+
 /*
  * Show the state of the grace-period kthreads.
  */
@@ -482,15 +510,6 @@ void rcutorture_record_progress(unsigned long vernum)
 }
 EXPORT_SYMBOL_GPL(rcutorture_record_progress);
 
-/*
- * Force a quiescent state for RCU-sched.
- */
-void rcu_sched_force_quiescent_state(void)
-{
-	force_quiescent_state(&rcu_sched_state);
-}
-EXPORT_SYMBOL_GPL(rcu_sched_force_quiescent_state);
-
 /*
  * Does the CPU have callbacks ready to be invoked?
  */
@@ -954,7 +973,7 @@ bool rcu_lockdep_current_cpu_online(void)
 	preempt_disable();
 	rdp = this_cpu_ptr(&rcu_sched_data);
 	rnp = rdp->mynode;
-	ret = (rdp->grpmask & rnp->qsmaskinit) ||
+	ret = (rdp->grpmask & rcu_rnp_online_cpus(rnp)) ||
 	      !rcu_scheduler_fully_active;
 	preempt_enable();
 	return ret;
@@ -1196,9 +1215,10 @@ static void print_other_cpu_stall(struct rcu_state *rsp, unsigned long gpnum)
 	} else {
 		j = jiffies;
 		gpa = ACCESS_ONCE(rsp->gp_activity);
-		pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld\n",
+		pr_err("All QSes seen, last %s kthread activity %ld (%ld-%ld), jiffies_till_next_fqs=%ld, root ->qsmask %#lx\n",
 		       rsp->name, j - gpa, j, gpa,
-		       jiffies_till_next_fqs);
+		       jiffies_till_next_fqs,
+		       rcu_get_root(rsp)->qsmask);
 		/* In this case, the current CPU might be at fault. */
 		sched_show_task(current);
 	}
@@ -1327,18 +1347,28 @@ void rcu_cpu_stall_reset(void)
 		ACCESS_ONCE(rsp->jiffies_stall) = jiffies + ULONG_MAX / 2;
 }
 
+/*
+ * Initialize the specified rcu_data structure's default callback list
+ * to empty.  The default callback list is the one that is not used by
+ * no-callbacks CPUs.
+ */
+static void init_default_callback_list(struct rcu_data *rdp)
+{
+	int i;
+
+	rdp->nxtlist = NULL;
+	for (i = 0; i < RCU_NEXT_SIZE; i++)
+		rdp->nxttail[i] = &rdp->nxtlist;
+}
+
 /*
  * Initialize the specified rcu_data structure's callback list to empty.
  */
 static void init_callback_list(struct rcu_data *rdp)
 {
-	int i;
-
 	if (init_nocb_callback_list(rdp))
 		return;
-	rdp->nxtlist = NULL;
-	for (i = 0; i < RCU_NEXT_SIZE; i++)
-		rdp->nxttail[i] = &rdp->nxtlist;
+	init_default_callback_list(rdp);
 }
 
 /*
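The hunk above splits the empty-list setup out of init_callback_list() so that very early boot (and the no-CBs setup path) can reuse it. The underlying trick is that an empty segmented callback list keeps every tail pointer aimed back at the head pointer, so enqueueing never needs a special case for the first element. The stand-alone sketch below models that layout with made-up types (struct cb_list, SEG_COUNT); it is an illustration of the idea, not the kernel's rcu_data layout.

#include <stdio.h>
#include <stddef.h>

#define SEG_COUNT 4			/* stand-in for RCU_NEXT_SIZE */

struct callback {
	struct callback *next;
	void (*func)(struct callback *);
};

struct cb_list {
	struct callback *head;			/* analogous to ->nxtlist */
	struct callback **tail[SEG_COUNT];	/* analogous to ->nxttail[] */
};

/* Empty list: every segment's tail pointer refers back to the head pointer. */
static void cb_list_init(struct cb_list *l)
{
	int i;

	l->head = NULL;
	for (i = 0; i < SEG_COUNT; i++)
		l->tail[i] = &l->head;
}

/* Append to the last segment, the way new callbacks are queued. */
static void cb_list_enqueue(struct cb_list *l, struct callback *cb)
{
	cb->next = NULL;
	*l->tail[SEG_COUNT - 1] = cb;
	l->tail[SEG_COUNT - 1] = &cb->next;
}

static void print_cb(struct callback *cb)
{
	printf("callback %p invoked\n", (void *)cb);
}

int main(void)
{
	struct cb_list list;
	struct callback a = { .func = print_cb }, b = { .func = print_cb };
	struct callback *cb;

	cb_list_init(&list);
	cb_list_enqueue(&list, &a);
	cb_list_enqueue(&list, &b);
	for (cb = list.head; cb; cb = cb->next)
		cb->func(cb);
	return 0;
}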
@@ -1703,11 +1733,11 @@ static void note_gp_changes(struct rcu_state *rsp, struct rcu_data *rdp)
  */
 static int rcu_gp_init(struct rcu_state *rsp)
 {
+	unsigned long oldmask;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
 	ACCESS_ONCE(rsp->gp_activity) = jiffies;
-	rcu_bind_gp_kthread();
 	raw_spin_lock_irq(&rnp->lock);
 	smp_mb__after_unlock_lock();
 	if (!ACCESS_ONCE(rsp->gp_flags)) {
@@ -1733,9 +1763,54 @@ static int rcu_gp_init(struct rcu_state *rsp)
 	trace_rcu_grace_period(rsp->name, rsp->gpnum, TPS("start"));
 	raw_spin_unlock_irq(&rnp->lock);
 
-	/* Exclude any concurrent CPU-hotplug operations. */
-	mutex_lock(&rsp->onoff_mutex);
-	smp_mb__after_unlock_lock(); /* ->gpnum increment before GP! */
+	/*
+	 * Apply per-leaf buffered online and offline operations to the
+	 * rcu_node tree.  Note that this new grace period need not wait
+	 * for subsequent online CPUs, and that quiescent-state forcing
+	 * will handle subsequent offline CPUs.
+	 */
+	rcu_for_each_leaf_node(rsp, rnp) {
+		raw_spin_lock_irq(&rnp->lock);
+		smp_mb__after_unlock_lock();
+		if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
+		    !rnp->wait_blkd_tasks) {
+			/* Nothing to do on this leaf rcu_node structure. */
+			raw_spin_unlock_irq(&rnp->lock);
+			continue;
+		}
+
+		/* Record old state, apply changes to ->qsmaskinit field. */
+		oldmask = rnp->qsmaskinit;
+		rnp->qsmaskinit = rnp->qsmaskinitnext;
+
+		/* If zero-ness of ->qsmaskinit changed, propagate up tree. */
+		if (!oldmask != !rnp->qsmaskinit) {
+			if (!oldmask) /* First online CPU for this rcu_node. */
+				rcu_init_new_rnp(rnp);
+			else if (rcu_preempt_has_tasks(rnp)) /* blocked tasks */
+				rnp->wait_blkd_tasks = true;
+			else /* Last offline CPU and can propagate. */
+				rcu_cleanup_dead_rnp(rnp);
+		}
+
+		/*
+		 * If all waited-on tasks from prior grace period are
+		 * done, and if all this rcu_node structure's CPUs are
+		 * still offline, propagate up the rcu_node tree and
+		 * clear ->wait_blkd_tasks.  Otherwise, if one of this
+		 * rcu_node structure's CPUs has since come back online,
+		 * simply clear ->wait_blkd_tasks (but rcu_cleanup_dead_rnp()
+		 * checks for this, so just call it unconditionally).
+		 */
+		if (rnp->wait_blkd_tasks &&
+		    (!rcu_preempt_has_tasks(rnp) ||
+		     rnp->qsmaskinit)) {
+			rnp->wait_blkd_tasks = false;
+			rcu_cleanup_dead_rnp(rnp);
+		}
+
+		raw_spin_unlock_irq(&rnp->lock);
+	}
 
 	/*
 	 * Set the quiescent-state-needed bits in all the rcu_node
@@ -1757,8 +1832,8 @@ static int rcu_gp_init(struct rcu_state *rsp)
 		rcu_preempt_check_blocked_tasks(rnp);
 		rnp->qsmask = rnp->qsmaskinit;
 		ACCESS_ONCE(rnp->gpnum) = rsp->gpnum;
-		WARN_ON_ONCE(rnp->completed != rsp->completed);
-		ACCESS_ONCE(rnp->completed) = rsp->completed;
+		if (WARN_ON_ONCE(rnp->completed != rsp->completed))
+			ACCESS_ONCE(rnp->completed) = rsp->completed;
 		if (rnp == rdp->mynode)
 			(void)__note_gp_changes(rsp, rnp, rdp);
 		rcu_preempt_boost_start_gp(rnp);
@@ -1768,9 +1843,12 @@ static int rcu_gp_init(struct rcu_state *rsp)
 		raw_spin_unlock_irq(&rnp->lock);
 		cond_resched_rcu_qs();
 		ACCESS_ONCE(rsp->gp_activity) = jiffies;
+		if (IS_ENABLED(CONFIG_RCU_TORTURE_TEST_SLOW_INIT) &&
+		    gp_init_delay > 0 &&
+		    !(rsp->gpnum % (rcu_num_nodes * 10)))
+			schedule_timeout_uninterruptible(gp_init_delay);
 	}
 
-	mutex_unlock(&rsp->onoff_mutex);
 	return 1;
 }
 
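The new loop at the top of rcu_gp_init() is where buffered CPU-hotplug events take effect: hotplug paths only touch ->qsmaskinitnext, and each grace period copies that into ->qsmaskinit once, under the leaf lock, before setting up ->qsmask. The toy model below shows just that buffering idea with a single flat structure and plain bit operations; it is a sketch of the scheme, not the kernel's locking or tree propagation, and the struct and function names are invented for the example.

#include <stdio.h>

/* Toy model of one leaf node's online-CPU bookkeeping (not the kernel struct). */
struct leaf {
	unsigned long qsmaskinit;	/* CPUs the current grace period waits for */
	unsigned long qsmaskinitnext;	/* online CPUs as seen by hotplug events */
};

/* Hotplug events only touch the "next" mask; an in-flight GP is unaffected. */
static void cpu_online(struct leaf *l, int cpu)  { l->qsmaskinitnext |=  (1UL << cpu); }
static void cpu_offline(struct leaf *l, int cpu) { l->qsmaskinitnext &= ~(1UL << cpu); }

/* At grace-period start, fold the buffered hotplug state into ->qsmaskinit. */
static void gp_start(struct leaf *l)
{
	if (l->qsmaskinit != l->qsmaskinitnext)
		l->qsmaskinit = l->qsmaskinitnext;
}

int main(void)
{
	struct leaf l = { .qsmaskinit = 0x3, .qsmaskinitnext = 0x3 };

	cpu_offline(&l, 1);		/* CPU 1 goes away mid-GP... */
	printf("during GP: init=%#lx next=%#lx\n", l.qsmaskinit, l.qsmaskinitnext);
	gp_start(&l);			/* ...and is dropped at the next GP start. */
	printf("next GP:   init=%#lx next=%#lx\n", l.qsmaskinit, l.qsmaskinitnext);
	cpu_online(&l, 1);		/* coming back online is buffered the same way */
	return 0;
}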
@@ -1798,7 +1876,7 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int fqs_state_in)
 		fqs_state = RCU_FORCE_QS;
 	} else {
 		/* Handle dyntick-idle and offline CPUs. */
-		isidle = false;
+		isidle = true;
 		force_qs_rnp(rsp, rcu_implicit_dynticks_qs, &isidle, &maxj);
 	}
 	/* Clear flag to prevent immediate re-entry. */
@@ -1852,6 +1930,8 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 	rcu_for_each_node_breadth_first(rsp, rnp) {
 		raw_spin_lock_irq(&rnp->lock);
 		smp_mb__after_unlock_lock();
+		WARN_ON_ONCE(rcu_preempt_blocked_readers_cgp(rnp));
+		WARN_ON_ONCE(rnp->qsmask);
 		ACCESS_ONCE(rnp->completed) = rsp->gpnum;
 		rdp = this_cpu_ptr(rsp->rda);
 		if (rnp == rdp->mynode)
@@ -1895,6 +1975,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
 	struct rcu_state *rsp = arg;
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
+	rcu_bind_gp_kthread();
 	for (;;) {
 
 		/* Handle grace-period start. */
@@ -2062,25 +2143,32 @@ static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
  * Similar to rcu_report_qs_rdp(), for which it is a helper function.
  * Allows quiescent states for a group of CPUs to be reported at one go
  * to the specified rcu_node structure, though all the CPUs in the group
- * must be represented by the same rcu_node structure (which need not be
- * a leaf rcu_node structure, though it often will be).  That structure's
- * lock must be held upon entry, and it is released before return.
+ * must be represented by the same rcu_node structure (which need not be a
+ * leaf rcu_node structure, though it often will be).  The gps parameter
+ * is the grace-period snapshot, which means that the quiescent states
+ * are valid only if rnp->gpnum is equal to gps.  That structure's lock
+ * must be held upon entry, and it is released before return.
  */
 static void
 rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
-		  struct rcu_node *rnp, unsigned long flags)
+		  struct rcu_node *rnp, unsigned long gps, unsigned long flags)
 	__releases(rnp->lock)
 {
+	unsigned long oldmask = 0;
 	struct rcu_node *rnp_c;
 
 	/* Walk up the rcu_node hierarchy. */
 	for (;;) {
-		if (!(rnp->qsmask & mask)) {
+		if (!(rnp->qsmask & mask) || rnp->gpnum != gps) {
 
-			/* Our bit has already been cleared, so done. */
+			/*
+			 * Our bit has already been cleared, or the
+			 * relevant grace period is already over, so done.
+			 */
 			raw_spin_unlock_irqrestore(&rnp->lock, flags);
 			return;
 		}
+		WARN_ON_ONCE(oldmask); /* Any child must be all zeroed! */
 		rnp->qsmask &= ~mask;
 		trace_rcu_quiescent_state_report(rsp->name, rnp->gpnum,
 						 mask, rnp->qsmask, rnp->level,
@@ -2104,7 +2192,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
 		rnp = rnp->parent;
 		raw_spin_lock_irqsave(&rnp->lock, flags);
 		smp_mb__after_unlock_lock();
-		WARN_ON_ONCE(rnp_c->qsmask);
+		oldmask = rnp_c->qsmask;
 	}
 
 	/*
@@ -2115,6 +2203,46 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
 	rcu_report_qs_rsp(rsp, flags); /* releases rnp->lock. */
 }
 
+/*
+ * Record a quiescent state for all tasks that were previously queued
+ * on the specified rcu_node structure and that were blocking the current
+ * RCU grace period.  The caller must hold the specified rnp->lock with
+ * irqs disabled, and this lock is released upon return, but irqs remain
+ * disabled.
+ */
+static void rcu_report_unblock_qs_rnp(struct rcu_state *rsp,
+				      struct rcu_node *rnp, unsigned long flags)
+	__releases(rnp->lock)
+{
+	unsigned long gps;
+	unsigned long mask;
+	struct rcu_node *rnp_p;
+
+	if (rcu_state_p == &rcu_sched_state || rsp != rcu_state_p ||
+	    rnp->qsmask != 0 || rcu_preempt_blocked_readers_cgp(rnp)) {
+		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		return;  /* Still need more quiescent states! */
+	}
+
+	rnp_p = rnp->parent;
+	if (rnp_p == NULL) {
+		/*
+		 * Only one rcu_node structure in the tree, so don't
+		 * try to report up to its nonexistent parent!
+		 */
+		rcu_report_qs_rsp(rsp, flags);
+		return;
+	}
+
+	/* Report up the rest of the hierarchy, tracking current ->gpnum. */
+	gps = rnp->gpnum;
+	mask = rnp->grpmask;
+	raw_spin_unlock(&rnp->lock);	/* irqs remain disabled. */
+	raw_spin_lock(&rnp_p->lock);	/* irqs already disabled. */
+	smp_mb__after_unlock_lock();
+	rcu_report_qs_rnp(mask, rsp, rnp_p, gps, flags);
+}
+
 /*
  * Record a quiescent state for the specified CPU to that CPU's rcu_data
  * structure.  This must be either called from the specified CPU, or
@@ -2163,7 +2291,8 @@ rcu_report_qs_rdp(int cpu, struct rcu_state *rsp, struct rcu_data *rdp)
 		 */
 		needwake = rcu_accelerate_cbs(rsp, rnp, rdp);
 
-		rcu_report_qs_rnp(mask, rsp, rnp, flags); /* rlses rnp->lock */
+		rcu_report_qs_rnp(mask, rsp, rnp, rnp->gpnum, flags);
+		/* ^^^ Released rnp->lock */
 		if (needwake)
 			rcu_gp_kthread_wake(rsp);
 	}
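The extra gps argument threaded through rcu_report_qs_rnp() above is a grace-period snapshot: the caller records rnp->gpnum while holding the lock, and the report is discarded if the node has meanwhile moved on to a newer grace period. The minimal sketch below demonstrates the stale-report check on a made-up single-node structure; the real code does this while walking up the rcu_node tree with locks held.

#include <stdio.h>

/* Toy model: one node tracking which grace period its qsmask belongs to. */
struct node {
	unsigned long gpnum;	/* grace period this node is working on */
	unsigned long qsmask;	/* CPUs still needing to report for that GP */
};

/*
 * Accept a quiescent-state report only if the caller's grace-period
 * snapshot (gps) still matches the node; otherwise the report is stale.
 */
static int report_qs(struct node *n, unsigned long mask, unsigned long gps)
{
	if (!(n->qsmask & mask) || n->gpnum != gps)
		return 0;		/* already cleared, or GP already over */
	n->qsmask &= ~mask;
	return 1;
}

int main(void)
{
	struct node n = { .gpnum = 7, .qsmask = 0x3 };
	unsigned long snap = n.gpnum;	/* caller snapshots ->gpnum first */

	n.gpnum = 8;			/* meanwhile, a new GP has started */
	n.qsmask = 0x3;
	printf("stale report accepted? %d\n", report_qs(&n, 0x1, snap));
	printf("fresh report accepted? %d\n", report_qs(&n, 0x1, n.gpnum));
	return 0;
}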
@@ -2256,8 +2385,12 @@ rcu_send_cbs_to_orphanage(int cpu, struct rcu_state *rsp,
 		rsp->orphan_donetail = rdp->nxttail[RCU_DONE_TAIL];
 	}
 
-	/* Finally, initialize the rcu_data structure's list to empty.  */
+	/*
+	 * Finally, initialize the rcu_data structure's list to empty and
+	 * disallow further callbacks on this CPU.
+	 */
 	init_callback_list(rdp);
+	rdp->nxttail[RCU_NEXT_TAIL] = NULL;
 }
 
 /*
@@ -2355,6 +2488,7 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
 		raw_spin_lock(&rnp->lock); /* irqs already disabled. */
 		smp_mb__after_unlock_lock(); /* GP memory ordering. */
 		rnp->qsmaskinit &= ~mask;
+		rnp->qsmask &= ~mask;
 		if (rnp->qsmaskinit) {
 			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
 			return;
@@ -2363,6 +2497,26 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
 	}
 }
 
+/*
+ * The CPU is exiting the idle loop into the arch_cpu_idle_dead()
+ * function.  We now remove it from the rcu_node tree's ->qsmaskinit
+ * bit masks.
+ */
+static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
+{
+	unsigned long flags;
+	unsigned long mask;
+	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
+	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
+
+	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
+	mask = rdp->grpmask;
+	raw_spin_lock_irqsave(&rnp->lock, flags);
+	smp_mb__after_unlock_lock();	/* Enforce GP memory-order guarantee. */
+	rnp->qsmaskinitnext &= ~mask;
+	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+}
+
 /*
  * The CPU has been completely removed, and some other CPU is reporting
  * this fact from process context.  Do the remainder of the cleanup,
@@ -2379,29 +2533,15 @@ static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 	/* Adjust any no-longer-needed kthreads. */
 	rcu_boost_kthread_setaffinity(rnp, -1);
 
-	/* Exclude any attempts to start a new grace period. */
-	mutex_lock(&rsp->onoff_mutex);
-	raw_spin_lock_irqsave(&rsp->orphan_lock, flags);
-
 	/* Orphan the dead CPU's callbacks, and adopt them if appropriate. */
+	raw_spin_lock_irqsave(&rsp->orphan_lock, flags);
 	rcu_send_cbs_to_orphanage(cpu, rsp, rnp, rdp);
 	rcu_adopt_orphan_cbs(rsp, flags);
 	raw_spin_unlock_irqrestore(&rsp->orphan_lock, flags);
 
-	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
-	raw_spin_lock_irqsave(&rnp->lock, flags);
-	smp_mb__after_unlock_lock();	/* Enforce GP memory-order guarantee. */
-	rnp->qsmaskinit &= ~rdp->grpmask;
-	if (rnp->qsmaskinit == 0 && !rcu_preempt_has_tasks(rnp))
-		rcu_cleanup_dead_rnp(rnp);
-	rcu_report_qs_rnp(rdp->grpmask, rsp, rnp, flags); /* Rlses rnp->lock. */
 	WARN_ONCE(rdp->qlen != 0 || rdp->nxtlist != NULL,
 		  "rcu_cleanup_dead_cpu: Callbacks on offline CPU %d: qlen=%lu, nxtlist=%p\n",
 		  cpu, rdp->qlen, rdp->nxtlist);
-	init_callback_list(rdp);
-	/* Disallow further callbacks on this CPU. */
-	rdp->nxttail[RCU_NEXT_TAIL] = NULL;
-	mutex_unlock(&rsp->onoff_mutex);
 }
 
 #else /* #ifdef CONFIG_HOTPLUG_CPU */
@@ -2414,6 +2554,10 @@ static void __maybe_unused rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
 {
 }
 
+static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
+{
+}
+
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 }
@@ -2589,26 +2733,47 @@ static void force_qs_rnp(struct rcu_state *rsp,
 			return;
 		}
 		if (rnp->qsmask == 0) {
-			rcu_initiate_boost(rnp, flags); /* releases rnp->lock */
-			continue;
+			if (rcu_state_p == &rcu_sched_state ||
+			    rsp != rcu_state_p ||
+			    rcu_preempt_blocked_readers_cgp(rnp)) {
+				/*
+				 * No point in scanning bits because they
+				 * are all zero.  But we might need to
+				 * priority-boost blocked readers.
+				 */
+				rcu_initiate_boost(rnp, flags);
+				/* rcu_initiate_boost() releases rnp->lock */
+				continue;
+			}
+			if (rnp->parent &&
+			    (rnp->parent->qsmask & rnp->grpmask)) {
+				/*
+				 * Race between grace-period
+				 * initialization and task exiting RCU
+				 * read-side critical section: Report.
+				 */
+				rcu_report_unblock_qs_rnp(rsp, rnp, flags);
+				/* rcu_report_unblock_qs_rnp() rlses ->lock */
+				continue;
+			}
 		}
 		cpu = rnp->grplo;
 		bit = 1;
 		for (; cpu <= rnp->grphi; cpu++, bit <<= 1) {
 			if ((rnp->qsmask & bit) != 0) {
-				if ((rnp->qsmaskinit & bit) != 0)
-					*isidle = false;
+				if ((rnp->qsmaskinit & bit) == 0)
+					*isidle = false; /* Pending hotplug. */
 				if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj))
 					mask |= bit;
 			}
 		}
 		if (mask != 0) {
-			/* rcu_report_qs_rnp() releases rnp->lock. */
-			rcu_report_qs_rnp(mask, rsp, rnp, flags);
-			continue;
+			/* Idle/offline CPUs, report (releases rnp->lock. */
+			rcu_report_qs_rnp(mask, rsp, rnp, rnp->gpnum, flags);
+		} else {
+			/* Nothing to do here, so just drop the lock. */
+			raw_spin_unlock_irqrestore(&rnp->lock, flags);
 		}
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	}
 }
 
@@ -2741,7 +2906,7 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
 	 * If called from an extended quiescent state, invoke the RCU
 	 * core in order to force a re-evaluation of RCU's idleness.
 	 */
-	if (!rcu_is_watching() && cpu_online(smp_processor_id()))
+	if (!rcu_is_watching())
 		invoke_rcu_core();
 
 	/* If interrupts were disabled or CPU offline, don't invoke RCU core. */
@@ -2827,11 +2992,22 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
 
 		if (cpu != -1)
 			rdp = per_cpu_ptr(rsp->rda, cpu);
-		offline = !__call_rcu_nocb(rdp, head, lazy, flags);
-		WARN_ON_ONCE(offline);
-		/* _call_rcu() is illegal on offline CPU; leak the callback. */
-		local_irq_restore(flags);
-		return;
+		if (likely(rdp->mynode)) {
+			/* Post-boot, so this should be for a no-CBs CPU. */
+			offline = !__call_rcu_nocb(rdp, head, lazy, flags);
+			WARN_ON_ONCE(offline);
+			/* Offline CPU, _call_rcu() illegal, leak callback.  */
+			local_irq_restore(flags);
+			return;
+		}
+		/*
+		 * Very early boot, before rcu_init().  Initialize if needed
+		 * and then drop through to queue the callback.
+		 */
+		BUG_ON(cpu != -1);
+		WARN_ON_ONCE(!rcu_is_watching());
+		if (!likely(rdp->nxtlist))
+			init_default_callback_list(rdp);
 	}
 	ACCESS_ONCE(rdp->qlen) = rdp->qlen + 1;
 	if (lazy)
@@ -2954,7 +3130,7 @@ void synchronize_sched(void)
 			 "Illegal synchronize_sched() in RCU-sched read-side critical section");
 	if (rcu_blocking_is_gp())
 		return;
-	if (rcu_expedited)
+	if (rcu_gp_is_expedited())
 		synchronize_sched_expedited();
 	else
 		wait_rcu_gp(call_rcu_sched);
@@ -2981,7 +3157,7 @@ void synchronize_rcu_bh(void)
 			 "Illegal synchronize_rcu_bh() in RCU-bh read-side critical section");
 	if (rcu_blocking_is_gp())
 		return;
-	if (rcu_expedited)
+	if (rcu_gp_is_expedited())
 		synchronize_rcu_bh_expedited();
 	else
 		wait_rcu_gp(call_rcu_bh);
@@ -3517,6 +3693,28 @@ void rcu_barrier_sched(void)
 }
 EXPORT_SYMBOL_GPL(rcu_barrier_sched);
 
+/*
+ * Propagate ->qsinitmask bits up the rcu_node tree to account for the
+ * first CPU in a given leaf rcu_node structure coming online.  The caller
+ * must hold the corresponding leaf rcu_node ->lock with interrrupts
+ * disabled.
+ */
+static void rcu_init_new_rnp(struct rcu_node *rnp_leaf)
+{
+	long mask;
+	struct rcu_node *rnp = rnp_leaf;
+
+	for (;;) {
+		mask = rnp->grpmask;
+		rnp = rnp->parent;
+		if (rnp == NULL)
+			return;
+		raw_spin_lock(&rnp->lock); /* Interrupts already disabled. */
+		rnp->qsmaskinit |= mask;
+		raw_spin_unlock(&rnp->lock); /* Interrupts remain disabled. */
+	}
+}
+
 /*
  * Do boot-time initialization of a CPU's per-CPU RCU data.
  */
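rcu_init_new_rnp() above handles the case where a leaf rcu_node gains its first online CPU: the leaf's bit must be turned on in every ancestor's ->qsmaskinit so later grace periods wait on that subtree. The short sketch below performs the same bottom-up walk over a made-up two-level tree (struct tnode is invented for the example); the kernel version additionally takes each ancestor's lock with interrupts disabled.

#include <stdio.h>

/* Toy rcu_node-like tree node: a parent pointer, a group bit, a child mask. */
struct tnode {
	struct tnode *parent;
	unsigned long grpmask;		/* this node's bit in its parent's mask */
	unsigned long qsmaskinit;	/* which children have online CPUs */
};

/* Walk from a leaf toward the root, setting this subtree's bit at each level. */
static void init_new_leaf(struct tnode *leaf)
{
	unsigned long mask;
	struct tnode *n = leaf;

	for (;;) {
		mask = n->grpmask;
		n = n->parent;
		if (!n)
			return;
		n->qsmaskinit |= mask;
	}
}

int main(void)
{
	struct tnode root = { .parent = NULL, .grpmask = 0, .qsmaskinit = 0 };
	struct tnode leaf = { .parent = &root, .grpmask = 0x4, .qsmaskinit = 0x1 };

	init_new_leaf(&leaf);	/* the leaf's first CPU came online */
	printf("root qsmaskinit = %#lx\n", root.qsmaskinit);	/* prints 0x4 */
	return 0;
}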
@@ -3553,49 +3751,37 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
-	/* Exclude new grace periods. */
-	mutex_lock(&rsp->onoff_mutex);
-
 	/* Set up local state, ensuring consistent view of global state. */
 	raw_spin_lock_irqsave(&rnp->lock, flags);
 	rdp->beenonline = 1;	 /* We have now been online. */
 	rdp->qlen_last_fqs_check = 0;
 	rdp->n_force_qs_snap = rsp->n_force_qs;
 	rdp->blimit = blimit;
-	init_callback_list(rdp);  /* Re-enable callbacks on this CPU. */
+	if (!rdp->nxtlist)
+		init_callback_list(rdp);  /* Re-enable callbacks on this CPU. */
 	rdp->dynticks->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
 	rcu_sysidle_init_percpu_data(rdp->dynticks);
 	atomic_set(&rdp->dynticks->dynticks,
 		   (atomic_read(&rdp->dynticks->dynticks) & ~0x1) + 1);
 	raw_spin_unlock(&rnp->lock);		/* irqs remain disabled. */
 
-	/* Add CPU to rcu_node bitmasks. */
+	/*
+	 * Add CPU to leaf rcu_node pending-online bitmask.  Any needed
+	 * propagation up the rcu_node tree will happen at the beginning
+	 * of the next grace period.
+	 */
 	rnp = rdp->mynode;
 	mask = rdp->grpmask;
-	do {
-		/* Exclude any attempts to start a new GP on small systems. */
-		raw_spin_lock(&rnp->lock);	/* irqs already disabled. */
-		rnp->qsmaskinit |= mask;
-		mask = rnp->grpmask;
-		if (rnp == rdp->mynode) {
-			/*
-			 * If there is a grace period in progress, we will
-			 * set up to wait for it next time we run the
-			 * RCU core code.
-			 */
-			rdp->gpnum = rnp->completed;
-			rdp->completed = rnp->completed;
-			rdp->passed_quiesce = 0;
-			rdp->rcu_qs_ctr_snap = __this_cpu_read(rcu_qs_ctr);
-			rdp->qs_pending = 0;
-			trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl"));
-		}
-		raw_spin_unlock(&rnp->lock); /* irqs already disabled. */
-		rnp = rnp->parent;
-	} while (rnp != NULL && !(rnp->qsmaskinit & mask));
-	local_irq_restore(flags);
-
-	mutex_unlock(&rsp->onoff_mutex);
+	raw_spin_lock(&rnp->lock);		/* irqs already disabled. */
+	smp_mb__after_unlock_lock();
+	rnp->qsmaskinitnext |= mask;
+	rdp->gpnum = rnp->completed; /* Make CPU later note any new GP. */
+	rdp->completed = rnp->completed;
+	rdp->passed_quiesce = false;
+	rdp->rcu_qs_ctr_snap = __this_cpu_read(rcu_qs_ctr);
+	rdp->qs_pending = false;
+	trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl"));
+	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
 
 static void rcu_prepare_cpu(int cpu)
@@ -3609,15 +3795,14 @@ static void rcu_prepare_cpu(int cpu)
 /*
  * Handle CPU online/offline notification events.
  */
-static int rcu_cpu_notify(struct notifier_block *self,
-			  unsigned long action, void *hcpu)
+int rcu_cpu_notify(struct notifier_block *self,
+		   unsigned long action, void *hcpu)
 {
 	long cpu = (long)hcpu;
 	struct rcu_data *rdp = per_cpu_ptr(rcu_state_p->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;
 	struct rcu_state *rsp;
 
-	trace_rcu_utilization(TPS("Start CPU hotplug"));
 	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
@@ -3637,6 +3822,11 @@ static int rcu_cpu_notify(struct notifier_block *self,
 		for_each_rcu_flavor(rsp)
 			rcu_cleanup_dying_cpu(rsp);
 		break;
+	case CPU_DYING_IDLE:
+		for_each_rcu_flavor(rsp) {
+			rcu_cleanup_dying_idle_cpu(cpu, rsp);
+		}
+		break;
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
 	case CPU_UP_CANCELED:
@@ -3649,7 +3839,6 @@ static int rcu_cpu_notify(struct notifier_block *self,
 	default:
 		break;
 	}
-	trace_rcu_utilization(TPS("End CPU hotplug"));
 	return NOTIFY_OK;
 }
 
@@ -3660,11 +3849,12 @@ static int rcu_pm_notify(struct notifier_block *self,
 	case PM_HIBERNATION_PREPARE:
 	case PM_SUSPEND_PREPARE:
 		if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */
-			rcu_expedited = 1;
+			rcu_expedite_gp();
 		break;
 	case PM_POST_HIBERNATION:
 	case PM_POST_SUSPEND:
-		rcu_expedited = 0;
+		if (nr_cpu_ids <= 256) /* Expediting bad for large systems. */
+			rcu_unexpedite_gp();
 		break;
 	default:
 		break;
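The PM notifier now goes through rcu_expedite_gp()/rcu_unexpedite_gp() instead of writing rcu_expedited directly, and the synchronize_*() paths call rcu_gp_is_expedited(), so several in-kernel users can request expediting without clobbering one another or the boot parameter. The bookkeeping behind those helpers is not shown in this hunk; the sketch below assumes a simple nesting counter next to the boot flag, purely as an illustration of how such an API can compose, and is not the kernel implementation.

#include <stdio.h>

/* Assumed model: a boot/sysfs flag plus a nesting count of in-kernel requests. */
static int rcu_expedited_flag;	/* stands in for the rcu_expedited parameter */
static int expedite_nesting;	/* how many callers currently want expediting */

static void expedite_gp(void)   { expedite_nesting++; }
static void unexpedite_gp(void) { if (expedite_nesting > 0) expedite_nesting--; }

/* Grace periods are expedited if either the flag or any nested request says so. */
static int gp_is_expedited(void)
{
	return rcu_expedited_flag || expedite_nesting;
}

int main(void)
{
	printf("expedited? %d\n", gp_is_expedited());	/* 0 */
	expedite_gp();					/* e.g. the suspend path */
	expedite_gp();					/* another user nests */
	unexpedite_gp();
	printf("expedited? %d\n", gp_is_expedited());	/* still 1: one user left */
	unexpedite_gp();
	printf("expedited? %d\n", gp_is_expedited());	/* back to 0 */
	return 0;
}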
@@ -3734,30 +3924,26 @@ void rcu_scheduler_starting(void)
  * Compute the per-level fanout, either using the exact fanout specified
  * or balancing the tree, depending on CONFIG_RCU_FANOUT_EXACT.
  */
-#ifdef CONFIG_RCU_FANOUT_EXACT
 static void __init rcu_init_levelspread(struct rcu_state *rsp)
 {
 	int i;
 
-	rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf;
-	for (i = rcu_num_lvls - 2; i >= 0; i--)
-		rsp->levelspread[i] = CONFIG_RCU_FANOUT;
-}
-#else /* #ifdef CONFIG_RCU_FANOUT_EXACT */
-static void __init rcu_init_levelspread(struct rcu_state *rsp)
-{
-	int ccur;
-	int cprv;
-	int i;
+	if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT)) {
+		rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf;
+		for (i = rcu_num_lvls - 2; i >= 0; i--)
+			rsp->levelspread[i] = CONFIG_RCU_FANOUT;
+	} else {
+		int ccur;
+		int cprv;
 
 		cprv = nr_cpu_ids;
 		for (i = rcu_num_lvls - 1; i >= 0; i--) {
 			ccur = rsp->levelcnt[i];
 			rsp->levelspread[i] = (cprv + ccur - 1) / ccur;
 			cprv = ccur;
+		}
 	}
 }
-#endif /* #else #ifdef CONFIG_RCU_FANOUT_EXACT */
 
 /*
  * Helper function for rcu_init() that initializes one rcu_state structure.
@@ -3833,7 +4019,6 @@ static void __init rcu_init_one(struct rcu_state *rsp,
 		}
 	}
 
-	rsp->rda = rda;
 	init_waitqueue_head(&rsp->gp_wq);
 	rnp = rsp->level[rcu_num_lvls - 1];
 	for_each_possible_cpu(i) {
@@ -3926,6 +4111,8 @@ void __init rcu_init(void)
 {
 	int cpu;
 
+	rcu_early_boot_tests();
+
 	rcu_bootup_announce();
 	rcu_init_geometry();
 	rcu_init_one(&rcu_bh_state, &rcu_bh_data);
@@ -3942,8 +4129,6 @@ void __init rcu_init(void)
 	pm_notifier(rcu_pm_notify, 0);
 	for_each_online_cpu(cpu)
 		rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
-
-	rcu_early_boot_tests();
 }
 
 #include "tree_plugin.h"
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -141,12 +141,20 @@ struct rcu_node {
 				/*  complete (only for PREEMPT_RCU). */
 	unsigned long qsmaskinit;
 				/* Per-GP initial value for qsmask & expmask. */
+				/*  Initialized from ->qsmaskinitnext at the */
+				/*  beginning of each grace period. */
+	unsigned long qsmaskinitnext;
+				/* Online CPUs for next grace period. */
 	unsigned long grpmask;	/* Mask to apply to parent qsmask. */
 				/*  Only one bit will be set in this mask. */
 	int	grplo;		/* lowest-numbered CPU or group here. */
 	int	grphi;		/* highest-numbered CPU or group here. */
 	u8	grpnum;		/* CPU/group number for next level up. */
 	u8	level;		/* root is at level 0. */
+	bool	wait_blkd_tasks;/* Necessary to wait for blocked tasks to */
+				/*  exit RCU read-side critical sections */
+				/*  before propagating offline up the */
+				/*  rcu_node tree? */
 	struct rcu_node *parent;
 	struct list_head blkd_tasks;
 				/* Tasks blocked in RCU read-side critical */
@@ -448,8 +456,6 @@ struct rcu_state {
 	long qlen;				/* Total number of callbacks. */
 	/* End of fields guarded by orphan_lock. */
 
-	struct mutex onoff_mutex;		/* Coordinate hotplug & GPs. */
-
 	struct mutex barrier_mutex;		/* Guards barrier fields. */
 	atomic_t barrier_cpu_count;		/* # CPUs waiting on. */
 	struct completion barrier_completion;	/* Wake at barrier end. */
@@ -559,6 +565,7 @@ static void rcu_prepare_kthreads(int cpu);
 static void rcu_cleanup_after_idle(void);
 static void rcu_prepare_for_idle(void);
 static void rcu_idle_count_callbacks_posted(void);
+static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
 static void print_cpu_stall_info_begin(void);
 static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
 static void print_cpu_stall_info_end(void);
|
|
|
@ -58,38 +58,33 @@ static bool __read_mostly rcu_nocb_poll; /* Offload kthread are to poll. */
|
||||||
*/
|
*/
|
||||||
static void __init rcu_bootup_announce_oddness(void)
|
static void __init rcu_bootup_announce_oddness(void)
|
||||||
{
|
{
|
||||||
#ifdef CONFIG_RCU_TRACE
|
if (IS_ENABLED(CONFIG_RCU_TRACE))
|
||||||
pr_info("\tRCU debugfs-based tracing is enabled.\n");
|
pr_info("\tRCU debugfs-based tracing is enabled.\n");
|
||||||
#endif
|
if ((IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 64) ||
|
||||||
#if (defined(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 64) || (!defined(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 32)
|
(!IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 32))
|
||||||
pr_info("\tCONFIG_RCU_FANOUT set to non-default value of %d\n",
|
pr_info("\tCONFIG_RCU_FANOUT set to non-default value of %d\n",
|
||||||
CONFIG_RCU_FANOUT);
|
CONFIG_RCU_FANOUT);
|
||||||
#endif
|
if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT))
|
||||||
#ifdef CONFIG_RCU_FANOUT_EXACT
|
pr_info("\tHierarchical RCU autobalancing is disabled.\n");
|
||||||
pr_info("\tHierarchical RCU autobalancing is disabled.\n");
|
if (IS_ENABLED(CONFIG_RCU_FAST_NO_HZ))
|
||||||
#endif
|
pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n");
|
||||||
#ifdef CONFIG_RCU_FAST_NO_HZ
|
if (IS_ENABLED(CONFIG_PROVE_RCU))
|
||||||
pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n");
|
pr_info("\tRCU lockdep checking is enabled.\n");
|
||||||
#endif
|
if (IS_ENABLED(CONFIG_RCU_TORTURE_TEST_RUNNABLE))
|
||||||
#ifdef CONFIG_PROVE_RCU
|
pr_info("\tRCU torture testing starts during boot.\n");
|
||||||
pr_info("\tRCU lockdep checking is enabled.\n");
|
if (IS_ENABLED(CONFIG_RCU_CPU_STALL_INFO))
|
||||||
#endif
|
pr_info("\tAdditional per-CPU info printed with stalls.\n");
|
||||||
#ifdef CONFIG_RCU_TORTURE_TEST_RUNNABLE
|
if (NUM_RCU_LVL_4 != 0)
|
||||||
pr_info("\tRCU torture testing starts during boot.\n");
|
pr_info("\tFour-level hierarchy is enabled.\n");
|
||||||
#endif
|
if (CONFIG_RCU_FANOUT_LEAF != 16)
|
||||||
#if defined(CONFIG_RCU_CPU_STALL_INFO)
|
pr_info("\tBuild-time adjustment of leaf fanout to %d.\n",
|
||||||
pr_info("\tAdditional per-CPU info printed with stalls.\n");
|
CONFIG_RCU_FANOUT_LEAF);
|
||||||
#endif
|
|
||||||
#if NUM_RCU_LVL_4 != 0
|
|
||||||
pr_info("\tFour-level hierarchy is enabled.\n");
|
|
||||||
#endif
|
|
||||||
if (rcu_fanout_leaf != CONFIG_RCU_FANOUT_LEAF)
|
if (rcu_fanout_leaf != CONFIG_RCU_FANOUT_LEAF)
|
||||||
pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf);
|
pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf);
|
||||||
if (nr_cpu_ids != NR_CPUS)
|
if (nr_cpu_ids != NR_CPUS)
|
||||||
pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids);
|
pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids);
|
||||||
#ifdef CONFIG_RCU_BOOST
|
if (IS_ENABLED(CONFIG_RCU_BOOST))
|
||||||
pr_info("\tRCU kthread priority: %d.\n", kthread_prio);
|
pr_info("\tRCU kthread priority: %d.\n", kthread_prio);
|
||||||
#endif
|
|
||||||
}
|
}
|
||||||
|
|
||||||
#ifdef CONFIG_PREEMPT_RCU
|
#ifdef CONFIG_PREEMPT_RCU
|
||||||
|
@ -180,7 +175,7 @@ static void rcu_preempt_note_context_switch(void)
|
||||||
* But first, note that the current CPU must still be
|
* But first, note that the current CPU must still be
|
||||||
* on line!
|
* on line!
|
||||||
*/
|
*/
|
||||||
WARN_ON_ONCE((rdp->grpmask & rnp->qsmaskinit) == 0);
|
WARN_ON_ONCE((rdp->grpmask & rcu_rnp_online_cpus(rnp)) == 0);
|
||||||
WARN_ON_ONCE(!list_empty(&t->rcu_node_entry));
|
WARN_ON_ONCE(!list_empty(&t->rcu_node_entry));
|
||||||
if ((rnp->qsmask & rdp->grpmask) && rnp->gp_tasks != NULL) {
|
if ((rnp->qsmask & rdp->grpmask) && rnp->gp_tasks != NULL) {
|
||||||
list_add(&t->rcu_node_entry, rnp->gp_tasks->prev);
|
list_add(&t->rcu_node_entry, rnp->gp_tasks->prev);
|
||||||
|
@ -232,43 +227,6 @@ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp)
|
||||||
return rnp->gp_tasks != NULL;
|
return rnp->gp_tasks != NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
|
||||||
* Record a quiescent state for all tasks that were previously queued
|
|
||||||
* on the specified rcu_node structure and that were blocking the current
|
|
||||||
* RCU grace period. The caller must hold the specified rnp->lock with
|
|
||||||
* irqs disabled, and this lock is released upon return, but irqs remain
|
|
||||||
* disabled.
|
|
||||||
*/
|
|
||||||
static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp, unsigned long flags)
|
|
||||||
__releases(rnp->lock)
|
|
||||||
{
|
|
||||||
unsigned long mask;
|
|
||||||
struct rcu_node *rnp_p;
|
|
||||||
|
|
||||||
if (rnp->qsmask != 0 || rcu_preempt_blocked_readers_cgp(rnp)) {
|
|
||||||
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
|
||||||
return; /* Still need more quiescent states! */
|
|
||||||
}
|
|
||||||
|
|
||||||
rnp_p = rnp->parent;
|
|
||||||
if (rnp_p == NULL) {
|
|
||||||
/*
|
|
||||||
* Either there is only one rcu_node in the tree,
|
|
||||||
* or tasks were kicked up to root rcu_node due to
|
|
||||||
* CPUs going offline.
|
|
||||||
*/
|
|
||||||
rcu_report_qs_rsp(&rcu_preempt_state, flags);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Report up the rest of the hierarchy. */
|
|
||||||
mask = rnp->grpmask;
|
|
||||||
raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
|
|
||||||
raw_spin_lock(&rnp_p->lock); /* irqs already disabled. */
|
|
||||||
smp_mb__after_unlock_lock();
|
|
||||||
rcu_report_qs_rnp(mask, &rcu_preempt_state, rnp_p, flags);
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Advance a ->blkd_tasks-list pointer to the next entry, instead
|
* Advance a ->blkd_tasks-list pointer to the next entry, instead
|
||||||
* returning NULL if at the end of the list.
|
* returning NULL if at the end of the list.
|
||||||
|
@ -300,7 +258,6 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
|
||||||
*/
|
*/
|
||||||
void rcu_read_unlock_special(struct task_struct *t)
|
void rcu_read_unlock_special(struct task_struct *t)
|
||||||
{
|
{
|
||||||
bool empty;
|
|
||||||
bool empty_exp;
|
bool empty_exp;
|
||||||
bool empty_norm;
|
bool empty_norm;
|
||||||
bool empty_exp_now;
|
bool empty_exp_now;
|
||||||
|
@ -334,7 +291,13 @@ void rcu_read_unlock_special(struct task_struct *t)
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Hardware IRQ handlers cannot block, complain if they get here. */
|
/* Hardware IRQ handlers cannot block, complain if they get here. */
|
||||||
if (WARN_ON_ONCE(in_irq() || in_serving_softirq())) {
|
if (in_irq() || in_serving_softirq()) {
|
||||||
|
lockdep_rcu_suspicious(__FILE__, __LINE__,
|
||||||
|
"rcu_read_unlock() from irq or softirq with blocking in critical section!!!\n");
|
||||||
|
pr_alert("->rcu_read_unlock_special: %#x (b: %d, nq: %d)\n",
|
||||||
|
t->rcu_read_unlock_special.s,
|
||||||
|
t->rcu_read_unlock_special.b.blocked,
|
||||||
|
t->rcu_read_unlock_special.b.need_qs);
|
||||||
local_irq_restore(flags);
|
local_irq_restore(flags);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
@ -356,7 +319,6 @@ void rcu_read_unlock_special(struct task_struct *t)
|
||||||
break;
|
break;
|
||||||
raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
|
raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
|
||||||
}
|
}
|
||||||
empty = !rcu_preempt_has_tasks(rnp);
|
|
||||||
empty_norm = !rcu_preempt_blocked_readers_cgp(rnp);
|
empty_norm = !rcu_preempt_blocked_readers_cgp(rnp);
|
||||||
empty_exp = !rcu_preempted_readers_exp(rnp);
|
empty_exp = !rcu_preempted_readers_exp(rnp);
|
||||||
smp_mb(); /* ensure expedited fastpath sees end of RCU c-s. */
|
smp_mb(); /* ensure expedited fastpath sees end of RCU c-s. */
|
||||||
|
@ -376,14 +338,6 @@ void rcu_read_unlock_special(struct task_struct *t)
|
||||||
drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t;
|
drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t;
|
||||||
#endif /* #ifdef CONFIG_RCU_BOOST */
|
#endif /* #ifdef CONFIG_RCU_BOOST */
|
||||||
|
|
||||||
/*
|
|
||||||
* If this was the last task on the list, go see if we
|
|
||||||
* need to propagate ->qsmaskinit bit clearing up the
|
|
||||||
* rcu_node tree.
|
|
||||||
*/
|
|
||||||
if (!empty && !rcu_preempt_has_tasks(rnp))
|
|
||||||
rcu_cleanup_dead_rnp(rnp);
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* If this was the last task on the current list, and if
|
* If this was the last task on the current list, and if
|
||||||
* we aren't waiting on any CPUs, report the quiescent state.
|
* we aren't waiting on any CPUs, report the quiescent state.
|
||||||
|
@ -399,7 +353,8 @@ void rcu_read_unlock_special(struct task_struct *t)
|
||||||
rnp->grplo,
|
rnp->grplo,
|
||||||
rnp->grphi,
|
rnp->grphi,
|
||||||
!!rnp->gp_tasks);
|
!!rnp->gp_tasks);
|
||||||
rcu_report_unblock_qs_rnp(rnp, flags);
|
rcu_report_unblock_qs_rnp(&rcu_preempt_state,
|
||||||
|
rnp, flags);
|
||||||
} else {
|
} else {
|
||||||
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
||||||
}
|
}
|
||||||
|
@ -520,10 +475,6 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
|
||||||
WARN_ON_ONCE(rnp->qsmask);
|
WARN_ON_ONCE(rnp->qsmask);
|
||||||
}
|
}
|
||||||
|
|
||||||
#ifdef CONFIG_HOTPLUG_CPU
|
|
||||||
|
|
||||||
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Check for a quiescent state from the current CPU. When a task blocks,
|
* Check for a quiescent state from the current CPU. When a task blocks,
|
||||||
* the task is recorded in the corresponding CPU's rcu_node structure,
|
* the task is recorded in the corresponding CPU's rcu_node structure,
|
||||||
|
@ -585,7 +536,7 @@ void synchronize_rcu(void)
|
||||||
"Illegal synchronize_rcu() in RCU read-side critical section");
|
"Illegal synchronize_rcu() in RCU read-side critical section");
|
||||||
if (!rcu_scheduler_active)
|
if (!rcu_scheduler_active)
|
||||||
return;
|
return;
|
||||||
if (rcu_expedited)
|
if (rcu_gp_is_expedited())
|
||||||
synchronize_rcu_expedited();
|
synchronize_rcu_expedited();
|
||||||
else
|
else
|
||||||
wait_rcu_gp(call_rcu);
|
wait_rcu_gp(call_rcu);
|
||||||
|
@ -630,9 +581,6 @@ static int sync_rcu_preempt_exp_done(struct rcu_node *rnp)
|
||||||
* recursively up the tree. (Calm down, calm down, we do the recursion
|
* recursively up the tree. (Calm down, calm down, we do the recursion
|
||||||
* iteratively!)
|
* iteratively!)
|
||||||
*
|
*
|
||||||
* Most callers will set the "wake" flag, but the task initiating the
|
|
||||||
* expedited grace period need not wake itself.
|
|
||||||
*
|
|
||||||
* Caller must hold sync_rcu_preempt_exp_mutex.
|
* Caller must hold sync_rcu_preempt_exp_mutex.
|
||||||
*/
|
*/
|
||||||
static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
||||||
|
@ -667,29 +615,85 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Snapshot the tasks blocking the newly started preemptible-RCU expedited
|
* Snapshot the tasks blocking the newly started preemptible-RCU expedited
|
||||||
* grace period for the specified rcu_node structure. If there are no such
|
* grace period for the specified rcu_node structure, phase 1. If there
|
||||||
* tasks, report it up the rcu_node hierarchy.
|
* are such tasks, set the ->expmask bits up the rcu_node tree and also
|
||||||
|
* set the ->expmask bits on the leaf rcu_node structures to tell phase 2
|
||||||
|
* that work is needed here.
|
||||||
*
|
*
|
||||||
* Caller must hold sync_rcu_preempt_exp_mutex and must exclude
|
* Caller must hold sync_rcu_preempt_exp_mutex.
|
||||||
* CPU hotplug operations.
|
|
||||||
*/
|
*/
|
||||||
static void
|
static void
|
||||||
sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
|
sync_rcu_preempt_exp_init1(struct rcu_state *rsp, struct rcu_node *rnp)
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
int must_wait = 0;
|
unsigned long mask;
|
||||||
|
struct rcu_node *rnp_up;
|
||||||
|
|
||||||
raw_spin_lock_irqsave(&rnp->lock, flags);
|
raw_spin_lock_irqsave(&rnp->lock, flags);
|
||||||
smp_mb__after_unlock_lock();
|
smp_mb__after_unlock_lock();
|
||||||
|
WARN_ON_ONCE(rnp->expmask);
|
||||||
|
WARN_ON_ONCE(rnp->exp_tasks);
|
||||||
if (!rcu_preempt_has_tasks(rnp)) {
|
if (!rcu_preempt_has_tasks(rnp)) {
|
||||||
|
/* No blocked tasks, nothing to do. */
|
||||||
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
||||||
} else {
|
return;
|
||||||
|
}
|
||||||
|
/* Call for Phase 2 and propagate ->expmask bits up the tree. */
|
||||||
|
rnp->expmask = 1;
|
||||||
|
rnp_up = rnp;
|
||||||
|
while (rnp_up->parent) {
|
||||||
|
mask = rnp_up->grpmask;
|
||||||
|
rnp_up = rnp_up->parent;
|
||||||
|
if (rnp_up->expmask & mask)
|
||||||
|
break;
|
||||||
|
raw_spin_lock(&rnp_up->lock); /* irqs already off */
|
||||||
|
smp_mb__after_unlock_lock();
|
||||||
|
rnp_up->expmask |= mask;
|
||||||
|
raw_spin_unlock(&rnp_up->lock); /* irqs still off */
|
||||||
|
}
|
||||||
|
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Snapshot the tasks blocking the newly started preemptible-RCU expedited
|
||||||
|
* grace period for the specified rcu_node structure, phase 2. If the
|
||||||
|
* leaf rcu_node structure has its ->expmask field set, check for tasks.
|
||||||
|
* If there are some, clear ->expmask and set ->exp_tasks accordingly,
|
||||||
|
* then initiate RCU priority boosting. Otherwise, clear ->expmask and
|
||||||
|
* invoke rcu_report_exp_rnp() to clear out the upper-level ->expmask bits,
|
||||||
|
* enabling rcu_read_unlock_special() to do the bit-clearing.
|
||||||
|
*
|
||||||
|
* Caller must hold sync_rcu_preempt_exp_mutex.
|
||||||
|
*/
|
||||||
|
static void
|
||||||
|
sync_rcu_preempt_exp_init2(struct rcu_state *rsp, struct rcu_node *rnp)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
raw_spin_lock_irqsave(&rnp->lock, flags);
|
||||||
|
smp_mb__after_unlock_lock();
|
||||||
|
if (!rnp->expmask) {
|
||||||
|
/* Phase 1 didn't do anything, so Phase 2 doesn't either. */
|
||||||
|
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Phase 1 is over. */
|
||||||
|
rnp->expmask = 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* If there are still blocked tasks, set up ->exp_tasks so that
|
||||||
|
* rcu_read_unlock_special() will wake us and then boost them.
|
||||||
|
*/
|
||||||
|
if (rcu_preempt_has_tasks(rnp)) {
|
||||||
rnp->exp_tasks = rnp->blkd_tasks.next;
|
rnp->exp_tasks = rnp->blkd_tasks.next;
|
||||||
rcu_initiate_boost(rnp, flags); /* releases rnp->lock */
|
rcu_initiate_boost(rnp, flags); /* releases rnp->lock */
|
||||||
must_wait = 1;
|
return;
|
||||||
}
|
}
|
||||||
if (!must_wait)
|
|
||||||
rcu_report_exp_rnp(rsp, rnp, false); /* Don't wake self. */
|
/* No longer any blocked tasks, so undo bit setting. */
|
||||||
|
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
||||||
|
rcu_report_exp_rnp(rsp, rnp, false);
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
@ -706,7 +710,6 @@ sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
|
||||||
*/
|
*/
|
||||||
void synchronize_rcu_expedited(void)
|
void synchronize_rcu_expedited(void)
|
||||||
{
|
{
|
||||||
unsigned long flags;
|
|
||||||
struct rcu_node *rnp;
|
struct rcu_node *rnp;
|
||||||
struct rcu_state *rsp = &rcu_preempt_state;
|
struct rcu_state *rsp = &rcu_preempt_state;
|
||||||
unsigned long snap;
|
unsigned long snap;
|
||||||
|
@ -757,19 +760,16 @@ void synchronize_rcu_expedited(void)
|
||||||
/* force all RCU readers onto ->blkd_tasks lists. */
|
/* force all RCU readers onto ->blkd_tasks lists. */
|
||||||
synchronize_sched_expedited();
|
synchronize_sched_expedited();
|
||||||
|
|
||||||
/* Initialize ->expmask for all non-leaf rcu_node structures. */
|
/*
|
||||||
rcu_for_each_nonleaf_node_breadth_first(rsp, rnp) {
|
* Snapshot current state of ->blkd_tasks lists into ->expmask.
|
||||||
raw_spin_lock_irqsave(&rnp->lock, flags);
|
* Phase 1 sets bits and phase 2 permits rcu_read_unlock_special()
|
||||||
smp_mb__after_unlock_lock();
|
* to start clearing them. Doing this in one phase leads to
|
||||||
rnp->expmask = rnp->qsmaskinit;
|
* strange races between setting and clearing bits, so just say "no"!
|
||||||
raw_spin_unlock_irqrestore(&rnp->lock, flags);
|
*/
|
||||||
}
|
|
||||||
|
|
||||||
/* Snapshot current state of ->blkd_tasks lists. */
|
|
||||||
rcu_for_each_leaf_node(rsp, rnp)
|
rcu_for_each_leaf_node(rsp, rnp)
|
||||||
sync_rcu_preempt_exp_init(rsp, rnp);
|
sync_rcu_preempt_exp_init1(rsp, rnp);
|
||||||
if (NUM_RCU_NODES > 1)
|
rcu_for_each_leaf_node(rsp, rnp)
|
||||||
sync_rcu_preempt_exp_init(rsp, rcu_get_root(rsp));
|
sync_rcu_preempt_exp_init2(rsp, rnp);
|
||||||
|
|
||||||
put_online_cpus();
|
put_online_cpus();
|
||||||
|
|
||||||
|
@ -859,8 +859,6 @@ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp)
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
#ifdef CONFIG_HOTPLUG_CPU
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Because there is no preemptible RCU, there can be no readers blocked.
|
* Because there is no preemptible RCU, there can be no readers blocked.
|
||||||
*/
|
*/
|
||||||
|
@ -869,8 +867,6 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Because preemptible RCU does not exist, we never have to check for
|
* Because preemptible RCU does not exist, we never have to check for
|
||||||
* tasks blocked within RCU read-side critical sections.
|
* tasks blocked within RCU read-side critical sections.
|
||||||
|
@ -1170,7 +1166,7 @@ static void rcu_preempt_boost_start_gp(struct rcu_node *rnp)
|
||||||
* Returns zero if all is well, a negated errno otherwise.
|
* Returns zero if all is well, a negated errno otherwise.
|
||||||
*/
|
*/
|
||||||
static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
|
static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
|
||||||
struct rcu_node *rnp)
|
struct rcu_node *rnp)
|
||||||
{
|
{
|
||||||
int rnp_index = rnp - &rsp->node[0];
|
int rnp_index = rnp - &rsp->node[0];
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
@ -1180,7 +1176,7 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
|
||||||
if (&rcu_preempt_state != rsp)
|
if (&rcu_preempt_state != rsp)
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
if (!rcu_scheduler_fully_active || rnp->qsmaskinit == 0)
|
if (!rcu_scheduler_fully_active || rcu_rnp_online_cpus(rnp) == 0)
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
rsp->boost = 1;
|
rsp->boost = 1;
|
||||||
|
@ -1273,7 +1269,7 @@ static void rcu_cpu_kthread(unsigned int cpu)
|
||||||
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
|
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
|
||||||
{
|
{
|
||||||
struct task_struct *t = rnp->boost_kthread_task;
|
struct task_struct *t = rnp->boost_kthread_task;
|
||||||
unsigned long mask = rnp->qsmaskinit;
|
unsigned long mask = rcu_rnp_online_cpus(rnp);
|
||||||
cpumask_var_t cm;
|
cpumask_var_t cm;
|
||||||
int cpu;
|
int cpu;
|
||||||
|
|
||||||
|
@@ -1945,7 +1941,8 @@ static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu)
 	rhp = ACCESS_ONCE(rdp->nocb_follower_head);
 
 	/* Having no rcuo kthread but CBs after scheduler starts is bad! */
-	if (!ACCESS_ONCE(rdp->nocb_kthread) && rhp) {
+	if (!ACCESS_ONCE(rdp->nocb_kthread) && rhp &&
+	    rcu_scheduler_fully_active) {
 		/* RCU callback enqueued before CPU first came online??? */
 		pr_err("RCU: Never-onlined no-CBs CPU %d has CB %p\n",
 		       cpu, rhp->func);
@@ -2392,18 +2389,8 @@ void __init rcu_init_nohz(void)
 		pr_info("\tPoll for callbacks from no-CBs CPUs.\n");
 
 	for_each_rcu_flavor(rsp) {
-		for_each_cpu(cpu, rcu_nocb_mask) {
-			struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
-
-			/*
-			 * If there are early callbacks, they will need
-			 * to be moved to the nocb lists.
-			 */
-			WARN_ON_ONCE(rdp->nxttail[RCU_NEXT_TAIL] !=
-				     &rdp->nxtlist &&
-				     rdp->nxttail[RCU_NEXT_TAIL] != NULL);
-			init_nocb_callback_list(rdp);
-		}
+		for_each_cpu(cpu, rcu_nocb_mask)
+			init_nocb_callback_list(per_cpu_ptr(rsp->rda, cpu));
 		rcu_organize_nocb_kthreads(rsp);
 	}
 }
@@ -2540,6 +2527,16 @@ static bool init_nocb_callback_list(struct rcu_data *rdp)
 	if (!rcu_is_nocb_cpu(rdp->cpu))
 		return false;
 
+	/* If there are early-boot callbacks, move them to nocb lists. */
+	if (rdp->nxtlist) {
+		rdp->nocb_head = rdp->nxtlist;
+		rdp->nocb_tail = rdp->nxttail[RCU_NEXT_TAIL];
+		atomic_long_set(&rdp->nocb_q_count, rdp->qlen);
+		atomic_long_set(&rdp->nocb_q_count_lazy, rdp->qlen_lazy);
+		rdp->nxtlist = NULL;
+		rdp->qlen = 0;
+		rdp->qlen_lazy = 0;
+	}
 	rdp->nxttail[RCU_NEXT_TAIL] = NULL;
 	return true;
 }
@@ -2763,7 +2760,8 @@ static void rcu_sysidle_exit(int irq)
 
 /*
  * Check to see if the current CPU is idle. Note that usermode execution
- * does not count as idle. The caller must have disabled interrupts.
+ * does not count as idle. The caller must have disabled interrupts,
+ * and must be running on tick_do_timer_cpu.
  */
 static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
 				  unsigned long *maxj)
@@ -2784,8 +2782,8 @@ static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
 	if (!*isidle || rdp->rsp != rcu_state_p ||
 	    cpu_is_offline(rdp->cpu) || rdp->cpu == tick_do_timer_cpu)
 		return;
-	if (rcu_gp_in_progress(rdp->rsp))
+	/* Verify affinity of current kthread. */
 	WARN_ON_ONCE(smp_processor_id() != tick_do_timer_cpu);
 
 	/* Pick up current idle and NMI-nesting counter and check. */
 	cur = atomic_read(&rdtp->dynticks_idle);
@@ -3068,11 +3066,10 @@ static void rcu_bind_gp_kthread(void)
 		return;
#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
 	cpu = tick_do_timer_cpu;
-	if (cpu >= 0 && cpu < nr_cpu_ids && raw_smp_processor_id() != cpu)
+	if (cpu >= 0 && cpu < nr_cpu_ids)
 		set_cpus_allowed_ptr(current, cpumask_of(cpu));
#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
-	if (!is_housekeeping_cpu(raw_smp_processor_id()))
-		housekeeping_affine(current);
+	housekeeping_affine(current);
#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
 }
@@ -283,8 +283,8 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp)
 			seq_puts(m, "\n");
 			level = rnp->level;
 		}
-		seq_printf(m, "%lx/%lx %c%c>%c %d:%d ^%d ",
-			   rnp->qsmask, rnp->qsmaskinit,
+		seq_printf(m, "%lx/%lx->%lx %c%c>%c %d:%d ^%d ",
+			   rnp->qsmask, rnp->qsmaskinit, rnp->qsmaskinitnext,
 			   ".G"[rnp->gp_tasks != NULL],
 			   ".E"[rnp->exp_tasks != NULL],
 			   ".T"[!list_empty(&rnp->blkd_tasks)],
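For reference, the widened format string makes each rcu_node entry of the RCU debugfs hierarchy dump show three masks (qsmask, qsmaskinit, and the new qsmaskinitnext) instead of two. A line for a fully online four-CPU leaf node might then read something like the following hypothetical sample:

0/f->f ..>. 0:3 ^0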
@@ -62,6 +62,63 @@ MODULE_ALIAS("rcupdate");
 module_param(rcu_expedited, int, 0);
 
+#ifndef CONFIG_TINY_RCU
+
+static atomic_t rcu_expedited_nesting =
+	ATOMIC_INIT(IS_ENABLED(CONFIG_RCU_EXPEDITE_BOOT) ? 1 : 0);
+
+/*
+ * Should normal grace-period primitives be expedited? Intended for
+ * use within RCU. Note that this function takes the rcu_expedited
+ * sysfs/boot variable into account as well as the rcu_expedite_gp()
+ * nesting. So looping on rcu_unexpedite_gp() until rcu_gp_is_expedited()
+ * returns false is a -really- bad idea.
+ */
+bool rcu_gp_is_expedited(void)
+{
+	return rcu_expedited || atomic_read(&rcu_expedited_nesting);
+}
+EXPORT_SYMBOL_GPL(rcu_gp_is_expedited);
+
+/**
+ * rcu_expedite_gp - Expedite future RCU grace periods
+ *
+ * After a call to this function, future calls to synchronize_rcu() and
+ * friends act as the corresponding synchronize_rcu_expedited() function
+ * had instead been called.
+ */
+void rcu_expedite_gp(void)
+{
+	atomic_inc(&rcu_expedited_nesting);
+}
+EXPORT_SYMBOL_GPL(rcu_expedite_gp);
+
+/**
+ * rcu_unexpedite_gp - Cancel prior rcu_expedite_gp() invocation
+ *
+ * Undo a prior call to rcu_expedite_gp(). If all prior calls to
+ * rcu_expedite_gp() are undone by a subsequent call to rcu_unexpedite_gp(),
+ * and if the rcu_expedited sysfs/boot parameter is not set, then all
+ * subsequent calls to synchronize_rcu() and friends will return to
+ * their normal non-expedited behavior.
+ */
+void rcu_unexpedite_gp(void)
+{
+	atomic_dec(&rcu_expedited_nesting);
+}
+EXPORT_SYMBOL_GPL(rcu_unexpedite_gp);
+
+#endif /* #ifndef CONFIG_TINY_RCU */
+
+/*
+ * Inform RCU of the end of the in-kernel boot sequence.
+ */
+void rcu_end_inkernel_boot(void)
+{
+	if (IS_ENABLED(CONFIG_RCU_EXPEDITE_BOOT))
+		rcu_unexpedite_gp();
+}
+
 #ifdef CONFIG_PREEMPT_RCU
 
 /*
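The new calls nest, so a caller that wants fast grace periods across a bounded window simply brackets that window. A hedged usage sketch (the function below is hypothetical; rcu_expedite_gp(), rcu_unexpedite_gp(), and synchronize_rcu() are the real APIs):

#include <linux/rcupdate.h>

/* Hypothetical caller: hurry grace periods only while reconfiguring. */
static void example_latency_sensitive_reconfig(void)
{
	rcu_expedite_gp();	/* Subsequent synchronize_rcu() calls are expedited. */

	/* ... unpublish old state ... */
	synchronize_rcu();	/* Behaves like synchronize_rcu_expedited() here. */
	/* ... free old state, publish new state ... */

	rcu_unexpedite_gp();	/* Drop back to normal, low-overhead grace periods. */
}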
@@ -199,16 +256,13 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
 #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
-struct rcu_synchronize {
-	struct rcu_head head;
-	struct completion completion;
-};
-
-/*
- * Awaken the corresponding synchronize_rcu() instance now that a
- * grace period has elapsed.
+/**
+ * wakeme_after_rcu() - Callback function to awaken a task after grace period
+ * @head: Pointer to rcu_head member within rcu_synchronize structure
+ *
+ * Awaken the corresponding task now that a grace period has elapsed.
  */
-static void wakeme_after_rcu(struct rcu_head *head)
+void wakeme_after_rcu(struct rcu_head *head)
 {
 	struct rcu_synchronize *rcu;
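With wakeme_after_rcu() exported, callers outside this file can pair it with a struct rcu_synchronize (whose definition moves out of this file, presumably into a shared header) to wait for a grace period. An illustrative sketch of that pattern:

/* Illustrative only: wait for one grace period using the exported callback. */
static void example_wait_for_gp(void)
{
	struct rcu_synchronize rcu;

	init_rcu_head_on_stack(&rcu.head);
	init_completion(&rcu.completion);
	call_rcu(&rcu.head, wakeme_after_rcu);	/* callback completes &rcu.completion */
	wait_for_completion(&rcu.completion);
	destroy_rcu_head_on_stack(&rcu.head);
}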
@@ -210,6 +210,8 @@ use_default:
 		goto exit_idle;
 }
 
+DEFINE_PER_CPU(bool, cpu_dead_idle);
+
 /*
  * Generic idle loop implementation
  *
@@ -234,8 +236,13 @@ static void cpu_idle_loop(void)
 			check_pgt_cache();
 			rmb();
 
-			if (cpu_is_offline(smp_processor_id()))
+			if (cpu_is_offline(smp_processor_id())) {
+				rcu_cpu_notify(NULL, CPU_DYING_IDLE,
+					       (void *)(long)smp_processor_id());
+				smp_mb(); /* all activity before dead. */
+				this_cpu_write(cpu_dead_idle, true);
 				arch_cpu_idle_dead();
+			}
 
 			local_irq_disable();
 			arch_cpu_idle_enter();
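The new per-CPU cpu_dead_idle flag lets a surviving CPU learn that the outgoing CPU has finished its final pass through the idle loop. The real consumer of the flag is not in this excerpt; the sketch below only illustrates the shape of such a waiter, under the assumption that it polls the flag and then re-arms it:

/* Sketch only: surviving CPU waiting for the dying CPU's final report. */
static void example_wait_for_dead_idle(int cpu)
{
	while (!per_cpu(cpu_dead_idle, cpu))
		cpu_relax();			/* flag is set in cpu_idle_loop() */
	smp_mb();				/* pairs with the dying CPU's smp_mb() */
	per_cpu(cpu_dead_idle, cpu) = false;	/* re-arm for the next offline */
}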
kernel/smpboot.c
@@ -4,6 +4,7 @@
 #include <linux/cpu.h>
 #include <linux/err.h>
 #include <linux/smp.h>
+#include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/list.h>
 #include <linux/slab.h>
@@ -314,3 +315,158 @@ void smpboot_unregister_percpu_thread(struct smp_hotplug_thread *plug_thread)
 	put_online_cpus();
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
+
+static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
+
+/*
+ * Called to poll specified CPU's state, for example, when waiting for
+ * a CPU to come online.
+ */
+int cpu_report_state(int cpu)
+{
+	return atomic_read(&per_cpu(cpu_hotplug_state, cpu));
+}
+
+/*
+ * If CPU has died properly, set its state to CPU_UP_PREPARE and
+ * return success. Otherwise, return -EBUSY if the CPU died after
+ * cpu_wait_death() timed out. And yet otherwise again, return -EAGAIN
+ * if cpu_wait_death() timed out and the CPU still hasn't gotten around
+ * to dying. In the latter two cases, the CPU might not be set up
+ * properly, but it is up to the arch-specific code to decide.
+ * Finally, -EIO indicates an unanticipated problem.
+ *
+ * Note that it is permissible to omit this call entirely, as is
+ * done in architectures that do no CPU-hotplug error checking.
+ */
+int cpu_check_up_prepare(int cpu)
+{
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
+		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
+		return 0;
+	}
+
+	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
+
+	case CPU_POST_DEAD:
+
+		/* The CPU died properly, so just start it up again. */
+		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
+		return 0;
+
+	case CPU_DEAD_FROZEN:
+
+		/*
+		 * Timeout during CPU death, so let caller know.
+		 * The outgoing CPU completed its processing, but after
+		 * cpu_wait_death() timed out and reported the error. The
+		 * caller is free to proceed, in which case the state
+		 * will be reset properly by cpu_set_state_online().
+		 * Proceeding despite this -EBUSY return makes sense
+		 * for systems where the outgoing CPUs take themselves
+		 * offline, with no post-death manipulation required from
+		 * a surviving CPU.
+		 */
+		return -EBUSY;
+
+	case CPU_BROKEN:
+
+		/*
+		 * The most likely reason we got here is that there was
+		 * a timeout during CPU death, and the outgoing CPU never
+		 * did complete its processing. This could happen on
+		 * a virtualized system if the outgoing VCPU gets preempted
+		 * for more than five seconds, and the user attempts to
+		 * immediately online that same CPU. Trying again later
+		 * might return -EBUSY above, hence -EAGAIN.
+		 */
+		return -EAGAIN;
+
+	default:
+
+		/* Should not happen. Famous last words. */
+		return -EIO;
+	}
+}
+
+/*
+ * Mark the specified CPU online.
+ *
+ * Note that it is permissible to omit this call entirely, as is
+ * done in architectures that do no CPU-hotplug error checking.
+ */
+void cpu_set_state_online(int cpu)
+{
+	(void)atomic_xchg(&per_cpu(cpu_hotplug_state, cpu), CPU_ONLINE);
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+
+/*
+ * Wait for the specified CPU to exit the idle loop and die.
+ */
+bool cpu_wait_death(unsigned int cpu, int seconds)
+{
+	int jf_left = seconds * HZ;
+	int oldstate;
+	bool ret = true;
+	int sleep_jf = 1;
+
+	might_sleep();
+
+	/* The outgoing CPU will normally get done quite quickly. */
+	if (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)
+		goto update_state;
+	udelay(5);
+
+	/* But if the outgoing CPU dawdles, wait increasingly long times. */
+	while (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {
+		schedule_timeout_uninterruptible(sleep_jf);
+		jf_left -= sleep_jf;
+		if (jf_left <= 0)
+			break;
+		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
+	}
+update_state:
+	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
+	if (oldstate == CPU_DEAD) {
+		/* Outgoing CPU died normally, update state. */
+		smp_mb(); /* atomic_read() before update. */
+		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);
+	} else {
+		/* Outgoing CPU still hasn't died, set state accordingly. */
+		if (atomic_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
+				   oldstate, CPU_BROKEN) != oldstate)
+			goto update_state;
+		ret = false;
+	}
+	return ret;
+}
+
+/*
+ * Called by the outgoing CPU to report its successful death. Return
+ * false if this report follows the surviving CPU's timing out.
+ *
+ * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
+ * timed out. This approach allows architectures to omit calls to
+ * cpu_check_up_prepare() and cpu_set_state_online() without defeating
+ * the next cpu_wait_death()'s polling loop.
+ */
+bool cpu_report_death(void)
+{
+	int oldstate;
+	int newstate;
+	int cpu = smp_processor_id();
+
+	do {
+		oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
+		if (oldstate != CPU_BROKEN)
+			newstate = CPU_DEAD;
+		else
+			newstate = CPU_DEAD_FROZEN;
+	} while (atomic_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
+				oldstate, newstate) != oldstate);
+	return newstate == CPU_DEAD;
+}
+
+#endif /* #ifdef CONFIG_HOTPLUG_CPU */
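These helpers give architectures an optional, timeout-based handshake around CPU offline and online. A hedged sketch of how arch-specific hotplug code might call them; all function names below are illustrative, not taken from any particular architecture:

/* Bring-up side, on a surviving CPU (e.g. an arch __cpu_up() path). */
static int example_arch_cpu_up(unsigned int cpu)
{
	int ret = cpu_check_up_prepare(cpu);	/* did the last offline finish cleanly? */

	if (ret && ret != -EBUSY)
		return ret;	/* -EAGAIN or -EIO: leave the CPU alone for now */
	/* ... start the CPU; once running, it calls cpu_set_state_online() ... */
	return 0;
}

/* Tear-down side, on a surviving CPU (e.g. an arch __cpu_die() path). */
static void example_arch_cpu_die(unsigned int cpu)
{
	if (!cpu_wait_death(cpu, 5))	/* poll up to five seconds for the report */
		pr_err("CPU %u did not report death\n", cpu);
}

/* Outgoing CPU, just before parking itself in its lowest-power state. */
static void example_arch_play_dead(void)
{
	(void)cpu_report_death();	/* pairs with cpu_wait_death() above */
	/* ... architecture-specific low-power wait loop ... */
}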
@@ -1180,16 +1180,7 @@ config DEBUG_CREDENTIALS
 menu "RCU Debugging"
 
 config PROVE_RCU
-	bool "RCU debugging: prove RCU correctness"
-	depends on PROVE_LOCKING
-	default n
-	help
-	  This feature enables lockdep extensions that check for correct
-	  use of RCU APIs. This is currently under development. Say Y
-	  if you want to debug RCU usage or help work on the PROVE_RCU
-	  feature.
-
-	  Say N if you are unsure.
+	def_bool PROVE_LOCKING
 
 config PROVE_RCU_REPEATEDLY
 	bool "RCU debugging: don't disable PROVE_RCU on first splat"
@@ -1257,6 +1248,30 @@ config RCU_TORTURE_TEST_RUNNABLE
 	  Say N here if you want the RCU torture tests to start only
 	  after being manually enabled via /proc.
 
+config RCU_TORTURE_TEST_SLOW_INIT
+	bool "Slow down RCU grace-period initialization to expose races"
+	depends on RCU_TORTURE_TEST
+	help
+	  This option makes grace-period initialization block for a
+	  few jiffies between initializing each pair of consecutive
+	  rcu_node structures. This helps to expose races involving
+	  grace-period initialization, in other words, it makes your
+	  kernel less stable. It can also greatly increase grace-period
+	  latency, especially on systems with large numbers of CPUs.
+	  This is useful when torture-testing RCU, but in almost no
+	  other circumstance.
+
+	  Say Y here if you want your system to crash and hang more often.
+	  Say N if you want a sane system.
+
+config RCU_TORTURE_TEST_SLOW_INIT_DELAY
+	int "How much to slow down RCU grace-period initialization"
+	range 0 5
+	default 3
+	help
+	  This option specifies the number of jiffies to wait between
+	  each rcu_node structure initialization.
+
 config RCU_CPU_STALL_TIMEOUT
 	int "RCU CPU stall timeout in seconds"
 	depends on RCU_STALL_COMMON
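For context, these options are consumed by grace-period initialization; the exact hook in the grace-period kthread is not part of this excerpt, but the delay they gate looks roughly like the following sketch:

#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT
static const int example_gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY;
#else
static const int example_gp_init_delay;
#endif

/* Sketch only: pause between rcu_node initializations when so configured. */
static void example_maybe_slow_gp_init(void)
{
	if (example_gp_init_delay > 0)
		schedule_timeout_uninterruptible(example_gp_init_delay);
}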
@@ -310,7 +310,7 @@ function dump(first, pastlast)
 			cfr[jn] = cf[j] "." cfrep[cf[j]];
 		}
 		if (cpusr[jn] > ncpus && ncpus != 0)
-			ovf = "(!)";
+			ovf = "-ovf";
 		else
 			ovf = "";
 		print "echo ", cfr[jn], cpusr[jn] ovf ": Starting build. `date`";
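With the suffix change, a scenario that requests more CPUs than are available is tagged in a shell-friendly way, so the generated build-log line comes out along these lines (scenario name, CPU count, and date are hypothetical):

TREE01 16-ovf: Starting build. Thu Apr  2 14:26:13 UTC 2015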
@@ -1,2 +1,3 @@
 CONFIG_RCU_TORTURE_TEST=y
 CONFIG_PRINTK_TIME=y
+CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y