Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU updates from Paul E. McKenney:

 - Streamline RCU's use of per-CPU variables, shifting from passing "cpu"
   arguments to functions to using "this_"-style per-CPU variable accessors
   (a minimal sketch follows the list below).

 - Signal-handling RCU updates.

 - Real-time updates.

 - Torture-test updates.

 - Miscellaneous fixes.

 - Documentation updates.
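
As a quick illustration of the per-CPU streamlining in the first item (an
excerpt in the spirit of the rcu_pending() change in the kernel/rcu/tree.c
hunks below, not a complete summary of the series):

	/* Before: callers passed the CPU of interest explicitly. */
	static int rcu_pending(int cpu)
	{
		struct rcu_state *rsp;

		for_each_rcu_flavor(rsp)
			if (__rcu_pending(rsp, per_cpu_ptr(rsp->rda, cpu)))
				return 1;
		return 0;
	}

	/* After: the function acts on the current CPU via this_cpu_ptr(). */
	static int rcu_pending(void)
	{
		struct rcu_state *rsp;

		for_each_rcu_flavor(rsp)
			if (__rcu_pending(rsp, this_cpu_ptr(rsp->rda)))
				return 1;
		return 0;
	}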

Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit d360b78f99
Committed by Ingo Molnar, 2014-11-20 08:57:58 +01:00
121 changed files with 408 additions and 1858 deletions


@@ -36,7 +36,7 @@ o How can the updater tell when a grace period has completed
executed in user mode, or executed in the idle loop, we can
safely free up that item.
-Preemptible variants of RCU (CONFIG_TREE_PREEMPT_RCU) get the
+Preemptible variants of RCU (CONFIG_PREEMPT_RCU) get the
same effect, but require that the readers manipulate CPU-local
counters. These counters allow limited types of blocking within
RCU read-side critical sections. SRCU also uses CPU-local
@@ -81,7 +81,7 @@ o I hear that RCU is patented? What is with that?
o I hear that RCU needs work in order to support realtime kernels?
This work is largely completed. Realtime-friendly RCU can be
-enabled via the CONFIG_TREE_PREEMPT_RCU kernel configuration
+enabled via the CONFIG_PREEMPT_RCU kernel configuration
parameter. However, work is in progress for enabling priority
boosting of preempted RCU read-side critical sections. This is
needed if you have CPU-bound realtime threads.


@@ -26,12 +26,6 @@ CONFIG_RCU_CPU_STALL_TIMEOUT
Stall-warning messages may be enabled and disabled completely via
/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
-CONFIG_RCU_CPU_STALL_VERBOSE
-This kernel configuration parameter causes the stall warning to
-also dump the stacks of any tasks that are blocking the current
-RCU-preempt grace period.
CONFIG_RCU_CPU_STALL_INFO
This kernel configuration parameter causes the stall warning to
@@ -77,7 +71,7 @@ This message indicates that CPU 5 detected that it was causing a stall,
and that the stall was affecting RCU-sched. This message will normally be
followed by a stack dump of the offending CPU. On TREE_RCU kernel builds,
RCU and RCU-sched are implemented by the same underlying mechanism,
-while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented
+while on PREEMPT_RCU kernel builds, RCU is instead implemented
by rcu_preempt_state.
On the other hand, if the offending CPU fails to print out a stall-warning
@@ -89,7 +83,7 @@ INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 j
This message indicates that CPU 2 detected that CPUs 3 and 5 were both
causing stalls, and that the stall was affecting RCU-bh. This message
will normally be followed by stack dumps for each CPU. Please note that
-TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
+PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
and that the tasks will be indicated by PID, for example, "P3421".
It is even possible for a rcu_preempt_state stall to be caused by both
CPUs -and- tasks, in which case the offending CPUs and tasks will all
@@ -205,10 +199,10 @@ o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
is running at a higher priority than the RCU softirq threads.
This will prevent RCU callbacks from ever being invoked,
-and in a CONFIG_TREE_PREEMPT_RCU kernel will further prevent
+and in a CONFIG_PREEMPT_RCU kernel will further prevent
RCU grace periods from ever completing. Either way, the
system will eventually run out of memory and hang. In the
-CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning
+CONFIG_PREEMPT_RCU case, you might see stall-warning
messages.
o A hardware or software issue shuts off the scheduler-clock


@@ -8,7 +8,7 @@ The following sections describe the debugfs files and formats, first
for rcutree and next for rcutiny.
-CONFIG_TREE_RCU and CONFIG_TREE_PREEMPT_RCU debugfs Files and Formats
+CONFIG_TREE_RCU and CONFIG_PREEMPT_RCU debugfs Files and Formats
These implementations of RCU provide several debugfs directories under the
top-level directory "rcu":
@@ -18,7 +18,7 @@ rcu/rcu_preempt
rcu/rcu_sched
Each directory contains files for the corresponding flavor of RCU.
-Note that rcu/rcu_preempt is only present for CONFIG_TREE_PREEMPT_RCU.
+Note that rcu/rcu_preempt is only present for CONFIG_PREEMPT_RCU.
For CONFIG_TREE_RCU, the RCU flavor maps onto the RCU-sched flavor,
so that activity for both appears in rcu/rcu_sched.


@@ -137,7 +137,7 @@ rcu_read_lock()
Used by a reader to inform the reclaimer that the reader is
entering an RCU read-side critical section. It is illegal
to block while in an RCU read-side critical section, though
-kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
+kernels built with CONFIG_PREEMPT_RCU can preempt RCU
read-side critical sections. Any RCU-protected data structure
accessed during an RCU read-side critical section is guaranteed to
remain unreclaimed for the full duration of that critical section.


@@ -7,12 +7,13 @@
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.
-The atomic_t type should be defined as a signed integer.
-Also, it should be made opaque such that any kind of cast to a normal
-C integer type will fail. Something like the following should
-suffice:
+The atomic_t type should be defined as a signed integer and
+the atomic_long_t type as a signed long integer. Also, they should
+be made opaque such that any kind of cast to a normal C integer type
+will fail. Something like the following should suffice:
typedef struct { int counter; } atomic_t;
+typedef struct { long counter; } atomic_long_t;
Historically, counter has been declared volatile. This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.
@@ -37,6 +38,9 @@ initializer is used before runtime. If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.
+As with all of the atomic_ interfaces, replace the leading "atomic_"
+with "atomic_long_" to operate on atomic_long_t.
The second interface can be used at runtime, as in:
struct foo { atomic_t counter; };
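
As an aside on the atomic_long_ naming convention documented above (an
illustrative sketch, not part of the patch; assumes <linux/atomic.h> is
included):

	static atomic_long_t nr_events = ATOMIC_LONG_INIT(0);

	static void record_event(void)
	{
		atomic_long_inc(&nr_events);		/* atomic_inc() -> atomic_long_inc() */
	}

	static long events_seen(void)
	{
		return atomic_long_read(&nr_events);	/* atomic_read() -> atomic_long_read() */
	}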


@@ -2940,6 +2940,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
quiescent states. Units are jiffies, minimum
value is one, and maximum value is HZ.
+rcutree.kthread_prio= [KNL,BOOT]
+Set the SCHED_FIFO priority of the RCU
+per-CPU kthreads (rcuc/N). This value is also
+used for the priority of the RCU boost threads
+(rcub/N). Valid values are 1-99 and the default
+is 1 (the least-favored priority).
rcutree.rcu_nocb_leader_stride= [KNL]
Set the number of NOCB kthread groups, which
defaults to the square root of the number of
@@ -3089,6 +3096,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
messages. Disable with a value less than or equal
to zero.
+rcupdate.rcu_self_test= [KNL]
+Run the RCU early boot self tests
+rcupdate.rcu_self_test_bh= [KNL]
+Run the RCU bh early boot self tests
+rcupdate.rcu_self_test_sched= [KNL]
+Run the RCU sched early boot self tests
rdinit= [KNL]
Format: <full_path>
Run specified binary instead of /init from the ramdisk,
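
For reference (a usage illustration, not text from the patch), the new
parameters above are given on the kernel command line like any other
module parameter, for example:

	rcutree.kthread_prio=50 rcupdate.rcu_self_test=1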


@@ -121,22 +121,22 @@ For example, consider the following sequence of events:
The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:
-STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4
-STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3
-STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4
-STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4
-STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3
-STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4
-STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4
+STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
+STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
+STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
+STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
+STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
+STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
+STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
STORE B=4, ...
...
and can thus result in four different combinations of values:
-x == 1, y == 2
-x == 1, y == 4
-x == 3, y == 2
-x == 3, y == 4
+x == 2, y == 1
+x == 2, y == 3
+x == 4, y == 1
+x == 4, y == 3
Furthermore, the stores committed by a CPU to the memory system may not be
@@ -694,6 +694,24 @@ Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.
+You must also be careful not to rely too much on boolean short-circuit
+evaluation. Consider this example:
+q = ACCESS_ONCE(a);
+if (a || 1 > 0)
+ACCESS_ONCE(b) = 1;
+Because the second condition is always true, the compiler can transform
+this example as following, defeating control dependency:
+q = ACCESS_ONCE(a);
+ACCESS_ONCE(b) = 1;
+This example underscores the need to ensure that the compiler cannot
+out-guess your code. More generally, although ACCESS_ONCE() does force
+the compiler to actually emit code for a given load, it does not force
+the compiler to use the results.
Finally, control dependencies do -not- provide transitivity. This is
demonstrated by two related examples, with the initial values of
x and y both being zero:
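
A reviewer's aside on the control-dependency text added above (an
illustrative sketch of the point being documented, not text from the
patch): the dependency is preserved only when the branch condition
actually uses the loaded value, for example:

	q = ACCESS_ONCE(a);
	if (q)				/* condition depends on the value just loaded */
		ACCESS_ONCE(b) = 1;	/* ordered after the load from 'a' on this branch */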


@@ -102,7 +102,7 @@ extern struct group_info init_groups;
#define INIT_IDS
#endif
-#ifdef CONFIG_TREE_PREEMPT_RCU
+#ifdef CONFIG_PREEMPT_RCU
#define INIT_TASK_RCU_TREE_PREEMPT() \
.rcu_blocked_node = NULL,
#else


@@ -57,7 +57,7 @@ enum rcutorture_type {
INVALID_RCU_FLAVOR
};
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
unsigned long *gpnum, unsigned long *completed);
void rcutorture_record_test_transition(void);
@@ -260,7 +260,7 @@ static inline int rcu_preempt_depth(void)
void rcu_init(void);
void rcu_sched_qs(void);
void rcu_bh_qs(void);
-void rcu_check_callbacks(int cpu, int user);
+void rcu_check_callbacks(int user);
struct notifier_block;
void rcu_idle_enter(void);
void rcu_idle_exit(void);
@@ -348,8 +348,8 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
*/
#define cond_resched_rcu_qs() \
do { \
-rcu_note_voluntary_context_switch(current); \
-cond_resched(); \
+if (!cond_resched()) \
+rcu_note_voluntary_context_switch(current); \
} while (0)
#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
@@ -365,7 +365,7 @@ typedef void call_rcu_func_t(struct rcu_head *head,
void (*func)(struct rcu_head *head));
void wait_rcu_gp(call_rcu_func_t crf);
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
#include <linux/rcutree.h>
#elif defined(CONFIG_TINY_RCU)
#include <linux/rcutiny.h>
@@ -867,7 +867,7 @@ static inline void rcu_preempt_sleep_check(void)
*
* In non-preemptible RCU implementations (TREE_RCU and TINY_RCU),
* it is illegal to block while in an RCU read-side critical section.
-* In preemptible RCU implementations (TREE_PREEMPT_RCU) in CONFIG_PREEMPT
+* In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPT
* kernel builds, RCU read-side critical sections may be preempted,
* but explicit blocking is illegal. Finally, in preemptible RCU
* implementations in real-time (with -rt patchset) kernel builds, RCU
@@ -902,7 +902,9 @@ static inline void rcu_read_lock(void)
* Unfortunately, this function acquires the scheduler's runqueue and
* priority-inheritance spinlocks. This means that deadlock could result
* if the caller of rcu_read_unlock() already holds one of these locks or
-* any lock that is ever acquired while holding them.
+* any lock that is ever acquired while holding them; or any lock which
+* can be taken from interrupt context because rcu_boost()->rt_mutex_lock()
+* does not disable irqs while taking ->wait_lock.
*
* That said, RCU readers are never priority boosted unless they were
* preempted. Therefore, one way to avoid deadlock is to make sure
@@ -1062,6 +1064,7 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
*/
#define RCU_INIT_POINTER(p, v) \
do { \
+rcu_dereference_sparse(p, __rcu); \
p = RCU_INITIALIZER(v); \
} while (0)
@@ -1118,7 +1121,7 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
#if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL)
-static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
+static inline int rcu_needs_cpu(unsigned long *delta_jiffies)
{
*delta_jiffies = ULONG_MAX;
return 0;


@@ -78,7 +78,7 @@ static inline void kfree_call_rcu(struct rcu_head *head,
call_rcu(head, func);
}
-static inline void rcu_note_context_switch(int cpu)
+static inline void rcu_note_context_switch(void)
{
rcu_sched_qs();
}


@@ -30,9 +30,9 @@
#ifndef __LINUX_RCUTREE_H
#define __LINUX_RCUTREE_H
-void rcu_note_context_switch(int cpu);
+void rcu_note_context_switch(void);
#ifndef CONFIG_RCU_NOCB_CPU_ALL
-int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies);
+int rcu_needs_cpu(unsigned long *delta_jiffies);
#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
void rcu_cpu_stall_reset(void);
@@ -43,7 +43,7 @@ void rcu_cpu_stall_reset(void);
*/
static inline void rcu_virt_note_context_switch(int cpu)
{
-rcu_note_context_switch(cpu);
+rcu_note_context_switch();
}
void synchronize_rcu_bh(void);


@@ -1278,9 +1278,9 @@ struct task_struct {
union rcu_special rcu_read_unlock_special;
struct list_head rcu_node_entry;
#endif /* #ifdef CONFIG_PREEMPT_RCU */
-#ifdef CONFIG_TREE_PREEMPT_RCU
+#ifdef CONFIG_PREEMPT_RCU
struct rcu_node *rcu_blocked_node;
-#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_TASKS_RCU
unsigned long rcu_tasks_nvcsw;
bool rcu_tasks_holdout;


@@ -36,7 +36,7 @@ TRACE_EVENT(rcu_utilization,
#ifdef CONFIG_RCU_TRACE
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
/*
* Tracepoint for grace-period events. Takes a string identifying the
@@ -345,7 +345,7 @@ TRACE_EVENT(rcu_fqs,
__entry->cpu, __entry->qsevent)
);
-#endif /* #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) */
+#endif /* #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) */
/*
* Tracepoint for dyntick-idle entry/exit events. These take a string


@@ -477,7 +477,7 @@ config TREE_RCU
thousands of CPUs. It also scales down nicely to
smaller systems.
-config TREE_PREEMPT_RCU
+config PREEMPT_RCU
bool "Preemptible tree-based hierarchical RCU"
depends on PREEMPT
select IRQ_WORK
@@ -501,12 +501,6 @@ config TINY_RCU
endchoice
-config PREEMPT_RCU
-def_bool TREE_PREEMPT_RCU
-help
-This option enables preemptible-RCU code that is common between
-TREE_PREEMPT_RCU and, in the old days, TINY_PREEMPT_RCU.
config TASKS_RCU
bool "Task_based RCU implementation using voluntary context switch"
default n
@@ -518,7 +512,7 @@ config TASKS_RCU
If unsure, say N.
config RCU_STALL_COMMON
-def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
+def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE )
help
This option enables RCU CPU stall code that is common between
the TINY and TREE variants of RCU. The purpose is to allow
@@ -576,7 +570,7 @@ config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value"
range 2 64 if 64BIT
range 2 32 if !64BIT
-depends on TREE_RCU || TREE_PREEMPT_RCU
+depends on TREE_RCU || PREEMPT_RCU
default 64 if 64BIT
default 32 if !64BIT
help
@@ -596,7 +590,7 @@ config RCU_FANOUT_LEAF
int "Tree-based hierarchical RCU leaf-level fanout value"
range 2 RCU_FANOUT if 64BIT
range 2 RCU_FANOUT if !64BIT
-depends on TREE_RCU || TREE_PREEMPT_RCU
+depends on TREE_RCU || PREEMPT_RCU
default 16
help
This option controls the leaf-level fanout of hierarchical
@@ -621,7 +615,7 @@ config RCU_FANOUT_LEAF
config RCU_FANOUT_EXACT
bool "Disable tree-based hierarchical RCU auto-balancing"
-depends on TREE_RCU || TREE_PREEMPT_RCU
+depends on TREE_RCU || PREEMPT_RCU
default n
help
This option forces use of the exact RCU_FANOUT value specified,
@@ -652,11 +646,11 @@ config RCU_FAST_NO_HZ
Say N if you are unsure.
config TREE_RCU_TRACE
-def_bool RCU_TRACE && ( TREE_RCU || TREE_PREEMPT_RCU )
+def_bool RCU_TRACE && ( TREE_RCU || PREEMPT_RCU )
select DEBUG_FS
help
This option provides tracing for the TREE_RCU and
-TREE_PREEMPT_RCU implementations, permitting Makefile to
+PREEMPT_RCU implementations, permitting Makefile to
trivially select kernel/rcutree_trace.c.
config RCU_BOOST
@@ -672,30 +666,31 @@ config RCU_BOOST
Say Y here if you are working with real-time apps or heavy loads
Say N here if you are unsure.
-config RCU_BOOST_PRIO
-int "Real-time priority to boost RCU readers to"
+config RCU_KTHREAD_PRIO
+int "Real-time priority to use for RCU worker threads"
range 1 99
depends on RCU_BOOST
default 1
help
-This option specifies the real-time priority to which long-term
-preempted RCU readers are to be boosted. If you are working
-with a real-time application that has one or more CPU-bound
-threads running at a real-time priority level, you should set
-RCU_BOOST_PRIO to a priority higher then the highest-priority
-real-time CPU-bound thread. The default RCU_BOOST_PRIO value
-of 1 is appropriate in the common case, which is real-time
+This option specifies the SCHED_FIFO priority value that will be
+assigned to the rcuc/n and rcub/n threads and is also the value
+used for RCU_BOOST (if enabled). If you are working with a
+real-time application that has one or more CPU-bound threads
+running at a real-time priority level, you should set
+RCU_KTHREAD_PRIO to a priority higher than the highest-priority
+real-time CPU-bound application thread. The default RCU_KTHREAD_PRIO
+value of 1 is appropriate in the common case, which is real-time
applications that do not have any CPU-bound threads.
Some real-time applications might not have a single real-time
thread that saturates a given CPU, but instead might have
multiple real-time threads that, taken together, fully utilize
-that CPU. In this case, you should set RCU_BOOST_PRIO to
+that CPU. In this case, you should set RCU_KTHREAD_PRIO to
a priority higher than the lowest-priority thread that is
conspiring to prevent the CPU from running any non-real-time
tasks. For example, if one thread at priority 10 and another
thread at priority 5 are between themselves fully consuming
-the CPU time on a given CPU, then RCU_BOOST_PRIO should be
+the CPU time on a given CPU, then RCU_KTHREAD_PRIO should be
set to priority 6 or higher.
Specify the real-time priority, or take the default if unsure.
@@ -715,7 +710,7 @@ config RCU_BOOST_DELAY
config RCU_NOCB_CPU
bool "Offload RCU callback processing from boot-selected CPUs"
-depends on TREE_RCU || TREE_PREEMPT_RCU
+depends on TREE_RCU || PREEMPT_RCU
default n
help
Use this option to reduce OS jitter for aggressive HPC or
@@ -739,6 +734,7 @@ config RCU_NOCB_CPU
choice
prompt "Build-forced no-CBs CPUs"
default RCU_NOCB_CPU_NONE
+depends on RCU_NOCB_CPU
help
This option allows no-CBs CPUs (whose RCU callbacks are invoked
from kthreads rather than from softirq context) to be specified
@@ -747,7 +743,6 @@ choice
config RCU_NOCB_CPU_NONE
bool "No build_forced no-CBs CPUs"
-depends on RCU_NOCB_CPU
help
This option does not force any of the CPUs to be no-CBs CPUs.
Only CPUs designated by the rcu_nocbs= boot parameter will be
@@ -761,7 +756,6 @@ config RCU_NOCB_CPU_NONE
config RCU_NOCB_CPU_ZERO
bool "CPU 0 is a build_forced no-CBs CPU"
-depends on RCU_NOCB_CPU
help
This option forces CPU 0 to be a no-CBs CPU, so that its RCU
callbacks are invoked by a per-CPU kthread whose name begins
@@ -776,7 +770,6 @@ config RCU_NOCB_CPU_ZERO
config RCU_NOCB_CPU_ALL
bool "All CPUs are build_forced no-CBs CPUs"
-depends on RCU_NOCB_CPU
help
This option forces all CPUs to be no-CBs CPUs. The rcu_nocbs=
boot parameter will be ignored. All CPUs' RCU callbacks will


@@ -86,6 +86,16 @@ static struct {
#define cpuhp_lock_acquire() lock_map_acquire(&cpu_hotplug.dep_map)
#define cpuhp_lock_release() lock_map_release(&cpu_hotplug.dep_map)
+static void apply_puts_pending(int max)
+{
+int delta;
+if (atomic_read(&cpu_hotplug.puts_pending) >= max) {
+delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
+cpu_hotplug.refcount -= delta;
+}
+}
void get_online_cpus(void)
{
might_sleep();
@@ -93,6 +103,7 @@ void get_online_cpus(void)
return;
cpuhp_lock_acquire_read();
mutex_lock(&cpu_hotplug.lock);
+apply_puts_pending(65536);
cpu_hotplug.refcount++;
mutex_unlock(&cpu_hotplug.lock);
}
@@ -105,6 +116,7 @@ bool try_get_online_cpus(void)
if (!mutex_trylock(&cpu_hotplug.lock))
return false;
cpuhp_lock_acquire_tryread();
+apply_puts_pending(65536);
cpu_hotplug.refcount++;
mutex_unlock(&cpu_hotplug.lock);
return true;
@@ -161,12 +173,7 @@ void cpu_hotplug_begin(void)
cpuhp_lock_acquire();
for (;;) {
mutex_lock(&cpu_hotplug.lock);
-if (atomic_read(&cpu_hotplug.puts_pending)) {
-int delta;
-delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
-cpu_hotplug.refcount -= delta;
-}
+apply_puts_pending(1);
if (likely(!cpu_hotplug.refcount))
break;
__set_current_state(TASK_UNINTERRUPTIBLE);


@@ -1022,11 +1022,14 @@ void __cleanup_sighand(struct sighand_struct *sighand)
{
if (atomic_dec_and_test(&sighand->count)) {
signalfd_cleanup(sighand);
+/*
+* sighand_cachep is SLAB_DESTROY_BY_RCU so we can free it
+* without an RCU grace period, see __lock_task_sighand().
+*/
kmem_cache_free(sighand_cachep, sighand);
}
}
/*
* Initialize POSIX timer handling for a thread group.
*/


@@ -1,6 +1,6 @@
obj-y += update.o srcu.o
obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
obj-$(CONFIG_TREE_RCU) += tree.o
-obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o
+obj-$(CONFIG_PREEMPT_RCU) += tree.o
obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o
obj-$(CONFIG_TINY_RCU) += tiny.o


@@ -135,4 +135,6 @@ int rcu_jiffies_till_stall_check(void);
*/
#define TPS(x) tracepoint_string(x)
+void rcu_early_boot_tests(void);
#endif /* __LINUX_RCU_H */


@@ -812,6 +812,7 @@ rcu_torture_cbflood(void *arg)
cur_ops->cb_barrier();
stutter_wait("rcu_torture_cbflood");
} while (!torture_must_stop());
+vfree(rhp);
torture_kthread_stopping("rcu_torture_cbflood");
return 0;
}


@@ -247,7 +247,7 @@ void rcu_bh_qs(void)
* be called from hardirq context. It is normally called from the
* scheduling-clock interrupt.
*/
-void rcu_check_callbacks(int cpu, int user)
+void rcu_check_callbacks(int user)
{
RCU_TRACE(check_cpu_stalls());
if (user || rcu_is_cpu_rrupt_from_idle())
@@ -380,7 +380,9 @@ void call_rcu_bh(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
}
EXPORT_SYMBOL_GPL(call_rcu_bh);
-void rcu_init(void)
+void __init rcu_init(void)
{
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
+rcu_early_boot_tests();
}


@@ -105,7 +105,7 @@ struct rcu_state sname##_state = { \
.name = RCU_STATE_NAME(sname), \
.abbr = sabbr, \
}; \
-DEFINE_PER_CPU(struct rcu_data, sname##_data)
+DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data)
RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched);
RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh);
@@ -152,19 +152,6 @@ EXPORT_SYMBOL_GPL(rcu_scheduler_active);
*/
static int rcu_scheduler_fully_active __read_mostly;
-#ifdef CONFIG_RCU_BOOST
-/*
-* Control variables for per-CPU and per-rcu_node kthreads. These
-* handle all flavors of RCU.
-*/
-static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
-DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
-DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
-DEFINE_PER_CPU(char, rcu_cpu_has_work);
-#endif /* #ifdef CONFIG_RCU_BOOST */
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu);
static void invoke_rcu_core(void);
static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp);
@@ -286,11 +273,11 @@ static void rcu_momentary_dyntick_idle(void)
* and requires special handling for preemptible RCU.
* The caller must have disabled preemption.
*/
-void rcu_note_context_switch(int cpu)
+void rcu_note_context_switch(void)
{
trace_rcu_utilization(TPS("Start context switch"));
rcu_sched_qs();
-rcu_preempt_note_context_switch(cpu);
+rcu_preempt_note_context_switch();
if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))
rcu_momentary_dyntick_idle();
trace_rcu_utilization(TPS("End context switch"));
@@ -325,7 +312,7 @@ static void force_qs_rnp(struct rcu_state *rsp,
unsigned long *maxj),
bool *isidle, unsigned long *maxj);
static void force_quiescent_state(struct rcu_state *rsp);
-static int rcu_pending(int cpu);
+static int rcu_pending(void);
/*
* Return the number of RCU-sched batches processed thus far for debug & stats.
@@ -510,11 +497,11 @@ cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
* we really have entered idle, and must do the appropriate accounting.
* The caller must have disabled interrupts.
*/
-static void rcu_eqs_enter_common(struct rcu_dynticks *rdtp, long long oldval,
-bool user)
+static void rcu_eqs_enter_common(long long oldval, bool user)
{
struct rcu_state *rsp;
struct rcu_data *rdp;
+struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting);
if (!user && !is_idle_task(current)) {
@@ -531,7 +518,7 @@ static void rcu_eqs_enter_common(struct rcu_dynticks *rdtp, long long oldval,
rdp = this_cpu_ptr(rsp->rda);
do_nocb_deferred_wakeup(rdp);
}
-rcu_prepare_for_idle(smp_processor_id());
+rcu_prepare_for_idle();
/* CPUs seeing atomic_inc() must see prior RCU read-side crit sects */
smp_mb__before_atomic(); /* See above. */
atomic_inc(&rdtp->dynticks);
@@ -565,7 +552,7 @@ static void rcu_eqs_enter(bool user)
WARN_ON_ONCE((oldval & DYNTICK_TASK_NEST_MASK) == 0);
if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) {
rdtp->dynticks_nesting = 0;
-rcu_eqs_enter_common(rdtp, oldval, user);
+rcu_eqs_enter_common(oldval, user);
} else {
rdtp->dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
}
@@ -589,7 +576,7 @@ void rcu_idle_enter(void)
local_irq_save(flags);
rcu_eqs_enter(false);
-rcu_sysidle_enter(this_cpu_ptr(&rcu_dynticks), 0);
+rcu_sysidle_enter(0);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_enter);
@@ -639,8 +626,8 @@ void rcu_irq_exit(void)
if (rdtp->dynticks_nesting)
trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting);
else
-rcu_eqs_enter_common(rdtp, oldval, true);
-rcu_sysidle_enter(rdtp, 1);
+rcu_eqs_enter_common(oldval, true);
+rcu_sysidle_enter(1);
local_irq_restore(flags);
}
@@ -651,16 +638,17 @@ void rcu_irq_exit(void)
* we really have exited idle, and must do the appropriate accounting.
* The caller must have disabled interrupts.
*/
-static void rcu_eqs_exit_common(struct rcu_dynticks *rdtp, long long oldval,
-int user)
+static void rcu_eqs_exit_common(long long oldval, int user)
{
+struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
rcu_dynticks_task_exit();
smp_mb__before_atomic(); /* Force ordering w/previous sojourn. */
atomic_inc(&rdtp->dynticks);
/* CPUs seeing atomic_inc() must see later RCU read-side crit sects */
smp_mb__after_atomic(); /* See above. */
WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks) & 0x1));
-rcu_cleanup_after_idle(smp_processor_id());
+rcu_cleanup_after_idle();
trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting);
if (!user && !is_idle_task(current)) {
struct task_struct *idle __maybe_unused =
@@ -691,7 +679,7 @@ static void rcu_eqs_exit(bool user)
rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
} else {
rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
-rcu_eqs_exit_common(rdtp, oldval, user);
+rcu_eqs_exit_common(oldval, user);
}
}
@@ -712,7 +700,7 @@ void rcu_idle_exit(void)
local_irq_save(flags);
rcu_eqs_exit(false);
-rcu_sysidle_exit(this_cpu_ptr(&rcu_dynticks), 0);
+rcu_sysidle_exit(0);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_exit);
@@ -763,8 +751,8 @@ void rcu_irq_enter(void)
if (oldval)
trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting);
else
-rcu_eqs_exit_common(rdtp, oldval, true);
-rcu_sysidle_exit(rdtp, 1);
+rcu_eqs_exit_common(oldval, true);
+rcu_sysidle_exit(1);
local_irq_restore(flags);
}
@@ -2387,7 +2375,7 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
* invoked from the scheduling-clock interrupt. If rcu_pending returns
* false, there is no point in invoking rcu_check_callbacks().
*/
-void rcu_check_callbacks(int cpu, int user)
+void rcu_check_callbacks(int user)
{
trace_rcu_utilization(TPS("Start scheduler-tick"));
increment_cpu_stall_ticks();
@@ -2419,8 +2407,8 @@ void rcu_check_callbacks(int cpu, int user)
rcu_bh_qs();
}
-rcu_preempt_check_callbacks(cpu);
-if (rcu_pending(cpu))
+rcu_preempt_check_callbacks();
+if (rcu_pending())
invoke_rcu_core();
if (user)
rcu_note_voluntary_context_switch(current);
@@ -2963,6 +2951,9 @@ static int synchronize_sched_expedited_cpu_stop(void *data)
*/
void synchronize_sched_expedited(void)
{
+cpumask_var_t cm;
+bool cma = false;
+int cpu;
long firstsnap, s, snap;
int trycount = 0;
struct rcu_state *rsp = &rcu_sched_state;
@@ -2997,11 +2988,26 @@ void synchronize_sched_expedited(void)
}
WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
+/* Offline CPUs, idle CPUs, and any CPU we run on are quiescent. */
+cma = zalloc_cpumask_var(&cm, GFP_KERNEL);
+if (cma) {
+cpumask_copy(cm, cpu_online_mask);
+cpumask_clear_cpu(raw_smp_processor_id(), cm);
+for_each_cpu(cpu, cm) {
+struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+if (!(atomic_add_return(0, &rdtp->dynticks) & 0x1))
+cpumask_clear_cpu(cpu, cm);
+}
+if (cpumask_weight(cm) == 0)
+goto all_cpus_idle;
+}
/*
* Each pass through the following loop attempts to force a
* context switch on each CPU.
*/
-while (try_stop_cpus(cpu_online_mask,
+while (try_stop_cpus(cma ? cm : cpu_online_mask,
synchronize_sched_expedited_cpu_stop,
NULL) == -EAGAIN) {
put_online_cpus();
@@ -3013,6 +3019,7 @@ void synchronize_sched_expedited(void)
/* ensure test happens before caller kfree */
smp_mb__before_atomic(); /* ^^^ */
atomic_long_inc(&rsp->expedited_workdone1);
+free_cpumask_var(cm);
return;
}
@@ -3022,6 +3029,7 @@ void synchronize_sched_expedited(void)
} else {
wait_rcu_gp(call_rcu_sched);
atomic_long_inc(&rsp->expedited_normal);
+free_cpumask_var(cm);
return;
}
@@ -3031,6 +3039,7 @@ void synchronize_sched_expedited(void)
/* ensure test happens before caller kfree */
smp_mb__before_atomic(); /* ^^^ */
atomic_long_inc(&rsp->expedited_workdone2);
+free_cpumask_var(cm);
return;
}
@@ -3045,6 +3054,7 @@ void synchronize_sched_expedited(void)
/* CPU hotplug operation in flight, use normal GP. */
wait_rcu_gp(call_rcu_sched);
atomic_long_inc(&rsp->expedited_normal);
+free_cpumask_var(cm);
return;
}
snap = atomic_long_read(&rsp->expedited_start);
@@ -3052,6 +3062,9 @@ void synchronize_sched_expedited(void)
}
atomic_long_inc(&rsp->expedited_stoppedcpus);
+all_cpus_idle:
+free_cpumask_var(cm);
/*
* Everyone up to our most recent fetch is covered by our grace
* period. Update the counter, but only if our work is still
@@ -3143,12 +3156,12 @@ static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp)
* by the current CPU, returning 1 if so. This function is part of the
* RCU implementation; it is -not- an exported member of the RCU API.
*/
-static int rcu_pending(int cpu)
+static int rcu_pending(void)
{
struct rcu_state *rsp;
for_each_rcu_flavor(rsp)
-if (__rcu_pending(rsp, per_cpu_ptr(rsp->rda, cpu)))
+if (__rcu_pending(rsp, this_cpu_ptr(rsp->rda)))
return 1;
return 0;
}
@@ -3158,7 +3171,7 @@ static int rcu_pending(int cpu)
* non-NULL, store an indication of whether all callbacks are lazy.
* (If there are no callbacks, all of them are deemed to be lazy.)
*/
-static int __maybe_unused rcu_cpu_has_callbacks(int cpu, bool *all_lazy)
+static int __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy)
{
bool al = true;
bool hc = false;
@@ -3166,7 +3179,7 @@ static int __maybe_unused rcu_cpu_has_callbacks(int cpu, bool *all_lazy)
struct rcu_state *rsp;
for_each_rcu_flavor(rsp) {
-rdp = per_cpu_ptr(rsp->rda, cpu);
+rdp = this_cpu_ptr(rsp->rda);
if (!rdp->nxtlist)
continue;
hc = true;
@@ -3485,8 +3498,10 @@ static int rcu_cpu_notify(struct notifier_block *self,
case CPU_DEAD_FROZEN:
case CPU_UP_CANCELED:
case CPU_UP_CANCELED_FROZEN:
-for_each_rcu_flavor(rsp)
+for_each_rcu_flavor(rsp) {
rcu_cleanup_dead_cpu(cpu, rsp);
+do_nocb_deferred_wakeup(per_cpu_ptr(rsp->rda, cpu));
+}
break;
default:
break;
@@ -3766,6 +3781,8 @@ void __init rcu_init(void)
pm_notifier(rcu_pm_notify, 0);
for_each_online_cpu(cpu)
rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
+rcu_early_boot_tests();
}
#include "tree_plugin.h"


@@ -139,7 +139,7 @@ struct rcu_node {
unsigned long expmask; /* Groups that have ->blkd_tasks */
/* elements that need to drain to allow the */
/* current expedited grace period to */
-/* complete (only for TREE_PREEMPT_RCU). */
+/* complete (only for PREEMPT_RCU). */
unsigned long qsmaskinit;
/* Per-GP initial value for qsmask & expmask. */
unsigned long grpmask; /* Mask to apply to parent qsmask. */
@@ -530,10 +530,10 @@ DECLARE_PER_CPU(struct rcu_data, rcu_sched_data);
extern struct rcu_state rcu_bh_state;
DECLARE_PER_CPU(struct rcu_data, rcu_bh_data);
-#ifdef CONFIG_TREE_PREEMPT_RCU
+#ifdef CONFIG_PREEMPT_RCU
extern struct rcu_state rcu_preempt_state;
DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data);
-#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_RCU_BOOST
DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
@@ -547,7 +547,7 @@ DECLARE_PER_CPU(char, rcu_cpu_has_work);
/* Forward declarations for rcutree_plugin.h */
static void rcu_bootup_announce(void);
long rcu_batches_completed(void);
-static void rcu_preempt_note_context_switch(int cpu);
+static void rcu_preempt_note_context_switch(void);
static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
#ifdef CONFIG_HOTPLUG_CPU
static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp,
@@ -561,12 +561,12 @@ static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
struct rcu_node *rnp,
struct rcu_data *rdp);
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-static void rcu_preempt_check_callbacks(int cpu);
+static void rcu_preempt_check_callbacks(void);
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
-#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU)
static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
bool wake);
-#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
+#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU) */
static void __init __rcu_init_preempt(void);
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
@@ -579,8 +579,8 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
#endif /* #ifdef CONFIG_RCU_BOOST */
static void __init rcu_spawn_boost_kthreads(void);
static void rcu_prepare_kthreads(int cpu);
-static void rcu_cleanup_after_idle(int cpu);
-static void rcu_prepare_for_idle(int cpu);
+static void rcu_cleanup_after_idle(void);
+static void rcu_prepare_for_idle(void);
static void rcu_idle_count_callbacks_posted(void);
static void print_cpu_stall_info_begin(void);
static void print_cpu_stall_info(struct rcu_state *rsp, int cpu);
@@ -606,8 +606,8 @@ static void __init rcu_organize_nocb_kthreads(struct rcu_state *rsp);
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
static void __maybe_unused rcu_kick_nohz_cpu(int cpu);
static bool init_nocb_callback_list(struct rcu_data *rdp);
-static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
-static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
+static void rcu_sysidle_enter(int irq);
+static void rcu_sysidle_exit(int irq);
static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
unsigned long *maxj);
static bool is_sysidle_rcu_state(struct rcu_state *rsp);


@ -30,14 +30,24 @@
#include <linux/smpboot.h> #include <linux/smpboot.h>
#include "../time/tick-internal.h" #include "../time/tick-internal.h"
#define RCU_KTHREAD_PRIO 1
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
#include "../locking/rtmutex_common.h" #include "../locking/rtmutex_common.h"
#define RCU_BOOST_PRIO CONFIG_RCU_BOOST_PRIO
#else /* rcuc/rcub kthread realtime priority */
#define RCU_BOOST_PRIO RCU_KTHREAD_PRIO static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO;
#endif module_param(kthread_prio, int, 0644);
/*
* Control variables for per-CPU and per-rcu_node kthreads. These
* handle all flavors of RCU.
*/
static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
DEFINE_PER_CPU(char, rcu_cpu_has_work);
#endif /* #ifdef CONFIG_RCU_BOOST */
#ifdef CONFIG_RCU_NOCB_CPU #ifdef CONFIG_RCU_NOCB_CPU
static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
@ -72,9 +82,6 @@ static void __init rcu_bootup_announce_oddness(void)
#ifdef CONFIG_RCU_TORTURE_TEST_RUNNABLE #ifdef CONFIG_RCU_TORTURE_TEST_RUNNABLE
pr_info("\tRCU torture testing starts during boot.\n"); pr_info("\tRCU torture testing starts during boot.\n");
#endif #endif
#if defined(CONFIG_TREE_PREEMPT_RCU) && !defined(CONFIG_RCU_CPU_STALL_VERBOSE)
pr_info("\tDump stacks of tasks blocking RCU-preempt GP.\n");
#endif
#if defined(CONFIG_RCU_CPU_STALL_INFO) #if defined(CONFIG_RCU_CPU_STALL_INFO)
pr_info("\tAdditional per-CPU info printed with stalls.\n"); pr_info("\tAdditional per-CPU info printed with stalls.\n");
#endif #endif
@ -85,9 +92,12 @@ static void __init rcu_bootup_announce_oddness(void)
pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf); pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf);
if (nr_cpu_ids != NR_CPUS) if (nr_cpu_ids != NR_CPUS)
pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids); pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids);
#ifdef CONFIG_RCU_BOOST
pr_info("\tRCU kthread priority: %d.\n", kthread_prio);
#endif
} }
#ifdef CONFIG_TREE_PREEMPT_RCU #ifdef CONFIG_PREEMPT_RCU
RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu); RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu);
static struct rcu_state *rcu_state_p = &rcu_preempt_state; static struct rcu_state *rcu_state_p = &rcu_preempt_state;
@ -156,7 +166,7 @@ static void rcu_preempt_qs(void)
* *
* Caller must disable preemption. * Caller must disable preemption.
*/ */
static void rcu_preempt_note_context_switch(int cpu) static void rcu_preempt_note_context_switch(void)
{ {
struct task_struct *t = current; struct task_struct *t = current;
unsigned long flags; unsigned long flags;
@ -167,7 +177,7 @@ static void rcu_preempt_note_context_switch(int cpu)
!t->rcu_read_unlock_special.b.blocked) { !t->rcu_read_unlock_special.b.blocked) {
/* Possibly blocking in an RCU read-side critical section. */ /* Possibly blocking in an RCU read-side critical section. */
rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu); rdp = this_cpu_ptr(rcu_preempt_state.rda);
rnp = rdp->mynode; rnp = rdp->mynode;
raw_spin_lock_irqsave(&rnp->lock, flags); raw_spin_lock_irqsave(&rnp->lock, flags);
smp_mb__after_unlock_lock(); smp_mb__after_unlock_lock();
@ -415,8 +425,6 @@ void rcu_read_unlock_special(struct task_struct *t)
} }
} }
#ifdef CONFIG_RCU_CPU_STALL_VERBOSE
/* /*
* Dump detailed information for all tasks blocking the current RCU * Dump detailed information for all tasks blocking the current RCU
* grace period on the specified rcu_node structure. * grace period on the specified rcu_node structure.
@ -451,14 +459,6 @@ static void rcu_print_detail_task_stall(struct rcu_state *rsp)
rcu_print_detail_task_stall_rnp(rnp); rcu_print_detail_task_stall_rnp(rnp);
} }
#else /* #ifdef CONFIG_RCU_CPU_STALL_VERBOSE */
static void rcu_print_detail_task_stall(struct rcu_state *rsp)
{
}
#endif /* #else #ifdef CONFIG_RCU_CPU_STALL_VERBOSE */
#ifdef CONFIG_RCU_CPU_STALL_INFO #ifdef CONFIG_RCU_CPU_STALL_INFO
static void rcu_print_task_stall_begin(struct rcu_node *rnp) static void rcu_print_task_stall_begin(struct rcu_node *rnp)
@ -621,7 +621,7 @@ static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
* *
* Caller must disable hard irqs. * Caller must disable hard irqs.
*/ */
-static void rcu_preempt_check_callbacks(int cpu)
+static void rcu_preempt_check_callbacks(void)
{ {
struct task_struct *t = current; struct task_struct *t = current;
@ -630,8 +630,8 @@ static void rcu_preempt_check_callbacks(int cpu)
return; return;
} }
if (t->rcu_read_lock_nesting > 0 && if (t->rcu_read_lock_nesting > 0 &&
-per_cpu(rcu_preempt_data, cpu).qs_pending &&
-!per_cpu(rcu_preempt_data, cpu).passed_quiesce)
+__this_cpu_read(rcu_preempt_data.qs_pending) &&
+!__this_cpu_read(rcu_preempt_data.passed_quiesce))
t->rcu_read_unlock_special.b.need_qs = true; t->rcu_read_unlock_special.b.need_qs = true;
} }
@ -919,7 +919,7 @@ void exit_rcu(void)
__rcu_read_unlock(); __rcu_read_unlock();
} }
-#else /* #ifdef CONFIG_TREE_PREEMPT_RCU */
+#else /* #ifdef CONFIG_PREEMPT_RCU */
static struct rcu_state *rcu_state_p = &rcu_sched_state; static struct rcu_state *rcu_state_p = &rcu_sched_state;
@ -945,7 +945,7 @@ EXPORT_SYMBOL_GPL(rcu_batches_completed);
* Because preemptible RCU does not exist, we never have to check for * Because preemptible RCU does not exist, we never have to check for
* CPUs being in quiescent states. * CPUs being in quiescent states.
*/ */
-static void rcu_preempt_note_context_switch(int cpu)
+static void rcu_preempt_note_context_switch(void)
{ {
} }
@ -1017,7 +1017,7 @@ static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
* Because preemptible RCU does not exist, it never has any callbacks * Because preemptible RCU does not exist, it never has any callbacks
* to check. * to check.
*/ */
-static void rcu_preempt_check_callbacks(int cpu)
+static void rcu_preempt_check_callbacks(void)
{ {
} }
@ -1070,7 +1070,7 @@ void exit_rcu(void)
{ {
} }
-#endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */
+#endif /* #else #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
@ -1326,7 +1326,7 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
smp_mb__after_unlock_lock(); smp_mb__after_unlock_lock();
rnp->boost_kthread_task = t; rnp->boost_kthread_task = t;
raw_spin_unlock_irqrestore(&rnp->lock, flags); raw_spin_unlock_irqrestore(&rnp->lock, flags);
-sp.sched_priority = RCU_BOOST_PRIO;
+sp.sched_priority = kthread_prio;
sched_setscheduler_nocheck(t, SCHED_FIFO, &sp); sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */ wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */
return 0; return 0;
@ -1343,7 +1343,7 @@ static void rcu_cpu_kthread_setup(unsigned int cpu)
{ {
struct sched_param sp; struct sched_param sp;
-sp.sched_priority = RCU_KTHREAD_PRIO;
+sp.sched_priority = kthread_prio;
sched_setscheduler_nocheck(current, SCHED_FIFO, &sp); sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
} }
@ -1512,10 +1512,10 @@ static void rcu_prepare_kthreads(int cpu)
* any flavor of RCU. * any flavor of RCU.
*/ */
#ifndef CONFIG_RCU_NOCB_CPU_ALL #ifndef CONFIG_RCU_NOCB_CPU_ALL
-int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
+int rcu_needs_cpu(unsigned long *delta_jiffies)
{ {
*delta_jiffies = ULONG_MAX; *delta_jiffies = ULONG_MAX;
-return rcu_cpu_has_callbacks(cpu, NULL);
+return rcu_cpu_has_callbacks(NULL);
} }
#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
@ -1523,7 +1523,7 @@ int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
* Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up
* after it. * after it.
*/ */
-static void rcu_cleanup_after_idle(int cpu)
+static void rcu_cleanup_after_idle(void)
{ {
} }
@ -1531,7 +1531,7 @@ static void rcu_cleanup_after_idle(int cpu)
* Do the idle-entry grace-period work, which, because CONFIG_RCU_FAST_NO_HZ=n, * Do the idle-entry grace-period work, which, because CONFIG_RCU_FAST_NO_HZ=n,
* is nothing. * is nothing.
*/ */
-static void rcu_prepare_for_idle(int cpu)
+static void rcu_prepare_for_idle(void)
{ {
} }
@ -1624,15 +1624,15 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void)
* The caller must have disabled interrupts. * The caller must have disabled interrupts.
*/ */
#ifndef CONFIG_RCU_NOCB_CPU_ALL #ifndef CONFIG_RCU_NOCB_CPU_ALL
-int rcu_needs_cpu(int cpu, unsigned long *dj)
+int rcu_needs_cpu(unsigned long *dj)
{ {
-struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
/* Snapshot to detect later posting of non-lazy callback. */ /* Snapshot to detect later posting of non-lazy callback. */
rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted; rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
/* If no callbacks, RCU doesn't need the CPU. */ /* If no callbacks, RCU doesn't need the CPU. */
-if (!rcu_cpu_has_callbacks(cpu, &rdtp->all_lazy)) {
+if (!rcu_cpu_has_callbacks(&rdtp->all_lazy)) {
*dj = ULONG_MAX; *dj = ULONG_MAX;
return 0; return 0;
} }
@ -1666,12 +1666,12 @@ int rcu_needs_cpu(int cpu, unsigned long *dj)
* *
* The caller must have disabled interrupts. * The caller must have disabled interrupts.
*/ */
-static void rcu_prepare_for_idle(int cpu)
+static void rcu_prepare_for_idle(void)
{ {
#ifndef CONFIG_RCU_NOCB_CPU_ALL #ifndef CONFIG_RCU_NOCB_CPU_ALL
bool needwake; bool needwake;
struct rcu_data *rdp; struct rcu_data *rdp;
-struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
struct rcu_node *rnp; struct rcu_node *rnp;
struct rcu_state *rsp; struct rcu_state *rsp;
int tne; int tne;
@ -1679,7 +1679,7 @@ static void rcu_prepare_for_idle(int cpu)
/* Handle nohz enablement switches conservatively. */ /* Handle nohz enablement switches conservatively. */
tne = ACCESS_ONCE(tick_nohz_active); tne = ACCESS_ONCE(tick_nohz_active);
if (tne != rdtp->tick_nohz_enabled_snap) { if (tne != rdtp->tick_nohz_enabled_snap) {
-if (rcu_cpu_has_callbacks(cpu, NULL))
+if (rcu_cpu_has_callbacks(NULL))
invoke_rcu_core(); /* force nohz to see update. */ invoke_rcu_core(); /* force nohz to see update. */
rdtp->tick_nohz_enabled_snap = tne; rdtp->tick_nohz_enabled_snap = tne;
return; return;
@ -1688,7 +1688,7 @@ static void rcu_prepare_for_idle(int cpu)
return; return;
/* If this is a no-CBs CPU, no callbacks, just return. */ /* If this is a no-CBs CPU, no callbacks, just return. */
-if (rcu_is_nocb_cpu(cpu))
+if (rcu_is_nocb_cpu(smp_processor_id()))
return; return;
/* /*
@ -1712,7 +1712,7 @@ static void rcu_prepare_for_idle(int cpu)
return; return;
rdtp->last_accelerate = jiffies; rdtp->last_accelerate = jiffies;
for_each_rcu_flavor(rsp) { for_each_rcu_flavor(rsp) {
-rdp = per_cpu_ptr(rsp->rda, cpu);
+rdp = this_cpu_ptr(rsp->rda);
if (!*rdp->nxttail[RCU_DONE_TAIL]) if (!*rdp->nxttail[RCU_DONE_TAIL])
continue; continue;
rnp = rdp->mynode; rnp = rdp->mynode;
@ -1731,10 +1731,10 @@ static void rcu_prepare_for_idle(int cpu)
* any grace periods that elapsed while the CPU was idle, and if any * any grace periods that elapsed while the CPU was idle, and if any
* callbacks are now ready to invoke, initiate invocation. * callbacks are now ready to invoke, initiate invocation.
*/ */
-static void rcu_cleanup_after_idle(int cpu)
+static void rcu_cleanup_after_idle(void)
{ {
#ifndef CONFIG_RCU_NOCB_CPU_ALL #ifndef CONFIG_RCU_NOCB_CPU_ALL
-if (rcu_is_nocb_cpu(cpu))
+if (rcu_is_nocb_cpu(smp_processor_id()))
return; return;
if (rcu_try_advance_all_cbs()) if (rcu_try_advance_all_cbs())
invoke_rcu_core(); invoke_rcu_core();
@ -2573,9 +2573,13 @@ static void rcu_spawn_one_nocb_kthread(struct rcu_state *rsp, int cpu)
rdp->nocb_leader = rdp_spawn; rdp->nocb_leader = rdp_spawn;
if (rdp_last && rdp != rdp_spawn) if (rdp_last && rdp != rdp_spawn)
rdp_last->nocb_next_follower = rdp; rdp_last->nocb_next_follower = rdp;
-		rdp_last = rdp;
-		rdp = rdp->nocb_next_follower;
-		rdp_last->nocb_next_follower = NULL;
+		if (rdp == rdp_spawn) {
+			rdp = rdp->nocb_next_follower;
+		} else {
+			rdp_last = rdp;
+			rdp = rdp->nocb_next_follower;
+			rdp_last->nocb_next_follower = NULL;
+		}
} while (rdp); } while (rdp);
rdp_spawn->nocb_next_follower = rdp_old_leader; rdp_spawn->nocb_next_follower = rdp_old_leader;
} }
@ -2761,9 +2765,10 @@ static int full_sysidle_state; /* Current system-idle state. */
* to detect full-system idle states, not RCU quiescent states and grace * to detect full-system idle states, not RCU quiescent states and grace
* periods. The caller must have disabled interrupts. * periods. The caller must have disabled interrupts.
*/ */
-static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
+static void rcu_sysidle_enter(int irq)
{ {
unsigned long j; unsigned long j;
struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
/* If there are no nohz_full= CPUs, no need to track this. */ /* If there are no nohz_full= CPUs, no need to track this. */
if (!tick_nohz_full_enabled()) if (!tick_nohz_full_enabled())
@ -2832,8 +2837,10 @@ void rcu_sysidle_force_exit(void)
* usermode execution does -not- count as idle here! The caller must * usermode execution does -not- count as idle here! The caller must
* have disabled interrupts. * have disabled interrupts.
*/ */
-static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
+static void rcu_sysidle_exit(int irq)
{ {
struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
/* If there are no nohz_full= CPUs, no need to track this. */ /* If there are no nohz_full= CPUs, no need to track this. */
if (!tick_nohz_full_enabled()) if (!tick_nohz_full_enabled())
return; return;
@ -3127,11 +3134,11 @@ static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp)
#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ #else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
-static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
+static void rcu_sysidle_enter(int irq)
{ {
} }
-static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq)
+static void rcu_sysidle_exit(int irq)
{ {
} }
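The recurring change in the tree_plugin.h hunks above is that functions such as rcu_preempt_note_context_switch(), rcu_preempt_check_callbacks(), rcu_needs_cpu(), rcu_prepare_for_idle(), and rcu_cleanup_after_idle() no longer take an explicit "int cpu" argument: their callers already run on the CPU in question with preemption or interrupts disabled, so each callee reads its own per-CPU state with this_cpu_ptr()/__this_cpu_read() instead of per_cpu()/per_cpu_ptr(). A minimal before/after sketch of that shape, using the rcu_preempt_data flag touched above; the two helper names are illustrative, not functions from this patch:

DEFINE_PER_CPU(struct rcu_data, rcu_preempt_data);

/* Old style: the caller names the CPU of interest explicitly. */
static bool qs_pending_for(int cpu)
{
	return per_cpu(rcu_preempt_data, cpu).qs_pending;
}

/*
 * New style: the callee reads the current CPU's copy directly; the
 * caller must already be pinned to this CPU (preemption or irqs off).
 */
static bool qs_pending_here(void)
{
	return __this_cpu_read(rcu_preempt_data.qs_pending);
}

Separately, the kthread_prio module parameter introduced at the top of this file replaces the compile-time RCU_KTHREAD_PRIO/RCU_BOOST_PRIO constants, so the SCHED_FIFO priority applied in rcu_spawn_one_boost_kthread() and rcu_cpu_kthread_setup() becomes a boot-time tunable, presumably rcutree.kthread_prio=N on the kernel command line given the rcutree. prefix used for tree-RCU parameters.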

View File

@ -306,7 +306,7 @@ struct debug_obj_descr rcuhead_debug_descr = {
EXPORT_SYMBOL_GPL(rcuhead_debug_descr); EXPORT_SYMBOL_GPL(rcuhead_debug_descr);
#endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */ #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE)
void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp, void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp,
unsigned long secs, unsigned long secs,
unsigned long c_old, unsigned long c) unsigned long c_old, unsigned long c)
@ -531,7 +531,8 @@ static int __noreturn rcu_tasks_kthread(void *arg)
struct rcu_head *next; struct rcu_head *next;
LIST_HEAD(rcu_tasks_holdouts); LIST_HEAD(rcu_tasks_holdouts);
-	/* FIXME: Add housekeeping affinity. */
+	/* Run on housekeeping CPUs by default. Sysadm can move if desired. */
+	housekeeping_affine(current);
/* /*
* Each pass through the following loop makes one check for * Each pass through the following loop makes one check for
@ -690,3 +691,87 @@ static void rcu_spawn_tasks_kthread(void)
} }
#endif /* #ifdef CONFIG_TASKS_RCU */ #endif /* #ifdef CONFIG_TASKS_RCU */
#ifdef CONFIG_PROVE_RCU
/*
* Early boot self test parameters, one for each flavor
*/
static bool rcu_self_test;
static bool rcu_self_test_bh;
static bool rcu_self_test_sched;
module_param(rcu_self_test, bool, 0444);
module_param(rcu_self_test_bh, bool, 0444);
module_param(rcu_self_test_sched, bool, 0444);
static int rcu_self_test_counter;
static void test_callback(struct rcu_head *r)
{
rcu_self_test_counter++;
pr_info("RCU test callback executed %d\n", rcu_self_test_counter);
}
static void early_boot_test_call_rcu(void)
{
static struct rcu_head head;
call_rcu(&head, test_callback);
}
static void early_boot_test_call_rcu_bh(void)
{
static struct rcu_head head;
call_rcu_bh(&head, test_callback);
}
static void early_boot_test_call_rcu_sched(void)
{
static struct rcu_head head;
call_rcu_sched(&head, test_callback);
}
void rcu_early_boot_tests(void)
{
pr_info("Running RCU self tests\n");
if (rcu_self_test)
early_boot_test_call_rcu();
if (rcu_self_test_bh)
early_boot_test_call_rcu_bh();
if (rcu_self_test_sched)
early_boot_test_call_rcu_sched();
}
static int rcu_verify_early_boot_tests(void)
{
int ret = 0;
int early_boot_test_counter = 0;
if (rcu_self_test) {
early_boot_test_counter++;
rcu_barrier();
}
if (rcu_self_test_bh) {
early_boot_test_counter++;
rcu_barrier_bh();
}
if (rcu_self_test_sched) {
early_boot_test_counter++;
rcu_barrier_sched();
}
if (rcu_self_test_counter != early_boot_test_counter) {
WARN_ON(1);
ret = -1;
}
return ret;
}
late_initcall(rcu_verify_early_boot_tests);
#else
void rcu_early_boot_tests(void) {}
#endif /* CONFIG_PROVE_RCU */
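The three rcu_self_test* parameters above are boot-time only (permission 0444) and default to off. Enabling one causes rcu_early_boot_tests() to post a single callback for that flavor, and rcu_verify_early_boot_tests(), run as a late_initcall, waits with the matching rcu_barrier()/rcu_barrier_bh()/rcu_barrier_sched() and WARNs if the observed callback count does not match. The rcutorture boot-parameter files added later in this diff enable them on the kernel command line, for example:

rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_bh=1
rcupdate.rcu_self_test_sched=1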

View File

@ -2802,7 +2802,7 @@ need_resched:
preempt_disable(); preempt_disable();
cpu = smp_processor_id(); cpu = smp_processor_id();
rq = cpu_rq(cpu); rq = cpu_rq(cpu);
-rcu_note_context_switch(cpu);
+rcu_note_context_switch();
prev = rq->curr; prev = rq->curr;
schedule_debug(prev); schedule_debug(prev);

View File

@ -1275,7 +1275,17 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
local_irq_restore(*flags); local_irq_restore(*flags);
break; break;
} }
/*
* This sighand can be already freed and even reused, but
* we rely on SLAB_DESTROY_BY_RCU and sighand_ctor() which
* initializes ->siglock: this slab can't go away, it has
* the same object type, ->siglock can't be reinitialized.
*
* We need to ensure that tsk->sighand is still the same
* after we take the lock, we can race with de_thread() or
* __exit_signal(). In the latter case the next iteration
* must see ->sighand == NULL.
*/
spin_lock(&sighand->siglock); spin_lock(&sighand->siglock);
if (likely(sighand == tsk->sighand)) { if (likely(sighand == tsk->sighand)) {
rcu_read_unlock(); rcu_read_unlock();
@ -1331,23 +1341,21 @@ int kill_pid_info(int sig, struct siginfo *info, struct pid *pid)
int error = -ESRCH; int error = -ESRCH;
struct task_struct *p; struct task_struct *p;
-	rcu_read_lock();
-retry:
-	p = pid_task(pid, PIDTYPE_PID);
-	if (p) {
-		error = group_send_sig_info(sig, info, p);
-		if (unlikely(error == -ESRCH))
-			/*
-			 * The task was unhashed in between, try again.
-			 * If it is dead, pid_task() will return NULL,
-			 * if we race with de_thread() it will find the
-			 * new leader.
-			 */
-			goto retry;
-	}
-	rcu_read_unlock();
-	return error;
+	for (;;) {
+		rcu_read_lock();
+		p = pid_task(pid, PIDTYPE_PID);
+		if (p)
+			error = group_send_sig_info(sig, info, p);
+		rcu_read_unlock();
+		if (likely(!p || error != -ESRCH))
+			return error;
+
+		/*
+		 * The task was unhashed in between, try again. If it
+		 * is dead, pid_task() will return NULL, if we race with
+		 * de_thread() it will find the new leader.
+		 */
+	}
} }
int kill_proc_info(int sig, struct siginfo *info, pid_t pid) int kill_proc_info(int sig, struct siginfo *info, pid_t pid)

View File

@ -656,7 +656,7 @@ static void run_ksoftirqd(unsigned int cpu)
* in the task stack here. * in the task stack here.
*/ */
__do_softirq(); __do_softirq();
-rcu_note_context_switch(cpu);
+rcu_note_context_switch();
local_irq_enable(); local_irq_enable();
cond_resched(); cond_resched();
return; return;

View File

@ -585,7 +585,7 @@ static ktime_t tick_nohz_stop_sched_tick(struct tick_sched *ts,
last_jiffies = jiffies; last_jiffies = jiffies;
} while (read_seqretry(&jiffies_lock, seq)); } while (read_seqretry(&jiffies_lock, seq));
-if (rcu_needs_cpu(cpu, &rcu_delta_jiffies) ||
+if (rcu_needs_cpu(&rcu_delta_jiffies) ||
arch_needs_cpu() || irq_work_needs_cpu()) { arch_needs_cpu() || irq_work_needs_cpu()) {
next_jiffies = last_jiffies + 1; next_jiffies = last_jiffies + 1;
delta_jiffies = 1; delta_jiffies = 1;

View File

@ -1377,12 +1377,11 @@ unsigned long get_next_timer_interrupt(unsigned long now)
void update_process_times(int user_tick) void update_process_times(int user_tick)
{ {
struct task_struct *p = current; struct task_struct *p = current;
int cpu = smp_processor_id();
/* Note: this timer irq context must be accounted for as well. */ /* Note: this timer irq context must be accounted for as well. */
account_process_tick(p, user_tick); account_process_tick(p, user_tick);
run_local_timers(); run_local_timers();
-rcu_check_callbacks(cpu, user_tick);
+rcu_check_callbacks(user_tick);
#ifdef CONFIG_IRQ_WORK #ifdef CONFIG_IRQ_WORK
if (in_irq()) if (in_irq())
irq_work_tick(); irq_work_tick();

View File

@ -1238,21 +1238,9 @@ config RCU_CPU_STALL_TIMEOUT
RCU grace period persists, additional CPU stall warnings are RCU grace period persists, additional CPU stall warnings are
printed at more widely spaced intervals. printed at more widely spaced intervals.
config RCU_CPU_STALL_VERBOSE
bool "Print additional per-task information for RCU_CPU_STALL_DETECTOR"
depends on TREE_PREEMPT_RCU
default y
help
This option causes RCU to printk detailed per-task information
for any tasks that are stalling the current RCU grace period.
Say N if you are unsure.
Say Y if you want to enable such checks.
config RCU_CPU_STALL_INFO config RCU_CPU_STALL_INFO
bool "Print additional diagnostics on RCU CPU stall" bool "Print additional diagnostics on RCU CPU stall"
-depends on (TREE_RCU || TREE_PREEMPT_RCU) && DEBUG_KERNEL
+depends on (TREE_RCU || PREEMPT_RCU) && DEBUG_KERNEL
default n default n
help help
For each stalled CPU that is aware of the current RCU grace For each stalled CPU that is aware of the current RCU grace

View File

@ -45,7 +45,7 @@ trap 'rm -rf $T' 0
touch $T touch $T
. $KVM/bin/functions.sh . $KVM/bin/functions.sh
-. $KVPATH/ver_functions.sh
+. $CONFIGFRAG/ver_functions.sh
config_template=${1} config_template=${1}
config_dir=`echo $config_template | sed -e 's,/[^/]*$,,'` config_dir=`echo $config_template | sed -e 's,/[^/]*$,,'`
@ -168,8 +168,8 @@ then
touch $resdir/buildonly touch $resdir/buildonly
exit 0 exit 0
fi fi
-echo $QEMU $qemu_args -m 512 -kernel $builddir/$BOOT_IMAGE -append \"$qemu_append $boot_args\" > $resdir/qemu-cmd
+echo $QEMU $qemu_args -m 512 -kernel $resdir/bzImage -append \"$qemu_append $boot_args\" > $resdir/qemu-cmd
-( $QEMU $qemu_args -m 512 -kernel $builddir/$BOOT_IMAGE -append "$qemu_append $boot_args"; echo $? > $resdir/qemu-retval ) &
+( $QEMU $qemu_args -m 512 -kernel $resdir/bzImage -append "$qemu_append $boot_args"; echo $? > $resdir/qemu-retval ) &
qemu_pid=$! qemu_pid=$!
commandcompleted=0 commandcompleted=0
echo Monitoring qemu job at pid $qemu_pid echo Monitoring qemu job at pid $qemu_pid

View File

@ -47,7 +47,6 @@ resdir=""
configs="" configs=""
cpus=0 cpus=0
ds=`date +%Y.%m.%d-%H:%M:%S` ds=`date +%Y.%m.%d-%H:%M:%S`
kversion=""
. functions.sh . functions.sh
@ -64,7 +63,6 @@ usage () {
echo " --duration minutes" echo " --duration minutes"
echo " --interactive" echo " --interactive"
echo " --kmake-arg kernel-make-arguments" echo " --kmake-arg kernel-make-arguments"
echo " --kversion vN.NN"
echo " --mac nn:nn:nn:nn:nn:nn" echo " --mac nn:nn:nn:nn:nn:nn"
echo " --no-initrd" echo " --no-initrd"
echo " --qemu-args qemu-system-..." echo " --qemu-args qemu-system-..."
@ -128,11 +126,6 @@ do
TORTURE_KMAKE_ARG="$2" TORTURE_KMAKE_ARG="$2"
shift shift
;; ;;
--kversion)
checkarg --kversion "(kernel version)" $# "$2" '^v[0-9.]*$' '^error'
kversion=$2
shift
;;
--mac) --mac)
checkarg --mac "(MAC address)" $# "$2" '^\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}$' error checkarg --mac "(MAC address)" $# "$2" '^\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}$' error
TORTURE_QEMU_MAC=$2 TORTURE_QEMU_MAC=$2
@ -170,11 +163,10 @@ do
done done
CONFIGFRAG=${KVM}/configs/${TORTURE_SUITE}; export CONFIGFRAG CONFIGFRAG=${KVM}/configs/${TORTURE_SUITE}; export CONFIGFRAG
KVPATH=${CONFIGFRAG}/$kversion; export KVPATH
if test -z "$configs" if test -z "$configs"
then then
-configs="`cat $CONFIGFRAG/$kversion/CFLIST`"
+configs="`cat $CONFIGFRAG/CFLIST`"
fi fi
if test -z "$resdir" if test -z "$resdir"
@ -186,10 +178,10 @@ fi
touch $T/cfgcpu touch $T/cfgcpu
for CF in $configs for CF in $configs
do do
-if test -f "$CONFIGFRAG/$kversion/$CF"
+if test -f "$CONFIGFRAG/$CF"
then then
-cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$kversion/$CF`
+cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF`
-cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$kversion/$CF" "$cpu_count"`
+cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF" "$cpu_count"`
echo $CF $cpu_count >> $T/cfgcpu echo $CF $cpu_count >> $T/cfgcpu
else else
echo "The --configs file $CF does not exist, terminating." echo "The --configs file $CF does not exist, terminating."
@ -252,7 +244,6 @@ END {
cat << ___EOF___ > $T/script cat << ___EOF___ > $T/script
CONFIGFRAG="$CONFIGFRAG"; export CONFIGFRAG CONFIGFRAG="$CONFIGFRAG"; export CONFIGFRAG
KVM="$KVM"; export KVM KVM="$KVM"; export KVM
KVPATH="$KVPATH"; export KVPATH
PATH="$PATH"; export PATH PATH="$PATH"; export PATH
TORTURE_BOOT_IMAGE="$TORTURE_BOOT_IMAGE"; export TORTURE_BOOT_IMAGE TORTURE_BOOT_IMAGE="$TORTURE_BOOT_IMAGE"; export TORTURE_BOOT_IMAGE
TORTURE_BUILDONLY="$TORTURE_BUILDONLY"; export TORTURE_BUILDONLY TORTURE_BUILDONLY="$TORTURE_BUILDONLY"; export TORTURE_BUILDONLY
@ -285,7 +276,7 @@ then
fi fi
___EOF___ ___EOF___
awk < $T/cfgcpu.pack \ awk < $T/cfgcpu.pack \
-	-v CONFIGDIR="$CONFIGFRAG/$kversion/" \
+	-v CONFIGDIR="$CONFIGFRAG/" \
-v KVM="$KVM" \ -v KVM="$KVM" \
-v ncpus=$cpus \ -v ncpus=$cpus \
-v rd=$resdir/$ds/ \ -v rd=$resdir/$ds/ \

View File

@ -7,6 +7,8 @@ CONFIG_HZ_PERIODIC=y
CONFIG_NO_HZ_IDLE=n CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_PREEMPT_COUNT=y CONFIG_PREEMPT_COUNT=y

View File

@ -0,0 +1,2 @@
rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_bh=1

View File

@ -2,7 +2,7 @@ CONFIG_SMP=y
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -14,6 +14,5 @@ CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ZERO=y CONFIG_RCU_NOCB_CPU_ZERO=y
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=8
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -19,6 +19,5 @@ CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=n CONFIG_PROVE_LOCKING=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=y
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=8
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -19,6 +19,5 @@ CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=n CONFIG_PROVE_LOCKING=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=y
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=8
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=y CONFIG_HZ_PERIODIC=y
CONFIG_NO_HZ_IDLE=n CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -15,7 +15,6 @@ CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_RCU_BOOST=y CONFIG_RCU_BOOST=y
-CONFIG_RCU_BOOST_PRIO=2
+CONFIG_RCU_KTHREAD_PRIO=2
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -19,5 +19,4 @@ CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_RCU_CPU_STALL_VERBOSE=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -19,5 +19,4 @@ CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -1 +1,2 @@
rcutorture.torture_type=sched rcutorture.torture_type=sched
rcupdate.rcu_self_test_sched=1

View File

@ -20,5 +20,4 @@ CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y CONFIG_DEBUG_OBJECTS_RCU_HEAD=y

View File

@ -0,0 +1,3 @@
rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_bh=1
rcupdate.rcu_self_test_sched=1

View File

@ -19,5 +19,4 @@ CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=16
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -18,7 +18,8 @@ CONFIG_RCU_FANOUT_LEAF=2
CONFIG_RCU_NOCB_CPU=y CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ALL=y CONFIG_RCU_NOCB_CPU_ALL=y
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=16
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -19,6 +19,5 @@ CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ALL=y CONFIG_RCU_NOCB_CPU_ALL=y
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -1 +1,3 @@
rcutorture.torture_type=sched rcutorture.torture_type=sched
rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_sched=1

View File

@ -3,7 +3,7 @@ CONFIG_NR_CPUS=1
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
-#CHECK#CONFIG_TREE_PREEMPT_RCU=y
+#CHECK#CONFIG_PREEMPT_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
@ -14,6 +14,5 @@ CONFIG_HIBERNATION=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_CPU_STALL_VERBOSE=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n

View File

@ -1,14 +0,0 @@
P1-S-T-NH-SD-SMP-HP
P2-2-t-nh-sd-SMP-hp
P3-3-T-nh-SD-SMP-hp
P4-A-t-NH-sd-SMP-HP
P5-U-T-NH-sd-SMP-hp
N1-S-T-NH-SD-SMP-HP
N2-2-t-nh-sd-SMP-hp
N3-3-T-nh-SD-SMP-hp
N4-A-t-NH-sd-SMP-HP
N5-U-T-NH-sd-SMP-hp
PT1-nh
PT2-NH
NT1-nh
NT3-NH

View File

@ -1,18 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,18 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,23 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TRACE=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=n
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=y
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,19 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,27 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,23 +0,0 @@
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_RCU_TRACE=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=n
#
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=y
#
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,33 +0,0 @@
#!/bin/bash
#
# Kernel-version-dependent shell functions for the rest of the scripts.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, you can access it online at
# http://www.gnu.org/licenses/gpl-2.0.html.
#
# Copyright (C) IBM Corporation, 2013
#
# Authors: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
# per_version_boot_params bootparam-string config-file seconds
#
# Adds per-version torture-module parameters to kernels supporting them.
# Which old kernels do not.
per_version_boot_params () {
echo rcutorture.stat_interval=15 \
rcutorture.shutdown_secs=$3 \
rcutorture.rcutorture_runnable=1 \
rcutorture.test_no_idle_hz=1 \
rcutorture.verbose=1
}

View File

@ -1,17 +0,0 @@
sysidleY.2013.06.19a
sysidleN.2013.06.19a
P1-S-T-NH-SD-SMP-HP
P2-2-t-nh-sd-SMP-hp
P3-3-T-nh-SD-SMP-hp
P4-A-t-NH-sd-SMP-HP
P5-U-T-NH-sd-SMP-hp
P6---t-nh-SD-smp-hp
N1-S-T-NH-SD-SMP-HP
N2-2-t-nh-sd-SMP-hp
N3-3-T-nh-SD-SMP-hp
N4-A-t-NH-sd-SMP-HP
N5-U-T-NH-sd-SMP-hp
PT1-nh
PT2-NH
NT1-nh
NT3-NH

View File

@ -1,19 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,18 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,19 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_NR_CPUS=1
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,26 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=y
CONFIG_RCU_NOCB_CPU_ZERO=n
CONFIG_RCU_NOCB_CPU_ALL=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=14
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,23 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TRACE=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=n
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=y
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,27 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,18 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=n
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,30 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=n
CONFIG_RCU_NOCB_CPU_ZERO=n
CONFIG_RCU_NOCB_CPU_ALL=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_SLUB=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,30 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=y
CONFIG_RCU_NOCB_CPU_ZERO=n
CONFIG_RCU_NOCB_CPU_ALL=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_SLUB=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,30 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=y
CONFIG_RCU_NOCB_CPU_ZERO=n
CONFIG_RCU_NOCB_CPU_ALL=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_SLUB=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,30 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=16
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=n
CONFIG_RCU_NOCB_CPU_ZERO=y
CONFIG_RCU_NOCB_CPU_ALL=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_SLUB=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,23 +0,0 @@
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_RCU_TRACE=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=n
#
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=y
#
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,14 +0,0 @@
P1-S-T-NH-SD-SMP-HP
P2-2-t-nh-sd-SMP-hp
P3-3-T-nh-SD-SMP-hp
P4-A-t-NH-sd-SMP-HP
P5-U-T-NH-sd-SMP-hp
N1-S-T-NH-SD-SMP-HP
N2-2-t-nh-sd-SMP-hp
N3-3-T-nh-SD-SMP-hp
N4-A-t-NH-sd-SMP-HP
N5-U-T-NH-sd-SMP-hp
PT1-nh
PT2-NH
NT1-nh
NT3-NH

View File

@ -1,19 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,18 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,23 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TRACE=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=n
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
#CHECK#CONFIG_TINY_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
#
CONFIG_SMP=n
#
CONFIG_HOTPLUG_CPU=n
#
CONFIG_NO_HZ=y
#
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=8
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=4
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,20 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_NO_HZ=n
CONFIG_SMP=y
CONFIG_RCU_FANOUT=2
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,22 +0,0 @@
CONFIG_RCU_TRACE=n
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

View File

@ -1,27 +0,0 @@
CONFIG_RCU_TRACE=y
CONFIG_RCU_CPU_STALL_INFO=y
CONFIG_NO_HZ=y
CONFIG_SMP=y
CONFIG_RCU_FANOUT=6
CONFIG_NR_CPUS=8
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y
#CHECK#CONFIG_TREE_PREEMPT_RCU=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_RT_MUTEXES=y
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=2
CONFIG_RCU_TORTURE_TEST=m
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_PRINTK_TIME=y

Some files were not shown because too many files have changed in this diff