locking/lockdep: Exclude local_lock_t from IRQ inversions

The purpose of local_lock_t is to abstract preempt_disable() /
local_bh_disable() / local_irq_disable(). These are the traditional
means of gaining access to per-cpu data, but they are fundamentally
non-preemptible.

local_lock_t provides a per-cpu lock that, on !PREEMPT_RT, reduces to
a no-op, just like regular spinlocks do on UP.
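
For illustration only (the struct and field names below are made up,
not part of this patch), typical usage over per-cpu data looks roughly
like:

	struct foo_pcpu {
		local_lock_t	lock;
		int		count;
	};

	static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	local_lock(&foo_pcpu.lock);	/* !RT: preempt_disable(); RT: per-cpu spinlock */
	__this_cpu_inc(foo_pcpu.count);
	local_unlock(&foo_pcpu.lock);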

This gives rise to:

	CPU0			CPU1

	local_lock(B)		spin_lock_irq(A)
	<IRQ>
	  spin_lock(A)		local_lock(B)

Here lockdep figures things will lock up, which would be true if B
were any other kind of lock. However, this is a false positive; no
such deadlock actually exists.

For !RT the above local_lock(B) is preempt_disable(), and there's
obviously no deadlock; alternatively, CPU0's B != CPU1's B.
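
For !RT, the CPU0 column above effectively expands to (an illustrative
expansion, not literal code from this patch):

	preempt_disable();	/* this is all local_lock(B) does on !RT */
	<IRQ>
	  spin_lock(A);

Since local_lock(B) never blocks on !RT, CPU1 always completes its
critical section and releases A, so the cycle cannot close.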

For RT the argument is that since local_lock() nests inside
spin_lock(), it cannot be used in hardirq context, and therefore the
scenario on CPU0 cannot in fact happen. Even though B is a real lock
on RT, it is a preemptible lock: any threaded IRQ that needs B simply
schedules out and lets the preempted task (which holds B) continue, so
the task on CPU1 can make progress; after that, the threaded IRQ
resumes and can finish.

This means that we can never form an IRQ inversion on a local_lock
dependency, so terminate the graph walk when we encounter a local_lock
while searching for IRQ inversions.
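
Conceptually (a simplified sketch, not the actual __bfs() code in
lockdep; enqueue() is a made-up stand-in for lockdep's BFS queue
handling), the walk consults a skip callback before expanding an
entry, so a local_lock entry ends the search along that path:

	/* simplified: for each dependency of the lock being expanded */
	list_for_each_entry(entry, &lock->locks_after, entry) {
		if (skip && skip(entry, data))
			continue;	/* do not walk through a local_lock */
		if (match(entry, data))
			return entry;	/* found an IRQ usage of interest */
		enqueue(entry);		/* otherwise keep walking from here */
	}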

One consequence is that (for LOCKDEP_SMALL), when we look for redundant
dependencies, A -> B is not redundant in the presence of A -> L -> B
(where L is a local_lock).
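
For illustration (lock names are invented, not from this patch),
consider the two chains:

	/* chain 1, recorded earlier: A -> L -> B, where L is a local_lock */
	spin_lock(&A);
	local_lock(&pcp.L);
	spin_lock(&B);
	...

	/* chain 2, seen later: a direct A -> B */
	spin_lock(&A);
	spin_lock(&B);
	...

Because the IRQ-inversion walk skips L, the A -> L -> B chain does not
convey everything the direct A -> B dependency does, so the latter must
still be recorded instead of being dropped as redundant.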

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
[peterz: Changelog]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

@@ -2200,6 +2200,44 @@ static inline bool usage_match(struct lock_list *entry, void *mask)
 	return !!((entry->class->usage_mask & LOCKF_IRQ) & *(unsigned long *)mask);
 }
 
+static inline bool usage_skip(struct lock_list *entry, void *mask)
+{
+	/*
+	 * Skip local_lock() for irq inversion detection.
+	 *
+	 * For !RT, local_lock() is not a real lock, so it won't carry any
+	 * dependency.
+	 *
+	 * For RT, an irq inversion happens when we have lock A and B, and on
+	 * some CPU we can have:
+	 *
+	 *	lock(A);
+	 *	<interrupted>
+	 *	  lock(B);
+	 *
+	 * where lock(B) cannot sleep, and we have a dependency B -> ... -> A.
+	 *
+	 * Now we prove local_lock() cannot exist in that dependency. First we
+	 * have the observation for any lock chain L1 -> ... -> Ln, for any
+	 * 1 <= i <= n, Li.inner_wait_type <= L1.inner_wait_type, otherwise
+	 * wait context check will complain. And since B is not a sleep lock,
+	 * therefore B.inner_wait_type >= 2, and since the inner_wait_type of
+	 * local_lock() is 3, which is greater than 2, therefore there is no
+	 * way the local_lock() exists in the dependency B -> ... -> A.
+	 *
+	 * As a result, we will skip local_lock(), when we search for irq
+	 * inversion bugs.
+	 */
+	if (entry->class->lock_type == LD_LOCK_PERCPU) {
+		if (DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG))
+			return false;
+
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Find a node in the forwards-direction dependency sub-graph starting
  * at @root->class that matches @bit.
@@ -2215,7 +2253,7 @@ find_usage_forwards(struct lock_list *root, unsigned long usage_mask,
 
 	debug_atomic_inc(nr_find_usage_forwards_checks);
 
-	result = __bfs_forwards(root, &usage_mask, usage_match, NULL, target_entry);
+	result = __bfs_forwards(root, &usage_mask, usage_match, usage_skip, target_entry);
 
 	return result;
 }
@@ -2232,7 +2270,7 @@ find_usage_backwards(struct lock_list *root, unsigned long usage_mask,
 
 	debug_atomic_inc(nr_find_usage_backwards_checks);
 
-	result = __bfs_backwards(root, &usage_mask, usage_match, NULL, target_entry);
+	result = __bfs_backwards(root, &usage_mask, usage_match, usage_skip, target_entry);
 
 	return result;
 }
@@ -2597,7 +2635,7 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 	 */
 	bfs_init_rootb(&this, prev);
 
-	ret = __bfs_backwards(&this, &usage_mask, usage_accumulate, NULL, NULL);
+	ret = __bfs_backwards(&this, &usage_mask, usage_accumulate, usage_skip, NULL);
 	if (bfs_error(ret)) {
 		print_bfs_bug(ret);
 		return 0;
@@ -2664,6 +2702,12 @@ static inline int check_irq_usage(struct task_struct *curr,
 {
 	return 1;
 }
+
+static inline bool usage_skip(struct lock_list *entry, void *mask)
+{
+	return false;
+}
+
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
 #ifdef CONFIG_LOCKDEP_SMALL
@@ -2697,7 +2741,12 @@ check_redundant(struct held_lock *src, struct held_lock *target)
 
 	debug_atomic_inc(nr_redundant_checks);
 
-	ret = check_path(target, &src_entry, hlock_equal, NULL, &target_entry);
+	/*
+	 * Note: we skip local_lock() for redundant check, because as the
+	 * comment in usage_skip(), A -> local_lock() -> B and A -> B are not
+	 * the same.
+	 */
+	ret = check_path(target, &src_entry, hlock_equal, usage_skip, &target_entry);
 
 	if (ret == BFS_RMATCH)
 		debug_atomic_inc(nr_redundant);