mutex: Fix optimistic spinning vs. BKL
Currently, we can hit a nasty case with optimistic spinning on mutexes:

    CPU A tries to take a mutex while holding the BKL

    CPU B tries to take the BKL while holding the mutex

This looks like an AB-BA scenario but in practice it is allowed, and it happens due to the auto-release-on-schedule() nature of the BKL.

In that case, the optimistic spinning code can get us into a situation where, instead of going to sleep, A will spin waiting for B, who is spinning waiting for A, and the only way out of that loop is the need_resched() test in mutex_spin_on_owner().

This patch fixes it by completely disabling spinning if we own the BKL.

This adds one more detail to the extensive list of reasons why it's a bad idea for kernel code to be holding the BKL.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <stable@kernel.org>
LKML-Reference: <20100519054636.GC12389@ozlabs.org>
[ added an unlikely() attribute to the branch ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
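To make the scenario concrete, here is a minimal sketch of the two code paths the message describes. The locking calls (lock_kernel()/unlock_kernel(), mutex_lock()/mutex_unlock()) are the stock kernel APIs of the BKL era; the function names and the shared mutex are hypothetical, purely for illustration.

#include <linux/mutex.h>
#include <linux/smp_lock.h>		/* lock_kernel() / unlock_kernel() */

static DEFINE_MUTEX(demo_mutex);	/* hypothetical mutex shared by A and B */

/* CPU A: already under the BKL, now wants the mutex held by B. */
static void cpu_a_path(void)
{
	lock_kernel();			/* current->lock_depth becomes >= 0 */
	mutex_lock(&demo_mutex);	/* owner (B) is running, so A spins here */
	/* ... critical section ... */
	mutex_unlock(&demo_mutex);
	unlock_kernel();
}

/* CPU B: already holds the mutex, now wants the BKL held by A. */
static void cpu_b_path(void)
{
	mutex_lock(&demo_mutex);
	lock_kernel();			/* waits for A; not a true AB-BA deadlock,
					 * because the BKL is auto-released if its
					 * holder calls schedule() */
	/* ... critical section ... */
	unlock_kernel();
	mutex_unlock(&demo_mutex);
}

Before this patch, A's optimistic spin keeps it from ever calling schedule(), so the BKL is never auto-released and both CPUs busy-wait until need_resched() fires. With the lock_depth check added below (lock_depth is -1 when the BKL is not held), A skips the spin and falls through to the normal sleeping slow path, where schedule() releases the BKL and lets B make progress.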
commit fd6be105b8
parent 537b60d178
@@ -171,6 +171,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	for (;;) {
 		struct thread_info *owner;
 
+		/*
+		 * If we own the BKL, then don't spin. The owner of
+		 * the mutex might be waiting on us to release the BKL.
+		 */
+		if (unlikely(current->lock_depth >= 0))
+			break;
+
 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.