padata: use smp_mb in padata_reorder to avoid orphaned padata jobs
Testing padata with the tcrypt module on a 5.2 kernel...
# modprobe tcrypt alg="pcrypt(rfc4106(gcm(aes)))" type=3
# modprobe tcrypt mode=211 sec=1
...produces this splat:
INFO: task modprobe:10075 blocked for more than 120 seconds.
Not tainted 5.2.0-base+ #16
modprobe D 0 10075 10064 0x80004080
Call Trace:
? __schedule+0x4dd/0x610
? ring_buffer_unlock_commit+0x23/0x100
schedule+0x6c/0x90
schedule_timeout+0x3b/0x320
? trace_buffer_unlock_commit_regs+0x4f/0x1f0
wait_for_common+0x160/0x1a0
? wake_up_q+0x80/0x80
{ crypto_wait_req } # entries in braces added by hand
{ do_one_aead_op }
{ test_aead_jiffies }
test_aead_speed.constprop.17+0x681/0xf30 [tcrypt]
do_test+0x4053/0x6a2b [tcrypt]
? 0xffffffffa00f4000
tcrypt_mod_init+0x50/0x1000 [tcrypt]
...
The second modprobe command never finishes because in padata_reorder,
CPU0's load of reorder_objects is executed before the unlocking store in
spin_unlock_bh(pd->lock), causing CPU0 to miss CPU1's increment:
CPU0                                 CPU1

padata_reorder                       padata_do_serial
  LOAD reorder_objects  // 0
                                       INC reorder_objects  // 1
                                       padata_reorder
                                         TRYLOCK pd->lock   // failed
  UNLOCK pd->lock
CPU0 deletes the timer before returning from padata_reorder and since no
other job is submitted to padata, modprobe waits indefinitely.
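The race is a store-buffering pattern, so it can be checked with the LKMM
tools. Here is a minimal herd7 litmus sketch of the broken ordering
(illustrative only: a plain pd_lock variable stands in for the lock word,
its store for the UNLOCK, and P1's load for the read done by the failing
TRYLOCK):

C padata-orphaned-job

(*
 * Sketch of the unfixed ordering.  pd_lock=1 means "held by P0".
 * P0 releases the lock, then checks reorder_objects; P1 bumps
 * reorder_objects, then looks at the lock.  With no barriers, the
 * "both sides miss" outcome below is observable (Sometimes).
 *)

{ pd_lock=1; }

P0(int *pd_lock, atomic_t *reorder_objects)
{
        int r0;

        WRITE_ONCE(*pd_lock, 0);
        r0 = atomic_read(reorder_objects);
}

P1(int *pd_lock, atomic_t *reorder_objects)
{
        int r1;

        atomic_inc(reorder_objects);
        r1 = READ_ONCE(*pd_lock);
}

exists (0:r0=0 /\ 1:r1=1)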
Add a pair of full barriers to guarantee proper ordering:
CPU0                                 CPU1

padata_reorder                       padata_do_serial
  UNLOCK pd->lock
  smp_mb()
  LOAD reorder_objects
                                       INC reorder_objects
                                       smp_mb__after_atomic()
                                       padata_reorder
                                         TRYLOCK pd->lock
smp_mb__after_atomic is needed so the read part of the trylock operation
comes after the INC, as Andrea points out. Thanks also to Andrea for
help with writing a litmus test.
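For reference, the fixed ordering in the same sketch form (again
illustrative, and not necessarily the litmus test mentioned above). With
both barriers in place, herd7 reports the exists clause as Never; dropping
either barrier makes it observable again:

C padata-reorder-fixed

(*
 * Same shape with the two barriers from this patch inserted:
 * smp_mb() after the UNLOCK stand-in in P0, and
 * smp_mb__after_atomic() after the atomic_inc in P1.
 *)

{ pd_lock=1; }

P0(int *pd_lock, atomic_t *reorder_objects)
{
        int r0;

        WRITE_ONCE(*pd_lock, 0);
        smp_mb();
        r0 = atomic_read(reorder_objects);
}

P1(int *pd_lock, atomic_t *reorder_objects)
{
        int r1;

        atomic_inc(reorder_objects);
        smp_mb__after_atomic();
        r1 = READ_ONCE(*pd_lock);
}

exists (0:r0=0 /\ 1:r1=1)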
Fixes: 16295bec63 ("padata: Generic parallelization/serialization interface")
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: <stable@vger.kernel.org>
Cc: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
commit cf144f81a9 (parent 83bf42510d)
kernel/padata.c
@@ -267,7 +267,12 @@ static void padata_reorder(struct parallel_data *pd)
 	 * The next object that needs serialization might have arrived to
 	 * the reorder queues in the meantime, we will be called again
 	 * from the timer function if no one else cares for it.
+	 *
+	 * Ensure reorder_objects is read after pd->lock is dropped so we see
+	 * an increment from another task in padata_do_serial. Pairs with
+	 * smp_mb__after_atomic in padata_do_serial.
 	 */
+	smp_mb();
 	if (atomic_read(&pd->reorder_objects)
 			&& !(pinst->flags & PADATA_RESET))
 		mod_timer(&pd->timer, jiffies + HZ);
@@ -387,6 +392,13 @@ void padata_do_serial(struct padata_priv *padata)
 	list_add_tail(&padata->list, &pqueue->reorder.list);
 	spin_unlock(&pqueue->reorder.lock);
 
+	/*
+	 * Ensure the atomic_inc of reorder_objects above is ordered correctly
+	 * with the trylock of pd->lock in padata_reorder. Pairs with smp_mb
+	 * in padata_reorder.
+	 */
+	smp_mb__after_atomic();
+
 	put_cpu();
 
 	/* If we're running on the wrong CPU, call padata_reorder() via a