[PATCH] dm: work around mempool_alloc, bio_alloc_bioset deadlocks
This patch works around a complex dm-related deadlock/livelock down in the mempool allocator.

Alasdair said:

  Several dm targets suffer from this.

  Mempools are not yet used correctly everywhere in device-mapper: they can
  get shared when devices are stacked, and some targets share them across
  multiple instances.  I made fixing this one of the prerequisites for this
  patch:

    md-dm-reduce-stack-usage-with-stacked-block-devices.patch

  which in some cases makes people more likely to hit the problem.

  There's been some progress on this recently with (unfinished) dm-crypt
  patches at:

    http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
      (dm-crypt-move-io-to-workqueue.patch plus dependencies)

and:

  I've no problems with a temporary workaround like that, but Milan Broz (a
  new Redhat developer in the Czech Republic) has started reviewing all the
  mempool usage in device-mapper, so I'm expecting we'll soon have a proper
  fix for this and the associated problems.  [He's back from holiday at the
  start of next week.]

For now, this sad-but-safe little patch will allow the machine to recover.

[akpm@osdl.org: rewrote changelog]
Cc: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
commit 0b1d647a02
parent 1e5f5e5cd6
@@ -238,8 +238,13 @@ repeat_alloc:
 	init_wait(&wait);
 	prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
 	smp_mb();
-	if (!pool->curr_nr)
-		io_schedule();
+	if (!pool->curr_nr) {
+		/*
+		 * FIXME: this should be io_schedule().  The timeout is there
+		 * as a workaround for some DM problems in 2.6.18.
+		 */
+		io_schedule_timeout(5*HZ);
+	}
 	finish_wait(&pool->wait, &wait);

 	goto repeat_alloc;