block: lift the initial queue bypass mode on blk_register_queue() instead of blk_init_allocated_queue()

b82d4b197c ("blkcg: make request_queue bypassing on allocation") made
request_queues start out bypassed on allocation to avoid switching
bypass on and off while a queue is being initialized.  Some drivers
allocate and then destroy a lot of queues without fully initializing
them, and incurring the bypass latency on each of them could add up
to significant overhead.

Unfortunately, blk_init_allocated_queue() is never called for the
queues of bio-based drivers, which means that all bio-based driver
queues are left in bypass mode even after initialization and
registration complete successfully.

Due to the limited way request_queues are used by bio-based drivers,
this problem is hidden pretty well, but it shows up when blk-throttle
is used in combination with a bio-based driver.  Trying to configure
blk-throttle for such a driver (by echoing to its cgroupfs file) hangs
indefinitely in blkg_conf_prep(), waiting for bypass mode to end.

This patch moves the initial blk_queue_bypass_end() call from
blk_init_allocated_queue() to blk_register_queue(), which is called
for any userland-visible queue regardless of its type.

I believe this is correct because I don't think there is any block
driver which needs or wants working elevator and blk-cgroup on a queue
which isn't visible to userland.  If there are such users, we need a
different solution.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Joseph Glanville <joseph.glanville@orionvm.com.au>
Cc: stable@vger.kernel.org
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
commit 749fefe677 (parent 66ba32dc16)
Tejun Heo, 2012-09-20 14:08:52 -07:00; committed by Jens Axboe
2 changed files with 8 additions and 5 deletions

block/blk-core.c

@@ -608,8 +608,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	/*
 	 * A queue starts its life with bypass turned on to avoid
 	 * unnecessary bypass on/off overhead and nasty surprises during
-	 * init.  The initial bypass will be finished at the end of
-	 * blk_init_allocated_queue().
+	 * init.  The initial bypass will be finished when the queue is
+	 * registered by blk_register_queue().
 	 */
 	q->bypass_depth = 1;
 	__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);

@@ -712,9 +712,6 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
 	/* init elevator */
 	if (elevator_init(q, NULL))
 		return NULL;
-
-	/* all done, end the initial bypass */
-	blk_queue_bypass_end(q);
 	return q;
 }
 EXPORT_SYMBOL(blk_init_allocated_queue);

block/blk-sysfs.c

@@ -561,6 +561,12 @@ int blk_register_queue(struct gendisk *disk)
 	if (WARN_ON(!q))
 		return -ENXIO;
 
+	/*
+	 * Initialization must be complete by now.  Finish the initial
+	 * bypass from queue allocation.
+	 */
+	blk_queue_bypass_end(q);
+
 	ret = blk_trace_init_sysfs(dev);
 	if (ret)
 		return ret;