[BLOCK] Get rid of request_queue_t typedef

Some of the code has been gradually transitioned to using the proper
struct request_queue, but there's lots left. So do a full sweep of
the kernel and get rid of this typedef and replace its uses with
the proper type.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Jens Axboe 2007-07-24 09:28:11 +02:00
parent f695baf2df
commit 165125e1e4
85 changed files with 529 additions and 510 deletions
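
For reference, the change is one mechanical substitution applied across the
tree. A minimal before/after sketch (the typedef shown is the alias being
removed from <linux/blkdev.h>; do_request() is a hypothetical driver function):

	/* before: blkdev.h provided an alias for the queue type */
	typedef struct request_queue request_queue_t;

	static void do_request(request_queue_t *q);	/* old spelling */

	/* after: the alias is gone, so the struct type is spelled out */
	static void do_request(struct request_queue *q);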

Documentation/block/barrier.txt

@@ -79,9 +79,9 @@ and how to prepare flush requests. Note that the term 'ordered' is
 used to indicate the whole sequence of performing barrier requests
 including draining and flushing.
-typedef void (prepare_flush_fn)(request_queue_t *q, struct request *rq);
+typedef void (prepare_flush_fn)(struct request_queue *q, struct request *rq);
-int blk_queue_ordered(request_queue_t *q, unsigned ordered,
+int blk_queue_ordered(struct request_queue *q, unsigned ordered,
		      prepare_flush_fn *prepare_flush_fn);
 @q : the queue in question
@@ -92,7 +92,7 @@ int blk_queue_ordered(request_queue_t *q, unsigned ordered,
 For example, SCSI disk driver's prepare_flush_fn looks like the
 following.
-static void sd_prepare_flush(request_queue_t *q, struct request *rq)
+static void sd_prepare_flush(struct request_queue *q, struct request *rq)
 {
	memset(rq->cmd, 0, sizeof(rq->cmd));
	rq->cmd_type = REQ_TYPE_BLOCK_PC;
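
As a usage sketch of the interface above (the 2007-era API; the mydrv_* names
and the choice of QUEUE_ORDERED_DRAIN_FLUSH are illustrative, not from this
commit):

	static void mydrv_prepare_flush(struct request_queue *q, struct request *rq)
	{
		memset(rq->cmd, 0, sizeof(rq->cmd));
		rq->cmd_type = REQ_TYPE_BLOCK_PC;
		rq->cmd[0] = 0x35;	/* SYNCHRONIZE CACHE (10) */
		rq->cmd_len = 10;
	}

	/* at init time: drain the queue and flush the cache around barriers */
	blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, mydrv_prepare_flush);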

Documentation/block/biodoc.txt

@@ -740,12 +740,12 @@ Block now offers some simple generic functionality to help support command
 queueing (typically known as tagged command queueing), ie manage more than
 one outstanding command on a queue at any given time.
-blk_queue_init_tags(request_queue_t *q, int depth)
+blk_queue_init_tags(struct request_queue *q, int depth)
	Initialize internal command tagging structures for a maximum
	depth of 'depth'.
-blk_queue_free_tags((request_queue_t *q)
+blk_queue_free_tags((struct request_queue *q)
	Teardown tag info associated with the queue. This will be done
	automatically by block if blk_queue_cleanup() is called on a queue
@@ -754,7 +754,7 @@ one outstanding command on a queue at any given time.
 The above are initialization and exit management, the main helpers during
 normal operations are:
-blk_queue_start_tag(request_queue_t *q, struct request *rq)
+blk_queue_start_tag(struct request_queue *q, struct request *rq)
	Start tagged operation for this request. A free tag number between
	0 and 'depth' is assigned to the request (rq->tag holds this number),
@@ -762,7 +762,7 @@ normal operations are:
	for this queue is already achieved (or if the tag wasn't started for
	some other reason), 1 is returned. Otherwise 0 is returned.
-blk_queue_end_tag(request_queue_t *q, struct request *rq)
+blk_queue_end_tag(struct request_queue *q, struct request *rq)
	End tagged operation on this request. 'rq' is removed from the internal
	book keeping structures.
@@ -781,7 +781,7 @@ queue. For instance, on IDE any tagged request error needs to clear both
 the hardware and software block queue and enable the driver to sanely restart
 all the outstanding requests. There's a third helper to do that:
-blk_queue_invalidate_tags(request_queue_t *q)
+blk_queue_invalidate_tags(struct request_queue *q)
	Clear the internal block tag queue and re-add all the pending requests
	to the request queue. The driver will receive them again on the
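
A sketch of how the tagging helpers above fit together in a driver's request
function (mydrv_* is hypothetical; hardware submission and error handling are
elided):

	static void mydrv_request_fn(struct request_queue *q)
	{
		struct request *rq;

		while ((rq = elv_next_request(q)) != NULL) {
			if (blk_queue_start_tag(q, rq))
				break;	/* depth reached; retry after a completion */
			/* ... issue the command to hardware using rq->tag ... */
		}
	}

	/* in the completion path, with the queue lock held */
	blk_queue_end_tag(q, rq);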

Documentation/block/request.txt

@@ -83,6 +83,6 @@ struct bio *bio	DBI	First bio in request
 struct bio *biotail	DBI	Last bio in request
-request_queue_t *q	DB	Request queue this request belongs to
+struct request_queue *q	DB	Request queue this request belongs to
 struct request_list *rl	B	Request list this request came from

Documentation/iostats.txt

@@ -79,7 +79,7 @@ Field 8 -- # of milliseconds spent writing
    measured from __make_request() to end_that_request_last()).
 Field 9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests are
-   given to appropriate request_queue_t and decremented as they finish.
+   given to appropriate struct request_queue and decremented as they finish.
 Field 10 -- # of milliseconds spent doing I/Os
    This field is increases so long as field 9 is nonzero.
 Field 11 -- weighted # of milliseconds spent doing I/Os
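
Field 10 is what utilization tools such as iostat derive %util from: sample it
twice and divide the delta by the wall-clock interval. A one-line sketch, with
io_ms_* and interval_ms as hypothetical sampled values:

	double util = (double)(io_ms_now - io_ms_prev) / (double)interval_ms;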

arch/arm/plat-omap/mailbox.c

@@ -161,11 +161,11 @@ static void mbox_rx_work(struct work_struct *work)
 /*
 * Mailbox interrupt handler
 */
-static void mbox_txq_fn(request_queue_t * q)
+static void mbox_txq_fn(struct request_queue * q)
 {
 }
-static void mbox_rxq_fn(request_queue_t * q)
+static void mbox_rxq_fn(struct request_queue * q)
 {
 }
@@ -180,7 +180,7 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
 {
	struct request *rq;
	mbox_msg_t msg;
-	request_queue_t *q = mbox->rxq->queue;
+	struct request_queue *q = mbox->rxq->queue;
	disable_mbox_irq(mbox, IRQ_RX);
@@ -297,7 +297,7 @@ static struct omap_mbox_queue *mbox_queue_alloc(struct omap_mbox *mbox,
					request_fn_proc * proc,
					void (*work) (struct work_struct *))
 {
-	request_queue_t *q;
+	struct request_queue *q;
	struct omap_mbox_queue *mq;
	mq = kzalloc(sizeof(struct omap_mbox_queue), GFP_KERNEL);

arch/um/drivers/ubd_kern.c

@@ -469,7 +469,7 @@ __uml_help(fakehd,
 "    Change the ubd device name to \"hd\".\n\n"
 );
-static void do_ubd_request(request_queue_t * q);
+static void do_ubd_request(struct request_queue * q);
 /* Only changed by ubd_init, which is an initcall. */
 int thread_fd = -1;
@@ -1081,7 +1081,7 @@ static void prepare_request(struct request *req, struct io_thread_req *io_req,
 }
 /* Called with dev->lock held */
-static void do_ubd_request(request_queue_t *q)
+static void do_ubd_request(struct request_queue *q)
 {
	struct io_thread_req *io_req;
	struct request *req;

block/as-iosched.c

@@ -796,7 +796,7 @@ static void update_write_batch(struct as_data *ad)
 * as_completed_request is to be called when a request has completed and
 * returned something to the requesting process, be it an error or data.
 */
-static void as_completed_request(request_queue_t *q, struct request *rq)
+static void as_completed_request(struct request_queue *q, struct request *rq)
 {
	struct as_data *ad = q->elevator->elevator_data;
@@ -853,7 +853,8 @@ out:
 * reference unless it replaces the request at somepart of the elevator
 * (ie. the dispatch queue)
 */
-static void as_remove_queued_request(request_queue_t *q, struct request *rq)
+static void as_remove_queued_request(struct request_queue *q,
+				     struct request *rq)
 {
	const int data_dir = rq_is_sync(rq);
	struct as_data *ad = q->elevator->elevator_data;
@@ -978,7 +979,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
 * read/write expire, batch expire, etc, and moves it to the dispatch
 * queue. Returns 1 if a request was found, 0 otherwise.
 */
-static int as_dispatch_request(request_queue_t *q, int force)
+static int as_dispatch_request(struct request_queue *q, int force)
 {
	struct as_data *ad = q->elevator->elevator_data;
	const int reads = !list_empty(&ad->fifo_list[REQ_SYNC]);
@@ -1139,7 +1140,7 @@ fifo_expired:
 /*
 * add rq to rbtree and fifo
 */
-static void as_add_request(request_queue_t *q, struct request *rq)
+static void as_add_request(struct request_queue *q, struct request *rq)
 {
	struct as_data *ad = q->elevator->elevator_data;
	int data_dir;
@@ -1167,7 +1168,7 @@ static void as_add_request(request_queue_t *q, struct request *rq)
	RQ_SET_STATE(rq, AS_RQ_QUEUED);
 }
-static void as_activate_request(request_queue_t *q, struct request *rq)
+static void as_activate_request(struct request_queue *q, struct request *rq)
 {
	WARN_ON(RQ_STATE(rq) != AS_RQ_DISPATCHED);
	RQ_SET_STATE(rq, AS_RQ_REMOVED);
@@ -1175,7 +1176,7 @@ static void as_activate_request(request_queue_t *q, struct request *rq)
	atomic_dec(&RQ_IOC(rq)->aic->nr_dispatched);
 }
-static void as_deactivate_request(request_queue_t *q, struct request *rq)
+static void as_deactivate_request(struct request_queue *q, struct request *rq)
 {
	WARN_ON(RQ_STATE(rq) != AS_RQ_REMOVED);
	RQ_SET_STATE(rq, AS_RQ_DISPATCHED);
@@ -1189,7 +1190,7 @@ static void as_deactivate_request(request_queue_t *q, struct request *rq)
 * is not empty - it is used in the block layer to check for plugging and
 * merging opportunities
 */
-static int as_queue_empty(request_queue_t *q)
+static int as_queue_empty(struct request_queue *q)
 {
	struct as_data *ad = q->elevator->elevator_data;
@@ -1198,7 +1199,7 @@ static int as_queue_empty(request_queue_t *q)
 }
 static int
-as_merge(request_queue_t *q, struct request **req, struct bio *bio)
+as_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
	struct as_data *ad = q->elevator->elevator_data;
	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
@@ -1216,7 +1217,8 @@ as_merge(request_queue_t *q, struct request **req, struct bio *bio)
	return ELEVATOR_NO_MERGE;
 }
-static void as_merged_request(request_queue_t *q, struct request *req, int type)
+static void as_merged_request(struct request_queue *q, struct request *req,
+			      int type)
 {
	struct as_data *ad = q->elevator->elevator_data;
@@ -1234,7 +1236,7 @@ static void as_merged_request(request_queue_t *q, struct request *req, int type)
	}
 }
-static void as_merged_requests(request_queue_t *q, struct request *req,
+static void as_merged_requests(struct request_queue *q, struct request *req,
			       struct request *next)
 {
	/*
@@ -1285,7 +1287,7 @@ static void as_work_handler(struct work_struct *work)
	spin_unlock_irqrestore(q->queue_lock, flags);
 }
-static int as_may_queue(request_queue_t *q, int rw)
+static int as_may_queue(struct request_queue *q, int rw)
 {
	int ret = ELV_MQUEUE_MAY;
	struct as_data *ad = q->elevator->elevator_data;
@@ -1318,7 +1320,7 @@ static void as_exit_queue(elevator_t *e)
 /*
 * initialize elevator private data (as_data).
 */
-static void *as_init_queue(request_queue_t *q)
+static void *as_init_queue(struct request_queue *q)
 {
	struct as_data *ad;

block/blktrace.c

@@ -231,7 +231,7 @@ static void blk_trace_cleanup(struct blk_trace *bt)
	kfree(bt);
 }
-static int blk_trace_remove(request_queue_t *q)
+static int blk_trace_remove(struct request_queue *q)
 {
	struct blk_trace *bt;
@@ -312,7 +312,7 @@ static struct rchan_callbacks blk_relay_callbacks = {
 /*
 * Setup everything required to start tracing
 */
-static int blk_trace_setup(request_queue_t *q, struct block_device *bdev,
+static int blk_trace_setup(struct request_queue *q, struct block_device *bdev,
			   char __user *arg)
 {
	struct blk_user_trace_setup buts;
@@ -401,7 +401,7 @@ err:
	return ret;
 }
-static int blk_trace_startstop(request_queue_t *q, int start)
+static int blk_trace_startstop(struct request_queue *q, int start)
 {
	struct blk_trace *bt;
	int ret;
@@ -444,7 +444,7 @@ static int blk_trace_startstop(request_queue_t *q, int start)
 **/
 int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 {
-	request_queue_t *q;
+	struct request_queue *q;
	int ret, start = 0;
	q = bdev_get_queue(bdev);
@@ -479,7 +479,7 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 * @q: the request queue associated with the device
 *
 **/
-void blk_trace_shutdown(request_queue_t *q)
+void blk_trace_shutdown(struct request_queue *q)
 {
	if (q->blk_trace) {
		blk_trace_startstop(q, 0);

block/bsg.c

@@ -37,7 +37,7 @@
 #define BSG_VERSION	"0.4"
 struct bsg_device {
-	request_queue_t *queue;
+	struct request_queue *queue;
	spinlock_t lock;
	struct list_head busy_list;
	struct list_head done_list;
@@ -180,7 +180,7 @@ unlock:
	return ret;
 }
-static int blk_fill_sgv4_hdr_rq(request_queue_t *q, struct request *rq,
+static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
				struct sg_io_v4 *hdr, int has_write_perm)
 {
	memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
@@ -214,7 +214,7 @@ static int blk_fill_sgv4_hdr_rq(request_queue_t *q, struct request *rq,
 * Check if sg_io_v4 from user is allowed and valid
 */
 static int
-bsg_validate_sgv4_hdr(request_queue_t *q, struct sg_io_v4 *hdr, int *rw)
+bsg_validate_sgv4_hdr(struct request_queue *q, struct sg_io_v4 *hdr, int *rw)
 {
	int ret = 0;
@@ -250,7 +250,7 @@ bsg_validate_sgv4_hdr(request_queue_t *q, struct sg_io_v4 *hdr, int *rw)
 static struct request *
 bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr)
 {
-	request_queue_t *q = bd->queue;
+	struct request_queue *q = bd->queue;
	struct request *rq, *next_rq = NULL;
	int ret, rw;
	unsigned int dxfer_len;
@@ -345,7 +345,7 @@ static void bsg_rq_end_io(struct request *rq, int uptodate)
 * do final setup of a 'bc' and submit the matching 'rq' to the block
 * layer for io
 */
-static void bsg_add_command(struct bsg_device *bd, request_queue_t *q,
+static void bsg_add_command(struct bsg_device *bd, struct request_queue *q,
			    struct bsg_command *bc, struct request *rq)
 {
	rq->sense = bc->sense;
@@ -611,7 +611,7 @@ static int __bsg_write(struct bsg_device *bd, const char __user *buf,
	bc = NULL;
	ret = 0;
	while (nr_commands) {
-		request_queue_t *q = bd->queue;
+		struct request_queue *q = bd->queue;
		bc = bsg_alloc_command(bd);
		if (IS_ERR(bc)) {

block/cfq-iosched.c

@@ -71,7 +71,7 @@ struct cfq_rb_root {
 * Per block device queue structure
 */
 struct cfq_data {
-	request_queue_t *queue;
+	struct request_queue *queue;
	/*
	 * rr list of queues with requests and the count of them
@@ -197,7 +197,7 @@ CFQ_CFQQ_FNS(slice_new);
 CFQ_CFQQ_FNS(sync);
 #undef CFQ_CFQQ_FNS
-static void cfq_dispatch_insert(request_queue_t *, struct request *);
+static void cfq_dispatch_insert(struct request_queue *, struct request *);
 static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
				       struct task_struct *, gfp_t);
 static struct cfq_io_context *cfq_cic_rb_lookup(struct cfq_data *,
@@ -237,7 +237,7 @@ static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
	kblockd_schedule_work(&cfqd->unplug_work);
 }
-static int cfq_queue_empty(request_queue_t *q)
+static int cfq_queue_empty(struct request_queue *q)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
@@ -623,7 +623,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
	return NULL;
 }
-static void cfq_activate_request(request_queue_t *q, struct request *rq)
+static void cfq_activate_request(struct request_queue *q, struct request *rq)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
@@ -641,7 +641,7 @@ static void cfq_activate_request(request_queue_t *q, struct request *rq)
	cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors;
 }
-static void cfq_deactivate_request(request_queue_t *q, struct request *rq)
+static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
@@ -665,7 +665,8 @@ static void cfq_remove_request(struct request *rq)
	}
 }
-static int cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
+static int cfq_merge(struct request_queue *q, struct request **req,
+		     struct bio *bio)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct request *__rq;
@@ -679,7 +680,7 @@ static int cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
	return ELEVATOR_NO_MERGE;
 }
-static void cfq_merged_request(request_queue_t *q, struct request *req,
+static void cfq_merged_request(struct request_queue *q, struct request *req,
			       int type)
 {
	if (type == ELEVATOR_FRONT_MERGE) {
@@ -690,7 +691,7 @@ static void cfq_merged_request(request_queue_t *q, struct request *req,
 }
 static void
-cfq_merged_requests(request_queue_t *q, struct request *rq,
+cfq_merged_requests(struct request_queue *q, struct request *rq,
		    struct request *next)
 {
	/*
@@ -703,7 +704,7 @@ cfq_merged_requests(request_queue_t *q, struct request *rq,
	cfq_remove_request(next);
 }
-static int cfq_allow_merge(request_queue_t *q, struct request *rq,
+static int cfq_allow_merge(struct request_queue *q, struct request *rq,
			   struct bio *bio)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
@@ -913,7 +914,7 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 /*
 * Move request from internal lists to the request queue dispatch list.
 */
-static void cfq_dispatch_insert(request_queue_t *q, struct request *rq)
+static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -1093,7 +1094,7 @@ static int cfq_forced_dispatch(struct cfq_data *cfqd)
	return dispatched;
 }
-static int cfq_dispatch_requests(request_queue_t *q, int force)
+static int cfq_dispatch_requests(struct request_queue *q, int force)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct cfq_queue *cfqq;
@@ -1214,7 +1215,7 @@ static void cfq_exit_single_io_context(struct cfq_io_context *cic)
	struct cfq_data *cfqd = cic->key;
	if (cfqd) {
-		request_queue_t *q = cfqd->queue;
+		struct request_queue *q = cfqd->queue;
		spin_lock_irq(q->queue_lock);
		__cfq_exit_single_io_context(cfqd, cic);
@@ -1775,7 +1776,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
	}
 }
-static void cfq_insert_request(request_queue_t *q, struct request *rq)
+static void cfq_insert_request(struct request_queue *q, struct request *rq)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -1789,7 +1790,7 @@ static void cfq_insert_request(request_queue_t *q, struct request *rq)
	cfq_rq_enqueued(cfqd, cfqq, rq);
 }
-static void cfq_completed_request(request_queue_t *q, struct request *rq)
+static void cfq_completed_request(struct request_queue *q, struct request *rq)
 {
	struct cfq_queue *cfqq = RQ_CFQQ(rq);
	struct cfq_data *cfqd = cfqq->cfqd;
@@ -1868,7 +1869,7 @@ static inline int __cfq_may_queue(struct cfq_queue *cfqq)
	return ELV_MQUEUE_MAY;
 }
-static int cfq_may_queue(request_queue_t *q, int rw)
+static int cfq_may_queue(struct request_queue *q, int rw)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct task_struct *tsk = current;
@@ -1922,7 +1923,7 @@ static void cfq_put_request(struct request *rq)
 * Allocate cfq data structures associated with this request.
 */
 static int
-cfq_set_request(request_queue_t *q, struct request *rq, gfp_t gfp_mask)
+cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct task_struct *tsk = current;
@@ -1974,7 +1975,7 @@ static void cfq_kick_queue(struct work_struct *work)
 {
	struct cfq_data *cfqd =
		container_of(work, struct cfq_data, unplug_work);
-	request_queue_t *q = cfqd->queue;
+	struct request_queue *q = cfqd->queue;
	unsigned long flags;
	spin_lock_irqsave(q->queue_lock, flags);
@@ -2072,7 +2073,7 @@ static void cfq_put_async_queues(struct cfq_data *cfqd)
 static void cfq_exit_queue(elevator_t *e)
 {
	struct cfq_data *cfqd = e->elevator_data;
-	request_queue_t *q = cfqd->queue;
+	struct request_queue *q = cfqd->queue;
	cfq_shutdown_timer_wq(cfqd);
@@ -2098,7 +2099,7 @@ static void cfq_exit_queue(elevator_t *e)
	kfree(cfqd);
 }
-static void *cfq_init_queue(request_queue_t *q)
+static void *cfq_init_queue(struct request_queue *q)
 {
	struct cfq_data *cfqd;

block/deadline-iosched.c

@@ -106,7 +106,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 /*
 * remove rq from rbtree and fifo.
 */
-static void deadline_remove_request(request_queue_t *q, struct request *rq)
+static void deadline_remove_request(struct request_queue *q, struct request *rq)
 {
	struct deadline_data *dd = q->elevator->elevator_data;
@@ -115,7 +115,7 @@ static void deadline_remove_request(request_queue_t *q, struct request *rq)
 }
 static int
-deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
+deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
	struct deadline_data *dd = q->elevator->elevator_data;
	struct request *__rq;
@@ -144,8 +144,8 @@ out:
	return ret;
 }
-static void deadline_merged_request(request_queue_t *q, struct request *req,
-				    int type)
+static void deadline_merged_request(struct request_queue *q,
+				    struct request *req, int type)
 {
	struct deadline_data *dd = q->elevator->elevator_data;
@@ -159,7 +159,7 @@ static void deadline_merged_request(request_queue_t *q, struct request *req,
 }
 static void
-deadline_merged_requests(request_queue_t *q, struct request *req,
+deadline_merged_requests(struct request_queue *q, struct request *req,
			 struct request *next)
 {
	/*
@@ -185,7 +185,7 @@ deadline_merged_requests(request_queue_t *q, struct request *req,
 static inline void
 deadline_move_to_dispatch(struct deadline_data *dd, struct request *rq)
 {
-	request_queue_t *q = rq->q;
+	struct request_queue *q = rq->q;
	deadline_remove_request(q, rq);
	elv_dispatch_add_tail(q, rq);
@@ -236,7 +236,7 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 * deadline_dispatch_requests selects the best request according to
 * read/write expire, fifo_batch, etc
 */
-static int deadline_dispatch_requests(request_queue_t *q, int force)
+static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
	struct deadline_data *dd = q->elevator->elevator_data;
	const int reads = !list_empty(&dd->fifo_list[READ]);
@@ -335,7 +335,7 @@ dispatch_request:
	return 1;
 }
-static int deadline_queue_empty(request_queue_t *q)
+static int deadline_queue_empty(struct request_queue *q)
 {
	struct deadline_data *dd = q->elevator->elevator_data;
@@ -356,7 +356,7 @@ static void deadline_exit_queue(elevator_t *e)
 /*
 * initialize elevator private data (deadline_data).
 */
-static void *deadline_init_queue(request_queue_t *q)
+static void *deadline_init_queue(struct request_queue *q)
 {
	struct deadline_data *dd;

block/elevator.c

@@ -56,7 +56,7 @@ static const int elv_hash_shift = 6;
 */
 static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
 {
-	request_queue_t *q = rq->q;
+	struct request_queue *q = rq->q;
	elevator_t *e = q->elevator;
	if (e->ops->elevator_allow_merge_fn)
@@ -141,12 +141,13 @@ static struct elevator_type *elevator_get(const char *name)
	return e;
 }
-static void *elevator_init_queue(request_queue_t *q, struct elevator_queue *eq)
+static void *elevator_init_queue(struct request_queue *q,
+				 struct elevator_queue *eq)
 {
	return eq->ops->elevator_init_fn(q);
 }
-static void elevator_attach(request_queue_t *q, struct elevator_queue *eq,
+static void elevator_attach(struct request_queue *q, struct elevator_queue *eq,
			    void *data)
 {
	q->elevator = eq;
@@ -172,7 +173,8 @@ __setup("elevator=", elevator_setup);
 static struct kobj_type elv_ktype;
-static elevator_t *elevator_alloc(request_queue_t *q, struct elevator_type *e)
+static elevator_t *elevator_alloc(struct request_queue *q,
+				  struct elevator_type *e)
 {
	elevator_t *eq;
	int i;
@@ -212,7 +214,7 @@ static void elevator_release(struct kobject *kobj)
	kfree(e);
 }
-int elevator_init(request_queue_t *q, char *name)
+int elevator_init(struct request_queue *q, char *name)
 {
	struct elevator_type *e = NULL;
	struct elevator_queue *eq;
@@ -264,7 +266,7 @@ void elevator_exit(elevator_t *e)
 EXPORT_SYMBOL(elevator_exit);
-static void elv_activate_rq(request_queue_t *q, struct request *rq)
+static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -272,7 +274,7 @@ static void elv_activate_rq(request_queue_t *q, struct request *rq)
	e->ops->elevator_activate_req_fn(q, rq);
 }
-static void elv_deactivate_rq(request_queue_t *q, struct request *rq)
+static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -285,13 +287,13 @@ static inline void __elv_rqhash_del(struct request *rq)
	hlist_del_init(&rq->hash);
 }
-static void elv_rqhash_del(request_queue_t *q, struct request *rq)
+static void elv_rqhash_del(struct request_queue *q, struct request *rq)
 {
	if (ELV_ON_HASH(rq))
		__elv_rqhash_del(rq);
 }
-static void elv_rqhash_add(request_queue_t *q, struct request *rq)
+static void elv_rqhash_add(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -299,13 +301,13 @@ static void elv_rqhash_add(request_queue_t *q, struct request *rq)
	hlist_add_head(&rq->hash, &e->hash[ELV_HASH_FN(rq_hash_key(rq))]);
 }
-static void elv_rqhash_reposition(request_queue_t *q, struct request *rq)
+static void elv_rqhash_reposition(struct request_queue *q, struct request *rq)
 {
	__elv_rqhash_del(rq);
	elv_rqhash_add(q, rq);
 }
-static struct request *elv_rqhash_find(request_queue_t *q, sector_t offset)
+static struct request *elv_rqhash_find(struct request_queue *q, sector_t offset)
 {
	elevator_t *e = q->elevator;
	struct hlist_head *hash_list = &e->hash[ELV_HASH_FN(offset)];
@@ -391,7 +393,7 @@ EXPORT_SYMBOL(elv_rb_find);
 * entry. rq is sort insted into the dispatch queue. To be used by
 * specific elevators.
 */
-void elv_dispatch_sort(request_queue_t *q, struct request *rq)
+void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 {
	sector_t boundary;
	struct list_head *entry;
@@ -449,7 +451,7 @@ void elv_dispatch_add_tail(struct request_queue *q, struct request *rq)
 EXPORT_SYMBOL(elv_dispatch_add_tail);
-int elv_merge(request_queue_t *q, struct request **req, struct bio *bio)
+int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
	elevator_t *e = q->elevator;
	struct request *__rq;
@@ -481,7 +483,7 @@ int elv_merge(request_queue_t *q, struct request **req, struct bio *bio)
	return ELEVATOR_NO_MERGE;
 }
-void elv_merged_request(request_queue_t *q, struct request *rq, int type)
+void elv_merged_request(struct request_queue *q, struct request *rq, int type)
 {
	elevator_t *e = q->elevator;
@@ -494,7 +496,7 @@ void elv_merged_request(request_queue_t *q, struct request *rq, int type)
	q->last_merge = rq;
 }
-void elv_merge_requests(request_queue_t *q, struct request *rq,
+void elv_merge_requests(struct request_queue *q, struct request *rq,
			struct request *next)
 {
	elevator_t *e = q->elevator;
@@ -509,7 +511,7 @@ void elv_merge_requests(request_queue_t *q, struct request *rq,
	q->last_merge = rq;
 }
-void elv_requeue_request(request_queue_t *q, struct request *rq)
+void elv_requeue_request(struct request_queue *q, struct request *rq)
 {
	/*
	 * it already went through dequeue, we need to decrement the
@@ -526,7 +528,7 @@ void elv_requeue_request(request_queue_t *q, struct request *rq)
	elv_insert(q, rq, ELEVATOR_INSERT_REQUEUE);
 }
-static void elv_drain_elevator(request_queue_t *q)
+static void elv_drain_elevator(struct request_queue *q)
 {
	static int printed;
	while (q->elevator->ops->elevator_dispatch_fn(q, 1))
@@ -540,7 +542,7 @@ static void elv_drain_elevator(request_queue_t *q)
	}
 }
-void elv_insert(request_queue_t *q, struct request *rq, int where)
+void elv_insert(struct request_queue *q, struct request *rq, int where)
 {
	struct list_head *pos;
	unsigned ordseq;
@@ -638,7 +640,7 @@ void elv_insert(request_queue_t *q, struct request *rq, int where)
	}
 }
-void __elv_add_request(request_queue_t *q, struct request *rq, int where,
+void __elv_add_request(struct request_queue *q, struct request *rq, int where,
		       int plug)
 {
	if (q->ordcolor)
@@ -676,7 +678,7 @@ void __elv_add_request(request_queue_t *q, struct request *rq, int where,
 EXPORT_SYMBOL(__elv_add_request);
-void elv_add_request(request_queue_t *q, struct request *rq, int where,
+void elv_add_request(struct request_queue *q, struct request *rq, int where,
		     int plug)
 {
	unsigned long flags;
@@ -688,7 +690,7 @@ void elv_add_request(request_queue_t *q, struct request *rq, int where,
 EXPORT_SYMBOL(elv_add_request);
-static inline struct request *__elv_next_request(request_queue_t *q)
+static inline struct request *__elv_next_request(struct request_queue *q)
 {
	struct request *rq;
@@ -704,7 +706,7 @@ static inline struct request *__elv_next_request(request_queue_t *q)
	}
 }
-struct request *elv_next_request(request_queue_t *q)
+struct request *elv_next_request(struct request_queue *q)
 {
	struct request *rq;
	int ret;
@@ -770,7 +772,7 @@ struct request *elv_next_request(request_queue_t *q)
 EXPORT_SYMBOL(elv_next_request);
-void elv_dequeue_request(request_queue_t *q, struct request *rq)
+void elv_dequeue_request(struct request_queue *q, struct request *rq)
 {
	BUG_ON(list_empty(&rq->queuelist));
	BUG_ON(ELV_ON_HASH(rq));
@@ -788,7 +790,7 @@ void elv_dequeue_request(request_queue_t *q, struct request *rq)
 EXPORT_SYMBOL(elv_dequeue_request);
-int elv_queue_empty(request_queue_t *q)
+int elv_queue_empty(struct request_queue *q)
 {
	elevator_t *e = q->elevator;
@@ -803,7 +805,7 @@ int elv_queue_empty(request_queue_t *q)
 EXPORT_SYMBOL(elv_queue_empty);
-struct request *elv_latter_request(request_queue_t *q, struct request *rq)
+struct request *elv_latter_request(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -812,7 +814,7 @@ struct request *elv_latter_request(request_queue_t *q, struct request *rq)
	return NULL;
 }
-struct request *elv_former_request(request_queue_t *q, struct request *rq)
+struct request *elv_former_request(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -821,7 +823,7 @@ struct request *elv_former_request(request_queue_t *q, struct request *rq)
	return NULL;
 }
-int elv_set_request(request_queue_t *q, struct request *rq, gfp_t gfp_mask)
+int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
	elevator_t *e = q->elevator;
@@ -832,7 +834,7 @@ int elv_set_request(request_queue_t *q, struct request *rq, gfp_t gfp_mask)
	return 0;
 }
-void elv_put_request(request_queue_t *q, struct request *rq)
+void elv_put_request(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -840,7 +842,7 @@ void elv_put_request(request_queue_t *q, struct request *rq)
		e->ops->elevator_put_req_fn(rq);
 }
-int elv_may_queue(request_queue_t *q, int rw)
+int elv_may_queue(struct request_queue *q, int rw)
 {
	elevator_t *e = q->elevator;
@@ -850,7 +852,7 @@ int elv_may_queue(request_queue_t *q, int rw)
	return ELV_MQUEUE_MAY;
 }
-void elv_completed_request(request_queue_t *q, struct request *rq)
+void elv_completed_request(struct request_queue *q, struct request *rq)
 {
	elevator_t *e = q->elevator;
@@ -1006,7 +1008,7 @@ EXPORT_SYMBOL_GPL(elv_unregister);
 * need for the new one. this way we have a chance of going back to the old
 * one, if the new one fails init for some reason.
 */
-static int elevator_switch(request_queue_t *q, struct elevator_type *new_e)
+static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 {
	elevator_t *old_elevator, *e;
	void *data;
@@ -1078,7 +1080,8 @@ fail_register:
	return 0;
 }
-ssize_t elv_iosched_store(request_queue_t *q, const char *name, size_t count)
+ssize_t elv_iosched_store(struct request_queue *q, const char *name,
+			  size_t count)
 {
	char elevator_name[ELV_NAME_MAX];
	size_t len;
@@ -1107,7 +1110,7 @@ ssize_t elv_iosched_store(request_queue_t *q, const char *name, size_t count)
	return count;
 }
-ssize_t elv_iosched_show(request_queue_t *q, char *name)
+ssize_t elv_iosched_show(struct request_queue *q, char *name)
 {
	elevator_t *e = q->elevator;
	struct elevator_type *elv = e->elevator_type;
@@ -1127,7 +1130,8 @@ ssize_t elv_iosched_show(request_queue_t *q, char *name)
	return len;
 }
-struct request *elv_rb_former_request(request_queue_t *q, struct request *rq)
+struct request *elv_rb_former_request(struct request_queue *q,
+				      struct request *rq)
 {
	struct rb_node *rbprev = rb_prev(&rq->rb_node);
@@ -1139,7 +1143,8 @@ struct request *elv_rb_former_request(request_queue_t *q, struct request *rq)
 EXPORT_SYMBOL(elv_rb_former_request);
-struct request *elv_rb_latter_request(request_queue_t *q, struct request *rq)
+struct request *elv_rb_latter_request(struct request_queue *q,
+				      struct request *rq)
 {
	struct rb_node *rbnext = rb_next(&rq->rb_node);

block/ll_rw_blk.c

@ -40,7 +40,7 @@ static void blk_unplug_work(struct work_struct *work);
static void blk_unplug_timeout(unsigned long data); static void blk_unplug_timeout(unsigned long data);
static void drive_stat_acct(struct request *rq, int nr_sectors, int new_io); static void drive_stat_acct(struct request *rq, int nr_sectors, int new_io);
static void init_request_from_bio(struct request *req, struct bio *bio); static void init_request_from_bio(struct request *req, struct bio *bio);
static int __make_request(request_queue_t *q, struct bio *bio); static int __make_request(struct request_queue *q, struct bio *bio);
static struct io_context *current_io_context(gfp_t gfp_flags, int node); static struct io_context *current_io_context(gfp_t gfp_flags, int node);
/* /*
@ -121,7 +121,7 @@ static void blk_queue_congestion_threshold(struct request_queue *q)
struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev) struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev)
{ {
struct backing_dev_info *ret = NULL; struct backing_dev_info *ret = NULL;
request_queue_t *q = bdev_get_queue(bdev); struct request_queue *q = bdev_get_queue(bdev);
if (q) if (q)
ret = &q->backing_dev_info; ret = &q->backing_dev_info;
@ -140,7 +140,7 @@ EXPORT_SYMBOL(blk_get_backing_dev_info);
* cdb from the request data for instance. * cdb from the request data for instance.
* *
*/ */
void blk_queue_prep_rq(request_queue_t *q, prep_rq_fn *pfn) void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn)
{ {
q->prep_rq_fn = pfn; q->prep_rq_fn = pfn;
} }
@ -163,14 +163,14 @@ EXPORT_SYMBOL(blk_queue_prep_rq);
* no merge_bvec_fn is defined for a queue, and only the fixed limits are * no merge_bvec_fn is defined for a queue, and only the fixed limits are
* honored. * honored.
*/ */
void blk_queue_merge_bvec(request_queue_t *q, merge_bvec_fn *mbfn) void blk_queue_merge_bvec(struct request_queue *q, merge_bvec_fn *mbfn)
{ {
q->merge_bvec_fn = mbfn; q->merge_bvec_fn = mbfn;
} }
EXPORT_SYMBOL(blk_queue_merge_bvec); EXPORT_SYMBOL(blk_queue_merge_bvec);
void blk_queue_softirq_done(request_queue_t *q, softirq_done_fn *fn) void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn)
{ {
q->softirq_done_fn = fn; q->softirq_done_fn = fn;
} }
@ -199,7 +199,7 @@ EXPORT_SYMBOL(blk_queue_softirq_done);
* __bio_kmap_atomic() to get a temporary kernel mapping, or by calling * __bio_kmap_atomic() to get a temporary kernel mapping, or by calling
* blk_queue_bounce() to create a buffer in normal memory. * blk_queue_bounce() to create a buffer in normal memory.
**/ **/
void blk_queue_make_request(request_queue_t * q, make_request_fn * mfn) void blk_queue_make_request(struct request_queue * q, make_request_fn * mfn)
{ {
/* /*
* set defaults * set defaults
@ -235,7 +235,7 @@ void blk_queue_make_request(request_queue_t * q, make_request_fn * mfn)
EXPORT_SYMBOL(blk_queue_make_request); EXPORT_SYMBOL(blk_queue_make_request);
static void rq_init(request_queue_t *q, struct request *rq) static void rq_init(struct request_queue *q, struct request *rq)
{ {
INIT_LIST_HEAD(&rq->queuelist); INIT_LIST_HEAD(&rq->queuelist);
INIT_LIST_HEAD(&rq->donelist); INIT_LIST_HEAD(&rq->donelist);
@ -272,7 +272,7 @@ static void rq_init(request_queue_t *q, struct request *rq)
* feature should call this function and indicate so. * feature should call this function and indicate so.
* *
**/ **/
int blk_queue_ordered(request_queue_t *q, unsigned ordered, int blk_queue_ordered(struct request_queue *q, unsigned ordered,
prepare_flush_fn *prepare_flush_fn) prepare_flush_fn *prepare_flush_fn)
{ {
if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) && if (ordered & (QUEUE_ORDERED_PREFLUSH | QUEUE_ORDERED_POSTFLUSH) &&
@ -311,7 +311,7 @@ EXPORT_SYMBOL(blk_queue_ordered);
* to the block layer by defining it through this call. * to the block layer by defining it through this call.
* *
**/ **/
void blk_queue_issue_flush_fn(request_queue_t *q, issue_flush_fn *iff) void blk_queue_issue_flush_fn(struct request_queue *q, issue_flush_fn *iff)
{ {
q->issue_flush_fn = iff; q->issue_flush_fn = iff;
} }
@ -321,7 +321,7 @@ EXPORT_SYMBOL(blk_queue_issue_flush_fn);
/* /*
* Cache flushing for ordered writes handling * Cache flushing for ordered writes handling
*/ */
inline unsigned blk_ordered_cur_seq(request_queue_t *q) inline unsigned blk_ordered_cur_seq(struct request_queue *q)
{ {
if (!q->ordseq) if (!q->ordseq)
return 0; return 0;
@ -330,7 +330,7 @@ inline unsigned blk_ordered_cur_seq(request_queue_t *q)
unsigned blk_ordered_req_seq(struct request *rq) unsigned blk_ordered_req_seq(struct request *rq)
{ {
request_queue_t *q = rq->q; struct request_queue *q = rq->q;
BUG_ON(q->ordseq == 0); BUG_ON(q->ordseq == 0);
@ -357,7 +357,7 @@ unsigned blk_ordered_req_seq(struct request *rq)
return QUEUE_ORDSEQ_DONE; return QUEUE_ORDSEQ_DONE;
} }
void blk_ordered_complete_seq(request_queue_t *q, unsigned seq, int error) void blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
{ {
struct request *rq; struct request *rq;
int uptodate; int uptodate;
@ -401,7 +401,7 @@ static void post_flush_end_io(struct request *rq, int error)
blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error); blk_ordered_complete_seq(rq->q, QUEUE_ORDSEQ_POSTFLUSH, error);
} }
static void queue_flush(request_queue_t *q, unsigned which) static void queue_flush(struct request_queue *q, unsigned which)
{ {
struct request *rq; struct request *rq;
rq_end_io_fn *end_io; rq_end_io_fn *end_io;
@ -425,7 +425,7 @@ static void queue_flush(request_queue_t *q, unsigned which)
elv_insert(q, rq, ELEVATOR_INSERT_FRONT); elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
} }
static inline struct request *start_ordered(request_queue_t *q, static inline struct request *start_ordered(struct request_queue *q,
struct request *rq) struct request *rq)
{ {
q->bi_size = 0; q->bi_size = 0;
@ -476,7 +476,7 @@ static inline struct request *start_ordered(request_queue_t *q,
return rq; return rq;
} }
int blk_do_ordered(request_queue_t *q, struct request **rqp) int blk_do_ordered(struct request_queue *q, struct request **rqp)
{ {
struct request *rq = *rqp; struct request *rq = *rqp;
int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq); int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq);
@ -527,7 +527,7 @@ int blk_do_ordered(request_queue_t *q, struct request **rqp)
static int flush_dry_bio_endio(struct bio *bio, unsigned int bytes, int error) static int flush_dry_bio_endio(struct bio *bio, unsigned int bytes, int error)
{ {
request_queue_t *q = bio->bi_private; struct request_queue *q = bio->bi_private;
/* /*
* This is dry run, restore bio_sector and size. We'll finish * This is dry run, restore bio_sector and size. We'll finish
@ -551,7 +551,7 @@ static int flush_dry_bio_endio(struct bio *bio, unsigned int bytes, int error)
static int ordered_bio_endio(struct request *rq, struct bio *bio, static int ordered_bio_endio(struct request *rq, struct bio *bio,
unsigned int nbytes, int error) unsigned int nbytes, int error)
{ {
request_queue_t *q = rq->q; struct request_queue *q = rq->q;
bio_end_io_t *endio; bio_end_io_t *endio;
void *private; void *private;
@ -588,7 +588,7 @@ static int ordered_bio_endio(struct request *rq, struct bio *bio,
* blk_queue_bounce_limit to have lower memory pages allocated as bounce * blk_queue_bounce_limit to have lower memory pages allocated as bounce
* buffers for doing I/O to pages residing above @page. * buffers for doing I/O to pages residing above @page.
**/ **/
void blk_queue_bounce_limit(request_queue_t *q, u64 dma_addr) void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
{ {
unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT; unsigned long bounce_pfn = dma_addr >> PAGE_SHIFT;
int dma = 0; int dma = 0;
@ -624,7 +624,7 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
* Enables a low level driver to set an upper limit on the size of * Enables a low level driver to set an upper limit on the size of
* received requests. * received requests.
**/ **/
void blk_queue_max_sectors(request_queue_t *q, unsigned int max_sectors) void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
{ {
if ((max_sectors << 9) < PAGE_CACHE_SIZE) { if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
max_sectors = 1 << (PAGE_CACHE_SHIFT - 9); max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
@ -651,7 +651,8 @@ EXPORT_SYMBOL(blk_queue_max_sectors);
* physical data segments in a request. This would be the largest sized * physical data segments in a request. This would be the largest sized
* scatter list the driver could handle. * scatter list the driver could handle.
**/ **/
void blk_queue_max_phys_segments(request_queue_t *q, unsigned short max_segments) void blk_queue_max_phys_segments(struct request_queue *q,
unsigned short max_segments)
{ {
if (!max_segments) { if (!max_segments) {
max_segments = 1; max_segments = 1;
@ -674,7 +675,8 @@ EXPORT_SYMBOL(blk_queue_max_phys_segments);
* address/length pairs the host adapter can actually give as once * address/length pairs the host adapter can actually give as once
* to the device. * to the device.
**/ **/
void blk_queue_max_hw_segments(request_queue_t *q, unsigned short max_segments) void blk_queue_max_hw_segments(struct request_queue *q,
unsigned short max_segments)
{ {
if (!max_segments) { if (!max_segments) {
max_segments = 1; max_segments = 1;
@ -695,7 +697,7 @@ EXPORT_SYMBOL(blk_queue_max_hw_segments);
* Enables a low level driver to set an upper limit on the size of a * Enables a low level driver to set an upper limit on the size of a
* coalesced segment * coalesced segment
**/ **/
void blk_queue_max_segment_size(request_queue_t *q, unsigned int max_size) void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{ {
if (max_size < PAGE_CACHE_SIZE) { if (max_size < PAGE_CACHE_SIZE) {
max_size = PAGE_CACHE_SIZE; max_size = PAGE_CACHE_SIZE;
@ -718,7 +720,7 @@ EXPORT_SYMBOL(blk_queue_max_segment_size);
* even internal read-modify-write operations). Usually the default * even internal read-modify-write operations). Usually the default
* of 512 covers most hardware. * of 512 covers most hardware.
**/ **/
void blk_queue_hardsect_size(request_queue_t *q, unsigned short size) void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
{ {
q->hardsect_size = size; q->hardsect_size = size;
} }
@ -735,7 +737,7 @@ EXPORT_SYMBOL(blk_queue_hardsect_size);
* @t: the stacking driver (top) * @t: the stacking driver (top)
* @b: the underlying device (bottom) * @b: the underlying device (bottom)
**/ **/
void blk_queue_stack_limits(request_queue_t *t, request_queue_t *b) void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
{ {
/* zero is "infinity" */ /* zero is "infinity" */
t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors); t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
@ -756,7 +758,7 @@ EXPORT_SYMBOL(blk_queue_stack_limits);
* @q: the request queue for the device * @q: the request queue for the device
* @mask: the memory boundary mask * @mask: the memory boundary mask
**/ **/
void blk_queue_segment_boundary(request_queue_t *q, unsigned long mask) void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
{ {
if (mask < PAGE_CACHE_SIZE - 1) { if (mask < PAGE_CACHE_SIZE - 1) {
mask = PAGE_CACHE_SIZE - 1; mask = PAGE_CACHE_SIZE - 1;
@ -778,7 +780,7 @@ EXPORT_SYMBOL(blk_queue_segment_boundary);
* this is used when buiding direct io requests for the queue. * this is used when buiding direct io requests for the queue.
* *
**/ **/
void blk_queue_dma_alignment(request_queue_t *q, int mask) void blk_queue_dma_alignment(struct request_queue *q, int mask)
{ {
q->dma_alignment = mask; q->dma_alignment = mask;
} }
@ -796,7 +798,7 @@ EXPORT_SYMBOL(blk_queue_dma_alignment);
* *
* no locks need be held. * no locks need be held.
**/ **/
struct request *blk_queue_find_tag(request_queue_t *q, int tag) struct request *blk_queue_find_tag(struct request_queue *q, int tag)
{ {
return blk_map_queue_find_tag(q->queue_tags, tag); return blk_map_queue_find_tag(q->queue_tags, tag);
} }
@ -840,7 +842,7 @@ static int __blk_free_tags(struct blk_queue_tag *bqt)
* blk_cleanup_queue() will take care of calling this function, if tagging * blk_cleanup_queue() will take care of calling this function, if tagging
* has been used. So there's no need to call this directly. * has been used. So there's no need to call this directly.
**/ **/
static void __blk_queue_free_tags(request_queue_t *q) static void __blk_queue_free_tags(struct request_queue *q)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
@ -877,7 +879,7 @@ EXPORT_SYMBOL(blk_free_tags);
* This is used to disabled tagged queuing to a device, yet leave * This is used to disabled tagged queuing to a device, yet leave
* queue in function. * queue in function.
**/ **/
void blk_queue_free_tags(request_queue_t *q) void blk_queue_free_tags(struct request_queue *q)
{ {
clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags); clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
} }
@ -885,7 +887,7 @@ void blk_queue_free_tags(request_queue_t *q)
EXPORT_SYMBOL(blk_queue_free_tags); EXPORT_SYMBOL(blk_queue_free_tags);
static int static int
init_tag_map(request_queue_t *q, struct blk_queue_tag *tags, int depth) init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth)
{ {
struct request **tag_index; struct request **tag_index;
unsigned long *tag_map; unsigned long *tag_map;
@ -955,7 +957,7 @@ EXPORT_SYMBOL(blk_init_tags);
* @depth: the maximum queue depth supported * @depth: the maximum queue depth supported
* @tags: the tag to use * @tags: the tag to use
**/ **/
int blk_queue_init_tags(request_queue_t *q, int depth, int blk_queue_init_tags(struct request_queue *q, int depth,
struct blk_queue_tag *tags) struct blk_queue_tag *tags)
{ {
int rc; int rc;
@ -996,7 +998,7 @@ EXPORT_SYMBOL(blk_queue_init_tags);
* Notes: * Notes:
* Must be called with the queue lock held. * Must be called with the queue lock held.
**/ **/
int blk_queue_resize_tags(request_queue_t *q, int new_depth) int blk_queue_resize_tags(struct request_queue *q, int new_depth)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
struct request **tag_index; struct request **tag_index;
@ -1059,7 +1061,7 @@ EXPORT_SYMBOL(blk_queue_resize_tags);
* Notes: * Notes:
* queue lock must be held. * queue lock must be held.
**/ **/
void blk_queue_end_tag(request_queue_t *q, struct request *rq) void blk_queue_end_tag(struct request_queue *q, struct request *rq)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
int tag = rq->tag; int tag = rq->tag;
@ -1111,7 +1113,7 @@ EXPORT_SYMBOL(blk_queue_end_tag);
* Notes: * Notes:
* queue lock must be held. * queue lock must be held.
**/ **/
int blk_queue_start_tag(request_queue_t *q, struct request *rq) int blk_queue_start_tag(struct request_queue *q, struct request *rq)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
int tag; int tag;
@ -1158,7 +1160,7 @@ EXPORT_SYMBOL(blk_queue_start_tag);
* Notes: * Notes:
* queue lock must be held. * queue lock must be held.
**/ **/
void blk_queue_invalidate_tags(request_queue_t *q) void blk_queue_invalidate_tags(struct request_queue *q)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
struct list_head *tmp, *n; struct list_head *tmp, *n;
@ -1205,7 +1207,7 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
EXPORT_SYMBOL(blk_dump_rq_flags); EXPORT_SYMBOL(blk_dump_rq_flags);
void blk_recount_segments(request_queue_t *q, struct bio *bio) void blk_recount_segments(struct request_queue *q, struct bio *bio)
{ {
struct bio_vec *bv, *bvprv = NULL; struct bio_vec *bv, *bvprv = NULL;
int i, nr_phys_segs, nr_hw_segs, seg_size, hw_seg_size, cluster; int i, nr_phys_segs, nr_hw_segs, seg_size, hw_seg_size, cluster;
@ -1267,7 +1269,7 @@ new_hw_segment:
} }
EXPORT_SYMBOL(blk_recount_segments); EXPORT_SYMBOL(blk_recount_segments);
static int blk_phys_contig_segment(request_queue_t *q, struct bio *bio, static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
struct bio *nxt) struct bio *nxt)
{ {
if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER))) if (!(q->queue_flags & (1 << QUEUE_FLAG_CLUSTER)))
@ -1288,7 +1290,7 @@ static int blk_phys_contig_segment(request_queue_t *q, struct bio *bio,
return 0; return 0;
} }
static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio, static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
struct bio *nxt) struct bio *nxt)
{ {
if (unlikely(!bio_flagged(bio, BIO_SEG_VALID))) if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
@ -1308,7 +1310,8 @@ static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio,
 * map a request to scatterlist, return number of sg entries set up. Caller * map a request to scatterlist, return number of sg entries set up. Caller
* must make sure sg can hold rq->nr_phys_segments entries * must make sure sg can hold rq->nr_phys_segments entries
*/ */
int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg) int blk_rq_map_sg(struct request_queue *q, struct request *rq,
struct scatterlist *sg)
{ {
struct bio_vec *bvec, *bvprv; struct bio_vec *bvec, *bvprv;
struct bio *bio; struct bio *bio;
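The sizing contract in the comment above is easy to get wrong, so a sketch: assume a driver-owned table whose size MY_MAX_SEGS was chosen against the queue's segment limits (both names illustrative):

        struct scatterlist sg[MY_MAX_SEGS];
        int nents;

        nents = blk_rq_map_sg(q, rq, sg);
        /* nents <= rq->nr_phys_segments; program sg[0..nents-1] into the DMA engine */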
@ -1361,7 +1364,7 @@ EXPORT_SYMBOL(blk_rq_map_sg);
* specific ones if so desired * specific ones if so desired
*/ */
static inline int ll_new_mergeable(request_queue_t *q, static inline int ll_new_mergeable(struct request_queue *q,
struct request *req, struct request *req,
struct bio *bio) struct bio *bio)
{ {
@ -1382,7 +1385,7 @@ static inline int ll_new_mergeable(request_queue_t *q,
return 1; return 1;
} }
static inline int ll_new_hw_segment(request_queue_t *q, static inline int ll_new_hw_segment(struct request_queue *q,
struct request *req, struct request *req,
struct bio *bio) struct bio *bio)
{ {
@ -1406,7 +1409,7 @@ static inline int ll_new_hw_segment(request_queue_t *q,
return 1; return 1;
} }
int ll_back_merge_fn(request_queue_t *q, struct request *req, struct bio *bio) int ll_back_merge_fn(struct request_queue *q, struct request *req, struct bio *bio)
{ {
unsigned short max_sectors; unsigned short max_sectors;
int len; int len;
@ -1444,7 +1447,7 @@ int ll_back_merge_fn(request_queue_t *q, struct request *req, struct bio *bio)
} }
EXPORT_SYMBOL(ll_back_merge_fn); EXPORT_SYMBOL(ll_back_merge_fn);
static int ll_front_merge_fn(request_queue_t *q, struct request *req, static int ll_front_merge_fn(struct request_queue *q, struct request *req,
struct bio *bio) struct bio *bio)
{ {
unsigned short max_sectors; unsigned short max_sectors;
@ -1483,7 +1486,7 @@ static int ll_front_merge_fn(request_queue_t *q, struct request *req,
return ll_new_hw_segment(q, req, bio); return ll_new_hw_segment(q, req, bio);
} }
static int ll_merge_requests_fn(request_queue_t *q, struct request *req, static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
struct request *next) struct request *next)
{ {
int total_phys_segments; int total_phys_segments;
@ -1539,7 +1542,7 @@ static int ll_merge_requests_fn(request_queue_t *q, struct request *req,
* This is called with interrupts off and no requests on the queue and * This is called with interrupts off and no requests on the queue and
* with the queue lock held. * with the queue lock held.
*/ */
void blk_plug_device(request_queue_t *q) void blk_plug_device(struct request_queue *q)
{ {
WARN_ON(!irqs_disabled()); WARN_ON(!irqs_disabled());
@ -1562,7 +1565,7 @@ EXPORT_SYMBOL(blk_plug_device);
* remove the queue from the plugged list, if present. called with * remove the queue from the plugged list, if present. called with
* queue lock held and interrupts disabled. * queue lock held and interrupts disabled.
*/ */
int blk_remove_plug(request_queue_t *q) int blk_remove_plug(struct request_queue *q)
{ {
WARN_ON(!irqs_disabled()); WARN_ON(!irqs_disabled());
@ -1578,7 +1581,7 @@ EXPORT_SYMBOL(blk_remove_plug);
/* /*
* remove the plug and let it rip.. * remove the plug and let it rip..
*/ */
void __generic_unplug_device(request_queue_t *q) void __generic_unplug_device(struct request_queue *q)
{ {
if (unlikely(blk_queue_stopped(q))) if (unlikely(blk_queue_stopped(q)))
return; return;
@ -1592,7 +1595,7 @@ EXPORT_SYMBOL(__generic_unplug_device);
/** /**
* generic_unplug_device - fire a request queue * generic_unplug_device - fire a request queue
* @q: The &request_queue_t in question * @q: The &struct request_queue in question
* *
* Description: * Description:
* Linux uses plugging to build bigger requests queues before letting * Linux uses plugging to build bigger requests queues before letting
@ -1601,7 +1604,7 @@ EXPORT_SYMBOL(__generic_unplug_device);
* gets unplugged, the request_fn defined for the queue is invoked and * gets unplugged, the request_fn defined for the queue is invoked and
* transfers started. * transfers started.
**/ **/
void generic_unplug_device(request_queue_t *q) void generic_unplug_device(struct request_queue *q)
{ {
spin_lock_irq(q->queue_lock); spin_lock_irq(q->queue_lock);
__generic_unplug_device(q); __generic_unplug_device(q);
@ -1612,7 +1615,7 @@ EXPORT_SYMBOL(generic_unplug_device);
static void blk_backing_dev_unplug(struct backing_dev_info *bdi, static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
struct page *page) struct page *page)
{ {
request_queue_t *q = bdi->unplug_io_data; struct request_queue *q = bdi->unplug_io_data;
/* /*
* devices don't necessarily have an ->unplug_fn defined * devices don't necessarily have an ->unplug_fn defined
@ -1627,7 +1630,8 @@ static void blk_backing_dev_unplug(struct backing_dev_info *bdi,
static void blk_unplug_work(struct work_struct *work) static void blk_unplug_work(struct work_struct *work)
{ {
request_queue_t *q = container_of(work, request_queue_t, unplug_work); struct request_queue *q =
container_of(work, struct request_queue, unplug_work);
blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL, blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
q->rq.count[READ] + q->rq.count[WRITE]); q->rq.count[READ] + q->rq.count[WRITE]);
@ -1637,7 +1641,7 @@ static void blk_unplug_work(struct work_struct *work)
static void blk_unplug_timeout(unsigned long data) static void blk_unplug_timeout(unsigned long data)
{ {
request_queue_t *q = (request_queue_t *)data; struct request_queue *q = (struct request_queue *)data;
blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL, blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_TIMER, NULL,
q->rq.count[READ] + q->rq.count[WRITE]); q->rq.count[READ] + q->rq.count[WRITE]);
@ -1647,14 +1651,14 @@ static void blk_unplug_timeout(unsigned long data)
/** /**
* blk_start_queue - restart a previously stopped queue * blk_start_queue - restart a previously stopped queue
* @q: The &request_queue_t in question * @q: The &struct request_queue in question
* *
* Description: * Description:
* blk_start_queue() will clear the stop flag on the queue, and call * blk_start_queue() will clear the stop flag on the queue, and call
* the request_fn for the queue if it was in a stopped state when * the request_fn for the queue if it was in a stopped state when
* entered. Also see blk_stop_queue(). Queue lock must be held. * entered. Also see blk_stop_queue(). Queue lock must be held.
**/ **/
void blk_start_queue(request_queue_t *q) void blk_start_queue(struct request_queue *q)
{ {
WARN_ON(!irqs_disabled()); WARN_ON(!irqs_disabled());
@ -1677,7 +1681,7 @@ EXPORT_SYMBOL(blk_start_queue);
/** /**
* blk_stop_queue - stop a queue * blk_stop_queue - stop a queue
* @q: The &request_queue_t in question * @q: The &struct request_queue in question
* *
* Description: * Description:
* The Linux block layer assumes that a block driver will consume all * The Linux block layer assumes that a block driver will consume all
@ -1689,7 +1693,7 @@ EXPORT_SYMBOL(blk_start_queue);
* the driver has signalled it's ready to go again. This happens by calling * the driver has signalled it's ready to go again. This happens by calling
* blk_start_queue() to restart queue operations. Queue lock must be held. * blk_start_queue() to restart queue operations. Queue lock must be held.
**/ **/
void blk_stop_queue(request_queue_t *q) void blk_stop_queue(struct request_queue *q)
{ {
blk_remove_plug(q); blk_remove_plug(q);
set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags); set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
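Together, blk_stop_queue() and blk_start_queue() implement the flow control described above. Schematically, with hw_busy(), my_dev and the surrounding lock dance purely illustrative:

static void my_request_fn(struct request_queue *q)
{
        if (hw_busy(my_dev)) {          /* controller can take no more */
                blk_stop_queue(q);      /* queue lock is already held here */
                return;
        }
        /* ...pull and issue requests... */
}

        /* later, from the completion interrupt: */
        spin_lock_irqsave(q->queue_lock, flags);
        blk_start_queue(q);             /* clears the flag, re-runs request_fn */
        spin_unlock_irqrestore(q->queue_lock, flags);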
@ -1746,7 +1750,7 @@ void blk_run_queue(struct request_queue *q)
EXPORT_SYMBOL(blk_run_queue); EXPORT_SYMBOL(blk_run_queue);
/** /**
* blk_cleanup_queue: - release a &request_queue_t when it is no longer needed * blk_cleanup_queue: - release a &struct request_queue when it is no longer needed
 * @kobj: the kobj belonging to the request queue to be released * @kobj: the kobj belonging to the request queue to be released
* *
* Description: * Description:
@ -1762,7 +1766,8 @@ EXPORT_SYMBOL(blk_run_queue);
**/ **/
static void blk_release_queue(struct kobject *kobj) static void blk_release_queue(struct kobject *kobj)
{ {
request_queue_t *q = container_of(kobj, struct request_queue, kobj); struct request_queue *q =
container_of(kobj, struct request_queue, kobj);
struct request_list *rl = &q->rq; struct request_list *rl = &q->rq;
blk_sync_queue(q); blk_sync_queue(q);
@ -1778,13 +1783,13 @@ static void blk_release_queue(struct kobject *kobj)
kmem_cache_free(requestq_cachep, q); kmem_cache_free(requestq_cachep, q);
} }
void blk_put_queue(request_queue_t *q) void blk_put_queue(struct request_queue *q)
{ {
kobject_put(&q->kobj); kobject_put(&q->kobj);
} }
EXPORT_SYMBOL(blk_put_queue); EXPORT_SYMBOL(blk_put_queue);
void blk_cleanup_queue(request_queue_t * q) void blk_cleanup_queue(struct request_queue * q)
{ {
mutex_lock(&q->sysfs_lock); mutex_lock(&q->sysfs_lock);
set_bit(QUEUE_FLAG_DEAD, &q->queue_flags); set_bit(QUEUE_FLAG_DEAD, &q->queue_flags);
@ -1798,7 +1803,7 @@ void blk_cleanup_queue(request_queue_t * q)
EXPORT_SYMBOL(blk_cleanup_queue); EXPORT_SYMBOL(blk_cleanup_queue);
static int blk_init_free_list(request_queue_t *q) static int blk_init_free_list(struct request_queue *q)
{ {
struct request_list *rl = &q->rq; struct request_list *rl = &q->rq;
@ -1817,7 +1822,7 @@ static int blk_init_free_list(request_queue_t *q)
return 0; return 0;
} }
request_queue_t *blk_alloc_queue(gfp_t gfp_mask) struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
{ {
return blk_alloc_queue_node(gfp_mask, -1); return blk_alloc_queue_node(gfp_mask, -1);
} }
@ -1825,9 +1830,9 @@ EXPORT_SYMBOL(blk_alloc_queue);
static struct kobj_type queue_ktype; static struct kobj_type queue_ktype;
request_queue_t *blk_alloc_queue_node(gfp_t gfp_mask, int node_id) struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
{ {
request_queue_t *q; struct request_queue *q;
q = kmem_cache_alloc_node(requestq_cachep, q = kmem_cache_alloc_node(requestq_cachep,
gfp_mask | __GFP_ZERO, node_id); gfp_mask | __GFP_ZERO, node_id);
@ -1882,16 +1887,16 @@ EXPORT_SYMBOL(blk_alloc_queue_node);
* when the block device is deactivated (such as at module unload). * when the block device is deactivated (such as at module unload).
**/ **/
request_queue_t *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock) struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
{ {
return blk_init_queue_node(rfn, lock, -1); return blk_init_queue_node(rfn, lock, -1);
} }
EXPORT_SYMBOL(blk_init_queue); EXPORT_SYMBOL(blk_init_queue);
request_queue_t * struct request_queue *
blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id) blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{ {
request_queue_t *q = blk_alloc_queue_node(GFP_KERNEL, node_id); struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
if (!q) if (!q)
return NULL; return NULL;
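The classic driver-init sequence under the new type name; error handling abbreviated, my_request_fn and my_lock illustrative:

static DEFINE_SPINLOCK(my_lock);

        struct request_queue *q;

        q = blk_init_queue(my_request_fn, &my_lock);
        if (!q)
                return -ENOMEM;
        disk->queue = q;
        /* and blk_cleanup_queue(q) on the teardown path */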
@ -1940,7 +1945,7 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
} }
EXPORT_SYMBOL(blk_init_queue_node); EXPORT_SYMBOL(blk_init_queue_node);
int blk_get_queue(request_queue_t *q) int blk_get_queue(struct request_queue *q)
{ {
if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) { if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
kobject_get(&q->kobj); kobject_get(&q->kobj);
@ -1952,7 +1957,7 @@ int blk_get_queue(request_queue_t *q)
EXPORT_SYMBOL(blk_get_queue); EXPORT_SYMBOL(blk_get_queue);
static inline void blk_free_request(request_queue_t *q, struct request *rq) static inline void blk_free_request(struct request_queue *q, struct request *rq)
{ {
if (rq->cmd_flags & REQ_ELVPRIV) if (rq->cmd_flags & REQ_ELVPRIV)
elv_put_request(q, rq); elv_put_request(q, rq);
@ -1960,7 +1965,7 @@ static inline void blk_free_request(request_queue_t *q, struct request *rq)
} }
static struct request * static struct request *
blk_alloc_request(request_queue_t *q, int rw, int priv, gfp_t gfp_mask) blk_alloc_request(struct request_queue *q, int rw, int priv, gfp_t gfp_mask)
{ {
struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask); struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
@ -1988,7 +1993,7 @@ blk_alloc_request(request_queue_t *q, int rw, int priv, gfp_t gfp_mask)
* ioc_batching returns true if the ioc is a valid batching request and * ioc_batching returns true if the ioc is a valid batching request and
* should be given priority access to a request. * should be given priority access to a request.
*/ */
static inline int ioc_batching(request_queue_t *q, struct io_context *ioc) static inline int ioc_batching(struct request_queue *q, struct io_context *ioc)
{ {
if (!ioc) if (!ioc)
return 0; return 0;
@ -2009,7 +2014,7 @@ static inline int ioc_batching(request_queue_t *q, struct io_context *ioc)
* is the behaviour we want though - once it gets a wakeup it should be given * is the behaviour we want though - once it gets a wakeup it should be given
* a nice run. * a nice run.
*/ */
static void ioc_set_batching(request_queue_t *q, struct io_context *ioc) static void ioc_set_batching(struct request_queue *q, struct io_context *ioc)
{ {
if (!ioc || ioc_batching(q, ioc)) if (!ioc || ioc_batching(q, ioc))
return; return;
@ -2018,7 +2023,7 @@ static void ioc_set_batching(request_queue_t *q, struct io_context *ioc)
ioc->last_waited = jiffies; ioc->last_waited = jiffies;
} }
static void __freed_request(request_queue_t *q, int rw) static void __freed_request(struct request_queue *q, int rw)
{ {
struct request_list *rl = &q->rq; struct request_list *rl = &q->rq;
@ -2037,7 +2042,7 @@ static void __freed_request(request_queue_t *q, int rw)
* A request has just been released. Account for it, update the full and * A request has just been released. Account for it, update the full and
* congestion status, wake up any waiters. Called under q->queue_lock. * congestion status, wake up any waiters. Called under q->queue_lock.
*/ */
static void freed_request(request_queue_t *q, int rw, int priv) static void freed_request(struct request_queue *q, int rw, int priv)
{ {
struct request_list *rl = &q->rq; struct request_list *rl = &q->rq;
@ -2057,7 +2062,7 @@ static void freed_request(request_queue_t *q, int rw, int priv)
* Returns NULL on failure, with queue_lock held. * Returns NULL on failure, with queue_lock held.
* Returns !NULL on success, with queue_lock *not held*. * Returns !NULL on success, with queue_lock *not held*.
*/ */
static struct request *get_request(request_queue_t *q, int rw_flags, static struct request *get_request(struct request_queue *q, int rw_flags,
struct bio *bio, gfp_t gfp_mask) struct bio *bio, gfp_t gfp_mask)
{ {
struct request *rq = NULL; struct request *rq = NULL;
@ -2162,7 +2167,7 @@ out:
* *
* Called with q->queue_lock held, and returns with it unlocked. * Called with q->queue_lock held, and returns with it unlocked.
*/ */
static struct request *get_request_wait(request_queue_t *q, int rw_flags, static struct request *get_request_wait(struct request_queue *q, int rw_flags,
struct bio *bio) struct bio *bio)
{ {
const int rw = rw_flags & 0x01; const int rw = rw_flags & 0x01;
@ -2204,7 +2209,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw_flags,
return rq; return rq;
} }
struct request *blk_get_request(request_queue_t *q, int rw, gfp_t gfp_mask) struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
{ {
struct request *rq; struct request *rq;
@ -2234,7 +2239,7 @@ EXPORT_SYMBOL(blk_get_request);
* *
* The queue lock must be held with interrupts disabled. * The queue lock must be held with interrupts disabled.
*/ */
void blk_start_queueing(request_queue_t *q) void blk_start_queueing(struct request_queue *q)
{ {
if (!blk_queue_plugged(q)) if (!blk_queue_plugged(q))
q->request_fn(q); q->request_fn(q);
@ -2253,7 +2258,7 @@ EXPORT_SYMBOL(blk_start_queueing);
* more, when that condition happens we need to put the request back * more, when that condition happens we need to put the request back
* on the queue. Must be called with queue lock held. * on the queue. Must be called with queue lock held.
*/ */
void blk_requeue_request(request_queue_t *q, struct request *rq) void blk_requeue_request(struct request_queue *q, struct request *rq)
{ {
blk_add_trace_rq(q, rq, BLK_TA_REQUEUE); blk_add_trace_rq(q, rq, BLK_TA_REQUEUE);
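A sketch of the typical use: the device rejects a command, so the driver hands the request back and throttles until resources free up. The lock is shown explicitly because the note above requires it:

        spin_lock_irqsave(q->queue_lock, flags);
        blk_requeue_request(q, rq);
        blk_stop_queue(q);      /* optional: restart from the completion path */
        spin_unlock_irqrestore(q->queue_lock, flags);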
@ -2284,7 +2289,7 @@ EXPORT_SYMBOL(blk_requeue_request);
* of the queue for things like a QUEUE_FULL message from a device, or a * of the queue for things like a QUEUE_FULL message from a device, or a
* host that is unable to accept a particular command. * host that is unable to accept a particular command.
*/ */
void blk_insert_request(request_queue_t *q, struct request *rq, void blk_insert_request(struct request_queue *q, struct request *rq,
int at_head, void *data) int at_head, void *data)
{ {
int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK; int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
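A hedged sketch of the call; blk_insert_request() itself marks the request as a soft barrier and stashes @data in rq->special, so the caller only allocates and inserts (my_cmd is illustrative):

        rq = blk_get_request(q, READ, GFP_KERNEL);
        blk_insert_request(q, rq, 1, my_cmd);   /* 1 == insert at head */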
@ -2330,7 +2335,7 @@ static int __blk_rq_unmap_user(struct bio *bio)
return ret; return ret;
} }
static int __blk_rq_map_user(request_queue_t *q, struct request *rq, static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
void __user *ubuf, unsigned int len) void __user *ubuf, unsigned int len)
{ {
unsigned long uaddr; unsigned long uaddr;
@ -2403,8 +2408,8 @@ unmap_bio:
* original bio must be passed back in to blk_rq_unmap_user() for proper * original bio must be passed back in to blk_rq_unmap_user() for proper
* unmapping. * unmapping.
*/ */
int blk_rq_map_user(request_queue_t *q, struct request *rq, void __user *ubuf, int blk_rq_map_user(struct request_queue *q, struct request *rq,
unsigned long len) void __user *ubuf, unsigned long len)
{ {
unsigned long bytes_read = 0; unsigned long bytes_read = 0;
struct bio *bio = NULL; struct bio *bio = NULL;
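The bio hand-back rule in the comment above matters because rq->bio may no longer point at the original bio once the request completes. A sketch of the map/execute/unmap cycle, error handling abbreviated and labels hypothetical:

        struct bio *bio;
        int err;

        err = blk_rq_map_user(q, rq, ubuf, len);
        if (err)
                goto out_put;
        bio = rq->bio;                  /* remember it before execution */
        err = blk_execute_rq(q, disk, rq, 0);
        blk_rq_unmap_user(bio);         /* pass the original bio back */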
@ -2470,7 +2475,7 @@ EXPORT_SYMBOL(blk_rq_map_user);
* original bio must be passed back in to blk_rq_unmap_user() for proper * original bio must be passed back in to blk_rq_unmap_user() for proper
* unmapping. * unmapping.
*/ */
int blk_rq_map_user_iov(request_queue_t *q, struct request *rq, int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
struct sg_iovec *iov, int iov_count, unsigned int len) struct sg_iovec *iov, int iov_count, unsigned int len)
{ {
struct bio *bio; struct bio *bio;
@ -2540,7 +2545,7 @@ EXPORT_SYMBOL(blk_rq_unmap_user);
* @len: length of user data * @len: length of user data
* @gfp_mask: memory allocation flags * @gfp_mask: memory allocation flags
*/ */
int blk_rq_map_kern(request_queue_t *q, struct request *rq, void *kbuf, int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
unsigned int len, gfp_t gfp_mask) unsigned int len, gfp_t gfp_mask)
{ {
struct bio *bio; struct bio *bio;
@ -2577,7 +2582,7 @@ EXPORT_SYMBOL(blk_rq_map_kern);
* Insert a fully prepared request at the back of the io scheduler queue * Insert a fully prepared request at the back of the io scheduler queue
* for execution. Don't wait for completion. * for execution. Don't wait for completion.
*/ */
void blk_execute_rq_nowait(request_queue_t *q, struct gendisk *bd_disk, void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq, int at_head, struct request *rq, int at_head,
rq_end_io_fn *done) rq_end_io_fn *done)
{ {
@ -2605,7 +2610,7 @@ EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
* Insert a fully prepared request at the back of the io scheduler queue * Insert a fully prepared request at the back of the io scheduler queue
* for execution and wait for completion. * for execution and wait for completion.
*/ */
int blk_execute_rq(request_queue_t *q, struct gendisk *bd_disk, int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq, int at_head) struct request *rq, int at_head)
{ {
DECLARE_COMPLETION_ONSTACK(wait); DECLARE_COMPLETION_ONSTACK(wait);
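End to end, the synchronous passthrough path reads roughly as follows; a sketch assuming a SCSI-style command, where the command bytes, buffer and disk are placeholders:

        struct request *rq;
        int err;

        rq = blk_get_request(q, READ, GFP_KERNEL);
        rq->cmd_type = REQ_TYPE_BLOCK_PC;
        rq->cmd[0] = 0x12;              /* INQUIRY, purely illustrative */
        rq->cmd_len = 6;
        rq->timeout = 60 * HZ;

        err = blk_rq_map_kern(q, rq, buf, buf_len, GFP_KERNEL);
        if (!err)
                err = blk_execute_rq(q, bd_disk, rq, 0);
        blk_put_request(rq);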
@ -2648,7 +2653,7 @@ EXPORT_SYMBOL(blk_execute_rq);
*/ */
int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector) int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
{ {
request_queue_t *q; struct request_queue *q;
if (bdev->bd_disk == NULL) if (bdev->bd_disk == NULL)
return -ENXIO; return -ENXIO;
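Callers get a one-call cache flush out of this; a sketch:

        sector_t error_sector;
        int err;

        err = blkdev_issue_flush(bdev, &error_sector);
        if (err)
                printk(KERN_ERR "flush failed near %llu\n",
                       (unsigned long long)error_sector);
        /* error_sector is only meaningful if the driver filled it in */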
@ -2684,7 +2689,7 @@ static void drive_stat_acct(struct request *rq, int nr_sectors, int new_io)
* queue lock is held and interrupts disabled, as we muck with the * queue lock is held and interrupts disabled, as we muck with the
* request queue list. * request queue list.
*/ */
static inline void add_request(request_queue_t * q, struct request * req) static inline void add_request(struct request_queue * q, struct request * req)
{ {
drive_stat_acct(req, req->nr_sectors, 1); drive_stat_acct(req, req->nr_sectors, 1);
@ -2730,7 +2735,7 @@ EXPORT_SYMBOL_GPL(disk_round_stats);
/* /*
* queue lock must be held * queue lock must be held
*/ */
void __blk_put_request(request_queue_t *q, struct request *req) void __blk_put_request(struct request_queue *q, struct request *req)
{ {
if (unlikely(!q)) if (unlikely(!q))
return; return;
@ -2760,7 +2765,7 @@ EXPORT_SYMBOL_GPL(__blk_put_request);
void blk_put_request(struct request *req) void blk_put_request(struct request *req)
{ {
unsigned long flags; unsigned long flags;
request_queue_t *q = req->q; struct request_queue *q = req->q;
/* /*
* Gee, IDE calls in w/ NULL q. Fix IDE and remove the * Gee, IDE calls in w/ NULL q. Fix IDE and remove the
@ -2798,7 +2803,7 @@ EXPORT_SYMBOL(blk_end_sync_rq);
/* /*
* Has to be called with the request spinlock acquired * Has to be called with the request spinlock acquired
*/ */
static int attempt_merge(request_queue_t *q, struct request *req, static int attempt_merge(struct request_queue *q, struct request *req,
struct request *next) struct request *next)
{ {
if (!rq_mergeable(req) || !rq_mergeable(next)) if (!rq_mergeable(req) || !rq_mergeable(next))
@ -2851,7 +2856,8 @@ static int attempt_merge(request_queue_t *q, struct request *req,
return 1; return 1;
} }
static inline int attempt_back_merge(request_queue_t *q, struct request *rq) static inline int attempt_back_merge(struct request_queue *q,
struct request *rq)
{ {
struct request *next = elv_latter_request(q, rq); struct request *next = elv_latter_request(q, rq);
@ -2861,7 +2867,8 @@ static inline int attempt_back_merge(request_queue_t *q, struct request *rq)
return 0; return 0;
} }
static inline int attempt_front_merge(request_queue_t *q, struct request *rq) static inline int attempt_front_merge(struct request_queue *q,
struct request *rq)
{ {
struct request *prev = elv_former_request(q, rq); struct request *prev = elv_former_request(q, rq);
@ -2905,7 +2912,7 @@ static void init_request_from_bio(struct request *req, struct bio *bio)
req->start_time = jiffies; req->start_time = jiffies;
} }
static int __make_request(request_queue_t *q, struct bio *bio) static int __make_request(struct request_queue *q, struct bio *bio)
{ {
struct request *req; struct request *req;
int el_ret, nr_sectors, barrier, err; int el_ret, nr_sectors, barrier, err;
@ -3119,7 +3126,7 @@ static inline int should_fail_request(struct bio *bio)
*/ */
static inline void __generic_make_request(struct bio *bio) static inline void __generic_make_request(struct bio *bio)
{ {
request_queue_t *q; struct request_queue *q;
sector_t maxsector; sector_t maxsector;
sector_t old_sector; sector_t old_sector;
int ret, nr_sectors = bio_sectors(bio); int ret, nr_sectors = bio_sectors(bio);
@ -3312,7 +3319,7 @@ static void blk_recalc_rq_segments(struct request *rq)
struct bio *bio, *prevbio = NULL; struct bio *bio, *prevbio = NULL;
int nr_phys_segs, nr_hw_segs; int nr_phys_segs, nr_hw_segs;
unsigned int phys_size, hw_size; unsigned int phys_size, hw_size;
request_queue_t *q = rq->q; struct request_queue *q = rq->q;
if (!rq->bio) if (!rq->bio)
return; return;
@ -3658,7 +3665,8 @@ void end_request(struct request *req, int uptodate)
EXPORT_SYMBOL(end_request); EXPORT_SYMBOL(end_request);
void blk_rq_bio_prep(request_queue_t *q, struct request *rq, struct bio *bio) void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
struct bio *bio)
{ {
/* first two bits are identical in rq->cmd_flags and bio->bi_rw */ /* first two bits are identical in rq->cmd_flags and bio->bi_rw */
rq->cmd_flags |= (bio->bi_rw & 3); rq->cmd_flags |= (bio->bi_rw & 3);
@ -3701,7 +3709,7 @@ int __init blk_dev_init(void)
sizeof(struct request), 0, SLAB_PANIC, NULL); sizeof(struct request), 0, SLAB_PANIC, NULL);
requestq_cachep = kmem_cache_create("blkdev_queue", requestq_cachep = kmem_cache_create("blkdev_queue",
sizeof(request_queue_t), 0, SLAB_PANIC, NULL); sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
iocontext_cachep = kmem_cache_create("blkdev_ioc", iocontext_cachep = kmem_cache_create("blkdev_ioc",
sizeof(struct io_context), 0, SLAB_PANIC, NULL); sizeof(struct io_context), 0, SLAB_PANIC, NULL);
@ -4021,7 +4029,8 @@ static ssize_t
queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page) queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
{ {
struct queue_sysfs_entry *entry = to_queue(attr); struct queue_sysfs_entry *entry = to_queue(attr);
request_queue_t *q = container_of(kobj, struct request_queue, kobj); struct request_queue *q =
container_of(kobj, struct request_queue, kobj);
ssize_t res; ssize_t res;
if (!entry->show) if (!entry->show)
@ -4041,7 +4050,7 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
const char *page, size_t length) const char *page, size_t length)
{ {
struct queue_sysfs_entry *entry = to_queue(attr); struct queue_sysfs_entry *entry = to_queue(attr);
request_queue_t *q = container_of(kobj, struct request_queue, kobj); struct request_queue *q = container_of(kobj, struct request_queue, kobj);
ssize_t res; ssize_t res;
@ -4072,7 +4081,7 @@ int blk_register_queue(struct gendisk *disk)
{ {
int ret; int ret;
request_queue_t *q = disk->queue; struct request_queue *q = disk->queue;
if (!q || !q->request_fn) if (!q || !q->request_fn)
return -ENXIO; return -ENXIO;
@ -4097,7 +4106,7 @@ int blk_register_queue(struct gendisk *disk)
void blk_unregister_queue(struct gendisk *disk) void blk_unregister_queue(struct gendisk *disk)
{ {
request_queue_t *q = disk->queue; struct request_queue *q = disk->queue;
if (q && q->request_fn) { if (q && q->request_fn) {
elv_unregister_queue(q); elv_unregister_queue(q);

View File

@ -11,13 +11,13 @@ struct noop_data {
struct list_head queue; struct list_head queue;
}; };
static void noop_merged_requests(request_queue_t *q, struct request *rq, static void noop_merged_requests(struct request_queue *q, struct request *rq,
struct request *next) struct request *next)
{ {
list_del_init(&next->queuelist); list_del_init(&next->queuelist);
} }
static int noop_dispatch(request_queue_t *q, int force) static int noop_dispatch(struct request_queue *q, int force)
{ {
struct noop_data *nd = q->elevator->elevator_data; struct noop_data *nd = q->elevator->elevator_data;
@ -31,14 +31,14 @@ static int noop_dispatch(request_queue_t *q, int force)
return 0; return 0;
} }
static void noop_add_request(request_queue_t *q, struct request *rq) static void noop_add_request(struct request_queue *q, struct request *rq)
{ {
struct noop_data *nd = q->elevator->elevator_data; struct noop_data *nd = q->elevator->elevator_data;
list_add_tail(&rq->queuelist, &nd->queue); list_add_tail(&rq->queuelist, &nd->queue);
} }
static int noop_queue_empty(request_queue_t *q) static int noop_queue_empty(struct request_queue *q)
{ {
struct noop_data *nd = q->elevator->elevator_data; struct noop_data *nd = q->elevator->elevator_data;
@ -46,7 +46,7 @@ static int noop_queue_empty(request_queue_t *q)
} }
static struct request * static struct request *
noop_former_request(request_queue_t *q, struct request *rq) noop_former_request(struct request_queue *q, struct request *rq)
{ {
struct noop_data *nd = q->elevator->elevator_data; struct noop_data *nd = q->elevator->elevator_data;
@ -56,7 +56,7 @@ noop_former_request(request_queue_t *q, struct request *rq)
} }
static struct request * static struct request *
noop_latter_request(request_queue_t *q, struct request *rq) noop_latter_request(struct request_queue *q, struct request *rq)
{ {
struct noop_data *nd = q->elevator->elevator_data; struct noop_data *nd = q->elevator->elevator_data;
@ -65,7 +65,7 @@ noop_latter_request(request_queue_t *q, struct request *rq)
return list_entry(rq->queuelist.next, struct request, queuelist); return list_entry(rq->queuelist.next, struct request, queuelist);
} }
static void *noop_init_queue(request_queue_t *q) static void *noop_init_queue(struct request_queue *q)
{ {
struct noop_data *nd; struct noop_data *nd;

View File

@ -49,22 +49,22 @@ static int sg_get_version(int __user *p)
return put_user(sg_version_num, p); return put_user(sg_version_num, p);
} }
static int scsi_get_idlun(request_queue_t *q, int __user *p) static int scsi_get_idlun(struct request_queue *q, int __user *p)
{ {
return put_user(0, p); return put_user(0, p);
} }
static int scsi_get_bus(request_queue_t *q, int __user *p) static int scsi_get_bus(struct request_queue *q, int __user *p)
{ {
return put_user(0, p); return put_user(0, p);
} }
static int sg_get_timeout(request_queue_t *q) static int sg_get_timeout(struct request_queue *q)
{ {
return q->sg_timeout / (HZ / USER_HZ); return q->sg_timeout / (HZ / USER_HZ);
} }
static int sg_set_timeout(request_queue_t *q, int __user *p) static int sg_set_timeout(struct request_queue *q, int __user *p)
{ {
int timeout, err = get_user(timeout, p); int timeout, err = get_user(timeout, p);
@ -74,14 +74,14 @@ static int sg_set_timeout(request_queue_t *q, int __user *p)
return err; return err;
} }
static int sg_get_reserved_size(request_queue_t *q, int __user *p) static int sg_get_reserved_size(struct request_queue *q, int __user *p)
{ {
unsigned val = min(q->sg_reserved_size, q->max_sectors << 9); unsigned val = min(q->sg_reserved_size, q->max_sectors << 9);
return put_user(val, p); return put_user(val, p);
} }
static int sg_set_reserved_size(request_queue_t *q, int __user *p) static int sg_set_reserved_size(struct request_queue *q, int __user *p)
{ {
int size, err = get_user(size, p); int size, err = get_user(size, p);
@ -101,7 +101,7 @@ static int sg_set_reserved_size(request_queue_t *q, int __user *p)
* will always return that we are ATAPI even for a real SCSI drive, I'm not * will always return that we are ATAPI even for a real SCSI drive, I'm not
* so sure this is worth doing anything about (why would you care??) * so sure this is worth doing anything about (why would you care??)
*/ */
static int sg_emulated_host(request_queue_t *q, int __user *p) static int sg_emulated_host(struct request_queue *q, int __user *p)
{ {
return put_user(1, p); return put_user(1, p);
} }
@ -214,7 +214,7 @@ int blk_verify_command(unsigned char *cmd, int has_write_perm)
} }
EXPORT_SYMBOL_GPL(blk_verify_command); EXPORT_SYMBOL_GPL(blk_verify_command);
static int blk_fill_sghdr_rq(request_queue_t *q, struct request *rq, static int blk_fill_sghdr_rq(struct request_queue *q, struct request *rq,
struct sg_io_hdr *hdr, int has_write_perm) struct sg_io_hdr *hdr, int has_write_perm)
{ {
memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */ memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
@ -286,7 +286,7 @@ static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
return r; return r;
} }
static int sg_io(struct file *file, request_queue_t *q, static int sg_io(struct file *file, struct request_queue *q,
struct gendisk *bd_disk, struct sg_io_hdr *hdr) struct gendisk *bd_disk, struct sg_io_hdr *hdr)
{ {
unsigned long start_time; unsigned long start_time;
@ -519,7 +519,8 @@ error:
EXPORT_SYMBOL_GPL(sg_scsi_ioctl); EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
/* Send basic block requests */ /* Send basic block requests */
static int __blk_send_generic(request_queue_t *q, struct gendisk *bd_disk, int cmd, int data) static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
int cmd, int data)
{ {
struct request *rq; struct request *rq;
int err; int err;
@ -539,7 +540,8 @@ static int __blk_send_generic(request_queue_t *q, struct gendisk *bd_disk, int c
return err; return err;
} }
static inline int blk_send_start_stop(request_queue_t *q, struct gendisk *bd_disk, int data) static inline int blk_send_start_stop(struct request_queue *q,
struct gendisk *bd_disk, int data)
{ {
return __blk_send_generic(q, bd_disk, GPCMD_START_STOP_UNIT, data); return __blk_send_generic(q, bd_disk, GPCMD_START_STOP_UNIT, data);
} }

View File

@ -372,7 +372,7 @@ static int fd_test_drive_present(int drive);
static void config_types(void); static void config_types(void);
static int floppy_open(struct inode *inode, struct file *filp); static int floppy_open(struct inode *inode, struct file *filp);
static int floppy_release(struct inode *inode, struct file *filp); static int floppy_release(struct inode *inode, struct file *filp);
static void do_fd_request(request_queue_t *); static void do_fd_request(struct request_queue *);
/************************* End of Prototypes **************************/ /************************* End of Prototypes **************************/
@ -1271,7 +1271,7 @@ static void fd1772_checkint(void)
} }
} }
static void do_fd_request(request_queue_t* q) static void do_fd_request(struct request_queue* q)
{ {
unsigned long flags; unsigned long flags;

View File

@ -924,7 +924,7 @@ static void mfm_request(void)
DBG("mfm_request: Dropping out bottom\n"); DBG("mfm_request: Dropping out bottom\n");
} }
static void do_mfm_request(request_queue_t *q) static void do_mfm_request(struct request_queue *q)
{ {
DBG("do_mfm_request: about to mfm_request\n"); DBG("do_mfm_request: about to mfm_request\n");
mfm_request(); mfm_request();

View File

@ -768,7 +768,7 @@ static void ata_scsi_dev_config(struct scsi_device *sdev,
* Decrement max hw segments accordingly. * Decrement max hw segments accordingly.
*/ */
if (dev->class == ATA_DEV_ATAPI) { if (dev->class == ATA_DEV_ATAPI) {
request_queue_t *q = sdev->request_queue; struct request_queue *q = sdev->request_queue;
blk_queue_max_hw_segments(q, q->max_hw_segments - 1); blk_queue_max_hw_segments(q, q->max_hw_segments - 1);
} }

View File

@ -1422,7 +1422,7 @@ static void redo_fd_request(void)
goto repeat; goto repeat;
} }
static void do_fd_request(request_queue_t * q) static void do_fd_request(struct request_queue * q)
{ {
redo_fd_request(); redo_fd_request();
} }

View File

@ -138,7 +138,7 @@ struct aoedev {
u16 maxbcnt; u16 maxbcnt;
struct work_struct work;/* disk create work struct */ struct work_struct work;/* disk create work struct */
struct gendisk *gd; struct gendisk *gd;
request_queue_t blkq; struct request_queue blkq;
struct hd_geometry geo; struct hd_geometry geo;
sector_t ssize; sector_t ssize;
struct timer_list timer; struct timer_list timer;

View File

@ -125,7 +125,7 @@ aoeblk_release(struct inode *inode, struct file *filp)
} }
static int static int
aoeblk_make_request(request_queue_t *q, struct bio *bio) aoeblk_make_request(struct request_queue *q, struct bio *bio)
{ {
struct aoedev *d; struct aoedev *d;
struct buf *buf; struct buf *buf;

View File

@ -1466,7 +1466,7 @@ repeat:
} }
void do_fd_request(request_queue_t * q) void do_fd_request(struct request_queue * q)
{ {
unsigned long flags; unsigned long flags;

View File

@ -139,7 +139,7 @@ static struct board_type products[] = {
static ctlr_info_t *hba[MAX_CTLR]; static ctlr_info_t *hba[MAX_CTLR];
static void do_cciss_request(request_queue_t *q); static void do_cciss_request(struct request_queue *q);
static irqreturn_t do_cciss_intr(int irq, void *dev_id); static irqreturn_t do_cciss_intr(int irq, void *dev_id);
static int cciss_open(struct inode *inode, struct file *filep); static int cciss_open(struct inode *inode, struct file *filep);
static int cciss_release(struct inode *inode, struct file *filep); static int cciss_release(struct inode *inode, struct file *filep);
@ -1584,7 +1584,7 @@ static int deregister_disk(struct gendisk *disk, drive_info_struct *drv,
*/ */
if (h->gendisk[0] != disk) { if (h->gendisk[0] != disk) {
if (disk) { if (disk) {
request_queue_t *q = disk->queue; struct request_queue *q = disk->queue;
if (disk->flags & GENHD_FL_UP) if (disk->flags & GENHD_FL_UP)
del_gendisk(disk); del_gendisk(disk);
if (q) { if (q) {
@ -2511,7 +2511,7 @@ after_error_processing:
/* /*
* Get a request and submit it to the controller. * Get a request and submit it to the controller.
*/ */
static void do_cciss_request(request_queue_t *q) static void do_cciss_request(struct request_queue *q)
{ {
ctlr_info_t *h = q->queuedata; ctlr_info_t *h = q->queuedata;
CommandList_struct *c; CommandList_struct *c;
@ -3380,7 +3380,7 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
do { do {
drive_info_struct *drv = &(hba[i]->drv[j]); drive_info_struct *drv = &(hba[i]->drv[j]);
struct gendisk *disk = hba[i]->gendisk[j]; struct gendisk *disk = hba[i]->gendisk[j];
request_queue_t *q; struct request_queue *q;
/* Check if the disk was allocated already */ /* Check if the disk was allocated already */
if (!disk){ if (!disk){
@ -3523,7 +3523,7 @@ static void __devexit cciss_remove_one(struct pci_dev *pdev)
for (j = 0; j < CISS_MAX_LUN; j++) { for (j = 0; j < CISS_MAX_LUN; j++) {
struct gendisk *disk = hba[i]->gendisk[j]; struct gendisk *disk = hba[i]->gendisk[j];
if (disk) { if (disk) {
request_queue_t *q = disk->queue; struct request_queue *q = disk->queue;
if (disk->flags & GENHD_FL_UP) if (disk->flags & GENHD_FL_UP)
del_gendisk(disk); del_gendisk(disk);

View File

@ -161,7 +161,7 @@ static int ida_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
static int ida_getgeo(struct block_device *bdev, struct hd_geometry *geo); static int ida_getgeo(struct block_device *bdev, struct hd_geometry *geo);
static int ida_ctlr_ioctl(ctlr_info_t *h, int dsk, ida_ioctl_t *io); static int ida_ctlr_ioctl(ctlr_info_t *h, int dsk, ida_ioctl_t *io);
static void do_ida_request(request_queue_t *q); static void do_ida_request(struct request_queue *q);
static void start_io(ctlr_info_t *h); static void start_io(ctlr_info_t *h);
static inline void addQ(cmdlist_t **Qptr, cmdlist_t *c); static inline void addQ(cmdlist_t **Qptr, cmdlist_t *c);
@ -391,7 +391,7 @@ static void __devexit cpqarray_remove_one_eisa (int i)
/* pdev is NULL for eisa */ /* pdev is NULL for eisa */
static int __init cpqarray_register_ctlr( int i, struct pci_dev *pdev) static int __init cpqarray_register_ctlr( int i, struct pci_dev *pdev)
{ {
request_queue_t *q; struct request_queue *q;
int j; int j;
/* /*
@ -886,7 +886,7 @@ static inline cmdlist_t *removeQ(cmdlist_t **Qptr, cmdlist_t *c)
* are in here (either via the dummy do_ida_request functions or by being * are in here (either via the dummy do_ida_request functions or by being
 * called from the interrupt handler) * called from the interrupt handler)
*/ */
static void do_ida_request(request_queue_t *q) static void do_ida_request(struct request_queue *q)
{ {
ctlr_info_t *h = q->queuedata; ctlr_info_t *h = q->queuedata;
cmdlist_t *c; cmdlist_t *c;

View File

@ -251,7 +251,7 @@ static int irqdma_allocated;
static struct request *current_req; static struct request *current_req;
static struct request_queue *floppy_queue; static struct request_queue *floppy_queue;
static void do_fd_request(request_queue_t * q); static void do_fd_request(struct request_queue * q);
#ifndef fd_get_dma_residue #ifndef fd_get_dma_residue
#define fd_get_dma_residue() get_dma_residue(FLOPPY_DMA) #define fd_get_dma_residue() get_dma_residue(FLOPPY_DMA)
@ -2981,7 +2981,7 @@ static void process_fd_request(void)
schedule_bh(redo_fd_request); schedule_bh(redo_fd_request);
} }
static void do_fd_request(request_queue_t * q) static void do_fd_request(struct request_queue * q)
{ {
if (max_buffer_sectors == 0) { if (max_buffer_sectors == 0) {
printk("VFS: do_fd_request called on non-open device\n"); printk("VFS: do_fd_request called on non-open device\n");

View File

@ -137,7 +137,7 @@ static void do_read(struct blockdev *bd, struct request *req)
lguest_send_dma(bd->phys_addr, &ping); lguest_send_dma(bd->phys_addr, &ping);
} }
static void do_lgb_request(request_queue_t *q) static void do_lgb_request(struct request_queue *q)
{ {
struct blockdev *bd; struct blockdev *bd;
struct request *req; struct request *req;

View File

@ -529,7 +529,7 @@ static struct bio *loop_get_bio(struct loop_device *lo)
return bio; return bio;
} }
static int loop_make_request(request_queue_t *q, struct bio *old_bio) static int loop_make_request(struct request_queue *q, struct bio *old_bio)
{ {
struct loop_device *lo = q->queuedata; struct loop_device *lo = q->queuedata;
int rw = bio_rw(old_bio); int rw = bio_rw(old_bio);
@ -558,7 +558,7 @@ out:
/* /*
* kick off io on the underlying address space * kick off io on the underlying address space
*/ */
static void loop_unplug(request_queue_t *q) static void loop_unplug(struct request_queue *q)
{ {
struct loop_device *lo = q->queuedata; struct loop_device *lo = q->queuedata;

View File

@ -100,7 +100,7 @@ static const char *nbdcmd_to_ascii(int cmd)
static void nbd_end_request(struct request *req) static void nbd_end_request(struct request *req)
{ {
int uptodate = (req->errors == 0) ? 1 : 0; int uptodate = (req->errors == 0) ? 1 : 0;
request_queue_t *q = req->q; struct request_queue *q = req->q;
unsigned long flags; unsigned long flags;
dprintk(DBG_BLKDEV, "%s: request %p: %s\n", req->rq_disk->disk_name, dprintk(DBG_BLKDEV, "%s: request %p: %s\n", req->rq_disk->disk_name,
@ -410,7 +410,7 @@ static void nbd_clear_que(struct nbd_device *lo)
* { printk( "Warning: Ignoring result!\n"); nbd_end_request( req ); } * { printk( "Warning: Ignoring result!\n"); nbd_end_request( req ); }
*/ */
static void do_nbd_request(request_queue_t * q) static void do_nbd_request(struct request_queue * q)
{ {
struct request *req; struct request *req;

View File

@ -183,7 +183,7 @@ static int pcd_packet(struct cdrom_device_info *cdi,
static int pcd_detect(void); static int pcd_detect(void);
static void pcd_probe_capabilities(void); static void pcd_probe_capabilities(void);
static void do_pcd_read_drq(void); static void do_pcd_read_drq(void);
static void do_pcd_request(request_queue_t * q); static void do_pcd_request(struct request_queue * q);
static void do_pcd_read(void); static void do_pcd_read(void);
struct pcd_unit { struct pcd_unit {
@ -713,7 +713,7 @@ static int pcd_detect(void)
/* I/O request processing */ /* I/O request processing */
static struct request_queue *pcd_queue; static struct request_queue *pcd_queue;
static void do_pcd_request(request_queue_t * q) static void do_pcd_request(struct request_queue * q)
{ {
if (pcd_busy) if (pcd_busy)
return; return;

View File

@ -698,7 +698,7 @@ static enum action pd_identify(struct pd_unit *disk)
/* end of io request engine */ /* end of io request engine */
static void do_pd_request(request_queue_t * q) static void do_pd_request(struct request_queue * q)
{ {
if (pd_req) if (pd_req)
return; return;

View File

@ -202,7 +202,7 @@ module_param_array(drive3, int, NULL, 0);
#define ATAPI_WRITE_10 0x2a #define ATAPI_WRITE_10 0x2a
static int pf_open(struct inode *inode, struct file *file); static int pf_open(struct inode *inode, struct file *file);
static void do_pf_request(request_queue_t * q); static void do_pf_request(struct request_queue * q);
static int pf_ioctl(struct inode *inode, struct file *file, static int pf_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg); unsigned int cmd, unsigned long arg);
static int pf_getgeo(struct block_device *bdev, struct hd_geometry *geo); static int pf_getgeo(struct block_device *bdev, struct hd_geometry *geo);
@ -760,7 +760,7 @@ static void pf_end_request(int uptodate)
} }
} }
static void do_pf_request(request_queue_t * q) static void do_pf_request(struct request_queue * q)
{ {
if (pf_busy) if (pf_busy)
return; return;

View File

@ -752,7 +752,7 @@ static inline struct bio *pkt_get_list_first(struct bio **list_head, struct bio
*/ */
static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *cgc) static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *cgc)
{ {
request_queue_t *q = bdev_get_queue(pd->bdev); struct request_queue *q = bdev_get_queue(pd->bdev);
struct request *rq; struct request *rq;
int ret = 0; int ret = 0;
@ -979,7 +979,7 @@ static void pkt_iosched_process_queue(struct pktcdvd_device *pd)
* Special care is needed if the underlying block device has a small * Special care is needed if the underlying block device has a small
* max_phys_segments value. * max_phys_segments value.
*/ */
static int pkt_set_segment_merging(struct pktcdvd_device *pd, request_queue_t *q) static int pkt_set_segment_merging(struct pktcdvd_device *pd, struct request_queue *q)
{ {
if ((pd->settings.size << 9) / CD_FRAMESIZE <= q->max_phys_segments) { if ((pd->settings.size << 9) / CD_FRAMESIZE <= q->max_phys_segments) {
/* /*
@ -2314,7 +2314,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, int write)
{ {
int ret; int ret;
long lba; long lba;
request_queue_t *q; struct request_queue *q;
/* /*
* We need to re-open the cdrom device without O_NONBLOCK to be able * We need to re-open the cdrom device without O_NONBLOCK to be able
@ -2477,7 +2477,7 @@ static int pkt_end_io_read_cloned(struct bio *bio, unsigned int bytes_done, int
return 0; return 0;
} }
static int pkt_make_request(request_queue_t *q, struct bio *bio) static int pkt_make_request(struct request_queue *q, struct bio *bio)
{ {
struct pktcdvd_device *pd; struct pktcdvd_device *pd;
char b[BDEVNAME_SIZE]; char b[BDEVNAME_SIZE];
@ -2626,7 +2626,7 @@ end_io:
static int pkt_merge_bvec(request_queue_t *q, struct bio *bio, struct bio_vec *bvec) static int pkt_merge_bvec(struct request_queue *q, struct bio *bio, struct bio_vec *bvec)
{ {
struct pktcdvd_device *pd = q->queuedata; struct pktcdvd_device *pd = q->queuedata;
sector_t zone = ZONE(bio->bi_sector, pd); sector_t zone = ZONE(bio->bi_sector, pd);
@ -2647,7 +2647,7 @@ static int pkt_merge_bvec(request_queue_t *q, struct bio *bio, struct bio_vec *b
static void pkt_init_queue(struct pktcdvd_device *pd) static void pkt_init_queue(struct pktcdvd_device *pd)
{ {
request_queue_t *q = pd->disk->queue; struct request_queue *q = pd->disk->queue;
blk_queue_make_request(q, pkt_make_request); blk_queue_make_request(q, pkt_make_request);
blk_queue_hardsect_size(q, CD_FRAMESIZE); blk_queue_hardsect_size(q, CD_FRAMESIZE);

View File

@ -64,7 +64,7 @@ static void reset_ctrl(void);
static int ps2esdi_geninit(void); static int ps2esdi_geninit(void);
static void do_ps2esdi_request(request_queue_t * q); static void do_ps2esdi_request(struct request_queue * q);
static void ps2esdi_readwrite(int cmd, struct request *req); static void ps2esdi_readwrite(int cmd, struct request *req);
@ -473,7 +473,7 @@ static void __init ps2esdi_get_device_cfg(void)
} }
/* strategy routine that handles most of the IO requests */ /* strategy routine that handles most of the IO requests */
static void do_ps2esdi_request(request_queue_t * q) static void do_ps2esdi_request(struct request_queue * q)
{ {
struct request *req; struct request *req;
/* since, this routine is called with interrupts cleared - they /* since, this routine is called with interrupts cleared - they

View File

@ -190,7 +190,7 @@ static int ps3disk_submit_flush_request(struct ps3_storage_device *dev,
} }
static void ps3disk_do_request(struct ps3_storage_device *dev, static void ps3disk_do_request(struct ps3_storage_device *dev,
request_queue_t *q) struct request_queue *q)
{ {
struct request *req; struct request *req;
@ -211,7 +211,7 @@ static void ps3disk_do_request(struct ps3_storage_device *dev,
} }
} }
static void ps3disk_request(request_queue_t *q) static void ps3disk_request(struct request_queue *q)
{ {
struct ps3_storage_device *dev = q->queuedata; struct ps3_storage_device *dev = q->queuedata;
struct ps3disk_private *priv = dev->sbd.core.driver_data; struct ps3disk_private *priv = dev->sbd.core.driver_data;
@ -404,7 +404,7 @@ static int ps3disk_identify(struct ps3_storage_device *dev)
return 0; return 0;
} }
static void ps3disk_prepare_flush(request_queue_t *q, struct request *req) static void ps3disk_prepare_flush(struct request_queue *q, struct request *req)
{ {
struct ps3_storage_device *dev = q->queuedata; struct ps3_storage_device *dev = q->queuedata;
@ -414,7 +414,7 @@ static void ps3disk_prepare_flush(request_queue_t *q, struct request *req)
req->cmd_type = REQ_TYPE_FLUSH; req->cmd_type = REQ_TYPE_FLUSH;
} }
static int ps3disk_issue_flush(request_queue_t *q, struct gendisk *gendisk, static int ps3disk_issue_flush(struct request_queue *q, struct gendisk *gendisk,
sector_t *sector) sector_t *sector)
{ {
struct ps3_storage_device *dev = q->queuedata; struct ps3_storage_device *dev = q->queuedata;

View File

@ -264,7 +264,7 @@ static int rd_blkdev_pagecache_IO(int rw, struct bio_vec *vec, sector_t sector,
* 19-JAN-1998 Richard Gooch <rgooch@atnf.csiro.au> Added devfs support * 19-JAN-1998 Richard Gooch <rgooch@atnf.csiro.au> Added devfs support
* *
*/ */
static int rd_make_request(request_queue_t *q, struct bio *bio) static int rd_make_request(struct request_queue *q, struct bio *bio)
{ {
struct block_device *bdev = bio->bi_bdev; struct block_device *bdev = bio->bi_bdev;
struct address_space * mapping = bdev->bd_inode->i_mapping; struct address_space * mapping = bdev->bd_inode->i_mapping;

View File

@ -444,7 +444,7 @@ out:
return err; return err;
} }
static void do_vdc_request(request_queue_t *q) static void do_vdc_request(struct request_queue *q)
{ {
while (1) { while (1) {
struct request *req = elv_next_request(q); struct request *req = elv_next_request(q);

View File

@ -225,7 +225,7 @@ static unsigned short write_postamble[] = {
static void swim3_select(struct floppy_state *fs, int sel); static void swim3_select(struct floppy_state *fs, int sel);
static void swim3_action(struct floppy_state *fs, int action); static void swim3_action(struct floppy_state *fs, int action);
static int swim3_readbit(struct floppy_state *fs, int bit); static int swim3_readbit(struct floppy_state *fs, int bit);
static void do_fd_request(request_queue_t * q); static void do_fd_request(struct request_queue * q);
static void start_request(struct floppy_state *fs); static void start_request(struct floppy_state *fs);
static void set_timeout(struct floppy_state *fs, int nticks, static void set_timeout(struct floppy_state *fs, int nticks,
void (*proc)(unsigned long)); void (*proc)(unsigned long));
@ -290,7 +290,7 @@ static int swim3_readbit(struct floppy_state *fs, int bit)
return (stat & DATA) == 0; return (stat & DATA) == 0;
} }
static void do_fd_request(request_queue_t * q) static void do_fd_request(struct request_queue * q)
{ {
int i; int i;
for(i=0;i<floppy_count;i++) for(i=0;i<floppy_count;i++)

View File

@ -278,7 +278,7 @@ struct carm_host {
unsigned int state; unsigned int state;
u32 fw_ver; u32 fw_ver;
request_queue_t *oob_q; struct request_queue *oob_q;
unsigned int n_oob; unsigned int n_oob;
unsigned int hw_sg_used; unsigned int hw_sg_used;
@ -287,7 +287,7 @@ struct carm_host {
unsigned int wait_q_prod; unsigned int wait_q_prod;
unsigned int wait_q_cons; unsigned int wait_q_cons;
request_queue_t *wait_q[CARM_MAX_WAIT_Q]; struct request_queue *wait_q[CARM_MAX_WAIT_Q];
unsigned int n_msgs; unsigned int n_msgs;
u64 msg_alloc; u64 msg_alloc;
@ -756,7 +756,7 @@ static inline void carm_end_request_queued(struct carm_host *host,
assert(rc == 0); assert(rc == 0);
} }
static inline void carm_push_q (struct carm_host *host, request_queue_t *q) static inline void carm_push_q (struct carm_host *host, struct request_queue *q)
{ {
unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q; unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q;
@ -768,7 +768,7 @@ static inline void carm_push_q (struct carm_host *host, request_queue_t *q)
BUG_ON(host->wait_q_prod == host->wait_q_cons); /* overrun */ BUG_ON(host->wait_q_prod == host->wait_q_cons); /* overrun */
} }
static inline request_queue_t *carm_pop_q(struct carm_host *host) static inline struct request_queue *carm_pop_q(struct carm_host *host)
{ {
unsigned int idx; unsigned int idx;
@ -783,7 +783,7 @@ static inline request_queue_t *carm_pop_q(struct carm_host *host)
static inline void carm_round_robin(struct carm_host *host) static inline void carm_round_robin(struct carm_host *host)
{ {
request_queue_t *q = carm_pop_q(host); struct request_queue *q = carm_pop_q(host);
if (q) { if (q) {
blk_start_queue(q); blk_start_queue(q);
VPRINTK("STARTED QUEUE %p\n", q); VPRINTK("STARTED QUEUE %p\n", q);
@ -802,7 +802,7 @@ static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq,
} }
} }
static void carm_oob_rq_fn(request_queue_t *q) static void carm_oob_rq_fn(struct request_queue *q)
{ {
struct carm_host *host = q->queuedata; struct carm_host *host = q->queuedata;
struct carm_request *crq; struct carm_request *crq;
@ -833,7 +833,7 @@ static void carm_oob_rq_fn(request_queue_t *q)
} }
} }
static void carm_rq_fn(request_queue_t *q) static void carm_rq_fn(struct request_queue *q)
{ {
struct carm_port *port = q->queuedata; struct carm_port *port = q->queuedata;
struct carm_host *host = port->host; struct carm_host *host = port->host;
@ -1494,7 +1494,7 @@ static int carm_init_disks(struct carm_host *host)
for (i = 0; i < CARM_MAX_PORTS; i++) { for (i = 0; i < CARM_MAX_PORTS; i++) {
struct gendisk *disk; struct gendisk *disk;
request_queue_t *q; struct request_queue *q;
struct carm_port *port; struct carm_port *port;
port = &host->port[i]; port = &host->port[i];
@ -1538,7 +1538,7 @@ static void carm_free_disks(struct carm_host *host)
for (i = 0; i < CARM_MAX_PORTS; i++) { for (i = 0; i < CARM_MAX_PORTS; i++) {
struct gendisk *disk = host->port[i].disk; struct gendisk *disk = host->port[i].disk;
if (disk) { if (disk) {
request_queue_t *q = disk->queue; struct request_queue *q = disk->queue;
if (disk->flags & GENHD_FL_UP) if (disk->flags & GENHD_FL_UP)
del_gendisk(disk); del_gendisk(disk);
@ -1571,7 +1571,7 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
struct carm_host *host; struct carm_host *host;
unsigned int pci_dac; unsigned int pci_dac;
int rc; int rc;
request_queue_t *q; struct request_queue *q;
unsigned int i; unsigned int i;
if (!printed_version++) if (!printed_version++)

View File

@ -503,7 +503,7 @@ static void ub_cleanup(struct ub_dev *sc)
{ {
struct list_head *p; struct list_head *p;
struct ub_lun *lun; struct ub_lun *lun;
request_queue_t *q; struct request_queue *q;
while (!list_empty(&sc->luns)) { while (!list_empty(&sc->luns)) {
p = sc->luns.next; p = sc->luns.next;
@ -619,7 +619,7 @@ static struct ub_scsi_cmd *ub_cmdq_pop(struct ub_dev *sc)
* The request function is our main entry point * The request function is our main entry point
*/ */
static void ub_request_fn(request_queue_t *q) static void ub_request_fn(struct request_queue *q)
{ {
struct ub_lun *lun = q->queuedata; struct ub_lun *lun = q->queuedata;
struct request *rq; struct request *rq;
@ -2273,7 +2273,7 @@ err_core:
static int ub_probe_lun(struct ub_dev *sc, int lnum) static int ub_probe_lun(struct ub_dev *sc, int lnum)
{ {
struct ub_lun *lun; struct ub_lun *lun;
request_queue_t *q; struct request_queue *q;
struct gendisk *disk; struct gendisk *disk;
int rc; int rc;


@ -114,7 +114,7 @@ struct cardinfo {
*/ */
struct bio *bio, *currentbio, **biotail; struct bio *bio, *currentbio, **biotail;
request_queue_t *queue; struct request_queue *queue;
struct mm_page { struct mm_page {
dma_addr_t page_dma; dma_addr_t page_dma;
@ -357,7 +357,7 @@ static inline void reset_page(struct mm_page *page)
page->biotail = & page->bio; page->biotail = & page->bio;
} }
static void mm_unplug_device(request_queue_t *q) static void mm_unplug_device(struct request_queue *q)
{ {
struct cardinfo *card = q->queuedata; struct cardinfo *card = q->queuedata;
unsigned long flags; unsigned long flags;
@ -541,7 +541,7 @@ static void process_page(unsigned long data)
-- mm_make_request -- mm_make_request
----------------------------------------------------------------------------------- -----------------------------------------------------------------------------------
*/ */
static int mm_make_request(request_queue_t *q, struct bio *bio) static int mm_make_request(struct request_queue *q, struct bio *bio)
{ {
struct cardinfo *card = q->queuedata; struct cardinfo *card = q->queuedata;
pr_debug("mm_make_request %llu %u\n", pr_debug("mm_make_request %llu %u\n",


@ -400,7 +400,7 @@ error_ret:
/* /*
* This is the external request processing routine * This is the external request processing routine
*/ */
static void do_viodasd_request(request_queue_t *q) static void do_viodasd_request(struct request_queue *q)
{ {
struct request *req; struct request *req;


@ -298,7 +298,7 @@ static u_char __init xd_detect (u_char *controller, unsigned int *address)
} }
/* do_xd_request: handle an incoming request */ /* do_xd_request: handle an incoming request */
static void do_xd_request (request_queue_t * q) static void do_xd_request (struct request_queue * q)
{ {
struct request *req; struct request *req;


@ -104,7 +104,7 @@ static int xd_manual_geo_init (char *command);
static u_char xd_detect (u_char *controller, unsigned int *address); static u_char xd_detect (u_char *controller, unsigned int *address);
static u_char xd_initdrives (void (*init_drive)(u_char drive)); static u_char xd_initdrives (void (*init_drive)(u_char drive));
static void do_xd_request (request_queue_t * q); static void do_xd_request (struct request_queue * q);
static int xd_ioctl (struct inode *inode,struct file *file,unsigned int cmd,unsigned long arg); static int xd_ioctl (struct inode *inode,struct file *file,unsigned int cmd,unsigned long arg);
static int xd_readwrite (u_char operation,XD_INFO *disk,char *buffer,u_int block,u_int count); static int xd_readwrite (u_char operation,XD_INFO *disk,char *buffer,u_int block,u_int count);
static void xd_recalibrate (u_char drive); static void xd_recalibrate (u_char drive);


@ -241,7 +241,7 @@ static inline void flush_requests(struct blkfront_info *info)
* do_blkif_request * do_blkif_request
* read a block; request is in a request queue * read a block; request is in a request queue
*/ */
static void do_blkif_request(request_queue_t *rq) static void do_blkif_request(struct request_queue *rq)
{ {
struct blkfront_info *info = NULL; struct blkfront_info *info = NULL;
struct request *req; struct request *req;
@ -287,7 +287,7 @@ wait:
static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size) static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
{ {
request_queue_t *rq; struct request_queue *rq;
rq = blk_init_queue(do_blkif_request, &blkif_io_lock); rq = blk_init_queue(do_blkif_request, &blkif_io_lock);
if (rq == NULL) if (rq == NULL)


@ -458,7 +458,7 @@ static inline void ace_fsm_yieldirq(struct ace_device *ace)
} }
/* Get the next read/write request; ending requests that we don't handle */ /* Get the next read/write request; ending requests that we don't handle */
struct request *ace_get_next_request(request_queue_t * q) struct request *ace_get_next_request(struct request_queue * q)
{ {
struct request *req; struct request *req;
@ -825,7 +825,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id)
/* --------------------------------------------------------------------- /* ---------------------------------------------------------------------
* Block ops * Block ops
*/ */
static void ace_request(request_queue_t * q) static void ace_request(struct request_queue * q)
{ {
struct request *req; struct request *req;
struct ace_device *ace; struct ace_device *ace;


@ -67,7 +67,7 @@ static DEFINE_SPINLOCK(z2ram_lock);
static struct block_device_operations z2_fops; static struct block_device_operations z2_fops;
static struct gendisk *z2ram_gendisk; static struct gendisk *z2ram_gendisk;
static void do_z2_request(request_queue_t *q) static void do_z2_request(struct request_queue *q)
{ {
struct request *req; struct request *req;
while ((req = elv_next_request(q)) != NULL) { while ((req = elv_next_request(q)) != NULL) {
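The do_z2_request hunk cuts off at the top of its dispatch loop; the full shape of a request_fn of this era, written against the new struct name, is roughly the following sketch (do_mydev_request and mydev_transfer are hypothetical):

#include <linux/blkdev.h>

/* hypothetical data mover; returns 0 on success */
static int mydev_transfer(struct request *req)
{
	/* copy req->current_nr_sectors sectors starting at req->sector */
	return 0;
}

static void do_mydev_request(struct request_queue *q)
{
	struct request *req;

	while ((req = elv_next_request(q)) != NULL) {
		if (!blk_fs_request(req)) {
			end_request(req, 0);	/* fail non-fs requests */
			continue;
		}
		end_request(req, mydev_transfer(req) == 0);
	}
}

The queue itself would be set up with blk_init_queue(do_mydev_request, &lock), which after this patch returns a struct request_queue pointer rather than the typedef.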


@ -2094,7 +2094,7 @@ out:
static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf, static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
int lba, int nframes) int lba, int nframes)
{ {
request_queue_t *q = cdi->disk->queue; struct request_queue *q = cdi->disk->queue;
struct request *rq; struct request *rq;
struct bio *bio; struct bio *bio;
unsigned int len; unsigned int len;


@ -398,7 +398,7 @@ static void viocd_end_request(struct request *req, int uptodate)
static int rwreq; static int rwreq;
static void do_viocd_request(request_queue_t *q) static void do_viocd_request(struct request_queue *q)
{ {
struct request *req; struct request *req;


@ -3071,7 +3071,7 @@ static inline void ide_cdrom_add_settings(ide_drive_t *drive) { ; }
/* /*
* standard prep_rq_fn that builds 10 byte cmds * standard prep_rq_fn that builds 10 byte cmds
*/ */
static int ide_cdrom_prep_fs(request_queue_t *q, struct request *rq) static int ide_cdrom_prep_fs(struct request_queue *q, struct request *rq)
{ {
int hard_sect = queue_hardsect_size(q); int hard_sect = queue_hardsect_size(q);
long block = (long)rq->hard_sector / (hard_sect >> 9); long block = (long)rq->hard_sector / (hard_sect >> 9);
@ -3137,7 +3137,7 @@ static int ide_cdrom_prep_pc(struct request *rq)
return BLKPREP_OK; return BLKPREP_OK;
} }
static int ide_cdrom_prep_fn(request_queue_t *q, struct request *rq) static int ide_cdrom_prep_fn(struct request_queue *q, struct request *rq)
{ {
if (blk_fs_request(rq)) if (blk_fs_request(rq))
return ide_cdrom_prep_fs(q, rq); return ide_cdrom_prep_fs(q, rq);
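
Only the dispatch half of ide_cdrom_prep_fn is visible above, so here is a hedged sketch of a prep_rq_fn that builds a 10-byte CDB with the new signature. mydev_prep_fn is hypothetical and 512-byte hard sectors are assumed; the real driver scales block numbers by queue_hardsect_size(q).

#include <linux/blkdev.h>
#include <linux/string.h>
#include <scsi/scsi.h>

static int mydev_prep_fn(struct request_queue *q, struct request *rq)
{
	unsigned long block = rq->sector;	/* 512-byte sectors assumed */
	unsigned long count = rq->nr_sectors;

	if (!blk_fs_request(rq))
		return BLKPREP_OK;		/* leave packet commands alone */

	memset(rq->cmd, 0, sizeof(rq->cmd));
	rq->cmd[0] = rq_data_dir(rq) == READ ? READ_10 : WRITE_10;
	rq->cmd[2] = (block >> 24) & 0xff;
	rq->cmd[3] = (block >> 16) & 0xff;
	rq->cmd[4] = (block >> 8) & 0xff;
	rq->cmd[5] = block & 0xff;
	rq->cmd[7] = (count >> 8) & 0xff;
	rq->cmd[8] = count & 0xff;
	rq->cmd_len = 10;

	return BLKPREP_OK;
}

It would be installed once at init time with blk_queue_prep_rq(q, mydev_prep_fn), whose prototype also moves to struct request_queue in this patch.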


@ -679,7 +679,7 @@ static ide_proc_entry_t idedisk_proc[] = {
}; };
#endif /* CONFIG_IDE_PROC_FS */ #endif /* CONFIG_IDE_PROC_FS */
static void idedisk_prepare_flush(request_queue_t *q, struct request *rq) static void idedisk_prepare_flush(struct request_queue *q, struct request *rq)
{ {
ide_drive_t *drive = q->queuedata; ide_drive_t *drive = q->queuedata;
@ -697,7 +697,7 @@ static void idedisk_prepare_flush(request_queue_t *q, struct request *rq)
rq->buffer = rq->cmd; rq->buffer = rq->cmd;
} }
static int idedisk_issue_flush(request_queue_t *q, struct gendisk *disk, static int idedisk_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
ide_drive_t *drive = q->queuedata; ide_drive_t *drive = q->queuedata;


@ -1327,7 +1327,7 @@ static void ide_do_request (ide_hwgroup_t *hwgroup, int masked_irq)
/* /*
* Passes the stuff to ide_do_request * Passes the stuff to ide_do_request
*/ */
void do_ide_request(request_queue_t *q) void do_ide_request(struct request_queue *q)
{ {
ide_drive_t *drive = q->queuedata; ide_drive_t *drive = q->queuedata;


@ -945,7 +945,7 @@ static void save_match(ide_hwif_t *hwif, ide_hwif_t *new, ide_hwif_t **match)
*/ */
static int ide_init_queue(ide_drive_t *drive) static int ide_init_queue(ide_drive_t *drive)
{ {
request_queue_t *q; struct request_queue *q;
ide_hwif_t *hwif = HWIF(drive); ide_hwif_t *hwif = HWIF(drive);
int max_sectors = 256; int max_sectors = 256;
int max_sg_entries = PRD_ENTRIES; int max_sg_entries = PRD_ENTRIES;


@ -652,7 +652,7 @@ repeat:
} }
} }
static void do_hd_request (request_queue_t * q) static void do_hd_request (struct request_queue * q)
{ {
disable_irq(HD_IRQ); disable_irq(HD_IRQ);
hd_request(); hd_request();


@ -526,7 +526,7 @@ static int __table_get_device(struct dm_table *t, struct dm_target *ti,
void dm_set_device_limits(struct dm_target *ti, struct block_device *bdev) void dm_set_device_limits(struct dm_target *ti, struct block_device *bdev)
{ {
request_queue_t *q = bdev_get_queue(bdev); struct request_queue *q = bdev_get_queue(bdev);
struct io_restrictions *rs = &ti->limits; struct io_restrictions *rs = &ti->limits;
/* /*
@ -979,7 +979,7 @@ int dm_table_any_congested(struct dm_table *t, int bdi_bits)
devices = dm_table_get_devices(t); devices = dm_table_get_devices(t);
for (d = devices->next; d != devices; d = d->next) { for (d = devices->next; d != devices; d = d->next) {
struct dm_dev *dd = list_entry(d, struct dm_dev, list); struct dm_dev *dd = list_entry(d, struct dm_dev, list);
request_queue_t *q = bdev_get_queue(dd->bdev); struct request_queue *q = bdev_get_queue(dd->bdev);
r |= bdi_congested(&q->backing_dev_info, bdi_bits); r |= bdi_congested(&q->backing_dev_info, bdi_bits);
} }
@ -992,7 +992,7 @@ void dm_table_unplug_all(struct dm_table *t)
for (d = devices->next; d != devices; d = d->next) { for (d = devices->next; d != devices; d = d->next) {
struct dm_dev *dd = list_entry(d, struct dm_dev, list); struct dm_dev *dd = list_entry(d, struct dm_dev, list);
request_queue_t *q = bdev_get_queue(dd->bdev); struct request_queue *q = bdev_get_queue(dd->bdev);
if (q->unplug_fn) if (q->unplug_fn)
q->unplug_fn(q); q->unplug_fn(q);
@ -1011,7 +1011,7 @@ int dm_table_flush_all(struct dm_table *t)
for (d = devices->next; d != devices; d = d->next) { for (d = devices->next; d != devices; d = d->next) {
struct dm_dev *dd = list_entry(d, struct dm_dev, list); struct dm_dev *dd = list_entry(d, struct dm_dev, list);
request_queue_t *q = bdev_get_queue(dd->bdev); struct request_queue *q = bdev_get_queue(dd->bdev);
int err; int err;
if (!q->issue_flush_fn) if (!q->issue_flush_fn)


@ -80,7 +80,7 @@ struct mapped_device {
unsigned long flags; unsigned long flags;
request_queue_t *queue; struct request_queue *queue;
struct gendisk *disk; struct gendisk *disk;
char name[16]; char name[16];
@ -792,7 +792,7 @@ static void __split_bio(struct mapped_device *md, struct bio *bio)
* The request function that just remaps the bio built up by * The request function that just remaps the bio built up by
* dm_merge_bvec. * dm_merge_bvec.
*/ */
static int dm_request(request_queue_t *q, struct bio *bio) static int dm_request(struct request_queue *q, struct bio *bio)
{ {
int r; int r;
int rw = bio_data_dir(bio); int rw = bio_data_dir(bio);
@ -844,7 +844,7 @@ static int dm_request(request_queue_t *q, struct bio *bio)
return 0; return 0;
} }
static int dm_flush_all(request_queue_t *q, struct gendisk *disk, static int dm_flush_all(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
struct mapped_device *md = q->queuedata; struct mapped_device *md = q->queuedata;
@ -859,7 +859,7 @@ static int dm_flush_all(request_queue_t *q, struct gendisk *disk,
return ret; return ret;
} }
static void dm_unplug_all(request_queue_t *q) static void dm_unplug_all(struct request_queue *q)
{ {
struct mapped_device *md = q->queuedata; struct mapped_device *md = q->queuedata;
struct dm_table *map = dm_get_table(md); struct dm_table *map = dm_get_table(md);
@ -1110,7 +1110,7 @@ static void __set_size(struct mapped_device *md, sector_t size)
static int __bind(struct mapped_device *md, struct dm_table *t) static int __bind(struct mapped_device *md, struct dm_table *t)
{ {
request_queue_t *q = md->queue; struct request_queue *q = md->queue;
sector_t size; sector_t size;
size = dm_table_get_size(t); size = dm_table_get_size(t);


@ -167,7 +167,7 @@ static void add_sector(conf_t *conf, sector_t start, int mode)
conf->nfaults = n+1; conf->nfaults = n+1;
} }
static int make_request(request_queue_t *q, struct bio *bio) static int make_request(struct request_queue *q, struct bio *bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
conf_t *conf = (conf_t*)mddev->private; conf_t *conf = (conf_t*)mddev->private;


@ -55,7 +55,7 @@ static inline dev_info_t *which_dev(mddev_t *mddev, sector_t sector)
* *
* Return amount of bytes we can take at this offset * Return amount of bytes we can take at this offset
*/ */
static int linear_mergeable_bvec(request_queue_t *q, struct bio *bio, struct bio_vec *biovec) static int linear_mergeable_bvec(struct request_queue *q, struct bio *bio, struct bio_vec *biovec)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
dev_info_t *dev0; dev_info_t *dev0;
@ -79,20 +79,20 @@ static int linear_mergeable_bvec(request_queue_t *q, struct bio *bio, struct bio
return maxsectors << 9; return maxsectors << 9;
} }
static void linear_unplug(request_queue_t *q) static void linear_unplug(struct request_queue *q)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
linear_conf_t *conf = mddev_to_conf(mddev); linear_conf_t *conf = mddev_to_conf(mddev);
int i; int i;
for (i=0; i < mddev->raid_disks; i++) { for (i=0; i < mddev->raid_disks; i++) {
request_queue_t *r_queue = bdev_get_queue(conf->disks[i].rdev->bdev); struct request_queue *r_queue = bdev_get_queue(conf->disks[i].rdev->bdev);
if (r_queue->unplug_fn) if (r_queue->unplug_fn)
r_queue->unplug_fn(r_queue); r_queue->unplug_fn(r_queue);
} }
} }
static int linear_issue_flush(request_queue_t *q, struct gendisk *disk, static int linear_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -101,7 +101,7 @@ static int linear_issue_flush(request_queue_t *q, struct gendisk *disk,
for (i=0; i < mddev->raid_disks && ret == 0; i++) { for (i=0; i < mddev->raid_disks && ret == 0; i++) {
struct block_device *bdev = conf->disks[i].rdev->bdev; struct block_device *bdev = conf->disks[i].rdev->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -118,7 +118,7 @@ static int linear_congested(void *data, int bits)
int i, ret = 0; int i, ret = 0;
for (i = 0; i < mddev->raid_disks && !ret ; i++) { for (i = 0; i < mddev->raid_disks && !ret ; i++) {
request_queue_t *q = bdev_get_queue(conf->disks[i].rdev->bdev); struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
ret |= bdi_congested(&q->backing_dev_info, bits); ret |= bdi_congested(&q->backing_dev_info, bits);
} }
return ret; return ret;
@ -330,7 +330,7 @@ static int linear_stop (mddev_t *mddev)
return 0; return 0;
} }
static int linear_make_request (request_queue_t *q, struct bio *bio) static int linear_make_request (struct request_queue *q, struct bio *bio)
{ {
const int rw = bio_data_dir(bio); const int rw = bio_data_dir(bio);
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
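
linear_make_request above is shown only down to its locals; the essence of a remapping make_request_fn under the new typing is sketched below. struct mydev and its fields are hypothetical, and returning 0 tells the block layer the bio has already been passed on.

#include <linux/blkdev.h>
#include <linux/bio.h>

struct mydev {
	struct block_device *target;	/* device we remap onto */
	sector_t start;			/* offset of our slice */
};

static int mydev_make_request(struct request_queue *q, struct bio *bio)
{
	struct mydev *dev = q->queuedata;

	bio->bi_bdev = dev->target;	/* linear remap */
	bio->bi_sector += dev->start;
	generic_make_request(bio);
	return 0;
}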


@ -211,7 +211,7 @@ static DEFINE_SPINLOCK(all_mddevs_lock);
) )
static int md_fail_request (request_queue_t *q, struct bio *bio) static int md_fail_request (struct request_queue *q, struct bio *bio)
{ {
bio_io_error(bio, bio->bi_size); bio_io_error(bio, bio->bi_size);
return 0; return 0;


@ -125,7 +125,7 @@ static void unplug_slaves(mddev_t *mddev)
mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags) if (rdev && !test_bit(Faulty, &rdev->flags)
&& atomic_read(&rdev->nr_pending)) { && atomic_read(&rdev->nr_pending)) {
request_queue_t *r_queue = bdev_get_queue(rdev->bdev); struct request_queue *r_queue = bdev_get_queue(rdev->bdev);
atomic_inc(&rdev->nr_pending); atomic_inc(&rdev->nr_pending);
rcu_read_unlock(); rcu_read_unlock();
@ -140,13 +140,13 @@ static void unplug_slaves(mddev_t *mddev)
rcu_read_unlock(); rcu_read_unlock();
} }
static void multipath_unplug(request_queue_t *q) static void multipath_unplug(struct request_queue *q)
{ {
unplug_slaves(q->queuedata); unplug_slaves(q->queuedata);
} }
static int multipath_make_request (request_queue_t *q, struct bio * bio) static int multipath_make_request (struct request_queue *q, struct bio * bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
multipath_conf_t *conf = mddev_to_conf(mddev); multipath_conf_t *conf = mddev_to_conf(mddev);
@ -199,7 +199,7 @@ static void multipath_status (struct seq_file *seq, mddev_t *mddev)
seq_printf (seq, "]"); seq_printf (seq, "]");
} }
static int multipath_issue_flush(request_queue_t *q, struct gendisk *disk, static int multipath_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -211,7 +211,7 @@ static int multipath_issue_flush(request_queue_t *q, struct gendisk *disk,
mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
struct block_device *bdev = rdev->bdev; struct block_device *bdev = rdev->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -238,7 +238,7 @@ static int multipath_congested(void *data, int bits)
for (i = 0; i < mddev->raid_disks ; i++) { for (i = 0; i < mddev->raid_disks ; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
request_queue_t *q = bdev_get_queue(rdev->bdev); struct request_queue *q = bdev_get_queue(rdev->bdev);
ret |= bdi_congested(&q->backing_dev_info, bits); ret |= bdi_congested(&q->backing_dev_info, bits);
/* Just like multipath_map, we just check the /* Just like multipath_map, we just check the


@ -25,7 +25,7 @@
#define MD_DRIVER #define MD_DRIVER
#define MD_PERSONALITY #define MD_PERSONALITY
static void raid0_unplug(request_queue_t *q) static void raid0_unplug(struct request_queue *q)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
raid0_conf_t *conf = mddev_to_conf(mddev); raid0_conf_t *conf = mddev_to_conf(mddev);
@ -33,14 +33,14 @@ static void raid0_unplug(request_queue_t *q)
int i; int i;
for (i=0; i<mddev->raid_disks; i++) { for (i=0; i<mddev->raid_disks; i++) {
request_queue_t *r_queue = bdev_get_queue(devlist[i]->bdev); struct request_queue *r_queue = bdev_get_queue(devlist[i]->bdev);
if (r_queue->unplug_fn) if (r_queue->unplug_fn)
r_queue->unplug_fn(r_queue); r_queue->unplug_fn(r_queue);
} }
} }
static int raid0_issue_flush(request_queue_t *q, struct gendisk *disk, static int raid0_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -50,7 +50,7 @@ static int raid0_issue_flush(request_queue_t *q, struct gendisk *disk,
for (i=0; i<mddev->raid_disks && ret == 0; i++) { for (i=0; i<mddev->raid_disks && ret == 0; i++) {
struct block_device *bdev = devlist[i]->bdev; struct block_device *bdev = devlist[i]->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -68,7 +68,7 @@ static int raid0_congested(void *data, int bits)
int i, ret = 0; int i, ret = 0;
for (i = 0; i < mddev->raid_disks && !ret ; i++) { for (i = 0; i < mddev->raid_disks && !ret ; i++) {
request_queue_t *q = bdev_get_queue(devlist[i]->bdev); struct request_queue *q = bdev_get_queue(devlist[i]->bdev);
ret |= bdi_congested(&q->backing_dev_info, bits); ret |= bdi_congested(&q->backing_dev_info, bits);
} }
@ -268,7 +268,7 @@ static int create_strip_zones (mddev_t *mddev)
* *
* Return amount of bytes we can accept at this offset * Return amount of bytes we can accept at this offset
*/ */
static int raid0_mergeable_bvec(request_queue_t *q, struct bio *bio, struct bio_vec *biovec) static int raid0_mergeable_bvec(struct request_queue *q, struct bio *bio, struct bio_vec *biovec)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev); sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev);
@ -408,7 +408,7 @@ static int raid0_stop (mddev_t *mddev)
return 0; return 0;
} }
static int raid0_make_request (request_queue_t *q, struct bio *bio) static int raid0_make_request (struct request_queue *q, struct bio *bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
unsigned int sect_in_chunk, chunksize_bits, chunk_size, chunk_sects; unsigned int sect_in_chunk, chunksize_bits, chunk_size, chunk_sects;


@ -552,7 +552,7 @@ static void unplug_slaves(mddev_t *mddev)
for (i=0; i<mddev->raid_disks; i++) { for (i=0; i<mddev->raid_disks; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) { if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) {
request_queue_t *r_queue = bdev_get_queue(rdev->bdev); struct request_queue *r_queue = bdev_get_queue(rdev->bdev);
atomic_inc(&rdev->nr_pending); atomic_inc(&rdev->nr_pending);
rcu_read_unlock(); rcu_read_unlock();
@ -567,7 +567,7 @@ static void unplug_slaves(mddev_t *mddev)
rcu_read_unlock(); rcu_read_unlock();
} }
static void raid1_unplug(request_queue_t *q) static void raid1_unplug(struct request_queue *q)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -575,7 +575,7 @@ static void raid1_unplug(request_queue_t *q)
md_wakeup_thread(mddev->thread); md_wakeup_thread(mddev->thread);
} }
static int raid1_issue_flush(request_queue_t *q, struct gendisk *disk, static int raid1_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -587,7 +587,7 @@ static int raid1_issue_flush(request_queue_t *q, struct gendisk *disk,
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
struct block_device *bdev = rdev->bdev; struct block_device *bdev = rdev->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -615,7 +615,7 @@ static int raid1_congested(void *data, int bits)
for (i = 0; i < mddev->raid_disks; i++) { for (i = 0; i < mddev->raid_disks; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
request_queue_t *q = bdev_get_queue(rdev->bdev); struct request_queue *q = bdev_get_queue(rdev->bdev);
/* Note the '|| 1' - when read_balance prefers /* Note the '|| 1' - when read_balance prefers
* non-congested targets, it can be removed * non-congested targets, it can be removed
@ -765,7 +765,7 @@ do_sync_io:
return NULL; return NULL;
} }
static int make_request(request_queue_t *q, struct bio * bio) static int make_request(struct request_queue *q, struct bio * bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
conf_t *conf = mddev_to_conf(mddev); conf_t *conf = mddev_to_conf(mddev);


@ -453,7 +453,7 @@ static sector_t raid10_find_virt(conf_t *conf, sector_t sector, int dev)
* If near_copies == raid_disk, there are no striping issues, * If near_copies == raid_disk, there are no striping issues,
* but in that case, the function isn't called at all. * but in that case, the function isn't called at all.
*/ */
static int raid10_mergeable_bvec(request_queue_t *q, struct bio *bio, static int raid10_mergeable_bvec(struct request_queue *q, struct bio *bio,
struct bio_vec *bio_vec) struct bio_vec *bio_vec)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -595,7 +595,7 @@ static void unplug_slaves(mddev_t *mddev)
for (i=0; i<mddev->raid_disks; i++) { for (i=0; i<mddev->raid_disks; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) { if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) {
request_queue_t *r_queue = bdev_get_queue(rdev->bdev); struct request_queue *r_queue = bdev_get_queue(rdev->bdev);
atomic_inc(&rdev->nr_pending); atomic_inc(&rdev->nr_pending);
rcu_read_unlock(); rcu_read_unlock();
@ -610,7 +610,7 @@ static void unplug_slaves(mddev_t *mddev)
rcu_read_unlock(); rcu_read_unlock();
} }
static void raid10_unplug(request_queue_t *q) static void raid10_unplug(struct request_queue *q)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -618,7 +618,7 @@ static void raid10_unplug(request_queue_t *q)
md_wakeup_thread(mddev->thread); md_wakeup_thread(mddev->thread);
} }
static int raid10_issue_flush(request_queue_t *q, struct gendisk *disk, static int raid10_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -630,7 +630,7 @@ static int raid10_issue_flush(request_queue_t *q, struct gendisk *disk,
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
struct block_device *bdev = rdev->bdev; struct block_device *bdev = rdev->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -658,7 +658,7 @@ static int raid10_congested(void *data, int bits)
for (i = 0; i < mddev->raid_disks && ret == 0; i++) { for (i = 0; i < mddev->raid_disks && ret == 0; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
request_queue_t *q = bdev_get_queue(rdev->bdev); struct request_queue *q = bdev_get_queue(rdev->bdev);
ret |= bdi_congested(&q->backing_dev_info, bits); ret |= bdi_congested(&q->backing_dev_info, bits);
} }
@ -772,7 +772,7 @@ static void unfreeze_array(conf_t *conf)
spin_unlock_irq(&conf->resync_lock); spin_unlock_irq(&conf->resync_lock);
} }
static int make_request(request_queue_t *q, struct bio * bio) static int make_request(struct request_queue *q, struct bio * bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
conf_t *conf = mddev_to_conf(mddev); conf_t *conf = mddev_to_conf(mddev);


@ -289,7 +289,7 @@ static struct stripe_head *__find_stripe(raid5_conf_t *conf, sector_t sector, in
} }
static void unplug_slaves(mddev_t *mddev); static void unplug_slaves(mddev_t *mddev);
static void raid5_unplug_device(request_queue_t *q); static void raid5_unplug_device(struct request_queue *q);
static struct stripe_head *get_active_stripe(raid5_conf_t *conf, sector_t sector, int disks, static struct stripe_head *get_active_stripe(raid5_conf_t *conf, sector_t sector, int disks,
int pd_idx, int noblock) int pd_idx, int noblock)
@ -3182,7 +3182,7 @@ static void unplug_slaves(mddev_t *mddev)
for (i=0; i<mddev->raid_disks; i++) { for (i=0; i<mddev->raid_disks; i++) {
mdk_rdev_t *rdev = rcu_dereference(conf->disks[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->disks[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) { if (rdev && !test_bit(Faulty, &rdev->flags) && atomic_read(&rdev->nr_pending)) {
request_queue_t *r_queue = bdev_get_queue(rdev->bdev); struct request_queue *r_queue = bdev_get_queue(rdev->bdev);
atomic_inc(&rdev->nr_pending); atomic_inc(&rdev->nr_pending);
rcu_read_unlock(); rcu_read_unlock();
@ -3197,7 +3197,7 @@ static void unplug_slaves(mddev_t *mddev)
rcu_read_unlock(); rcu_read_unlock();
} }
static void raid5_unplug_device(request_queue_t *q) static void raid5_unplug_device(struct request_queue *q)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
raid5_conf_t *conf = mddev_to_conf(mddev); raid5_conf_t *conf = mddev_to_conf(mddev);
@ -3216,7 +3216,7 @@ static void raid5_unplug_device(request_queue_t *q)
unplug_slaves(mddev); unplug_slaves(mddev);
} }
static int raid5_issue_flush(request_queue_t *q, struct gendisk *disk, static int raid5_issue_flush(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
@ -3228,7 +3228,7 @@ static int raid5_issue_flush(request_queue_t *q, struct gendisk *disk,
mdk_rdev_t *rdev = rcu_dereference(conf->disks[i].rdev); mdk_rdev_t *rdev = rcu_dereference(conf->disks[i].rdev);
if (rdev && !test_bit(Faulty, &rdev->flags)) { if (rdev && !test_bit(Faulty, &rdev->flags)) {
struct block_device *bdev = rdev->bdev; struct block_device *bdev = rdev->bdev;
request_queue_t *r_queue = bdev_get_queue(bdev); struct request_queue *r_queue = bdev_get_queue(bdev);
if (!r_queue->issue_flush_fn) if (!r_queue->issue_flush_fn)
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -3267,7 +3267,7 @@ static int raid5_congested(void *data, int bits)
/* We want read requests to align with chunks where possible, /* We want read requests to align with chunks where possible,
* but write requests don't need to. * but write requests don't need to.
*/ */
static int raid5_mergeable_bvec(request_queue_t *q, struct bio *bio, struct bio_vec *biovec) static int raid5_mergeable_bvec(struct request_queue *q, struct bio *bio, struct bio_vec *biovec)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev); sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev);
@ -3377,7 +3377,7 @@ static int raid5_align_endio(struct bio *bi, unsigned int bytes, int error)
static int bio_fits_rdev(struct bio *bi) static int bio_fits_rdev(struct bio *bi)
{ {
request_queue_t *q = bdev_get_queue(bi->bi_bdev); struct request_queue *q = bdev_get_queue(bi->bi_bdev);
if ((bi->bi_size>>9) > q->max_sectors) if ((bi->bi_size>>9) > q->max_sectors)
return 0; return 0;
@ -3396,7 +3396,7 @@ static int bio_fits_rdev(struct bio *bi)
} }
static int chunk_aligned_read(request_queue_t *q, struct bio * raid_bio) static int chunk_aligned_read(struct request_queue *q, struct bio * raid_bio)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
raid5_conf_t *conf = mddev_to_conf(mddev); raid5_conf_t *conf = mddev_to_conf(mddev);
@ -3466,7 +3466,7 @@ static int chunk_aligned_read(request_queue_t *q, struct bio * raid_bio)
} }
static int make_request(request_queue_t *q, struct bio * bi) static int make_request(struct request_queue *q, struct bio * bi)
{ {
mddev_t *mddev = q->queuedata; mddev_t *mddev = q->queuedata;
raid5_conf_t *conf = mddev_to_conf(mddev); raid5_conf_t *conf = mddev_to_conf(mddev);


@ -159,7 +159,7 @@ static int i2o_block_device_flush(struct i2o_device *dev)
* Returns 0 on success or negative error code on failure. * Returns 0 on success or negative error code on failure.
*/ */
static int i2o_block_issue_flush(request_queue_t * queue, struct gendisk *disk, static int i2o_block_issue_flush(struct request_queue * queue, struct gendisk *disk,
sector_t * error_sector) sector_t * error_sector)
{ {
struct i2o_block_device *i2o_blk_dev = queue->queuedata; struct i2o_block_device *i2o_blk_dev = queue->queuedata;
@ -445,7 +445,7 @@ static void i2o_block_end_request(struct request *req, int uptodate,
{ {
struct i2o_block_request *ireq = req->special; struct i2o_block_request *ireq = req->special;
struct i2o_block_device *dev = ireq->i2o_blk_dev; struct i2o_block_device *dev = ireq->i2o_blk_dev;
request_queue_t *q = req->q; struct request_queue *q = req->q;
unsigned long flags; unsigned long flags;
if (end_that_request_chunk(req, uptodate, nr_bytes)) { if (end_that_request_chunk(req, uptodate, nr_bytes)) {


@ -83,7 +83,7 @@ static int mmc_queue_thread(void *d)
* on any queue on this host, and attempt to issue it. This may * on any queue on this host, and attempt to issue it. This may
* not be the queue we were asked to process. * not be the queue we were asked to process.
*/ */
static void mmc_request(request_queue_t *q) static void mmc_request(struct request_queue *q)
{ {
struct mmc_queue *mq = q->queuedata; struct mmc_queue *mq = q->queuedata;
struct request *req; struct request *req;
@ -211,7 +211,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
void mmc_cleanup_queue(struct mmc_queue *mq) void mmc_cleanup_queue(struct mmc_queue *mq)
{ {
request_queue_t *q = mq->queue; struct request_queue *q = mq->queue;
unsigned long flags; unsigned long flags;
/* Mark that we should start throwing out stragglers */ /* Mark that we should start throwing out stragglers */
@ -252,7 +252,7 @@ EXPORT_SYMBOL(mmc_cleanup_queue);
*/ */
void mmc_queue_suspend(struct mmc_queue *mq) void mmc_queue_suspend(struct mmc_queue *mq)
{ {
request_queue_t *q = mq->queue; struct request_queue *q = mq->queue;
unsigned long flags; unsigned long flags;
if (!(mq->flags & MMC_QUEUE_SUSPENDED)) { if (!(mq->flags & MMC_QUEUE_SUSPENDED)) {
@ -272,7 +272,7 @@ void mmc_queue_suspend(struct mmc_queue *mq)
*/ */
void mmc_queue_resume(struct mmc_queue *mq) void mmc_queue_resume(struct mmc_queue *mq)
{ {
request_queue_t *q = mq->queue; struct request_queue *q = mq->queue;
unsigned long flags; unsigned long flags;
if (mq->flags & MMC_QUEUE_SUSPENDED) { if (mq->flags & MMC_QUEUE_SUSPENDED) {
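
The MMC hunks show the queue pointer changing type but cut off before the interesting part. A sketch of the suspend/resume pairing they implement, with hypothetical mydev_* names and flag bit, looks like this; note that blk_stop_queue() and blk_start_queue() must be called under q->queue_lock.

#include <linux/blkdev.h>
#include <linux/spinlock.h>

#define MYDEV_QUEUE_SUSPENDED	(1 << 0)	/* assumed flag bit */

struct mydev {
	struct request_queue *queue;
	unsigned int flags;
};

static void mydev_queue_suspend(struct mydev *dev)
{
	struct request_queue *q = dev->queue;
	unsigned long flags;

	if (!(dev->flags & MYDEV_QUEUE_SUSPENDED)) {
		dev->flags |= MYDEV_QUEUE_SUSPENDED;

		spin_lock_irqsave(q->queue_lock, flags);
		blk_stop_queue(q);	/* request_fn is no longer invoked */
		spin_unlock_irqrestore(q->queue_lock, flags);
	}
}

static void mydev_queue_resume(struct mydev *dev)
{
	struct request_queue *q = dev->queue;
	unsigned long flags;

	if (dev->flags & MYDEV_QUEUE_SUSPENDED) {
		dev->flags &= ~MYDEV_QUEUE_SUSPENDED;

		spin_lock_irqsave(q->queue_lock, flags);
		blk_start_queue(q);	/* kick the request_fn again */
		spin_unlock_irqrestore(q->queue_lock, flags);
	}
}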


@ -1187,7 +1187,7 @@ dasd_end_request_cb(struct dasd_ccw_req * cqr, void *data)
static void static void
__dasd_process_blk_queue(struct dasd_device * device) __dasd_process_blk_queue(struct dasd_device * device)
{ {
request_queue_t *queue; struct request_queue *queue;
struct request *req; struct request *req;
struct dasd_ccw_req *cqr; struct dasd_ccw_req *cqr;
int nr_queued; int nr_queued;
@ -1740,7 +1740,7 @@ dasd_cancel_req(struct dasd_ccw_req *cqr)
* Dasd request queue function. Called from ll_rw_blk.c * Dasd request queue function. Called from ll_rw_blk.c
*/ */
static void static void
do_dasd_request(request_queue_t * queue) do_dasd_request(struct request_queue * queue)
{ {
struct dasd_device *device; struct dasd_device *device;


@ -293,7 +293,7 @@ struct dasd_uid {
struct dasd_device { struct dasd_device {
/* Block device stuff. */ /* Block device stuff. */
struct gendisk *gdp; struct gendisk *gdp;
request_queue_t *request_queue; struct request_queue *request_queue;
spinlock_t request_queue_lock; spinlock_t request_queue_lock;
struct block_device *bdev; struct block_device *bdev;
unsigned int devindex; unsigned int devindex;


@ -621,7 +621,7 @@ out:
} }
static int static int
dcssblk_make_request(request_queue_t *q, struct bio *bio) dcssblk_make_request(struct request_queue *q, struct bio *bio)
{ {
struct dcssblk_dev_info *dev_info; struct dcssblk_dev_info *dev_info;
struct bio_vec *bvec; struct bio_vec *bvec;


@ -191,7 +191,7 @@ static unsigned long __init xpram_highest_page_index(void)
/* /*
* Block device make request function. * Block device make request function.
*/ */
static int xpram_make_request(request_queue_t *q, struct bio *bio) static int xpram_make_request(struct request_queue *q, struct bio *bio)
{ {
xpram_device_t *xdev = bio->bi_bdev->bd_disk->private_data; xpram_device_t *xdev = bio->bi_bdev->bd_disk->private_data;
struct bio_vec *bvec; struct bio_vec *bvec;


@ -188,7 +188,7 @@ struct tape_blk_data
{ {
struct tape_device * device; struct tape_device * device;
/* Block device request queue. */ /* Block device request queue. */
request_queue_t * request_queue; struct request_queue * request_queue;
spinlock_t request_queue_lock; spinlock_t request_queue_lock;
/* Task to move entries from block request to CCS request queue. */ /* Task to move entries from block request to CCS request queue. */


@ -147,7 +147,7 @@ static void
tapeblock_requeue(struct work_struct *work) { tapeblock_requeue(struct work_struct *work) {
struct tape_blk_data * blkdat; struct tape_blk_data * blkdat;
struct tape_device * device; struct tape_device * device;
request_queue_t * queue; struct request_queue * queue;
int nr_queued; int nr_queued;
struct request * req; struct request * req;
struct list_head * l; struct list_head * l;
@ -194,7 +194,7 @@ tapeblock_requeue(struct work_struct *work) {
* Tape request queue function. Called from ll_rw_blk.c * Tape request queue function. Called from ll_rw_blk.c
*/ */
static void static void
tapeblock_request_fn(request_queue_t *queue) tapeblock_request_fn(struct request_queue *queue)
{ {
struct tape_device *device; struct tape_device *device;


@ -185,7 +185,7 @@ static void jsfd_read(char *buf, unsigned long p, size_t togo) {
} }
} }
static void jsfd_do_request(request_queue_t *q) static void jsfd_do_request(struct request_queue *q)
{ {
struct request *req; struct request *req;


@ -654,7 +654,7 @@ void scsi_run_host_queues(struct Scsi_Host *shost)
static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate, static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
int bytes, int requeue) int bytes, int requeue)
{ {
request_queue_t *q = cmd->device->request_queue; struct request_queue *q = cmd->device->request_queue;
struct request *req = cmd->request; struct request *req = cmd->request;
unsigned long flags; unsigned long flags;
@ -818,7 +818,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
{ {
int result = cmd->result; int result = cmd->result;
int this_count = cmd->request_bufflen; int this_count = cmd->request_bufflen;
request_queue_t *q = cmd->device->request_queue; struct request_queue *q = cmd->device->request_queue;
struct request *req = cmd->request; struct request *req = cmd->request;
int clear_errors = 1; int clear_errors = 1;
struct scsi_sense_hdr sshdr; struct scsi_sense_hdr sshdr;
@ -1038,7 +1038,7 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
return BLKPREP_KILL; return BLKPREP_KILL;
} }
static int scsi_issue_flush_fn(request_queue_t *q, struct gendisk *disk, static int scsi_issue_flush_fn(struct request_queue *q, struct gendisk *disk,
sector_t *error_sector) sector_t *error_sector)
{ {
struct scsi_device *sdev = q->queuedata; struct scsi_device *sdev = q->queuedata;
@ -1340,7 +1340,7 @@ static inline int scsi_host_queue_ready(struct request_queue *q,
/* /*
* Kill a request for a dead device * Kill a request for a dead device
*/ */
static void scsi_kill_request(struct request *req, request_queue_t *q) static void scsi_kill_request(struct request *req, struct request_queue *q)
{ {
struct scsi_cmnd *cmd = req->special; struct scsi_cmnd *cmd = req->special;
struct scsi_device *sdev = cmd->device; struct scsi_device *sdev = cmd->device;
@ -2119,7 +2119,7 @@ EXPORT_SYMBOL(scsi_target_resume);
int int
scsi_internal_device_block(struct scsi_device *sdev) scsi_internal_device_block(struct scsi_device *sdev)
{ {
request_queue_t *q = sdev->request_queue; struct request_queue *q = sdev->request_queue;
unsigned long flags; unsigned long flags;
int err = 0; int err = 0;
@ -2159,7 +2159,7 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_block);
int int
scsi_internal_device_unblock(struct scsi_device *sdev) scsi_internal_device_unblock(struct scsi_device *sdev)
{ {
request_queue_t *q = sdev->request_queue; struct request_queue *q = sdev->request_queue;
int err; int err;
unsigned long flags; unsigned long flags;


@ -814,7 +814,7 @@ static int sd_issue_flush(struct device *dev, sector_t *error_sector)
return ret; return ret;
} }
static void sd_prepare_flush(request_queue_t *q, struct request *rq) static void sd_prepare_flush(struct request_queue *q, struct request *rq)
{ {
memset(rq->cmd, 0, sizeof(rq->cmd)); memset(rq->cmd, 0, sizeof(rq->cmd));
rq->cmd_type = REQ_TYPE_BLOCK_PC; rq->cmd_type = REQ_TYPE_BLOCK_PC;
@ -1285,7 +1285,7 @@ got_data:
*/ */
int hard_sector = sector_size; int hard_sector = sector_size;
sector_t sz = (sdkp->capacity/2) * (hard_sector/256); sector_t sz = (sdkp->capacity/2) * (hard_sector/256);
request_queue_t *queue = sdp->request_queue; struct request_queue *queue = sdp->request_queue;
sector_t mb = sz; sector_t mb = sz;
blk_queue_hardsect_size(queue, hard_sector); blk_queue_hardsect_size(queue, hard_sector);


@ -624,7 +624,7 @@ static void get_sectorsize(struct scsi_cd *cd)
unsigned char *buffer; unsigned char *buffer;
int the_result, retries = 3; int the_result, retries = 3;
int sector_size; int sector_size;
request_queue_t *queue; struct request_queue *queue;
buffer = kmalloc(512, GFP_KERNEL | GFP_DMA); buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
if (!buffer) if (!buffer)


@ -230,7 +230,7 @@ void bio_put(struct bio *bio)
} }
} }
inline int bio_phys_segments(request_queue_t *q, struct bio *bio) inline int bio_phys_segments(struct request_queue *q, struct bio *bio)
{ {
if (unlikely(!bio_flagged(bio, BIO_SEG_VALID))) if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
blk_recount_segments(q, bio); blk_recount_segments(q, bio);
@ -238,7 +238,7 @@ inline int bio_phys_segments(request_queue_t *q, struct bio *bio)
return bio->bi_phys_segments; return bio->bi_phys_segments;
} }
inline int bio_hw_segments(request_queue_t *q, struct bio *bio) inline int bio_hw_segments(struct request_queue *q, struct bio *bio)
{ {
if (unlikely(!bio_flagged(bio, BIO_SEG_VALID))) if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
blk_recount_segments(q, bio); blk_recount_segments(q, bio);
@ -257,7 +257,7 @@ inline int bio_hw_segments(request_queue_t *q, struct bio *bio)
*/ */
void __bio_clone(struct bio *bio, struct bio *bio_src) void __bio_clone(struct bio *bio, struct bio *bio_src)
{ {
request_queue_t *q = bdev_get_queue(bio_src->bi_bdev); struct request_queue *q = bdev_get_queue(bio_src->bi_bdev);
memcpy(bio->bi_io_vec, bio_src->bi_io_vec, memcpy(bio->bi_io_vec, bio_src->bi_io_vec,
bio_src->bi_max_vecs * sizeof(struct bio_vec)); bio_src->bi_max_vecs * sizeof(struct bio_vec));
@ -303,7 +303,7 @@ struct bio *bio_clone(struct bio *bio, gfp_t gfp_mask)
*/ */
int bio_get_nr_vecs(struct block_device *bdev) int bio_get_nr_vecs(struct block_device *bdev)
{ {
request_queue_t *q = bdev_get_queue(bdev); struct request_queue *q = bdev_get_queue(bdev);
int nr_pages; int nr_pages;
nr_pages = ((q->max_sectors << 9) + PAGE_SIZE - 1) >> PAGE_SHIFT; nr_pages = ((q->max_sectors << 9) + PAGE_SIZE - 1) >> PAGE_SHIFT;
@ -315,7 +315,7 @@ int bio_get_nr_vecs(struct block_device *bdev)
return nr_pages; return nr_pages;
} }
static int __bio_add_page(request_queue_t *q, struct bio *bio, struct page static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
*page, unsigned int len, unsigned int offset, *page, unsigned int len, unsigned int offset,
unsigned short max_sectors) unsigned short max_sectors)
{ {
@ -425,7 +425,7 @@ static int __bio_add_page(request_queue_t *q, struct bio *bio, struct page
* smaller than PAGE_SIZE, so it is always possible to add a single * smaller than PAGE_SIZE, so it is always possible to add a single
* page to an empty bio. This should only be used by REQ_PC bios. * page to an empty bio. This should only be used by REQ_PC bios.
*/ */
int bio_add_pc_page(request_queue_t *q, struct bio *bio, struct page *page, int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page *page,
unsigned int len, unsigned int offset) unsigned int len, unsigned int offset)
{ {
return __bio_add_page(q, bio, page, len, offset, q->max_hw_sectors); return __bio_add_page(q, bio, page, len, offset, q->max_hw_sectors);
@ -523,7 +523,7 @@ int bio_uncopy_user(struct bio *bio)
* to/from kernel pages as necessary. Must be paired with * to/from kernel pages as necessary. Must be paired with
* call bio_uncopy_user() on io completion. * call bio_uncopy_user() on io completion.
*/ */
struct bio *bio_copy_user(request_queue_t *q, unsigned long uaddr, struct bio *bio_copy_user(struct request_queue *q, unsigned long uaddr,
unsigned int len, int write_to_vm) unsigned int len, int write_to_vm)
{ {
unsigned long end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT; unsigned long end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
@ -600,7 +600,7 @@ out_bmd:
return ERR_PTR(ret); return ERR_PTR(ret);
} }
static struct bio *__bio_map_user_iov(request_queue_t *q, static struct bio *__bio_map_user_iov(struct request_queue *q,
struct block_device *bdev, struct block_device *bdev,
struct sg_iovec *iov, int iov_count, struct sg_iovec *iov, int iov_count,
int write_to_vm) int write_to_vm)
@ -712,7 +712,7 @@ static struct bio *__bio_map_user_iov(request_queue_t *q,
/** /**
* bio_map_user - map user address into bio * bio_map_user - map user address into bio
* @q: the request_queue_t for the bio * @q: the struct request_queue for the bio
* @bdev: destination block device * @bdev: destination block device
* @uaddr: start of user address * @uaddr: start of user address
* @len: length in bytes * @len: length in bytes
@ -721,7 +721,7 @@ static struct bio *__bio_map_user_iov(request_queue_t *q,
* Map the user space address into a bio suitable for io to a block * Map the user space address into a bio suitable for io to a block
* device. Returns an error pointer in case of error. * device. Returns an error pointer in case of error.
*/ */
struct bio *bio_map_user(request_queue_t *q, struct block_device *bdev, struct bio *bio_map_user(struct request_queue *q, struct block_device *bdev,
unsigned long uaddr, unsigned int len, int write_to_vm) unsigned long uaddr, unsigned int len, int write_to_vm)
{ {
struct sg_iovec iov; struct sg_iovec iov;
@ -734,7 +734,7 @@ struct bio *bio_map_user(request_queue_t *q, struct block_device *bdev,
/** /**
* bio_map_user_iov - map user sg_iovec table into bio * bio_map_user_iov - map user sg_iovec table into bio
* @q: the request_queue_t for the bio * @q: the struct request_queue for the bio
* @bdev: destination block device * @bdev: destination block device
* @iov: the iovec. * @iov: the iovec.
* @iov_count: number of elements in the iovec * @iov_count: number of elements in the iovec
@ -743,7 +743,7 @@ struct bio *bio_map_user(request_queue_t *q, struct block_device *bdev,
* Map the user space address into a bio suitable for io to a block * Map the user space address into a bio suitable for io to a block
* device. Returns an error pointer in case of error. * device. Returns an error pointer in case of error.
*/ */
struct bio *bio_map_user_iov(request_queue_t *q, struct block_device *bdev, struct bio *bio_map_user_iov(struct request_queue *q, struct block_device *bdev,
struct sg_iovec *iov, int iov_count, struct sg_iovec *iov, int iov_count,
int write_to_vm) int write_to_vm)
{ {
@ -808,7 +808,7 @@ static int bio_map_kern_endio(struct bio *bio, unsigned int bytes_done, int err)
} }
static struct bio *__bio_map_kern(request_queue_t *q, void *data, static struct bio *__bio_map_kern(struct request_queue *q, void *data,
unsigned int len, gfp_t gfp_mask) unsigned int len, gfp_t gfp_mask)
{ {
unsigned long kaddr = (unsigned long)data; unsigned long kaddr = (unsigned long)data;
@ -847,7 +847,7 @@ static struct bio *__bio_map_kern(request_queue_t *q, void *data,
/** /**
* bio_map_kern - map kernel address into bio * bio_map_kern - map kernel address into bio
* @q: the request_queue_t for the bio * @q: the struct request_queue for the bio
* @data: pointer to buffer to map * @data: pointer to buffer to map
* @len: length in bytes * @len: length in bytes
* @gfp_mask: allocation flags for bio allocation * @gfp_mask: allocation flags for bio allocation
@ -855,7 +855,7 @@ static struct bio *__bio_map_kern(request_queue_t *q, void *data,
* Map the kernel address into a bio suitable for io to a block * Map the kernel address into a bio suitable for io to a block
* device. Returns an error pointer in case of error. * device. Returns an error pointer in case of error.
*/ */
struct bio *bio_map_kern(request_queue_t *q, void *data, unsigned int len, struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
gfp_t gfp_mask) gfp_t gfp_mask)
{ {
struct bio *bio; struct bio *bio;
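
To show these bio helpers from the caller's side, here is a hedged sketch of filling a bio page by page against a queue's limits; mydev_build_bio is hypothetical and pages[] is assumed to be pinned already.

#include <linux/bio.h>
#include <linux/blkdev.h>

static struct bio *mydev_build_bio(struct block_device *bdev,
				   struct page **pages, int npages)
{
	struct request_queue *q = bdev_get_queue(bdev);
	struct bio *bio;
	int i;

	bio = bio_alloc(GFP_KERNEL, bio_get_nr_vecs(bdev));
	if (!bio)
		return NULL;
	bio->bi_bdev = bdev;

	for (i = 0; i < npages; i++)
		if (bio_add_pc_page(q, bio, pages[i], PAGE_SIZE, 0) < PAGE_SIZE)
			break;	/* hit a queue limit; send what we have */

	return bio;
}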


@ -37,7 +37,7 @@ struct omap_mbox_ops {
struct omap_mbox_queue { struct omap_mbox_queue {
spinlock_t lock; spinlock_t lock;
request_queue_t *queue; struct request_queue *queue;
struct work_struct work; struct work_struct work;
int (*callback)(void *); int (*callback)(void *);
struct omap_mbox *mbox; struct omap_mbox *mbox;


@ -37,7 +37,6 @@
struct scsi_ioctl_command; struct scsi_ioctl_command;
struct request_queue; struct request_queue;
typedef struct request_queue request_queue_t;
struct elevator_queue; struct elevator_queue;
typedef struct elevator_queue elevator_t; typedef struct elevator_queue elevator_t;
struct request_pm_state; struct request_pm_state;
@ -233,7 +232,7 @@ struct request {
struct list_head queuelist; struct list_head queuelist;
struct list_head donelist; struct list_head donelist;
request_queue_t *q; struct request_queue *q;
unsigned int cmd_flags; unsigned int cmd_flags;
enum rq_cmd_type_bits cmd_type; enum rq_cmd_type_bits cmd_type;
@ -337,15 +336,15 @@ struct request_pm_state
#include <linux/elevator.h> #include <linux/elevator.h>
typedef void (request_fn_proc) (request_queue_t *q); typedef void (request_fn_proc) (struct request_queue *q);
typedef int (make_request_fn) (request_queue_t *q, struct bio *bio); typedef int (make_request_fn) (struct request_queue *q, struct bio *bio);
typedef int (prep_rq_fn) (request_queue_t *, struct request *); typedef int (prep_rq_fn) (struct request_queue *, struct request *);
typedef void (unplug_fn) (request_queue_t *); typedef void (unplug_fn) (struct request_queue *);
struct bio_vec; struct bio_vec;
typedef int (merge_bvec_fn) (request_queue_t *, struct bio *, struct bio_vec *); typedef int (merge_bvec_fn) (struct request_queue *, struct bio *, struct bio_vec *);
typedef int (issue_flush_fn) (request_queue_t *, struct gendisk *, sector_t *); typedef int (issue_flush_fn) (struct request_queue *, struct gendisk *, sector_t *);
typedef void (prepare_flush_fn) (request_queue_t *, struct request *); typedef void (prepare_flush_fn) (struct request_queue *, struct request *);
typedef void (softirq_done_fn)(struct request *); typedef void (softirq_done_fn)(struct request *);
enum blk_queue_state { enum blk_queue_state {
@ -626,13 +625,13 @@ extern unsigned long blk_max_low_pfn, blk_max_pfn;
#ifdef CONFIG_BOUNCE #ifdef CONFIG_BOUNCE
extern int init_emergency_isa_pool(void); extern int init_emergency_isa_pool(void);
extern void blk_queue_bounce(request_queue_t *q, struct bio **bio); extern void blk_queue_bounce(struct request_queue *q, struct bio **bio);
#else #else
static inline int init_emergency_isa_pool(void) static inline int init_emergency_isa_pool(void)
{ {
return 0; return 0;
} }
static inline void blk_queue_bounce(request_queue_t *q, struct bio **bio) static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio)
{ {
} }
#endif /* CONFIG_MMU */ #endif /* CONFIG_MMU */
@ -646,14 +645,14 @@ extern void blk_unregister_queue(struct gendisk *disk);
extern void register_disk(struct gendisk *dev); extern void register_disk(struct gendisk *dev);
extern void generic_make_request(struct bio *bio); extern void generic_make_request(struct bio *bio);
extern void blk_put_request(struct request *); extern void blk_put_request(struct request *);
extern void __blk_put_request(request_queue_t *, struct request *); extern void __blk_put_request(struct request_queue *, struct request *);
extern void blk_end_sync_rq(struct request *rq, int error); extern void blk_end_sync_rq(struct request *rq, int error);
extern struct request *blk_get_request(request_queue_t *, int, gfp_t); extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
extern void blk_insert_request(request_queue_t *, struct request *, int, void *); extern void blk_insert_request(struct request_queue *, struct request *, int, void *);
extern void blk_requeue_request(request_queue_t *, struct request *); extern void blk_requeue_request(struct request_queue *, struct request *);
extern void blk_plug_device(request_queue_t *); extern void blk_plug_device(struct request_queue *);
extern int blk_remove_plug(request_queue_t *); extern int blk_remove_plug(struct request_queue *);
extern void blk_recount_segments(request_queue_t *, struct bio *); extern void blk_recount_segments(struct request_queue *, struct bio *);
extern int scsi_cmd_ioctl(struct file *, struct request_queue *, extern int scsi_cmd_ioctl(struct file *, struct request_queue *,
struct gendisk *, unsigned int, void __user *); struct gendisk *, unsigned int, void __user *);
extern int sg_scsi_ioctl(struct file *, struct request_queue *, extern int sg_scsi_ioctl(struct file *, struct request_queue *,
@ -662,14 +661,15 @@ extern int sg_scsi_ioctl(struct file *, struct request_queue *,
/* /*
* Temporary export, until SCSI gets fixed up. * Temporary export, until SCSI gets fixed up.
*/ */
extern int ll_back_merge_fn(request_queue_t *, struct request *, struct bio *); extern int ll_back_merge_fn(struct request_queue *, struct request *,
struct bio *);
/* /*
* A queue has just exited congestion. Note this in the global counter of * A queue has just exited congestion. Note this in the global counter of
* congested queues, and wake up anyone who was waiting for requests to be * congested queues, and wake up anyone who was waiting for requests to be
* put back. * put back.
*/ */
static inline void blk_clear_queue_congested(request_queue_t *q, int rw) static inline void blk_clear_queue_congested(struct request_queue *q, int rw)
{ {
clear_bdi_congested(&q->backing_dev_info, rw); clear_bdi_congested(&q->backing_dev_info, rw);
} }
@ -678,29 +678,29 @@ static inline void blk_clear_queue_congested(request_queue_t *q, int rw)
* A queue has just entered congestion. Flag that in the queue's VM-visible * A queue has just entered congestion. Flag that in the queue's VM-visible
* state flags and increment the global counter of congested queues. * state flags and increment the global counter of congested queues.
*/ */
static inline void blk_set_queue_congested(request_queue_t *q, int rw) static inline void blk_set_queue_congested(struct request_queue *q, int rw)
{ {
set_bdi_congested(&q->backing_dev_info, rw); set_bdi_congested(&q->backing_dev_info, rw);
} }
-extern void blk_start_queue(request_queue_t *q);
+extern void blk_start_queue(struct request_queue *q);
-extern void blk_stop_queue(request_queue_t *q);
+extern void blk_stop_queue(struct request_queue *q);
 extern void blk_sync_queue(struct request_queue *q);
-extern void __blk_stop_queue(request_queue_t *q);
+extern void __blk_stop_queue(struct request_queue *q);
-extern void blk_run_queue(request_queue_t *);
+extern void blk_run_queue(struct request_queue *);
-extern void blk_start_queueing(request_queue_t *);
+extern void blk_start_queueing(struct request_queue *);
-extern int blk_rq_map_user(request_queue_t *, struct request *, void __user *, unsigned long);
+extern int blk_rq_map_user(struct request_queue *, struct request *, void __user *, unsigned long);
 extern int blk_rq_unmap_user(struct bio *);
-extern int blk_rq_map_kern(request_queue_t *, struct request *, void *, unsigned int, gfp_t);
+extern int blk_rq_map_kern(struct request_queue *, struct request *, void *, unsigned int, gfp_t);
-extern int blk_rq_map_user_iov(request_queue_t *, struct request *,
+extern int blk_rq_map_user_iov(struct request_queue *, struct request *,
			       struct sg_iovec *, int, unsigned int);
-extern int blk_execute_rq(request_queue_t *, struct gendisk *,
+extern int blk_execute_rq(struct request_queue *, struct gendisk *,
			  struct request *, int);
-extern void blk_execute_rq_nowait(request_queue_t *, struct gendisk *,
+extern void blk_execute_rq_nowait(struct request_queue *, struct gendisk *,
				  struct request *, int, rq_end_io_fn *);
 extern int blk_verify_command(unsigned char *, int);
-static inline request_queue_t *bdev_get_queue(struct block_device *bdev)
+static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
 {
	return bdev->bd_disk->queue;
 }
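bdev_get_queue() is the usual bridge from a block_device to its queue, and it composes naturally with the map/execute helpers above. A minimal sketch (send_cmd is hypothetical; blk_get_request/blk_put_request are part of the same API, though not touched by this hunk):

static int send_cmd(struct block_device *bdev, void *buf, unsigned int len)
{
	struct request_queue *q = bdev_get_queue(bdev);
	struct request *rq;
	int err;

	rq = blk_get_request(q, READ, GFP_KERNEL);
	if (!rq)
		return -ENOMEM;

	/* map the kernel buffer into the request, then run it to completion */
	err = blk_rq_map_kern(q, rq, buf, len, GFP_KERNEL);
	if (!err)
		err = blk_execute_rq(q, bdev->bd_disk, rq, 0);

	blk_put_request(rq);
	return err;
}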
@@ -749,41 +749,41 @@ static inline void blkdev_dequeue_request(struct request *req)
 /*
  * Access functions for manipulating queue properties
  */
-extern request_queue_t *blk_init_queue_node(request_fn_proc *rfn,
+extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
					spinlock_t *lock, int node_id);
-extern request_queue_t *blk_init_queue(request_fn_proc *, spinlock_t *);
+extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *);
-extern void blk_cleanup_queue(request_queue_t *);
+extern void blk_cleanup_queue(struct request_queue *);
-extern void blk_queue_make_request(request_queue_t *, make_request_fn *);
+extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
-extern void blk_queue_bounce_limit(request_queue_t *, u64);
+extern void blk_queue_bounce_limit(struct request_queue *, u64);
-extern void blk_queue_max_sectors(request_queue_t *, unsigned int);
+extern void blk_queue_max_sectors(struct request_queue *, unsigned int);
-extern void blk_queue_max_phys_segments(request_queue_t *, unsigned short);
+extern void blk_queue_max_phys_segments(struct request_queue *, unsigned short);
-extern void blk_queue_max_hw_segments(request_queue_t *, unsigned short);
+extern void blk_queue_max_hw_segments(struct request_queue *, unsigned short);
-extern void blk_queue_max_segment_size(request_queue_t *, unsigned int);
+extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
-extern void blk_queue_hardsect_size(request_queue_t *, unsigned short);
+extern void blk_queue_hardsect_size(struct request_queue *, unsigned short);
-extern void blk_queue_stack_limits(request_queue_t *t, request_queue_t *b);
+extern void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b);
-extern void blk_queue_segment_boundary(request_queue_t *, unsigned long);
+extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
-extern void blk_queue_prep_rq(request_queue_t *, prep_rq_fn *pfn);
+extern void blk_queue_prep_rq(struct request_queue *, prep_rq_fn *pfn);
-extern void blk_queue_merge_bvec(request_queue_t *, merge_bvec_fn *);
+extern void blk_queue_merge_bvec(struct request_queue *, merge_bvec_fn *);
-extern void blk_queue_dma_alignment(request_queue_t *, int);
+extern void blk_queue_dma_alignment(struct request_queue *, int);
-extern void blk_queue_softirq_done(request_queue_t *, softirq_done_fn *);
+extern void blk_queue_softirq_done(struct request_queue *, softirq_done_fn *);
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
-extern int blk_queue_ordered(request_queue_t *, unsigned, prepare_flush_fn *);
+extern int blk_queue_ordered(struct request_queue *, unsigned, prepare_flush_fn *);
-extern void blk_queue_issue_flush_fn(request_queue_t *, issue_flush_fn *);
+extern void blk_queue_issue_flush_fn(struct request_queue *, issue_flush_fn *);
-extern int blk_do_ordered(request_queue_t *, struct request **);
+extern int blk_do_ordered(struct request_queue *, struct request **);
-extern unsigned blk_ordered_cur_seq(request_queue_t *);
+extern unsigned blk_ordered_cur_seq(struct request_queue *);
 extern unsigned blk_ordered_req_seq(struct request *);
-extern void blk_ordered_complete_seq(request_queue_t *, unsigned, int);
+extern void blk_ordered_complete_seq(struct request_queue *, unsigned, int);
-extern int blk_rq_map_sg(request_queue_t *, struct request *, struct scatterlist *);
+extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *);
 extern void blk_dump_rq_flags(struct request *, char *);
-extern void generic_unplug_device(request_queue_t *);
+extern void generic_unplug_device(struct request_queue *);
-extern void __generic_unplug_device(request_queue_t *);
+extern void __generic_unplug_device(struct request_queue *);
 extern long nr_blockdev_pages(void);
-int blk_get_queue(request_queue_t *);
+int blk_get_queue(struct request_queue *);
-request_queue_t *blk_alloc_queue(gfp_t);
+struct request_queue *blk_alloc_queue(gfp_t);
-request_queue_t *blk_alloc_queue_node(gfp_t, int);
+struct request_queue *blk_alloc_queue_node(gfp_t, int);
-extern void blk_put_queue(request_queue_t *);
+extern void blk_put_queue(struct request_queue *);
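In a request-based driver these setters stack up right after queue creation. A minimal probe-time sketch (mydev_request_fn and the chosen limits are hypothetical; BLK_BOUNCE_HIGH is the real constant):

static void mydev_request_fn(struct request_queue *q);

static struct request_queue *mydev_init_queue(spinlock_t *lock)
{
	struct request_queue *q = blk_init_queue(mydev_request_fn, lock);

	if (!q)
		return NULL;

	blk_queue_max_sectors(q, 128);		/* cap each request at 64KB */
	blk_queue_max_phys_segments(q, 32);
	blk_queue_hardsect_size(q, 512);
	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
	return q;
}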
 /*
  * tag stuff
@@ -791,13 +791,13 @@ extern void blk_put_queue(request_queue_t *);
 #define blk_queue_tag_depth(q)		((q)->queue_tags->busy)
 #define blk_queue_tag_queue(q)		((q)->queue_tags->busy < (q)->queue_tags->max_depth)
 #define blk_rq_tagged(rq)		((rq)->cmd_flags & REQ_QUEUED)
-extern int blk_queue_start_tag(request_queue_t *, struct request *);
+extern int blk_queue_start_tag(struct request_queue *, struct request *);
-extern struct request *blk_queue_find_tag(request_queue_t *, int);
+extern struct request *blk_queue_find_tag(struct request_queue *, int);
-extern void blk_queue_end_tag(request_queue_t *, struct request *);
+extern void blk_queue_end_tag(struct request_queue *, struct request *);
-extern int blk_queue_init_tags(request_queue_t *, int, struct blk_queue_tag *);
+extern int blk_queue_init_tags(struct request_queue *, int, struct blk_queue_tag *);
-extern void blk_queue_free_tags(request_queue_t *);
+extern void blk_queue_free_tags(struct request_queue *);
-extern int blk_queue_resize_tags(request_queue_t *, int);
+extern int blk_queue_resize_tags(struct request_queue *, int);
-extern void blk_queue_invalidate_tags(request_queue_t *);
+extern void blk_queue_invalidate_tags(struct request_queue *);
 extern struct blk_queue_tag *blk_init_tags(int);
 extern void blk_free_tags(struct blk_queue_tag *);
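A minimal sketch of the runtime side of tagging, filling in the mydev_request_fn assumed earlier (mydev_issue and mydev_complete are hypothetical; end_that_request_last is the era's completion call):

static void mydev_request_fn(struct request_queue *q)
{
	struct request *rq;

	while ((rq = elv_next_request(q)) != NULL) {
		if (blk_queue_start_tag(q, rq))
			break;		/* tag depth exhausted; retry later */
		mydev_issue(rq);	/* rq->tag names the command to the HW */
	}
}

/* completion path: release the tag before finishing the request */
static void mydev_complete(struct request_queue *q, struct request *rq)
{
	blk_queue_end_tag(q, rq);
	end_that_request_last(rq, 1);
}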
@@ -809,7 +809,7 @@ static inline struct request *blk_map_queue_find_tag(struct blk_queue_tag *bqt,
	return bqt->tag_index[tag];
 }
-extern void blk_rq_bio_prep(request_queue_t *, struct request *, struct bio *);
+extern void blk_rq_bio_prep(struct request_queue *, struct request *, struct bio *);
 extern int blkdev_issue_flush(struct block_device *, sector_t *);
 #define MAX_PHYS_SEGMENTS 128
@@ -821,7 +821,7 @@ extern int blkdev_issue_flush(struct block_device *, sector_t *);
 #define blkdev_entry_to_request(entry)	list_entry((entry), struct request, queuelist)
-static inline int queue_hardsect_size(request_queue_t *q)
+static inline int queue_hardsect_size(struct request_queue *q)
 {
	int retval = 512;
@@ -836,7 +836,7 @@ static inline int bdev_hardsect_size(struct block_device *bdev)
	return queue_hardsect_size(bdev_get_queue(bdev));
 }
-static inline int queue_dma_alignment(request_queue_t *q)
+static inline int queue_dma_alignment(struct request_queue *q)
 {
	int retval = 511;
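queue_dma_alignment() returns an alignment mask (511 by default, i.e. 512-byte alignment) that callers such as blk_rq_map_user test user buffers against. A hedged sketch of that check (function name hypothetical):

static int mydev_can_map_directly(struct request_queue *q, void __user *ubuf)
{
	/* any set bit means the buffer violates the queue's DMA alignment */
	if ((unsigned long)ubuf & queue_dma_alignment(q))
		return 0;	/* fall back to a bounced/copied transfer */
	return 1;
}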

View File

@@ -144,7 +144,7 @@ struct blk_user_trace_setup {
 #if defined(CONFIG_BLK_DEV_IO_TRACE)
 extern int blk_trace_ioctl(struct block_device *, unsigned, char __user *);
-extern void blk_trace_shutdown(request_queue_t *);
+extern void blk_trace_shutdown(struct request_queue *);
 extern void __blk_add_trace(struct blk_trace *, sector_t, int, int, u32, int, int, void *);
 /**
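blk_trace_ioctl() is reached from userspace through the BLKTRACE* ioctls. A hedged userspace sketch, assuming the ioctl numbers from <linux/fs.h> of this era (device path and buffer sizes are arbitrary):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/blktrace_api.h>

int start_trace(const char *dev)
{
	struct blk_user_trace_setup buts = {
		.buf_size	= 512 * 1024,
		.buf_nr		= 4,
	};
	int fd = open(dev, O_RDONLY);

	if (fd < 0 || ioctl(fd, BLKTRACESETUP, &buts) < 0)
		return -1;
	/* trace data then appears via relay files under debugfs */
	return ioctl(fd, BLKTRACESTART);
}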

View File

@@ -5,29 +5,29 @@
 #ifdef CONFIG_BLOCK
-typedef int (elevator_merge_fn) (request_queue_t *, struct request **,
+typedef int (elevator_merge_fn) (struct request_queue *, struct request **,
				 struct bio *);
-typedef void (elevator_merge_req_fn) (request_queue_t *, struct request *, struct request *);
+typedef void (elevator_merge_req_fn) (struct request_queue *, struct request *, struct request *);
-typedef void (elevator_merged_fn) (request_queue_t *, struct request *, int);
+typedef void (elevator_merged_fn) (struct request_queue *, struct request *, int);
-typedef int (elevator_allow_merge_fn) (request_queue_t *, struct request *, struct bio *);
+typedef int (elevator_allow_merge_fn) (struct request_queue *, struct request *, struct bio *);
-typedef int (elevator_dispatch_fn) (request_queue_t *, int);
+typedef int (elevator_dispatch_fn) (struct request_queue *, int);
-typedef void (elevator_add_req_fn) (request_queue_t *, struct request *);
+typedef void (elevator_add_req_fn) (struct request_queue *, struct request *);
-typedef int (elevator_queue_empty_fn) (request_queue_t *);
+typedef int (elevator_queue_empty_fn) (struct request_queue *);
-typedef struct request *(elevator_request_list_fn) (request_queue_t *, struct request *);
+typedef struct request *(elevator_request_list_fn) (struct request_queue *, struct request *);
-typedef void (elevator_completed_req_fn) (request_queue_t *, struct request *);
+typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
-typedef int (elevator_may_queue_fn) (request_queue_t *, int);
+typedef int (elevator_may_queue_fn) (struct request_queue *, int);
-typedef int (elevator_set_req_fn) (request_queue_t *, struct request *, gfp_t);
+typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, gfp_t);
 typedef void (elevator_put_req_fn) (struct request *);
-typedef void (elevator_activate_req_fn) (request_queue_t *, struct request *);
+typedef void (elevator_activate_req_fn) (struct request_queue *, struct request *);
-typedef void (elevator_deactivate_req_fn) (request_queue_t *, struct request *);
+typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct request *);
-typedef void *(elevator_init_fn) (request_queue_t *);
+typedef void *(elevator_init_fn) (struct request_queue *);
 typedef void (elevator_exit_fn) (elevator_t *);
 struct elevator_ops
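These hooks are what a concrete I/O scheduler fills in. For orientation, a condensed sketch in the spirit of the era's noop scheduler (struct noop_data and the function names are assumptions modeled on block/noop-iosched.c; most hooks elided):

struct noop_data {
	struct list_head queue;
};

static void noop_add_request(struct request_queue *q, struct request *rq)
{
	struct noop_data *nd = q->elevator->elevator_data;

	list_add_tail(&rq->queuelist, &nd->queue);
}

static int noop_dispatch(struct request_queue *q, int force)
{
	struct noop_data *nd = q->elevator->elevator_data;

	if (!list_empty(&nd->queue)) {
		struct request *rq;

		rq = list_entry(nd->queue.next, struct request, queuelist);
		list_del_init(&rq->queuelist);
		elv_dispatch_sort(q, rq);	/* hand off in sector order */
		return 1;
	}
	return 0;
}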
@@ -94,27 +94,27 @@ struct elevator_queue
 /*
  * block elevator interface
  */
-extern void elv_dispatch_sort(request_queue_t *, struct request *);
+extern void elv_dispatch_sort(struct request_queue *, struct request *);
-extern void elv_dispatch_add_tail(request_queue_t *, struct request *);
+extern void elv_dispatch_add_tail(struct request_queue *, struct request *);
-extern void elv_add_request(request_queue_t *, struct request *, int, int);
+extern void elv_add_request(struct request_queue *, struct request *, int, int);
-extern void __elv_add_request(request_queue_t *, struct request *, int, int);
+extern void __elv_add_request(struct request_queue *, struct request *, int, int);
-extern void elv_insert(request_queue_t *, struct request *, int);
+extern void elv_insert(struct request_queue *, struct request *, int);
-extern int elv_merge(request_queue_t *, struct request **, struct bio *);
+extern int elv_merge(struct request_queue *, struct request **, struct bio *);
-extern void elv_merge_requests(request_queue_t *, struct request *,
+extern void elv_merge_requests(struct request_queue *, struct request *,
			       struct request *);
-extern void elv_merged_request(request_queue_t *, struct request *, int);
+extern void elv_merged_request(struct request_queue *, struct request *, int);
-extern void elv_dequeue_request(request_queue_t *, struct request *);
+extern void elv_dequeue_request(struct request_queue *, struct request *);
-extern void elv_requeue_request(request_queue_t *, struct request *);
+extern void elv_requeue_request(struct request_queue *, struct request *);
-extern int elv_queue_empty(request_queue_t *);
+extern int elv_queue_empty(struct request_queue *);
 extern struct request *elv_next_request(struct request_queue *q);
-extern struct request *elv_former_request(request_queue_t *, struct request *);
+extern struct request *elv_former_request(struct request_queue *, struct request *);
-extern struct request *elv_latter_request(request_queue_t *, struct request *);
+extern struct request *elv_latter_request(struct request_queue *, struct request *);
-extern int elv_register_queue(request_queue_t *q);
+extern int elv_register_queue(struct request_queue *q);
-extern void elv_unregister_queue(request_queue_t *q);
+extern void elv_unregister_queue(struct request_queue *q);
-extern int elv_may_queue(request_queue_t *, int);
+extern int elv_may_queue(struct request_queue *, int);
-extern void elv_completed_request(request_queue_t *, struct request *);
+extern void elv_completed_request(struct request_queue *, struct request *);
-extern int elv_set_request(request_queue_t *, struct request *, gfp_t);
+extern int elv_set_request(struct request_queue *, struct request *, gfp_t);
-extern void elv_put_request(request_queue_t *, struct request *);
+extern void elv_put_request(struct request_queue *, struct request *);
 /*
  * io scheduler registration
@@ -125,18 +125,18 @@ extern void elv_unregister(struct elevator_type *);
 /*
  * io scheduler sysfs switching
  */
-extern ssize_t elv_iosched_show(request_queue_t *, char *);
+extern ssize_t elv_iosched_show(struct request_queue *, char *);
-extern ssize_t elv_iosched_store(request_queue_t *, const char *, size_t);
+extern ssize_t elv_iosched_store(struct request_queue *, const char *, size_t);
-extern int elevator_init(request_queue_t *, char *);
+extern int elevator_init(struct request_queue *, char *);
 extern void elevator_exit(elevator_t *);
 extern int elv_rq_merge_ok(struct request *, struct bio *);
 /*
  * Helper functions.
  */
-extern struct request *elv_rb_former_request(request_queue_t *, struct request *);
+extern struct request *elv_rb_former_request(struct request_queue *, struct request *);
-extern struct request *elv_rb_latter_request(request_queue_t *, struct request *);
+extern struct request *elv_rb_latter_request(struct request_queue *, struct request *);
 /*
  * rb support functions.

View File

@@ -555,7 +555,7 @@ typedef struct ide_drive_s {
	char		name[4];	/* drive name, such as "hda" */
	char		driver_req[10];	/* requests specific driver */
-	request_queue_t		*queue;	/* request queue */
+	struct request_queue	*queue;	/* request queue */
	struct request		*rq;	/* current request */
	struct ide_drive_s	*next;	/* circular list of hwgroup drives */
@@ -1206,7 +1206,7 @@ extern void ide_stall_queue(ide_drive_t *drive, unsigned long timeout);
 extern int ide_spin_wait_hwgroup(ide_drive_t *);
 extern void ide_timer_expiry(unsigned long);
 extern irqreturn_t ide_intr(int irq, void *dev_id);
-extern void do_ide_request(request_queue_t *);
+extern void do_ide_request(struct request_queue *);
 void ide_init_disk(struct gendisk *, ide_drive_t *);
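do_ide_request() is the request_fn IDE registers through blk_init_queue(). When the hardware is busy, a handler of this shape typically stops the queue and lets the IRQ path restart it; a hedged skeleton (example names hypothetical):

static void example_request_fn(struct request_queue *q)
{
	struct example_dev *dev = q->queuedata;
	struct request *rq;

	/* called with the queue lock held */
	while ((rq = elv_next_request(q)) != NULL) {
		if (example_dev_busy(dev)) {
			blk_stop_queue(q);	/* IRQ handler calls blk_start_queue() */
			return;
		}
		blkdev_dequeue_request(rq);
		/* hand rq to the hardware; complete it from the IRQ path */
	}
}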

View File

@@ -63,7 +63,7 @@ struct loop_device {
	struct task_struct	*lo_thread;
	wait_queue_head_t	lo_event;
-	request_queue_t		*lo_queue;
+	struct request_queue	*lo_queue;
	struct gendisk		*lo_disk;
	struct list_head	lo_list;
 };
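loop is a make_request-style driver, so lo_queue is allocated rather than created with blk_init_queue(). Roughly, condensed from the loop driver of this era (error handling elided):

	lo->lo_queue = blk_alloc_queue(GFP_KERNEL);
	if (!lo->lo_queue)
		goto out_free_dev;
	blk_queue_make_request(lo->lo_queue, loop_make_request);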

View File

@@ -227,7 +227,7 @@ struct mddev_s
	unsigned int			safemode_delay;
	struct timer_list		safemode_timer;
	atomic_t			writes_pending;
-	request_queue_t			*queue;	/* for plugging ... */
+	struct request_queue		*queue;	/* for plugging ... */
	atomic_t			write_behind; /* outstanding async IO */
	unsigned int			max_write_behind; /* 0 = sync */
@@ -265,7 +265,7 @@ struct mdk_personality
	int level;
	struct list_head list;
	struct module *owner;
-	int (*make_request)(request_queue_t *q, struct bio *bio);
+	int (*make_request)(struct request_queue *q, struct bio *bio);
	int (*run)(mddev_t *mddev);
	int (*stop)(mddev_t *mddev);
	void (*status)(struct seq_file *seq, mddev_t *mddev);
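Each md personality supplies a make_request hook with this signature. A hedged skeleton of what such a hook does (pick_rdev is hypothetical; q->queuedata holding the mddev and generic_make_request are real):

static int example_make_request(struct request_queue *q, struct bio *bio)
{
	mddev_t *mddev = q->queuedata;

	/* remap the bio onto a member disk, then resubmit it */
	bio->bi_bdev = pick_rdev(mddev, bio)->bdev;
	generic_make_request(bio);
	return 0;	/* bio consumed */
}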

View File

@@ -57,7 +57,7 @@ static int sd_resume(struct device *dev);
 static void sd_rescan(struct device *);
 static int sd_init_command(struct scsi_cmnd *);
 static int sd_issue_flush(struct device *, sector_t *);
-static void sd_prepare_flush(request_queue_t *, struct request *);
+static void sd_prepare_flush(struct request_queue *, struct request *);
 static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer);
 static void scsi_disk_release(struct class_device *cdev);
 static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *);

View File

@@ -190,7 +190,7 @@ static int bounce_end_io_read_isa(struct bio *bio, unsigned int bytes_done, int
	return 0;
 }
-static void __blk_queue_bounce(request_queue_t *q, struct bio **bio_orig,
+static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
			       mempool_t *pool)
 {
	struct page *page;
@@ -275,7 +275,7 @@ static void __blk_queue_bounce(request_queue_t *q, struct bio **bio_orig,
	*bio_orig = bio;
 }
-void blk_queue_bounce(request_queue_t *q, struct bio **bio_orig)
+void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
 {
	mempool_t *pool;