Commit Graph

17 Commits

Author SHA1 Message Date
Christoph Hellwig 83521d3eb8 [PATCH] cfq-iosched: move tasklist walk to elevator.c
We're trying to get rid of as many tasklist walks as possible, or at
least move them to core code.  This patch falls into the second
category.

Instead of walking the tasklist in cfq-iosched, move that walk into
elv_unregister.  The added benefit is that with this change the AS
io scheduler might be made unloadable more easily as well.

The new code uses read_lock instead of read_lock_irq because the
tasklist_lock only needs irq disabling for writers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-30 17:37:17 -08:00
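
A minimal sketch of the walk that ends up in elv_unregister(), as described in the
entry above; the io_context fields ('cic' for cfq, 'aic' for as) are assumptions
based on the 2.6.14-era code, not lines quoted from the patch:

    #include <linux/sched.h>
    #include <linux/blkdev.h>

    /* sketch only: the real walk lives in elevator.c:elv_unregister() */
    static void elv_drop_task_io_contexts(void)
    {
        struct task_struct *g, *p;

        /* read_lock is enough here: tasklist_lock only needs irqs
         * disabled on the writer side */
        read_lock(&tasklist_lock);
        do_each_thread(g, p) {
            struct io_context *ioc = p->io_context;

            if (ioc && ioc->cic) {          /* cfq's per-task context */
                ioc->cic->exit(ioc->cic);
                ioc->cic->dtor(ioc->cic);
                ioc->cic = NULL;
            }
            if (ioc && ioc->aic) {          /* as's per-task context */
                ioc->aic->exit(ioc->aic);
                ioc->aic->dtor(ioc->aic);
                ioc->aic = NULL;
            }
        } while_each_thread(g, p);
        read_unlock(&tasklist_lock);
    }
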
Linus Torvalds 28d721e24c Merge branch 'generic-dispatch' of git://brick.kernel.dk/data/git/linux-2.6-block 2005-10-28 08:53:49 -07:00
Al Viro 8267e268e0 [PATCH] gfp_t: block layer core
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-28 08:16:47 -07:00
Tejun Heo 98b11471d7 [PATCH] 04/05 remove last_merge handling from ioscheds
Remove last_merge handling from all ioscheds.  This patch
removes the merging capability of the noop iosched.

Signed-off-by: Tejun Heo <htejun@gmail.com>
2005-10-28 08:45:35 +02:00
Jens Axboe b4878f245e [PATCH] 02/05: update ioscheds to use generic dispatch queue
This patch updates all four ioscheds to use the generic dispatch
queue.  There's one behavior change in as-iosched.

* In as-iosched, when force dispatching
  (ELEVATOR_INSERT_BACK), batch_data_dir is reset to REQ_SYNC
  and changed_batch and new_batch are cleared to zero.  This
  prevents AS from doing an incorrect update_write_batch after
  the force-dispatched requests are finished.

* In cfq-iosched, cfqd->rq_in_driver currently counts the
  number of activated (removed) requests to determine
  whether queue-kicking is needed and whether cfq_max_depth has been
  reached.  With the generic dispatch queue, I think counting
  the number of dispatched requests would be more appropriate.

* cfq_max_depth can be lowered to 1 again.

Original from Tejun Heo, modified version applied.

Signed-off-by: Jens Axboe <axboe@suse.de>
2005-10-28 08:45:08 +02:00
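
A rough sketch of the dispatch-time accounting suggested in the entry above;
cfq_dispatch_insert(), cfq_remove_request() and rq_in_driver approximate the
patched cfq and are assumptions, not excerpts:

    /* move a request from cfq's internal sort list onto the generic
     * dispatch queue, counting it as dispatched at that point */
    static void cfq_dispatch_insert(request_queue_t *q, struct cfq_rq *crq)
    {
        struct cfq_data *cfqd = q->elevator->elevator_data;

        cfq_remove_request(q, crq->request);    /* off cfq's own rbtree/fifo */
        cfqd->rq_in_driver++;                   /* count dispatched requests */
        elv_dispatch_sort(q, crq->request);     /* hand off to the dispatch queue */
    }
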
Jens Axboe 35797132b3 [PATCH] cfq-iosched: reverse bad reference count fix
The reference count fix merged isn't fully bug free. It doesn't leak
now, but instead it crashes due to looking at freed memory. So for now,
let's reverse the change and I'll fix it for real next week.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-10 10:15:12 -07:00
Brian King 38f1852759 [PATCH] block: CFQ refcounting fix
I ran across a memory leak related to the cfq scheduler. The cfq
init function increments the refcnt of the associated request_queue.

This refcount gets decremented in cfq's exit function. Since blk_cleanup_queue
only calls the elevator exit function when its refcnt goes to zero, the
request_q never gets cleaned up. It didn't look like other io schedulers were
incrementing this refcnt, so I removed the refcnt increment and it fixed the
memory leak for me.

To reproduce the problem, simply use cfq and use the scsi_host scan sysfs
attribute to scan "- - -" repeatedly on a scsi host and watch the memory
vanish.

Signed-off-by: Brian King <brking@us.ibm.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:57:39 -07:00
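
A sketch of the reference cycle described above; the names follow the 2.6.13
block layer, but the bodies are simplified assumptions rather than the actual code:

    /* cfq's init used to take an extra reference on the queue it serves */
    static int cfq_init_queue(request_queue_t *q, elevator_t *e)
    {
        atomic_inc(&q->refcnt);         /* dropped only in cfq_exit_queue() */
        /* ... allocate and attach the cfq_data ... */
        return 0;
    }

    /* but the queue only tears its elevator down once the refcount hits
     * zero, so the extra reference above keeps both alive forever */
    void blk_cleanup_queue(request_queue_t *q)
    {
        if (!atomic_dec_and_test(&q->refcnt))
            return;                     /* cfq still holds a ref: never reached */

        elevator_exit(q->elevator);     /* cfq_exit_queue() never runs */
        /* ... free the queue ... */
    }
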
Jens Axboe 9c2c38a122 [PATCH] cfq-iosched.c: minor fixes
One critical fix and two minor fixes for 2.6.13-rc7:

- Max depth must currently be 2 to allow barriers to function on SCSI
- Prefer sync request over async in choosing the next request
- Never allow an async request to preempt or disturb the "anticipation" for
  a single cfq process context. This is as designed; the code right now
  is buggy in that area.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-24 10:22:44 -07:00
Jens Axboe d7ed538a02 [PATCH] cfq-iosched: fix problem with barriers and max_depth == 1
CFQ will currently stall when using write barriers and the default
max_depth setting of 1, since we artificially need a depth of 2 when
pre-pending the first flush. So never deny the barrier request going to
the device.

This is a regression since 2.6.12, it was found in SUSE testing.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-02 11:19:18 -07:00
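
A sketch of the behaviour change described above, letting a barrier bypass the
depth limit; cfq_may_dispatch() is a hypothetical helper and the flag/field
names are era-appropriate assumptions:

    static int cfq_may_dispatch(struct cfq_data *cfqd, struct request *rq)
    {
        /* never hold back a barrier: the pre-pended flush needs an
         * effective depth of 2 even when cfq_max_depth is set to 1 */
        if (rq->flags & REQ_HARDBARRIER)
            return 1;

        return cfqd->rq_in_driver < cfqd->cfq_max_depth;
    }
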
Andrew Morton 99f95e5286 [PATCH] cfq build fix
drivers/block/cfq-iosched.c: In function 'cfq_put_queue':
drivers/block/cfq-iosched.c:303: sorry, unimplemented: inlining failed in call to 'cfq_pending_requests': function body not available
drivers/block/cfq-iosched.c:1080: sorry, unimplemented: called from here
drivers/block/cfq-iosched.c: In function '__cfq_may_queue':
drivers/block/cfq-iosched.c:1955: warning: the address of 'cfq_cfqq_must_alloc_slice', will always evaluate as 'true'
make[1]: *** [drivers/block/cfq-iosched.o] Error 1
make: *** [drivers/block/cfq-iosched.o] Error 2

Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 20:31:02 -07:00
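
For context, that gcc error fires when an always-inline function is called before
its body is visible (the kernel's 'inline' of that era expanded to
__attribute__((always_inline))); a minimal standalone reproduction, not the actual
cfq code:

    static inline __attribute__((always_inline))
    int cfq_pending_requests(void);             /* declaration only, no body yet */

    static int cfq_put_queue(void)
    {
        /* older gcc: "sorry, unimplemented: inlining failed in call to
         * 'cfq_pending_requests': function body not available" */
        return cfq_pending_requests();
    }

    /* defining the function before its first caller (or dropping the
     * 'inline') makes the error go away */
    static inline __attribute__((always_inline))
    int cfq_pending_requests(void)
    {
        return 0;                               /* placeholder body for the sketch */
    }
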
Jens Axboe 96c51ce94e [PATCH] CFQ io scheduler: scheduler switch oops
If cfq is managing a queue and a new scheduler is later selected, it is
possible for the cfqd unplug_work work to be queued after the kblockd
work struct has been flushed.  The problem is the ordering of
cfq_shutdown_timer_wq() and blk_put_queue() in cfq_put_cfqd().  The
latter may rearm the work, leaving cfq_kick_queue() with dead data.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 14:33:30 -07:00
Jens Axboe 3b18152c32 [PATCH] CFQ io scheduler updates
- Adjust slice values

- Instead of one async queue, one is defined per priority level. This
  prevents kernel threads (such as reiserfs/x and others) that run at
  higher io priority from conflicting with others. Previously, it was a
  coin toss which io prio the async queue got; it was defined by whoever
  first set up the queue.

- Let a time slice only begin when the previous slice is completely
  done. Previously we could be somewhat unfair to a new sync slice if
  the previous slice was async and had several ios queued. This might
  need a little tweaking if throughput suffers because of it, perhaps
  allowing an overlap of a single request or so.

- Optimize the calling of kblockd_schedule_work() by doing it only when
  it is strictly necessary (no requests in driver and work left to do).

- Correct sync vs async logic. A 'normal' process can be purely async as
  well, and a flusher can be purely sync as well. Sync or async is now a
  property of the defined class and the requests pending. Previously,
  writers could be considered sync when they were really async.

- Get rid of the bit fields in cfqq and crq, use flags instead.

- Various other cleanups and fixes

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 14:33:30 -07:00
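
On the last point, a sketch of the flag scheme that replaces the bitfields; the
helpers approximate what cfq-iosched.c ended up with, and the flag list is
abbreviated:

    enum cfqq_state_flags {
        CFQ_CFQQ_FLAG_on_rr = 0,        /* queue is on the service list */
        CFQ_CFQQ_FLAG_wait_request,     /* idling, waiting for a new request */
        CFQ_CFQQ_FLAG_must_dispatch,    /* must be allowed a dispatch */
        /* ... */
    };

    #define CFQ_CFQQ_FNS(name)                                          \
    static inline void cfq_mark_cfqq_##name(struct cfq_queue *cfqq)     \
    {                                                                   \
        cfqq->flags |= (1 << CFQ_CFQQ_FLAG_##name);                     \
    }                                                                   \
    static inline void cfq_clear_cfqq_##name(struct cfq_queue *cfqq)    \
    {                                                                   \
        cfqq->flags &= ~(1 << CFQ_CFQQ_FLAG_##name);                    \
    }                                                                   \
    static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq)     \
    {                                                                   \
        return (cfqq->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0;        \
    }

    CFQ_CFQQ_FNS(on_rr);
    CFQ_CFQQ_FNS(wait_request);
    CFQ_CFQQ_FNS(must_dispatch);
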
Jens Axboe 3d25f3566b [PATCH] Fix cfq_find_next_crq()
In cfq_find_next_crq(), cfq tries to find the next request by choosing
one of two requests before and after the current one.  Currently, when
choosing the next request, if there's no next request, the next
candidate is NULL, resulting in selection of the previous request.  This
results in weird scheduling.  Once we reach the end, we always seek
backward.

The correct behavior is using the first request as the next candidate.
cfq_choose_req() already has the logic for handling wrapped requests.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 14:33:29 -07:00
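
A sketch of the corrected lookup; field names like sort_list follow the cfq code
of the time, but the body is a paraphrase, not the literal patch:

    static struct cfq_rq *
    cfq_find_next_crq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
                      struct cfq_rq *last)
    {
        struct rb_node *rbprev = rb_prev(&last->rb_node);
        struct rb_node *rbnext = rb_next(&last->rb_node);
        struct cfq_rq *next = NULL, *prev = NULL;

        if (rbprev)
            prev = rb_entry(rbprev, struct cfq_rq, rb_node);

        if (!rbnext) {
            /* the old code left 'next' NULL here, so the previous request
             * always won and we kept seeking backward; wrap to the first
             * request in the sorted tree instead */
            rbnext = rb_first(&cfqq->sort_list);
            if (rbnext == &last->rb_node)
                rbnext = NULL;
        }
        if (rbnext)
            next = rb_entry(rbnext, struct cfq_rq, rb_node);

        /* cfq_choose_req() already knows how to weigh a wrapped candidate */
        return cfq_choose_req(cfqd, next, prev);
    }
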
Jens Axboe 22e2c507c3 [PATCH] Update cfq io scheduler to time sliced design
This updates the CFQ io scheduler to the new time sliced design (cfq
v3).  It provides full process fairness, while giving excellent
aggregate system throughput even for many competing processes.  It
supports io priorities, either inherited from the cpu nice value or set
directly with the ioprio_get/set syscalls.  The latter closely mimic
set/getpriority.

This import is based on my latest from -mm.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 14:33:29 -07:00
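
An illustrative userspace snippet (not part of the patch) showing the syscall in
its setpriority-like form; the constants mirror include/linux/ioprio.h, and since
glibc had no wrapper at the time it goes through syscall(2) (SYS_ioprio_set may
need to be defined by hand on older headers):

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define IOPRIO_CLASS_SHIFT              13
    #define IOPRIO_PRIO_VALUE(class, data)  (((class) << IOPRIO_CLASS_SHIFT) | (data))
    #define IOPRIO_WHO_PROCESS              1
    #define IOPRIO_CLASS_BE                 2   /* best-effort class, levels 0-7 */

    int main(void)
    {
        /* put the current process in the best-effort class at priority 4 */
        int ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4);

        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) < 0) {
            perror("ioprio_set");
            return 1;
        }

        printf("io priority set to best-effort, level 4\n");
        return 0;
    }
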
Dmitry Torokhov 6c1852a08e [PATCH] sysfs: (driver/block) if show/store is missing return -EIO
sysfs: fix drivers/block so that if an attribute doesn't implement
       a show or store method, read/write will return -EIO
       instead of 0 or -EINVAL.

Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2005-06-20 15:15:03 -07:00
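
The pattern this patch applies, sketched for the request queue attributes in
ll_rw_blk.c; the struct and helper names are approximate:

    static ssize_t
    queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
    {
        struct queue_sysfs_entry *entry = to_queue(attr);
        struct request_queue *q = container_of(kobj, struct request_queue, kobj);

        if (!entry->show)
            return -EIO;    /* write-only attribute: was returning 0 */

        return entry->show(q, page);
    }

    static ssize_t
    queue_attr_store(struct kobject *kobj, struct attribute *attr,
                     const char *page, size_t length)
    {
        struct queue_sysfs_entry *entry = to_queue(attr);
        struct request_queue *q = container_of(kobj, struct request_queue, kobj);

        if (!entry->store)
            return -EIO;    /* read-only attribute: was returning -EINVAL */

        return entry->store(q, page, length);
    }
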
Kiyoshi Ueda db3b5848ea When the cfq I/O scheduler is selected, get_request() in __make_request() calls
__cfq_get_queue().  __cfq_get_queue() finds the existing queue (struct
cfq_queue) of the current process for the device and returns it.  If none is
found, __cfq_get_queue() creates and returns a new one when it is called with
the __GFP_WAIT flag; otherwise it returns NULL, which means that
get_request() fails.

On the other hand, in __make_request(), get_request() is first called without
the __GFP_WAIT flag.  Thus, get_request() fails when there is no existing
queue, typically when it's called for the first I/O request from the
process to the device.

Though it will be followed by get_request_wait() in the general case,
__make_request() will just end the I/O with an error (EWOULDBLOCK) when the
request was for read-ahead.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
2005-06-17 16:15:10 +02:00
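
A simplified sketch of the lookup path described above; the hashing is omitted
and helper names such as cfq_alloc_queue() are hypothetical:

    static struct cfq_queue *
    __cfq_get_queue(struct cfq_data *cfqd, unsigned int key, int gfp_mask)
    {
        /* look up the per-process, per-device queue */
        struct cfq_queue *cfqq = cfq_find_cfq_hash(cfqd, key);

        if (cfqq)
            return cfqq;

        /* first I/O from this process: no queue exists yet */
        if (!(gfp_mask & __GFP_WAIT))
            return NULL;    /* get_request() fails; a read-ahead request
                             * then ends with EWOULDBLOCK */

        return cfq_alloc_queue(cfqd, key, gfp_mask);    /* may sleep for memory */
    }
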
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00