The initial patch cleared this field for all requests, which is wrong,
since internal users rely on it to pass data and must not have it
cleared. The fix moved the initialization to the
mq_ops->initialize_rq_fn(), but that's only a partial fix, since it
only catches uses from blk_get_request(), not requests coming from the
file system.
Keep the non-fs initialization, and additionally clear the field on
IDE queue entry, IFF RQF_DONTPREP isn't set and it's a passthrough
request.
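A minimal sketch of that entry-side clear; the exact placement inside
ide_queue_rq() is an assumption, not the literal diff:

	/* fs requests are initialized via ->initialize_rq_fn(); only
	 * unprepped passthrough requests need the clear on entry */
	if (blk_rq_is_passthrough(rq) && !(rq->rq_flags & RQF_DONTPREP))
		ide_req(rq)->special = NULL;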
Fixes: d16a67667c ("ide: don't clear special on ide_queue_rq() entry")
Fixes: 22ce0a7ccf ("ide: don't use req->special")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We can't use RQF_DONTPREP to see if we should clear ->special,
as someone could have set that while inserting the request. Make
sure we clear it in our ->initialize_rq_fn() helper instead.
Fixes: 22ce0a7ccf ("ide: don't use req->special")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Just replace it with a field of the same name in struct ide_req.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The return value is just used as a binary yes/no decision, so switch
it to a bool instead of the old BLKPREP_* values returned as an int.
Also clean up a few related comments.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
ide-disk and ide-cd have been tested as working just fine; ide-tape and
ide-floppy haven't been. But the latter don't require changes, so they
should work without issue.
Add a helper function to insert a request from a work queue, since we
cannot invoke the blk-mq request insertion from IRQ context.
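A minimal sketch of such a helper, assuming a per-drive list protected
by hwif->lock and a work item run from kblockd; the names
ide_insert_request_head(), drive->rq_list, drive->rq_work and the use
of blk_execute_rq_nowait() as the insertion path are assumptions here:

	void ide_insert_request_head(ide_drive_t *drive, struct request *rq)
	{
		unsigned long flags;

		/* callable from IRQ context: only queue the request here
		 * and defer the blk-mq insertion to process context */
		spin_lock_irqsave(&drive->hwif->lock, flags);
		list_add_tail(&rq->queuelist, &drive->rq_list);
		spin_unlock_irqrestore(&drive->hwif->lock, flags);

		kblockd_schedule_work(&drive->rq_work);
	}

	static void drive_rq_insert_work(struct work_struct *work)
	{
		ide_drive_t *drive = container_of(work, ide_drive_t, rq_work);
		struct request *rq;
		LIST_HEAD(list);

		spin_lock_irq(&drive->hwif->lock);
		list_splice_init(&drive->rq_list, &list);
		spin_unlock_irq(&drive->hwif->lock);

		while (!list_empty(&list)) {
			rq = list_first_entry(&list, struct request, queuelist);
			list_del_init(&rq->queuelist);
			/* process context: safe to hand off to blk-mq */
			blk_execute_rq_nowait(drive->queue, rq->rq_disk, rq,
					      true, NULL);
		}
	}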
Cc: David Miller <davem@davemloft.net>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The variable queue_run_ms is set but never used and is redundant, hence
it can be removed. Removing it also means that the variable left can be
removed.
Cleans up clang warning:
warning: variable ‘queue_run_ms’ set but not used [-Wunused-but-set-variable]
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ide_timer_expiry() disables interrupts at function entry when acquiring
hwif->lock. Before disabling the device interrupt it unlocks hwif->lock,
but interrupts stay disabled. After the call to disable_irq() interrupts
are disabled again, which is a pointless exercise.
After the device irq handler has been invoked with interrupts disabled,
hwif->lock is acquired again with spin_lock_irq() because the device irq
handler might have reenabled interrupts. This is not documented and is
confusing for the casual reader.
Remove the redundant local_irq_disable() and add a comment which explains
why hwif->lock has to be reacquired with spin_lock_irq().
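Sketched flow after the change (illustrative fragment only, with the
bulk of the timeout handling elided):

	spin_lock_irq(&hwif->lock);	/* interrupts disabled from here */
	handler = hwif->handler;
	drive = hwif->cur_dev;

	spin_unlock(&hwif->lock);	/* unlock; interrupts stay off */
	disable_irq(hwif->irq);		/* no local_irq_disable() needed */

	startstop = handler(drive);	/* may re-enable interrupts */

	/*
	 * The handler might have re-enabled interrupts, so hwif->lock has
	 * to be reacquired with spin_lock_irq(), not a plain spin_lock().
	 */
	spin_lock_irq(&hwif->lock);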
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
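For the IDE port timer the conversion looks roughly like this (sketch;
hwif->timer and ide_timer_expiry() are the existing names):

	/* setup site: no unsigned long cookie anymore */
	timer_setup(&hwif->timer, ide_timer_expiry, 0);

	/* callback: recover the containing ide_hwif_t from the timer */
	void ide_timer_expiry(struct timer_list *t)
	{
		ide_hwif_t *hwif = from_timer(hwif, t, timer);

		/* timeout handling continues to use hwif as before */
	}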
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-ide@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Currently we use normal Linux errno values in the block layer, and while
we accept any error a few have overloaded magic meanings. This patch
instead introduces a new blk_status_t value that holds block layer
specific status codes and explicitly explains their meaning. Helpers to
convert from and to the previous special meanings are provided for now,
but I suspect we want to get rid of them in the long run - those drivers
that have an errno input (e.g. networking) usually get errnos that don't
know about the special block layer overloads, and similarly returning
them to userspace will usually return something that strictly speaking
isn't correct for file system operations, but that's left as an exercise
for later.
For now the set of errors is a very limited set that closely corresponds
to the previous overloaded errno values, but there is some low hanging
fruit to improve it.
blk_status_t (ab)uses the sparse __bitwise annotations to allow for sparse
typechecking, so that we can easily catch places passing the wrong values.
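Sketch of the new type and a few of its values (abbreviated; the exact
numeric assignments live in the patch itself):

	typedef u8 __bitwise blk_status_t;

	#define BLK_STS_OK		0
	#define BLK_STS_NOTSUPP		((__force blk_status_t)1)
	#define BLK_STS_TIMEOUT		((__force blk_status_t)2)
	#define BLK_STS_IOERR		((__force blk_status_t)10)

	/*
	 * sparse now warns when a plain errno such as -EIO is returned
	 * from a function declared to return blk_status_t, or vice versa.
	 */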
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
The parisc architecture recently reimplemented the memcpy function and
their reimplementation crashed when source and destination overlapped.
The crash happened in the function ide_complete_cmd where memcpy is called
with the same source and destination pointer. According to the C
specification, memcpy behavior is undefined if the source and
destination ranges overlap. This patch fixes the undefined behavior.
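One way to avoid the undefined behaviour, sketched here (the actual fix
may differ in detail), is to skip the copy when both pointers refer to
the same command:

	/* memcpy() with overlapping (here: identical) source and
	 * destination is undefined behaviour, so guard against it */
	if (cmd != orig_cmd)
		memcpy(orig_cmd, cmd, sizeof(*cmd));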
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This passes on the scsi_cmnd result field to users of passthrough
requests. Currently we abuse req->errors for this purpose, but that
field will go away in its current form.
Note that the old IDE code abuses the errors field in very creative
ways and stores all kinds of different values in it. I didn't dare
to touch this magic, so the abuses are brought forward 1:1.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Instead of keeping two levels of indirection for request types, fold
them all into the operations. The little caveat here is that previously
cmd_type only applied to struct request, while the request and bio op
fields were set to plain REQ_OP_READ/WRITE even for passthrough
operations.
Instead this patch adds new REQ_OP_* for SCSI passthrough and driver
private requests, although it has to add two for each so that we
can communicate the data in/out nature of the request.
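Roughly, the added opcodes look like this (a sketch, with the numeric
values omitted):

	enum req_opf {
		/* regular REQ_OP_READ / REQ_OP_WRITE etc. precede these */

		/* SCSI passthrough */
		REQ_OP_SCSI_IN,
		REQ_OP_SCSI_OUT,
		/* driver private requests */
		REQ_OP_DRV_IN,
		REQ_OP_DRV_OUT,
	};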
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Currently the legacy ide driver defines several request types of its own,
which is in the way of removing that field entirely.
Instead add a type field to struct ide_request and use that to distinguish
the different types of IDE-internal requests.
It's a bit of a mess, but so is the surrounding code...
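A sketch of the per-request type field; the ATA_PRIV_* names match what
the rest of the series uses, but treat the exact struct layout as an
assumption:

	enum {
		ATA_PRIV_MISC,
		ATA_PRIV_TASKFILE,
		ATA_PRIV_PC,
		ATA_PRIV_SENSE,
		ATA_PRIV_PM_SUSPEND,
		ATA_PRIV_PM_RESUME,
	};

	struct ide_request {
		struct scsi_request sreq;
		u8 sense[SCSI_SENSE_BUFFERSIZE];
		u8 type;	/* one of the ATA_PRIV_* values */
	};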
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@fb.com>
And require all drivers that want to support BLOCK_PC to allocate it
as the first thing of their private data. To support this the legacy
IDE and BSG code is switched to set cmd_size on their queues to let
the block layer allocate the additional space.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
This was entirely automated, using the script by Al:
PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window.
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A lot of the REQ_* flags are only used on struct requests, and only of
use to the block layer and a few drivers that dig into struct request
internals.
This patch adds a new req_flags_t rq_flags field to struct request for
them, and thus dramatically shrinks the number of common request flags. It
also removes the unfortunate situation where we have to fit the fields
from the same enum into 32 bits for struct bio and 64 bits for
struct request.
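A sketch of the new type and field (only a couple of flags shown; the
bit assignments here are illustrative):

	typedef __u32 __bitwise req_flags_t;

	/* request-only flags, no longer shared with struct bio */
	#define RQF_SORTED	((__force req_flags_t)(1 << 0))
	#define RQF_STARTED	((__force req_flags_t)(1 << 1))

	/* and struct request gains: */
	req_flags_t rq_flags;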
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
This moves the request types and hacks out of the block code and into
the old IDE driver. There is a small amount of code duplication due to
this, but it's not too bad.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Just because we are not requeuing a request does not mean that
some aren't pending. So always issue a blk_delay_queue() if
either we are requeueing OR there's pending IO.
This fixes a boot problem for some IDE boxes.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
We see stalls if we don't always ensure that the queue gets run
again. Even if rq == NULL, we could have other pending requests
in the queue.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
The conversion to blk_delay_queue() missed parts of IDE.
Add a blk_delay_queue() to ensure that the request handler
gets reinvoked when it needs to.
Note that in all but one place the old plug re-run delay of
3 msecs is used, even though it probably could be shorter
for performance reasons in some of those cases.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Code has been converted over to the new explicit on-stack plugging,
and delay users have been converted to use the new API for that.
So let's kill off the old plugging along with aops->sync_page().
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Unplugging from a request function doesn't really help much (it's
already in the request_fn) and soon block layer will be updated to mix
barrier sequence with other commands, so there's no need to treat
queue flushing any differently.
ide was the only user of blk_queue_flushing(). Remove it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
struct request. This allows much easier grepping for different request
types instead of unwinding through macros.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
I noticed that my KVM virtual machines were experiencing IDE
issues resulting in processes stuck on waiting for buffers to
complete.
The root cause is of course race conditions in the ancient qemu
backend that I'm using. However, the fact that the guest isn't
recovering is a bug.
I've tracked it down to the change made last year to dequeue
requests at the start rather than at the end in the IDE layer.
commit 8f6205cd57
Author: Tejun Heo <tj@kernel.org>
Date: Fri May 8 11:53:59 2009 +0900
ide: dequeue in-flight request
The problem is that the function ide_dma_timeout_retry does not
requeue the current request, causing one request to be lost for
each DMA timeout.
This patch fixes this by requeueing the request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Use blk_rq_bytes() instead of obsolete ide_rq_bytes() in ide_kill_rq()
and ide_floppy_do_request() for failed requests.
[ bugfix part ]
* Use blk_rq_bytes() instead of obsolete ide_rq_bytes() in ide_do_devset()
and ide_complete_drive_reset(). Then remove ide_rq_bytes().
[ cleanup part ]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Such requests should be failed with -EIO (like all other requests
in this function) instead of being completed successfully.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make hwif->rq point to the PM request during the PM sequence and do not
allow any other types of requests to slip in (the old comment was never
correct, as there should be no such requests generated during the PM
sequence).
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the ack_intr() method into 'struct ide_port_ops', renaming it to
test_irq() while at it...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
There are now two methods that clear the port interrupt: the ack_intr()
method, implemented only on M680x0 machines, which is called at the
start of ide_intr(), and the clear_irq() method, which is called
somewhat later in this function. In order to stop this duplication,
delegate the task of clearing the interrupt to the clear_irq() method,
leaving ack_intr() only the task of testing for the port interrupt.
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Now the clear_irq() method is called only from ide_intr(), but
ide_timer_expiry() should also call this method in the case when
drive_is_ready() succeeds...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* 'for-2.6.31' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6: (29 commits)
ide: re-implement ide_pci_init_one() on top of ide_pci_init_two()
ide: unexport ide_find_dma_mode()
ide: fix PowerMac bootup oops
ide: skip probe if there are no devices on the port (v2)
sl82c105: add printk() logging facility
ide-tape: fix proc warning
ide: add IDE_DFLAG_NIEN_QUIRK device flag
ide: respect quirk_drives[] list on all controllers
hpt366: enable all quirks for devices on quirk_drives[] list
hpt366: sync quirk_drives[] list with pdc202xx_{new,old}.c
ide: remove superfluous SELECT_MASK() call from do_rw_taskfile()
ide: remove superfluous SELECT_MASK() call from ide_driveid_update()
icside: remove superfluous ->maskproc method
ide-tape: fix IDE_AFLAG_* atomic accesses
ide-tape: change IDE_AFLAG_IGNORE_DSC non-atomically
pdc202xx_old: kill resetproc() method
pdc202xx_old: don't call pdc202xx_reset() on IRQ timeout
pdc202xx_old: use ide_dma_test_irq()
ide: preserve Host Protected Area by default (v2)
ide-gd: implement block device ->set_capacity method (v2)
...
Add IDE_DFLAG_NIEN_QUIRK device flag and use it instead of
drive->quirk_list.
There should be no functional changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
On Tuesday 19 May 2009 20:29:28 Martin Lottermoser wrote:
> hdc: cdrom_decode_status: error=0x40 <3>{ LastFailedSense=0x04 }
> ide: failed opcode was: unknown
> hdc: DMA disabled
> ------------[ cut here ]------------
> kernel BUG at drivers/ide/ide-io.c:872!
It is possible for ide-cd to ignore ide_error()'s return value under
some circumstances. Work around it in ide_intr() and ide_timer_expiry()
by checking if there is a device/port reset pending currently.
Fixes bug #13345:
http://bugzilla.kernel.org/show_bug.cgi?id=13345
Reported-by: Martin Lottermoser <Martin.Lottermoser@t-online.de>
Reported-and-tested-by: Modestas Vainius <modestas@vainius.eu>
Cc: Borislav Petkov <petkovbb@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Replace:
- the special_t typedef with IDE_SFLAG_* flags
- ide_drive_t's 'special_t special' field with a 'u8 special_flags' one
There should be no functional changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
While at it:
- change debug printk() level to KERN_DEBUG and use __func__
- update documentation
v2:
- fix DEBUG build (noticed by Sergei)
There should be no functional changes caused by this patch.
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Till now the block layer has allowed two separate modes of request
execution. A request is always acquired from the request queue via
elv_next_request(). After that, drivers are free to either dequeue it
or process it without dequeueing. Dequeueing allows elv_next_request()
to return the next request so that multiple requests can be in flight.
Executing requests without dequeueing has its merits, mostly in
allowing drivers for simpler devices which can't do scatter-gather to
deal only with segments, without considering request boundaries.
However, the benefit this brings is dubious and declining while the
cost of the API ambiguity is increasing. Segment based drivers are
usually for very old or limited devices, and as converting to the
dequeueing model isn't difficult, it doesn't justify the API overhead
it puts on the block layer and its more modern users.
Previous patches converted all block low level drivers to the dequeueing
model. This patch completes the API transition by...
* renaming elv_next_request() to blk_peek_request()
* renaming blkdev_dequeue_request() to blk_start_request()
* adding blk_fetch_request() which is a combination of peek and start
* disallowing completion of queued (not started) requests
* applying new API to all LLDs
Renamings are for consistency and to break out of tree code so that
it's apparent that out of tree drivers need updating.
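As a sketch, the new fetch helper is simply the peek + start combination
named above:

	struct request *blk_fetch_request(struct request_queue *q)
	{
		struct request *rq;

		rq = blk_peek_request(q);
		if (rq)
			blk_start_request(rq);
		return rq;
	}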
[ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
ide generally has a single request in flight and tracks it using
hwif->rq, and all state handlers follow the following convention.
* ide_started is returned if the request is in flight.
* ide_stopped is returned if the queue needs to be restarted. The
request might or might not have been processed fully or partially.
* hwif->rq is set to NULL when an issued request completes.
So, the dequeueing model can be implemented by dequeueing after fetch,
requeueing if hwif->rq isn't NULL when ide_stopped is returned, and
doing about the same thing on the completion / port unlock paths. These
changes can be made in ide-io proper.
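Roughly, in do_ide_request() terms (an illustrative fragment, not the
literal diff; start_request() and drive->queue stand in for the
existing ide-io internals):

	rq = elv_next_request(drive->queue);
	if (rq) {
		blkdev_dequeue_request(rq);	/* dequeue right after fetch */
		hwif->rq = rq;
	}

	startstop = start_request(drive, rq);

	if (startstop == ide_stopped && hwif->rq) {
		/* the port was released without finishing the request,
		 * so put it back on the queue to be fetched again */
		blk_requeue_request(drive->queue, hwif->rq);
		hwif->rq = NULL;
	}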
In addition to the above main changes, the following updates are
necessary.
* ide-cd shouldn't dequeue a request when issuing REQUEST SENSE for it
as the request is already dequeued.
* ide-atapi uses the request queue as a stack when issuing REQUEST SENSE
  to put the REQUEST SENSE in front of the failed request. This now
  needs to be done using requeueing.
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
With recent unification of fields, it's now guaranteed that
rq->data_len always equals blk_rq_bytes(). Convert all direct users
to accessors.
[ Impact: convert direct rq->data_len usages to blk_rq_bytes() ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
ide doesn't manipulate request fields anymore and thus all hard fields
and their soft equivalents are always equal. Convert all references to
accessors.
[ Impact: use pos and nr_sectors accessors ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Implement accessors - blk_rq_pos(), blk_rq_sectors() and
blk_rq_cur_sectors() which return rq->hard_sector, rq->hard_nr_sectors
and rq->hard_cur_sectors respectively, and convert direct references to
the said fields to the accessors.
This is in preparation of request data length handling cleanup.
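The accessors are thin wrappers over the hard fields (sketch; return
types shown are approximate, and the const qualifier is the one Geert
suggested):

	static inline sector_t blk_rq_pos(const struct request *rq)
	{
		return rq->hard_sector;
	}

	static inline unsigned int blk_rq_sectors(const struct request *rq)
	{
		return rq->hard_nr_sectors;
	}

	static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
	{
		return rq->hard_cur_sectors;
	}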
Geert : suggested adding const to struct request * parameter to accessors
Sergei : spotted error in patch description
[ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Impact: remove code path which is no longer necessary
All IDE data transfers now use rq->bio. Simplify ide_map_sg()
accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Impact: cleanup rq->data usage
ide-pm uses rq->data to carry a pointer to struct request_pm_state
through the request queue, and rq->special is used to carry a pointer
to a local struct ide_cmd, which isn't necessary. Use rq->special for
request_pm_state instead and use a local ide_cmd in
ide_start_power_step().
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Impact: unify request data buffer handling
rq->data is used mostly to pass a kernel buffer through the request
queue without using a bio. There are only a couple of places which
still do this in the kernel and converting them to bio isn't difficult.
This patch converts ide-cd and atapi to use bio instead of rq->data
for request sense and internal pc commands. With the previous change to
unify sense request handling, this is relatively easily achieved by
adding blk_rq_map_kern() during sense_rq prep and PC issue.
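In sketch form, sense_rq prep now maps the drive's kernel sense buffer
through a bio (the drive->sense_rq / drive->sense_data names are
assumptions here):

	error = blk_rq_map_kern(drive->queue, &drive->sense_rq,
				&drive->sense_data, sizeof(drive->sense_data),
				GFP_NOIO);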
If blk_rq_map_kern() fails for the sense request, the error is deferred
till sense issue and aborts the failed command which triggered the
sense. Note that this is a slim possibility, as sense prep is done on
each command issue, so for the above condition to actually trigger,
every prep from the last sense issue up to the issue of the request
which requires a sense would have to fail.
* do_request functions might sleep now. This should be okay as ide
request_fn - do_ide_request() - is invoked only from make_request
and plug work. Make sure this is the case by adding might_sleep()
to do_ide_request().
* Functions which access the read sense data before the sense request
is complete now should access bio_data(sense_rq->bio) as the sense
buffer might have been copied during blk_rq_map_kern().
* ide-tape updated to map sg.
* cdrom_do_block_pc() now doesn't have to deal with REQ_TYPE_ATA_PC
special case. Simplified.
* tp_ops->output/input_data path dropped from ide_pc_intr().
Signed-off-by: Tejun Heo <tj@kernel.org>
Impact: rq->buffer usage cleanup
ide_raw_taskfile() directly uses rq->buffer to carry a pointer to the
data buffer. This complicates both block interface and ide backend
request handling. Use blk_rq_map_kern() instead and drop special
handling for REQ_TYPE_ATA_TASKFILE from ide_map_sg().
Note that the REQ_RW setting is moved upwards as blk_rq_map_kern() uses
it to initialize the bio rw flag.
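A sketch of the converted data path in ide_raw_taskfile(), with error
handling abbreviated; cmd, buf, nsect and the exact flags are taken
from the surrounding ide code as assumptions:

	rq = blk_get_request(drive->queue, READ, __GFP_WAIT);
	rq->cmd_type = REQ_TYPE_ATA_TASKFILE;

	if (cmd->tf_flags & IDE_TFLAG_WRITE)
		rq->cmd_flags |= REQ_RW;	/* set before mapping the buffer */

	if (nsect) {
		error = blk_rq_map_kern(drive->queue, rq, buf,
					nsect * SECTOR_SIZE, __GFP_WAIT);
		if (error)
			goto put_req;
	}

	rq->special = cmd;
	error = blk_execute_rq(drive->queue, NULL, rq, 0);
put_req:
	blk_put_request(rq);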
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>