Merge branch 'for-4.10/block' of git://git.kernel.dk/linux-block
Pull block layer updates from Jens Axboe:
 "This is the main block pull request this series. Contrary to previous
  release, I've kept the core and driver changes in the same branch. We
  always ended up having dependencies between the two for obvious
  reasons, so it makes more sense to keep them together. That said, I'll
  probably try and keep more topical branches going forward, especially
  for cycles that end up being as busy as this one.

  The major parts of this pull request are:

   - Improved support for O_DIRECT on block devices, with a small
     private implementation instead of using the pig that is
     fs/direct-io.c. From Christoph.

   - Request completion tracking in a scalable fashion. This is
     utilized by two components in this pull, the new hybrid polling
     and the writeback queue throttling code.

   - Improved support for polling with O_DIRECT, adding a hybrid mode
     that combines pure polling with an initial sleep. From me.

   - Support for automatic throttling of writeback queues on the block
     side. This uses feedback from the device completion latencies to
     scale the queue on the block side up or down. From me.

   - Support for SMR drives in the block layer and for SD. From Hannes
     and Shaun.

   - Multi-connection support for nbd. From Josef.

   - Cleanup of request and bio flags, so we have a clear split between
     which are bio (or rq) private, and which ones are shared. From
     Christoph.

   - A set of patches from Bart, that improve how we handle queue
     stopping and starting in blk-mq.

   - Support for WRITE_ZEROES from Chaitanya.

   - Lightnvm updates from Javier/Matias.

   - Support for FC for the nvme-over-fabrics code. From James Smart.

   - A bunch of fixes from a whole slew of people, too many to name
     here"

* 'for-4.10/block' of git://git.kernel.dk/linux-block: (182 commits)
  blk-stat: fix a few cases of missing batch flushing
  blk-flush: run the queue when inserting blk-mq flush
  elevator: make the rqhash helpers exported
  blk-mq: abstract out blk_mq_dispatch_rq_list() helper
  blk-mq: add blk_mq_start_stopped_hw_queue()
  block: improve handling of the magic discard payload
  blk-wbt: don't throttle discard or write zeroes
  nbd: use dev_err_ratelimited in io path
  nbd: reset the setup task for NBD_CLEAR_SOCK
  nvme-fabrics: Add FC LLDD loopback driver to test FC-NVME
  nvme-fabrics: Add target support for FC transport
  nvme-fabrics: Add host support for FC transport
  nvme-fabrics: Add FC transport LLDD api definitions
  nvme-fabrics: Add FC transport FC-NVME definitions
  nvme-fabrics: Add FC transport error codes to nvme.h
  Add type 0x28 NVME type code to scsi fc headers
  nvme-fabrics: patch target code in prep for FC transport support
  nvme-fabrics: set sqe.command_id in core not transports
  parser: add u64 number parser
  nvme-rdma: align to generic ib_event logging helper
  ...
commit 36869cb93d
@@ -235,3 +235,45 @@ Description:
 		write_same_max_bytes is 0, write same is not supported
 		by the device.
 
+What:		/sys/block/<disk>/queue/write_zeroes_max_bytes
+Date:		November 2016
+Contact:	Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
+Description:
+		Devices that support write zeroes operation in which a
+		single request can be issued to zero out the range of
+		contiguous blocks on storage without having any payload
+		in the request. This can be used to optimize writing zeroes
+		to the devices. write_zeroes_max_bytes indicates how many
+		bytes can be written in a single write zeroes command. If
+		write_zeroes_max_bytes is 0, write zeroes is not supported
+		by the device.
+
+What:		/sys/block/<disk>/queue/zoned
+Date:		September 2016
+Contact:	Damien Le Moal <damien.lemoal@hgst.com>
+Description:
+		zoned indicates if the device is a zoned block device
+		and the zone model of the device if it is indeed zoned.
+		The possible values indicated by zoned are "none" for
+		regular block devices and "host-aware" or "host-managed"
+		for zoned block devices. The characteristics of
+		host-aware and host-managed zoned block devices are
+		described in the ZBC (Zoned Block Commands) and ZAC
+		(Zoned Device ATA Command Set) standards. These standards
+		also define the "drive-managed" zone model. However,
+		since drive-managed zoned block devices do not support
+		zone commands, they will be treated as regular block
+		devices and zoned will report "none".
+
+What:		/sys/block/<disk>/queue/chunk_sectors
+Date:		September 2016
+Contact:	Hannes Reinecke <hare@suse.com>
+Description:
+		chunk_sectors has different meaning depending on the type
+		of the disk. For a RAID device (dm-raid), chunk_sectors
+		indicates the size in 512B sectors of the RAID volume
+		stripe segment. For a zoned block device, either
+		host-aware or host-managed, chunk_sectors indicates the
+		size in 512B sectors of the zones of the device, with
+		the eventual exception of the last zone of the device
+		which may be smaller.
@@ -348,7 +348,7 @@ Drivers can now specify a request prepare function (q->prep_rq_fn) that the
 block layer would invoke to pre-build device commands for a given request,
 or perform other preparatory processing for the request. This routine is
 called by elv_next_request(), i.e. typically just before servicing a request.
-(The prepare function would not be called for requests that have REQ_DONTPREP
+(The prepare function would not be called for requests that have RQF_DONTPREP
 enabled)
 
 Aside:

@@ -553,8 +553,8 @@ struct request {
 	struct request_list *rl;
 }
 
-See the rq_flag_bits definitions for an explanation of the various flags
-available. Some bits are used by the block layer or i/o scheduler.
+See the req_ops and req_flag_bits definitions for an explanation of the various
+flags available. Some bits are used by the block layer or i/o scheduler.
 
 The behaviour of the various sector counts are almost the same as before,
 except that since we have multi-segment bios, current_nr_sectors refers
@@ -240,11 +240,11 @@ All cfq queues doing synchronous sequential IO go on to sync-idle tree.
 On this tree we idle on each queue individually.
 
 All synchronous non-sequential queues go on sync-noidle tree. Also any
-request which are marked with REQ_NOIDLE go on this service tree. On this
-tree we do not idle on individual queues instead idle on the whole group
-of queues or the tree. So if there are 4 queues waiting for IO to dispatch
-we will idle only once last queue has dispatched the IO and there is
-no more IO on this service tree.
+synchronous write request which is not marked with REQ_IDLE goes on this
+service tree. On this tree we do not idle on individual queues instead idle
+on the whole group of queues or the tree. So if there are 4 queues waiting
+for IO to dispatch we will idle only once last queue has dispatched the IO
+and there is no more IO on this service tree.
 
 All async writes go on async service tree. There is no idling on async
 queues.

@@ -257,17 +257,17 @@ tree idling provides isolation with buffered write queues on async tree.
 
 FAQ
 ===
-Q1. Why to idle at all on queues marked with REQ_NOIDLE.
+Q1. Why to idle at all on queues not marked with REQ_IDLE.
 
-A1. We only do tree idle (all queues on sync-noidle tree) on queues marked
-    with REQ_NOIDLE. This helps in providing isolation with all the sync-idle
+A1. We only do tree idle (all queues on sync-noidle tree) on queues not marked
+    with REQ_IDLE. This helps in providing isolation with all the sync-idle
     queues. Otherwise in presence of many sequential readers, other
     synchronous IO might not get fair share of disk.
 
     For example, if there are 10 sequential readers doing IO and they get
-    100ms each. If a REQ_NOIDLE request comes in, it will be scheduled
-    roughly after 1 second. If after completion of REQ_NOIDLE request we
-    do not idle, and after a couple of milli seconds a another REQ_NOIDLE
+    100ms each. If a !REQ_IDLE request comes in, it will be scheduled
+    roughly after 1 second. If after completion of !REQ_IDLE request we
+    do not idle, and after a couple of milli seconds a another !REQ_IDLE
     request comes in, again it will be scheduled after 1second. Repeat it
     and notice how a workload can lose its disk share and suffer due to
     multiple sequential readers.
@@ -276,16 +276,16 @@ A1. We only do tree idle (all queues on sync-noidle tree) on queues marked
     context of fsync, and later some journaling data is written. Journaling
     data comes in only after fsync has finished its IO (atleast for ext4
     that seemed to be the case). Now if one decides not to idle on fsync
-    thread due to REQ_NOIDLE, then next journaling write will not get
+    thread due to !REQ_IDLE, then next journaling write will not get
     scheduled for another second. A process doing small fsync, will suffer
     badly in presence of multiple sequential readers.
 
-    Hence doing tree idling on threads using REQ_NOIDLE flag on requests
+    Hence doing tree idling on threads using !REQ_IDLE flag on requests
     provides isolation from multiple sequential readers and at the same
     time we do not idle on individual threads.
 
-Q2. When to specify REQ_NOIDLE
-A2. I would think whenever one is doing synchronous write and not expecting
+Q2. When to specify REQ_IDLE
+A2. I would think whenever one is doing synchronous write and expecting
     more writes to be dispatched from same context soon, should be able
-    to specify REQ_NOIDLE on writes and that probably should work well for
+    to specify REQ_IDLE on writes and that probably should work well for
     most of the cases.
@@ -72,4 +72,4 @@ use_per_node_hctx=[0/1]: Default: 0
 queue for each CPU node in the system.
 
 use_lightnvm=[0/1]: Default: 0
-  Register device with LightNVM. Requires blk-mq to be used.
+  Register device with LightNVM. Requires blk-mq and CONFIG_NVM to be enabled.
@@ -58,6 +58,20 @@ When read, this file shows the total number of block IO polls and how
 many returned success. Writing '0' to this file will disable polling
 for this device. Writing any non-zero value will enable this feature.
 
+io_poll_delay (RW)
+------------------
+If polling is enabled, this controls what kind of polling will be
+performed. It defaults to -1, which is classic polling. In this mode,
+the CPU will repeatedly ask for completions without giving up any time.
+If set to 0, a hybrid polling mode is used, where the kernel will attempt
+to make an educated guess at when the IO will complete. Based on this
+guess, the kernel will put the process issuing IO to sleep for an amount
+of time, before entering a classic poll loop. This mode might be a
+little slower than pure classic polling, but it will be more efficient.
+If set to a value larger than 0, the kernel will put the process issuing
+IO to sleep for this amount of microseconds before entering classic
+polling.
+
 iostats (RW)
 -------------
 This file is used to control (on/off) the iostats accounting of the
@@ -169,5 +183,14 @@ This is the number of bytes the device can write in a single write-same
 command. A value of '0' means write-same is not supported by this
 device.
 
+wb_lat_usec (RW)
+----------------
+If the device is registered for writeback throttling, then this file shows
+the target minimum read latency. If this latency is exceeded in a given
+window of time (see wb_window_usec), then the writeback throttling will start
+scaling back writes. Writing a value of '0' to this file disables the
+feature. Writing a value of '-1' to this file resets the value to the
+default setting.
+
 
 Jens Axboe <jens.axboe@oracle.com>, February 2009
MAINTAINERS (14 lines changed)
@@ -8766,6 +8766,16 @@ L:	linux-nvme@lists.infradead.org
 S:	Supported
 F:	drivers/nvme/target/
 
+NVM EXPRESS FC TRANSPORT DRIVERS
+M:	James Smart <james.smart@broadcom.com>
+L:	linux-nvme@lists.infradead.org
+S:	Supported
+F:	include/linux/nvme-fc.h
+F:	include/linux/nvme-fc-driver.h
+F:	drivers/nvme/host/fc.c
+F:	drivers/nvme/target/fc.c
+F:	drivers/nvme/target/fcloop.c
+
 NVMEM FRAMEWORK
 M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
 M:	Maxime Ripard <maxime.ripard@free-electrons.com>

@@ -9656,8 +9666,8 @@ F: arch/mips/boot/dts/pistachio/
 F:	arch/mips/configs/pistachio*_defconfig
 
 PKTCDVD DRIVER
-M:	Jiri Kosina <jikos@kernel.org>
-S:	Maintained
+S:	Orphan
+M:	linux-block@vger.kernel.org
 F:	drivers/block/pktcdvd.c
 F:	include/linux/pktcdvd.h
 F:	include/uapi/linux/pktcdvd.h
@@ -25,7 +25,6 @@
 
 #include <linux/string.h>
 #include <linux/types.h>
-#include <linux/blk_types.h>
 #include <asm/byteorder.h>
 #include <asm/memory.h>
 #include <asm-generic/pci_iomap.h>

@@ -22,7 +22,6 @@
 #ifdef __KERNEL__
 
 #include <linux/types.h>
-#include <linux/blk_types.h>
 
 #include <asm/byteorder.h>
 #include <asm/barrier.h>
@@ -5,6 +5,7 @@ menuconfig BLOCK
 	bool "Enable the block layer" if EXPERT
 	default y
 	select SBITMAP
+	select SRCU
 	help
 	 Provide block layer support for the kernel.
 

@@ -89,6 +90,14 @@ config BLK_DEV_INTEGRITY
 	T10/SCSI Data Integrity Field or the T13/ATA External Path
 	Protection.  If in doubt, say N.
 
+config BLK_DEV_ZONED
+	bool "Zoned block device support"
+	---help---
+	Block layer zoned block device support. This option enables
+	support for ZAC/ZBC host-managed and host-aware zoned block devices.
+
+	Say yes here if you have a ZAC or ZBC storage device.
+
 config BLK_DEV_THROTTLING
 	bool "Block layer bio throttling support"
 	depends on BLK_CGROUP=y

@@ -112,6 +121,32 @@ config BLK_CMDLINE_PARSER
 
 	See Documentation/block/cmdline-partition.txt for more information.
 
+config BLK_WBT
+	bool "Enable support for block device writeback throttling"
+	default n
+	---help---
+	Enabling this option enables the block layer to throttle buffered
+	background writeback from the VM, making it more smooth and having
+	less impact on foreground operations. The throttling is done
+	dynamically on an algorithm loosely based on CoDel, factoring in
+	the realtime performance of the disk.
+
+config BLK_WBT_SQ
+	bool "Single queue writeback throttling"
+	default n
+	depends on BLK_WBT
+	---help---
+	Enable writeback throttling by default on legacy single queue devices
+
+config BLK_WBT_MQ
+	bool "Multiqueue writeback throttling"
+	default y
+	depends on BLK_WBT
+	---help---
+	Enable writeback throttling by default on multiqueue devices.
+	Multiqueue currently doesn't have support for IO scheduling,
+	enabling this option is recommended.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
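Taken together, a kernel `.config` built from the options added above might contain a fragment like the following (illustrative; BLK_WBT_MQ follows its `default y` once BLK_WBT is enabled):

```
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_WBT=y
# CONFIG_BLK_WBT_SQ is not set
CONFIG_BLK_WBT_MQ=y
```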
@@ -5,7 +5,7 @@
 obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \
 			blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
 			blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
-			blk-lib.o blk-mq.o blk-mq-tag.o \
+			blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
 			blk-mq-sysfs.o blk-mq-cpumap.o ioctl.o \
 			genhd.o scsi_ioctl.o partition-generic.o ioprio.o \
 			badblocks.o partitions/

@@ -23,3 +23,5 @@ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
 obj-$(CONFIG_BLK_CMDLINE_PARSER)	+= cmdline-parser.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o t10-pi.o
 obj-$(CONFIG_BLK_MQ_PCI)	+= blk-mq-pci.o
+obj-$(CONFIG_BLK_DEV_ZONED)	+= blk-zoned.o
+obj-$(CONFIG_BLK_WBT)	+= blk-wbt.o
@@ -172,7 +172,7 @@ bool bio_integrity_enabled(struct bio *bio)
 {
 	struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev);
 
-	if (!bio_is_rw(bio))
+	if (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE)
 		return false;
 
 	/* Already protected? */
block/bio.c (68 lines changed)
@@ -270,11 +270,15 @@ static void bio_free(struct bio *bio)
 	}
 }
 
-void bio_init(struct bio *bio)
+void bio_init(struct bio *bio, struct bio_vec *table,
+	      unsigned short max_vecs)
 {
 	memset(bio, 0, sizeof(*bio));
 	atomic_set(&bio->__bi_remaining, 1);
 	atomic_set(&bio->__bi_cnt, 1);
+
+	bio->bi_io_vec = table;
+	bio->bi_max_vecs = max_vecs;
 }
 EXPORT_SYMBOL(bio_init);
 
@@ -480,7 +484,7 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
 		return NULL;
 
 	bio = p + front_pad;
-	bio_init(bio);
+	bio_init(bio, NULL, 0);
 
 	if (nr_iovecs > inline_vecs) {
 		unsigned long idx = 0;

@@ -670,6 +674,7 @@ struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask,
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
+	case REQ_OP_WRITE_ZEROES:
 		break;
 	case REQ_OP_WRITE_SAME:
 		bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];

@@ -847,6 +852,55 @@ done:
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
+ * @bio: bio to add pages to
+ * @iter: iov iterator describing the region to be mapped
+ *
+ * Pins as many pages from *iter and appends them to @bio's bvec array. The
+ * pages will have to be released using put_page() when done.
+ */
+int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+{
+	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+	struct page **pages = (struct page **)bv;
+	size_t offset, diff;
+	ssize_t size;
+
+	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+	if (unlikely(size <= 0))
+		return size ? size : -EFAULT;
+	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
+
+	/*
+	 * Deep magic below: We need to walk the pinned pages backwards
+	 * because we are abusing the space allocated for the bio_vecs
+	 * for the page array. Because the bio_vecs are larger than the
+	 * page pointers by definition this will always work. But it also
+	 * means we can't use bio_add_page, so any changes to its semantics
+	 * need to be reflected here as well.
+	 */
+	bio->bi_iter.bi_size += size;
+	bio->bi_vcnt += nr_pages;
+
+	diff = (nr_pages * PAGE_SIZE - offset) - size;
+	while (nr_pages--) {
+		bv[nr_pages].bv_page = pages[nr_pages];
+		bv[nr_pages].bv_len = PAGE_SIZE;
+		bv[nr_pages].bv_offset = 0;
+	}
+
+	bv[0].bv_offset += offset;
+	bv[0].bv_len -= offset;
+	if (diff)
+		bv[bio->bi_vcnt - 1].bv_len -= diff;
+
+	iov_iter_advance(iter, size);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
+
 struct submit_bio_ret {
 	struct completion event;
 	int error;
@@ -1786,15 +1840,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
 	BUG_ON(sectors <= 0);
 	BUG_ON(sectors >= bio_sectors(bio));
 
-	/*
-	 * Discards need a mutable bio_vec to accommodate the payload
-	 * required by the DSM TRIM and UNMAP commands.
-	 */
-	if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
-		split = bio_clone_bioset(bio, gfp, bs);
-	else
-		split = bio_clone_fast(bio, gfp, bs);
-
+	split = bio_clone_fast(bio, gfp, bs);
 	if (!split)
 		return NULL;
 
@@ -185,7 +185,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
 	}
 
 	wb_congested = wb_congested_get_create(&q->backing_dev_info,
-					       blkcg->css.id, GFP_NOWAIT);
+					       blkcg->css.id,
+					       GFP_NOWAIT | __GFP_NOWARN);
 	if (!wb_congested) {
 		ret = -ENOMEM;
 		goto err_put_css;

@@ -193,7 +194,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
 
 	/* allocate */
 	if (!new_blkg) {
-		new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT);
+		new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT | __GFP_NOWARN);
 		if (unlikely(!new_blkg)) {
 			ret = -ENOMEM;
 			goto err_put_congested;

@@ -1022,7 +1023,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
 	}
 
 	spin_lock_init(&blkcg->lock);
-	INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT);
+	INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN);
 	INIT_HLIST_HEAD(&blkcg->blkg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&blkcg->cgwb_list);

@@ -1240,7 +1241,7 @@ pd_prealloc:
 		if (blkg->pd[pol->plid])
 			continue;
 
-		pd = pol->pd_alloc_fn(GFP_NOWAIT, q->node);
+		pd = pol->pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, q->node);
 		if (!pd)
 			swap(pd, pd_prealloc);
 		if (!pd) {
block/blk-core.c (261 lines changed)
@@ -39,6 +39,7 @@
 
 #include "blk.h"
 #include "blk-mq.h"
+#include "blk-wbt.h"
 
 EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_remap);
 EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_remap);

@@ -145,13 +146,13 @@ static void req_bio_endio(struct request *rq, struct bio *bio,
 	if (error)
 		bio->bi_error = error;
 
-	if (unlikely(rq->cmd_flags & REQ_QUIET))
+	if (unlikely(rq->rq_flags & RQF_QUIET))
 		bio_set_flag(bio, BIO_QUIET);
 
 	bio_advance(bio, nbytes);
 
 	/* don't actually finish bio if it's part of flush sequence */
-	if (bio->bi_iter.bi_size == 0 && !(rq->cmd_flags & REQ_FLUSH_SEQ))
+	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
 		bio_endio(bio);
 }
 

@@ -882,6 +883,7 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
 
 fail:
 	blk_free_flush_queue(q->fq);
+	wbt_exit(q);
 	return NULL;
 }
 EXPORT_SYMBOL(blk_init_allocated_queue);

@@ -899,7 +901,7 @@ EXPORT_SYMBOL(blk_get_queue);
 
 static inline void blk_free_request(struct request_list *rl, struct request *rq)
 {
-	if (rq->cmd_flags & REQ_ELVPRIV) {
+	if (rq->rq_flags & RQF_ELVPRIV) {
 		elv_put_request(rl->q, rq);
 		if (rq->elv.icq)
 			put_io_context(rq->elv.icq->ioc);

@@ -961,14 +963,14 @@ static void __freed_request(struct request_list *rl, int sync)
  * A request has just been released. Account for it, update the full and
  * congestion status, wake up any waiters. Called under q->queue_lock.
  */
-static void freed_request(struct request_list *rl, int op, unsigned int flags)
+static void freed_request(struct request_list *rl, bool sync,
+		req_flags_t rq_flags)
 {
 	struct request_queue *q = rl->q;
-	int sync = rw_is_sync(op, flags);
 
 	q->nr_rqs[sync]--;
 	rl->count[sync]--;
-	if (flags & REQ_ELVPRIV)
+	if (rq_flags & RQF_ELVPRIV)
 		q->nr_rqs_elvpriv--;
 
 	__freed_request(rl, sync);

@@ -1056,8 +1058,7 @@ static struct io_context *rq_ioc(struct bio *bio)
 /**
  * __get_request - get a free request
  * @rl: request list to allocate from
- * @op: REQ_OP_READ/REQ_OP_WRITE
- * @op_flags: rq_flag_bits
+ * @op: operation and flags
  * @bio: bio to allocate request for (can be %NULL)
  * @gfp_mask: allocation mask
  *

@@ -1068,22 +1069,22 @@ static struct io_context *rq_ioc(struct bio *bio)
  * Returns ERR_PTR on failure, with @q->queue_lock held.
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
-static struct request *__get_request(struct request_list *rl, int op,
-				     int op_flags, struct bio *bio,
-				     gfp_t gfp_mask)
+static struct request *__get_request(struct request_list *rl, unsigned int op,
+		struct bio *bio, gfp_t gfp_mask)
 {
 	struct request_queue *q = rl->q;
 	struct request *rq;
 	struct elevator_type *et = q->elevator->type;
 	struct io_context *ioc = rq_ioc(bio);
 	struct io_cq *icq = NULL;
-	const bool is_sync = rw_is_sync(op, op_flags) != 0;
+	const bool is_sync = op_is_sync(op);
 	int may_queue;
+	req_flags_t rq_flags = RQF_ALLOCED;
 
 	if (unlikely(blk_queue_dying(q)))
 		return ERR_PTR(-ENODEV);
 
-	may_queue = elv_may_queue(q, op, op_flags);
+	may_queue = elv_may_queue(q, op);
 	if (may_queue == ELV_MQUEUE_NO)
 		goto rq_starved;
 
@ -1127,7 +1128,7 @@ static struct request *__get_request(struct request_list *rl, int op,
|
|||
|
||||
/*
|
||||
* Decide whether the new request will be managed by elevator. If
|
||||
* so, mark @op_flags and increment elvpriv. Non-zero elvpriv will
|
||||
* so, mark @rq_flags and increment elvpriv. Non-zero elvpriv will
|
||||
* prevent the current elevator from being destroyed until the new
|
||||
* request is freed. This guarantees icq's won't be destroyed and
|
||||
* makes creating new ones safe.
|
||||
|
@ -1136,14 +1137,14 @@ static struct request *__get_request(struct request_list *rl, int op,
|
|||
* it will be created after releasing queue_lock.
|
||||
*/
|
||||
if (blk_rq_should_init_elevator(bio) && !blk_queue_bypass(q)) {
|
||||
op_flags |= REQ_ELVPRIV;
|
||||
rq_flags |= RQF_ELVPRIV;
|
||||
q->nr_rqs_elvpriv++;
|
||||
if (et->icq_cache && ioc)
|
||||
icq = ioc_lookup_icq(ioc, q);
|
||||
}
|
||||
|
||||
if (blk_queue_io_stat(q))
|
||||
op_flags |= REQ_IO_STAT;
|
||||
rq_flags |= RQF_IO_STAT;
|
||||
spin_unlock_irq(q->queue_lock);
|
||||
|
||||
/* allocate and init request */
|
||||
|
@ -1153,10 +1154,11 @@ static struct request *__get_request(struct request_list *rl, int op,
|
|||
|
||||
blk_rq_init(q, rq);
|
||||
blk_rq_set_rl(rq, rl);
|
||||
req_set_op_attrs(rq, op, op_flags | REQ_ALLOCED);
|
||||
rq->cmd_flags = op;
|
||||
rq->rq_flags = rq_flags;
|
||||
|
||||
/* init elvpriv */
|
||||
if (op_flags & REQ_ELVPRIV) {
|
||||
if (rq_flags & RQF_ELVPRIV) {
|
||||
if (unlikely(et->icq_cache && !icq)) {
|
||||
if (ioc)
|
||||
icq = ioc_create_icq(ioc, q, gfp_mask);
|
||||
|
@ -1195,7 +1197,7 @@ fail_elvpriv:
|
|||
printk_ratelimited(KERN_WARNING "%s: dev %s: request aux data allocation failed, iosched may be disturbed\n",
|
||||
__func__, dev_name(q->backing_dev_info.dev));
|
||||
|
||||
rq->cmd_flags &= ~REQ_ELVPRIV;
|
||||
rq->rq_flags &= ~RQF_ELVPRIV;
|
||||
rq->elv.icq = NULL;
|
||||
|
||||
spin_lock_irq(q->queue_lock);
|
||||
|
@ -1212,7 +1214,7 @@ fail_alloc:
|
|||
* queue, but this is pretty rare.
|
||||
*/
|
||||
spin_lock_irq(q->queue_lock);
|
||||
freed_request(rl, op, op_flags);
|
||||
freed_request(rl, is_sync, rq_flags);
|
||||
|
||||
/*
|
||||
* in the very unlikely event that allocation failed and no
|
||||
|
@ -1230,8 +1232,7 @@ rq_starved:
|
|||
/**
|
||||
* get_request - get a free request
|
||||
* @q: request_queue to allocate request from
|
||||
* @op: REQ_OP_READ/REQ_OP_WRITE
|
||||
* @op_flags: rq_flag_bits
|
||||
* @op: operation and flags
|
||||
* @bio: bio to allocate request for (can be %NULL)
|
||||
* @gfp_mask: allocation mask
|
||||
*
|
||||
|
@ -1242,18 +1243,17 @@ rq_starved:
|
|||
* Returns ERR_PTR on failure, with @q->queue_lock held.
|
||||
* Returns request pointer on success, with @q->queue_lock *not held*.
|
||||
*/
|
||||
static struct request *get_request(struct request_queue *q, int op,
|
||||
int op_flags, struct bio *bio,
|
||||
gfp_t gfp_mask)
|
||||
static struct request *get_request(struct request_queue *q, unsigned int op,
|
||||
struct bio *bio, gfp_t gfp_mask)
|
||||
{
|
||||
const bool is_sync = rw_is_sync(op, op_flags) != 0;
|
||||
const bool is_sync = op_is_sync(op);
|
||||
DEFINE_WAIT(wait);
|
||||
struct request_list *rl;
|
||||
struct request *rq;
|
||||
|
||||
rl = blk_get_rl(q, bio); /* transferred to @rq on success */
|
||||
retry:
|
||||
rq = __get_request(rl, op, op_flags, bio, gfp_mask);
|
||||
rq = __get_request(rl, op, bio, gfp_mask);
|
||||
if (!IS_ERR(rq))
|
||||
return rq;
|
||||
|
||||
|
@ -1295,7 +1295,7 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
|
|||
create_io_context(gfp_mask, q->node);
|
||||
|
||||
spin_lock_irq(q->queue_lock);
|
||||
rq = get_request(q, rw, 0, NULL, gfp_mask);
|
||||
rq = get_request(q, rw, NULL, gfp_mask);
|
||||
if (IS_ERR(rq)) {
|
||||
spin_unlock_irq(q->queue_lock);
|
||||
return rq;
|
||||
|
@ -1346,8 +1346,9 @@ void blk_requeue_request(struct request_queue *q, struct request *rq)
|
|||
blk_delete_timer(rq);
|
||||
blk_clear_rq_complete(rq);
|
||||
trace_block_rq_requeue(q, rq);
|
||||
wbt_requeue(q->rq_wb, &rq->issue_stat);
|
||||
|
||||
if (rq->cmd_flags & REQ_QUEUED)
|
||||
if (rq->rq_flags & RQF_QUEUED)
|
||||
blk_queue_end_tag(q, rq);
|
||||
|
||||
BUG_ON(blk_queued_rq(rq));
|
||||
|
@@ -1409,7 +1410,7 @@ EXPORT_SYMBOL_GPL(part_round_stats);
 #ifdef CONFIG_PM
 static void blk_pm_put_request(struct request *rq)
 {
-	if (rq->q->dev && !(rq->cmd_flags & REQ_PM) && !--rq->q->nr_pending)
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
 		pm_runtime_mark_last_busy(rq->q->dev);
 }
 #else
@@ -1421,6 +1422,8 @@ static inline void blk_pm_put_request(struct request *rq) {}
  */
 void __blk_put_request(struct request_queue *q, struct request *req)
 {
+	req_flags_t rq_flags = req->rq_flags;
+
 	if (unlikely(!q))
 		return;
 
@@ -1436,20 +1439,21 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	/* this is a bio leak */
 	WARN_ON(req->bio != NULL);
 
+	wbt_done(q->rq_wb, &req->issue_stat);
+
 	/*
 	 * Request may not have originated from ll_rw_blk. if not,
 	 * it didn't come out of our reserved rq pools
 	 */
-	if (req->cmd_flags & REQ_ALLOCED) {
-		unsigned int flags = req->cmd_flags;
-		int op = req_op(req);
+	if (rq_flags & RQF_ALLOCED) {
 		struct request_list *rl = blk_rq_rl(req);
+		bool sync = op_is_sync(req->cmd_flags);
 
 		BUG_ON(!list_empty(&req->queuelist));
 		BUG_ON(ELV_ON_HASH(req));
 
 		blk_free_request(rl, req);
-		freed_request(rl, op, flags);
+		freed_request(rl, sync, rq_flags);
 		blk_put_rl(rl);
 	}
 }
@@ -1471,38 +1475,6 @@ void blk_put_request(struct request *req)
 }
 EXPORT_SYMBOL(blk_put_request);
 
-/**
- * blk_add_request_payload - add a payload to a request
- * @rq: request to update
- * @page: page backing the payload
- * @offset: offset in page
- * @len: length of the payload.
- *
- * This allows to later add a payload to an already submitted request by
- * a block driver.  The driver needs to take care of freeing the payload
- * itself.
- *
- * Note that this is a quite horrible hack and nothing but handling of
- * discard requests should ever use it.
- */
-void blk_add_request_payload(struct request *rq, struct page *page,
-		int offset, unsigned int len)
-{
-	struct bio *bio = rq->bio;
-
-	bio->bi_io_vec->bv_page = page;
-	bio->bi_io_vec->bv_offset = offset;
-	bio->bi_io_vec->bv_len = len;
-
-	bio->bi_iter.bi_size = len;
-	bio->bi_vcnt = 1;
-	bio->bi_phys_segments = 1;
-
-	rq->__data_len = rq->resid_len = len;
-	rq->nr_phys_segments = 1;
-}
-EXPORT_SYMBOL_GPL(blk_add_request_payload);
-
 bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
 			    struct bio *bio)
 {
@@ -1649,8 +1621,6 @@ out:
 void init_request_from_bio(struct request *req, struct bio *bio)
 {
 	req->cmd_type = REQ_TYPE_FS;
-
-	req->cmd_flags |= bio->bi_opf & REQ_COMMON_MASK;
 	if (bio->bi_opf & REQ_RAHEAD)
 		req->cmd_flags |= REQ_FAILFAST_MASK;
 
@@ -1662,11 +1632,11 @@ void init_request_from_bio(struct request *req, struct bio *bio)
 static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 {
 	const bool sync = !!(bio->bi_opf & REQ_SYNC);
 	struct blk_plug *plug;
-	int el_ret, rw_flags = 0, where = ELEVATOR_INSERT_SORT;
+	int el_ret, where = ELEVATOR_INSERT_SORT;
 	struct request *req;
 	unsigned int request_count = 0;
+	unsigned int wb_acct;
 
 	/*
 	 * low level driver can indicate that it wants pages above a
@@ -1719,30 +1689,22 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 	}
 
 get_rq:
-	/*
-	 * This sync check and mask will be re-done in init_request_from_bio(),
-	 * but we need to set it earlier to expose the sync flag to the
-	 * rq allocator and io schedulers.
-	 */
-	if (sync)
-		rw_flags |= REQ_SYNC;
-
-	/*
-	 * Add in META/PRIO flags, if set, before we get to the IO scheduler
-	 */
-	rw_flags |= (bio->bi_opf & (REQ_META | REQ_PRIO));
+	wb_acct = wbt_wait(q->rq_wb, bio, q->queue_lock);
 
 	/*
 	 * Grab a free request. This is might sleep but can not fail.
 	 * Returns with the queue unlocked.
 	 */
-	req = get_request(q, bio_data_dir(bio), rw_flags, bio, GFP_NOIO);
+	req = get_request(q, bio->bi_opf, bio, GFP_NOIO);
 	if (IS_ERR(req)) {
+		__wbt_done(q->rq_wb, wb_acct);
 		bio->bi_error = PTR_ERR(req);
 		bio_endio(bio);
 		goto out_unlock;
 	}
 
+	wbt_track(&req->issue_stat, wb_acct);
+
 	/*
 	 * After dropping the lock and possibly sleeping here, our request
 	 * may now be mergeable after it had proven unmergeable (above).
@@ -1759,11 +1721,16 @@ get_rq:
 		/*
 		 * If this is the first request added after a plug, fire
 		 * of a plug trace.
+		 *
+		 * @request_count may become stale because of schedule
+		 * out, so check plug list again.
 		 */
-		if (!request_count)
+		if (!request_count || list_empty(&plug->list))
 			trace_block_plug(q);
 		else {
-			if (request_count >= BLK_MAX_REQUEST_COUNT) {
+			struct request *last = list_entry_rq(plug->list.prev);
+
+			if (request_count >= BLK_MAX_REQUEST_COUNT ||
+			    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE) {
 				blk_flush_plug_list(plug, false);
 				trace_block_plug(q);
 			}
@@ -1788,7 +1755,12 @@ static inline void blk_partition_remap(struct bio *bio)
 {
 	struct block_device *bdev = bio->bi_bdev;
 
-	if (bio_sectors(bio) && bdev != bdev->bd_contains) {
+	/*
+	 * Zone reset does not include bi_size so bio_sectors() is always 0.
+	 * Include a test for the reset op code and perform the remap if needed.
+	 */
+	if (bdev != bdev->bd_contains &&
+	    (bio_sectors(bio) || bio_op(bio) == REQ_OP_ZONE_RESET)) {
 		struct hd_struct *p = bdev->bd_part;
 
 		bio->bi_iter.bi_sector += p->start_sect;
@@ -1942,6 +1914,15 @@ generic_make_request_checks(struct bio *bio)
 		if (!bdev_write_same(bio->bi_bdev))
 			goto not_supported;
 		break;
+	case REQ_OP_ZONE_REPORT:
+	case REQ_OP_ZONE_RESET:
+		if (!bdev_is_zoned(bio->bi_bdev))
+			goto not_supported;
+		break;
+	case REQ_OP_WRITE_ZEROES:
+		if (!bdev_write_zeroes_sectors(bio->bi_bdev))
+			goto not_supported;
+		break;
 	default:
 		break;
 	}
@@ -2210,7 +2191,7 @@ unsigned int blk_rq_err_bytes(const struct request *rq)
 	unsigned int bytes = 0;
 	struct bio *bio;
 
-	if (!(rq->cmd_flags & REQ_MIXED_MERGE))
+	if (!(rq->rq_flags & RQF_MIXED_MERGE))
 		return blk_rq_bytes(rq);
 
 	/*
@@ -2253,7 +2234,7 @@ void blk_account_io_done(struct request *req)
 	 * normal IO on queueing nor completion.  Accounting the
 	 * containing request is enough.
 	 */
-	if (blk_do_io_stat(req) && !(req->cmd_flags & REQ_FLUSH_SEQ)) {
+	if (blk_do_io_stat(req) && !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		unsigned long duration = jiffies - req->start_time;
 		const int rw = rq_data_dir(req);
 		struct hd_struct *part;
@@ -2281,7 +2262,7 @@ static struct request *blk_pm_peek_request(struct request_queue *q,
 					   struct request *rq)
 {
 	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
-	    (q->rpm_status != RPM_ACTIVE && !(rq->cmd_flags & REQ_PM))))
+	    (q->rpm_status != RPM_ACTIVE && !(rq->rq_flags & RQF_PM))))
 		return NULL;
 	else
 		return rq;
@@ -2357,13 +2338,13 @@ struct request *blk_peek_request(struct request_queue *q)
 		if (!rq)
 			break;
 
-		if (!(rq->cmd_flags & REQ_STARTED)) {
+		if (!(rq->rq_flags & RQF_STARTED)) {
 			/*
 			 * This is the first time the device driver
 			 * sees this request (possibly after
 			 * requeueing).  Notify IO scheduler.
 			 */
-			if (rq->cmd_flags & REQ_SORTED)
+			if (rq->rq_flags & RQF_SORTED)
 				elv_activate_rq(q, rq);
 
 			/*
@@ -2371,7 +2352,7 @@ struct request *blk_peek_request(struct request_queue *q)
 			 * it, a request that has been delayed should
 			 * not be passed by new incoming requests
 			 */
-			rq->cmd_flags |= REQ_STARTED;
+			rq->rq_flags |= RQF_STARTED;
 			trace_block_rq_issue(q, rq);
 		}
 
@@ -2380,7 +2361,7 @@ struct request *blk_peek_request(struct request_queue *q)
 			q->boundary_rq = NULL;
 		}
 
-		if (rq->cmd_flags & REQ_DONTPREP)
+		if (rq->rq_flags & RQF_DONTPREP)
 			break;
 
 		if (q->dma_drain_size && blk_rq_bytes(rq)) {
@@ -2403,11 +2384,11 @@ struct request *blk_peek_request(struct request_queue *q)
 			/*
 			 * the request may have been (partially) prepped.
 			 * we need to keep this request in the front to
-			 * avoid resource deadlock.  REQ_STARTED will
+			 * avoid resource deadlock.  RQF_STARTED will
 			 * prevent other fs requests from passing this one.
 			 */
 			if (q->dma_drain_size && blk_rq_bytes(rq) &&
-			    !(rq->cmd_flags & REQ_DONTPREP)) {
+			    !(rq->rq_flags & RQF_DONTPREP)) {
 				/*
 				 * remove the space for the drain we added
 				 * so that we don't add it again
@@ -2420,7 +2401,7 @@ struct request *blk_peek_request(struct request_queue *q)
 		} else if (ret == BLKPREP_KILL || ret == BLKPREP_INVALID) {
 			int err = (ret == BLKPREP_INVALID) ? -EREMOTEIO : -EIO;
 
-			rq->cmd_flags |= REQ_QUIET;
+			rq->rq_flags |= RQF_QUIET;
 			/*
 			 * Mark this request as started so we don't trigger
 			 * any debug logic in the end I/O path.
@@ -2475,6 +2456,12 @@ void blk_start_request(struct request *req)
 {
 	blk_dequeue_request(req);
 
+	if (test_bit(QUEUE_FLAG_STATS, &req->q->queue_flags)) {
+		blk_stat_set_issue_time(&req->issue_stat);
+		req->rq_flags |= RQF_STATS;
+		wbt_issue(req->q->rq_wb, &req->issue_stat);
+	}
+
 	/*
 	 * We are now handing the request to the hardware, initialize
 	 * resid_len to full count and add the timeout handler.
@@ -2557,7 +2544,7 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
 		req->errors = 0;
 
 	if (error && req->cmd_type == REQ_TYPE_FS &&
-	    !(req->cmd_flags & REQ_QUIET)) {
+	    !(req->rq_flags & RQF_QUIET)) {
 		char *error_type;
 
 		switch (error) {
@@ -2623,6 +2610,8 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
 		return false;
 	}
 
+	WARN_ON_ONCE(req->rq_flags & RQF_SPECIAL_PAYLOAD);
+
 	req->__data_len -= total_bytes;
 
 	/* update sector only for requests with clear definition of sector */
@@ -2630,7 +2619,7 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
 		req->__sector += total_bytes >> 9;
 
 	/* mixed attributes always follow the first bio */
-	if (req->cmd_flags & REQ_MIXED_MERGE) {
+	if (req->rq_flags & RQF_MIXED_MERGE) {
 		req->cmd_flags &= ~REQ_FAILFAST_MASK;
 		req->cmd_flags |= req->bio->bi_opf & REQ_FAILFAST_MASK;
 	}
@@ -2683,7 +2672,7 @@ void blk_unprep_request(struct request *req)
 {
 	struct request_queue *q = req->q;
 
-	req->cmd_flags &= ~REQ_DONTPREP;
+	req->rq_flags &= ~RQF_DONTPREP;
 	if (q->unprep_rq_fn)
 		q->unprep_rq_fn(q, req);
 }
@@ -2694,8 +2683,13 @@ EXPORT_SYMBOL_GPL(blk_unprep_request);
  */
 void blk_finish_request(struct request *req, int error)
 {
-	if (req->cmd_flags & REQ_QUEUED)
-		blk_queue_end_tag(req->q, req);
+	struct request_queue *q = req->q;
+
+	if (req->rq_flags & RQF_STATS)
+		blk_stat_add(&q->rq_stats[rq_data_dir(req)], req);
+
+	if (req->rq_flags & RQF_QUEUED)
+		blk_queue_end_tag(q, req);
 
 	BUG_ON(blk_queued_rq(req));
 
@@ -2704,18 +2698,19 @@ void blk_finish_request(struct request *req, int error)
 
 	blk_delete_timer(req);
 
-	if (req->cmd_flags & REQ_DONTPREP)
+	if (req->rq_flags & RQF_DONTPREP)
 		blk_unprep_request(req);
 
 	blk_account_io_done(req);
 
-	if (req->end_io)
+	if (req->end_io) {
+		wbt_done(req->q->rq_wb, &req->issue_stat);
 		req->end_io(req, error);
-	else {
+	} else {
 		if (blk_bidi_rq(req))
 			__blk_put_request(req->next_rq->q, req->next_rq);
 
-		__blk_put_request(req->q, req);
+		__blk_put_request(q, req);
 	}
 }
 EXPORT_SYMBOL(blk_finish_request);
@@ -2939,8 +2934,6 @@ EXPORT_SYMBOL_GPL(__blk_end_request_err);
 void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 		     struct bio *bio)
 {
-	req_set_op(rq, bio_op(bio));
-
 	if (bio_has_data(bio))
 		rq->nr_phys_segments = bio_phys_segments(q, bio);
 
@@ -3024,8 +3017,7 @@ EXPORT_SYMBOL_GPL(blk_rq_unprep_clone);
 static void __blk_rq_prep_clone(struct request *dst, struct request *src)
 {
 	dst->cpu = src->cpu;
-	req_set_op_attrs(dst, req_op(src),
-			 (src->cmd_flags & REQ_CLONE_MASK) | REQ_NOMERGE);
+	dst->cmd_flags = src->cmd_flags | REQ_NOMERGE;
 	dst->cmd_type = src->cmd_type;
 	dst->__sector = blk_rq_pos(src);
 	dst->__data_len = blk_rq_bytes(src);
@@ -3303,52 +3295,6 @@ void blk_finish_plug(struct blk_plug *plug)
 }
 EXPORT_SYMBOL(blk_finish_plug);
 
-bool blk_poll(struct request_queue *q, blk_qc_t cookie)
-{
-	struct blk_plug *plug;
-	long state;
-	unsigned int queue_num;
-	struct blk_mq_hw_ctx *hctx;
-
-	if (!q->mq_ops || !q->mq_ops->poll || !blk_qc_t_valid(cookie) ||
-	    !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
-		return false;
-
-	queue_num = blk_qc_t_to_queue_num(cookie);
-	hctx = q->queue_hw_ctx[queue_num];
-	hctx->poll_considered++;
-
-	plug = current->plug;
-	if (plug)
-		blk_flush_plug_list(plug, false);
-
-	state = current->state;
-	while (!need_resched()) {
-		int ret;
-
-		hctx->poll_invoked++;
-
-		ret = q->mq_ops->poll(hctx, blk_qc_t_to_tag(cookie));
-		if (ret > 0) {
-			hctx->poll_success++;
-			set_current_state(TASK_RUNNING);
-			return true;
-		}
-
-		if (signal_pending_state(state, current))
-			set_current_state(TASK_RUNNING);
-
-		if (current->state == TASK_RUNNING)
-			return true;
-		if (ret < 0)
-			break;
-		cpu_relax();
-	}
-
-	return false;
-}
-EXPORT_SYMBOL_GPL(blk_poll);
-
 #ifdef CONFIG_PM
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -3530,8 +3476,11 @@ EXPORT_SYMBOL(blk_set_runtime_active);
 
 int __init blk_dev_init(void)
 {
-	BUILD_BUG_ON(__REQ_NR_BITS > 8 *
+	BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS));
+	BUILD_BUG_ON(REQ_OP_BITS + REQ_FLAG_BITS > 8 *
 			FIELD_SIZEOF(struct request, cmd_flags));
+	BUILD_BUG_ON(REQ_OP_BITS + REQ_FLAG_BITS > 8 *
+			FIELD_SIZEOF(struct bio, bi_opf));
 
 	/* used for unplugging and affects IO latency/throughput - HIGHPRI */
 	kblockd_workqueue = alloc_workqueue("kblockd",
@@ -72,7 +72,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	spin_lock_irq(q->queue_lock);
 
 	if (unlikely(blk_queue_dying(q))) {
-		rq->cmd_flags |= REQ_QUIET;
+		rq->rq_flags |= RQF_QUIET;
 		rq->errors = -ENXIO;
 		__blk_end_request_all(rq, rq->errors);
 		spin_unlock_irq(q->queue_lock);
@@ -56,7 +56,7 @@
  * Once while executing DATA and again after the whole sequence is
  * complete.  The first completion updates the contained bio but doesn't
  * finish it so that the bio submitter is notified only after the whole
- * sequence is complete.  This is implemented by testing REQ_FLUSH_SEQ in
+ * sequence is complete.  This is implemented by testing RQF_FLUSH_SEQ in
  * req_bio_endio().
  *
  * The above peculiarity requires that each FLUSH/FUA request has only one
@@ -127,17 +127,14 @@ static void blk_flush_restore_request(struct request *rq)
 	rq->bio = rq->biotail;
 
 	/* make @rq a normal request */
-	rq->cmd_flags &= ~REQ_FLUSH_SEQ;
+	rq->rq_flags &= ~RQF_FLUSH_SEQ;
 	rq->end_io = rq->flush.saved_end_io;
 }
 
 static bool blk_flush_queue_rq(struct request *rq, bool add_front)
 {
 	if (rq->q->mq_ops) {
-		struct request_queue *q = rq->q;
-
-		blk_mq_add_to_requeue_list(rq, add_front);
-		blk_mq_kick_requeue_list(q);
+		blk_mq_add_to_requeue_list(rq, add_front, true);
 		return false;
 	} else {
 		if (add_front)
@@ -330,7 +327,8 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq)
 	}
 
 	flush_rq->cmd_type = REQ_TYPE_FS;
-	req_set_op_attrs(flush_rq, REQ_OP_FLUSH, WRITE_FLUSH | REQ_FLUSH_SEQ);
+	flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
+	flush_rq->rq_flags |= RQF_FLUSH_SEQ;
 	flush_rq->rq_disk = first_rq->rq_disk;
 	flush_rq->end_io = flush_end_io;
 
@@ -368,7 +366,7 @@ static void flush_data_end_io(struct request *rq, int error)
 	elv_completed_request(q, rq);
 
 	/* for avoiding double accounting */
-	rq->cmd_flags &= ~REQ_STARTED;
+	rq->rq_flags &= ~RQF_STARTED;
 
 	/*
 	 * After populating an empty queue, kick it to avoid stall.  Read
@@ -425,6 +423,13 @@ void blk_insert_flush(struct request *rq)
 	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
 		rq->cmd_flags &= ~REQ_FUA;
 
+	/*
+	 * REQ_PREFLUSH|REQ_FUA implies REQ_SYNC, so if we clear any
+	 * of those flags, we have to set REQ_SYNC to avoid skewing
+	 * the request accounting.
+	 */
+	rq->cmd_flags |= REQ_SYNC;
+
 	/*
 	 * An empty flush handed down from a stacking driver may
 	 * translate into nothing if the underlying device does not
@@ -449,7 +454,7 @@ void blk_insert_flush(struct request *rq)
 	if ((policy & REQ_FSEQ_DATA) &&
 	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
 		if (q->mq_ops) {
-			blk_mq_insert_request(rq, false, false, true);
+			blk_mq_insert_request(rq, false, true, false);
 		} else
 			list_add_tail(&rq->queuelist, &q->queue_head);
 		return;
@@ -461,7 +466,7 @@ void blk_insert_flush(struct request *rq)
 	 */
 	memset(&rq->flush, 0, sizeof(rq->flush));
 	INIT_LIST_HEAD(&rq->flush.list);
-	rq->cmd_flags |= REQ_FLUSH_SEQ;
+	rq->rq_flags |= RQF_FLUSH_SEQ;
 	rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
 	if (q->mq_ops) {
 		rq->end_io = mq_flush_data_end_io;
@@ -513,7 +518,7 @@ int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
 
 	bio = bio_alloc(gfp_mask, 0);
 	bio->bi_bdev = bdev;
-	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH);
+	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
 	ret = submit_bio_wait(bio);
 
block/blk-lib.c
@@ -29,7 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	struct request_queue *q = bdev_get_queue(bdev);
 	struct bio *bio = *biop;
 	unsigned int granularity;
-	enum req_op op;
+	unsigned int op;
 	int alignment;
 	sector_t bs_mask;
 
@@ -80,7 +80,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 			req_sects = end_sect - sector;
 		}
 
-		bio = next_bio(bio, 1, gfp_mask);
+		bio = next_bio(bio, 0, gfp_mask);
 		bio->bi_iter.bi_sector = sector;
 		bio->bi_bdev = bdev;
 		bio_set_op_attrs(bio, op, 0);
@@ -137,24 +137,24 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 EXPORT_SYMBOL(blkdev_issue_discard);
 
 /**
- * blkdev_issue_write_same - queue a write same operation
+ * __blkdev_issue_write_same - generate number of bios with same page
  * @bdev:	target blockdev
  * @sector:	start sector
  * @nr_sects:	number of sectors to write
  * @gfp_mask:	memory allocation flags (for bio_alloc)
  * @page:	page containing data to write
+ * @biop:	pointer to anchor bio
  *
  * Description:
- *    Issue a write same request for the sectors in question.
+ *  Generate and issue number of bios(REQ_OP_WRITE_SAME) with same page.
  */
-int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
-			    sector_t nr_sects, gfp_t gfp_mask,
-			    struct page *page)
+static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct page *page,
+		struct bio **biop)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
 	unsigned int max_write_same_sectors;
-	struct bio *bio = NULL;
-	int ret = 0;
+	struct bio *bio = *biop;
 	sector_t bs_mask;
 
 	if (!q)
@@ -164,6 +164,9 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
 	if ((sector | nr_sects) & bs_mask)
 		return -EINVAL;
 
+	if (!bdev_write_same(bdev))
+		return -EOPNOTSUPP;
+
 	/* Ensure that max_write_same_sectors doesn't overflow bi_size */
 	max_write_same_sectors = UINT_MAX >> 9;
 
@@ -185,32 +188,112 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
 			bio->bi_iter.bi_size = nr_sects << 9;
 			nr_sects = 0;
 		}
+		cond_resched();
 	}
 
-	if (bio) {
+	*biop = bio;
+	return 0;
+}
+
+/**
+ * blkdev_issue_write_same - queue a write same operation
+ * @bdev:	target blockdev
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to write
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @page:	page containing data
+ *
+ * Description:
+ *    Issue a write same request for the sectors in question.
+ */
+int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
+				sector_t nr_sects, gfp_t gfp_mask,
+				struct page *page)
+{
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+	int ret;
+
+	blk_start_plug(&plug);
+	ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, page,
+			&bio);
+	if (ret == 0 && bio) {
 		ret = submit_bio_wait(bio);
 		bio_put(bio);
 	}
+	blk_finish_plug(&plug);
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_write_same);
 
 /**
- * blkdev_issue_zeroout - generate number of zero filed write bios
+ * __blkdev_issue_write_zeroes - generate number of bios with WRITE ZEROES
  * @bdev:	blockdev to issue
  * @sector:	start sector
  * @nr_sects:	number of sectors to write
  * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @biop:	pointer to anchor bio
  *
  * Description:
- *  Generate and issue number of bios with zerofiled pages.
+ *  Generate and issue number of bios(REQ_OP_WRITE_ZEROES) with zerofiled pages.
  */
+static int __blkdev_issue_write_zeroes(struct block_device *bdev,
+		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+		struct bio **biop)
+{
+	struct bio *bio = *biop;
+	unsigned int max_write_zeroes_sectors;
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (!q)
+		return -ENXIO;
+
+	/* Ensure that max_write_zeroes_sectors doesn't overflow bi_size */
+	max_write_zeroes_sectors = bdev_write_zeroes_sectors(bdev);
+
+	if (max_write_zeroes_sectors == 0)
+		return -EOPNOTSUPP;
+
+	while (nr_sects) {
+		bio = next_bio(bio, 0, gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+		bio->bi_bdev = bdev;
+		bio_set_op_attrs(bio, REQ_OP_WRITE_ZEROES, 0);
+
+		if (nr_sects > max_write_zeroes_sectors) {
+			bio->bi_iter.bi_size = max_write_zeroes_sectors << 9;
+			nr_sects -= max_write_zeroes_sectors;
+			sector += max_write_zeroes_sectors;
+		} else {
+			bio->bi_iter.bi_size = nr_sects << 9;
+			nr_sects = 0;
+		}
+		cond_resched();
+	}
+
+	*biop = bio;
+	return 0;
+}
+
+/**
+ * __blkdev_issue_zeroout - generate number of zero filed write bios
+ * @bdev:	blockdev to issue
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to write
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ * @biop:	pointer to anchor bio
+ * @discard:	discard flag
+ *
+ * Description:
+ *  Generate and issue number of bios with zerofiled pages.
+ */
-static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
-				  sector_t nr_sects, gfp_t gfp_mask)
+int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
+		bool discard)
 {
 	int ret;
-	struct bio *bio = NULL;
+	int bi_size = 0;
+	struct bio *bio = *biop;
 	unsigned int sz;
 	sector_t bs_mask;
 
@@ -218,6 +301,24 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 	if ((sector | nr_sects) & bs_mask)
 		return -EINVAL;
 
+	if (discard) {
+		ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
+				BLKDEV_DISCARD_ZERO, biop);
+		if (ret == 0 || (ret && ret != -EOPNOTSUPP))
+			goto out;
+	}
+
+	ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask,
+			biop);
+	if (ret == 0 || (ret && ret != -EOPNOTSUPP))
+		goto out;
+
+	ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
+			ZERO_PAGE(0), biop);
+	if (ret == 0 || (ret && ret != -EOPNOTSUPP))
+		goto out;
+
+	ret = 0;
 	while (nr_sects != 0) {
 		bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),
 				gfp_mask);
@@ -227,21 +328,20 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 
 		while (nr_sects != 0) {
 			sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects);
-			ret = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0);
-			nr_sects -= ret >> 9;
-			sector += ret >> 9;
-			if (ret < (sz << 9))
+			bi_size = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0);
+			nr_sects -= bi_size >> 9;
+			sector += bi_size >> 9;
+			if (bi_size < (sz << 9))
 				break;
 		}
 		cond_resched();
 	}
 
-	if (bio) {
-		ret = submit_bio_wait(bio);
-		bio_put(bio);
-		return ret;
-	}
-	return 0;
+	*biop = bio;
+out:
+	return ret;
 }
+EXPORT_SYMBOL(__blkdev_issue_zeroout);
 
 /**
  * blkdev_issue_zeroout - zero-fill a block range
@@ -258,26 +358,27 @@ EXPORT_SYMBOL(__blkdev_issue_zeroout);
 *  the discard request fail, if the discard flag is not set, or if
 *  discard_zeroes_data is not supported, this function will resort to
 *  zeroing the blocks manually, thus provisioning (allocating,
- *  anchoring) them. If the block device supports the WRITE SAME command
- *  blkdev_issue_zeroout() will use it to optimize the process of
+ *  anchoring) them. If the block device supports WRITE ZEROES or WRITE SAME
+ *  command(s), blkdev_issue_zeroout() will use it to optimize the process of
 *  clearing the block range. Otherwise the zeroing will be performed
 *  using regular WRITE calls.
 */
-
 int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 			 sector_t nr_sects, gfp_t gfp_mask, bool discard)
 {
-	if (discard) {
-		if (!blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
-				BLKDEV_DISCARD_ZERO))
-			return 0;
-	}
+	int ret;
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+
+	blk_start_plug(&plug);
+	ret = __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
+			&bio, discard);
+	if (ret == 0 && bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+	blk_finish_plug(&plug);
 
-	if (bdev_write_same(bdev) &&
-	    blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
-				    ZERO_PAGE(0)) == 0)
-		return 0;
-
-	return __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask);
+	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_zeroout);
@@ -16,6 +16,8 @@
 int blk_rq_append_bio(struct request *rq, struct bio *bio)
 {
 	if (!rq->bio) {
+		rq->cmd_flags &= REQ_OP_MASK;
+		rq->cmd_flags |= (bio->bi_opf & REQ_OP_MASK);
 		blk_rq_bio_prep(rq->q, rq, bio);
 	} else {
 		if (!ll_back_merge_fn(rq->q, rq, bio))
@@ -138,7 +140,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 	} while (iov_iter_count(&i));
 
 	if (!bio_flagged(bio, BIO_USER_MAPPED))
-		rq->cmd_flags |= REQ_COPY_USER;
+		rq->rq_flags |= RQF_COPY_USER;
 	return 0;
 
 unmap_rq:
@@ -236,7 +238,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
 		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 	if (do_copy)
-		rq->cmd_flags |= REQ_COPY_USER;
+		rq->rq_flags |= RQF_COPY_USER;
 
 	ret = blk_rq_append_bio(rq, bio);
 	if (unlikely(ret)) {
@ -199,6 +199,10 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
|
|||
case REQ_OP_SECURE_ERASE:
|
||||
split = blk_bio_discard_split(q, *bio, bs, &nsegs);
|
||||
break;
|
||||
case REQ_OP_WRITE_ZEROES:
|
||||
split = NULL;
|
||||
nsegs = (*bio)->bi_phys_segments;
|
||||
break;
|
||||
case REQ_OP_WRITE_SAME:
|
||||
split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
|
||||
break;
|
||||
|
@ -237,15 +241,14 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
|
|||
if (!bio)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* This should probably be returning 0, but blk_add_request_payload()
|
||||
* (Christoph!!!!)
|
||||
*/
|
||||
if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
|
||||
return 1;
|
||||
|
||||
if (bio_op(bio) == REQ_OP_WRITE_SAME)
|
||||
switch (bio_op(bio)) {
|
||||
case REQ_OP_DISCARD:
|
||||
case REQ_OP_SECURE_ERASE:
|
||||
case REQ_OP_WRITE_ZEROES:
|
||||
return 0;
|
||||
case REQ_OP_WRITE_SAME:
|
||||
return 1;
|
||||
}
|
||||
|
||||
fbio = bio;
|
||||
cluster = blk_queue_cluster(q);
|
||||
|
@ -402,38 +405,21 @@ new_segment:
|
|||
*bvprv = *bvec;
|
||||
}
|
||||
|
||||
static inline int __blk_bvec_map_sg(struct request_queue *q, struct bio_vec bv,
|
||||
struct scatterlist *sglist, struct scatterlist **sg)
|
||||
{
|
||||
*sg = sglist;
|
||||
sg_set_page(*sg, bv.bv_page, bv.bv_len, bv.bv_offset);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
|
||||
struct scatterlist *sglist,
|
||||
struct scatterlist **sg)
|
||||
{
|
||||
struct bio_vec bvec, bvprv = { NULL };
|
||||
struct bvec_iter iter;
|
||||
int nsegs, cluster;
|
||||
|
||||
nsegs = 0;
|
||||
cluster = blk_queue_cluster(q);
|
||||
|
||||
switch (bio_op(bio)) {
|
||||
case REQ_OP_DISCARD:
|
||||
case REQ_OP_SECURE_ERASE:
|
||||
/*
|
||||
* This is a hack - drivers should be neither modifying the
|
||||
* biovec, nor relying on bi_vcnt - but because of
|
||||
* blk_add_request_payload(), a discard bio may or may not have
|
||||
* a payload we need to set up here (thank you Christoph) and
|
||||
* bi_vcnt is really the only way of telling if we need to.
|
||||
*/
|
||||
if (!bio->bi_vcnt)
|
||||
return 0;
|
||||
/* Fall through */
|
||||
case REQ_OP_WRITE_SAME:
|
||||
*sg = sglist;
|
||||
bvec = bio_iovec(bio);
|
||||
sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
|
||||
return 1;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
int cluster = blk_queue_cluster(q), nsegs = 0;
|
||||
|
||||
for_each_bio(bio)
|
||||
bio_for_each_segment(bvec, bio, iter)
|
||||
|
@ -453,10 +439,14 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
|
|||
struct scatterlist *sg = NULL;
|
||||
int nsegs = 0;
|
||||
|
||||
if (rq->bio)
|
||||
if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
|
||||
nsegs = __blk_bvec_map_sg(q, rq->special_vec, sglist, &sg);
|
||||
else if (rq->bio && bio_op(rq->bio) == REQ_OP_WRITE_SAME)
|
||||
nsegs = __blk_bvec_map_sg(q, bio_iovec(rq->bio), sglist, &sg);
|
||||
else if (rq->bio)
|
||||
nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
|
||||
|
||||
if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
|
||||
if (unlikely(rq->rq_flags & RQF_COPY_USER) &&
|
||||
(blk_rq_bytes(rq) & q->dma_pad_mask)) {
|
||||
unsigned int pad_len =
|
||||
(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
|
||||
|
@@ -486,12 +476,19 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 	 * Something must have been wrong if the figured number of
 	 * segment is bigger than number of req's physical segments
 	 */
-	WARN_ON(nsegs > rq->nr_phys_segments);
+	WARN_ON(nsegs > blk_rq_nr_phys_segments(rq));

 	return nsegs;
 }
 EXPORT_SYMBOL(blk_rq_map_sg);

+static void req_set_nomerge(struct request_queue *q, struct request *req)
+{
+	req->cmd_flags |= REQ_NOMERGE;
+	if (req == q->last_merge)
+		q->last_merge = NULL;
+}
+
 static inline int ll_new_hw_segment(struct request_queue *q,
 				    struct request *req,
 				    struct bio *bio)

@@ -512,9 +509,7 @@ static inline int ll_new_hw_segment(struct request_queue *q,
 	return 1;

 no_merge:
-	req->cmd_flags |= REQ_NOMERGE;
-	if (req == q->last_merge)
-		q->last_merge = NULL;
+	req_set_nomerge(q, req);
 	return 0;
 }

@@ -528,9 +523,7 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
 		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
-		req->cmd_flags |= REQ_NOMERGE;
-		if (req == q->last_merge)
-			q->last_merge = NULL;
+		req_set_nomerge(q, req);
 		return 0;
 	}
 	if (!bio_flagged(req->biotail, BIO_SEG_VALID))

@@ -552,9 +545,7 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
 		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
-		req->cmd_flags |= REQ_NOMERGE;
-		if (req == q->last_merge)
-			q->last_merge = NULL;
+		req_set_nomerge(q, req);
 		return 0;
 	}
 	if (!bio_flagged(bio, BIO_SEG_VALID))

@@ -634,7 +625,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
 	unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
 	struct bio *bio;

-	if (rq->cmd_flags & REQ_MIXED_MERGE)
+	if (rq->rq_flags & RQF_MIXED_MERGE)
 		return;

 	/*

@@ -647,7 +638,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
 		    (bio->bi_opf & REQ_FAILFAST_MASK) != ff);
 		bio->bi_opf |= ff;
 	}
-	rq->cmd_flags |= REQ_MIXED_MERGE;
+	rq->rq_flags |= RQF_MIXED_MERGE;
 }

 static void blk_account_io_merge(struct request *req)

@@ -709,7 +700,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
 	 * makes sure that all involved bios have mixable attributes
 	 * set properly.
 	 */
-	if ((req->cmd_flags | next->cmd_flags) & REQ_MIXED_MERGE ||
+	if (((req->rq_flags | next->rq_flags) & RQF_MIXED_MERGE) ||
 	    (req->cmd_flags & REQ_FAILFAST_MASK) !=
 	    (next->cmd_flags & REQ_FAILFAST_MASK)) {
 		blk_rq_set_mixed_merge(req);
@@ -259,6 +259,47 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
 	return ret;
 }

+static void blk_mq_stat_clear(struct blk_mq_hw_ctx *hctx)
+{
+	struct blk_mq_ctx *ctx;
+	unsigned int i;
+
+	hctx_for_each_ctx(hctx, ctx, i) {
+		blk_stat_init(&ctx->stat[BLK_STAT_READ]);
+		blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
+	}
+}
+
+static ssize_t blk_mq_hw_sysfs_stat_store(struct blk_mq_hw_ctx *hctx,
+					  const char *page, size_t count)
+{
+	blk_mq_stat_clear(hctx);
+	return count;
+}
+
+static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
+{
+	return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
+			pre, (long long) stat->nr_samples,
+			(long long) stat->mean, (long long) stat->min,
+			(long long) stat->max);
+}
+
+static ssize_t blk_mq_hw_sysfs_stat_show(struct blk_mq_hw_ctx *hctx, char *page)
+{
+	struct blk_rq_stat stat[2];
+	ssize_t ret;
+
+	blk_stat_init(&stat[BLK_STAT_READ]);
+	blk_stat_init(&stat[BLK_STAT_WRITE]);
+
+	blk_hctx_stat_get(hctx, stat);
+
+	ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
+	ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
+	return ret;
+}
+
 static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_dispatched = {
 	.attr = {.name = "dispatched", .mode = S_IRUGO },
 	.show = blk_mq_sysfs_dispatched_show,

@@ -317,6 +358,11 @@ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_poll = {
 	.show = blk_mq_hw_sysfs_poll_show,
 	.store = blk_mq_hw_sysfs_poll_store,
 };
+static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_stat = {
+	.attr = {.name = "stats", .mode = S_IRUGO | S_IWUSR },
+	.show = blk_mq_hw_sysfs_stat_show,
+	.store = blk_mq_hw_sysfs_stat_store,
+};

 static struct attribute *default_hw_ctx_attrs[] = {
 	&blk_mq_hw_sysfs_queued.attr,

@@ -327,6 +373,7 @@ static struct attribute *default_hw_ctx_attrs[] = {
 	&blk_mq_hw_sysfs_cpus.attr,
 	&blk_mq_hw_sysfs_active.attr,
 	&blk_mq_hw_sysfs_poll.attr,
+	&blk_mq_hw_sysfs_stat.attr,
 	NULL,
 };
block/blk-mq.c (611 changed lines)
@@ -30,6 +30,8 @@
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-tag.h"
+#include "blk-stat.h"
+#include "blk-wbt.h"

 static DEFINE_MUTEX(all_q_mutex);
 static LIST_HEAD(all_q_list);

@@ -115,6 +117,33 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);

+/**
+ * blk_mq_quiesce_queue() - wait until all ongoing queue_rq calls have finished
+ * @q: request queue.
+ *
+ * Note: this function does not prevent that the struct request end_io()
+ * callback function is invoked. Additionally, it is not prevented that
+ * new queue_rq() calls occur unless the queue has been stopped first.
+ */
+void blk_mq_quiesce_queue(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	unsigned int i;
+	bool rcu = false;
+
+	blk_mq_stop_hw_queues(q);
+
+	queue_for_each_hw_ctx(q, hctx, i) {
+		if (hctx->flags & BLK_MQ_F_BLOCKING)
+			synchronize_srcu(&hctx->queue_rq_srcu);
+		else
+			rcu = true;
+	}
+	if (rcu)
+		synchronize_rcu();
+}
+EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue);
+
 void blk_mq_wake_waiters(struct request_queue *q)
 {
 	struct blk_mq_hw_ctx *hctx;

@@ -139,17 +168,15 @@ bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
 EXPORT_SYMBOL(blk_mq_can_queue);

 static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
-			       struct request *rq, int op,
-			       unsigned int op_flags)
+			       struct request *rq, unsigned int op)
 {
-	if (blk_queue_io_stat(q))
-		op_flags |= REQ_IO_STAT;
-
 	INIT_LIST_HEAD(&rq->queuelist);
 	/* csd/requeue_work/fifo_time is initialized before use */
 	rq->q = q;
 	rq->mq_ctx = ctx;
-	req_set_op_attrs(rq, op, op_flags);
+	rq->cmd_flags = op;
+	if (blk_queue_io_stat(q))
+		rq->rq_flags |= RQF_IO_STAT;
 	/* do not touch atomic flags, it needs atomic ops against the timer */
 	rq->cpu = -1;
 	INIT_HLIST_NODE(&rq->hash);

@@ -184,11 +211,11 @@ static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
 	rq->end_io_data = NULL;
 	rq->next_rq = NULL;

-	ctx->rq_dispatched[rw_is_sync(op, op_flags)]++;
+	ctx->rq_dispatched[op_is_sync(op)]++;
 }
 static struct request *
-__blk_mq_alloc_request(struct blk_mq_alloc_data *data, int op, int op_flags)
+__blk_mq_alloc_request(struct blk_mq_alloc_data *data, unsigned int op)
 {
 	struct request *rq;
 	unsigned int tag;

@@ -198,12 +225,12 @@ __blk_mq_alloc_request(struct blk_mq_alloc_data *data, int op, int op_flags)
 		rq = data->hctx->tags->rqs[tag];

 		if (blk_mq_tag_busy(data->hctx)) {
-			rq->cmd_flags = REQ_MQ_INFLIGHT;
+			rq->rq_flags = RQF_MQ_INFLIGHT;
 			atomic_inc(&data->hctx->nr_active);
 		}

 		rq->tag = tag;
-		blk_mq_rq_ctx_init(data->q, data->ctx, rq, op, op_flags);
+		blk_mq_rq_ctx_init(data->q, data->ctx, rq, op);
 		return rq;
 	}
@@ -226,7 +253,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, ctx->cpu);
 	blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
-	rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
+	rq = __blk_mq_alloc_request(&alloc_data, rw);
 	blk_mq_put_ctx(ctx);

 	if (!rq) {

@@ -278,7 +305,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int rw,
 	ctx = __blk_mq_get_ctx(q, cpumask_first(hctx->cpumask));

 	blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
-	rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
+	rq = __blk_mq_alloc_request(&alloc_data, rw);
 	if (!rq) {
 		ret = -EWOULDBLOCK;
 		goto out_queue_exit;

@@ -298,11 +325,14 @@ static void __blk_mq_free_request(struct blk_mq_hw_ctx *hctx,
 	const int tag = rq->tag;
 	struct request_queue *q = rq->q;

-	if (rq->cmd_flags & REQ_MQ_INFLIGHT)
+	if (rq->rq_flags & RQF_MQ_INFLIGHT)
 		atomic_dec(&hctx->nr_active);
-	rq->cmd_flags = 0;
+
+	wbt_done(q->rq_wb, &rq->issue_stat);
+	rq->rq_flags = 0;

 	clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
+	clear_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
 	blk_mq_put_tag(hctx, ctx, tag);
 	blk_queue_exit(q);
 }

@@ -328,6 +358,7 @@ inline void __blk_mq_end_request(struct request *rq, int error)
 	blk_account_io_done(rq);

 	if (rq->end_io) {
+		wbt_done(rq->q->rq_wb, &rq->issue_stat);
 		rq->end_io(rq, error);
 	} else {
 		if (unlikely(blk_bidi_rq(rq)))

@@ -378,10 +409,27 @@ static void blk_mq_ipi_complete_request(struct request *rq)
 	put_cpu();
 }

+static void blk_mq_stat_add(struct request *rq)
+{
+	if (rq->rq_flags & RQF_STATS) {
+		/*
+		 * We could rq->mq_ctx here, but there's less of a risk
+		 * of races if we have the completion event add the stats
+		 * to the local software queue.
+		 */
+		struct blk_mq_ctx *ctx;
+
+		ctx = __blk_mq_get_ctx(rq->q, raw_smp_processor_id());
+		blk_stat_add(&ctx->stat[rq_data_dir(rq)], rq);
+	}
+}
+
 static void __blk_mq_complete_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;

+	blk_mq_stat_add(rq);
+
 	if (!q->softirq_done_fn)
 		blk_mq_end_request(rq, rq->errors);
 	else

@@ -425,6 +473,12 @@ void blk_mq_start_request(struct request *rq)
 	if (unlikely(blk_bidi_rq(rq)))
 		rq->next_rq->resid_len = blk_rq_bytes(rq->next_rq);

+	if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
+		blk_stat_set_issue_time(&rq->issue_stat);
+		rq->rq_flags |= RQF_STATS;
+		wbt_issue(q->rq_wb, &rq->issue_stat);
+	}
+
 	blk_add_timer(rq);

 	/*

@@ -460,6 +514,7 @@ static void __blk_mq_requeue_request(struct request *rq)
 	struct request_queue *q = rq->q;

 	trace_block_rq_requeue(q, rq);
+	wbt_requeue(q->rq_wb, &rq->issue_stat);

 	if (test_and_clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
 		if (q->dma_drain_size && blk_rq_bytes(rq))

@@ -467,12 +522,12 @@ static void __blk_mq_requeue_request(struct request *rq)
 	}
 }

-void blk_mq_requeue_request(struct request *rq)
+void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 {
 	__blk_mq_requeue_request(rq);

 	BUG_ON(blk_queued_rq(rq));
-	blk_mq_add_to_requeue_list(rq, true);
+	blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
 }
 EXPORT_SYMBOL(blk_mq_requeue_request);

@@ -489,10 +544,10 @@ static void blk_mq_requeue_work(struct work_struct *work)
 	spin_unlock_irqrestore(&q->requeue_lock, flags);

 	list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
-		if (!(rq->cmd_flags & REQ_SOFTBARRIER))
+		if (!(rq->rq_flags & RQF_SOFTBARRIER))
 			continue;

-		rq->cmd_flags &= ~REQ_SOFTBARRIER;
+		rq->rq_flags &= ~RQF_SOFTBARRIER;
 		list_del_init(&rq->queuelist);
 		blk_mq_insert_request(rq, true, false, false);
 	}

@@ -503,14 +558,11 @@ static void blk_mq_requeue_work(struct work_struct *work)
 		blk_mq_insert_request(rq, false, false, false);
 	}

-	/*
-	 * Use the start variant of queue running here, so that running
-	 * the requeue work will kick stopped queues.
-	 */
-	blk_mq_start_hw_queues(q);
+	blk_mq_run_hw_queues(q, false);
 }

-void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
+void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
+				bool kick_requeue_list)
 {
 	struct request_queue *q = rq->q;
 	unsigned long flags;

@@ -519,25 +571,22 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
 	 * We abuse this flag that is otherwise used by the I/O scheduler to
 	 * request head insertation from the workqueue.
 	 */
-	BUG_ON(rq->cmd_flags & REQ_SOFTBARRIER);
+	BUG_ON(rq->rq_flags & RQF_SOFTBARRIER);

 	spin_lock_irqsave(&q->requeue_lock, flags);
 	if (at_head) {
-		rq->cmd_flags |= REQ_SOFTBARRIER;
+		rq->rq_flags |= RQF_SOFTBARRIER;
 		list_add(&rq->queuelist, &q->requeue_list);
 	} else {
 		list_add_tail(&rq->queuelist, &q->requeue_list);
 	}
 	spin_unlock_irqrestore(&q->requeue_lock, flags);
+
+	if (kick_requeue_list)
+		blk_mq_kick_requeue_list(q);
 }
 EXPORT_SYMBOL(blk_mq_add_to_requeue_list);
 void blk_mq_cancel_requeue_work(struct request_queue *q)
 {
 	cancel_delayed_work_sync(&q->requeue_work);
 }
 EXPORT_SYMBOL_GPL(blk_mq_cancel_requeue_work);

 void blk_mq_kick_requeue_list(struct request_queue *q)
 {
 	kblockd_schedule_delayed_work(&q->requeue_work, 0);

@@ -772,27 +821,102 @@ static inline unsigned int queued_to_index(unsigned int queued)
 	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
 }
+bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list)
+{
+	struct request_queue *q = hctx->queue;
+	struct request *rq;
+	LIST_HEAD(driver_list);
+	struct list_head *dptr;
+	int queued, ret = BLK_MQ_RQ_QUEUE_OK;
+
+	/*
+	 * Start off with dptr being NULL, so we start the first request
+	 * immediately, even if we have more pending.
+	 */
+	dptr = NULL;
+
+	/*
+	 * Now process all the entries, sending them to the driver.
+	 */
+	queued = 0;
+	while (!list_empty(list)) {
+		struct blk_mq_queue_data bd;
+
+		rq = list_first_entry(list, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+
+		bd.rq = rq;
+		bd.list = dptr;
+		bd.last = list_empty(list);
+
+		ret = q->mq_ops->queue_rq(hctx, &bd);
+		switch (ret) {
+		case BLK_MQ_RQ_QUEUE_OK:
+			queued++;
+			break;
+		case BLK_MQ_RQ_QUEUE_BUSY:
+			list_add(&rq->queuelist, list);
+			__blk_mq_requeue_request(rq);
+			break;
+		default:
+			pr_err("blk-mq: bad return on queue: %d\n", ret);
+		case BLK_MQ_RQ_QUEUE_ERROR:
+			rq->errors = -EIO;
+			blk_mq_end_request(rq, rq->errors);
+			break;
+		}
+
+		if (ret == BLK_MQ_RQ_QUEUE_BUSY)
+			break;
+
+		/*
+		 * We've done the first request. If we have more than 1
+		 * left in the list, set dptr to defer issue.
+		 */
+		if (!dptr && list->next != list->prev)
+			dptr = &driver_list;
+	}
+
+	hctx->dispatched[queued_to_index(queued)]++;
+
+	/*
+	 * Any items that need requeuing? Stuff them into hctx->dispatch,
+	 * that is where we will continue on next queue run.
+	 */
+	if (!list_empty(list)) {
+		spin_lock(&hctx->lock);
+		list_splice(list, &hctx->dispatch);
+		spin_unlock(&hctx->lock);
+
+		/*
+		 * the queue is expected stopped with BLK_MQ_RQ_QUEUE_BUSY, but
+		 * it's possible the queue is stopped and restarted again
+		 * before this. Queue restart will dispatch requests. And since
+		 * requests in rq_list aren't added into hctx->dispatch yet,
+		 * the requests in rq_list might get lost.
+		 *
+		 * blk_mq_run_hw_queue() already checks the STOPPED bit
+		 **/
+		blk_mq_run_hw_queue(hctx, true);
+	}
+
+	return ret != BLK_MQ_RQ_QUEUE_BUSY;
+}
 /*
  * Run this hardware queue, pulling any software queues mapped to it in.
  * Note that this function currently has various problems around ordering
  * of IO. In particular, we'd like FIFO behaviour on handling existing
  * items on the hctx->dispatch list. Ignore that for now.
  */
-static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
+static void blk_mq_process_rq_list(struct blk_mq_hw_ctx *hctx)
 {
-	struct request_queue *q = hctx->queue;
-	struct request *rq;
 	LIST_HEAD(rq_list);
-	LIST_HEAD(driver_list);
-	struct list_head *dptr;
-	int queued;

-	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
+	if (unlikely(blk_mq_hctx_stopped(hctx)))
 		return;

-	WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
-		cpu_online(hctx->next_cpu));
-
 	hctx->run++;

 	/*

@@ -811,75 +935,24 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 		spin_unlock(&hctx->lock);
 	}

-	/*
-	 * Start off with dptr being NULL, so we start the first request
-	 * immediately, even if we have more pending.
-	 */
-	dptr = NULL;
-
-	/*
-	 * Now process all the entries, sending them to the driver.
-	 */
-	queued = 0;
-	while (!list_empty(&rq_list)) {
-		struct blk_mq_queue_data bd;
-		int ret;
-
-		rq = list_first_entry(&rq_list, struct request, queuelist);
-		list_del_init(&rq->queuelist);
-
-		bd.rq = rq;
-		bd.list = dptr;
-		bd.last = list_empty(&rq_list);
-
-		ret = q->mq_ops->queue_rq(hctx, &bd);
-		switch (ret) {
-		case BLK_MQ_RQ_QUEUE_OK:
-			queued++;
-			break;
-		case BLK_MQ_RQ_QUEUE_BUSY:
-			list_add(&rq->queuelist, &rq_list);
-			__blk_mq_requeue_request(rq);
-			break;
-		default:
-			pr_err("blk-mq: bad return on queue: %d\n", ret);
-		case BLK_MQ_RQ_QUEUE_ERROR:
-			rq->errors = -EIO;
-			blk_mq_end_request(rq, rq->errors);
-			break;
-		}
-
-		if (ret == BLK_MQ_RQ_QUEUE_BUSY)
-			break;
-
-		/*
-		 * We've done the first request. If we have more than 1
-		 * left in the list, set dptr to defer issue.
-		 */
-		if (!dptr && rq_list.next != rq_list.prev)
-			dptr = &driver_list;
-	}
-
-	hctx->dispatched[queued_to_index(queued)]++;
-
-	/*
-	 * Any items that need requeuing? Stuff them into hctx->dispatch,
-	 * that is where we will continue on next queue run.
-	 */
-	if (!list_empty(&rq_list)) {
-		spin_lock(&hctx->lock);
-		list_splice(&rq_list, &hctx->dispatch);
-		spin_unlock(&hctx->lock);
-		/*
-		 * the queue is expected stopped with BLK_MQ_RQ_QUEUE_BUSY, but
-		 * it's possible the queue is stopped and restarted again
-		 * before this. Queue restart will dispatch requests. And since
-		 * requests in rq_list aren't added into hctx->dispatch yet,
-		 * the requests in rq_list might get lost.
-		 *
-		 * blk_mq_run_hw_queue() already checks the STOPPED bit
-		 **/
-		blk_mq_run_hw_queue(hctx, true);
-	}
+	blk_mq_dispatch_rq_list(hctx, &rq_list);
+}
+
+static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
+{
+	int srcu_idx;
+
+	WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
+		cpu_online(hctx->next_cpu));
+
+	if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
+		rcu_read_lock();
+		blk_mq_process_rq_list(hctx);
+		rcu_read_unlock();
+	} else {
+		srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
+		blk_mq_process_rq_list(hctx);
+		srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
+	}
 }
@@ -895,7 +968,7 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 		return WORK_CPU_UNBOUND;

 	if (--hctx->next_cpu_batch <= 0) {
-		int cpu = hctx->next_cpu, next_cpu;
+		int next_cpu;

 		next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
 		if (next_cpu >= nr_cpu_ids)

@@ -903,8 +976,6 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)

 		hctx->next_cpu = next_cpu;
 		hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
-
-		return cpu;
 	}

 	return hctx->next_cpu;

@@ -912,8 +983,8 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)

 void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 {
-	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state) ||
-	    !blk_mq_hw_queue_mapped(hctx)))
+	if (unlikely(blk_mq_hctx_stopped(hctx) ||
+	    !blk_mq_hw_queue_mapped(hctx)))
 		return;

 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {

@@ -938,7 +1009,7 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if ((!blk_mq_hctx_has_pending(hctx) &&
 		    list_empty_careful(&hctx->dispatch)) ||
-		    test_bit(BLK_MQ_S_STOPPED, &hctx->state))
+		    blk_mq_hctx_stopped(hctx))
 			continue;

 		blk_mq_run_hw_queue(hctx, async);

@@ -946,6 +1017,26 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queues);

+/**
+ * blk_mq_queue_stopped() - check whether one or more hctxs have been stopped
+ * @q: request queue.
+ *
+ * The caller is responsible for serializing this function against
+ * blk_mq_{start,stop}_hw_queue().
+ */
+bool blk_mq_queue_stopped(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		if (blk_mq_hctx_stopped(hctx))
+			return true;
+
+	return false;
+}
+EXPORT_SYMBOL(blk_mq_queue_stopped);
+
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
 {
 	cancel_work(&hctx->run_work);

@@ -982,18 +1073,23 @@ void blk_mq_start_hw_queues(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_mq_start_hw_queues);

+void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+{
+	if (!blk_mq_hctx_stopped(hctx))
+		return;
+
+	clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
+	blk_mq_run_hw_queue(hctx, async);
+}
+EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
+
 void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async)
 {
 	struct blk_mq_hw_ctx *hctx;
 	int i;

-	queue_for_each_hw_ctx(q, hctx, i) {
-		if (!test_bit(BLK_MQ_S_STOPPED, &hctx->state))
-			continue;
-
-		clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
-		blk_mq_run_hw_queue(hctx, async);
-	}
+	queue_for_each_hw_ctx(q, hctx, i)
+		blk_mq_start_stopped_hw_queue(hctx, async);
 }
 EXPORT_SYMBOL(blk_mq_start_stopped_hw_queues);

@@ -1155,7 +1251,7 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
 {
 	init_request_from_bio(rq, bio);

-	blk_account_io_start(rq, 1);
+	blk_account_io_start(rq, true);
 }

 static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)

@@ -1190,40 +1286,27 @@ insert_rq:
 	}
 }

-struct blk_map_ctx {
-	struct blk_mq_hw_ctx *hctx;
-	struct blk_mq_ctx *ctx;
-};
-
 static struct request *blk_mq_map_request(struct request_queue *q,
 					  struct bio *bio,
-					  struct blk_map_ctx *data)
+					  struct blk_mq_alloc_data *data)
 {
 	struct blk_mq_hw_ctx *hctx;
 	struct blk_mq_ctx *ctx;
 	struct request *rq;
-	int op = bio_data_dir(bio);
-	int op_flags = 0;
-	struct blk_mq_alloc_data alloc_data;

 	blk_queue_enter_live(q);
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, ctx->cpu);

-	if (rw_is_sync(bio_op(bio), bio->bi_opf))
-		op_flags |= REQ_SYNC;
-
-	trace_block_getrq(q, bio, op);
-	blk_mq_set_alloc_data(&alloc_data, q, 0, ctx, hctx);
-	rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
+	trace_block_getrq(q, bio, bio->bi_opf);
+	blk_mq_set_alloc_data(data, q, 0, ctx, hctx);
+	rq = __blk_mq_alloc_request(data, bio->bi_opf);

-	data->hctx = alloc_data.hctx;
-	data->ctx = alloc_data.ctx;
 	data->hctx->queued++;
 	return rq;
 }

-static int blk_mq_direct_issue_request(struct request *rq, blk_qc_t *cookie)
+static void blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie)
 {
 	int ret;
 	struct request_queue *q = rq->q;

@@ -1235,6 +1318,9 @@ static void blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie)
 	};
 	blk_qc_t new_cookie = blk_tag_to_qc_t(rq->tag, hctx->queue_num);

+	if (blk_mq_hctx_stopped(hctx))
+		goto insert;
+
 	/*
 	 * For OK queue, we are done. For error, kill it. Any other
 	 * error (busy), just add it to our list as we previously

@@ -1243,7 +1329,7 @@ static void blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie)
 	ret = q->mq_ops->queue_rq(hctx, &bd);
 	if (ret == BLK_MQ_RQ_QUEUE_OK) {
 		*cookie = new_cookie;
-		return 0;
+		return;
 	}

 	__blk_mq_requeue_request(rq);

@@ -1252,10 +1338,11 @@ static void blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie)
 		*cookie = BLK_QC_T_NONE;
 		rq->errors = -EIO;
 		blk_mq_end_request(rq, rq->errors);
-		return 0;
+		return;
 	}

-	return -1;
+insert:
+	blk_mq_insert_request(rq, false, true, true);
 }

@@ -1265,14 +1352,15 @@ static void blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie)
  */
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
-	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
+	const int is_sync = op_is_sync(bio->bi_opf);
 	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
-	struct blk_map_ctx data;
+	struct blk_mq_alloc_data data;
 	struct request *rq;
-	unsigned int request_count = 0;
+	unsigned int request_count = 0, srcu_idx;
 	struct blk_plug *plug;
 	struct request *same_queue_rq = NULL;
 	blk_qc_t cookie;
+	unsigned int wb_acct;

 	blk_queue_bounce(q, &bio);

@@ -1287,9 +1375,15 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	    blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq))
 		return BLK_QC_T_NONE;

+	wb_acct = wbt_wait(q->rq_wb, bio, NULL);
+
 	rq = blk_mq_map_request(q, bio, &data);
-	if (unlikely(!rq))
+	if (unlikely(!rq)) {
+		__wbt_done(q->rq_wb, wb_acct);
 		return BLK_QC_T_NONE;
+	}
+
+	wbt_track(&rq->issue_stat, wb_acct);

 	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

@@ -1312,7 +1406,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	blk_mq_bio_to_request(rq, bio);

 	/*
-	 * We do limited pluging. If the bio can be merged, do that.
+	 * We do limited plugging. If the bio can be merged, do that.
 	 * Otherwise the existing request in the plug list will be
 	 * issued. So the plug list will have one request at most
 	 */

@@ -1332,9 +1426,16 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		blk_mq_put_ctx(data.ctx);
 		if (!old_rq)
 			goto done;
-		if (!blk_mq_direct_issue_request(old_rq, &cookie))
-			goto done;
-		blk_mq_insert_request(old_rq, false, true, true);
+
+		if (!(data.hctx->flags & BLK_MQ_F_BLOCKING)) {
+			rcu_read_lock();
+			blk_mq_try_issue_directly(old_rq, &cookie);
+			rcu_read_unlock();
+		} else {
+			srcu_idx = srcu_read_lock(&data.hctx->queue_rq_srcu);
+			blk_mq_try_issue_directly(old_rq, &cookie);
+			srcu_read_unlock(&data.hctx->queue_rq_srcu, srcu_idx);
+		}
 		goto done;
 	}

@@ -1359,13 +1460,14 @@ done:
  */
 static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 {
-	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_opf);
+	const int is_sync = op_is_sync(bio->bi_opf);
 	const int is_flush_fua = bio->bi_opf & (REQ_PREFLUSH | REQ_FUA);
 	struct blk_plug *plug;
 	unsigned int request_count = 0;
-	struct blk_map_ctx data;
+	struct blk_mq_alloc_data data;
 	struct request *rq;
 	blk_qc_t cookie;
+	unsigned int wb_acct;

 	blk_queue_bounce(q, &bio);

@@ -1382,9 +1484,15 @@ static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 	} else
 		request_count = blk_plug_queued_count(q);

+	wb_acct = wbt_wait(q->rq_wb, bio, NULL);
+
 	rq = blk_mq_map_request(q, bio, &data);
-	if (unlikely(!rq))
+	if (unlikely(!rq)) {
+		__wbt_done(q->rq_wb, wb_acct);
 		return BLK_QC_T_NONE;
+	}
+
+	wbt_track(&rq->issue_stat, wb_acct);

 	cookie = blk_tag_to_qc_t(rq->tag, data.hctx->queue_num);

@@ -1401,13 +1509,25 @@ static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 	 */
 	plug = current->plug;
 	if (plug) {
+		struct request *last = NULL;
+
 		blk_mq_bio_to_request(rq, bio);
+
+		/*
+		 * @request_count may become stale because of schedule
+		 * out, so check the list again.
+		 */
+		if (list_empty(&plug->mq_list))
+			request_count = 0;
 		if (!request_count)
 			trace_block_plug(q);
+		else
+			last = list_entry_rq(plug->mq_list.prev);
+
+		blk_mq_put_ctx(data.ctx);

-		if (request_count >= BLK_MAX_REQUEST_COUNT) {
+		if (request_count >= BLK_MAX_REQUEST_COUNT || (last &&
+		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
 			blk_flush_plug_list(plug, false);
 			trace_block_plug(q);
 		}
@@ -1613,6 +1733,9 @@ static void blk_mq_exit_hctx(struct request_queue *q,
 	if (set->ops->exit_hctx)
 		set->ops->exit_hctx(hctx, hctx_idx);

+	if (hctx->flags & BLK_MQ_F_BLOCKING)
+		cleanup_srcu_struct(&hctx->queue_rq_srcu);
+
 	blk_mq_remove_cpuhp(hctx);
 	blk_free_flush_queue(hctx->fq);
 	sbitmap_free(&hctx->ctx_map);

@@ -1693,6 +1816,9 @@ static int blk_mq_init_hctx(struct request_queue *q,
 				   flush_start_tag + hctx_idx, node))
 		goto free_fq;

+	if (hctx->flags & BLK_MQ_F_BLOCKING)
+		init_srcu_struct(&hctx->queue_rq_srcu);
+
 	return 0;

 free_fq:

@@ -1723,6 +1849,8 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
 		spin_lock_init(&__ctx->lock);
 		INIT_LIST_HEAD(&__ctx->rq_list);
 		__ctx->queue = q;
+		blk_stat_init(&__ctx->stat[BLK_STAT_READ]);
+		blk_stat_init(&__ctx->stat[BLK_STAT_WRITE]);

 		/* If the cpu isn't online, the cpu is mapped to first hctx */
 		if (!cpu_online(i))

@@ -2018,6 +2146,11 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	 */
 	q->nr_requests = set->queue_depth;

+	/*
+	 * Default to classic polling
+	 */
+	q->poll_nsec = -1;
+
 	if (set->ops->complete)
 		blk_queue_softirq_done(q, set->ops->complete);

@@ -2053,6 +2186,8 @@ void blk_mq_free_queue(struct request_queue *q)
 	list_del_init(&q->all_q_node);
 	mutex_unlock(&all_q_mutex);

+	wbt_exit(q);
+
 	blk_mq_del_queue_tag_set(q);

 	blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);

@@ -2099,16 +2234,9 @@ static void blk_mq_queue_reinit_work(void)
 	 */
 	list_for_each_entry(q, &all_q_list, all_q_node)
 		blk_mq_freeze_queue_start(q);
-	list_for_each_entry(q, &all_q_list, all_q_node) {
+	list_for_each_entry(q, &all_q_list, all_q_node)
 		blk_mq_freeze_queue_wait(q);

-		/*
-		 * timeout handler can't touch hw queue during the
-		 * reinitialization
-		 */
-		del_timer_sync(&q->timeout);
-	}
-
 	list_for_each_entry(q, &all_q_list, all_q_node)
 		blk_mq_queue_reinit(q, &cpuhp_online_new);

@@ -2353,6 +2481,165 @@ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
 }
 EXPORT_SYMBOL_GPL(blk_mq_update_nr_hw_queues);

+static unsigned long blk_mq_poll_nsecs(struct request_queue *q,
+				       struct blk_mq_hw_ctx *hctx,
+				       struct request *rq)
+{
+	struct blk_rq_stat stat[2];
|
||||
unsigned long ret = 0;
|
||||
|
||||
/*
|
||||
* If stats collection isn't on, don't sleep but turn it on for
|
||||
* future users
|
||||
*/
|
||||
if (!blk_stat_enable(q))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* We don't have to do this once per IO, should optimize this
|
||||
* to just use the current window of stats until it changes
|
||||
*/
|
||||
memset(&stat, 0, sizeof(stat));
|
||||
blk_hctx_stat_get(hctx, stat);
|
||||
|
||||
/*
|
||||
* As an optimistic guess, use half of the mean service time
|
||||
* for this type of request. We can (and should) make this smarter.
|
||||
* For instance, if the completion latencies are tight, we can
|
||||
* get closer than just half the mean. This is especially
|
||||
* important on devices where the completion latencies are longer
|
||||
* than ~10 usec.
|
||||
*/
|
||||
if (req_op(rq) == REQ_OP_READ && stat[BLK_STAT_READ].nr_samples)
|
||||
ret = (stat[BLK_STAT_READ].mean + 1) / 2;
|
||||
else if (req_op(rq) == REQ_OP_WRITE && stat[BLK_STAT_WRITE].nr_samples)
|
||||
ret = (stat[BLK_STAT_WRITE].mean + 1) / 2;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
|
||||
struct blk_mq_hw_ctx *hctx,
|
||||
struct request *rq)
|
||||
{
|
||||
struct hrtimer_sleeper hs;
|
||||
enum hrtimer_mode mode;
|
||||
unsigned int nsecs;
|
||||
ktime_t kt;
|
||||
|
||||
if (test_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags))
|
||||
return false;
|
||||
|
||||
/*
|
||||
* poll_nsec can be:
|
||||
*
|
||||
* -1: don't ever hybrid sleep
|
||||
* 0: use half of prev avg
|
||||
* >0: use this specific value
|
||||
*/
|
||||
if (q->poll_nsec == -1)
|
||||
return false;
|
||||
else if (q->poll_nsec > 0)
|
||||
nsecs = q->poll_nsec;
|
||||
else
|
||||
nsecs = blk_mq_poll_nsecs(q, hctx, rq);
|
||||
|
||||
if (!nsecs)
|
||||
return false;
|
||||
|
||||
set_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
|
||||
|
||||
/*
|
||||
* This will be replaced with the stats tracking code, using
|
||||
* 'avg_completion_time / 2' as the pre-sleep target.
|
||||
*/
|
||||
kt = ktime_set(0, nsecs);
|
||||
|
||||
mode = HRTIMER_MODE_REL;
|
||||
hrtimer_init_on_stack(&hs.timer, CLOCK_MONOTONIC, mode);
|
||||
hrtimer_set_expires(&hs.timer, kt);
|
||||
|
||||
hrtimer_init_sleeper(&hs, current);
|
||||
do {
|
||||
if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
|
||||
break;
|
||||
set_current_state(TASK_UNINTERRUPTIBLE);
|
||||
hrtimer_start_expires(&hs.timer, mode);
|
||||
if (hs.task)
|
||||
io_schedule();
|
||||
hrtimer_cancel(&hs.timer);
|
||||
mode = HRTIMER_MODE_ABS;
|
||||
} while (hs.task && !signal_pending(current));
|
||||
|
||||
__set_current_state(TASK_RUNNING);
|
||||
destroy_hrtimer_on_stack(&hs.timer);
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
|
||||
{
|
||||
struct request_queue *q = hctx->queue;
|
||||
long state;
|
||||
|
||||
/*
|
||||
* If we sleep, have the caller restart the poll loop to reset
|
||||
* the state. Like for the other success return cases, the
|
||||
* caller is responsible for checking if the IO completed. If
|
||||
* the IO isn't complete, we'll get called again and will go
|
||||
* straight to the busy poll loop.
|
||||
*/
|
||||
if (blk_mq_poll_hybrid_sleep(q, hctx, rq))
|
||||
return true;
|
||||
|
||||
hctx->poll_considered++;
|
||||
|
||||
state = current->state;
|
||||
while (!need_resched()) {
|
||||
int ret;
|
||||
|
||||
hctx->poll_invoked++;
|
||||
|
||||
ret = q->mq_ops->poll(hctx, rq->tag);
|
||||
if (ret > 0) {
|
||||
hctx->poll_success++;
|
||||
set_current_state(TASK_RUNNING);
|
||||
return true;
|
||||
}
|
||||
|
||||
if (signal_pending_state(state, current))
|
||||
set_current_state(TASK_RUNNING);
|
||||
|
||||
if (current->state == TASK_RUNNING)
|
||||
return true;
|
||||
if (ret < 0)
|
||||
break;
|
||||
cpu_relax();
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
|
||||
{
|
||||
struct blk_mq_hw_ctx *hctx;
|
||||
struct blk_plug *plug;
|
||||
struct request *rq;
|
||||
|
||||
if (!q->mq_ops || !q->mq_ops->poll || !blk_qc_t_valid(cookie) ||
|
||||
!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
|
||||
return false;
|
||||
|
||||
plug = current->plug;
|
||||
if (plug)
|
||||
blk_flush_plug_list(plug, false);
|
||||
|
||||
hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
|
||||
rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
|
||||
|
||||
return __blk_mq_poll(hctx, rq);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(blk_mq_poll);
|
||||
|
||||
void blk_mq_disable_hotplug(void)
|
||||
{
|
||||
mutex_lock(&all_q_mutex);
|
||||
|
|
|
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -1,6 +1,8 @@
 #ifndef INT_BLK_MQ_H
 #define INT_BLK_MQ_H
 
+#include "blk-stat.h"
+
 struct blk_mq_tag_set;
 
 struct blk_mq_ctx {
@@ -18,6 +20,7 @@ struct blk_mq_ctx {
 
 	/* incremented at completion time */
 	unsigned long		____cacheline_aligned_in_smp rq_completed[2];
+	struct blk_rq_stat	stat[2];
 
 	struct request_queue	*queue;
 	struct kobject		kobj;
@@ -28,6 +31,7 @@ void blk_mq_freeze_queue(struct request_queue *q);
 void blk_mq_free_queue(struct request_queue *q);
 int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
 void blk_mq_wake_waiters(struct request_queue *q);
+bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *, struct list_head *);
 
 /*
  * CPU hotplug helpers
@@ -100,6 +104,11 @@ static inline void blk_mq_set_alloc_data(struct blk_mq_alloc_data *data,
 	data->hctx = hctx;
 }
 
+static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
+{
+	return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
+}
+
 static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 {
 	return hctx->nr_ctx && hctx->tags;
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 
 #include "blk.h"
+#include "blk-wbt.h"
 
 unsigned long blk_max_low_pfn;
 EXPORT_SYMBOL(blk_max_low_pfn);
@@ -95,6 +96,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = 0;
 	lim->chunk_sectors = 0;
 	lim->max_write_same_sectors = 0;
+	lim->max_write_zeroes_sectors = 0;
 	lim->max_discard_sectors = 0;
 	lim->max_hw_discard_sectors = 0;
 	lim->discard_granularity = 0;
@@ -107,6 +109,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->io_opt = 0;
 	lim->misaligned = 0;
 	lim->cluster = 1;
+	lim->zoned = BLK_ZONED_NONE;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -130,6 +133,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_sectors = UINT_MAX;
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_same_sectors = UINT_MAX;
+	lim->max_write_zeroes_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
@@ -298,6 +302,19 @@ void blk_queue_max_write_same_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_write_same_sectors);
 
+/**
+ * blk_queue_max_write_zeroes_sectors - set max sectors for a single
+ *                                      write zeroes
+ * @q:  the request queue for the device
+ * @max_write_zeroes_sectors: maximum number of sectors to write per command
+ **/
+void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
+		unsigned int max_write_zeroes_sectors)
+{
+	q->limits.max_write_zeroes_sectors = max_write_zeroes_sectors;
+}
+EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
+
 /**
  * blk_queue_max_segments - set max hw segments for a request for this queue
  * @q:  the request queue for the device
@@ -526,6 +543,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_dev_sectors = min_not_zero(t->max_dev_sectors, b->max_dev_sectors);
 	t->max_write_same_sectors = min(t->max_write_same_sectors,
 					b->max_write_same_sectors);
+	t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
+					b->max_write_zeroes_sectors);
 	t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
 
 	t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask,
@@ -631,6 +650,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 					t->discard_granularity;
 	}
 
+	if (b->chunk_sectors)
+		t->chunk_sectors = min_not_zero(t->chunk_sectors,
+						b->chunk_sectors);
+
 	return ret;
 }
 EXPORT_SYMBOL(blk_stack_limits);
@@ -832,6 +855,19 @@ void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
 }
 EXPORT_SYMBOL_GPL(blk_queue_flush_queueable);
 
+/**
+ * blk_set_queue_depth - tell the block layer about the device queue depth
+ * @q:		the request queue for the device
+ * @depth:	queue depth
+ *
+ */
+void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
+{
+	q->queue_depth = depth;
+	wbt_set_queue_depth(q->rq_wb, depth);
+}
+EXPORT_SYMBOL(blk_set_queue_depth);
+
 /**
  * blk_queue_write_cache - configure queue's write cache
  * @q:		the request queue for the device
@@ -852,6 +888,8 @@ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
 	else
 		queue_flag_clear(QUEUE_FLAG_FUA, q);
 	spin_unlock_irq(q->queue_lock);
+
+	wbt_set_write_cache(q->rq_wb, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
 }
 EXPORT_SYMBOL_GPL(blk_queue_write_cache);
--- /dev/null
+++ b/block/blk-stat.c
@@ -0,0 +1,256 @@
+/*
+ * Block stat tracking code
+ *
+ * Copyright (C) 2016 Jens Axboe
+ */
+#include <linux/kernel.h>
+#include <linux/blk-mq.h>
+
+#include "blk-stat.h"
+#include "blk-mq.h"
+
+static void blk_stat_flush_batch(struct blk_rq_stat *stat)
+{
+	const s32 nr_batch = READ_ONCE(stat->nr_batch);
+	const s32 nr_samples = READ_ONCE(stat->nr_samples);
+
+	if (!nr_batch)
+		return;
+	if (!nr_samples)
+		stat->mean = div64_s64(stat->batch, nr_batch);
+	else {
+		stat->mean = div64_s64((stat->mean * nr_samples) +
+					stat->batch,
+					nr_batch + nr_samples);
+	}
+
+	stat->nr_samples += nr_batch;
+	stat->nr_batch = stat->batch = 0;
+}
+
+static void blk_stat_sum(struct blk_rq_stat *dst, struct blk_rq_stat *src)
+{
+	if (!src->nr_samples)
+		return;
+
+	blk_stat_flush_batch(src);
+
+	dst->min = min(dst->min, src->min);
+	dst->max = max(dst->max, src->max);
+
+	if (!dst->nr_samples)
+		dst->mean = src->mean;
+	else {
+		dst->mean = div64_s64((src->mean * src->nr_samples) +
+					(dst->mean * dst->nr_samples),
+					dst->nr_samples + src->nr_samples);
+	}
+	dst->nr_samples += src->nr_samples;
+}
+
+static void blk_mq_stat_get(struct request_queue *q, struct blk_rq_stat *dst)
+{
+	struct blk_mq_hw_ctx *hctx;
+	struct blk_mq_ctx *ctx;
+	uint64_t latest = 0;
+	int i, j, nr;
+
+	blk_stat_init(&dst[BLK_STAT_READ]);
+	blk_stat_init(&dst[BLK_STAT_WRITE]);
+
+	nr = 0;
+	do {
+		uint64_t newest = 0;
+
+		queue_for_each_hw_ctx(q, hctx, i) {
+			hctx_for_each_ctx(hctx, ctx, j) {
+				blk_stat_flush_batch(&ctx->stat[BLK_STAT_READ]);
+				blk_stat_flush_batch(&ctx->stat[BLK_STAT_WRITE]);
+
+				if (!ctx->stat[BLK_STAT_READ].nr_samples &&
+				    !ctx->stat[BLK_STAT_WRITE].nr_samples)
+					continue;
+				if (ctx->stat[BLK_STAT_READ].time > newest)
+					newest = ctx->stat[BLK_STAT_READ].time;
+				if (ctx->stat[BLK_STAT_WRITE].time > newest)
+					newest = ctx->stat[BLK_STAT_WRITE].time;
+			}
+		}
+
+		/*
+		 * No samples
+		 */
+		if (!newest)
+			break;
+
+		if (newest > latest)
+			latest = newest;
+
+		queue_for_each_hw_ctx(q, hctx, i) {
+			hctx_for_each_ctx(hctx, ctx, j) {
+				if (ctx->stat[BLK_STAT_READ].time == newest) {
+					blk_stat_sum(&dst[BLK_STAT_READ],
+						     &ctx->stat[BLK_STAT_READ]);
+					nr++;
+				}
+				if (ctx->stat[BLK_STAT_WRITE].time == newest) {
+					blk_stat_sum(&dst[BLK_STAT_WRITE],
+						     &ctx->stat[BLK_STAT_WRITE]);
+					nr++;
+				}
+			}
+		}
+		/*
+		 * If we race on finding an entry, just loop back again.
+		 * Should be very rare.
+		 */
+	} while (!nr);
+
+	dst[BLK_STAT_READ].time = dst[BLK_STAT_WRITE].time = latest;
+}
+
+void blk_queue_stat_get(struct request_queue *q, struct blk_rq_stat *dst)
+{
+	if (q->mq_ops)
+		blk_mq_stat_get(q, dst);
+	else {
+		blk_stat_flush_batch(&q->rq_stats[BLK_STAT_READ]);
+		blk_stat_flush_batch(&q->rq_stats[BLK_STAT_WRITE]);
+		memcpy(&dst[BLK_STAT_READ], &q->rq_stats[BLK_STAT_READ],
+				sizeof(struct blk_rq_stat));
+		memcpy(&dst[BLK_STAT_WRITE], &q->rq_stats[BLK_STAT_WRITE],
+				sizeof(struct blk_rq_stat));
+	}
+}
+
+void blk_hctx_stat_get(struct blk_mq_hw_ctx *hctx, struct blk_rq_stat *dst)
+{
+	struct blk_mq_ctx *ctx;
+	unsigned int i, nr;
+
+	nr = 0;
+	do {
+		uint64_t newest = 0;
+
+		hctx_for_each_ctx(hctx, ctx, i) {
+			blk_stat_flush_batch(&ctx->stat[BLK_STAT_READ]);
+			blk_stat_flush_batch(&ctx->stat[BLK_STAT_WRITE]);
+
+			if (!ctx->stat[BLK_STAT_READ].nr_samples &&
+			    !ctx->stat[BLK_STAT_WRITE].nr_samples)
+				continue;
+
+			if (ctx->stat[BLK_STAT_READ].time > newest)
+				newest = ctx->stat[BLK_STAT_READ].time;
+			if (ctx->stat[BLK_STAT_WRITE].time > newest)
+				newest = ctx->stat[BLK_STAT_WRITE].time;
+		}
+
+		if (!newest)
+			break;
+
+		hctx_for_each_ctx(hctx, ctx, i) {
+			if (ctx->stat[BLK_STAT_READ].time == newest) {
+				blk_stat_sum(&dst[BLK_STAT_READ],
+						&ctx->stat[BLK_STAT_READ]);
+				nr++;
+			}
+			if (ctx->stat[BLK_STAT_WRITE].time == newest) {
+				blk_stat_sum(&dst[BLK_STAT_WRITE],
+						&ctx->stat[BLK_STAT_WRITE]);
+				nr++;
+			}
+		}
+		/*
+		 * If we race on finding an entry, just loop back again.
+		 * Should be very rare, as the window is only updated
+		 * occasionally
+		 */
+	} while (!nr);
+}
+
+static void __blk_stat_init(struct blk_rq_stat *stat, s64 time_now)
+{
+	stat->min = -1ULL;
+	stat->max = stat->nr_samples = stat->mean = 0;
+	stat->batch = stat->nr_batch = 0;
+	stat->time = time_now & BLK_STAT_NSEC_MASK;
+}
+
+void blk_stat_init(struct blk_rq_stat *stat)
+{
+	__blk_stat_init(stat, ktime_to_ns(ktime_get()));
+}
+
+static bool __blk_stat_is_current(struct blk_rq_stat *stat, s64 now)
+{
+	return (now & BLK_STAT_NSEC_MASK) == (stat->time & BLK_STAT_NSEC_MASK);
+}
+
+bool blk_stat_is_current(struct blk_rq_stat *stat)
+{
+	return __blk_stat_is_current(stat, ktime_to_ns(ktime_get()));
+}
+
+void blk_stat_add(struct blk_rq_stat *stat, struct request *rq)
+{
+	s64 now, value;
+
+	now = __blk_stat_time(ktime_to_ns(ktime_get()));
+	if (now < blk_stat_time(&rq->issue_stat))
+		return;
+
+	if (!__blk_stat_is_current(stat, now))
+		__blk_stat_init(stat, now);
+
+	value = now - blk_stat_time(&rq->issue_stat);
+	if (value > stat->max)
+		stat->max = value;
+	if (value < stat->min)
+		stat->min = value;
+
+	if (stat->batch + value < stat->batch ||
+	    stat->nr_batch + 1 == BLK_RQ_STAT_BATCH)
+		blk_stat_flush_batch(stat);
+
+	stat->batch += value;
+	stat->nr_batch++;
+}
+
+void blk_stat_clear(struct request_queue *q)
+{
+	if (q->mq_ops) {
+		struct blk_mq_hw_ctx *hctx;
+		struct blk_mq_ctx *ctx;
+		int i, j;
+
+		queue_for_each_hw_ctx(q, hctx, i) {
+			hctx_for_each_ctx(hctx, ctx, j) {
+				blk_stat_init(&ctx->stat[BLK_STAT_READ]);
+				blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
+			}
+		}
+	} else {
+		blk_stat_init(&q->rq_stats[BLK_STAT_READ]);
+		blk_stat_init(&q->rq_stats[BLK_STAT_WRITE]);
+	}
+}
+
+void blk_stat_set_issue_time(struct blk_issue_stat *stat)
+{
+	stat->time = (stat->time & BLK_STAT_MASK) |
+			(ktime_to_ns(ktime_get()) & BLK_STAT_TIME_MASK);
+}
+
+/*
+ * Enable stat tracking, return whether it was enabled
+ */
+bool blk_stat_enable(struct request_queue *q)
+{
+	if (!test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
+		set_bit(QUEUE_FLAG_STATS, &q->queue_flags);
+		return false;
+	}
+
+	return true;
+}
--- /dev/null
+++ b/block/blk-stat.h
@@ -0,0 +1,42 @@
+#ifndef BLK_STAT_H
+#define BLK_STAT_H
+
+/*
+ * ~0.13s window as a power-of-2 (2^27 nsecs)
+ */
+#define BLK_STAT_NSEC		134217728ULL
+#define BLK_STAT_NSEC_MASK	~(BLK_STAT_NSEC - 1)
+
+/*
+ * Upper 3 bits can be used elsewhere
+ */
+#define BLK_STAT_RES_BITS	3
+#define BLK_STAT_SHIFT		(64 - BLK_STAT_RES_BITS)
+#define BLK_STAT_TIME_MASK	((1ULL << BLK_STAT_SHIFT) - 1)
+#define BLK_STAT_MASK		~BLK_STAT_TIME_MASK
+
+enum {
+	BLK_STAT_READ	= 0,
+	BLK_STAT_WRITE,
+};
+
+void blk_stat_add(struct blk_rq_stat *, struct request *);
+void blk_hctx_stat_get(struct blk_mq_hw_ctx *, struct blk_rq_stat *);
+void blk_queue_stat_get(struct request_queue *, struct blk_rq_stat *);
+void blk_stat_clear(struct request_queue *);
+void blk_stat_init(struct blk_rq_stat *);
+bool blk_stat_is_current(struct blk_rq_stat *);
+void blk_stat_set_issue_time(struct blk_issue_stat *);
+bool blk_stat_enable(struct request_queue *);
+
+static inline u64 __blk_stat_time(u64 time)
+{
+	return time & BLK_STAT_TIME_MASK;
+}
+
+static inline u64 blk_stat_time(struct blk_issue_stat *stat)
+{
+	return __blk_stat_time(stat->time);
+}
+
+#endif
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -13,6 +13,7 @@
 
 #include "blk.h"
 #include "blk-mq.h"
+#include "blk-wbt.h"
 
 struct queue_sysfs_entry {
 	struct attribute attr;
@@ -41,6 +42,19 @@ queue_var_store(unsigned long *var, const char *page, size_t count)
 	return count;
 }
 
+static ssize_t queue_var_store64(s64 *var, const char *page)
+{
+	int err;
+	s64 v;
+
+	err = kstrtos64(page, 10, &v);
+	if (err < 0)
+		return err;
+
+	*var = v;
+	return 0;
+}
+
 static ssize_t queue_requests_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(q->nr_requests, (page));
@@ -130,6 +144,11 @@ static ssize_t queue_physical_block_size_show(struct request_queue *q, char *pag
 	return queue_var_show(queue_physical_block_size(q), page);
 }
 
+static ssize_t queue_chunk_sectors_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->limits.chunk_sectors, page);
+}
+
 static ssize_t queue_io_min_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(queue_io_min(q), page);
@@ -192,6 +211,11 @@ static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
 		(unsigned long long)q->limits.max_write_same_sectors << 9);
 }
 
+static ssize_t queue_write_zeroes_max_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n",
+		(unsigned long long)q->limits.max_write_zeroes_sectors << 9);
+}
+
 static ssize_t
 queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
@@ -258,6 +282,18 @@ QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
+static ssize_t queue_zoned_show(struct request_queue *q, char *page)
+{
+	switch (blk_queue_zoned_model(q)) {
+	case BLK_ZONED_HA:
+		return sprintf(page, "host-aware\n");
+	case BLK_ZONED_HM:
+		return sprintf(page, "host-managed\n");
+	default:
+		return sprintf(page, "none\n");
+	}
+}
+
 static ssize_t queue_nomerges_show(struct request_queue *q, char *page)
 {
 	return queue_var_show((blk_queue_nomerges(q) << 1) |
@@ -320,6 +356,38 @@ queue_rq_affinity_store(struct request_queue *q, const char *page, size_t count)
 	return ret;
 }
 
+static ssize_t queue_poll_delay_show(struct request_queue *q, char *page)
+{
+	int val;
+
+	if (q->poll_nsec == -1)
+		val = -1;
+	else
+		val = q->poll_nsec / 1000;
+
+	return sprintf(page, "%d\n", val);
+}
+
+static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
+				size_t count)
+{
+	int err, val;
+
+	if (!q->mq_ops || !q->mq_ops->poll)
+		return -EINVAL;
+
+	err = kstrtoint(page, 10, &val);
+	if (err < 0)
+		return err;
+
+	if (val == -1)
+		q->poll_nsec = -1;
+	else
+		q->poll_nsec = val * 1000;
+
+	return count;
+}
+
 static ssize_t queue_poll_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(test_bit(QUEUE_FLAG_POLL, &q->queue_flags), page);
@@ -348,6 +416,50 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
 	return ret;
 }
 
+static ssize_t queue_wb_lat_show(struct request_queue *q, char *page)
+{
+	if (!q->rq_wb)
+		return -EINVAL;
+
+	return sprintf(page, "%llu\n", div_u64(q->rq_wb->min_lat_nsec, 1000));
+}
+
+static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page,
+				  size_t count)
+{
+	struct rq_wb *rwb;
+	ssize_t ret;
+	s64 val;
+
+	ret = queue_var_store64(&val, page);
+	if (ret < 0)
+		return ret;
+	if (val < -1)
+		return -EINVAL;
+
+	rwb = q->rq_wb;
+	if (!rwb) {
+		ret = wbt_init(q);
+		if (ret)
+			return ret;
+
+		rwb = q->rq_wb;
+		if (!rwb)
+			return -EINVAL;
+	}
+
+	if (val == -1)
+		rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+	else if (val >= 0)
+		rwb->min_lat_nsec = val * 1000ULL;
+
+	if (rwb->enable_state == WBT_STATE_ON_DEFAULT)
+		rwb->enable_state = WBT_STATE_ON_MANUAL;
+
+	wbt_update_limits(rwb);
+	return count;
+}
+
 static ssize_t queue_wc_show(struct request_queue *q, char *page)
 {
 	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
@@ -385,6 +497,26 @@ static ssize_t queue_dax_show(struct request_queue *q, char *page)
 	return queue_var_show(blk_queue_dax(q), page);
 }
 
+static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
+{
+	return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
+			pre, (long long) stat->nr_samples,
+			(long long) stat->mean, (long long) stat->min,
+			(long long) stat->max);
+}
+
+static ssize_t queue_stats_show(struct request_queue *q, char *page)
+{
+	struct blk_rq_stat stat[2];
+	ssize_t ret;
+
+	blk_queue_stat_get(q, stat);
+
+	ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
+	ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
+	return ret;
+}
+
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_requests_show,
@@ -444,6 +576,11 @@ static struct queue_sysfs_entry queue_physical_block_size_entry = {
 	.show = queue_physical_block_size_show,
 };
 
+static struct queue_sysfs_entry queue_chunk_sectors_entry = {
+	.attr = {.name = "chunk_sectors", .mode = S_IRUGO },
+	.show = queue_chunk_sectors_show,
+};
+
 static struct queue_sysfs_entry queue_io_min_entry = {
 	.attr = {.name = "minimum_io_size", .mode = S_IRUGO },
 	.show = queue_io_min_show,
@@ -480,12 +617,22 @@ static struct queue_sysfs_entry queue_write_same_max_entry = {
 	.show = queue_write_same_max_show,
 };
 
+static struct queue_sysfs_entry queue_write_zeroes_max_entry = {
+	.attr = {.name = "write_zeroes_max_bytes", .mode = S_IRUGO },
+	.show = queue_write_zeroes_max_show,
+};
+
 static struct queue_sysfs_entry queue_nonrot_entry = {
 	.attr = {.name = "rotational", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_show_nonrot,
 	.store = queue_store_nonrot,
 };
 
+static struct queue_sysfs_entry queue_zoned_entry = {
+	.attr = {.name = "zoned", .mode = S_IRUGO },
+	.show = queue_zoned_show,
+};
+
 static struct queue_sysfs_entry queue_nomerges_entry = {
 	.attr = {.name = "nomerges", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_nomerges_show,
@@ -516,6 +663,12 @@ static struct queue_sysfs_entry queue_poll_entry = {
 	.store = queue_poll_store,
 };
 
+static struct queue_sysfs_entry queue_poll_delay_entry = {
+	.attr = {.name = "io_poll_delay", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_poll_delay_show,
+	.store = queue_poll_delay_store,
+};
+
 static struct queue_sysfs_entry queue_wc_entry = {
 	.attr = {.name = "write_cache", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_wc_show,
@@ -527,6 +680,17 @@ static struct queue_sysfs_entry queue_dax_entry = {
 	.show = queue_dax_show,
 };
 
+static struct queue_sysfs_entry queue_stats_entry = {
+	.attr = {.name = "stats", .mode = S_IRUGO },
+	.show = queue_stats_show,
+};
+
+static struct queue_sysfs_entry queue_wb_lat_entry = {
+	.attr = {.name = "wbt_lat_usec", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_wb_lat_show,
+	.store = queue_wb_lat_store,
+};
+
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
@@ -539,6 +703,7 @@ static struct attribute *default_attrs[] = {
 	&queue_hw_sector_size_entry.attr,
 	&queue_logical_block_size_entry.attr,
 	&queue_physical_block_size_entry.attr,
+	&queue_chunk_sectors_entry.attr,
 	&queue_io_min_entry.attr,
 	&queue_io_opt_entry.attr,
 	&queue_discard_granularity_entry.attr,
@@ -546,7 +711,9 @@ static struct attribute *default_attrs[] = {
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
 	&queue_write_same_max_entry.attr,
+	&queue_write_zeroes_max_entry.attr,
 	&queue_nonrot_entry.attr,
+	&queue_zoned_entry.attr,
 	&queue_nomerges_entry.attr,
 	&queue_rq_affinity_entry.attr,
 	&queue_iostats_entry.attr,
@@ -554,6 +721,9 @@ static struct attribute *default_attrs[] = {
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
 	&queue_dax_entry.attr,
+	&queue_stats_entry.attr,
+	&queue_wb_lat_entry.attr,
+	&queue_poll_delay_entry.attr,
 	NULL,
 };
 
@@ -628,6 +798,7 @@ static void blk_release_queue(struct kobject *kobj)
 	struct request_queue *q =
 				container_of(kobj, struct request_queue, kobj);
 
+	wbt_exit(q);
 	bdi_exit(&q->backing_dev_info);
 	blkcg_exit_queue(q);
 
@@ -668,6 +839,23 @@ struct kobj_type blk_queue_ktype = {
 	.release = blk_release_queue,
 };
 
+static void blk_wb_init(struct request_queue *q)
+{
+#ifndef CONFIG_BLK_WBT_MQ
+	if (q->mq_ops)
+		return;
+#endif
+#ifndef CONFIG_BLK_WBT_SQ
+	if (q->request_fn)
+		return;
+#endif
+
+	/*
+	 * If this fails, we don't get throttling
+	 */
+	wbt_init(q);
+}
+
 int blk_register_queue(struct gendisk *disk)
 {
 	int ret;
@@ -707,6 +895,8 @@ int blk_register_queue(struct gendisk *disk)
 	if (q->mq_ops)
 		blk_mq_register_dev(dev, q);
 
+	blk_wb_init(q);
+
 	if (!q->request_fn)
 		return 0;
 
--- a/block/blk-tag.c
+++ b/block/blk-tag.c
@@ -270,7 +270,7 @@ void blk_queue_end_tag(struct request_queue *q, struct request *rq)
 	BUG_ON(tag >= bqt->real_max_depth);
 
 	list_del_init(&rq->queuelist);
-	rq->cmd_flags &= ~REQ_QUEUED;
+	rq->rq_flags &= ~RQF_QUEUED;
 	rq->tag = -1;
 
 	if (unlikely(bqt->tag_index[tag] == NULL))
@@ -316,7 +316,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 	unsigned max_depth;
 	int tag;
 
-	if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
+	if (unlikely((rq->rq_flags & RQF_QUEUED))) {
 		printk(KERN_ERR
 		       "%s: request %p for device [%s] already tagged %d",
 		       __func__, rq,
@@ -371,7 +371,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 	 */
 
 	bqt->next_tag = (tag + 1) % bqt->max_depth;
-	rq->cmd_flags |= REQ_QUEUED;
+	rq->rq_flags |= RQF_QUEUED;
 	rq->tag = tag;
 	bqt->tag_index[tag] = rq;
 	blk_start_request(rq);
@@ -818,13 +818,13 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
	tg->io_disp[rw]++;

	/*
-	 * REQ_THROTTLED is used to prevent the same bio to be throttled
+	 * BIO_THROTTLED is used to prevent the same bio to be throttled
	 * more than once as a throttled bio will go through blk-throtl the
	 * second time when it eventually gets issued. Set it when a bio
	 * is being charged to a tg.
	 */
-	if (!(bio->bi_opf & REQ_THROTTLED))
-		bio->bi_opf |= REQ_THROTTLED;
+	if (!bio_flagged(bio, BIO_THROTTLED))
+		bio_set_flag(bio, BIO_THROTTLED);
}

/**
@@ -1401,7 +1401,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
	WARN_ON_ONCE(!rcu_read_lock_held());

	/* see throtl_charge_bio() */
-	if ((bio->bi_opf & REQ_THROTTLED) || !tg->has_rules[rw])
+	if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw])
		goto out;

	spin_lock_irq(q->queue_lock);
@@ -1480,7 +1480,7 @@ out:
	 * being issued.
	 */
	if (!throttled)
-		bio->bi_opf &= ~REQ_THROTTLED;
+		bio_clear_flag(bio, BIO_THROTTLED);
	return throttled;
}

@@ -0,0 +1,750 @@
/*
 * buffered writeback throttling. loosely based on CoDel. We can't drop
 * packets for IO scheduling, so the logic is something like this:
 *
 * - Monitor latencies in a defined window of time.
 * - If the minimum latency in the above window exceeds some target, increment
 *   scaling step and scale down queue depth by a factor of 2x. The monitoring
 *   window is then shrunk to 100 / sqrt(scaling step + 1).
 * - For any window where we don't have solid data on what the latencies
 *   look like, retain status quo.
 * - If latencies look good, decrement scaling step.
 * - If we're only doing writes, allow the scaling step to go negative. This
 *   will temporarily boost write performance, snapping back to a stable
 *   scaling step of 0 if reads show up or the heavy writers finish. Unlike
 *   positive scaling steps where we shrink the monitoring window, a negative
 *   scaling step retains the default step==0 window size.
 *
 * Copyright (C) 2016 Jens Axboe
 *
 */
#include <linux/kernel.h>
#include <linux/blk_types.h>
#include <linux/slab.h>
#include <linux/backing-dev.h>
#include <linux/swap.h>

#include "blk-wbt.h"

#define CREATE_TRACE_POINTS
#include <trace/events/wbt.h>

enum {
	/*
	 * Default setting, we'll scale up (to 75% of QD max) or down (min 1)
	 * from here depending on device stats
	 */
	RWB_DEF_DEPTH	= 16,

	/*
	 * 100msec window
	 */
	RWB_WINDOW_NSEC		= 100 * 1000 * 1000ULL,

	/*
	 * Disregard stats, if we don't meet this minimum
	 */
	RWB_MIN_WRITE_SAMPLES	= 3,

	/*
	 * If we have this number of consecutive windows with not enough
	 * information to scale up or down, scale up.
	 */
	RWB_UNKNOWN_BUMP	= 5,
};

static inline bool rwb_enabled(struct rq_wb *rwb)
{
	return rwb && rwb->wb_normal != 0;
}

/*
 * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
 * false if 'v' + 1 would be bigger than 'below'.
 */
static bool atomic_inc_below(atomic_t *v, int below)
{
	int cur = atomic_read(v);

	for (;;) {
		int old;

		if (cur >= below)
			return false;
		old = atomic_cmpxchg(v, cur, cur + 1);
		if (old == cur)
			break;
		cur = old;
	}

	return true;
}

static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
{
	if (rwb_enabled(rwb)) {
		const unsigned long cur = jiffies;

		if (cur != *var)
			*var = cur;
	}
}

/*
 * If a task was rate throttled in balance_dirty_pages() within the last
 * second or so, use that to indicate a higher cleaning rate.
 */
static bool wb_recent_wait(struct rq_wb *rwb)
{
	struct bdi_writeback *wb = &rwb->queue->backing_dev_info.wb;

	return time_before(jiffies, wb->dirty_sleep + HZ);
}

static inline struct rq_wait *get_rq_wait(struct rq_wb *rwb, bool is_kswapd)
{
	return &rwb->rq_wait[is_kswapd];
}

static void rwb_wake_all(struct rq_wb *rwb)
{
	int i;

	for (i = 0; i < WBT_NUM_RWQ; i++) {
		struct rq_wait *rqw = &rwb->rq_wait[i];

		if (waitqueue_active(&rqw->wait))
			wake_up_all(&rqw->wait);
	}
}

void __wbt_done(struct rq_wb *rwb, enum wbt_flags wb_acct)
{
	struct rq_wait *rqw;
	int inflight, limit;

	if (!(wb_acct & WBT_TRACKED))
		return;

	rqw = get_rq_wait(rwb, wb_acct & WBT_KSWAPD);
	inflight = atomic_dec_return(&rqw->inflight);

	/*
	 * wbt got disabled with IO in flight. Wake up any potential
	 * waiters, we don't have to do more than that.
	 */
	if (unlikely(!rwb_enabled(rwb))) {
		rwb_wake_all(rwb);
		return;
	}

	/*
	 * If the device does write back caching, drop further down
	 * before we wake people up.
	 */
	if (rwb->wc && !wb_recent_wait(rwb))
		limit = 0;
	else
		limit = rwb->wb_normal;

	/*
	 * Don't wake anyone up if we are above the normal limit.
	 */
	if (inflight && inflight >= limit)
		return;

	if (waitqueue_active(&rqw->wait)) {
		int diff = limit - inflight;

		if (!inflight || diff >= rwb->wb_background / 2)
			wake_up_all(&rqw->wait);
	}
}

/*
 * Called on completion of a request. Note that it's also called when
 * a request is merged, or when the request gets freed.
 */
void wbt_done(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
	if (!rwb)
		return;

	if (!wbt_is_tracked(stat)) {
		if (rwb->sync_cookie == stat) {
			rwb->sync_issue = 0;
			rwb->sync_cookie = NULL;
		}

		if (wbt_is_read(stat))
			wb_timestamp(rwb, &rwb->last_comp);
		wbt_clear_state(stat);
	} else {
		WARN_ON_ONCE(stat == rwb->sync_cookie);
		__wbt_done(rwb, wbt_stat_to_mask(stat));
		wbt_clear_state(stat);
	}
}

/*
 * Return true, if we can't increase the depth further by scaling
 */
static bool calc_wb_limits(struct rq_wb *rwb)
{
	unsigned int depth;
	bool ret = false;

	if (!rwb->min_lat_nsec) {
		rwb->wb_max = rwb->wb_normal = rwb->wb_background = 0;
		return false;
	}

	/*
	 * For QD=1 devices, this is a special case. It's important for those
	 * to have one request ready when one completes, so force a depth of
	 * 2 for those devices. On the backend, it'll be a depth of 1 anyway,
	 * since the device can't have more than that in flight. If we're
	 * scaling down, then keep a setting of 1/1/1.
	 */
	if (rwb->queue_depth == 1) {
		if (rwb->scale_step > 0)
			rwb->wb_max = rwb->wb_normal = 1;
		else {
			rwb->wb_max = rwb->wb_normal = 2;
			ret = true;
		}
		rwb->wb_background = 1;
	} else {
		/*
		 * scale_step == 0 is our default state. If we have suffered
		 * latency spikes, step will be > 0, and we shrink the
		 * allowed write depths. If step is < 0, we're only doing
		 * writes, and we allow a temporarily higher depth to
		 * increase performance.
		 */
		depth = min_t(unsigned int, RWB_DEF_DEPTH, rwb->queue_depth);
		if (rwb->scale_step > 0)
			depth = 1 + ((depth - 1) >> min(31, rwb->scale_step));
		else if (rwb->scale_step < 0) {
			unsigned int maxd = 3 * rwb->queue_depth / 4;

			depth = 1 + ((depth - 1) << -rwb->scale_step);
			if (depth > maxd) {
				depth = maxd;
				ret = true;
			}
		}

		/*
		 * Set our max/normal/bg queue depths based on how far
		 * we have scaled down (->scale_step).
		 */
		rwb->wb_max = depth;
		rwb->wb_normal = (rwb->wb_max + 1) / 2;
		rwb->wb_background = (rwb->wb_max + 3) / 4;
	}

	return ret;
}

static inline bool stat_sample_valid(struct blk_rq_stat *stat)
{
	/*
	 * We need at least one read sample, and a minimum of
	 * RWB_MIN_WRITE_SAMPLES. We require some write samples to know
	 * that it's writes impacting us, and not just some sole read on
	 * a device that is in a lower power state.
	 */
	return stat[BLK_STAT_READ].nr_samples >= 1 &&
		stat[BLK_STAT_WRITE].nr_samples >= RWB_MIN_WRITE_SAMPLES;
}

static u64 rwb_sync_issue_lat(struct rq_wb *rwb)
{
	u64 now, issue = ACCESS_ONCE(rwb->sync_issue);

	if (!issue || !rwb->sync_cookie)
		return 0;

	now = ktime_to_ns(ktime_get());
	return now - issue;
}

enum {
	LAT_OK = 1,
	LAT_UNKNOWN,
	LAT_UNKNOWN_WRITES,
	LAT_EXCEEDED,
};

static int __latency_exceeded(struct rq_wb *rwb, struct blk_rq_stat *stat)
{
	struct backing_dev_info *bdi = &rwb->queue->backing_dev_info;
	u64 thislat;

	/*
	 * If our stored sync issue exceeds the window size, or it
	 * exceeds our min target AND we haven't logged any entries,
	 * flag the latency as exceeded. wbt works off completion latencies,
	 * but for a flooded device, a single sync IO can take a long time
	 * to complete after being issued. If this time exceeds our
	 * monitoring window AND we didn't see any other completions in that
	 * window, then count that sync IO as a violation of the latency.
	 */
	thislat = rwb_sync_issue_lat(rwb);
	if (thislat > rwb->cur_win_nsec ||
	    (thislat > rwb->min_lat_nsec && !stat[BLK_STAT_READ].nr_samples)) {
		trace_wbt_lat(bdi, thislat);
		return LAT_EXCEEDED;
	}

	/*
	 * No read/write mix, if stat isn't valid
	 */
	if (!stat_sample_valid(stat)) {
		/*
		 * If we had writes in this stat window and the window is
		 * current, we're only doing writes. If a task recently
		 * waited or still has writes in flight, consider us doing
		 * just writes as well.
		 */
		if ((stat[BLK_STAT_WRITE].nr_samples && blk_stat_is_current(stat)) ||
		    wb_recent_wait(rwb) || wbt_inflight(rwb))
			return LAT_UNKNOWN_WRITES;
		return LAT_UNKNOWN;
	}

	/*
	 * If the 'min' latency exceeds our target, step down.
	 */
	if (stat[BLK_STAT_READ].min > rwb->min_lat_nsec) {
		trace_wbt_lat(bdi, stat[BLK_STAT_READ].min);
		trace_wbt_stat(bdi, stat);
		return LAT_EXCEEDED;
	}

	if (rwb->scale_step)
		trace_wbt_stat(bdi, stat);

	return LAT_OK;
}

static int latency_exceeded(struct rq_wb *rwb)
{
	struct blk_rq_stat stat[2];

	blk_queue_stat_get(rwb->queue, stat);
	return __latency_exceeded(rwb, stat);
}

static void rwb_trace_step(struct rq_wb *rwb, const char *msg)
{
	struct backing_dev_info *bdi = &rwb->queue->backing_dev_info;

	trace_wbt_step(bdi, msg, rwb->scale_step, rwb->cur_win_nsec,
			rwb->wb_background, rwb->wb_normal, rwb->wb_max);
}

static void scale_up(struct rq_wb *rwb)
{
	/*
	 * Hit max in previous round, stop here
	 */
	if (rwb->scaled_max)
		return;

	rwb->scale_step--;
	rwb->unknown_cnt = 0;
	blk_stat_clear(rwb->queue);

	rwb->scaled_max = calc_wb_limits(rwb);

	rwb_wake_all(rwb);

	rwb_trace_step(rwb, "step up");
}

/*
 * Scale rwb down. If 'hard_throttle' is set, do it quicker, since we
 * had a latency violation.
 */
static void scale_down(struct rq_wb *rwb, bool hard_throttle)
{
	/*
	 * Stop scaling down when we've hit the limit. This also prevents
	 * ->scale_step from going to crazy values, if the device can't
	 * keep up.
	 */
	if (rwb->wb_max == 1)
		return;

	if (rwb->scale_step < 0 && hard_throttle)
		rwb->scale_step = 0;
	else
		rwb->scale_step++;

	rwb->scaled_max = false;
	rwb->unknown_cnt = 0;
	blk_stat_clear(rwb->queue);
	calc_wb_limits(rwb);
	rwb_trace_step(rwb, "step down");
}

static void rwb_arm_timer(struct rq_wb *rwb)
{
	unsigned long expires;

	if (rwb->scale_step > 0) {
		/*
		 * We should speed this up, using some variant of a fast
		 * integer inverse square root calculation. Since we only do
		 * this for every window expiration, it's not a huge deal,
		 * though.
		 */
		rwb->cur_win_nsec = div_u64(rwb->win_nsec << 4,
					int_sqrt((rwb->scale_step + 1) << 8));
	} else {
		/*
		 * For step < 0, we don't want to increase/decrease the
		 * window size.
		 */
		rwb->cur_win_nsec = rwb->win_nsec;
	}

	expires = jiffies + nsecs_to_jiffies(rwb->cur_win_nsec);
	mod_timer(&rwb->window_timer, expires);
}

static void wb_timer_fn(unsigned long data)
{
	struct rq_wb *rwb = (struct rq_wb *) data;
	unsigned int inflight = wbt_inflight(rwb);
	int status;

	status = latency_exceeded(rwb);

	trace_wbt_timer(&rwb->queue->backing_dev_info, status, rwb->scale_step,
			inflight);

	/*
	 * If we exceeded the latency target, step down. If we did not,
	 * step one level up. If we don't know enough to say either exceeded
	 * or ok, then don't do anything.
	 */
	switch (status) {
	case LAT_EXCEEDED:
		scale_down(rwb, true);
		break;
	case LAT_OK:
		scale_up(rwb);
		break;
	case LAT_UNKNOWN_WRITES:
		/*
		 * We started at the center step, and don't have a valid
		 * read/write sample, but we do have writes going on.
		 * Allow step to go negative, to increase write perf.
		 */
		scale_up(rwb);
		break;
	case LAT_UNKNOWN:
		if (++rwb->unknown_cnt < RWB_UNKNOWN_BUMP)
			break;
		/*
		 * We get here when we previously scaled reduced depth, and we
		 * currently don't have a valid read/write sample. For that
		 * case, slowly return to center state (step == 0).
		 */
		if (rwb->scale_step > 0)
			scale_up(rwb);
		else if (rwb->scale_step < 0)
			scale_down(rwb, false);
		break;
	default:
		break;
	}

	/*
	 * Re-arm timer, if we have IO in flight
	 */
	if (rwb->scale_step || inflight)
		rwb_arm_timer(rwb);
}

void wbt_update_limits(struct rq_wb *rwb)
{
	rwb->scale_step = 0;
	rwb->scaled_max = false;
	calc_wb_limits(rwb);

	rwb_wake_all(rwb);
}

static bool close_io(struct rq_wb *rwb)
{
	const unsigned long now = jiffies;

	return time_before(now, rwb->last_issue + HZ / 10) ||
		time_before(now, rwb->last_comp + HZ / 10);
}

#define REQ_HIPRIO	(REQ_SYNC | REQ_META | REQ_PRIO)

static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
{
	unsigned int limit;

	/*
	 * At this point we know it's a buffered write. If this is
	 * kswapd trying to free memory, or REQ_SYNC is set, then
	 * it's WB_SYNC_ALL writeback, and we'll use the max limit for
	 * that. If the write is marked as a background write, then use
	 * the idle limit, or go to normal if we haven't had competing
	 * IO for a bit.
	 */
	if ((rw & REQ_HIPRIO) || wb_recent_wait(rwb) || current_is_kswapd())
		limit = rwb->wb_max;
	else if ((rw & REQ_BACKGROUND) || close_io(rwb)) {
		/*
		 * If less than 100ms since we completed unrelated IO,
		 * limit us to half the depth for background writeback.
		 */
		limit = rwb->wb_background;
	} else
		limit = rwb->wb_normal;

	return limit;
}

static inline bool may_queue(struct rq_wb *rwb, struct rq_wait *rqw,
			     wait_queue_t *wait, unsigned long rw)
{
	/*
	 * inc it here even if disabled, since we'll dec it at completion.
	 * this only happens if the task was sleeping in __wbt_wait(),
	 * and someone turned it off at the same time.
	 */
	if (!rwb_enabled(rwb)) {
		atomic_inc(&rqw->inflight);
		return true;
	}

	/*
	 * If the waitqueue is already active and we are not the next
	 * in line to be woken up, wait for our turn.
	 */
	if (waitqueue_active(&rqw->wait) &&
	    rqw->wait.task_list.next != &wait->task_list)
		return false;

	return atomic_inc_below(&rqw->inflight, get_limit(rwb, rw));
}

/*
 * Block if we will exceed our limit, or if we are currently waiting for
 * the timer to kick off queuing again.
 */
static void __wbt_wait(struct rq_wb *rwb, unsigned long rw, spinlock_t *lock)
{
	struct rq_wait *rqw = get_rq_wait(rwb, current_is_kswapd());
	DEFINE_WAIT(wait);

	if (may_queue(rwb, rqw, &wait, rw))
		return;

	do {
		prepare_to_wait_exclusive(&rqw->wait, &wait,
						TASK_UNINTERRUPTIBLE);

		if (may_queue(rwb, rqw, &wait, rw))
			break;

		if (lock)
			spin_unlock_irq(lock);

		io_schedule();

		if (lock)
			spin_lock_irq(lock);
	} while (1);

	finish_wait(&rqw->wait, &wait);
}

static inline bool wbt_should_throttle(struct rq_wb *rwb, struct bio *bio)
{
	const int op = bio_op(bio);

	/*
	 * If not a WRITE, do nothing
	 */
	if (op != REQ_OP_WRITE)
		return false;

	/*
	 * Don't throttle WRITE_ODIRECT
	 */
	if ((bio->bi_opf & (REQ_SYNC | REQ_IDLE)) == (REQ_SYNC | REQ_IDLE))
		return false;

	return true;
}

/*
 * Returns true if the IO request should be accounted, false if not.
 * May sleep, if we have exceeded the writeback limits. Caller can pass
 * in an irq held spinlock, if it holds one when calling this function.
 * If we do sleep, we'll release and re-grab it.
 */
unsigned int wbt_wait(struct rq_wb *rwb, struct bio *bio, spinlock_t *lock)
{
	unsigned int ret = 0;

	if (!rwb_enabled(rwb))
		return 0;

	if (bio_op(bio) == REQ_OP_READ)
		ret = WBT_READ;

	if (!wbt_should_throttle(rwb, bio)) {
		if (ret & WBT_READ)
			wb_timestamp(rwb, &rwb->last_issue);
		return ret;
	}

	__wbt_wait(rwb, bio->bi_opf, lock);

	if (!timer_pending(&rwb->window_timer))
		rwb_arm_timer(rwb);

	if (current_is_kswapd())
		ret |= WBT_KSWAPD;

	return ret | WBT_TRACKED;
}

void wbt_issue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
	if (!rwb_enabled(rwb))
		return;

	/*
	 * Track sync issue, in case it takes a long time to complete. Allows
	 * us to react quicker, if a sync IO takes a long time to complete.
	 * Note that this is just a hint. 'stat' can go away when the
	 * request completes, so it's important we never dereference it. We
	 * only use the address to compare with, which is why we store the
	 * sync_issue time locally.
	 */
	if (wbt_is_read(stat) && !rwb->sync_issue) {
		rwb->sync_cookie = stat;
		rwb->sync_issue = blk_stat_time(stat);
	}
}

void wbt_requeue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
	if (!rwb_enabled(rwb))
		return;
	if (stat == rwb->sync_cookie) {
		rwb->sync_issue = 0;
		rwb->sync_cookie = NULL;
	}
}

void wbt_set_queue_depth(struct rq_wb *rwb, unsigned int depth)
{
	if (rwb) {
		rwb->queue_depth = depth;
		wbt_update_limits(rwb);
	}
}

void wbt_set_write_cache(struct rq_wb *rwb, bool write_cache_on)
{
	if (rwb)
		rwb->wc = write_cache_on;
}

/*
 * Disable wbt, if enabled by default. Only called from CFQ, if we have
 * cgroups enabled
 */
void wbt_disable_default(struct request_queue *q)
{
	struct rq_wb *rwb = q->rq_wb;

	if (rwb && rwb->enable_state == WBT_STATE_ON_DEFAULT) {
		del_timer_sync(&rwb->window_timer);
		rwb->win_nsec = rwb->min_lat_nsec = 0;
		wbt_update_limits(rwb);
	}
}
EXPORT_SYMBOL_GPL(wbt_disable_default);

u64 wbt_default_latency_nsec(struct request_queue *q)
{
	/*
	 * We default to 2msec for non-rotational storage, and 75msec
	 * for rotational storage.
	 */
	if (blk_queue_nonrot(q))
		return 2000000ULL;
	else
		return 75000000ULL;
}

int wbt_init(struct request_queue *q)
{
	struct rq_wb *rwb;
	int i;

	/*
	 * For now, we depend on the stats window being larger than
	 * our monitoring window. Ensure that this isn't inadvertently
	 * violated.
	 */
	BUILD_BUG_ON(RWB_WINDOW_NSEC > BLK_STAT_NSEC);
	BUILD_BUG_ON(WBT_NR_BITS > BLK_STAT_RES_BITS);

	rwb = kzalloc(sizeof(*rwb), GFP_KERNEL);
	if (!rwb)
		return -ENOMEM;

	for (i = 0; i < WBT_NUM_RWQ; i++) {
		atomic_set(&rwb->rq_wait[i].inflight, 0);
		init_waitqueue_head(&rwb->rq_wait[i].wait);
	}

	setup_timer(&rwb->window_timer, wb_timer_fn, (unsigned long) rwb);
	rwb->wc = 1;
	rwb->queue_depth = RWB_DEF_DEPTH;
	rwb->last_comp = rwb->last_issue = jiffies;
	rwb->queue = q;
	rwb->win_nsec = RWB_WINDOW_NSEC;
	rwb->enable_state = WBT_STATE_ON_DEFAULT;
	wbt_update_limits(rwb);

	/*
	 * Assign rwb, and turn on stats tracking for this queue
	 */
	q->rq_wb = rwb;
	blk_stat_enable(q);

	rwb->min_lat_nsec = wbt_default_latency_nsec(q);

	wbt_set_queue_depth(rwb, blk_queue_depth(q));
	wbt_set_write_cache(rwb, test_bit(QUEUE_FLAG_WC, &q->queue_flags));

	return 0;
}

void wbt_exit(struct request_queue *q)
{
	struct rq_wb *rwb = q->rq_wb;

	if (rwb) {
		del_timer_sync(&rwb->window_timer);
		q->rq_wb = NULL;
		kfree(rwb);
	}
}
@@ -0,0 +1,171 @@
#ifndef WB_THROTTLE_H
#define WB_THROTTLE_H

#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/wait.h>
#include <linux/timer.h>
#include <linux/ktime.h>

#include "blk-stat.h"

enum wbt_flags {
	WBT_TRACKED	= 1,	/* write, tracked for throttling */
	WBT_READ	= 2,	/* read */
	WBT_KSWAPD	= 4,	/* write, from kswapd */

	WBT_NR_BITS	= 3,	/* number of bits */
};

enum {
	WBT_NUM_RWQ	= 2,
};

/*
 * Enable states. Either off, or on by default (done at init time),
 * or on through manual setup in sysfs.
 */
enum {
	WBT_STATE_ON_DEFAULT	= 1,
	WBT_STATE_ON_MANUAL	= 2,
};

static inline void wbt_clear_state(struct blk_issue_stat *stat)
{
	stat->time &= BLK_STAT_TIME_MASK;
}

static inline enum wbt_flags wbt_stat_to_mask(struct blk_issue_stat *stat)
{
	return (stat->time & BLK_STAT_MASK) >> BLK_STAT_SHIFT;
}

static inline void wbt_track(struct blk_issue_stat *stat, enum wbt_flags wb_acct)
{
	stat->time |= ((u64) wb_acct) << BLK_STAT_SHIFT;
}

static inline bool wbt_is_tracked(struct blk_issue_stat *stat)
{
	return (stat->time >> BLK_STAT_SHIFT) & WBT_TRACKED;
}

static inline bool wbt_is_read(struct blk_issue_stat *stat)
{
	return (stat->time >> BLK_STAT_SHIFT) & WBT_READ;
}

struct rq_wait {
	wait_queue_head_t wait;
	atomic_t inflight;
};

struct rq_wb {
	/*
	 * Settings that govern how we throttle
	 */
	unsigned int wb_background;		/* background writeback */
	unsigned int wb_normal;			/* normal writeback */
	unsigned int wb_max;			/* max throughput writeback */
	int scale_step;
	bool scaled_max;

	short enable_state;			/* WBT_STATE_* */

	/*
	 * Number of consecutive periods where we don't have enough
	 * information to make a firm scale up/down decision.
	 */
	unsigned int unknown_cnt;

	u64 win_nsec;				/* default window size */
	u64 cur_win_nsec;			/* current window size */

	struct timer_list window_timer;

	s64 sync_issue;
	void *sync_cookie;

	unsigned int wc;
	unsigned int queue_depth;

	unsigned long last_issue;		/* last non-throttled issue */
	unsigned long last_comp;		/* last non-throttled comp */
	unsigned long min_lat_nsec;
	struct request_queue *queue;
	struct rq_wait rq_wait[WBT_NUM_RWQ];
};

static inline unsigned int wbt_inflight(struct rq_wb *rwb)
{
	unsigned int i, ret = 0;

	for (i = 0; i < WBT_NUM_RWQ; i++)
		ret += atomic_read(&rwb->rq_wait[i].inflight);

	return ret;
}

#ifdef CONFIG_BLK_WBT

void __wbt_done(struct rq_wb *, enum wbt_flags);
void wbt_done(struct rq_wb *, struct blk_issue_stat *);
enum wbt_flags wbt_wait(struct rq_wb *, struct bio *, spinlock_t *);
int wbt_init(struct request_queue *);
void wbt_exit(struct request_queue *);
void wbt_update_limits(struct rq_wb *);
void wbt_requeue(struct rq_wb *, struct blk_issue_stat *);
void wbt_issue(struct rq_wb *, struct blk_issue_stat *);
void wbt_disable_default(struct request_queue *);

void wbt_set_queue_depth(struct rq_wb *, unsigned int);
void wbt_set_write_cache(struct rq_wb *, bool);

u64 wbt_default_latency_nsec(struct request_queue *);

#else

static inline void __wbt_done(struct rq_wb *rwb, enum wbt_flags flags)
{
}
static inline void wbt_done(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline enum wbt_flags wbt_wait(struct rq_wb *rwb, struct bio *bio,
				      spinlock_t *lock)
{
	return 0;
}
static inline int wbt_init(struct request_queue *q)
{
	return -EINVAL;
}
static inline void wbt_exit(struct request_queue *q)
{
}
static inline void wbt_update_limits(struct rq_wb *rwb)
{
}
static inline void wbt_requeue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline void wbt_issue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline void wbt_disable_default(struct request_queue *q)
{
}
static inline void wbt_set_queue_depth(struct rq_wb *rwb, unsigned int depth)
{
}
static inline void wbt_set_write_cache(struct rq_wb *rwb, bool wc)
{
}
static inline u64 wbt_default_latency_nsec(struct request_queue *q)
{
	return 0;
}

#endif /* CONFIG_BLK_WBT */

#endif
@ -0,0 +1,348 @@
|
|||
/*
|
||||
* Zoned block device handling
|
||||
*
|
||||
* Copyright (c) 2015, Hannes Reinecke
|
||||
* Copyright (c) 2015, SUSE Linux GmbH
|
||||
*
|
||||
* Copyright (c) 2016, Damien Le Moal
|
||||
* Copyright (c) 2016, Western Digital
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/rbtree.h>
|
||||
#include <linux/blkdev.h>
|
||||
|
||||
static inline sector_t blk_zone_start(struct request_queue *q,
|
||||
sector_t sector)
|
||||
{
|
||||
sector_t zone_mask = blk_queue_zone_size(q) - 1;
|
||||
|
||||
return sector & ~zone_mask;
|
||||
}
|
||||
|
||||
/*
|
||||
* Check that a zone report belongs to the partition.
|
||||
* If yes, fix its start sector and write pointer, copy it in the
|
||||
* zone information array and return true. Return false otherwise.
|
||||
*/
|
||||
static bool blkdev_report_zone(struct block_device *bdev,
|
||||
struct blk_zone *rep,
|
||||
struct blk_zone *zone)
|
||||
{
|
||||
sector_t offset = get_start_sect(bdev);
|
||||
|
||||
if (rep->start < offset)
|
||||
return false;
|
||||
|
||||
rep->start -= offset;
|
||||
if (rep->start + rep->len > bdev->bd_part->nr_sects)
|
||||
return false;
|
||||
|
||||
if (rep->type == BLK_ZONE_TYPE_CONVENTIONAL)
|
||||
rep->wp = rep->start + rep->len;
|
||||
else
|
||||
rep->wp -= offset;
|
||||
memcpy(zone, rep, sizeof(struct blk_zone));
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* blkdev_report_zones - Get zones information
|
||||
* @bdev: Target block device
|
||||
* @sector: Sector from which to report zones
|
||||
* @zones: Array of zone structures where to return the zones information
|
||||
* @nr_zones: Number of zone structures in the zone array
|
||||
* @gfp_mask: Memory allocation flags (for bio_alloc)
|
||||
*
|
||||
* Description:
|
||||
* Get zone information starting from the zone containing @sector.
|
||||
* The number of zone information reported may be less than the number
|
||||
* requested by @nr_zones. The number of zones actually reported is
|
||||
* returned in @nr_zones.
|
||||
*/
|
||||
int blkdev_report_zones(struct block_device *bdev,
			sector_t sector,
			struct blk_zone *zones,
			unsigned int *nr_zones,
			gfp_t gfp_mask)
{
	struct request_queue *q = bdev_get_queue(bdev);
	struct blk_zone_report_hdr *hdr;
	unsigned int nrz = *nr_zones;
	struct page *page;
	unsigned int nr_rep;
	size_t rep_bytes;
	unsigned int nr_pages;
	struct bio *bio;
	struct bio_vec *bv;
	unsigned int i, n, nz;
	unsigned int ofst;
	void *addr;
	int ret;

	if (!q)
		return -ENXIO;

	if (!blk_queue_is_zoned(q))
		return -EOPNOTSUPP;

	if (!nrz)
		return 0;

	if (sector > bdev->bd_part->nr_sects) {
		*nr_zones = 0;
		return 0;
	}

	/*
	 * The zone report has a header. So make room for it in the
	 * payload. Also make sure that the report fits in a single BIO
	 * that will not be split down the stack.
	 */
	rep_bytes = sizeof(struct blk_zone_report_hdr) +
		sizeof(struct blk_zone) * nrz;
	rep_bytes = (rep_bytes + PAGE_SIZE - 1) & PAGE_MASK;
	if (rep_bytes > (queue_max_sectors(q) << 9))
		rep_bytes = queue_max_sectors(q) << 9;

	nr_pages = min_t(unsigned int, BIO_MAX_PAGES,
			 rep_bytes >> PAGE_SHIFT);
	nr_pages = min_t(unsigned int, nr_pages,
			 queue_max_segments(q));

	bio = bio_alloc(gfp_mask, nr_pages);
	if (!bio)
		return -ENOMEM;

	bio->bi_bdev = bdev;
	bio->bi_iter.bi_sector = blk_zone_start(q, sector);
	bio_set_op_attrs(bio, REQ_OP_ZONE_REPORT, 0);

	for (i = 0; i < nr_pages; i++) {
		page = alloc_page(gfp_mask);
		if (!page) {
			ret = -ENOMEM;
			goto out;
		}
		if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
			__free_page(page);
			break;
		}
	}

	if (i == 0)
		ret = -ENOMEM;
	else
		ret = submit_bio_wait(bio);
	if (ret)
		goto out;

	/*
	 * Process the report result: skip the header and go through the
	 * reported zones to fixup the zone information for partitions.
	 * At the same time, return the zone information into the
	 * zone array.
	 */
	n = 0;
	nz = 0;
	nr_rep = 0;
	bio_for_each_segment_all(bv, bio, i) {

		if (!bv->bv_page)
			break;

		addr = kmap_atomic(bv->bv_page);

		/* Get header in the first page */
		ofst = 0;
		if (!nr_rep) {
			hdr = (struct blk_zone_report_hdr *) addr;
			nr_rep = hdr->nr_zones;
			ofst = sizeof(struct blk_zone_report_hdr);
		}

		/* Fixup and report zones */
		while (ofst < bv->bv_len &&
		       n < nr_rep && nz < nrz) {
			if (blkdev_report_zone(bdev, addr + ofst, &zones[nz]))
				nz++;
			ofst += sizeof(struct blk_zone);
			n++;
		}

		kunmap_atomic(addr);

		if (n >= nr_rep || nz >= nrz)
			break;

	}

	*nr_zones = nz;
out:
	bio_for_each_segment_all(bv, bio, i)
		__free_page(bv->bv_page);
	bio_put(bio);

	return ret;
}
EXPORT_SYMBOL_GPL(blkdev_report_zones);

/**
 * blkdev_reset_zones - Reset zones write pointer
 * @bdev: Target block device
 * @sector: Start sector of the first zone to reset
 * @nr_sectors: Number of sectors, at least the length of one zone
 * @gfp_mask: Memory allocation flags (for bio_alloc)
 *
 * Description:
 *    Reset the write pointer of the zones contained in the range
 *    @sector..@sector+@nr_sectors. Specifying the entire disk sector range
 *    is valid, but the specified range should not contain conventional zones.
 */
int blkdev_reset_zones(struct block_device *bdev,
		       sector_t sector, sector_t nr_sectors,
		       gfp_t gfp_mask)
{
	struct request_queue *q = bdev_get_queue(bdev);
	sector_t zone_sectors;
	sector_t end_sector = sector + nr_sectors;
	struct bio *bio;
	int ret;

	if (!q)
		return -ENXIO;

	if (!blk_queue_is_zoned(q))
		return -EOPNOTSUPP;

	if (end_sector > bdev->bd_part->nr_sects)
		/* Out of range */
		return -EINVAL;

	/* Check alignment (handle eventual smaller last zone) */
	zone_sectors = blk_queue_zone_size(q);
	if (sector & (zone_sectors - 1))
		return -EINVAL;

	if ((nr_sectors & (zone_sectors - 1)) &&
	    end_sector != bdev->bd_part->nr_sects)
		return -EINVAL;

	while (sector < end_sector) {

		bio = bio_alloc(gfp_mask, 0);
		bio->bi_iter.bi_sector = sector;
		bio->bi_bdev = bdev;
		bio_set_op_attrs(bio, REQ_OP_ZONE_RESET, 0);

		ret = submit_bio_wait(bio);
		bio_put(bio);

		if (ret)
			return ret;

		sector += zone_sectors;

		/* This may take a while, so be nice to others */
		cond_resched();

	}

	return 0;
}
EXPORT_SYMBOL_GPL(blkdev_reset_zones);

/**
 * BLKREPORTZONE ioctl processing.
 * Called from blkdev_ioctl.
 */
int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
			      unsigned int cmd, unsigned long arg)
{
	void __user *argp = (void __user *)arg;
	struct request_queue *q;
	struct blk_zone_report rep;
	struct blk_zone *zones;
	int ret;

	if (!argp)
		return -EINVAL;

	q = bdev_get_queue(bdev);
	if (!q)
		return -ENXIO;

	if (!blk_queue_is_zoned(q))
		return -ENOTTY;

	if (!capable(CAP_SYS_ADMIN))
		return -EACCES;

	if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
		return -EFAULT;

	if (!rep.nr_zones)
		return -EINVAL;

	zones = kcalloc(rep.nr_zones, sizeof(struct blk_zone), GFP_KERNEL);
	if (!zones)
		return -ENOMEM;

	ret = blkdev_report_zones(bdev, rep.sector,
				  zones, &rep.nr_zones,
				  GFP_KERNEL);
	if (ret)
		goto out;

	if (copy_to_user(argp, &rep, sizeof(struct blk_zone_report))) {
		ret = -EFAULT;
		goto out;
	}

	if (rep.nr_zones) {
		if (copy_to_user(argp + sizeof(struct blk_zone_report), zones,
				 sizeof(struct blk_zone) * rep.nr_zones))
			ret = -EFAULT;
	}

out:
	kfree(zones);

	return ret;
}

/**
 * BLKRESETZONE ioctl processing.
 * Called from blkdev_ioctl.
 */
int blkdev_reset_zones_ioctl(struct block_device *bdev, fmode_t mode,
			     unsigned int cmd, unsigned long arg)
{
	void __user *argp = (void __user *)arg;
	struct request_queue *q;
	struct blk_zone_range zrange;

	if (!argp)
		return -EINVAL;

	q = bdev_get_queue(bdev);
	if (!q)
		return -ENXIO;

	if (!blk_queue_is_zoned(q))
		return -ENOTTY;

	if (!capable(CAP_SYS_ADMIN))
		return -EACCES;

	if (!(mode & FMODE_WRITE))
		return -EBADF;

	if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
		return -EFAULT;

	return blkdev_reset_zones(bdev, zrange.sector, zrange.nr_sectors,
				  GFP_KERNEL);
}

@@ -111,6 +111,7 @@ void blk_account_io_done(struct request *req);
 enum rq_atomic_flags {
 	REQ_ATOM_COMPLETE = 0,
 	REQ_ATOM_STARTED,
+	REQ_ATOM_POLL_SLEPT,
 };
 
 /*

@@ -130,7 +131,7 @@ static inline void blk_clear_rq_complete(struct request *rq)
 /*
  * Internal elevator interface
  */
-#define ELV_ON_HASH(rq) ((rq)->cmd_flags & REQ_HASHED)
+#define ELV_ON_HASH(rq) ((rq)->rq_flags & RQF_HASHED)
 
 void blk_insert_flush(struct request *rq);

@@ -247,7 +248,7 @@ extern int blk_update_nr_requests(struct request_queue *, unsigned int);
 static inline int blk_do_io_stat(struct request *rq)
 {
 	return rq->rq_disk &&
-	       (rq->cmd_flags & REQ_IO_STAT) &&
+	       (rq->rq_flags & RQF_IO_STAT) &&
 		(rq->cmd_type == REQ_TYPE_FS);
 }

@@ -161,6 +161,8 @@ failjob_rls_job:
  * Drivers/subsys should pass this to the queue init function.
  */
 void bsg_request_fn(struct request_queue *q)
+	__releases(q->queue_lock)
+	__acquires(q->queue_lock)
 {
 	struct device *dev = q->queuedata;
 	struct request *req;

@@ -176,7 +176,7 @@ static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
  * Check if sg_io_v4 from user is allowed and valid
  */
 static int
-bsg_validate_sgv4_hdr(struct request_queue *q, struct sg_io_v4 *hdr, int *rw)
+bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *rw)
 {
 	int ret = 0;
 

@@ -226,7 +226,7 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
 		hdr->dout_xfer_len, (unsigned long long) hdr->din_xferp,
 		hdr->din_xfer_len);
 
-	ret = bsg_validate_sgv4_hdr(q, hdr, &rw);
+	ret = bsg_validate_sgv4_hdr(hdr, &rw);
 	if (ret)
 		return ERR_PTR(ret);
 

@@ -16,6 +16,7 @@
 #include <linux/blktrace_api.h>
 #include <linux/blk-cgroup.h>
 #include "blk.h"
+#include "blk-wbt.h"
 
 /*
  * tunables

@@ -667,10 +668,10 @@ static inline void cfqg_put(struct cfq_group *cfqg)
 } while (0)
 
 static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg,
-					    struct cfq_group *curr_cfqg, int op,
-					    int op_flags)
+					    struct cfq_group *curr_cfqg,
+					    unsigned int op)
 {
-	blkg_rwstat_add(&cfqg->stats.queued, op, op_flags, 1);
+	blkg_rwstat_add(&cfqg->stats.queued, op, 1);
 	cfqg_stats_end_empty_time(&cfqg->stats);
 	cfqg_stats_set_start_group_wait_time(cfqg, curr_cfqg);
 }

@@ -684,30 +685,29 @@ static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg,
 #endif
 }
 
-static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int op,
-					       int op_flags)
+static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg,
+					       unsigned int op)
 {
-	blkg_rwstat_add(&cfqg->stats.queued, op, op_flags, -1);
+	blkg_rwstat_add(&cfqg->stats.queued, op, -1);
 }
 
-static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int op,
-					       int op_flags)
+static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg,
+					       unsigned int op)
 {
-	blkg_rwstat_add(&cfqg->stats.merged, op, op_flags, 1);
+	blkg_rwstat_add(&cfqg->stats.merged, op, 1);
 }
 
 static inline void cfqg_stats_update_completion(struct cfq_group *cfqg,
-			uint64_t start_time, uint64_t io_start_time, int op,
-			int op_flags)
+			uint64_t start_time, uint64_t io_start_time,
+			unsigned int op)
 {
 	struct cfqg_stats *stats = &cfqg->stats;
 	unsigned long long now = sched_clock();
 
 	if (time_after64(now, io_start_time))
-		blkg_rwstat_add(&stats->service_time, op, op_flags,
-				now - io_start_time);
+		blkg_rwstat_add(&stats->service_time, op, now - io_start_time);
 	if (time_after64(io_start_time, start_time))
-		blkg_rwstat_add(&stats->wait_time, op, op_flags,
+		blkg_rwstat_add(&stats->wait_time, op,
 				io_start_time - start_time);
 }
 

@@ -786,16 +786,16 @@ static inline void cfqg_put(struct cfq_group *cfqg) { }
 #define cfq_log_cfqg(cfqd, cfqg, fmt, args...)	do {} while (0)
 
 static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg,
-			struct cfq_group *curr_cfqg, int op, int op_flags) { }
+			struct cfq_group *curr_cfqg, unsigned int op) { }
 static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg,
 			uint64_t time, unsigned long unaccounted_time) { }
-static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int op,
-			int op_flags) { }
-static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int op,
-			int op_flags) { }
+static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg,
+			unsigned int op) { }
+static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg,
+			unsigned int op) { }
 static inline void cfqg_stats_update_completion(struct cfq_group *cfqg,
-			uint64_t start_time, uint64_t io_start_time, int op,
-			int op_flags) { }
+			uint64_t start_time, uint64_t io_start_time,
+			unsigned int op) { }
 
 #endif	/* CONFIG_CFQ_GROUP_IOSCHED */
 

@@ -912,15 +912,6 @@ static inline struct cfq_data *cic_to_cfqd(struct cfq_io_cq *cic)
 	return cic->icq.q->elevator->elevator_data;
 }
 
-/*
- * We regard a request as SYNC, if it's either a read or has the SYNC bit
- * set (in which case it could also be a direct WRITE).
- */
-static inline bool cfq_bio_sync(struct bio *bio)
-{
-	return bio_data_dir(bio) == READ || (bio->bi_opf & REQ_SYNC);
-}
-
 /*
  * scheduler run of queue, if there are requests pending and no one in the
  * driver that will restart queueing

@@ -1596,7 +1587,7 @@ static struct blkcg_policy_data *cfq_cpd_alloc(gfp_t gfp)
 {
 	struct cfq_group_data *cgd;
 
-	cgd = kzalloc(sizeof(*cgd), GFP_KERNEL);
+	cgd = kzalloc(sizeof(*cgd), gfp);
 	if (!cgd)
 		return NULL;
 	return &cgd->cpd;

@@ -2474,10 +2465,10 @@ static void cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq)
 {
 	elv_rb_del(&cfqq->sort_list, rq);
 	cfqq->queued[rq_is_sync(rq)]--;
-	cfqg_stats_update_io_remove(RQ_CFQG(rq), req_op(rq), rq->cmd_flags);
+	cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags);
 	cfq_add_rq_rb(rq);
 	cfqg_stats_update_io_add(RQ_CFQG(rq), cfqq->cfqd->serving_group,
-				 req_op(rq), rq->cmd_flags);
+				 rq->cmd_flags);
 }
 
 static struct request *

@@ -2491,7 +2482,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
 	if (!cic)
 		return NULL;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_to_cfqq(cic, op_is_sync(bio->bi_opf));
 	if (cfqq)
 		return elv_rb_find(&cfqq->sort_list, bio_end_sector(bio));
 

@@ -2530,7 +2521,7 @@ static void cfq_remove_request(struct request *rq)
 	cfq_del_rq_rb(rq);
 
 	cfqq->cfqd->rq_queued--;
-	cfqg_stats_update_io_remove(RQ_CFQG(rq), req_op(rq), rq->cmd_flags);
+	cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags);
 	if (rq->cmd_flags & REQ_PRIO) {
 		WARN_ON(!cfqq->prio_pending);
 		cfqq->prio_pending--;

@@ -2565,7 +2556,7 @@ static void cfq_merged_request(struct request_queue *q, struct request *req,
 static void cfq_bio_merged(struct request_queue *q, struct request *req,
 			   struct bio *bio)
 {
-	cfqg_stats_update_io_merged(RQ_CFQG(req), bio_op(bio), bio->bi_opf);
+	cfqg_stats_update_io_merged(RQ_CFQG(req), bio->bi_opf);
 }
 
 static void

@@ -2588,7 +2579,7 @@ cfq_merged_requests(struct request_queue *q, struct request *rq,
 	if (cfqq->next_rq == next)
 		cfqq->next_rq = rq;
 	cfq_remove_request(next);
-	cfqg_stats_update_io_merged(RQ_CFQG(rq), req_op(next), next->cmd_flags);
+	cfqg_stats_update_io_merged(RQ_CFQG(rq), next->cmd_flags);
 
 	cfqq = RQ_CFQQ(next);
 	/*

@@ -2605,13 +2596,14 @@ static int cfq_allow_bio_merge(struct request_queue *q, struct request *rq,
 			       struct bio *bio)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
+	bool is_sync = op_is_sync(bio->bi_opf);
 	struct cfq_io_cq *cic;
 	struct cfq_queue *cfqq;
 
 	/*
 	 * Disallow merge of a sync bio into an async request.
 	 */
-	if (cfq_bio_sync(bio) && !rq_is_sync(rq))
+	if (is_sync && !rq_is_sync(rq))
 		return false;
 
 	/*

@@ -2622,7 +2614,7 @@ static int cfq_allow_bio_merge(struct request_queue *q, struct request *rq,
 	if (!cic)
 		return false;
 
-	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
+	cfqq = cic_to_cfqq(cic, is_sync);
 	return cfqq == RQ_CFQQ(rq);
 }
 

@@ -3771,9 +3763,11 @@ static void check_blkcg_changed(struct cfq_io_cq *cic, struct bio *bio)
 	struct cfq_data *cfqd = cic_to_cfqd(cic);
 	struct cfq_queue *cfqq;
 	uint64_t serial_nr;
+	bool nonroot_cg;
 
 	rcu_read_lock();
 	serial_nr = bio_blkcg(bio)->css.serial_nr;
+	nonroot_cg = bio_blkcg(bio) != &blkcg_root;
 	rcu_read_unlock();
 
 	/*

@@ -3783,6 +3777,14 @@ static void check_blkcg_changed(struct cfq_io_cq *cic, struct bio *bio)
 	if (unlikely(!cfqd) || likely(cic->blkcg_serial_nr == serial_nr))
 		return;
 
+	/*
+	 * If we have a non-root cgroup, we can depend on that to
+	 * do proper throttling of writes. Turn off wbt for that
+	 * case, if it was enabled by default.
+	 */
+	if (nonroot_cg)
+		wbt_disable_default(cfqd->queue);
+
 	/*
 	 * Drop reference to queues. New queues will be assigned in new
 	 * group upon arrival of fresh requests.

@@ -3854,7 +3856,8 @@ cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct cfq_io_cq *cic,
 			goto out;
 	}
 
-	cfqq = kmem_cache_alloc_node(cfq_pool, GFP_NOWAIT | __GFP_ZERO,
+	cfqq = kmem_cache_alloc_node(cfq_pool,
+				     GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN,
 				     cfqd->queue->node);
 	if (!cfqq) {
 		cfqq = &cfqd->oom_cfqq;

@@ -3923,6 +3926,12 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	cfqq->seek_history |= (sdist > CFQQ_SEEK_THR);
 }
 
+static inline bool req_noidle(struct request *req)
+{
+	return req_op(req) == REQ_OP_WRITE &&
+		(req->cmd_flags & (REQ_SYNC | REQ_IDLE)) == REQ_SYNC;
+}
+
 /*
  * Disable idle window if the process thinks too long or seeks so much that
  * it doesn't matter

@@ -3944,7 +3953,7 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	if (cfqq->queued[0] + cfqq->queued[1] >= 4)
 		cfq_mark_cfqq_deep(cfqq);
 
-	if (cfqq->next_rq && (cfqq->next_rq->cmd_flags & REQ_NOIDLE))
+	if (cfqq->next_rq && req_noidle(cfqq->next_rq))
 		enable_idle = 0;
 	else if (!atomic_read(&cic->icq.ioc->active_ref) ||
 		 !cfqd->cfq_slice_idle ||

@@ -4142,7 +4151,7 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq)
 	rq->fifo_time = ktime_get_ns() + cfqd->cfq_fifo_expire[rq_is_sync(rq)];
 	list_add_tail(&rq->queuelist, &cfqq->fifo);
 	cfq_add_rq_rb(rq);
-	cfqg_stats_update_io_add(RQ_CFQG(rq), cfqd->serving_group, req_op(rq),
+	cfqg_stats_update_io_add(RQ_CFQG(rq), cfqd->serving_group,
 				 rq->cmd_flags);
 	cfq_rq_enqueued(cfqd, cfqq, rq);
 }

@@ -4229,8 +4238,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 	const int sync = rq_is_sync(rq);
 	u64 now = ktime_get_ns();
 
-	cfq_log_cfqq(cfqd, cfqq, "complete rqnoidle %d",
-		     !!(rq->cmd_flags & REQ_NOIDLE));
+	cfq_log_cfqq(cfqd, cfqq, "complete rqnoidle %d", req_noidle(rq));
 
 	cfq_update_hw_tag(cfqd);
 

@@ -4240,8 +4248,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 	cfqq->dispatched--;
 	(RQ_CFQG(rq))->dispatched--;
 	cfqg_stats_update_completion(cfqq->cfqg, rq_start_time_ns(rq),
-				     rq_io_start_time_ns(rq), req_op(rq),
-				     rq->cmd_flags);
+				     rq_io_start_time_ns(rq), rq->cmd_flags);
 
 	cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]--;
 

@@ -4319,14 +4326,14 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 		cfq_schedule_dispatch(cfqd);
 }
 
-static void cfqq_boost_on_prio(struct cfq_queue *cfqq, int op_flags)
+static void cfqq_boost_on_prio(struct cfq_queue *cfqq, unsigned int op)
 {
 	/*
 	 * If REQ_PRIO is set, boost class and prio level, if it's below
 	 * BE/NORM. If prio is not set, restore the potentially boosted
 	 * class/prio level.
 	 */
-	if (!(op_flags & REQ_PRIO)) {
+	if (!(op & REQ_PRIO)) {
 		cfqq->ioprio_class = cfqq->org_ioprio_class;
 		cfqq->ioprio = cfqq->org_ioprio;
 	} else {

@@ -4347,7 +4354,7 @@ static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 	return ELV_MQUEUE_MAY;
 }
 
-static int cfq_may_queue(struct request_queue *q, int op, int op_flags)
+static int cfq_may_queue(struct request_queue *q, unsigned int op)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct task_struct *tsk = current;

@@ -4364,10 +4371,10 @@ static int cfq_may_queue(struct request_queue *q, unsigned int op)
 	if (!cic)
 		return ELV_MQUEUE_MAY;
 
-	cfqq = cic_to_cfqq(cic, rw_is_sync(op, op_flags));
+	cfqq = cic_to_cfqq(cic, op_is_sync(op));
 	if (cfqq) {
 		cfq_init_prio_data(cfqq, cic);
-		cfqq_boost_on_prio(cfqq, op_flags);
+		cfqq_boost_on_prio(cfqq, op);
 
 		return __cfq_may_queue(cfqq);
 	}

@@ -245,31 +245,31 @@ EXPORT_SYMBOL(elevator_exit);
 static inline void __elv_rqhash_del(struct request *rq)
 {
 	hash_del(&rq->hash);
-	rq->cmd_flags &= ~REQ_HASHED;
+	rq->rq_flags &= ~RQF_HASHED;
 }
 
-static void elv_rqhash_del(struct request_queue *q, struct request *rq)
+void elv_rqhash_del(struct request_queue *q, struct request *rq)
 {
 	if (ELV_ON_HASH(rq))
 		__elv_rqhash_del(rq);
 }
 
-static void elv_rqhash_add(struct request_queue *q, struct request *rq)
+void elv_rqhash_add(struct request_queue *q, struct request *rq)
 {
 	struct elevator_queue *e = q->elevator;
 
 	BUG_ON(ELV_ON_HASH(rq));
 	hash_add(e->hash, &rq->hash, rq_hash_key(rq));
-	rq->cmd_flags |= REQ_HASHED;
+	rq->rq_flags |= RQF_HASHED;
 }
 
-static void elv_rqhash_reposition(struct request_queue *q, struct request *rq)
+void elv_rqhash_reposition(struct request_queue *q, struct request *rq)
 {
 	__elv_rqhash_del(rq);
 	elv_rqhash_add(q, rq);
 }
 
-static struct request *elv_rqhash_find(struct request_queue *q, sector_t offset)
+struct request *elv_rqhash_find(struct request_queue *q, sector_t offset)
 {
 	struct elevator_queue *e = q->elevator;
 	struct hlist_node *next;

@@ -352,7 +352,6 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 {
 	sector_t boundary;
 	struct list_head *entry;
-	int stop_flags;
 
 	if (q->last_merge == rq)
 		q->last_merge = NULL;

@@ -362,7 +361,6 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 	q->nr_sorted--;
 
 	boundary = q->end_sector;
-	stop_flags = REQ_SOFTBARRIER | REQ_STARTED;
 	list_for_each_prev(entry, &q->queue_head) {
 		struct request *pos = list_entry_rq(entry);
 

@@ -370,7 +368,7 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
 			break;
 		if (rq_data_dir(rq) != rq_data_dir(pos))
 			break;
-		if (pos->cmd_flags & stop_flags)
+		if (pos->rq_flags & (RQF_STARTED | RQF_SOFTBARRIER))
 			break;
 		if (blk_rq_pos(rq) >= boundary) {
 			if (blk_rq_pos(pos) < boundary)

@@ -510,7 +508,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 			struct request *next)
 {
 	struct elevator_queue *e = q->elevator;
-	const int next_sorted = next->cmd_flags & REQ_SORTED;
+	const int next_sorted = next->rq_flags & RQF_SORTED;
 
 	if (next_sorted && e->type->ops.elevator_merge_req_fn)
 		e->type->ops.elevator_merge_req_fn(q, rq, next);

@@ -537,13 +535,13 @@ void elv_bio_merged(struct request_queue *q, struct request *rq,
 #ifdef CONFIG_PM
 static void blk_pm_requeue_request(struct request *rq)
 {
-	if (rq->q->dev && !(rq->cmd_flags & REQ_PM))
+	if (rq->q->dev && !(rq->rq_flags & RQF_PM))
 		rq->q->nr_pending--;
 }
 
 static void blk_pm_add_request(struct request_queue *q, struct request *rq)
 {
-	if (q->dev && !(rq->cmd_flags & REQ_PM) && q->nr_pending++ == 0 &&
+	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
 	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
 		pm_request_resume(q->dev);
 }

@@ -563,11 +561,11 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight[rq_is_sync(rq)]--;
-		if (rq->cmd_flags & REQ_SORTED)
+		if (rq->rq_flags & RQF_SORTED)
 			elv_deactivate_rq(q, rq);
 	}
 
-	rq->cmd_flags &= ~REQ_STARTED;
+	rq->rq_flags &= ~RQF_STARTED;
 
 	blk_pm_requeue_request(rq);
 

@@ -597,13 +595,13 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 
 	rq->q = q;
 
-	if (rq->cmd_flags & REQ_SOFTBARRIER) {
+	if (rq->rq_flags & RQF_SOFTBARRIER) {
 		/* barriers are scheduling boundary, update end_sector */
 		if (rq->cmd_type == REQ_TYPE_FS) {
 			q->end_sector = rq_end_sector(rq);
 			q->boundary_rq = rq;
 		}
-	} else if (!(rq->cmd_flags & REQ_ELVPRIV) &&
+	} else if (!(rq->rq_flags & RQF_ELVPRIV) &&
 		    (where == ELEVATOR_INSERT_SORT ||
 		     where == ELEVATOR_INSERT_SORT_MERGE))
 		where = ELEVATOR_INSERT_BACK;

@@ -611,12 +609,12 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	switch (where) {
 	case ELEVATOR_INSERT_REQUEUE:
 	case ELEVATOR_INSERT_FRONT:
-		rq->cmd_flags |= REQ_SOFTBARRIER;
+		rq->rq_flags |= RQF_SOFTBARRIER;
 		list_add(&rq->queuelist, &q->queue_head);
 		break;
 
 	case ELEVATOR_INSERT_BACK:
-		rq->cmd_flags |= REQ_SOFTBARRIER;
+		rq->rq_flags |= RQF_SOFTBARRIER;
 		elv_drain_elevator(q);
 		list_add_tail(&rq->queuelist, &q->queue_head);
 		/*

@@ -642,7 +640,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 		break;
 	case ELEVATOR_INSERT_SORT:
 		BUG_ON(rq->cmd_type != REQ_TYPE_FS);
-		rq->cmd_flags |= REQ_SORTED;
+		rq->rq_flags |= RQF_SORTED;
 		q->nr_sorted++;
 		if (rq_mergeable(rq)) {
 			elv_rqhash_add(q, rq);

@@ -659,7 +657,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 		break;
 
 	case ELEVATOR_INSERT_FLUSH:
-		rq->cmd_flags |= REQ_SOFTBARRIER;
+		rq->rq_flags |= RQF_SOFTBARRIER;
 		blk_insert_flush(rq);
 		break;
 	default:

@@ -716,12 +714,12 @@ void elv_put_request(struct request_queue *q, struct request *rq)
 		e->type->ops.elevator_put_req_fn(rq);
 }
 
-int elv_may_queue(struct request_queue *q, int op, int op_flags)
+int elv_may_queue(struct request_queue *q, unsigned int op)
 {
 	struct elevator_queue *e = q->elevator;
 
 	if (e->type->ops.elevator_may_queue_fn)
-		return e->type->ops.elevator_may_queue_fn(q, op, op_flags);
+		return e->type->ops.elevator_may_queue_fn(q, op);
 
 	return ELV_MQUEUE_MAY;
 }

@@ -735,7 +733,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight[rq_is_sync(rq)]--;
-		if ((rq->cmd_flags & REQ_SORTED) &&
+		if ((rq->rq_flags & RQF_SORTED) &&
 		    e->type->ops.elevator_completed_req_fn)
 			e->type->ops.elevator_completed_req_fn(q, rq);
 	}

@@ -519,6 +519,10 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
 						BLKDEV_DISCARD_SECURE);
 	case BLKZEROOUT:
 		return blk_ioctl_zeroout(bdev, mode, arg);
+	case BLKREPORTZONE:
+		return blkdev_report_zones_ioctl(bdev, mode, cmd, arg);
+	case BLKRESETZONE:
+		return blkdev_reset_zones_ioctl(bdev, mode, cmd, arg);
 	case HDIO_GETGEO:
 		return blkdev_getgeo(bdev, argp);
 	case BLKRAGET:

@@ -430,6 +430,56 @@ static int drop_partitions(struct gendisk *disk, struct block_device *bdev)
 	return 0;
 }
 
+static bool part_zone_aligned(struct gendisk *disk,
+			      struct block_device *bdev,
+			      sector_t from, sector_t size)
+{
+	unsigned int zone_size = bdev_zone_size(bdev);
+
+	/*
+	 * If this function is called, then the disk is a zoned block device
+	 * (host-aware or host-managed). This can be detected even if the
+	 * zoned block device support is disabled (CONFIG_BLK_DEV_ZONED not
+	 * set). In this case, however, only host-aware devices will be seen
+	 * as a block device is not created for host-managed devices. Without
+	 * zoned block device support, host-aware drives can still be used as
+	 * regular block devices (no zone operation) and their zone size will
+	 * be reported as 0. Allow this case.
+	 */
+	if (!zone_size)
+		return true;
+
+	/*
+	 * Check partition start and size alignment. If the drive has a
+	 * smaller last runt zone, ignore it and allow the partition to
+	 * use it. Check the zone size too: it should be a power of 2 number
+	 * of sectors.
+	 */
+	if (WARN_ON_ONCE(!is_power_of_2(zone_size))) {
+		u32 rem;
+
+		div_u64_rem(from, zone_size, &rem);
+		if (rem)
+			return false;
+		if ((from + size) < get_capacity(disk)) {
+			div_u64_rem(size, zone_size, &rem);
+			if (rem)
+				return false;
+		}
+
+	} else {
+
+		if (from & (zone_size - 1))
+			return false;
+		if ((from + size) < get_capacity(disk) &&
+		    (size & (zone_size - 1)))
+			return false;
+
+	}
+
+	return true;
+}
+
 int rescan_partitions(struct gendisk *disk, struct block_device *bdev)
 {
 	struct parsed_partitions *state = NULL;
@@ -529,6 +579,21 @@ rescan:
 		}
 	}
 
+	/*
+	 * On a zoned block device, partitions should be aligned on the
+	 * device zone size (i.e. zone boundary crossing not allowed).
+	 * Otherwise, resetting the write pointer of the last zone of
+	 * one partition may impact the following partition.
+	 */
+	if (bdev_is_zoned(bdev) &&
+	    !part_zone_aligned(disk, bdev, from, size)) {
+		printk(KERN_WARNING
+		       "%s: p%d start %llu+%llu is not zone aligned\n",
+		       disk->disk_name, p, (unsigned long long) from,
+		       (unsigned long long) size);
+		continue;
+	}
+
 	part = add_partition(disk, p, from, size,
 			     state->parts[p].flags,
 			     &state->parts[p].info);

@@ -384,9 +384,12 @@ config BLK_DEV_RAM_DAX
 	  allocated from highmem (only a problem for highmem systems).
 
 config CDROM_PKTCDVD
-	tristate "Packet writing on CD/DVD media"
+	tristate "Packet writing on CD/DVD media (DEPRECATED)"
 	depends on !UML
 	help
+	  Note: This driver is deprecated and will be removed from the
+	  kernel in the near future!
+
 	  If you have a CDROM/DVD drive that supports packet writing, say
 	  Y to include support. It should work with any MMC/Mt Fuji
 	  compliant ATAPI or SCSI drive, which is just about any newer

@@ -395,44 +395,9 @@ static long brd_direct_access(struct block_device *bdev, sector_t sector,
 #define brd_direct_access NULL
 #endif
 
-static int brd_ioctl(struct block_device *bdev, fmode_t mode,
-			unsigned int cmd, unsigned long arg)
-{
-	int error;
-	struct brd_device *brd = bdev->bd_disk->private_data;
-
-	if (cmd != BLKFLSBUF)
-		return -ENOTTY;
-
-	/*
-	 * ram device BLKFLSBUF has special semantics, we want to actually
-	 * release and destroy the ramdisk data.
-	 */
-	mutex_lock(&brd_mutex);
-	mutex_lock(&bdev->bd_mutex);
-	error = -EBUSY;
-	if (bdev->bd_openers <= 1) {
-		/*
-		 * Kill the cache first, so it isn't written back to the
-		 * device.
-		 *
-		 * Another thread might instantiate more buffercache here,
-		 * but there is not much we can do to close that race.
-		 */
-		kill_bdev(bdev);
-		brd_free_pages(brd);
-		error = 0;
-	}
-	mutex_unlock(&bdev->bd_mutex);
-	mutex_unlock(&brd_mutex);
-
-	return error;
-}
-
 static const struct block_device_operations brd_fops = {
 	.owner =		THIS_MODULE,
 	.rw_page =		brd_rw_page,
-	.ioctl =		brd_ioctl,
 	.direct_access =	brd_direct_access,
 };
 

@@ -443,8 +408,8 @@ static int rd_nr = CONFIG_BLK_DEV_RAM_COUNT;
 module_param(rd_nr, int, S_IRUGO);
 MODULE_PARM_DESC(rd_nr, "Maximum number of brd devices");
 
-int rd_size = CONFIG_BLK_DEV_RAM_SIZE;
-module_param(rd_size, int, S_IRUGO);
+unsigned long rd_size = CONFIG_BLK_DEV_RAM_SIZE;
+module_param(rd_size, ulong, S_IRUGO);
 MODULE_PARM_DESC(rd_size, "Size of each RAM disk in kbytes.");
 
 static int max_part = 1;
@@ -148,7 +148,7 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
 
 	if ((op == REQ_OP_WRITE) && !test_bit(MD_NO_FUA, &device->flags))
 		op_flags |= REQ_FUA | REQ_PREFLUSH;
-	op_flags |= REQ_SYNC | REQ_NOIDLE;
+	op_flags |= REQ_SYNC;
 
 	bio = bio_alloc_drbd(GFP_NOIO);
 	bio->bi_bdev = bdev->md_bdev;
@@ -1266,7 +1266,7 @@ static void submit_one_flush(struct drbd_device *device, struct issue_flush_cont
 	bio->bi_bdev = device->ldev->backing_bdev;
 	bio->bi_private = octx;
 	bio->bi_end_io = one_flush_endio;
-	bio_set_op_attrs(bio, REQ_OP_FLUSH, WRITE_FLUSH);
+	bio->bi_opf = REQ_OP_FLUSH | REQ_PREFLUSH;
 
 	device->flush_jif = jiffies;
 	set_bit(FLUSH_PENDING, &device->flags);
@@ -1648,20 +1648,8 @@ next_bio:
 
 	page_chain_for_each(page) {
 		unsigned len = min_t(unsigned, data_size, PAGE_SIZE);
-		if (!bio_add_page(bio, page, len, 0)) {
-			/* A single page must always be possible!
-			 * But in case it fails anyways,
-			 * we deal with it, and complain (below). */
-			if (bio->bi_vcnt == 0) {
-				drbd_err(device,
-					"bio_add_page failed for len=%u, "
-					"bi_vcnt=0 (bi_sector=%llu)\n",
-					len, (uint64_t)bio->bi_iter.bi_sector);
-				err = -ENOSPC;
-				goto fail;
-			}
+		if (!bio_add_page(bio, page, len, 0))
 			goto next_bio;
-		}
 		data_size -= len;
 		sector += len >> 9;
 		--nr_pages;
@@ -3806,14 +3806,10 @@ static int __floppy_read_block_0(struct block_device *bdev, int drive)
 
 	cbdata.drive = drive;
 
-	bio_init(&bio);
-	bio.bi_io_vec = &bio_vec;
-	bio_vec.bv_page = page;
-	bio_vec.bv_len = size;
-	bio_vec.bv_offset = 0;
-	bio.bi_vcnt = 1;
-	bio.bi_iter.bi_size = size;
+	bio_init(&bio, &bio_vec, 1);
 	bio.bi_bdev = bdev;
+	bio_add_page(&bio, page, size, 0);
+
 	bio.bi_iter.bi_sector = 0;
 	bio.bi_flags |= (1 << BIO_QUIET);
 	bio.bi_private = &cbdata;
@@ -1646,7 +1646,7 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 	blk_mq_start_request(bd->rq);
 
 	if (lo->lo_state != Lo_bound)
-		return -EIO;
+		return BLK_MQ_RQ_QUEUE_ERROR;
 
 	switch (req_op(cmd->rq)) {
 	case REQ_OP_FLUSH:
@@ -2035,18 +2035,14 @@ static int exec_drive_taskfile(struct driver_data *dd,
 	taskout = req_task->out_size;
 	taskin = req_task->in_size;
 	/* 130560 = 512 * 0xFF*/
-	if (taskin > 130560 || taskout > 130560) {
-		err = -EINVAL;
-		goto abort;
-	}
+	if (taskin > 130560 || taskout > 130560)
+		return -EINVAL;
 
 	if (taskout) {
 		outbuf = memdup_user(buf + outtotal, taskout);
-		if (IS_ERR(outbuf)) {
-			err = PTR_ERR(outbuf);
-			outbuf = NULL;
-			goto abort;
-		}
+		if (IS_ERR(outbuf))
+			return PTR_ERR(outbuf);
 
 		outbuf_dma = pci_map_single(dd->pdev,
 					    outbuf,
 					    taskout,
@@ -3937,8 +3933,10 @@ static int mtip_block_initialize(struct driver_data *dd)
 
 	/* Generate the disk name, implemented same as in sd.c */
 	do {
-		if (!ida_pre_get(&rssd_index_ida, GFP_KERNEL))
+		if (!ida_pre_get(&rssd_index_ida, GFP_KERNEL)) {
+			rv = -ENOMEM;
 			goto ida_get_error;
+		}
 
 		spin_lock(&rssd_index_lock);
 		rv = ida_get_new(&rssd_index_ida, &index);
@@ -41,26 +41,34 @@
 
 #include <linux/nbd.h>
 
+struct nbd_sock {
+	struct socket *sock;
+	struct mutex tx_lock;
+};
+
 #define NBD_TIMEDOUT			0
 #define NBD_DISCONNECT_REQUESTED	1
+#define NBD_DISCONNECTED		2
+#define NBD_RUNNING			3
 
 struct nbd_device {
 	u32 flags;
 	unsigned long runtime_flags;
-	struct socket * sock;	/* If == NULL, device is not ready, yet */
+	struct nbd_sock **socks;
 	int magic;
 
 	struct blk_mq_tag_set tag_set;
 
-	struct mutex tx_lock;
+	struct mutex config_lock;
 	struct gendisk *disk;
-	int blksize;
+	int num_connections;
+	atomic_t recv_threads;
+	wait_queue_head_t recv_wq;
+	loff_t blksize;
 	loff_t bytesize;
 
-	/* protects initialization and shutdown of the socket */
-	spinlock_t sock_lock;
 	struct task_struct *task_recv;
-	struct task_struct *task_send;
+	struct task_struct *task_setup;
 
 #if IS_ENABLED(CONFIG_DEBUG_FS)
 	struct dentry *dbg_dir;
@@ -69,7 +77,7 @@ struct nbd_device {
 
 struct nbd_cmd {
 	struct nbd_device *nbd;
-	struct list_head list;
+	struct completion send_complete;
 };
 
 #if IS_ENABLED(CONFIG_DEBUG_FS)
@@ -126,7 +134,7 @@ static void nbd_size_update(struct nbd_device *nbd, struct block_device *bdev)
 }
 
 static int nbd_size_set(struct nbd_device *nbd, struct block_device *bdev,
-			int blocksize, int nr_blocks)
+			loff_t blocksize, loff_t nr_blocks)
 {
 	int ret;
 
@@ -135,7 +143,7 @@ static int nbd_size_set(struct nbd_device *nbd, struct block_device *bdev,
 		return ret;
 
 	nbd->blksize = blocksize;
-	nbd->bytesize = (loff_t)blocksize * (loff_t)nr_blocks;
+	nbd->bytesize = blocksize * nr_blocks;
 
 	nbd_size_update(nbd, bdev);
 
@@ -159,22 +167,20 @@ static void nbd_end_request(struct nbd_cmd *cmd)
  */
 static void sock_shutdown(struct nbd_device *nbd)
 {
-	struct socket *sock;
+	int i;
 
-	spin_lock(&nbd->sock_lock);
-
-	if (!nbd->sock) {
-		spin_unlock(&nbd->sock_lock);
+	if (nbd->num_connections == 0)
 		return;
-	}
+	if (test_and_set_bit(NBD_DISCONNECTED, &nbd->runtime_flags))
+		return;
+
+	for (i = 0; i < nbd->num_connections; i++) {
+		struct nbd_sock *nsock = nbd->socks[i];
+		mutex_lock(&nsock->tx_lock);
+		kernel_sock_shutdown(nsock->sock, SHUT_RDWR);
+		mutex_unlock(&nsock->tx_lock);
+	}
-
-	sock = nbd->sock;
-	dev_warn(disk_to_dev(nbd->disk), "shutting down socket\n");
-	nbd->sock = NULL;
-	spin_unlock(&nbd->sock_lock);
-
-	kernel_sock_shutdown(sock, SHUT_RDWR);
-	sockfd_put(sock);
+	dev_warn(disk_to_dev(nbd->disk), "shutting down sockets\n");
 }
 
 static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
@@ -182,42 +188,38 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
 	struct nbd_device *nbd = cmd->nbd;
-	struct socket *sock = NULL;
-
-	spin_lock(&nbd->sock_lock);
-
-	set_bit(NBD_TIMEDOUT, &nbd->runtime_flags);
-
-	if (nbd->sock) {
-		sock = nbd->sock;
-		get_file(sock->file);
-	}
-
-	spin_unlock(&nbd->sock_lock);
-	if (sock) {
-		kernel_sock_shutdown(sock, SHUT_RDWR);
-		sockfd_put(sock);
-	}
-
-	req->errors++;
 	dev_err(nbd_to_dev(nbd), "Connection timed out, shutting down connection\n");
+	set_bit(NBD_TIMEDOUT, &nbd->runtime_flags);
+	req->errors++;
+
+	/*
+	 * If our disconnect packet times out then we're already holding the
+	 * config_lock and could deadlock here, so just set an error and return,
+	 * we'll handle shutting everything down later.
+	 */
+	if (req->cmd_type == REQ_TYPE_DRV_PRIV)
+		return BLK_EH_HANDLED;
+	mutex_lock(&nbd->config_lock);
+	sock_shutdown(nbd);
+	mutex_unlock(&nbd->config_lock);
 	return BLK_EH_HANDLED;
 }
 
 /*
  * Send or receive packet.
  */
-static int sock_xmit(struct nbd_device *nbd, int send, void *buf, int size,
-		int msg_flags)
+static int sock_xmit(struct nbd_device *nbd, int index, int send, void *buf,
+		     int size, int msg_flags)
 {
-	struct socket *sock = nbd->sock;
+	struct socket *sock = nbd->socks[index]->sock;
 	int result;
 	struct msghdr msg;
 	struct kvec iov;
 	unsigned long pflags = current->flags;
 
 	if (unlikely(!sock)) {
-		dev_err(disk_to_dev(nbd->disk),
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
 			"Attempted %s on closed socket in sock_xmit\n",
 			(send ? "send" : "recv"));
 		return -EINVAL;
@@ -254,29 +256,29 @@ static int sock_xmit(struct nbd_device *nbd, int send, void *buf, int size,
 	return result;
 }
 
-static inline int sock_send_bvec(struct nbd_device *nbd, struct bio_vec *bvec,
-				 int flags)
+static inline int sock_send_bvec(struct nbd_device *nbd, int index,
+				 struct bio_vec *bvec, int flags)
 {
 	int result;
 	void *kaddr = kmap(bvec->bv_page);
-	result = sock_xmit(nbd, 1, kaddr + bvec->bv_offset,
+	result = sock_xmit(nbd, index, 1, kaddr + bvec->bv_offset,
 			   bvec->bv_len, flags);
 	kunmap(bvec->bv_page);
 	return result;
 }
 
 /* always call with the tx_lock held */
-static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd)
+static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 {
 	struct request *req = blk_mq_rq_from_pdu(cmd);
 	int result, flags;
 	struct nbd_request request;
 	unsigned long size = blk_rq_bytes(req);
+	struct bio *bio;
 	u32 type;
+	u32 tag = blk_mq_unique_tag(req);
 
-	if (req->cmd_type == REQ_TYPE_DRV_PRIV)
-		type = NBD_CMD_DISC;
-	else if (req_op(req) == REQ_OP_DISCARD)
+	if (req_op(req) == REQ_OP_DISCARD)
 		type = NBD_CMD_TRIM;
 	else if (req_op(req) == REQ_OP_FLUSH)
 		type = NBD_CMD_FLUSH;
@@ -288,73 +290,89 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd)
 	memset(&request, 0, sizeof(request));
 	request.magic = htonl(NBD_REQUEST_MAGIC);
 	request.type = htonl(type);
-	if (type != NBD_CMD_FLUSH && type != NBD_CMD_DISC) {
+	if (type != NBD_CMD_FLUSH) {
 		request.from = cpu_to_be64((u64)blk_rq_pos(req) << 9);
 		request.len = htonl(size);
 	}
-	memcpy(request.handle, &req->tag, sizeof(req->tag));
+	memcpy(request.handle, &tag, sizeof(tag));
 
 	dev_dbg(nbd_to_dev(nbd), "request %p: sending control (%s@%llu,%uB)\n",
 		cmd, nbdcmd_to_ascii(type),
 		(unsigned long long)blk_rq_pos(req) << 9, blk_rq_bytes(req));
-	result = sock_xmit(nbd, 1, &request, sizeof(request),
+	result = sock_xmit(nbd, index, 1, &request, sizeof(request),
 			(type == NBD_CMD_WRITE) ? MSG_MORE : 0);
 	if (result <= 0) {
-		dev_err(disk_to_dev(nbd->disk),
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
 			"Send control failed (result %d)\n", result);
 		return -EIO;
 	}
 
-	if (type == NBD_CMD_WRITE) {
-		struct req_iterator iter;
+	if (type != NBD_CMD_WRITE)
+		return 0;
+
+	flags = 0;
+	bio = req->bio;
+	while (bio) {
+		struct bio *next = bio->bi_next;
+		struct bvec_iter iter;
 		struct bio_vec bvec;
-		/*
-		 * we are really probing at internals to determine
-		 * whether to set MSG_MORE or not...
-		 */
-		rq_for_each_segment(bvec, req, iter) {
-			flags = 0;
-			if (!rq_iter_last(bvec, iter))
+
+		bio_for_each_segment(bvec, bio, iter) {
+			bool is_last = !next && bio_iter_last(bvec, iter);
+
+			if (is_last)
 				flags = MSG_MORE;
 			dev_dbg(nbd_to_dev(nbd), "request %p: sending %d bytes data\n",
 				cmd, bvec.bv_len);
-			result = sock_send_bvec(nbd, &bvec, flags);
+			result = sock_send_bvec(nbd, index, &bvec, flags);
 			if (result <= 0) {
 				dev_err(disk_to_dev(nbd->disk),
 					"Send data failed (result %d)\n",
 					result);
 				return -EIO;
 			}
+			/*
+			 * The completion might already have come in,
+			 * so break for the last one instead of letting
+			 * the iterator do it. This prevents use-after-free
+			 * of the bio.
+			 */
+			if (is_last)
+				break;
 		}
+		bio = next;
 	}
 	return 0;
 }
 
-static inline int sock_recv_bvec(struct nbd_device *nbd, struct bio_vec *bvec)
+static inline int sock_recv_bvec(struct nbd_device *nbd, int index,
+				 struct bio_vec *bvec)
 {
 	int result;
 	void *kaddr = kmap(bvec->bv_page);
-	result = sock_xmit(nbd, 0, kaddr + bvec->bv_offset, bvec->bv_len,
-			MSG_WAITALL);
+	result = sock_xmit(nbd, index, 0, kaddr + bvec->bv_offset,
+			   bvec->bv_len, MSG_WAITALL);
 	kunmap(bvec->bv_page);
 	return result;
 }
 
 /* NULL returned = something went wrong, inform userspace */
-static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd)
+static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index)
 {
 	int result;
 	struct nbd_reply reply;
 	struct nbd_cmd *cmd;
 	struct request *req = NULL;
 	u16 hwq;
-	int tag;
+	u32 tag;
 
 	reply.magic = 0;
-	result = sock_xmit(nbd, 0, &reply, sizeof(reply), MSG_WAITALL);
+	result = sock_xmit(nbd, index, 0, &reply, sizeof(reply), MSG_WAITALL);
 	if (result <= 0) {
-		dev_err(disk_to_dev(nbd->disk),
-			"Receive control failed (result %d)\n", result);
+		if (!test_bit(NBD_DISCONNECTED, &nbd->runtime_flags) &&
+		    !test_bit(NBD_DISCONNECT_REQUESTED, &nbd->runtime_flags))
+			dev_err(disk_to_dev(nbd->disk),
+				"Receive control failed (result %d)\n", result);
 		return ERR_PTR(result);
 	}
 
@@ -364,7 +382,7 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd)
 		return ERR_PTR(-EPROTO);
 	}
 
-	memcpy(&tag, reply.handle, sizeof(int));
+	memcpy(&tag, reply.handle, sizeof(u32));
 
 	hwq = blk_mq_unique_tag_to_hwq(tag);
 	if (hwq < nbd->tag_set.nr_hw_queues)
@@ -376,7 +394,6 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd)
 		return ERR_PTR(-ENOENT);
 	}
 	cmd = blk_mq_rq_to_pdu(req);
-
 	if (ntohl(reply.error)) {
 		dev_err(disk_to_dev(nbd->disk), "Other side returned error (%d)\n",
 			ntohl(reply.error));
@@ -390,7 +407,7 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd)
 		struct bio_vec bvec;
 
 		rq_for_each_segment(bvec, req, iter) {
-			result = sock_recv_bvec(nbd, &bvec);
+			result = sock_recv_bvec(nbd, index, &bvec);
 			if (result <= 0) {
 				dev_err(disk_to_dev(nbd->disk), "Receive data failed (result %d)\n",
 					result);
@@ -400,6 +417,9 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd)
 			dev_dbg(nbd_to_dev(nbd), "request %p: got %d bytes data\n",
 				cmd, bvec.bv_len);
 		}
+	} else {
+		/* See the comment in nbd_queue_rq. */
+		wait_for_completion(&cmd->send_complete);
 	}
 	return cmd;
 }
@@ -418,25 +438,24 @@ static struct device_attribute pid_attr = {
 	.show = pid_show,
 };
 
-static int nbd_thread_recv(struct nbd_device *nbd, struct block_device *bdev)
+struct recv_thread_args {
+	struct work_struct work;
+	struct nbd_device *nbd;
+	int index;
+};
+
+static void recv_work(struct work_struct *work)
 {
+	struct recv_thread_args *args = container_of(work,
+						     struct recv_thread_args,
+						     work);
+	struct nbd_device *nbd = args->nbd;
 	struct nbd_cmd *cmd;
-	int ret;
+	int ret = 0;
 
 	BUG_ON(nbd->magic != NBD_MAGIC);
 
-	sk_set_memalloc(nbd->sock->sk);
-
-	ret = device_create_file(disk_to_dev(nbd->disk), &pid_attr);
-	if (ret) {
-		dev_err(disk_to_dev(nbd->disk), "device_create_file failed!\n");
-		return ret;
-	}
-
-	nbd_size_update(nbd, bdev);
-
 	while (1) {
-		cmd = nbd_read_stat(nbd);
+		cmd = nbd_read_stat(nbd, args->index);
 		if (IS_ERR(cmd)) {
 			ret = PTR_ERR(cmd);
 			break;
@@ -445,10 +464,14 @@ static int nbd_thread_recv(struct nbd_device *nbd, struct block_device *bdev)
 		nbd_end_request(cmd);
 	}
 
-	nbd_size_clear(nbd, bdev);
-
-	device_remove_file(disk_to_dev(nbd->disk), &pid_attr);
-	return ret;
+	/*
+	 * We got an error, shut everybody down if this wasn't the result of a
+	 * disconnect request.
+	 */
+	if (ret && !test_bit(NBD_DISCONNECT_REQUESTED, &nbd->runtime_flags))
+		sock_shutdown(nbd);
+	atomic_dec(&nbd->recv_threads);
+	wake_up(&nbd->recv_wq);
 }
 
 static void nbd_clear_req(struct request *req, void *data, bool reserved)
@@ -466,51 +489,60 @@ static void nbd_clear_que(struct nbd_device *nbd)
 {
 	BUG_ON(nbd->magic != NBD_MAGIC);
 
-	/*
-	 * Because we have set nbd->sock to NULL under the tx_lock, all
-	 * modifications to the list must have completed by now.
-	 */
-	BUG_ON(nbd->sock);
-
 	blk_mq_tagset_busy_iter(&nbd->tag_set, nbd_clear_req, NULL);
 	dev_dbg(disk_to_dev(nbd->disk), "queue cleared\n");
 }
 
-
-static void nbd_handle_cmd(struct nbd_cmd *cmd)
+static void nbd_handle_cmd(struct nbd_cmd *cmd, int index)
 {
 	struct request *req = blk_mq_rq_from_pdu(cmd);
 	struct nbd_device *nbd = cmd->nbd;
+	struct nbd_sock *nsock;
 
-	if (req->cmd_type != REQ_TYPE_FS)
+	if (index >= nbd->num_connections) {
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
+				    "Attempted send on invalid socket\n");
 		goto error_out;
+	}
+
+	if (test_bit(NBD_DISCONNECTED, &nbd->runtime_flags)) {
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
+				    "Attempted send on closed socket\n");
+		goto error_out;
+	}
+
+	if (req->cmd_type != REQ_TYPE_FS &&
+	    req->cmd_type != REQ_TYPE_DRV_PRIV)
+		goto error_out;
 
-	if (rq_data_dir(req) == WRITE &&
+	if (req->cmd_type == REQ_TYPE_FS &&
+	    rq_data_dir(req) == WRITE &&
 	    (nbd->flags & NBD_FLAG_READ_ONLY)) {
-		dev_err(disk_to_dev(nbd->disk),
-			"Write on read-only\n");
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
+				    "Write on read-only\n");
 		goto error_out;
 	}
 
 	req->errors = 0;
 
-	mutex_lock(&nbd->tx_lock);
-	nbd->task_send = current;
-	if (unlikely(!nbd->sock)) {
-		mutex_unlock(&nbd->tx_lock);
-		dev_err(disk_to_dev(nbd->disk),
-			"Attempted send on closed socket\n");
+	nsock = nbd->socks[index];
+	mutex_lock(&nsock->tx_lock);
+	if (unlikely(!nsock->sock)) {
+		mutex_unlock(&nsock->tx_lock);
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
+				    "Attempted send on closed socket\n");
 		goto error_out;
 	}
 
-	if (nbd_send_cmd(nbd, cmd) != 0) {
-		dev_err(disk_to_dev(nbd->disk), "Request send failed\n");
+	if (nbd_send_cmd(nbd, cmd, index) != 0) {
+		dev_err_ratelimited(disk_to_dev(nbd->disk),
+				    "Request send failed\n");
 		req->errors++;
 		nbd_end_request(cmd);
 	}
 
-	nbd->task_send = NULL;
-	mutex_unlock(&nbd->tx_lock);
+	mutex_unlock(&nsock->tx_lock);
 
 	return;
 
@@ -524,39 +556,70 @@ static int nbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
 
+	/*
+	 * Since we look at the bio's to send the request over the network we
+	 * need to make sure the completion work doesn't mark this request done
+	 * before we are done doing our send. This keeps us from dereferencing
+	 * freed data if we have particularly fast completions (ie we get the
+	 * completion before we exit sock_xmit on the last bvec) or in the case
+	 * that the server is misbehaving (or there was an error) before we're
+	 * done sending everything over the wire.
+	 */
+	init_completion(&cmd->send_complete);
 	blk_mq_start_request(bd->rq);
-	nbd_handle_cmd(cmd);
+	nbd_handle_cmd(cmd, hctx->queue_num);
+	complete(&cmd->send_complete);
 
 	return BLK_MQ_RQ_QUEUE_OK;
 }
 
-static int nbd_set_socket(struct nbd_device *nbd, struct socket *sock)
+static int nbd_add_socket(struct nbd_device *nbd, struct socket *sock)
 {
-	int ret = 0;
+	struct nbd_sock **socks;
+	struct nbd_sock *nsock;
 
-	spin_lock_irq(&nbd->sock_lock);
-
-	if (nbd->sock) {
-		ret = -EBUSY;
-		goto out;
+	if (!nbd->task_setup)
+		nbd->task_setup = current;
+	if (nbd->task_setup != current) {
+		dev_err(disk_to_dev(nbd->disk),
+			"Device being setup by another task");
+		return -EINVAL;
 	}
 
-	nbd->sock = sock;
+	socks = krealloc(nbd->socks, (nbd->num_connections + 1) *
+			 sizeof(struct nbd_sock *), GFP_KERNEL);
+	if (!socks)
+		return -ENOMEM;
+	nsock = kzalloc(sizeof(struct nbd_sock), GFP_KERNEL);
+	if (!nsock)
+		return -ENOMEM;
 
-out:
-	spin_unlock_irq(&nbd->sock_lock);
+	nbd->socks = socks;
 
-	return ret;
+	mutex_init(&nsock->tx_lock);
+	nsock->sock = sock;
+	socks[nbd->num_connections++] = nsock;
+
+	return 0;
 }
 
 /* Reset all properties of an NBD device */
 static void nbd_reset(struct nbd_device *nbd)
 {
+	int i;
+
+	for (i = 0; i < nbd->num_connections; i++)
+		kfree(nbd->socks[i]);
+	kfree(nbd->socks);
+	nbd->socks = NULL;
 	nbd->runtime_flags = 0;
 	nbd->blksize = 1024;
 	nbd->bytesize = 0;
 	set_capacity(nbd->disk, 0);
 	nbd->flags = 0;
 	nbd->tag_set.timeout = 0;
+	nbd->num_connections = 0;
+	nbd->task_setup = NULL;
 	queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, nbd->disk->queue);
 }
@@ -582,48 +645,68 @@ static void nbd_parse_flags(struct nbd_device *nbd, struct block_device *bdev)
 		blk_queue_write_cache(nbd->disk->queue, false, false);
 }
 
+static void send_disconnects(struct nbd_device *nbd)
+{
+	struct nbd_request request = {};
+	int i, ret;
+
+	request.magic = htonl(NBD_REQUEST_MAGIC);
+	request.type = htonl(NBD_CMD_DISC);
+
+	for (i = 0; i < nbd->num_connections; i++) {
+		ret = sock_xmit(nbd, i, 1, &request, sizeof(request), 0);
+		if (ret <= 0)
+			dev_err(disk_to_dev(nbd->disk),
+				"Send disconnect failed %d\n", ret);
+	}
+}
+
 static int nbd_dev_dbg_init(struct nbd_device *nbd);
 static void nbd_dev_dbg_close(struct nbd_device *nbd);
 
-/* Must be called with tx_lock held */
-
+/* Must be called with config_lock held */
 static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 		       unsigned int cmd, unsigned long arg)
 {
 	switch (cmd) {
 	case NBD_DISCONNECT: {
-		struct request *sreq;
-
 		dev_info(disk_to_dev(nbd->disk), "NBD_DISCONNECT\n");
-		if (!nbd->sock)
+		if (!nbd->socks)
 			return -EINVAL;
 
-		sreq = blk_mq_alloc_request(bdev_get_queue(bdev), WRITE, 0);
-		if (IS_ERR(sreq))
-			return -ENOMEM;
-
-		mutex_unlock(&nbd->tx_lock);
+		mutex_unlock(&nbd->config_lock);
 		fsync_bdev(bdev);
-		mutex_lock(&nbd->tx_lock);
-		sreq->cmd_type = REQ_TYPE_DRV_PRIV;
+		mutex_lock(&nbd->config_lock);
 
 		/* Check again after getting mutex back. */
-		if (!nbd->sock) {
-			blk_mq_free_request(sreq);
+		if (!nbd->socks)
 			return -EINVAL;
-		}
 
-		set_bit(NBD_DISCONNECT_REQUESTED, &nbd->runtime_flags);
-
-		nbd_send_cmd(nbd, blk_mq_rq_to_pdu(sreq));
-		blk_mq_free_request(sreq);
+		if (!test_and_set_bit(NBD_DISCONNECT_REQUESTED,
+				      &nbd->runtime_flags))
+			send_disconnects(nbd);
 		return 0;
 	}
 
 	case NBD_CLEAR_SOCK:
 		sock_shutdown(nbd);
 		nbd_clear_que(nbd);
 		kill_bdev(bdev);
 		nbd_bdev_reset(bdev);
+		/*
+		 * We want to give the run thread a chance to wait for everybody
+		 * to clean up and then do it's own cleanup.
+		 */
+		if (!test_bit(NBD_RUNNING, &nbd->runtime_flags)) {
+			int i;
+
+			for (i = 0; i < nbd->num_connections; i++)
+				kfree(nbd->socks[i]);
+			kfree(nbd->socks);
+			nbd->socks = NULL;
+			nbd->num_connections = 0;
+			nbd->task_setup = NULL;
+		}
 		return 0;
 
 	case NBD_SET_SOCK: {
@@ -633,7 +716,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 		if (!sock)
 			return err;
 
-		err = nbd_set_socket(nbd, sock);
+		err = nbd_add_socket(nbd, sock);
 		if (!err && max_part)
 			bdev->bd_invalidated = 1;
 
@@ -648,7 +731,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 
 	case NBD_SET_SIZE:
 		return nbd_size_set(nbd, bdev, nbd->blksize,
-				    arg / nbd->blksize);
+				    div_s64(arg, nbd->blksize));
 
 	case NBD_SET_SIZE_BLOCKS:
 		return nbd_size_set(nbd, bdev, nbd->blksize, arg);
@@ -662,26 +745,61 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 		return 0;
 
 	case NBD_DO_IT: {
-		int error;
+		struct recv_thread_args *args;
+		int num_connections = nbd->num_connections;
+		int error = 0, i;
 
 		if (nbd->task_recv)
 			return -EBUSY;
-		if (!nbd->sock)
+		if (!nbd->socks)
 			return -EINVAL;
+		if (num_connections > 1 &&
+		    !(nbd->flags & NBD_FLAG_CAN_MULTI_CONN)) {
+			dev_err(disk_to_dev(nbd->disk), "server does not support multiple connections per device.\n");
+			error = -EINVAL;
+			goto out_err;
+		}
 
-		/* We have to claim the device under the lock */
+		set_bit(NBD_RUNNING, &nbd->runtime_flags);
+		blk_mq_update_nr_hw_queues(&nbd->tag_set, nbd->num_connections);
+		args = kcalloc(num_connections, sizeof(*args), GFP_KERNEL);
+		if (!args) {
+			error = -ENOMEM;
+			goto out_err;
+		}
 		nbd->task_recv = current;
-		mutex_unlock(&nbd->tx_lock);
+		mutex_unlock(&nbd->config_lock);
 
 		nbd_parse_flags(nbd, bdev);
 
+		error = device_create_file(disk_to_dev(nbd->disk), &pid_attr);
+		if (error) {
+			dev_err(disk_to_dev(nbd->disk), "device_create_file failed!\n");
+			goto out_recv;
+		}
+
+		nbd_size_update(nbd, bdev);
+
 		nbd_dev_dbg_init(nbd);
-		error = nbd_thread_recv(nbd, bdev);
+		for (i = 0; i < num_connections; i++) {
+			sk_set_memalloc(nbd->socks[i]->sock->sk);
+			atomic_inc(&nbd->recv_threads);
+			INIT_WORK(&args[i].work, recv_work);
+			args[i].nbd = nbd;
+			args[i].index = i;
+			queue_work(system_long_wq, &args[i].work);
+		}
+		wait_event_interruptible(nbd->recv_wq,
+					 atomic_read(&nbd->recv_threads) == 0);
+		for (i = 0; i < num_connections; i++)
+			flush_work(&args[i].work);
 		nbd_dev_dbg_close(nbd);
 
-		mutex_lock(&nbd->tx_lock);
+		nbd_size_clear(nbd, bdev);
+		device_remove_file(disk_to_dev(nbd->disk), &pid_attr);
+out_recv:
+		mutex_lock(&nbd->config_lock);
 		nbd->task_recv = NULL;
-
+out_err:
 		sock_shutdown(nbd);
 		nbd_clear_que(nbd);
 		kill_bdev(bdev);
@@ -694,7 +812,6 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 			error = -ETIMEDOUT;
 
 		nbd_reset(nbd);
-
 		return error;
 	}
 
@@ -726,9 +843,9 @@ static int nbd_ioctl(struct block_device *bdev, fmode_t mode,
 
 	BUG_ON(nbd->magic != NBD_MAGIC);
 
-	mutex_lock(&nbd->tx_lock);
+	mutex_lock(&nbd->config_lock);
 	error = __nbd_ioctl(bdev, nbd, cmd, arg);
-	mutex_unlock(&nbd->tx_lock);
+	mutex_unlock(&nbd->config_lock);
 
 	return error;
 }
@@ -748,8 +865,6 @@ static int nbd_dbg_tasks_show(struct seq_file *s, void *unused)
 
 	if (nbd->task_recv)
 		seq_printf(s, "recv: %d\n", task_pid_nr(nbd->task_recv));
-	if (nbd->task_send)
-		seq_printf(s, "send: %d\n", task_pid_nr(nbd->task_send));
 
 	return 0;
 }
@@ -817,7 +932,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
 	debugfs_create_file("tasks", 0444, dir, nbd, &nbd_dbg_tasks_ops);
 	debugfs_create_u64("size_bytes", 0444, dir, &nbd->bytesize);
 	debugfs_create_u32("timeout", 0444, dir, &nbd->tag_set.timeout);
-	debugfs_create_u32("blocksize", 0444, dir, &nbd->blksize);
+	debugfs_create_u64("blocksize", 0444, dir, &nbd->blksize);
 	debugfs_create_file("flags", 0444, dir, nbd, &nbd_dbg_flags_ops);
 
 	return 0;
@@ -873,9 +988,7 @@ static int nbd_init_request(void *data, struct request *rq,
 			    unsigned int numa_node)
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(rq);
-
 	cmd->nbd = data;
-	INIT_LIST_HEAD(&cmd->list);
 	return 0;
 }
@@ -985,13 +1098,13 @@ static int __init nbd_init(void)
 	for (i = 0; i < nbds_max; i++) {
 		struct gendisk *disk = nbd_dev[i].disk;
 		nbd_dev[i].magic = NBD_MAGIC;
-		spin_lock_init(&nbd_dev[i].sock_lock);
-		mutex_init(&nbd_dev[i].tx_lock);
+		mutex_init(&nbd_dev[i].config_lock);
 		disk->major = NBD_MAJOR;
 		disk->first_minor = i << part_shift;
 		disk->fops = &nbd_fops;
 		disk->private_data = &nbd_dev[i];
 		sprintf(disk->disk_name, "nbd%d", i);
+		init_waitqueue_head(&nbd_dev[i].recv_wq);
 		nbd_reset(&nbd_dev[i]);
 		add_disk(disk);
 	}
@@ -577,6 +577,7 @@ static void null_nvm_unregister(struct nullb *nullb)
 #else
 static int null_nvm_register(struct nullb *nullb)
 {
+	pr_err("null_blk: CONFIG_NVM needs to be enabled for LightNVM\n");
 	return -EINVAL;
 }
 static void null_nvm_unregister(struct nullb *nullb) {}
@@ -721,7 +721,7 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
 
 	rq->timeout = 60*HZ;
 	if (cgc->quiet)
-		rq->cmd_flags |= REQ_QUIET;
+		rq->rq_flags |= RQF_QUIET;
 
 	blk_execute_rq(rq->q, pd->bdev->bd_disk, rq, 0);
 	if (rq->errors)
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy all data for this packet to pkt->pages[], so that
|
||||
* a) The number of required segments for the write bio is minimized, which
|
||||
* is necessary for some scsi controllers.
|
||||
* b) The data can be used as cache to avoid read requests if we receive a
|
||||
* new write request for the same zone.
|
||||
*/
|
||||
static void pkt_make_local_copy(struct packet_data *pkt, struct bio_vec *bvec)
|
||||
{
|
||||
int f, p, offs;
|
||||
|
||||
/* Copy all data to pkt->pages[] */
|
||||
p = 0;
|
||||
offs = 0;
|
||||
for (f = 0; f < pkt->frames; f++) {
|
||||
if (bvec[f].bv_page != pkt->pages[p]) {
|
||||
void *vfrom = kmap_atomic(bvec[f].bv_page) + bvec[f].bv_offset;
|
||||
void *vto = page_address(pkt->pages[p]) + offs;
|
||||
memcpy(vto, vfrom, CD_FRAMESIZE);
|
||||
kunmap_atomic(vfrom);
|
||||
bvec[f].bv_page = pkt->pages[p];
|
||||
bvec[f].bv_offset = offs;
|
||||
} else {
|
||||
BUG_ON(bvec[f].bv_offset != offs);
|
||||
}
|
||||
offs += CD_FRAMESIZE;
|
||||
if (offs >= PAGE_SIZE) {
|
||||
offs = 0;
|
||||
p++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void pkt_end_io_read(struct bio *bio)
|
||||
{
|
||||
struct packet_data *pkt = bio->bi_private;
|
||||
|
@ -1298,7 +1265,6 @@ try_next_bio:
|
|||
static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
|
||||
{
|
||||
int f;
|
||||
struct bio_vec *bvec = pkt->w_bio->bi_io_vec;
|
||||
|
||||
bio_reset(pkt->w_bio);
|
||||
pkt->w_bio->bi_iter.bi_sector = pkt->sector;
|
||||
|
@ -1308,9 +1274,10 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
|
|||
|
||||
/* XXX: locking? */
|
||||
for (f = 0; f < pkt->frames; f++) {
|
||||
bvec[f].bv_page = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
|
||||
bvec[f].bv_offset = (f * CD_FRAMESIZE) % PAGE_SIZE;
|
||||
if (!bio_add_page(pkt->w_bio, bvec[f].bv_page, CD_FRAMESIZE, bvec[f].bv_offset))
|
||||
struct page *page = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
|
||||
unsigned offset = (f * CD_FRAMESIZE) % PAGE_SIZE;
|
||||
|
||||
if (!bio_add_page(pkt->w_bio, page, CD_FRAMESIZE, offset))
|
||||
BUG();
|
||||
}
|
||||
pkt_dbg(2, pd, "vcnt=%d\n", pkt->w_bio->bi_vcnt);
|
||||
|
@ -1327,12 +1294,10 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
|
|||
pkt_dbg(2, pd, "Writing %d frames for zone %llx\n",
|
||||
pkt->write_size, (unsigned long long)pkt->sector);
|
||||
|
||||
if (test_bit(PACKET_MERGE_SEGS, &pd->flags) || (pkt->write_size < pkt->frames)) {
|
||||
pkt_make_local_copy(pkt, bvec);
|
||||
if (test_bit(PACKET_MERGE_SEGS, &pd->flags) || (pkt->write_size < pkt->frames))
|
||||
pkt->cache_valid = 1;
|
||||
} else {
|
||||
else
|
||||
pkt->cache_valid = 0;
|
||||
}
|
||||
|
||||
/* Start the write request */
|
||||
atomic_set(&pkt->io_wait, 1);
|
||||
|
|
|
@@ -36,7 +36,6 @@
 #include <linux/scatterlist.h>
-#include <linux/version.h>
 #include <linux/err.h>
 #include <linux/scatterlist.h>
 #include <linux/aer.h>
 #include <linux/ctype.h>
 #include <linux/wait.h>

@@ -270,8 +269,6 @@ struct skd_device {
 	resource_size_t mem_phys[SKD_MAX_BARS];
 	u32 mem_size[SKD_MAX_BARS];

-	skd_irq_type_t irq_type;
-	u32 msix_count;
 	struct skd_msix_entry *msix_entries;

 	struct pci_dev *pdev;

@@ -2138,12 +2135,8 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 		u8 *bp = (u8 *)skmsg->msg_buf;
 		int i;
 		for (i = 0; i < skmsg->length; i += 8) {
-			pr_debug("%s:%s:%d msg[%2d] %02x %02x %02x %02x "
-				 "%02x %02x %02x %02x\n",
-				 skdev->name, __func__, __LINE__,
-				 i, bp[i + 0], bp[i + 1], bp[i + 2],
-				 bp[i + 3], bp[i + 4], bp[i + 5],
-				 bp[i + 6], bp[i + 7]);
+			pr_debug("%s:%s:%d msg[%2d] %8ph\n",
+				 skdev->name, __func__, __LINE__, i, &bp[i]);
 			if (i == 0)
 				i = 64 - 8;
 		}

@@ -2164,7 +2157,6 @@ static void skd_send_fitmsg(struct skd_device *skdev,
 		qcmd |= FIT_QCMD_MSGSIZE_64;

 	SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND);
-
 }

 static void skd_send_special_fitmsg(struct skd_device *skdev,

@@ -2177,11 +2169,8 @@ static void skd_send_special_fitmsg(struct skd_device *skdev,
 	int i;

 	for (i = 0; i < SKD_N_SPECIAL_FITMSG_BYTES; i += 8) {
-		pr_debug("%s:%s:%d spcl[%2d] %02x %02x %02x %02x "
-			 "%02x %02x %02x %02x\n",
-			 skdev->name, __func__, __LINE__, i,
-			 bp[i + 0], bp[i + 1], bp[i + 2], bp[i + 3],
-			 bp[i + 4], bp[i + 5], bp[i + 6], bp[i + 7]);
+		pr_debug("%s:%s:%d spcl[%2d] %8ph\n",
+			 skdev->name, __func__, __LINE__, i, &bp[i]);
 		if (i == 0)
 			i = 64 - 8;
 	}

@@ -2955,8 +2944,8 @@ static void skd_completion_worker(struct work_struct *work)

 static void skd_isr_msg_from_dev(struct skd_device *skdev);

-irqreturn_t
-static skd_isr(int irq, void *ptr)
+static irqreturn_t
+skd_isr(int irq, void *ptr)
 {
 	struct skd_device *skdev;
 	u32 intstat;

@@ -3821,10 +3810,6 @@ static irqreturn_t skd_qfull_isr(int irq, void *skd_host_data)
 */

 struct skd_msix_entry {
-	int have_irq;
-	u32 vector;
-	u32 entry;
-	struct skd_device *rsp;
 	char isr_name[30];
 };

@@ -3853,193 +3838,121 @@ static struct skd_init_msix_entry msix_entries[SKD_MAX_MSIX_COUNT] = {
 	{ "(Queue Full 3)", skd_qfull_isr },
 };

-static void skd_release_msix(struct skd_device *skdev)
-{
-	struct skd_msix_entry *qentry;
-	int i;
-
-	if (skdev->msix_entries) {
-		for (i = 0; i < skdev->msix_count; i++) {
-			qentry = &skdev->msix_entries[i];
-			skdev = qentry->rsp;
-
-			if (qentry->have_irq)
-				devm_free_irq(&skdev->pdev->dev,
-					      qentry->vector, qentry->rsp);
-		}
-
-		kfree(skdev->msix_entries);
-	}
-
-	if (skdev->msix_count)
-		pci_disable_msix(skdev->pdev);
-
-	skdev->msix_count = 0;
-	skdev->msix_entries = NULL;
-}
-
 static int skd_acquire_msix(struct skd_device *skdev)
 {
 	int i, rc;
 	struct pci_dev *pdev = skdev->pdev;
-	struct msix_entry *entries;
-	struct skd_msix_entry *qentry;
-
-	entries = kzalloc(sizeof(struct msix_entry) * SKD_MAX_MSIX_COUNT,
-			  GFP_KERNEL);
-	if (!entries)
-		return -ENOMEM;
-
-	for (i = 0; i < SKD_MAX_MSIX_COUNT; i++)
-		entries[i].entry = i;

-	rc = pci_enable_msix_exact(pdev, entries, SKD_MAX_MSIX_COUNT);
-	if (rc) {
+	rc = pci_alloc_irq_vectors(pdev, SKD_MAX_MSIX_COUNT, SKD_MAX_MSIX_COUNT,
+			PCI_IRQ_MSIX);
+	if (rc < 0) {
 		pr_err("(%s): failed to enable MSI-X %d\n",
 		       skd_name(skdev), rc);
-		goto msix_out;
+		goto out;
 	}

-	skdev->msix_count = SKD_MAX_MSIX_COUNT;
-	skdev->msix_entries = kzalloc(sizeof(struct skd_msix_entry) *
-				      skdev->msix_count, GFP_KERNEL);
+	skdev->msix_entries = kcalloc(SKD_MAX_MSIX_COUNT,
+			sizeof(struct skd_msix_entry), GFP_KERNEL);
 	if (!skdev->msix_entries) {
 		rc = -ENOMEM;
 		pr_err("(%s): msix table allocation error\n",
 		       skd_name(skdev));
-		goto msix_out;
-	}
-
-	for (i = 0; i < skdev->msix_count; i++) {
-		qentry = &skdev->msix_entries[i];
-		qentry->vector = entries[i].vector;
-		qentry->entry = entries[i].entry;
-		qentry->rsp = NULL;
-		qentry->have_irq = 0;
-		pr_debug("%s:%s:%d %s: <%s> msix (%d) vec %d, entry %x\n",
-			 skdev->name, __func__, __LINE__,
-			 pci_name(pdev), skdev->name,
-			 i, qentry->vector, qentry->entry);
+		goto out;
 	}

 	/* Enable MSI-X vectors for the base queue */
-	for (i = 0; i < skdev->msix_count; i++) {
-		qentry = &skdev->msix_entries[i];
+	for (i = 0; i < SKD_MAX_MSIX_COUNT; i++) {
+		struct skd_msix_entry *qentry = &skdev->msix_entries[i];
+
 		snprintf(qentry->isr_name, sizeof(qentry->isr_name),
 			 "%s%d-msix %s", DRV_NAME, skdev->devno,
 			 msix_entries[i].name);
-		rc = devm_request_irq(&skdev->pdev->dev, qentry->vector,
-				      msix_entries[i].handler, 0,
-				      qentry->isr_name, skdev);
+
+		rc = devm_request_irq(&skdev->pdev->dev,
+				pci_irq_vector(skdev->pdev, i),
+				msix_entries[i].handler, 0,
+				qentry->isr_name, skdev);
 		if (rc) {
 			pr_err("(%s): Unable to register(%d) MSI-X "
 			       "handler %d: %s\n",
 			       skd_name(skdev), rc, i, qentry->isr_name);
 			goto msix_out;
-		} else {
-			qentry->have_irq = 1;
-			qentry->rsp = skdev;
 		}
 	}
+
 	pr_debug("%s:%s:%d %s: <%s> msix %d irq(s) enabled\n",
 		 skdev->name, __func__, __LINE__,
-		 pci_name(pdev), skdev->name, skdev->msix_count);
+		 pci_name(pdev), skdev->name, SKD_MAX_MSIX_COUNT);
 	return 0;

 msix_out:
-	if (entries)
-		kfree(entries);
-	skd_release_msix(skdev);
+	while (--i >= 0)
+		devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), skdev);
+out:
+	kfree(skdev->msix_entries);
+	skdev->msix_entries = NULL;
 	return rc;
 }

 static int skd_acquire_irq(struct skd_device *skdev)
 {
+	struct pci_dev *pdev = skdev->pdev;
+	unsigned int irq_flag = PCI_IRQ_LEGACY;
 	int rc;
-	struct pci_dev *pdev;
-
-	pdev = skdev->pdev;
-	skdev->msix_count = 0;

-RETRY_IRQ_TYPE:
-	switch (skdev->irq_type) {
-	case SKD_IRQ_MSIX:
+	if (skd_isr_type == SKD_IRQ_MSIX) {
 		rc = skd_acquire_msix(skdev);
 		if (!rc)
-			pr_info("(%s): MSI-X %d irqs enabled\n",
-				skd_name(skdev), skdev->msix_count);
-		else {
-			pr_err(
-				"(%s): failed to enable MSI-X, re-trying with MSI %d\n",
-				skd_name(skdev), rc);
-			skdev->irq_type = SKD_IRQ_MSI;
-			goto RETRY_IRQ_TYPE;
-		}
-		break;
-	case SKD_IRQ_MSI:
-		snprintf(skdev->isr_name, sizeof(skdev->isr_name), "%s%d-msi",
-			 DRV_NAME, skdev->devno);
-		rc = pci_enable_msi_range(pdev, 1, 1);
-		if (rc > 0) {
-			rc = devm_request_irq(&pdev->dev, pdev->irq, skd_isr, 0,
-					      skdev->isr_name, skdev);
-			if (rc) {
-				pci_disable_msi(pdev);
-				pr_err(
-				"(%s): failed to allocate the MSI interrupt %d\n",
-					skd_name(skdev), rc);
-				goto RETRY_IRQ_LEGACY;
-			}
-			pr_info("(%s): MSI irq %d enabled\n",
-				skd_name(skdev), pdev->irq);
-		} else {
-RETRY_IRQ_LEGACY:
-			pr_err(
-			"(%s): failed to enable MSI, re-trying with LEGACY %d\n",
-				skd_name(skdev), rc);
-			skdev->irq_type = SKD_IRQ_LEGACY;
-			goto RETRY_IRQ_TYPE;
-		}
-		break;
-	case SKD_IRQ_LEGACY:
-		snprintf(skdev->isr_name, sizeof(skdev->isr_name),
-			 "%s%d-legacy", DRV_NAME, skdev->devno);
-		rc = devm_request_irq(&pdev->dev, pdev->irq, skd_isr,
-				      IRQF_SHARED, skdev->isr_name, skdev);
-		if (!rc)
-			pr_info("(%s): LEGACY irq %d enabled\n",
-				skd_name(skdev), pdev->irq);
-		else
-			pr_err("(%s): request LEGACY irq error %d\n",
-			       skd_name(skdev), rc);
-		break;
-	default:
-		pr_info("(%s): irq_type %d invalid, re-set to %d\n",
-			skd_name(skdev), skdev->irq_type, SKD_IRQ_DEFAULT);
-		skdev->irq_type = SKD_IRQ_LEGACY;
-		goto RETRY_IRQ_TYPE;
+			return 0;
+
+		pr_err("(%s): failed to enable MSI-X, re-trying with MSI %d\n",
+		       skd_name(skdev), rc);
 	}
-	return rc;
+
+	snprintf(skdev->isr_name, sizeof(skdev->isr_name), "%s%d", DRV_NAME,
+			skdev->devno);
+
+	if (skd_isr_type != SKD_IRQ_LEGACY)
+		irq_flag |= PCI_IRQ_MSI;
+	rc = pci_alloc_irq_vectors(pdev, 1, 1, irq_flag);
+	if (rc < 0) {
+		pr_err("(%s): failed to allocate the MSI interrupt %d\n",
+		       skd_name(skdev), rc);
+		return rc;
+	}
+
+	rc = devm_request_irq(&pdev->dev, pdev->irq, skd_isr,
+			pdev->msi_enabled ? 0 : IRQF_SHARED,
+			skdev->isr_name, skdev);
+	if (rc) {
+		pci_free_irq_vectors(pdev);
+		pr_err("(%s): failed to allocate interrupt %d\n",
+		       skd_name(skdev), rc);
+		return rc;
+	}
+
+	return 0;
 }

 static void skd_release_irq(struct skd_device *skdev)
 {
-	switch (skdev->irq_type) {
-	case SKD_IRQ_MSIX:
-		skd_release_msix(skdev);
-		break;
-	case SKD_IRQ_MSI:
-		devm_free_irq(&skdev->pdev->dev, skdev->pdev->irq, skdev);
-		pci_disable_msi(skdev->pdev);
-		break;
-	case SKD_IRQ_LEGACY:
-		devm_free_irq(&skdev->pdev->dev, skdev->pdev->irq, skdev);
-		break;
-	default:
-		pr_err("(%s): wrong irq type %d!",
-		       skd_name(skdev), skdev->irq_type);
-		break;
+	struct pci_dev *pdev = skdev->pdev;
+
+	if (skdev->msix_entries) {
+		int i;
+
+		for (i = 0; i < SKD_MAX_MSIX_COUNT; i++) {
+			devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i),
+					skdev);
+		}
+
+		kfree(skdev->msix_entries);
+		skdev->msix_entries = NULL;
+	} else {
+		devm_free_irq(&pdev->dev, pdev->irq, skdev);
 	}
+
+	pci_free_irq_vectors(pdev);
 }

 /*

@@ -4402,7 +4315,6 @@ static struct skd_device *skd_construct(struct pci_dev *pdev)
 	skdev->pdev = pdev;
 	skdev->devno = skd_next_devno++;
 	skdev->major = blk_major;
-	skdev->irq_type = skd_isr_type;
 	sprintf(skdev->name, DRV_NAME "%d", skdev->devno);
 	skdev->dev_max_queue_depth = 0;
@@ -535,7 +535,7 @@ static blk_qc_t mm_make_request(struct request_queue *q, struct bio *bio)
 	*card->biotail = bio;
 	bio->bi_next = NULL;
 	card->biotail = &bio->bi_next;
-	if (bio->bi_opf & REQ_SYNC || !mm_check_plugged(card))
+	if (op_is_sync(bio->bi_opf) || !mm_check_plugged(card))
 		activate(card);
 	spin_unlock_irq(&card->lock);
@@ -1253,14 +1253,14 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	case BLKIF_OP_WRITE:
 		ring->st_wr_req++;
 		operation = REQ_OP_WRITE;
-		operation_flags = WRITE_ODIRECT;
+		operation_flags = REQ_SYNC | REQ_IDLE;
 		break;
 	case BLKIF_OP_WRITE_BARRIER:
 		drain = true;
 	case BLKIF_OP_FLUSH_DISKCACHE:
 		ring->st_f_req++;
 		operation = REQ_OP_WRITE;
-		operation_flags = WRITE_FLUSH;
+		operation_flags = REQ_PREFLUSH;
 		break;
 	default:
 		operation = 0; /* make gcc happy */

@@ -1272,7 +1272,7 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	nseg = req->operation == BLKIF_OP_INDIRECT ?
 	       req->u.indirect.nr_segments : req->u.rw.nr_segments;

-	if (unlikely(nseg == 0 && operation_flags != WRITE_FLUSH) ||
+	if (unlikely(nseg == 0 && operation_flags != REQ_PREFLUSH) ||
 	    unlikely((req->operation != BLKIF_OP_INDIRECT) &&
 		     (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) ||
 	    unlikely((req->operation == BLKIF_OP_INDIRECT) &&

@@ -1334,7 +1334,7 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	}

 	/* Wait on all outstanding I/O's and once that has been completed
-	 * issue the WRITE_FLUSH.
+	 * issue the flush.
 	 */
 	if (drain)
 		xen_blk_drain_io(pending_req->ring);

@@ -1380,7 +1380,7 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,

 	/* This will be hit if the operation was a flush or discard. */
 	if (!bio) {
-		BUG_ON(operation_flags != WRITE_FLUSH);
+		BUG_ON(operation_flags != REQ_PREFLUSH);

 		bio = bio_alloc(GFP_KERNEL, 0);
 		if (unlikely(bio == NULL))

@@ -2043,8 +2043,9 @@ static int blkif_recover(struct blkfront_info *info)
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
 		BUG_ON(req->nr_phys_segments > segs);
-		blk_mq_requeue_request(req);
+		blk_mq_requeue_request(req, false);
 	}
 	blk_mq_start_stopped_hw_queues(info->rq, true);
+	blk_mq_kick_requeue_list(info->rq);

 	while ((bio = bio_list_pop(&info->bio_list)) != NULL) {
@@ -211,7 +211,7 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
 	sense_rq->cmd[0] = GPCMD_REQUEST_SENSE;
 	sense_rq->cmd[4] = cmd_len;
 	sense_rq->cmd_type = REQ_TYPE_ATA_SENSE;
-	sense_rq->cmd_flags |= REQ_PREEMPT;
+	sense_rq->rq_flags |= RQF_PREEMPT;

 	if (drive->media == ide_tape)
 		sense_rq->cmd[13] = REQ_IDETAPE_PC1;

@@ -295,7 +295,7 @@ int ide_cd_expiry(ide_drive_t *drive)
 		wait = ATAPI_WAIT_PC;
 		break;
 	default:
-		if (!(rq->cmd_flags & REQ_QUIET))
+		if (!(rq->rq_flags & RQF_QUIET))
 			printk(KERN_INFO PFX "cmd 0x%x timed out\n",
 			       rq->cmd[0]);
 		wait = 0;

@@ -375,7 +375,7 @@ int ide_check_ireason(ide_drive_t *drive, struct request *rq, int len,
 	}

 	if (dev_is_idecd(drive) && rq->cmd_type == REQ_TYPE_ATA_PC)
-		rq->cmd_flags |= REQ_FAILED;
+		rq->rq_flags |= RQF_FAILED;

 	return 1;
 }
@@ -98,7 +98,7 @@ static int cdrom_log_sense(ide_drive_t *drive, struct request *rq)
 	struct request_sense *sense = &drive->sense_data;
 	int log = 0;

-	if (!sense || !rq || (rq->cmd_flags & REQ_QUIET))
+	if (!sense || !rq || (rq->rq_flags & RQF_QUIET))
 		return 0;

 	ide_debug_log(IDE_DBG_SENSE, "sense_key: 0x%x", sense->sense_key);

@@ -291,7 +291,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
 	 * (probably while trying to recover from a former error).
 	 * Just give up.
 	 */
-	rq->cmd_flags |= REQ_FAILED;
+	rq->rq_flags |= RQF_FAILED;
 	return 2;
 }

@@ -311,7 +311,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
 			cdrom_saw_media_change(drive);

 			if (rq->cmd_type == REQ_TYPE_FS &&
-			    !(rq->cmd_flags & REQ_QUIET))
+			    !(rq->rq_flags & RQF_QUIET))
 				printk(KERN_ERR PFX "%s: tray open\n",
 				       drive->name);
 		}

@@ -346,7 +346,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
 		 * No point in retrying after an illegal request or data
 		 * protect error.
 		 */
-		if (!(rq->cmd_flags & REQ_QUIET))
+		if (!(rq->rq_flags & RQF_QUIET))
 			ide_dump_status(drive, "command error", stat);
 		do_end_request = 1;
 		break;

@@ -355,14 +355,14 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
 		 * No point in re-trying a zillion times on a bad sector.
 		 * If we got here the error is not correctable.
 		 */
-		if (!(rq->cmd_flags & REQ_QUIET))
+		if (!(rq->rq_flags & RQF_QUIET))
 			ide_dump_status(drive, "media error "
 					"(bad sector)", stat);
 		do_end_request = 1;
 		break;
 	case BLANK_CHECK:
 		/* disk appears blank? */
-		if (!(rq->cmd_flags & REQ_QUIET))
+		if (!(rq->rq_flags & RQF_QUIET))
 			ide_dump_status(drive, "media error (blank)",
 					stat);
 		do_end_request = 1;

@@ -380,7 +380,7 @@ static int cdrom_decode_status(ide_drive_t *drive, u8 stat)
 	}

 	if (rq->cmd_type != REQ_TYPE_FS) {
-		rq->cmd_flags |= REQ_FAILED;
+		rq->rq_flags |= RQF_FAILED;
 		do_end_request = 1;
 	}

@@ -422,19 +422,19 @@ static void ide_cd_request_sense_fixup(ide_drive_t *drive, struct ide_cmd *cmd)
 int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 		    int write, void *buffer, unsigned *bufflen,
 		    struct request_sense *sense, int timeout,
-		    unsigned int cmd_flags)
+		    req_flags_t rq_flags)
 {
 	struct cdrom_info *info = drive->driver_data;
 	struct request_sense local_sense;
 	int retries = 10;
-	unsigned int flags = 0;
+	req_flags_t flags = 0;

 	if (!sense)
 		sense = &local_sense;

 	ide_debug_log(IDE_DBG_PC, "cmd[0]: 0x%x, write: 0x%x, timeout: %d, "
-				  "cmd_flags: 0x%x",
-				  cmd[0], write, timeout, cmd_flags);
+				  "rq_flags: 0x%x",
+				  cmd[0], write, timeout, rq_flags);

 	/* start of retry loop */
 	do {

@@ -446,7 +446,7 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 		memcpy(rq->cmd, cmd, BLK_MAX_CDB);
 		rq->cmd_type = REQ_TYPE_ATA_PC;
 		rq->sense = sense;
-		rq->cmd_flags |= cmd_flags;
+		rq->rq_flags |= rq_flags;
 		rq->timeout = timeout;
 		if (buffer) {
 			error = blk_rq_map_kern(drive->queue, rq, buffer,

@@ -462,14 +462,14 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 		if (buffer)
 			*bufflen = rq->resid_len;

-		flags = rq->cmd_flags;
+		flags = rq->rq_flags;
 		blk_put_request(rq);

 		/*
 		 * FIXME: we should probably abort/retry or something in case of
 		 * failure.
 		 */
-		if (flags & REQ_FAILED) {
+		if (flags & RQF_FAILED) {
 			/*
 			 * The request failed. Retry if it was due to a unit
 			 * attention status (usually means media was changed).

@@ -494,10 +494,10 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 		}

 		/* end of retry loop */
-	} while ((flags & REQ_FAILED) && retries >= 0);
+	} while ((flags & RQF_FAILED) && retries >= 0);

 	/* return an error if the command failed */
-	return (flags & REQ_FAILED) ? -EIO : 0;
+	return (flags & RQF_FAILED) ? -EIO : 0;
 }

 /*

@@ -589,7 +589,7 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
 				 "(%u bytes)\n", drive->name, __func__,
 				 cmd->nleft);
 			if (!write)
-				rq->cmd_flags |= REQ_FAILED;
+				rq->rq_flags |= RQF_FAILED;
 			uptodate = 0;
 		}
 	} else if (rq->cmd_type != REQ_TYPE_BLOCK_PC) {

@@ -607,7 +607,7 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
 		}

 		if (!uptodate)
-			rq->cmd_flags |= REQ_FAILED;
+			rq->rq_flags |= RQF_FAILED;
 	}
 	goto out_end;
 }

@@ -745,9 +745,9 @@ static void cdrom_do_block_pc(ide_drive_t *drive, struct request *rq)
 		      rq->cmd[0], rq->cmd_type);

 	if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
-		rq->cmd_flags |= REQ_QUIET;
+		rq->rq_flags |= RQF_QUIET;
 	else
-		rq->cmd_flags &= ~REQ_FAILED;
+		rq->rq_flags &= ~RQF_FAILED;

 	drive->dma = 0;

@@ -867,7 +867,7 @@ int cdrom_check_status(ide_drive_t *drive, struct request_sense *sense)
 	 */
 	cmd[7] = cdi->sanyo_slot % 3;

-	return ide_cd_queue_pc(drive, cmd, 0, NULL, NULL, sense, 0, REQ_QUIET);
+	return ide_cd_queue_pc(drive, cmd, 0, NULL, NULL, sense, 0, RQF_QUIET);
 }

 static int cdrom_read_capacity(ide_drive_t *drive, unsigned long *capacity,

@@ -890,7 +890,7 @@ static int cdrom_read_capacity(ide_drive_t *drive, unsigned long *capacity,
 	cmd[0] = GPCMD_READ_CDVD_CAPACITY;

 	stat = ide_cd_queue_pc(drive, cmd, 0, &capbuf, &len, sense, 0,
-			       REQ_QUIET);
+			       RQF_QUIET);
 	if (stat)
 		return stat;

@@ -943,7 +943,7 @@ static int cdrom_read_tocentry(ide_drive_t *drive, int trackno, int msf_flag,
 	if (msf_flag)
 		cmd[1] = 2;

-	return ide_cd_queue_pc(drive, cmd, 0, buf, &buflen, sense, 0, REQ_QUIET);
+	return ide_cd_queue_pc(drive, cmd, 0, buf, &buflen, sense, 0, RQF_QUIET);
 }

 /* Try to read the entire TOC for the disk into our internal buffer. */
@@ -101,7 +101,7 @@ void ide_cd_log_error(const char *, struct request *, struct request_sense *);

 /* ide-cd.c functions used by ide-cd_ioctl.c */
 int ide_cd_queue_pc(ide_drive_t *, const unsigned char *, int, void *,
-		    unsigned *, struct request_sense *, int, unsigned int);
+		    unsigned *, struct request_sense *, int, req_flags_t);
 int ide_cd_read_toc(ide_drive_t *, struct request_sense *);
 int ide_cdrom_get_capabilities(ide_drive_t *, u8 *);
 void ide_cdrom_update_speed(ide_drive_t *, u8 *);
@@ -305,7 +305,7 @@ int ide_cdrom_reset(struct cdrom_device_info *cdi)

 	rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
 	rq->cmd_type = REQ_TYPE_DRV_PRIV;
-	rq->cmd_flags = REQ_QUIET;
+	rq->rq_flags = RQF_QUIET;
 	ret = blk_execute_rq(drive->queue, cd->disk, rq, 0);
 	blk_put_request(rq);
 	/*

@@ -449,7 +449,7 @@ int ide_cdrom_packet(struct cdrom_device_info *cdi,
 			 struct packet_command *cgc)
 {
 	ide_drive_t *drive = cdi->handle;
-	unsigned int flags = 0;
+	req_flags_t flags = 0;
 	unsigned len = cgc->buflen;

 	if (cgc->timeout <= 0)

@@ -463,7 +463,7 @@ int ide_cdrom_packet(struct cdrom_device_info *cdi,
 		memset(cgc->sense, 0, sizeof(struct request_sense));

 	if (cgc->quiet)
-		flags |= REQ_QUIET;
+		flags |= RQF_QUIET;

 	cgc->stat = ide_cd_queue_pc(drive, cgc->cmd,
 				    cgc->data_direction == CGC_DATA_WRITE,
@@ -307,7 +307,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq)
 {
 	ide_startstop_t startstop;

-	BUG_ON(!(rq->cmd_flags & REQ_STARTED));
+	BUG_ON(!(rq->rq_flags & RQF_STARTED));

 #ifdef DEBUG
 	printk("%s: start_request: current=0x%08lx\n",

@@ -316,7 +316,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq)

 	/* bail early if we've exceeded max_failures */
 	if (drive->max_failures && (drive->failures > drive->max_failures)) {
-		rq->cmd_flags |= REQ_FAILED;
+		rq->rq_flags |= RQF_FAILED;
 		goto kill_rq;
 	}

@@ -539,7 +539,7 @@ repeat:
 	 */
 	if ((drive->dev_flags & IDE_DFLAG_BLOCKED) &&
 	    ata_pm_request(rq) == 0 &&
-	    (rq->cmd_flags & REQ_PREEMPT) == 0) {
+	    (rq->rq_flags & RQF_PREEMPT) == 0) {
 		/* there should be no pending command at this point */
 		ide_unlock_port(hwif);
 		goto plug_device;
@@ -53,7 +53,7 @@ static int ide_pm_execute_rq(struct request *rq)

 	spin_lock_irq(q->queue_lock);
 	if (unlikely(blk_queue_dying(q))) {
-		rq->cmd_flags |= REQ_QUIET;
+		rq->rq_flags |= RQF_QUIET;
 		rq->errors = -ENXIO;
 		__blk_end_request_all(rq, rq->errors);
 		spin_unlock_irq(q->queue_lock);

@@ -90,7 +90,7 @@ int generic_ide_resume(struct device *dev)
 	memset(&rqpm, 0, sizeof(rqpm));
 	rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
 	rq->cmd_type = REQ_TYPE_ATA_PM_RESUME;
-	rq->cmd_flags |= REQ_PREEMPT;
+	rq->rq_flags |= RQF_PREEMPT;
 	rq->special = &rqpm;
 	rqpm.pm_step = IDE_PM_START_RESUME;
 	rqpm.pm_state = PM_EVENT_ON;
@@ -2,6 +2,6 @@
 # Makefile for Open-Channel SSDs.
 #

-obj-$(CONFIG_NVM)		:= core.o sysblk.o sysfs.o
+obj-$(CONFIG_NVM)		:= core.o sysblk.o
 obj-$(CONFIG_NVM_GENNVM)	+= gennvm.o
 obj-$(CONFIG_NVM_RRPC)		+= rrpc.o
@ -27,8 +27,6 @@
|
|||
#include <linux/lightnvm.h>
|
||||
#include <linux/sched/sysctl.h>
|
||||
|
||||
#include "lightnvm.h"
|
||||
|
||||
static LIST_HEAD(nvm_tgt_types);
|
||||
static DECLARE_RWSEM(nvm_tgtt_lock);
|
||||
static LIST_HEAD(nvm_mgrs);
|
||||
|
@ -88,8 +86,7 @@ void *nvm_dev_dma_alloc(struct nvm_dev *dev, gfp_t mem_flags,
|
|||
}
|
||||
EXPORT_SYMBOL(nvm_dev_dma_alloc);
|
||||
|
||||
void nvm_dev_dma_free(struct nvm_dev *dev, void *addr,
|
||||
dma_addr_t dma_handler)
|
||||
void nvm_dev_dma_free(struct nvm_dev *dev, void *addr, dma_addr_t dma_handler)
|
||||
{
|
||||
dev->ops->dev_dma_free(dev->dma_pool, addr, dma_handler);
|
||||
}
|
||||
|
@ -178,38 +175,133 @@ static struct nvm_dev *nvm_find_nvm_dev(const char *name)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
struct nvm_block *nvm_get_blk(struct nvm_dev *dev, struct nvm_lun *lun,
|
||||
unsigned long flags)
|
||||
static void nvm_tgt_generic_to_addr_mode(struct nvm_tgt_dev *tgt_dev,
|
||||
struct nvm_rq *rqd)
|
||||
{
|
||||
return dev->mt->get_blk(dev, lun, flags);
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_get_blk);
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
int i;
|
||||
|
||||
/* Assumes that all valid pages have already been moved on release to bm */
|
||||
void nvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
|
||||
{
|
||||
return dev->mt->put_blk(dev, blk);
|
||||
if (rqd->nr_ppas > 1) {
|
||||
for (i = 0; i < rqd->nr_ppas; i++) {
|
||||
rqd->ppa_list[i] = dev->mt->trans_ppa(tgt_dev,
|
||||
rqd->ppa_list[i], TRANS_TGT_TO_DEV);
|
||||
rqd->ppa_list[i] = generic_to_dev_addr(dev,
|
||||
rqd->ppa_list[i]);
|
||||
}
|
||||
} else {
|
||||
rqd->ppa_addr = dev->mt->trans_ppa(tgt_dev, rqd->ppa_addr,
|
||||
TRANS_TGT_TO_DEV);
|
||||
rqd->ppa_addr = generic_to_dev_addr(dev, rqd->ppa_addr);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_put_blk);
|
||||
|
||||
void nvm_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
|
||||
int nvm_set_bb_tbl(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas,
|
||||
int type)
|
||||
{
|
||||
return dev->mt->mark_blk(dev, ppa, type);
|
||||
struct nvm_rq rqd;
|
||||
int ret;
|
||||
|
||||
if (nr_ppas > dev->ops->max_phys_sect) {
|
||||
pr_err("nvm: unable to update all sysblocks atomically\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
memset(&rqd, 0, sizeof(struct nvm_rq));
|
||||
|
||||
nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 1);
|
||||
nvm_generic_to_addr_mode(dev, &rqd);
|
||||
|
||||
ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type);
|
||||
nvm_free_rqd_ppalist(dev, &rqd);
|
||||
if (ret) {
|
||||
pr_err("nvm: sysblk failed bb mark\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_mark_blk);
|
||||
EXPORT_SYMBOL(nvm_set_bb_tbl);
|
||||
|
||||
int nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
|
||||
int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
|
||||
int nr_ppas, int type)
|
||||
{
|
||||
return dev->mt->submit_io(dev, rqd);
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
struct nvm_rq rqd;
|
||||
int ret;
|
||||
|
||||
if (nr_ppas > dev->ops->max_phys_sect) {
|
||||
pr_err("nvm: unable to update all blocks atomically\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
memset(&rqd, 0, sizeof(struct nvm_rq));
|
||||
|
||||
nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 1);
|
||||
nvm_tgt_generic_to_addr_mode(tgt_dev, &rqd);
|
||||
|
||||
ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type);
|
||||
nvm_free_rqd_ppalist(dev, &rqd);
|
||||
if (ret) {
|
||||
pr_err("nvm: sysblk failed bb mark\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_set_tgt_bb_tbl);
|
||||
|
||||
int nvm_max_phys_sects(struct nvm_tgt_dev *tgt_dev)
|
||||
{
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
|
||||
return dev->ops->max_phys_sect;
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_max_phys_sects);
|
||||
|
||||
int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
|
||||
{
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
|
||||
return dev->mt->submit_io(tgt_dev, rqd);
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_submit_io);
|
||||
|
||||
int nvm_erase_blk(struct nvm_dev *dev, struct nvm_block *blk)
|
||||
int nvm_erase_blk(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p, int flags)
|
||||
{
|
||||
return dev->mt->erase_blk(dev, blk, 0);
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
|
||||
return dev->mt->erase_blk(tgt_dev, p, flags);
|
||||
}
|
||||
EXPORT_SYMBOL(nvm_erase_blk);
|
||||
|
||||
+int nvm_get_l2p_tbl(struct nvm_tgt_dev *tgt_dev, u64 slba, u32 nlb,
+		    nvm_l2p_update_fn *update_l2p, void *priv)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+
+	if (!dev->ops->get_l2p_tbl)
+		return 0;
+
+	return dev->ops->get_l2p_tbl(dev, slba, nlb, update_l2p, priv);
+}
+EXPORT_SYMBOL(nvm_get_l2p_tbl);
+
+int nvm_get_area(struct nvm_tgt_dev *tgt_dev, sector_t *lba, sector_t len)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+
+	return dev->mt->get_area(dev, lba, len);
+}
+EXPORT_SYMBOL(nvm_get_area);
+
+void nvm_put_area(struct nvm_tgt_dev *tgt_dev, sector_t lba)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+
+	dev->mt->put_area(dev, lba);
+}
+EXPORT_SYMBOL(nvm_put_area);
+
 void nvm_addr_to_generic_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
 {
 	int i;

@@ -241,10 +333,11 @@ EXPORT_SYMBOL(nvm_generic_to_addr_mode);
 int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
 			const struct ppa_addr *ppas, int nr_ppas, int vblk)
 {
+	struct nvm_geo *geo = &dev->geo;
 	int i, plane_cnt, pl_idx;
 	struct ppa_addr ppa;
 
-	if ((!vblk || dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) {
+	if ((!vblk || geo->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) {
 		rqd->nr_ppas = nr_ppas;
 		rqd->ppa_addr = ppas[0];

@@ -262,7 +355,7 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
 		for (i = 0; i < nr_ppas; i++)
 			rqd->ppa_list[i] = ppas[i];
 	} else {
-		plane_cnt = dev->plane_mode;
+		plane_cnt = geo->plane_mode;
 		rqd->nr_ppas *= plane_cnt;
 
 		for (i = 0; i < nr_ppas; i++) {
@@ -287,7 +380,8 @@ void nvm_free_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd)
 }
 EXPORT_SYMBOL(nvm_free_rqd_ppalist);
 
-int nvm_erase_ppa(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas)
+int nvm_erase_ppa(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas,
+		  int flags)
 {
 	struct nvm_rq rqd;
 	int ret;

@@ -303,6 +397,8 @@ int nvm_erase_ppa(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas)
 
 	nvm_generic_to_addr_mode(dev, &rqd);
 
+	rqd.flags = flags;
+
 	ret = dev->ops->erase_block(dev, &rqd);
 
 	nvm_free_rqd_ppalist(dev, &rqd);
@@ -341,7 +437,7 @@ static int __nvm_submit_ppa(struct nvm_dev *dev, struct nvm_rq *rqd, int opcode,
 
 	nvm_generic_to_addr_mode(dev, rqd);
 
-	rqd->dev = dev;
+	rqd->dev = NULL;
 	rqd->opcode = opcode;
 	rqd->flags = flags;
 	rqd->bio = bio;
@@ -437,17 +533,18 @@ EXPORT_SYMBOL(nvm_submit_ppa);
  */
 int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
 {
+	struct nvm_geo *geo = &dev->geo;
 	int blk, offset, pl, blktype;
 
-	if (nr_blks != dev->blks_per_lun * dev->plane_mode)
+	if (nr_blks != geo->blks_per_lun * geo->plane_mode)
 		return -EINVAL;
 
-	for (blk = 0; blk < dev->blks_per_lun; blk++) {
-		offset = blk * dev->plane_mode;
+	for (blk = 0; blk < geo->blks_per_lun; blk++) {
+		offset = blk * geo->plane_mode;
 		blktype = blks[offset];
 
 		/* Bad blocks on any planes take precedence over other types */
-		for (pl = 0; pl < dev->plane_mode; pl++) {
+		for (pl = 0; pl < geo->plane_mode; pl++) {
 			if (blks[offset + pl] &
 					(NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
 				blktype = blks[offset + pl];

@@ -458,7 +555,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
 		blks[blk] = blktype;
 	}
 
-	return dev->blks_per_lun;
+	return geo->blks_per_lun;
 }
 EXPORT_SYMBOL(nvm_bb_tbl_fold);
 
@@ -470,11 +567,22 @@ int nvm_get_bb_tbl(struct nvm_dev *dev, struct ppa_addr ppa, u8 *blks)
 }
 EXPORT_SYMBOL(nvm_get_bb_tbl);
 
+int nvm_get_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr ppa,
+		       u8 *blks)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+
+	ppa = dev->mt->trans_ppa(tgt_dev, ppa, TRANS_TGT_TO_DEV);
+	return nvm_get_bb_tbl(dev, ppa, blks);
+}
+EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
+
 static int nvm_init_slc_tbl(struct nvm_dev *dev, struct nvm_id_group *grp)
 {
+	struct nvm_geo *geo = &dev->geo;
 	int i;
 
-	dev->lps_per_blk = dev->pgs_per_blk;
+	dev->lps_per_blk = geo->pgs_per_blk;
 	dev->lptbl = kcalloc(dev->lps_per_blk, sizeof(int), GFP_KERNEL);
 	if (!dev->lptbl)
 		return -ENOMEM;
@@ -520,29 +628,32 @@ static int nvm_core_init(struct nvm_dev *dev)
 {
 	struct nvm_id *id = &dev->identity;
 	struct nvm_id_group *grp = &id->groups[0];
+	struct nvm_geo *geo = &dev->geo;
 	int ret;
 
-	/* device values */
-	dev->nr_chnls = grp->num_ch;
-	dev->luns_per_chnl = grp->num_lun;
-	dev->pgs_per_blk = grp->num_pg;
-	dev->blks_per_lun = grp->num_blk;
-	dev->nr_planes = grp->num_pln;
-	dev->fpg_size = grp->fpg_sz;
-	dev->pfpg_size = grp->fpg_sz * grp->num_pln;
-	dev->sec_size = grp->csecs;
-	dev->oob_size = grp->sos;
-	dev->sec_per_pg = grp->fpg_sz / grp->csecs;
-	dev->mccap = grp->mccap;
-	memcpy(&dev->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
+	/* Whole device values */
+	geo->nr_chnls = grp->num_ch;
+	geo->luns_per_chnl = grp->num_lun;
 
-	dev->plane_mode = NVM_PLANE_SINGLE;
-	dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size;
+	/* Generic device values */
+	geo->pgs_per_blk = grp->num_pg;
+	geo->blks_per_lun = grp->num_blk;
+	geo->nr_planes = grp->num_pln;
+	geo->fpg_size = grp->fpg_sz;
+	geo->pfpg_size = grp->fpg_sz * grp->num_pln;
+	geo->sec_size = grp->csecs;
+	geo->oob_size = grp->sos;
+	geo->sec_per_pg = grp->fpg_sz / grp->csecs;
+	geo->mccap = grp->mccap;
+	memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
+
+	geo->plane_mode = NVM_PLANE_SINGLE;
+	geo->max_rq_size = dev->ops->max_phys_sect * geo->sec_size;
 
 	if (grp->mpos & 0x020202)
-		dev->plane_mode = NVM_PLANE_DOUBLE;
+		geo->plane_mode = NVM_PLANE_DOUBLE;
 	if (grp->mpos & 0x040404)
-		dev->plane_mode = NVM_PLANE_QUAD;
+		geo->plane_mode = NVM_PLANE_QUAD;
 
 	if (grp->mtype != 0) {
 		pr_err("nvm: memory type not supported\n");

@@ -550,13 +661,13 @@ static int nvm_core_init(struct nvm_dev *dev)
 	}
 
 	/* calculated values */
-	dev->sec_per_pl = dev->sec_per_pg * dev->nr_planes;
-	dev->sec_per_blk = dev->sec_per_pl * dev->pgs_per_blk;
-	dev->sec_per_lun = dev->sec_per_blk * dev->blks_per_lun;
-	dev->nr_luns = dev->luns_per_chnl * dev->nr_chnls;
+	geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
+	geo->sec_per_blk = geo->sec_per_pl * geo->pgs_per_blk;
+	geo->sec_per_lun = geo->sec_per_blk * geo->blks_per_lun;
+	geo->nr_luns = geo->luns_per_chnl * geo->nr_chnls;
 
-	dev->total_secs = dev->nr_luns * dev->sec_per_lun;
-	dev->lun_map = kcalloc(BITS_TO_LONGS(dev->nr_luns),
+	dev->total_secs = geo->nr_luns * geo->sec_per_lun;
+	dev->lun_map = kcalloc(BITS_TO_LONGS(geo->nr_luns),
 			      sizeof(unsigned long), GFP_KERNEL);
 	if (!dev->lun_map)
 		return -ENOMEM;

@@ -583,7 +694,7 @@ static int nvm_core_init(struct nvm_dev *dev)
 	mutex_init(&dev->mlock);
 	spin_lock_init(&dev->lock);
 
-	blk_queue_logical_block_size(dev->q, dev->sec_size);
+	blk_queue_logical_block_size(dev->q, geo->sec_size);
 
 	return 0;
 err_fmtype:
@@ -617,6 +728,7 @@ void nvm_free(struct nvm_dev *dev)
 
 static int nvm_init(struct nvm_dev *dev)
 {
+	struct nvm_geo *geo = &dev->geo;
 	int ret = -EINVAL;
 
 	if (!dev->q || !dev->ops)

@@ -648,20 +760,15 @@ static int nvm_init(struct nvm_dev *dev)
 	}
 
 	pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
-			dev->name, dev->sec_per_pg, dev->nr_planes,
-			dev->pgs_per_blk, dev->blks_per_lun, dev->nr_luns,
-			dev->nr_chnls);
+			dev->name, geo->sec_per_pg, geo->nr_planes,
+			geo->pgs_per_blk, geo->blks_per_lun,
+			geo->nr_luns, geo->nr_chnls);
 	return 0;
 err:
 	pr_err("nvm: failed to initialize nvm\n");
 	return ret;
 }
 
-static void nvm_exit(struct nvm_dev *dev)
-{
-	nvm_sysfs_unregister_dev(dev);
-}
-
 struct nvm_dev *nvm_alloc_dev(int node)
 {
 	return kzalloc_node(sizeof(struct nvm_dev), GFP_KERNEL, node);
@@ -691,10 +798,6 @@ int nvm_register(struct nvm_dev *dev)
 		}
 	}
 
-	ret = nvm_sysfs_register_dev(dev);
-	if (ret)
-		goto err_ppalist;
-
 	if (dev->identity.cap & NVM_ID_DCAP_BBLKMGMT) {
 		ret = nvm_get_sysblock(dev, &dev->sb);
 		if (!ret)

@@ -711,8 +814,6 @@ int nvm_register(struct nvm_dev *dev)
 	up_write(&nvm_lock);
 
 	return 0;
-err_ppalist:
-	dev->ops->destroy_dma_pool(dev->dma_pool);
 err_init:
 	kfree(dev->lun_map);
 	return ret;

@@ -725,7 +826,7 @@ void nvm_unregister(struct nvm_dev *dev)
 	list_del(&dev->devices);
 	up_write(&nvm_lock);
 
-	nvm_exit(dev);
+	nvm_free(dev);
 }
 EXPORT_SYMBOL(nvm_unregister);
 
@@ -754,149 +855,15 @@ static int __nvm_configure_create(struct nvm_ioctl_create *create)
 	}
 	s = &create->conf.s;
 
-	if (s->lun_begin > s->lun_end || s->lun_end > dev->nr_luns) {
+	if (s->lun_begin > s->lun_end || s->lun_end > dev->geo.nr_luns) {
 		pr_err("nvm: lun out of bound (%u:%u > %u)\n",
-			s->lun_begin, s->lun_end, dev->nr_luns);
+			s->lun_begin, s->lun_end, dev->geo.nr_luns);
 		return -EINVAL;
 	}
 
 	return dev->mt->create_tgt(dev, create);
 }
 
-#ifdef CONFIG_NVM_DEBUG
-static int nvm_configure_show(const char *val)
-{
-	struct nvm_dev *dev;
-	char opcode, devname[DISK_NAME_LEN];
-	int ret;
-
-	ret = sscanf(val, "%c %32s", &opcode, devname);
-	if (ret != 2) {
-		pr_err("nvm: invalid command. Use \"opcode devicename\".\n");
-		return -EINVAL;
-	}
-
-	down_write(&nvm_lock);
-	dev = nvm_find_nvm_dev(devname);
-	up_write(&nvm_lock);
-	if (!dev) {
-		pr_err("nvm: device not found\n");
-		return -EINVAL;
-	}
-
-	if (!dev->mt)
-		return 0;
-
-	dev->mt->lun_info_print(dev);
-
-	return 0;
-}
-
-static int nvm_configure_remove(const char *val)
-{
-	struct nvm_ioctl_remove remove;
-	struct nvm_dev *dev;
-	char opcode;
-	int ret = 0;
-
-	ret = sscanf(val, "%c %256s", &opcode, remove.tgtname);
-	if (ret != 2) {
-		pr_err("nvm: invalid command. Use \"d targetname\".\n");
-		return -EINVAL;
-	}
-
-	remove.flags = 0;
-
-	list_for_each_entry(dev, &nvm_devices, devices) {
-		ret = dev->mt->remove_tgt(dev, &remove);
-		if (!ret)
-			break;
-	}
-
-	return ret;
-}
-
-static int nvm_configure_create(const char *val)
-{
-	struct nvm_ioctl_create create;
-	char opcode;
-	int lun_begin, lun_end, ret;
-
-	ret = sscanf(val, "%c %256s %256s %48s %u:%u", &opcode, create.dev,
-						create.tgtname, create.tgttype,
-						&lun_begin, &lun_end);
-	if (ret != 6) {
-		pr_err("nvm: invalid command. Use \"opcode device name tgttype lun_begin:lun_end\".\n");
-		return -EINVAL;
-	}
-
-	create.flags = 0;
-	create.conf.type = NVM_CONFIG_TYPE_SIMPLE;
-	create.conf.s.lun_begin = lun_begin;
-	create.conf.s.lun_end = lun_end;
-
-	return __nvm_configure_create(&create);
-}
-
-
-/* Exposes administrative interface through /sys/module/lnvm/configure_by_str */
-static int nvm_configure_by_str_event(const char *val,
-				      const struct kernel_param *kp)
-{
-	char opcode;
-	int ret;
-
-	ret = sscanf(val, "%c", &opcode);
-	if (ret != 1) {
-		pr_err("nvm: string must have the format of \"cmd ...\"\n");
-		return -EINVAL;
-	}
-
-	switch (opcode) {
-	case 'a':
-		return nvm_configure_create(val);
-	case 'd':
-		return nvm_configure_remove(val);
-	case 's':
-		return nvm_configure_show(val);
-	default:
-		pr_err("nvm: invalid command\n");
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int nvm_configure_get(char *buf, const struct kernel_param *kp)
-{
-	int sz;
-	struct nvm_dev *dev;
-
-	sz = sprintf(buf, "available devices:\n");
-	down_write(&nvm_lock);
-	list_for_each_entry(dev, &nvm_devices, devices) {
-		if (sz > 4095 - DISK_NAME_LEN - 2)
-			break;
-		sz += sprintf(buf + sz, " %32s\n", dev->name);
-	}
-	up_write(&nvm_lock);
-
-	return sz;
-}
-
-static const struct kernel_param_ops nvm_configure_by_str_event_param_ops = {
-	.set	= nvm_configure_by_str_event,
-	.get	= nvm_configure_get,
-};
-
-#undef MODULE_PARAM_PREFIX
-#define MODULE_PARAM_PREFIX	"lnvm."
-
-module_param_cb(configure_debug, &nvm_configure_by_str_event_param_ops, NULL,
-								0644);
-
-#endif /* CONFIG_NVM_DEBUG */
-
 static long nvm_ioctl_info(struct file *file, void __user *arg)
 {
 	struct nvm_ioctl_info *info;
@@ -35,6 +35,165 @@ static const struct block_device_operations gen_fops = {
 	.owner		= THIS_MODULE,
 };
 
+static int gen_reserve_luns(struct nvm_dev *dev, struct nvm_target *t,
+			    int lun_begin, int lun_end)
+{
+	int i;
+
+	for (i = lun_begin; i <= lun_end; i++) {
+		if (test_and_set_bit(i, dev->lun_map)) {
+			pr_err("nvm: lun %d already allocated\n", i);
+			goto err;
+		}
+	}
+
+	return 0;
+
+err:
+	while (--i > lun_begin)
+		clear_bit(i, dev->lun_map);
+
+	return -EBUSY;
+}
+
+static void gen_release_luns_err(struct nvm_dev *dev, int lun_begin,
+				 int lun_end)
+{
+	int i;
+
+	for (i = lun_begin; i <= lun_end; i++)
+		WARN_ON(!test_and_clear_bit(i, dev->lun_map));
+}
+
+static void gen_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+	struct gen_dev_map *dev_map = tgt_dev->map;
+	int i, j;
+
+	for (i = 0; i < dev_map->nr_chnls; i++) {
+		struct gen_ch_map *ch_map = &dev_map->chnls[i];
+		int *lun_offs = ch_map->lun_offs;
+		int ch = i + ch_map->ch_off;
+
+		for (j = 0; j < ch_map->nr_luns; j++) {
+			int lun = j + lun_offs[j];
+			int lunid = (ch * dev->geo.luns_per_chnl) + lun;
+
+			WARN_ON(!test_and_clear_bit(lunid, dev->lun_map));
+		}
+
+		kfree(ch_map->lun_offs);
+	}
+
+	kfree(dev_map->chnls);
+	kfree(dev_map);
+	kfree(tgt_dev->luns);
+	kfree(tgt_dev);
+}
+
+static struct nvm_tgt_dev *gen_create_tgt_dev(struct nvm_dev *dev,
+					      int lun_begin, int lun_end)
+{
+	struct nvm_tgt_dev *tgt_dev = NULL;
+	struct gen_dev_map *dev_rmap = dev->rmap;
+	struct gen_dev_map *dev_map;
+	struct ppa_addr *luns;
+	int nr_luns = lun_end - lun_begin + 1;
+	int luns_left = nr_luns;
+	int nr_chnls = nr_luns / dev->geo.luns_per_chnl;
+	int nr_chnls_mod = nr_luns % dev->geo.luns_per_chnl;
+	int bch = lun_begin / dev->geo.luns_per_chnl;
+	int blun = lun_begin % dev->geo.luns_per_chnl;
+	int lunid = 0;
+	int lun_balanced = 1;
+	int prev_nr_luns;
+	int i, j;
+
+	nr_chnls = nr_luns / dev->geo.luns_per_chnl;
+	nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
+
+	dev_map = kmalloc(sizeof(struct gen_dev_map), GFP_KERNEL);
+	if (!dev_map)
+		goto err_dev;
+
+	dev_map->chnls = kcalloc(nr_chnls, sizeof(struct gen_ch_map),
+				 GFP_KERNEL);
+	if (!dev_map->chnls)
+		goto err_chnls;
+
+	luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
+	if (!luns)
+		goto err_luns;
+
+	prev_nr_luns = (luns_left > dev->geo.luns_per_chnl) ?
+			dev->geo.luns_per_chnl : luns_left;
+	for (i = 0; i < nr_chnls; i++) {
+		struct gen_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
+		int *lun_roffs = ch_rmap->lun_offs;
+		struct gen_ch_map *ch_map = &dev_map->chnls[i];
+		int *lun_offs;
+		int luns_in_chnl = (luns_left > dev->geo.luns_per_chnl) ?
+					dev->geo.luns_per_chnl : luns_left;
+
+		if (lun_balanced && prev_nr_luns != luns_in_chnl)
+			lun_balanced = 0;
+
+		ch_map->ch_off = ch_rmap->ch_off = bch;
+		ch_map->nr_luns = luns_in_chnl;
+
+		lun_offs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
+		if (!lun_offs)
+			goto err_ch;
+
+		for (j = 0; j < luns_in_chnl; j++) {
+			luns[lunid].ppa = 0;
+			luns[lunid].g.ch = i;
+			luns[lunid++].g.lun = j;
+
+			lun_offs[j] = blun;
+			lun_roffs[j + blun] = blun;
+		}
+
+		ch_map->lun_offs = lun_offs;
+
+		/* when starting a new channel, lun offset is reset */
+		blun = 0;
+		luns_left -= luns_in_chnl;
+	}
+
+	dev_map->nr_chnls = nr_chnls;
+
+	tgt_dev = kmalloc(sizeof(struct nvm_tgt_dev), GFP_KERNEL);
+	if (!tgt_dev)
+		goto err_ch;
+
+	memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
+	/* Target device only owns a portion of the physical device */
+	tgt_dev->geo.nr_chnls = nr_chnls;
+	tgt_dev->geo.nr_luns = nr_luns;
+	tgt_dev->geo.luns_per_chnl = (lun_balanced) ? prev_nr_luns : -1;
+	tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
+	tgt_dev->q = dev->q;
+	tgt_dev->map = dev_map;
+	tgt_dev->luns = luns;
+	memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
+
+	tgt_dev->parent = dev;
+
+	return tgt_dev;
+err_ch:
+	while (--i > 0)
+		kfree(dev_map->chnls[i].lun_offs);
+	kfree(luns);
+err_luns:
+	kfree(dev_map->chnls);
+err_chnls:
+	kfree(dev_map);
+err_dev:
+	return tgt_dev;
+}
+
 static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 {
 	struct gen_dev *gn = dev->mp;

@@ -43,6 +202,7 @@ static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	struct gendisk *tdisk;
 	struct nvm_tgt_type *tt;
 	struct nvm_target *t;
+	struct nvm_tgt_dev *tgt_dev;
 	void *targetdata;
 
 	tt = nvm_find_target_type(create->tgttype, 1);

@@ -64,9 +224,18 @@ static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	if (!t)
 		return -ENOMEM;
 
+	if (gen_reserve_luns(dev, t, s->lun_begin, s->lun_end))
+		goto err_t;
+
+	tgt_dev = gen_create_tgt_dev(dev, s->lun_begin, s->lun_end);
+	if (!tgt_dev) {
+		pr_err("nvm: could not create target device\n");
+		goto err_reserve;
+	}
+
 	tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node);
 	if (!tqueue)
-		goto err_t;
+		goto err_dev;
 	blk_queue_make_request(tqueue, tt->make_rq);
 
 	tdisk = alloc_disk(0);

@@ -80,7 +249,7 @@ static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	tdisk->fops = &gen_fops;
 	tdisk->queue = tqueue;
 
-	targetdata = tt->init(dev, tdisk, s->lun_begin, s->lun_end);
+	targetdata = tt->init(tgt_dev, tdisk);
 	if (IS_ERR(targetdata))
 		goto err_init;
 

@@ -94,7 +263,7 @@ static int gen_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 
 	t->type = tt;
 	t->disk = tdisk;
-	t->dev = dev;
+	t->dev = tgt_dev;
 
 	mutex_lock(&gn->lock);
 	list_add_tail(&t->list, &gn->targets);

@@ -105,6 +274,10 @@ err_init:
 	put_disk(tdisk);
 err_queue:
 	blk_cleanup_queue(tqueue);
+err_dev:
+	kfree(tgt_dev);
+err_reserve:
+	gen_release_luns_err(dev, s->lun_begin, s->lun_end);
 err_t:
 	kfree(t);
 	return -ENOMEM;

@@ -122,6 +295,7 @@ static void __gen_remove_target(struct nvm_target *t)
 	if (tt->exit)
 		tt->exit(tdisk->private_data);
 
+	gen_remove_tgt_dev(t->dev);
 	put_disk(tdisk);
 
 	list_del(&t->list);

@@ -160,10 +334,11 @@ static int gen_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
 
 static int gen_get_area(struct nvm_dev *dev, sector_t *lba, sector_t len)
 {
+	struct nvm_geo *geo = &dev->geo;
 	struct gen_dev *gn = dev->mp;
 	struct gen_area *area, *prev, *next;
 	sector_t begin = 0;
-	sector_t max_sectors = (dev->sec_size * dev->total_secs) >> 9;
+	sector_t max_sectors = (geo->sec_size * dev->total_secs) >> 9;
 
 	if (len > max_sectors)
 		return -EINVAL;
@@ -220,240 +395,74 @@ static void gen_put_area(struct nvm_dev *dev, sector_t begin)
 	spin_unlock(&dev->lock);
 }
 
-static void gen_blocks_free(struct nvm_dev *dev)
-{
-	struct gen_dev *gn = dev->mp;
-	struct gen_lun *lun;
-	int i;
-
-	gen_for_each_lun(gn, lun, i) {
-		if (!lun->vlun.blocks)
-			break;
-		vfree(lun->vlun.blocks);
-	}
-}
-
-static void gen_luns_free(struct nvm_dev *dev)
-{
-	struct gen_dev *gn = dev->mp;
-
-	kfree(gn->luns);
-}
-
-static int gen_luns_init(struct nvm_dev *dev, struct gen_dev *gn)
-{
-	struct gen_lun *lun;
-	int i;
-
-	gn->luns = kcalloc(dev->nr_luns, sizeof(struct gen_lun), GFP_KERNEL);
-	if (!gn->luns)
-		return -ENOMEM;
-
-	gen_for_each_lun(gn, lun, i) {
-		spin_lock_init(&lun->vlun.lock);
-		INIT_LIST_HEAD(&lun->free_list);
-		INIT_LIST_HEAD(&lun->used_list);
-		INIT_LIST_HEAD(&lun->bb_list);
-
-		lun->reserved_blocks = 2; /* for GC only */
-		lun->vlun.id = i;
-		lun->vlun.lun_id = i % dev->luns_per_chnl;
-		lun->vlun.chnl_id = i / dev->luns_per_chnl;
-		lun->vlun.nr_free_blocks = dev->blks_per_lun;
-	}
-	return 0;
-}
-
-static int gen_block_bb(struct gen_dev *gn, struct ppa_addr ppa,
-			u8 *blks, int nr_blks)
-{
-	struct nvm_dev *dev = gn->dev;
-	struct gen_lun *lun;
-	struct nvm_block *blk;
-	int i;
-
-	nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
-	if (nr_blks < 0)
-		return nr_blks;
-
-	lun = &gn->luns[(dev->luns_per_chnl * ppa.g.ch) + ppa.g.lun];
-
-	for (i = 0; i < nr_blks; i++) {
-		if (blks[i] == 0)
-			continue;
-
-		blk = &lun->vlun.blocks[i];
-		list_move_tail(&blk->list, &lun->bb_list);
-		lun->vlun.nr_free_blocks--;
-	}
-
-	return 0;
-}
-
-static int gen_block_map(u64 slba, u32 nlb, __le64 *entries, void *private)
-{
-	struct nvm_dev *dev = private;
-	struct gen_dev *gn = dev->mp;
-	u64 elba = slba + nlb;
-	struct gen_lun *lun;
-	struct nvm_block *blk;
-	u64 i;
-	int lun_id;
-
-	if (unlikely(elba > dev->total_secs)) {
-		pr_err("gen: L2P data from device is out of bounds!\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < nlb; i++) {
-		u64 pba = le64_to_cpu(entries[i]);
-
-		if (unlikely(pba >= dev->total_secs && pba != U64_MAX)) {
-			pr_err("gen: L2P data entry is out of bounds!\n");
-			return -EINVAL;
-		}
-
-		/* Address zero is a special one. The first page on a disk is
-		 * protected. It often holds internal device boot
-		 * information.
-		 */
-		if (!pba)
-			continue;
-
-		/* resolve block from physical address */
-		lun_id = div_u64(pba, dev->sec_per_lun);
-		lun = &gn->luns[lun_id];
-
-		/* Calculate block offset into lun */
-		pba = pba - (dev->sec_per_lun * lun_id);
-		blk = &lun->vlun.blocks[div_u64(pba, dev->sec_per_blk)];
-
-		if (!blk->state) {
-			/* at this point, we don't know anything about the
-			 * block. It's up to the FTL on top to re-etablish the
-			 * block state. The block is assumed to be open.
-			 */
-			list_move_tail(&blk->list, &lun->used_list);
-			blk->state = NVM_BLK_ST_TGT;
-			lun->vlun.nr_free_blocks--;
-		}
-	}
-
-	return 0;
-}
-
-static int gen_blocks_init(struct nvm_dev *dev, struct gen_dev *gn)
-{
-	struct gen_lun *lun;
-	struct nvm_block *block;
-	sector_t lun_iter, blk_iter, cur_block_id = 0;
-	int ret, nr_blks;
-	u8 *blks;
-
-	nr_blks = dev->blks_per_lun * dev->plane_mode;
-	blks = kmalloc(nr_blks, GFP_KERNEL);
-	if (!blks)
-		return -ENOMEM;
-
-	gen_for_each_lun(gn, lun, lun_iter) {
-		lun->vlun.blocks = vzalloc(sizeof(struct nvm_block) *
-							dev->blks_per_lun);
-		if (!lun->vlun.blocks) {
-			kfree(blks);
-			return -ENOMEM;
-		}
-
-		for (blk_iter = 0; blk_iter < dev->blks_per_lun; blk_iter++) {
-			block = &lun->vlun.blocks[blk_iter];
-
-			INIT_LIST_HEAD(&block->list);
-
-			block->lun = &lun->vlun;
-			block->id = cur_block_id++;
-
-			/* First block is reserved for device */
-			if (unlikely(lun_iter == 0 && blk_iter == 0)) {
-				lun->vlun.nr_free_blocks--;
-				continue;
-			}
-
-			list_add_tail(&block->list, &lun->free_list);
-		}
-
-		if (dev->ops->get_bb_tbl) {
-			struct ppa_addr ppa;
-
-			ppa.ppa = 0;
-			ppa.g.ch = lun->vlun.chnl_id;
-			ppa.g.lun = lun->vlun.lun_id;
-
-			ret = nvm_get_bb_tbl(dev, ppa, blks);
-			if (ret)
-				pr_err("gen: could not get BB table\n");
-
-			ret = gen_block_bb(gn, ppa, blks, nr_blks);
-			if (ret)
-				pr_err("gen: BB table map failed\n");
-		}
-	}
-
-	if ((dev->identity.dom & NVM_RSP_L2P) && dev->ops->get_l2p_tbl) {
-		ret = dev->ops->get_l2p_tbl(dev, 0, dev->total_secs,
-							gen_block_map, dev);
-		if (ret) {
-			pr_err("gen: could not read L2P table.\n");
-			pr_warn("gen: default block initialization");
-		}
-	}
-
-	kfree(blks);
-	return 0;
-}
-
 static void gen_free(struct nvm_dev *dev)
 {
-	gen_blocks_free(dev);
-	gen_luns_free(dev);
 	kfree(dev->mp);
+	kfree(dev->rmap);
 	dev->mp = NULL;
 }
 
 static int gen_register(struct nvm_dev *dev)
 {
 	struct gen_dev *gn;
-	int ret;
+	struct gen_dev_map *dev_rmap;
+	int i, j;
 
 	if (!try_module_get(THIS_MODULE))
 		return -ENODEV;
 
 	gn = kzalloc(sizeof(struct gen_dev), GFP_KERNEL);
 	if (!gn)
-		return -ENOMEM;
+		goto err_gn;
 
+	dev_rmap = kmalloc(sizeof(struct gen_dev_map), GFP_KERNEL);
+	if (!dev_rmap)
+		goto err_rmap;
+
+	dev_rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct gen_ch_map),
+				  GFP_KERNEL);
+	if (!dev_rmap->chnls)
+		goto err_chnls;
+
+	for (i = 0; i < dev->geo.nr_chnls; i++) {
+		struct gen_ch_map *ch_rmap;
+		int *lun_roffs;
+		int luns_in_chnl = dev->geo.luns_per_chnl;
+
+		ch_rmap = &dev_rmap->chnls[i];
+
+		ch_rmap->ch_off = -1;
+		ch_rmap->nr_luns = luns_in_chnl;
+
+		lun_roffs = kcalloc(luns_in_chnl, sizeof(int), GFP_KERNEL);
+		if (!lun_roffs)
+			goto err_ch;
+
+		for (j = 0; j < luns_in_chnl; j++)
+			lun_roffs[j] = -1;
+
+		ch_rmap->lun_offs = lun_roffs;
+	}
+
 	gn->dev = dev;
-	gn->nr_luns = dev->nr_luns;
+	gn->nr_luns = dev->geo.nr_luns;
 	INIT_LIST_HEAD(&gn->area_list);
 	mutex_init(&gn->lock);
 	INIT_LIST_HEAD(&gn->targets);
 	dev->mp = gn;
-
-	ret = gen_luns_init(dev, gn);
-	if (ret) {
-		pr_err("gen: could not initialize luns\n");
-		goto err;
-	}
-
-	ret = gen_blocks_init(dev, gn);
-	if (ret) {
-		pr_err("gen: could not initialize blocks\n");
-		goto err;
-	}
+	dev->rmap = dev_rmap;
 
 	return 1;
-err:
+err_ch:
+	while (--i >= 0)
+		kfree(dev_rmap->chnls[i].lun_offs);
+err_chnls:
+	kfree(dev_rmap);
+err_rmap:
 	gen_free(dev);
+err_gn:
 	module_put(THIS_MODULE);
-	return ret;
+	return -ENOMEM;
 }
 
 static void gen_unregister(struct nvm_dev *dev)
@@ -463,7 +472,7 @@ static void gen_unregister(struct nvm_dev *dev)
 
 	mutex_lock(&gn->lock);
 	list_for_each_entry_safe(t, tmp, &gn->targets, list) {
-		if (t->dev != dev)
+		if (t->dev->parent != dev)
 			continue;
 		__gen_remove_target(t);
 	}
@@ -473,168 +482,142 @@ static void gen_unregister(struct nvm_dev *dev)
 	module_put(THIS_MODULE);
 }
 
-static struct nvm_block *gen_get_blk(struct nvm_dev *dev,
-				struct nvm_lun *vlun, unsigned long flags)
+static int gen_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
 {
-	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
-	struct nvm_block *blk = NULL;
-	int is_gc = flags & NVM_IOTYPE_GC;
+	struct gen_dev_map *dev_map = tgt_dev->map;
+	struct gen_ch_map *ch_map = &dev_map->chnls[p->g.ch];
+	int lun_off = ch_map->lun_offs[p->g.lun];
+	struct nvm_dev *dev = tgt_dev->parent;
+	struct gen_dev_map *dev_rmap = dev->rmap;
+	struct gen_ch_map *ch_rmap;
+	int lun_roff;
 
-	spin_lock(&vlun->lock);
-	if (list_empty(&lun->free_list)) {
-		pr_err_ratelimited("gen: lun %u have no free pages available",
-							lun->vlun.id);
-		goto out;
+	p->g.ch += ch_map->ch_off;
+	p->g.lun += lun_off;
+
+	ch_rmap = &dev_rmap->chnls[p->g.ch];
+	lun_roff = ch_rmap->lun_offs[p->g.lun];
+
+	if (unlikely(ch_rmap->ch_off < 0 || lun_roff < 0)) {
+		pr_err("nvm: corrupted device partition table\n");
+		return -EINVAL;
 	}
 
-	if (!is_gc && lun->vlun.nr_free_blocks < lun->reserved_blocks)
-		goto out;
+	return 0;
+}
 
-	blk = list_first_entry(&lun->free_list, struct nvm_block, list);
+static int gen_map_to_tgt(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
+{
+	struct nvm_dev *dev = tgt_dev->parent;
+	struct gen_dev_map *dev_rmap = dev->rmap;
+	struct gen_ch_map *ch_rmap = &dev_rmap->chnls[p->g.ch];
+	int lun_roff = ch_rmap->lun_offs[p->g.lun];
+
+	p->g.ch -= ch_rmap->ch_off;
+	p->g.lun -= lun_roff;
+
+	return 0;
+}
+
+static int gen_trans_rq(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
+			int flag)
+{
+	gen_trans_fn *f;
+	int i;
+	int ret = 0;
+
+	f = (flag == TRANS_TGT_TO_DEV) ? gen_map_to_dev : gen_map_to_tgt;
+
+	if (rqd->nr_ppas == 1)
+		return f(tgt_dev, &rqd->ppa_addr);
+
+	for (i = 0; i < rqd->nr_ppas; i++) {
+		ret = f(tgt_dev, &rqd->ppa_list[i]);
+		if (ret)
+			goto out;
+	}
 
-	list_move_tail(&blk->list, &lun->used_list);
-	blk->state = NVM_BLK_ST_TGT;
-	lun->vlun.nr_free_blocks--;
 out:
-	spin_unlock(&vlun->lock);
-	return blk;
+	return ret;
 }
 
-static void gen_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
-{
-	struct nvm_lun *vlun = blk->lun;
-	struct gen_lun *lun = container_of(vlun, struct gen_lun, vlun);
-
-	spin_lock(&vlun->lock);
-	if (blk->state & NVM_BLK_ST_TGT) {
-		list_move_tail(&blk->list, &lun->free_list);
-		lun->vlun.nr_free_blocks++;
-		blk->state = NVM_BLK_ST_FREE;
-	} else if (blk->state & NVM_BLK_ST_BAD) {
-		list_move_tail(&blk->list, &lun->bb_list);
-		blk->state = NVM_BLK_ST_BAD;
-	} else {
-		WARN_ON_ONCE(1);
-		pr_err("gen: erroneous block type (%lu -> %u)\n",
-							blk->id, blk->state);
-		list_move_tail(&blk->list, &lun->bb_list);
-	}
-	spin_unlock(&vlun->lock);
-}
-
-static void gen_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
-{
-	struct gen_dev *gn = dev->mp;
-	struct gen_lun *lun;
-	struct nvm_block *blk;
-
-	pr_debug("gen: ppa (ch: %u lun: %u blk: %u pg: %u) -> %u\n",
-			ppa.g.ch, ppa.g.lun, ppa.g.blk, ppa.g.pg, type);
-
-	if (unlikely(ppa.g.ch > dev->nr_chnls ||
-			ppa.g.lun > dev->luns_per_chnl ||
-			ppa.g.blk > dev->blks_per_lun)) {
-		WARN_ON_ONCE(1);
-		pr_err("gen: ppa broken (ch: %u > %u lun: %u > %u blk: %u > %u",
-				ppa.g.ch, dev->nr_chnls,
-				ppa.g.lun, dev->luns_per_chnl,
-				ppa.g.blk, dev->blks_per_lun);
-		return;
-	}
-
-	lun = &gn->luns[(dev->luns_per_chnl * ppa.g.ch) + ppa.g.lun];
-	blk = &lun->vlun.blocks[ppa.g.blk];
-
-	/* will be moved to bb list on put_blk from target */
-	blk->state = type;
-}
-
 /*
  * mark block bad in gen. It is expected that the target recovers separately
  */
 static void gen_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd)
 {
 	int bit = -1;
 	int max_secs = dev->ops->max_phys_sect;
 	void *comp_bits = &rqd->ppa_status;
 
 	nvm_addr_to_generic_mode(dev, rqd);
 
 	/* look up blocks and mark them as bad */
|
||||
if (rqd->nr_ppas == 1) {
|
||||
gen_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD);
|
||||
return;
|
||||
}
|
||||
|
||||
while ((bit = find_next_bit(comp_bits, max_secs, bit + 1)) < max_secs)
|
||||
gen_mark_blk(dev, rqd->ppa_list[bit], NVM_BLK_ST_BAD);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void gen_end_io(struct nvm_rq *rqd)
|
||||
{
|
||||
struct nvm_tgt_dev *tgt_dev = rqd->dev;
|
||||
struct nvm_tgt_instance *ins = rqd->ins;
|
||||
|
||||
if (rqd->error == NVM_RSP_ERR_FAILWRITE)
|
||||
gen_mark_blk_bad(rqd->dev, rqd);
|
||||
/* Convert address space */
|
||||
if (tgt_dev)
|
||||
gen_trans_rq(tgt_dev, rqd, TRANS_DEV_TO_TGT);
|
||||
|
||||
ins->tt->end_io(rqd);
|
||||
}
|
||||
|
||||
static int gen_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
|
||||
static int gen_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
|
||||
{
|
||||
struct nvm_dev *dev = tgt_dev->parent;
|
||||
|
||||
if (!dev->ops->submit_io)
|
||||
return -ENODEV;
|
||||
|
||||
/* Convert address space */
|
||||
gen_trans_rq(tgt_dev, rqd, TRANS_TGT_TO_DEV);
|
||||
nvm_generic_to_addr_mode(dev, rqd);
|
||||
|
||||
rqd->dev = dev;
|
||||
rqd->dev = tgt_dev;
|
||||
rqd->end_io = gen_end_io;
|
||||
return dev->ops->submit_io(dev, rqd);
|
||||
}
|
||||
|
||||
static int gen_erase_blk(struct nvm_dev *dev, struct nvm_block *blk,
|
||||
unsigned long flags)
|
||||
static int gen_erase_blk(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p,
|
||||
int flags)
|
||||
{
|
||||
struct ppa_addr addr = block_to_ppa(dev, blk);
|
||||
/* Convert address space */
|
||||
gen_map_to_dev(tgt_dev, p);
|
||||
|
||||
return nvm_erase_ppa(dev, &addr, 1);
|
||||
return nvm_erase_ppa(tgt_dev->parent, p, 1, flags);
|
||||
}
|
||||
|
||||
static int gen_reserve_lun(struct nvm_dev *dev, int lunid)
|
||||
static struct ppa_addr gen_trans_ppa(struct nvm_tgt_dev *tgt_dev,
|
||||
struct ppa_addr p, int direction)
|
||||
{
|
||||
return test_and_set_bit(lunid, dev->lun_map);
|
||||
gen_trans_fn *f;
|
||||
struct ppa_addr ppa = p;
|
||||
|
||||
f = (direction == TRANS_TGT_TO_DEV) ? gen_map_to_dev : gen_map_to_tgt;
|
||||
f(tgt_dev, &ppa);
|
||||
|
||||
return ppa;
|
||||
}
|
||||
|
||||
static void gen_release_lun(struct nvm_dev *dev, int lunid)
|
||||
static void gen_part_to_tgt(struct nvm_dev *dev, sector_t *entries,
|
||||
int len)
|
||||
{
|
||||
WARN_ON(!test_and_clear_bit(lunid, dev->lun_map));
|
||||
}
|
||||
struct nvm_geo *geo = &dev->geo;
|
||||
struct gen_dev_map *dev_rmap = dev->rmap;
|
||||
u64 i;
|
||||
|
||||
static struct nvm_lun *gen_get_lun(struct nvm_dev *dev, int lunid)
|
||||
{
|
||||
struct gen_dev *gn = dev->mp;
|
||||
for (i = 0; i < len; i++) {
|
||||
struct gen_ch_map *ch_rmap;
|
||||
int *lun_roffs;
|
||||
struct ppa_addr gaddr;
|
||||
u64 pba = le64_to_cpu(entries[i]);
|
||||
int off;
|
||||
u64 diff;
|
||||
|
||||
if (unlikely(lunid >= dev->nr_luns))
|
||||
return NULL;
|
||||
if (!pba)
|
||||
continue;
|
||||
|
||||
return &gn->luns[lunid].vlun;
|
||||
}
|
||||
gaddr = linear_to_generic_addr(geo, pba);
|
||||
ch_rmap = &dev_rmap->chnls[gaddr.g.ch];
|
||||
lun_roffs = ch_rmap->lun_offs;
|
||||
|
||||
static void gen_lun_info_print(struct nvm_dev *dev)
|
||||
{
|
||||
struct gen_dev *gn = dev->mp;
|
||||
struct gen_lun *lun;
|
||||
unsigned int i;
|
||||
off = gaddr.g.ch * geo->luns_per_chnl + gaddr.g.lun;
|
||||
|
||||
diff = ((ch_rmap->ch_off * geo->luns_per_chnl) +
|
||||
(lun_roffs[gaddr.g.lun])) * geo->sec_per_lun;
|
||||
|
||||
gen_for_each_lun(gn, lun, i) {
|
||||
spin_lock(&lun->vlun.lock);
|
||||
|
||||
pr_info("%s: lun%8u\t%u\n", dev->name, i,
|
||||
lun->vlun.nr_free_blocks);
|
||||
|
||||
spin_unlock(&lun->vlun.lock);
|
||||
entries[i] -= cpu_to_le64(diff);
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -648,22 +631,14 @@ static struct nvmm_type gen = {
    .create_tgt = gen_create_tgt,
    .remove_tgt = gen_remove_tgt,

    .get_blk = gen_get_blk,
    .put_blk = gen_put_blk,

    .submit_io = gen_submit_io,
    .erase_blk = gen_erase_blk,

    .mark_blk = gen_mark_blk,

    .get_lun = gen_get_lun,
    .reserve_lun = gen_reserve_lun,
    .release_lun = gen_release_lun,
    .lun_info_print = gen_lun_info_print,

    .get_area = gen_get_area,
    .put_area = gen_put_area,

    .trans_ppa = gen_trans_ppa,
    .part_to_tgt = gen_part_to_tgt,
};

static int __init gen_module_init(void)
@@ -20,37 +20,41 @@

#include <linux/lightnvm.h>

struct gen_lun {
    struct nvm_lun vlun;

    int reserved_blocks;
    /* lun block lists */
    struct list_head used_list;     /* In-use blocks */
    struct list_head free_list;     /* Not used blocks i.e. released
                                     * and ready for use
                                     */
    struct list_head bb_list;       /* Bad blocks. Mutually exclusive with
                                     * free_list and used_list
                                     */
};

struct gen_dev {
    struct nvm_dev *dev;

    int nr_luns;
    struct gen_lun *luns;
    struct list_head area_list;

    struct mutex lock;
    struct list_head targets;
};

/* Map between virtual and physical channel and lun */
struct gen_ch_map {
    int ch_off;
    int nr_luns;
    int *lun_offs;
};

struct gen_dev_map {
    struct gen_ch_map *chnls;
    int nr_chnls;
};

struct gen_area {
    struct list_head list;
    sector_t begin;
    sector_t end;   /* end is excluded */
};

static inline void *ch_map_to_lun_offs(struct gen_ch_map *ch_map)
{
    return ch_map + 1;
}

typedef int (gen_trans_fn)(struct nvm_tgt_dev *, struct ppa_addr *);

#define gen_for_each_lun(bm, lun, i) \
        for ((i) = 0, lun = &(bm)->luns[0]; \
            (i) < (bm)->nr_luns; (i)++, lun = &(bm)->luns[(i)])
@@ -1,35 +0,0 @@
/*
 * Copyright (C) 2016 CNEX Labs. All rights reserved.
 * Initial release: Matias Bjorling <matias@cnexlabs.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version
 * 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; see the file COPYING. If not, write to
 * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139,
 * USA.
 *
 */

#ifndef LIGHTNVM_H
#define LIGHTNVM_H

#include <linux/lightnvm.h>

/* core -> sysfs.c */
int __must_check nvm_sysfs_register_dev(struct nvm_dev *);
void nvm_sysfs_unregister_dev(struct nvm_dev *);
int nvm_sysfs_register(void);
void nvm_sysfs_unregister(void);

/* sysfs > core */
void nvm_free(struct nvm_dev *);

#endif
File diff suppressed because it is too large
@@ -48,14 +48,15 @@ struct rrpc_inflight_rq {

struct rrpc_rq {
    struct rrpc_inflight_rq inflight_rq;
    struct rrpc_addr *addr;
    unsigned long flags;
};

struct rrpc_block {
    struct nvm_block *parent;
    int id;                         /* id inside of LUN */
    struct rrpc_lun *rlun;
    struct list_head prio;

    struct list_head prio;          /* LUN CG list */
    struct list_head list;          /* LUN free, used, bb list */

#define MAX_INVALID_PAGES_STORAGE 8
    /* Bitmap for invalid page intries */

@@ -65,21 +66,38 @@ struct rrpc_block {
    /* number of pages that are invalid, wrt host page size */
    unsigned int nr_invalid_pages;

    int state;

    spinlock_t lock;
    atomic_t data_cmnt_size;        /* data pages committed to stable storage */
};

struct rrpc_lun {
    struct rrpc *rrpc;
    struct nvm_lun *parent;

    int id;
    struct ppa_addr bppa;

    struct rrpc_block *cur, *gc_cur;
    struct rrpc_block *blocks;      /* Reference to block allocation */

    struct list_head prio_list;     /* Blocks that may be GC'ed */
    struct list_head wblk_list;     /* Queued blocks to be written to */

    /* lun block lists */
    struct list_head used_list;     /* In-use blocks */
    struct list_head free_list;     /* Not used blocks i.e. released
                                     * and ready for use
                                     */
    struct list_head bb_list;       /* Bad blocks. Mutually exclusive with
                                     * free_list and used_list
                                     */
    unsigned int nr_free_blocks;    /* Number of unused blocks */

    struct work_struct ws_gc;

    int reserved_blocks;

    spinlock_t lock;
};

@@ -87,19 +105,16 @@ struct rrpc {
    /* instance must be kept in top to resolve rrpc in unprep */
    struct nvm_tgt_instance instance;

    struct nvm_dev *dev;
    struct nvm_tgt_dev *dev;
    struct gendisk *disk;

    sector_t soffset; /* logical sector offset */
    u64 poffset; /* physical page offset */
    int lun_offset;

    int nr_luns;
    struct rrpc_lun *luns;

    /* calculated values */
    unsigned long long nr_sects;
    unsigned long total_blocks;

    /* Write strategy variables. Move these into each for structure for each
     * strategy

@@ -150,13 +165,37 @@ struct rrpc_rev_addr {
    u64 addr;
};

static inline struct rrpc_block *rrpc_get_rblk(struct rrpc_lun *rlun,
                        int blk_id)
static inline struct ppa_addr rrpc_linear_to_generic_addr(struct nvm_geo *geo,
                        struct ppa_addr r)
{
    struct rrpc *rrpc = rlun->rrpc;
    int lun_blk = blk_id % rrpc->dev->blks_per_lun;
    struct ppa_addr l;
    int secs, pgs;
    sector_t ppa = r.ppa;

    return &rlun->blocks[lun_blk];
    l.ppa = 0;

    div_u64_rem(ppa, geo->sec_per_pg, &secs);
    l.g.sec = secs;

    sector_div(ppa, geo->sec_per_pg);
    div_u64_rem(ppa, geo->pgs_per_blk, &pgs);
    l.g.pg = pgs;

    return l;
}

static inline struct ppa_addr rrpc_recov_addr(struct nvm_tgt_dev *dev, u64 pba)
{
    return linear_to_generic_addr(&dev->geo, pba);
}

static inline u64 rrpc_blk_to_ppa(struct rrpc *rrpc, struct rrpc_block *rblk)
{
    struct nvm_tgt_dev *dev = rrpc->dev;
    struct nvm_geo *geo = &dev->geo;
    struct rrpc_lun *rlun = rblk->rlun;

    return (rlun->id * geo->sec_per_lun) + (rblk->id * geo->sec_per_blk);
}

static inline sector_t rrpc_get_laddr(struct bio *bio)
@@ -62,7 +62,8 @@ static void nvm_cpu_to_sysblk(struct nvm_system_block *sb,

static int nvm_setup_sysblks(struct nvm_dev *dev, struct ppa_addr *sysblk_ppas)
{
    int nr_rows = min_t(int, MAX_SYSBLKS, dev->nr_chnls);
    struct nvm_geo *geo = &dev->geo;
    int nr_rows = min_t(int, MAX_SYSBLKS, geo->nr_chnls);
    int i;

    for (i = 0; i < nr_rows; i++)

@@ -71,7 +72,7 @@ static int nvm_setup_sysblks(struct nvm_dev *dev, struct ppa_addr *sysblk_ppas)
    /* if possible, place sysblk at first channel, middle channel and last
     * channel of the device. If not, create only one or two sys blocks
     */
    switch (dev->nr_chnls) {
    switch (geo->nr_chnls) {
    case 2:
        sysblk_ppas[1].g.ch = 1;
        /* fall-through */

@@ -80,8 +81,8 @@ static int nvm_setup_sysblks(struct nvm_dev *dev, struct ppa_addr *sysblk_ppas)
        break;
    default:
        sysblk_ppas[0].g.ch = 0;
        sysblk_ppas[1].g.ch = dev->nr_chnls / 2;
        sysblk_ppas[2].g.ch = dev->nr_chnls - 1;
        sysblk_ppas[1].g.ch = geo->nr_chnls / 2;
        sysblk_ppas[2].g.ch = geo->nr_chnls - 1;
        break;
    }

@@ -162,11 +163,12 @@ static int sysblk_get_host_blks(struct nvm_dev *dev, struct ppa_addr ppa,
static int nvm_get_all_sysblks(struct nvm_dev *dev, struct sysblk_scan *s,
                        struct ppa_addr *ppas, int get_free)
{
    struct nvm_geo *geo = &dev->geo;
    int i, nr_blks, ret = 0;
    u8 *blks;

    s->nr_ppas = 0;
    nr_blks = dev->blks_per_lun * dev->plane_mode;
    nr_blks = geo->blks_per_lun * geo->plane_mode;

    blks = kmalloc(nr_blks, GFP_KERNEL);
    if (!blks)

@@ -210,13 +212,14 @@ err_get:
static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
                        struct nvm_system_block *sblk)
{
    struct nvm_geo *geo = &dev->geo;
    struct nvm_system_block *cur;
    int pg, ret, found = 0;

    /* the full buffer for a flash page is allocated. Only the first of it
     * contains the system block information
     */
    cur = kmalloc(dev->pfpg_size, GFP_KERNEL);
    cur = kmalloc(geo->pfpg_size, GFP_KERNEL);
    if (!cur)
        return -ENOMEM;

@@ -225,7 +228,7 @@ static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
        ppa->g.pg = ppa_to_slc(dev, pg);

        ret = nvm_submit_ppa(dev, ppa, 1, NVM_OP_PREAD, NVM_IO_SLC_MODE,
                        cur, dev->pfpg_size);
                        cur, geo->pfpg_size);
        if (ret) {
            if (ret == NVM_RSP_ERR_EMPTYPAGE) {
                pr_debug("nvm: sysblk scan empty ppa (%u %u %u %u)\n",

@@ -267,34 +270,16 @@ static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
    return found;
}

static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type)
static int nvm_sysblk_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s,
                        int type)
{
    struct nvm_rq rqd;
    int ret;

    if (s->nr_ppas > dev->ops->max_phys_sect) {
        pr_err("nvm: unable to update all sysblocks atomically\n");
        return -EINVAL;
    }

    memset(&rqd, 0, sizeof(struct nvm_rq));

    nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas, 1);
    nvm_generic_to_addr_mode(dev, &rqd);

    ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type);
    nvm_free_rqd_ppalist(dev, &rqd);
    if (ret) {
        pr_err("nvm: sysblk failed bb mark\n");
        return -EINVAL;
    }

    return 0;
    return nvm_set_bb_tbl(dev, s->ppas, s->nr_ppas, type);
}

static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
                        struct sysblk_scan *s)
{
    struct nvm_geo *geo = &dev->geo;
    struct nvm_system_block nvmsb;
    void *buf;
    int i, sect, ret = 0;

@@ -302,12 +287,12 @@ static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,

    nvm_cpu_to_sysblk(&nvmsb, info);

    buf = kzalloc(dev->pfpg_size, GFP_KERNEL);
    buf = kzalloc(geo->pfpg_size, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;
    memcpy(buf, &nvmsb, sizeof(struct nvm_system_block));

    ppas = kcalloc(dev->sec_per_pg, sizeof(struct ppa_addr), GFP_KERNEL);
    ppas = kcalloc(geo->sec_per_pg, sizeof(struct ppa_addr), GFP_KERNEL);
    if (!ppas) {
        ret = -ENOMEM;
        goto err;

@@ -324,15 +309,15 @@ static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
                        ppas[0].g.pg);

        /* Expand to all sectors within a flash page */
        if (dev->sec_per_pg > 1) {
            for (sect = 1; sect < dev->sec_per_pg; sect++) {
        if (geo->sec_per_pg > 1) {
            for (sect = 1; sect < geo->sec_per_pg; sect++) {
                ppas[sect].ppa = ppas[0].ppa;
                ppas[sect].g.sec = sect;
            }
        }

        ret = nvm_submit_ppa(dev, ppas, dev->sec_per_pg, NVM_OP_PWRITE,
                        NVM_IO_SLC_MODE, buf, dev->pfpg_size);
        ret = nvm_submit_ppa(dev, ppas, geo->sec_per_pg, NVM_OP_PWRITE,
                        NVM_IO_SLC_MODE, buf, geo->pfpg_size);
        if (ret) {
            pr_err("nvm: sysblk failed program (%u %u %u)\n",
                        ppas[0].g.ch,

@@ -341,8 +326,8 @@ static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
            break;
        }

        ret = nvm_submit_ppa(dev, ppas, dev->sec_per_pg, NVM_OP_PREAD,
                        NVM_IO_SLC_MODE, buf, dev->pfpg_size);
        ret = nvm_submit_ppa(dev, ppas, geo->sec_per_pg, NVM_OP_PREAD,
                        NVM_IO_SLC_MODE, buf, geo->pfpg_size);
        if (ret) {
            pr_err("nvm: sysblk failed read (%u %u %u)\n",
                        ppas[0].g.ch,

@@ -379,7 +364,7 @@ static int nvm_prepare_new_sysblks(struct nvm_dev *dev, struct sysblk_scan *s)
        ppa = &s->ppas[scan_ppa_idx(i, nxt_blk)];
        ppa->g.pg = ppa_to_slc(dev, 0);

        ret = nvm_erase_ppa(dev, ppa, 1);
        ret = nvm_erase_ppa(dev, ppa, 1, 0);
        if (ret)
            return ret;

@@ -546,6 +531,7 @@ err_sysblk:

int nvm_init_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
{
    struct nvm_geo *geo = &dev->geo;
    struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
    struct sysblk_scan s;
    int ret;

@@ -560,7 +546,7 @@ int nvm_init_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
    if (!dev->ops->get_bb_tbl || !dev->ops->set_bb_tbl)
        return -EINVAL;

    if (!(dev->mccap & NVM_ID_CAP_SLC) || !dev->lps_per_blk) {
    if (!(geo->mccap & NVM_ID_CAP_SLC) || !dev->lps_per_blk) {
        pr_err("nvm: memory does not support SLC access\n");
        return -EINVAL;
    }

@@ -573,7 +559,7 @@ int nvm_init_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
    if (ret)
        goto err_mark;

    ret = nvm_set_bb_tbl(dev, &s, NVM_BLK_T_HOST);
    ret = nvm_sysblk_set_bb_tbl(dev, &s, NVM_BLK_T_HOST);
    if (ret)
        goto err_mark;

@@ -590,11 +576,11 @@ static int factory_nblks(int nblks)
    return (nblks + (BITS_PER_LONG - 1)) & ~(BITS_PER_LONG - 1);
}

static unsigned int factory_blk_offset(struct nvm_dev *dev, struct ppa_addr ppa)
static unsigned int factory_blk_offset(struct nvm_geo *geo, struct ppa_addr ppa)
{
    int nblks = factory_nblks(dev->blks_per_lun);
    int nblks = factory_nblks(geo->blks_per_lun);

    return ((ppa.g.ch * dev->luns_per_chnl * nblks) + (ppa.g.lun * nblks)) /
    return ((ppa.g.ch * geo->luns_per_chnl * nblks) + (ppa.g.lun * nblks)) /
                        BITS_PER_LONG;
}

@@ -608,7 +594,7 @@ static int nvm_factory_blks(struct nvm_dev *dev, struct ppa_addr ppa,
    if (nr_blks < 0)
        return nr_blks;

    lunoff = factory_blk_offset(dev, ppa);
    lunoff = factory_blk_offset(&dev->geo, ppa);

    /* non-set bits correspond to the block must be erased */
    for (i = 0; i < nr_blks; i++) {

@@ -637,19 +623,19 @@ static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
                        int max_ppas, unsigned long *blk_bitmap)
{
    struct nvm_geo *geo = &dev->geo;
    struct ppa_addr ppa;
    int ch, lun, blkid, idx, done = 0, ppa_cnt = 0;
    unsigned long *offset;

    while (!done) {
        done = 1;
        nvm_for_each_lun_ppa(dev, ppa, ch, lun) {
            idx = factory_blk_offset(dev, ppa);
        nvm_for_each_lun_ppa(geo, ppa, ch, lun) {
            idx = factory_blk_offset(geo, ppa);
            offset = &blk_bitmap[idx];

            blkid = find_first_zero_bit(offset,
                        dev->blks_per_lun);
            if (blkid >= dev->blks_per_lun)
            blkid = find_first_zero_bit(offset, geo->blks_per_lun);
            if (blkid >= geo->blks_per_lun)
                continue;
            set_bit(blkid, offset);

@@ -674,16 +660,17 @@ static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
static int nvm_fact_select_blks(struct nvm_dev *dev, unsigned long *blk_bitmap,
                        int flags)
{
    struct nvm_geo *geo = &dev->geo;
    struct ppa_addr ppa;
    int ch, lun, nr_blks, ret = 0;
    u8 *blks;

    nr_blks = dev->blks_per_lun * dev->plane_mode;
    nr_blks = geo->blks_per_lun * geo->plane_mode;
    blks = kmalloc(nr_blks, GFP_KERNEL);
    if (!blks)
        return -ENOMEM;

    nvm_for_each_lun_ppa(dev, ppa, ch, lun) {
    nvm_for_each_lun_ppa(geo, ppa, ch, lun) {
        ret = nvm_get_bb_tbl(dev, ppa, blks);
        if (ret)
            pr_err("nvm: failed bb tbl for ch%u lun%u\n",

@@ -701,14 +688,15 @@ static int nvm_fact_select_blks(struct nvm_dev *dev, unsigned long *blk_bitmap,

int nvm_dev_factory(struct nvm_dev *dev, int flags)
{
    struct nvm_geo *geo = &dev->geo;
    struct ppa_addr *ppas;
    int ppa_cnt, ret = -ENOMEM;
    int max_ppas = dev->ops->max_phys_sect / dev->nr_planes;
    int max_ppas = dev->ops->max_phys_sect / geo->nr_planes;
    struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
    struct sysblk_scan s;
    unsigned long *blk_bitmap;

    blk_bitmap = kzalloc(factory_nblks(dev->blks_per_lun) * dev->nr_luns,
    blk_bitmap = kzalloc(factory_nblks(geo->blks_per_lun) * geo->nr_luns,
                        GFP_KERNEL);
    if (!blk_bitmap)
        return ret;

@@ -725,7 +713,7 @@ int nvm_dev_factory(struct nvm_dev *dev, int flags)
    /* continue to erase until list of blks until empty */
    while ((ppa_cnt =
            nvm_fact_get_blks(dev, ppas, max_ppas, blk_bitmap)) > 0)
        nvm_erase_ppa(dev, ppas, ppa_cnt);
        nvm_erase_ppa(dev, ppas, ppa_cnt, 0);

    /* mark host reserved blocks free */
    if (flags & NVM_FACTORY_RESET_HOST_BLKS) {

@@ -733,7 +721,7 @@ int nvm_dev_factory(struct nvm_dev *dev, int flags)
        mutex_lock(&dev->mlock);
        ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
        if (!ret)
            ret = nvm_set_bb_tbl(dev, &s, NVM_BLK_T_FREE);
            ret = nvm_sysblk_set_bb_tbl(dev, &s, NVM_BLK_T_FREE);
        mutex_unlock(&dev->mlock);
    }
err_ppas:
@@ -1,198 +0,0 @@
#include <linux/kernel.h>
#include <linux/lightnvm.h>
#include <linux/miscdevice.h>
#include <linux/kobject.h>
#include <linux/blk-mq.h>

#include "lightnvm.h"

static ssize_t nvm_dev_attr_show(struct device *dev,
                 struct device_attribute *dattr, char *page)
{
    struct nvm_dev *ndev = container_of(dev, struct nvm_dev, dev);
    struct nvm_id *id = &ndev->identity;
    struct nvm_id_group *grp = &id->groups[0];
    struct attribute *attr = &dattr->attr;

    if (strcmp(attr->name, "version") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id);
    } else if (strcmp(attr->name, "vendor_opcode") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
    } else if (strcmp(attr->name, "capabilities") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", id->cap);
    } else if (strcmp(attr->name, "device_mode") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
    } else if (strcmp(attr->name, "media_manager") == 0) {
        if (!ndev->mt)
            return scnprintf(page, PAGE_SIZE, "%s\n", "none");
        return scnprintf(page, PAGE_SIZE, "%s\n", ndev->mt->name);
    } else if (strcmp(attr->name, "ppa_format") == 0) {
        return scnprintf(page, PAGE_SIZE,
            "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
            id->ppaf.ch_offset, id->ppaf.ch_len,
            id->ppaf.lun_offset, id->ppaf.lun_len,
            id->ppaf.pln_offset, id->ppaf.pln_len,
            id->ppaf.blk_offset, id->ppaf.blk_len,
            id->ppaf.pg_offset, id->ppaf.pg_len,
            id->ppaf.sect_offset, id->ppaf.sect_len);
    } else if (strcmp(attr->name, "media_type") == 0) {    /* u8 */
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->mtype);
    } else if (strcmp(attr->name, "flash_media_type") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->fmtype);
    } else if (strcmp(attr->name, "num_channels") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_ch);
    } else if (strcmp(attr->name, "num_luns") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_lun);
    } else if (strcmp(attr->name, "num_planes") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_pln);
    } else if (strcmp(attr->name, "num_blocks") == 0) {    /* u16 */
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_blk);
    } else if (strcmp(attr->name, "num_pages") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_pg);
    } else if (strcmp(attr->name, "page_size") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->fpg_sz);
    } else if (strcmp(attr->name, "hw_sector_size") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->csecs);
    } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->sos);
    } else if (strcmp(attr->name, "read_typ") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->trdt);
    } else if (strcmp(attr->name, "read_max") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->trdm);
    } else if (strcmp(attr->name, "prog_typ") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->tprt);
    } else if (strcmp(attr->name, "prog_max") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->tprm);
    } else if (strcmp(attr->name, "erase_typ") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->tbet);
    } else if (strcmp(attr->name, "erase_max") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n", grp->tbem);
    } else if (strcmp(attr->name, "multiplane_modes") == 0) {
        return scnprintf(page, PAGE_SIZE, "0x%08x\n", grp->mpos);
    } else if (strcmp(attr->name, "media_capabilities") == 0) {
        return scnprintf(page, PAGE_SIZE, "0x%08x\n", grp->mccap);
    } else if (strcmp(attr->name, "max_phys_secs") == 0) {
        return scnprintf(page, PAGE_SIZE, "%u\n",
                ndev->ops->max_phys_sect);
    } else {
        return scnprintf(page,
                 PAGE_SIZE,
                 "Unhandled attr(%s) in `nvm_dev_attr_show`\n",
                 attr->name);
    }
}

#define NVM_DEV_ATTR_RO(_name)                  \
    DEVICE_ATTR(_name, S_IRUGO, nvm_dev_attr_show, NULL)

static NVM_DEV_ATTR_RO(version);
static NVM_DEV_ATTR_RO(vendor_opcode);
static NVM_DEV_ATTR_RO(capabilities);
static NVM_DEV_ATTR_RO(device_mode);
static NVM_DEV_ATTR_RO(ppa_format);
static NVM_DEV_ATTR_RO(media_manager);

static NVM_DEV_ATTR_RO(media_type);
static NVM_DEV_ATTR_RO(flash_media_type);
static NVM_DEV_ATTR_RO(num_channels);
static NVM_DEV_ATTR_RO(num_luns);
static NVM_DEV_ATTR_RO(num_planes);
static NVM_DEV_ATTR_RO(num_blocks);
static NVM_DEV_ATTR_RO(num_pages);
static NVM_DEV_ATTR_RO(page_size);
static NVM_DEV_ATTR_RO(hw_sector_size);
static NVM_DEV_ATTR_RO(oob_sector_size);
static NVM_DEV_ATTR_RO(read_typ);
static NVM_DEV_ATTR_RO(read_max);
static NVM_DEV_ATTR_RO(prog_typ);
static NVM_DEV_ATTR_RO(prog_max);
static NVM_DEV_ATTR_RO(erase_typ);
static NVM_DEV_ATTR_RO(erase_max);
static NVM_DEV_ATTR_RO(multiplane_modes);
static NVM_DEV_ATTR_RO(media_capabilities);
static NVM_DEV_ATTR_RO(max_phys_secs);

#define NVM_DEV_ATTR(_name) (dev_attr_##_name##)

static struct attribute *nvm_dev_attrs[] = {
    &dev_attr_version.attr,
    &dev_attr_vendor_opcode.attr,
    &dev_attr_capabilities.attr,
    &dev_attr_device_mode.attr,
    &dev_attr_media_manager.attr,

    &dev_attr_ppa_format.attr,
    &dev_attr_media_type.attr,
    &dev_attr_flash_media_type.attr,
    &dev_attr_num_channels.attr,
    &dev_attr_num_luns.attr,
    &dev_attr_num_planes.attr,
    &dev_attr_num_blocks.attr,
    &dev_attr_num_pages.attr,
    &dev_attr_page_size.attr,
    &dev_attr_hw_sector_size.attr,
    &dev_attr_oob_sector_size.attr,
    &dev_attr_read_typ.attr,
    &dev_attr_read_max.attr,
    &dev_attr_prog_typ.attr,
    &dev_attr_prog_max.attr,
    &dev_attr_erase_typ.attr,
    &dev_attr_erase_max.attr,
    &dev_attr_multiplane_modes.attr,
    &dev_attr_media_capabilities.attr,
    &dev_attr_max_phys_secs.attr,
    NULL,
};

static struct attribute_group nvm_dev_attr_group = {
    .name = "lightnvm",
    .attrs = nvm_dev_attrs,
};

static const struct attribute_group *nvm_dev_attr_groups[] = {
    &nvm_dev_attr_group,
    NULL,
};

static void nvm_dev_release(struct device *device)
{
    struct nvm_dev *dev = container_of(device, struct nvm_dev, dev);
    struct request_queue *q = dev->q;

    pr_debug("nvm/sysfs: `nvm_dev_release`\n");

    blk_mq_unregister_dev(device, q);

    nvm_free(dev);
}

static struct device_type nvm_type = {
    .name = "lightnvm",
    .groups = nvm_dev_attr_groups,
    .release = nvm_dev_release,
};

int nvm_sysfs_register_dev(struct nvm_dev *dev)
{
    int ret;

    if (!dev->parent_dev)
        return 0;

    dev->dev.parent = dev->parent_dev;
    dev_set_name(&dev->dev, "%s", dev->name);
    dev->dev.type = &nvm_type;
    device_initialize(&dev->dev);
    ret = device_add(&dev->dev);

    if (!ret)
        blk_mq_register_dev(&dev->dev, dev->q);

    return ret;
}

void nvm_sysfs_unregister_dev(struct nvm_dev *dev)
{
    if (dev && dev->parent_dev)
        kobject_put(&dev->dev.kobj);
}
@@ -297,7 +297,7 @@ static void bch_btree_node_read(struct btree *b)
 	bio->bi_iter.bi_size = KEY_SIZE(&b->key) << 9;
 	bio->bi_end_io	= btree_node_read_endio;
 	bio->bi_private	= &cl;
-	bio_set_op_attrs(bio, REQ_OP_READ, REQ_META|READ_SYNC);
+	bio->bi_opf = REQ_OP_READ | REQ_META;
 
 	bch_bio_map(bio, b->keys.set[0].data);
 
@@ -393,7 +393,7 @@ static void do_btree_node_write(struct btree *b)
 	b->bio->bi_end_io	= btree_node_write_endio;
 	b->bio->bi_private	= cl;
 	b->bio->bi_iter.bi_size	= roundup(set_bytes(i), block_bytes(b->c));
-	bio_set_op_attrs(b->bio, REQ_OP_WRITE, REQ_META|WRITE_SYNC|REQ_FUA);
+	b->bio->bi_opf = REQ_OP_WRITE | REQ_META | REQ_FUA;
 	bch_bio_map(b->bio, i);
 
 	/*
@@ -52,7 +52,7 @@ void bch_btree_verify(struct btree *b)
 	bio->bi_bdev		= PTR_CACHE(b->c, &b->key, 0)->bdev;
 	bio->bi_iter.bi_sector	= PTR_OFFSET(&b->key, 0);
 	bio->bi_iter.bi_size	= KEY_SIZE(&v->key) << 9;
-	bio_set_op_attrs(bio, REQ_OP_READ, REQ_META|READ_SYNC);
+	bio->bi_opf = REQ_OP_READ | REQ_META;
 	bch_bio_map(bio, sorted);
 
 	submit_bio_wait(bio);
@@ -107,22 +107,26 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
 {
 	char name[BDEVNAME_SIZE];
 	struct bio *check;
-	struct bio_vec bv;
-	struct bvec_iter iter;
+	struct bio_vec bv, cbv;
+	struct bvec_iter iter, citer = { 0 };
 
 	check = bio_clone(bio, GFP_NOIO);
 	if (!check)
 		return;
-	bio_set_op_attrs(check, REQ_OP_READ, READ_SYNC);
+	check->bi_opf = REQ_OP_READ;
 
 	if (bio_alloc_pages(check, GFP_NOIO))
 		goto out_put;
 
 	submit_bio_wait(check);
 
+	citer.bi_size = UINT_MAX;
 	bio_for_each_segment(bv, bio, iter) {
 		void *p1 = kmap_atomic(bv.bv_page);
-		void *p2 = page_address(check->bi_io_vec[iter.bi_idx].bv_page);
+		void *p2;
+
+		cbv = bio_iter_iovec(check, citer);
+		p2 = page_address(cbv.bv_page);
 
 		cache_set_err_on(memcmp(p1 + bv.bv_offset,
 					p2 + bv.bv_offset,
@@ -133,6 +137,7 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
 			 (uint64_t) bio->bi_iter.bi_sector);
 
 		kunmap_atomic(p1);
+		bio_advance_iter(check, &citer, bv.bv_len);
 	}
 
 	bio_free_pages(check);
@@ -24,9 +24,7 @@ struct bio *bch_bbio_alloc(struct cache_set *c)
 	struct bbio *b = mempool_alloc(c->bio_meta, GFP_NOIO);
 	struct bio *bio = &b->bio;
 
-	bio_init(bio);
-	bio->bi_max_vecs	= bucket_pages(c);
-	bio->bi_io_vec		= bio->bi_inline_vecs;
+	bio_init(bio, bio->bi_inline_vecs, bucket_pages(c));
 
 	return bio;
 }
@@ -448,13 +448,11 @@ static void do_journal_discard(struct cache *ca)
 
 		atomic_set(&ja->discard_in_flight, DISCARD_IN_FLIGHT);
 
-		bio_init(bio);
+		bio_init(bio, bio->bi_inline_vecs, 1);
 		bio_set_op_attrs(bio, REQ_OP_DISCARD, 0);
 		bio->bi_iter.bi_sector	= bucket_to_sector(ca->set,
 						ca->sb.d[ja->discard_idx]);
 		bio->bi_bdev		= ca->bdev;
-		bio->bi_max_vecs	= 1;
-		bio->bi_io_vec		= bio->bi_inline_vecs;
 		bio->bi_iter.bi_size	= bucket_bytes(ca);
 		bio->bi_end_io		= journal_discard_endio;
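The bio_init() conversions running through these bcache hunks all follow one pattern: the old three-step open-coded setup (bio_init(), then assign bi_io_vec, then bi_max_vecs) collapses into a single three-argument bio_init() that takes the vector table and its capacity up front. A minimal userspace sketch of that refactor, using an illustrative stand-in struct rather than the kernel's real struct bio:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel types; not the real definitions. */
struct bio_vec {
	void		*bv_page;
	unsigned int	bv_len;
	unsigned int	bv_offset;
};

struct bio {
	struct bio_vec	*bi_io_vec;	/* vector table backing the bio */
	unsigned short	bi_max_vecs;	/* capacity of that table */
	unsigned short	bi_vcnt;	/* vectors currently in use */
};

/*
 * New-style bio_init(): the table and its size are parameters, replacing
 * the bio_init() call plus two field assignments at every call site.
 */
static void bio_init(struct bio *bio, struct bio_vec *table,
		     unsigned short max_vecs)
{
	bio->bi_io_vec = table;
	bio->bi_max_vecs = max_vecs;
	bio->bi_vcnt = 0;
}
```

Callers that previously passed no table (e.g. bios that are only cloned into) now call `bio_init(bio, NULL, 0)`, which is why several hunks above show exactly that form.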
@@ -77,15 +77,13 @@ static void moving_init(struct moving_io *io)
 {
 	struct bio *bio = &io->bio.bio;
 
-	bio_init(bio);
+	bio_init(bio, bio->bi_inline_vecs,
+		 DIV_ROUND_UP(KEY_SIZE(&io->w->key), PAGE_SECTORS));
 	bio_get(bio);
 	bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
 
 	bio->bi_iter.bi_size	= KEY_SIZE(&io->w->key) << 9;
-	bio->bi_max_vecs	= DIV_ROUND_UP(KEY_SIZE(&io->w->key),
-					       PAGE_SECTORS);
 	bio->bi_private		= &io->cl;
-	bio->bi_io_vec		= bio->bi_inline_vecs;
 	bch_bio_map(bio, NULL);
 }
@@ -404,8 +404,8 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
 
 	if (!congested &&
 	    mode == CACHE_MODE_WRITEBACK &&
-	    op_is_write(bio_op(bio)) &&
-	    (bio->bi_opf & REQ_SYNC))
+	    op_is_write(bio->bi_opf) &&
+	    op_is_sync(bio->bi_opf))
 		goto rescale;
 
 	spin_lock(&dc->io_lock);
@@ -623,7 +623,7 @@ static void do_bio_hook(struct search *s, struct bio *orig_bio)
 {
 	struct bio *bio = &s->bio.bio;
 
-	bio_init(bio);
+	bio_init(bio, NULL, 0);
 	__bio_clone_fast(bio, orig_bio);
 	bio->bi_end_io		= request_endio;
 	bio->bi_private		= &s->cl;
@@ -923,7 +923,7 @@ static void cached_dev_write(struct cached_dev *dc, struct search *s)
 		flush->bi_bdev	= bio->bi_bdev;
 		flush->bi_end_io = request_endio;
 		flush->bi_private = cl;
-		bio_set_op_attrs(flush, REQ_OP_WRITE, WRITE_FLUSH);
+		flush->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
 		closure_bio_submit(flush, cl);
 	}
@@ -381,7 +381,7 @@ static char *uuid_read(struct cache_set *c, struct jset *j, struct closure *cl)
 			return "bad uuid pointer";
 
 		bkey_copy(&c->uuid_bucket, k);
-		uuid_io(c, REQ_OP_READ, READ_SYNC, k, cl);
+		uuid_io(c, REQ_OP_READ, 0, k, cl);
 
 		if (j->version < BCACHE_JSET_VERSION_UUIDv1) {
 			struct uuid_entry_v0	*u0 = (void *) c->uuids;
@@ -600,7 +600,7 @@ static void prio_read(struct cache *ca, uint64_t bucket)
 			ca->prio_last_buckets[bucket_nr] = bucket;
 			bucket_nr++;
 
-			prio_io(ca, bucket, REQ_OP_READ, READ_SYNC);
+			prio_io(ca, bucket, REQ_OP_READ, 0);
 
 			if (p->csum != bch_crc64(&p->magic, bucket_bytes(ca) - 8))
 				pr_warn("bad csum reading priorities");
@@ -1152,9 +1152,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
 	dc->bdev = bdev;
 	dc->bdev->bd_holder = dc;
 
-	bio_init(&dc->sb_bio);
-	dc->sb_bio.bi_max_vecs	= 1;
-	dc->sb_bio.bi_io_vec	= dc->sb_bio.bi_inline_vecs;
+	bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
 	dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
 	get_page(sb_page);
 
@@ -1814,9 +1812,7 @@ static int cache_alloc(struct cache *ca)
 	__module_get(THIS_MODULE);
 	kobject_init(&ca->kobj, &bch_cache_ktype);
 
-	bio_init(&ca->journal.bio);
-	ca->journal.bio.bi_max_vecs = 8;
-	ca->journal.bio.bi_io_vec = ca->journal.bio.bi_inline_vecs;
+	bio_init(&ca->journal.bio, ca->journal.bio.bi_inline_vecs, 8);
 
 	free = roundup_pow_of_two(ca->sb.nbuckets) >> 10;
 
@@ -1852,9 +1848,7 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
 	ca->bdev = bdev;
 	ca->bdev->bd_holder = ca;
 
-	bio_init(&ca->sb_bio);
-	ca->sb_bio.bi_max_vecs	= 1;
-	ca->sb_bio.bi_io_vec	= ca->sb_bio.bi_inline_vecs;
+	bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
 	ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
 	get_page(sb_page);
@@ -106,14 +106,13 @@ static void dirty_init(struct keybuf_key *w)
 	struct dirty_io *io = w->private;
 	struct bio *bio = &io->bio;
 
-	bio_init(bio);
+	bio_init(bio, bio->bi_inline_vecs,
+		 DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS));
 	if (!io->dc->writeback_percent)
 		bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
 
 	bio->bi_iter.bi_size	= KEY_SIZE(&w->key) << 9;
-	bio->bi_max_vecs	= DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS);
 	bio->bi_private		= w;
-	bio->bi_io_vec		= bio->bi_inline_vecs;
 	bch_bio_map(bio, NULL);
 }
@@ -57,8 +57,7 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
 	if (would_skip)
 		return false;
 
-	return bio->bi_opf & REQ_SYNC ||
-		in_use <= CUTOFF_WRITEBACK;
+	return op_is_sync(bio->bi_opf) || in_use <= CUTOFF_WRITEBACK;
}
 
 static inline void bch_writeback_queue(struct cached_dev *dc)
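The open-coded `bi_opf & REQ_SYNC` tests above are replaced by the new op_is_sync() helper, which also counts reads as synchronous (a read request is always sync, while a write is sync only when REQ_SYNC is set). A rough userspace sketch of the helper semantics, with illustrative flag values rather than the kernel's real bit layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag layout; the kernel's actual values differ. */
#define REQ_OP_BITS	8
#define REQ_OP_MASK	((1u << REQ_OP_BITS) - 1)
#define REQ_OP_READ	0u
#define REQ_OP_WRITE	1u
#define REQ_SYNC	(1u << REQ_OP_BITS)

/* Odd opcodes carry data out to the device (writes, discards, ...). */
static bool op_is_write(unsigned int op)
{
	return (op & 1);
}

/* Reads are always synchronous; writes only when REQ_SYNC is set. */
static bool op_is_sync(unsigned int op)
{
	return (op & REQ_OP_MASK) == REQ_OP_READ || (op & REQ_SYNC);
}
```

That read-is-sync behavior is why a bare flag test could not simply be kept: the helper centralizes a rule that every caller previously had to remember.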
@@ -611,9 +611,7 @@ static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block,
 	char *ptr;
 	int len;
 
-	bio_init(&b->bio);
-	b->bio.bi_io_vec = b->bio_vec;
-	b->bio.bi_max_vecs = DM_BUFIO_INLINE_VECS;
+	bio_init(&b->bio, b->bio_vec, DM_BUFIO_INLINE_VECS);
 	b->bio.bi_iter.bi_sector = block << b->c->sectors_per_block_bits;
 	b->bio.bi_bdev = b->c->bdev;
 	b->bio.bi_end_io = inline_endio;
@@ -1316,7 +1314,7 @@ int dm_bufio_issue_flush(struct dm_bufio_client *c)
 {
 	struct dm_io_request io_req = {
 		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = WRITE_FLUSH,
+		.bi_op_flags = REQ_PREFLUSH,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = c->dm_io,
@@ -1135,7 +1135,7 @@ static void clone_init(struct dm_crypt_io *io, struct bio *clone)
 	clone->bi_private = io;
 	clone->bi_end_io  = crypt_endio;
 	clone->bi_bdev    = cc->dev->bdev;
-	bio_set_op_attrs(clone, bio_op(io->base_bio), bio_flags(io->base_bio));
+	clone->bi_opf	  = io->base_bio->bi_opf;
 }
 
 static int kcryptd_io_read(struct dm_crypt_io *io, gfp_t gfp)
@@ -308,7 +308,7 @@ static int flush_header(struct log_c *lc)
 	};
 
 	lc->io_req.bi_op = REQ_OP_WRITE;
-	lc->io_req.bi_op_flags = WRITE_FLUSH;
+	lc->io_req.bi_op_flags = REQ_PREFLUSH;
 
 	return dm_io(&lc->io_req, 1, &null_location, NULL);
 }
@@ -260,7 +260,7 @@ static int mirror_flush(struct dm_target *ti)
 	struct mirror *m;
 	struct dm_io_request io_req = {
 		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = WRITE_FLUSH,
+		.bi_op_flags = REQ_PREFLUSH,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = ms->io_client,
@@ -656,7 +656,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
 	struct mirror *m;
 	struct dm_io_request io_req = {
 		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = bio->bi_opf & WRITE_FLUSH_FUA,
+		.bi_op_flags = bio->bi_opf & (REQ_FUA | REQ_PREFLUSH),
 		.mem.type = DM_IO_BIO,
 		.mem.ptr.bio = bio,
 		.notify.fn = write_callback,
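Throughout this series the WRITE_FLUSH / WRITE_FUA / WRITE_FLUSH_FUA shorthands are spelled out as explicit REQ_PREFLUSH and REQ_FUA combinations, so a caller like do_write() above masks an incoming bio's flags directly. A sketch of that masking pattern, with illustrative bit values (the kernel's real positions differ):

```c
#include <assert.h>

/* Illustrative bit positions; not the kernel's actual values. */
#define REQ_PREFLUSH	(1u << 0)	/* flush the device cache first */
#define REQ_FUA		(1u << 1)	/* force unit access: write through */
#define REQ_SYNC	(1u << 2)	/* unrelated flag, must not leak */

/* Propagate only the cache-integrity flags from an incoming bio. */
static unsigned int flush_flags(unsigned int bi_opf)
{
	return bi_opf & (REQ_FUA | REQ_PREFLUSH);
}
```

Writing the mask out at each call site makes it obvious which guarantees (preflush, FUA, or both) a given path forwards, where the old composite macros hid that choice.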
@@ -75,12 +75,6 @@ static void dm_old_start_queue(struct request_queue *q)
 
 static void dm_mq_start_queue(struct request_queue *q)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
 	blk_mq_start_stopped_hw_queues(q, true);
 	blk_mq_kick_requeue_list(q);
 }
@@ -105,20 +99,10 @@ static void dm_old_stop_queue(struct request_queue *q)
 
 static void dm_mq_stop_queue(struct request_queue *q)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-	if (blk_queue_stopped(q)) {
-		spin_unlock_irqrestore(q->queue_lock, flags);
+	if (blk_mq_queue_stopped(q))
 		return;
-	}
-
-	queue_flag_set(QUEUE_FLAG_STOPPED, q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	/* Avoid that requeuing could restart the queue. */
 	blk_mq_cancel_requeue_work(q);
-	blk_mq_stop_hw_queues(q);
+	blk_mq_quiesce_queue(q);
 }
 
 void dm_stop_queue(struct request_queue *q)
@@ -313,7 +297,7 @@ static void dm_unprep_request(struct request *rq)
 
 	if (!rq->q->mq_ops) {
 		rq->special = NULL;
-		rq->cmd_flags &= ~REQ_DONTPREP;
+		rq->rq_flags &= ~RQF_DONTPREP;
 	}
 
 	if (clone)
@@ -338,12 +322,7 @@ static void dm_old_requeue_request(struct request *rq)
 
 static void __dm_mq_kick_requeue_list(struct request_queue *q, unsigned long msecs)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-	if (!blk_queue_stopped(q))
-		blk_mq_delay_kick_requeue_list(q, msecs);
-	spin_unlock_irqrestore(q->queue_lock, flags);
+	blk_mq_delay_kick_requeue_list(q, msecs);
 }
 
 void dm_mq_kick_requeue_list(struct mapped_device *md)
@@ -354,7 +333,7 @@ EXPORT_SYMBOL(dm_mq_kick_requeue_list);
 
 static void dm_mq_delay_requeue_request(struct request *rq, unsigned long msecs)
 {
-	blk_mq_requeue_request(rq);
+	blk_mq_requeue_request(rq, false);
 	__dm_mq_kick_requeue_list(rq->q, msecs);
 }
 
@@ -431,7 +410,7 @@ static void dm_softirq_done(struct request *rq)
 		return;
 	}
 
-	if (rq->cmd_flags & REQ_FAILED)
+	if (rq->rq_flags & RQF_FAILED)
 		mapped = false;
 
 	dm_done(clone, tio->error, mapped);
@@ -460,7 +439,7 @@ static void dm_complete_request(struct request *rq, int error)
  */
 static void dm_kill_unmapped_request(struct request *rq, int error)
 {
-	rq->cmd_flags |= REQ_FAILED;
+	rq->rq_flags |= RQF_FAILED;
 	dm_complete_request(rq, error);
 }
 
@@ -476,7 +455,7 @@ static void end_clone_request(struct request *clone, int error)
 	 * For just cleaning up the information of the queue in which
 	 * the clone was dispatched.
 	 * The clone is *NOT* freed actually here because it is alloced
-	 * from dm own mempool (REQ_ALLOCED isn't set).
+	 * from dm own mempool (RQF_ALLOCED isn't set).
 	 */
 	__blk_put_request(clone->q, clone);
 }
@@ -497,7 +476,7 @@ static void dm_dispatch_clone_request(struct request *clone, struct request *rq)
 	int r;
 
 	if (blk_queue_io_stat(clone->q))
-		clone->cmd_flags |= REQ_IO_STAT;
+		clone->rq_flags |= RQF_IO_STAT;
 
 	clone->start_time = jiffies;
 	r = blk_insert_cloned_request(clone->q, clone);
@@ -633,7 +612,7 @@ static int dm_old_prep_fn(struct request_queue *q, struct request *rq)
 		return BLKPREP_DEFER;
 
 	rq->special = tio;
-	rq->cmd_flags |= REQ_DONTPREP;
+	rq->rq_flags |= RQF_DONTPREP;
 
 	return BLKPREP_OK;
 }
@@ -904,17 +883,6 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		dm_put_live_table(md, srcu_idx);
 	}
 
-	/*
-	 * On suspend dm_stop_queue() handles stopping the blk-mq
-	 * request_queue BUT: even though the hw_queues are marked
-	 * BLK_MQ_S_STOPPED at that point there is still a race that
-	 * is allowing block/blk-mq.c to call ->queue_rq against a
-	 * hctx that it really shouldn't. The following check guards
-	 * against this rarity (albeit _not_ race-free).
-	 */
-	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
-		return BLK_MQ_RQ_QUEUE_BUSY;
-
 	if (ti->type->busy && ti->type->busy(ti))
 		return BLK_MQ_RQ_QUEUE_BUSY;
@@ -741,7 +741,7 @@ static void persistent_commit_exception(struct dm_exception_store *store,
 	/*
 	 * Commit exceptions to disk.
 	 */
-	if (ps->valid && area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA))
+	if (ps->valid && area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA))
 		ps->valid = 0;
 
 	/*
@@ -818,7 +818,7 @@ static int persistent_commit_merge(struct dm_exception_store *store,
 	for (i = 0; i < nr_merged; i++)
 		clear_exception(ps, ps->current_committed - 1 - i);
 
-	r = area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA);
+	r = area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA);
 	if (r < 0)
 		return r;
@@ -1525,9 +1525,9 @@ static struct mapped_device *alloc_dev(int minor)
 	if (!md->bdev)
 		goto bad;
 
-	bio_init(&md->flush_bio);
+	bio_init(&md->flush_bio, NULL, 0);
 	md->flush_bio.bi_bdev = md->bdev;
-	bio_set_op_attrs(&md->flush_bio, REQ_OP_WRITE, WRITE_FLUSH);
+	md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
 	dm_stats_init(&md->stats);
@@ -394,7 +394,7 @@ static void submit_flushes(struct work_struct *ws)
 			bi->bi_end_io = md_end_flush;
 			bi->bi_private = rdev;
 			bi->bi_bdev = rdev->bdev;
-			bio_set_op_attrs(bi, REQ_OP_WRITE, WRITE_FLUSH);
+			bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 			atomic_inc(&mddev->flush_pending);
 			submit_bio(bi);
 			rcu_read_lock();
@@ -743,7 +743,7 @@ void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
 	bio_add_page(bio, page, size, 0);
 	bio->bi_private = rdev;
 	bio->bi_end_io = super_written;
-	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH_FUA);
+	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;
 
 	atomic_inc(&mddev->pending_writes);
 	submit_bio(bio);
@@ -130,7 +130,7 @@ static void multipath_make_request(struct mddev *mddev, struct bio * bio)
 	}
 	multipath = conf->multipaths + mp_bh->path;
 
-	bio_init(&mp_bh->bio);
+	bio_init(&mp_bh->bio, NULL, 0);
 	__bio_clone_fast(&mp_bh->bio, bio);
 
 	mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset;
@@ -685,7 +685,7 @@ void r5l_flush_stripe_to_raid(struct r5l_log *log)
 	bio_reset(&log->flush_bio);
 	log->flush_bio.bi_bdev = log->rdev->bdev;
 	log->flush_bio.bi_end_io = r5l_log_flush_endio;
-	bio_set_op_attrs(&log->flush_bio, REQ_OP_WRITE, WRITE_FLUSH);
+	log->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 	submit_bio(&log->flush_bio);
 }
 
@@ -1053,7 +1053,7 @@ static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos,
 	mb->checksum = cpu_to_le32(crc);
 
 	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE,
-			  WRITE_FUA, false)) {
+			  REQ_FUA, false)) {
 		__free_page(page);
 		return -EIO;
 	}
@@ -1205,7 +1205,7 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 	INIT_LIST_HEAD(&log->io_end_ios);
 	INIT_LIST_HEAD(&log->flushing_ios);
 	INIT_LIST_HEAD(&log->finished_ios);
-	bio_init(&log->flush_bio);
+	bio_init(&log->flush_bio, NULL, 0);
 
 	log->io_kc = KMEM_CACHE(r5l_io_unit, 0);
 	if (!log->io_kc)
@@ -913,7 +913,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 		if (test_and_clear_bit(R5_Wantwrite, &sh->dev[i].flags)) {
 			op = REQ_OP_WRITE;
 			if (test_and_clear_bit(R5_WantFUA, &sh->dev[i].flags))
-				op_flags = WRITE_FUA;
+				op_flags = REQ_FUA;
 			if (test_bit(R5_Discard, &sh->dev[i].flags))
 				op = REQ_OP_DISCARD;
 		} else if (test_and_clear_bit(R5_Wantread, &sh->dev[i].flags))
@@ -2004,13 +2004,8 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
 		for (i = 0; i < disks; i++) {
 			struct r5dev *dev = &sh->dev[i];
 
-			bio_init(&dev->req);
-			dev->req.bi_io_vec = &dev->vec;
-			dev->req.bi_max_vecs = 1;
-
-			bio_init(&dev->rreq);
-			dev->rreq.bi_io_vec = &dev->rvec;
-			dev->rreq.bi_max_vecs = 1;
+			bio_init(&dev->req, &dev->vec, 1);
+			bio_init(&dev->rreq, &dev->rvec, 1);
 		}
 	}
 	return sh;
@@ -2006,7 +2006,7 @@ static int msb_prepare_req(struct request_queue *q, struct request *req)
 		blk_dump_rq_flags(req, "MS unsupported request");
 		return BLKPREP_KILL;
 	}
-	req->cmd_flags |= REQ_DONTPREP;
+	req->rq_flags |= RQF_DONTPREP;
 	return BLKPREP_OK;
 }
@@ -834,7 +834,7 @@ static int mspro_block_prepare_req(struct request_queue *q, struct request *req)
 		return BLKPREP_KILL;
 	}
 
-	req->cmd_flags |= REQ_DONTPREP;
+	req->rq_flags |= RQF_DONTPREP;
 
 	return BLKPREP_OK;
 }
@@ -1733,7 +1733,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 
  cmd_abort:
 	if (mmc_card_removed(card))
-		req->cmd_flags |= REQ_QUIET;
+		req->rq_flags |= RQF_QUIET;
 	while (ret)
 		ret = blk_end_request(req, -EIO,
 				blk_rq_cur_bytes(req));
@@ -1741,7 +1741,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
  start_new_req:
 	if (rqc) {
 		if (mmc_card_removed(card)) {
-			rqc->cmd_flags |= REQ_QUIET;
+			rqc->rq_flags |= RQF_QUIET;
 			blk_end_request_all(rqc, -EIO);
 		} else {
 			mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
@@ -44,7 +44,7 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq)))
 		return BLKPREP_KILL;
 
-	req->cmd_flags |= REQ_DONTPREP;
+	req->rq_flags |= RQF_DONTPREP;
 
 	return BLKPREP_OK;
 }
@@ -133,7 +133,7 @@ static void mmc_request_fn(struct request_queue *q)
 
 	if (!mq) {
 		while ((req = blk_fetch_request(q)) != NULL) {
-			req->cmd_flags |= REQ_QUIET;
+			req->rq_flags |= RQF_QUIET;
 			__blk_end_request_all(req, -EIO);
 		}
 		return;
@@ -43,3 +43,20 @@ config NVME_RDMA
 	  from https://github.com/linux-nvme/nvme-cli.
 
 	  If unsure, say N.
+
+config NVME_FC
+	tristate "NVM Express over Fabrics FC host driver"
+	depends on BLOCK
+	depends on HAS_DMA
+	select NVME_CORE
+	select NVME_FABRICS
+	select SG_POOL
+	help
+	  This provides support for the NVMe over Fabrics protocol using
+	  the FC transport.  This allows you to use remote block devices
+	  exported using the NVMe protocol set.
+
+	  To configure a NVMe over Fabrics controller use the nvme-cli tool
+	  from https://github.com/linux-nvme/nvme-cli.
+
+	  If unsure, say N.
@@ -2,6 +2,7 @@ obj-$(CONFIG_NVME_CORE) += nvme-core.o
 obj-$(CONFIG_BLK_DEV_NVME) += nvme.o
 obj-$(CONFIG_NVME_FABRICS) += nvme-fabrics.o
 obj-$(CONFIG_NVME_RDMA) += nvme-rdma.o
+obj-$(CONFIG_NVME_FC) += nvme-fc.o
 
 nvme-core-y := core.o
 nvme-core-$(CONFIG_BLK_DEV_NVME_SCSI) += scsi.o
@@ -12,3 +13,5 @@ nvme-y += pci.o
 nvme-fabrics-y += fabrics.o
 
 nvme-rdma-y += rdma.o
+
+nvme-fc-y += fc.o
@@ -201,13 +201,7 @@ fail:
 
 void nvme_requeue_req(struct request *req)
 {
-	unsigned long flags;
-
-	blk_mq_requeue_request(req);
-	spin_lock_irqsave(req->q->queue_lock, flags);
-	if (!blk_queue_stopped(req->q))
-		blk_mq_kick_requeue_list(req->q);
-	spin_unlock_irqrestore(req->q->queue_lock, flags);
+	blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
 }
 EXPORT_SYMBOL_GPL(nvme_requeue_req);
 
@@ -227,8 +221,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
 
 	req->cmd_type = REQ_TYPE_DRV_PRIV;
 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
-	req->cmd = (unsigned char *)cmd;
-	req->cmd_len = sizeof(struct nvme_command);
+	nvme_req(req)->cmd = cmd;
 
 	return req;
 }
@@ -246,8 +239,6 @@ static inline int nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmnd)
 {
 	struct nvme_dsm_range *range;
-	struct page *page;
-	int offset;
 	unsigned int nr_bytes = blk_rq_bytes(req);
 
 	range = kmalloc(sizeof(*range), GFP_ATOMIC);
@@ -264,19 +255,27 @@ static inline int nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 	cmnd->dsm.nr = 0;
 	cmnd->dsm.attributes = cpu_to_le32(NVME_DSMGMT_AD);
 
-	req->completion_data = range;
-	page = virt_to_page(range);
-	offset = offset_in_page(range);
-	blk_add_request_payload(req, page, offset, sizeof(*range));
+	req->special_vec.bv_page = virt_to_page(range);
+	req->special_vec.bv_offset = offset_in_page(range);
+	req->special_vec.bv_len = sizeof(*range);
+	req->rq_flags |= RQF_SPECIAL_PAYLOAD;
 
 	/*
 	 * we set __data_len back to the size of the area to be discarded
 	 * on disk. This allows us to report completion on the full amount
 	 * of blocks described by the request.
 	 */
 	req->__data_len = nr_bytes;
-	return 0;
+	return BLK_MQ_RQ_QUEUE_OK;
 }
 
+static inline void nvme_setup_write_zeroes(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	struct nvme_write_zeroes_cmd *write_zeroes = &cmnd->write_zeroes;
+
+	memset(cmnd, 0, sizeof(*cmnd));
+	write_zeroes->opcode = nvme_cmd_write_zeroes;
+	write_zeroes->nsid = cpu_to_le32(ns->ns_id);
+	write_zeroes->slba =
+		cpu_to_le64(nvme_block_nr(ns, blk_rq_pos(req)));
+	write_zeroes->length =
+		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+	write_zeroes->control = 0;
+}
 
 static inline void nvme_setup_rw(struct nvme_ns *ns, struct request *req,
@@ -295,7 +294,6 @@ static inline void nvme_setup_rw(struct nvme_ns *ns, struct request *req,
 
 	memset(cmnd, 0, sizeof(*cmnd));
 	cmnd->rw.opcode = (rq_data_dir(req) ? nvme_cmd_write : nvme_cmd_read);
-	cmnd->rw.command_id = req->tag;
 	cmnd->rw.nsid = cpu_to_le32(ns->ns_id);
 	cmnd->rw.slba = cpu_to_le64(nvme_block_nr(ns, blk_rq_pos(req)));
 	cmnd->rw.length = cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
@@ -324,17 +322,21 @@ int nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd)
 {
-	int ret = 0;
+	int ret = BLK_MQ_RQ_QUEUE_OK;
 
 	if (req->cmd_type == REQ_TYPE_DRV_PRIV)
-		memcpy(cmd, req->cmd, sizeof(*cmd));
+		memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd));
 	else if (req_op(req) == REQ_OP_FLUSH)
 		nvme_setup_flush(ns, cmd);
 	else if (req_op(req) == REQ_OP_DISCARD)
 		ret = nvme_setup_discard(ns, req, cmd);
+	else if (req_op(req) == REQ_OP_WRITE_ZEROES)
+		nvme_setup_write_zeroes(ns, req, cmd);
 	else
 		nvme_setup_rw(ns, req, cmd);
 
+	cmd->common.command_id = req->tag;
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(nvme_setup_cmd);
@@ -344,7 +346,7 @@ EXPORT_SYMBOL_GPL(nvme_setup_cmd);
  * if the result is positive, it's an NVM Express status code
  */
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		struct nvme_completion *cqe, void *buffer, unsigned bufflen,
+		union nvme_result *result, void *buffer, unsigned bufflen,
 		unsigned timeout, int qid, int at_head, int flags)
 {
 	struct request *req;
@@ -355,7 +357,6 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		return PTR_ERR(req);
 
 	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
-	req->special = cqe;
 
 	if (buffer && bufflen) {
 		ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
@@ -364,6 +365,8 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	}
 
 	blk_execute_rq(req->q, NULL, req, at_head);
+	if (result)
+		*result = nvme_req(req)->result;
 	ret = req->errors;
  out:
 	blk_mq_free_request(req);
@@ -385,7 +388,6 @@ int __nvme_submit_user_cmd(struct request_queue *q, struct nvme_command *cmd,
 		u32 *result, unsigned timeout)
 {
 	bool write = nvme_is_write(cmd);
-	struct nvme_completion cqe;
 	struct nvme_ns *ns = q->queuedata;
 	struct gendisk *disk = ns ? ns->disk : NULL;
 	struct request *req;
@@ -398,7 +400,6 @@ int __nvme_submit_user_cmd(struct request_queue *q, struct nvme_command *cmd,
 		return PTR_ERR(req);
 
 	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
-	req->special = &cqe;
 
 	if (ubuffer && bufflen) {
 		ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
@@ -453,7 +454,7 @@ int __nvme_submit_user_cmd(struct request_queue *q, struct nvme_command *cmd,
 	blk_execute_rq(req->q, disk, req, 0);
 	ret = req->errors;
 	if (result)
-		*result = le32_to_cpu(cqe.result);
+		*result = le32_to_cpu(nvme_req(req)->result.u32);
 	if (meta && !ret && !write) {
 		if (copy_to_user(meta_buffer, meta, meta_len))
 			ret = -EFAULT;
@@ -602,7 +603,7 @@ int nvme_get_features(struct nvme_ctrl *dev, unsigned fid, unsigned nsid,
 		      void *buffer, size_t buflen, u32 *result)
 {
 	struct nvme_command c;
-	struct nvme_completion cqe;
+	union nvme_result res;
 	int ret;
 
 	memset(&c, 0, sizeof(c));
@@ -610,10 +611,10 @@ int nvme_get_features(struct nvme_ctrl *dev, unsigned fid, unsigned nsid,
 	c.features.nsid = cpu_to_le32(nsid);
 	c.features.fid = cpu_to_le32(fid);
 
-	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &cqe, buffer, buflen, 0,
+	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res, buffer, buflen, 0,
 			NVME_QID_ANY, 0, 0);
 	if (ret >= 0 && result)
-		*result = le32_to_cpu(cqe.result);
+		*result = le32_to_cpu(res.u32);
 	return ret;
 }
 
@@ -621,7 +622,7 @@ int nvme_set_features(struct nvme_ctrl *dev, unsigned fid, unsigned dword11,
 		      void *buffer, size_t buflen, u32 *result)
 {
 	struct nvme_command c;
-	struct nvme_completion cqe;
+	union nvme_result res;
 	int ret;
 
 	memset(&c, 0, sizeof(c));
@@ -629,10 +630,10 @@ int nvme_set_features(struct nvme_ctrl *dev, unsigned fid, unsigned dword11,
 	c.features.fid = cpu_to_le32(fid);
 	c.features.dword11 = cpu_to_le32(dword11);
 
-	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &cqe,
+	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
 			buffer, buflen, 0, NVME_QID_ANY, 0, 0);
 	if (ret >= 0 && result)
-		*result = le32_to_cpu(cqe.result);
+		*result = le32_to_cpu(res.u32);
 	return ret;
 }
 
@@ -951,6 +952,10 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
 
 	if (ns->ctrl->oncs & NVME_CTRL_ONCS_DSM)
 		nvme_config_discard(ns);
+	if (ns->ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES)
+		blk_queue_max_write_zeroes_sectors(ns->queue,
+				((u32)(USHRT_MAX + 1) * bs) >> 9);
 
 	blk_mq_unfreeze_queue(disk->queue);
 }
 
@@ -1683,28 +1688,25 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	if (nvme_revalidate_ns(ns, &id))
 		goto out_free_queue;
 
-	if (nvme_nvm_ns_supported(ns, id)) {
-		if (nvme_nvm_register(ns, disk_name, node,
-					&nvme_ns_attr_group)) {
-			dev_warn(ctrl->dev, "%s: LightNVM init failure\n",
-					__func__);
-			goto out_free_id;
-		}
-	} else {
-		disk = alloc_disk_node(0, node);
-		if (!disk)
-			goto out_free_id;
-
-		disk->fops = &nvme_fops;
-		disk->private_data = ns;
-		disk->queue = ns->queue;
-		disk->flags = GENHD_FL_EXT_DEVT;
-		memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
-		ns->disk = disk;
-
-		__nvme_revalidate_disk(disk, id);
+	if (nvme_nvm_ns_supported(ns, id) &&
+	    nvme_nvm_register(ns, disk_name, node)) {
+		dev_warn(ctrl->dev, "%s: LightNVM init failure\n", __func__);
+		goto out_free_id;
 	}
 
+	disk = alloc_disk_node(0, node);
+	if (!disk)
+		goto out_free_id;
+
+	disk->fops = &nvme_fops;
+	disk->private_data = ns;
+	disk->queue = ns->queue;
+	disk->flags = GENHD_FL_EXT_DEVT;
+	memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
+	ns->disk = disk;
+
+	__nvme_revalidate_disk(disk, id);
+
 	mutex_lock(&ctrl->namespaces_mutex);
 	list_add_tail(&ns->list, &ctrl->namespaces);
 	mutex_unlock(&ctrl->namespaces_mutex);
@@ -1713,14 +1715,14 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 	kfree(id);
 
-	if (ns->ndev)
-		return;
-
 	device_add_disk(ctrl->device, ns->disk);
 	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
 					&nvme_ns_attr_group))
 		pr_warn("%s: failed to create sysfs group for identification\n",
 			ns->disk->disk_name);
+	if (ns->ndev && nvme_nvm_register_sysfs(ns))
+		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
+			ns->disk->disk_name);
 	return;
  out_free_id:
 	kfree(id);
@@ -1742,6 +1744,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 			blk_integrity_unregister(ns->disk);
 		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
 					&nvme_ns_attr_group);
+		if (ns->ndev)
+			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_mq_abort_requeue_list(ns->queue);
 		blk_cleanup_queue(ns->queue);
@@ -1905,18 +1909,25 @@ static void nvme_async_event_work(struct work_struct *work)
 	spin_unlock_irq(&ctrl->lock);
 }
 
-void nvme_complete_async_event(struct nvme_ctrl *ctrl,
-		struct nvme_completion *cqe)
+void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
+		union nvme_result *res)
 {
-	u16 status = le16_to_cpu(cqe->status) >> 1;
-	u32 result = le32_to_cpu(cqe->result);
+	u32 result = le32_to_cpu(res->u32);
|
||||
bool done = true;
|
||||
|
||||
if (status == NVME_SC_SUCCESS || status == NVME_SC_ABORT_REQ) {
|
||||
switch (le16_to_cpu(status) >> 1) {
|
||||
case NVME_SC_SUCCESS:
|
||||
done = false;
|
||||
/*FALLTHRU*/
|
||||
case NVME_SC_ABORT_REQ:
|
||||
++ctrl->event_limit;
|
||||
schedule_work(&ctrl->async_event_work);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
if (status != NVME_SC_SUCCESS)
|
||||
if (done)
|
||||
return;
|
||||
|
||||
switch (result & 0xff07) {
|
||||
|
@ -2078,14 +2089,8 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl)
|
|||
struct nvme_ns *ns;
|
||||
|
||||
mutex_lock(&ctrl->namespaces_mutex);
|
||||
list_for_each_entry(ns, &ctrl->namespaces, list) {
|
||||
spin_lock_irq(ns->queue->queue_lock);
|
||||
queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
|
||||
spin_unlock_irq(ns->queue->queue_lock);
|
||||
|
||||
blk_mq_cancel_requeue_work(ns->queue);
|
||||
blk_mq_stop_hw_queues(ns->queue);
|
||||
}
|
||||
list_for_each_entry(ns, &ctrl->namespaces, list)
|
||||
blk_mq_quiesce_queue(ns->queue);
|
||||
mutex_unlock(&ctrl->namespaces_mutex);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(nvme_stop_queues);
|
||||
|
@ -2096,7 +2101,6 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
|
|||
|
||||
mutex_lock(&ctrl->namespaces_mutex);
|
||||
list_for_each_entry(ns, &ctrl->namespaces, list) {
|
||||
queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, ns->queue);
|
||||
blk_mq_start_stopped_hw_queues(ns->queue, true);
|
||||
blk_mq_kick_requeue_list(ns->queue);
|
||||
}
|
||||
|
|
|
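The core hunks above replace the `struct nvme_completion` that used to be threaded through `__nvme_submit_sync_cmd()` with a small `union nvme_result`, which callers read as `res.u16`, `res.u32`, or `res.u64` depending on the command. A minimal userspace sketch of the idea (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the "union nvme_result" idea: the 8-byte result field of an
 * NVMe completion entry, viewed at whatever width a given command's
 * result is defined to be. Purely illustrative userspace code. */
union result_view {
	uint16_t u16;	/* e.g. controller ID from a fabrics Connect */
	uint32_t u32;	/* e.g. a Get/Set Features result dword */
	uint64_t u64;	/* e.g. a 64-bit fabrics Property Get value */
};

/* A caller that only cares about the 32-bit view. */
static uint32_t result_as_u32(const union result_view *r)
{
	return r->u32;
}
```

The design point mirrored here: callers now copy only the eight result bytes instead of the whole completion entry, and pick the width the command defines, rather than having per-width fields (`result`, `result16`, `result64`) scattered through the old struct.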
@@ -161,7 +161,7 @@ EXPORT_SYMBOL_GPL(nvmf_get_subsysnqn);
 int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
 {
	struct nvme_command cmd;
-	struct nvme_completion cqe;
+	union nvme_result res;
	int ret;
 
	memset(&cmd, 0, sizeof(cmd));

@@ -169,11 +169,11 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
	cmd.prop_get.fctype = nvme_fabrics_type_property_get;
	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &cqe, NULL, 0, 0,
+	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res, NULL, 0, 0,
			NVME_QID_ANY, 0, 0);
 
	if (ret >= 0)
-		*val = le64_to_cpu(cqe.result64);
+		*val = le64_to_cpu(res.u64);
	if (unlikely(ret != 0))
		dev_err(ctrl->device,
			"Property Get error: %d, offset %#x\n",

@@ -207,7 +207,7 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read32);
 int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 {
	struct nvme_command cmd;
-	struct nvme_completion cqe;
+	union nvme_result res;
	int ret;
 
	memset(&cmd, 0, sizeof(cmd));

@@ -216,11 +216,11 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
	cmd.prop_get.attrib = 1;
	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &cqe, NULL, 0, 0,
+	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res, NULL, 0, 0,
			NVME_QID_ANY, 0, 0);
 
	if (ret >= 0)
-		*val = le64_to_cpu(cqe.result64);
+		*val = le64_to_cpu(res.u64);
	if (unlikely(ret != 0))
		dev_err(ctrl->device,
			"Property Get error: %d, offset %#x\n",

@@ -368,7 +368,7 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl,
 int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 {
	struct nvme_command cmd;
-	struct nvme_completion cqe;
+	union nvme_result res;
	struct nvmf_connect_data *data;
	int ret;
 

@@ -400,16 +400,16 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &cqe,
+	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res,
			data, sizeof(*data), 0, NVME_QID_ANY, 1,
			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (ret) {
-		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(cqe.result),
+		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
				&cmd, data);
		goto out_free_data;
	}
 
-	ctrl->cntlid = le16_to_cpu(cqe.result16);
+	ctrl->cntlid = le16_to_cpu(res.u16);
 
 out_free_data:
	kfree(data);

@@ -441,7 +441,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 {
	struct nvme_command cmd;
	struct nvmf_connect_data *data;
-	struct nvme_completion cqe;
+	union nvme_result res;
	int ret;
 
	memset(&cmd, 0, sizeof(cmd));

@@ -459,11 +459,11 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &cqe,
+	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
			data, sizeof(*data), 0, qid, 1,
			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (ret) {
-		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(cqe.result),
+		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
				&cmd, data);
	}
	kfree(data);

@@ -576,7 +576,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
	nqnlen = strlen(opts->subsysnqn);
	if (nqnlen >= NVMF_NQN_SIZE) {
		pr_err("%s needs to be < %d bytes\n",
-			opts->subsysnqn, NVMF_NQN_SIZE);
+		       opts->subsysnqn, NVMF_NQN_SIZE);
		ret = -EINVAL;
		goto out;
	}

@@ -666,10 +666,12 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
			if (nqnlen >= NVMF_NQN_SIZE) {
				pr_err("%s needs to be < %d bytes\n",
					p, NVMF_NQN_SIZE);
+				kfree(p);
				ret = -EINVAL;
				goto out;
			}
			opts->host = nvmf_host_add(p);
+			kfree(p);
			if (!opts->host) {
				ret = -ENOMEM;
				goto out;

@@ -825,8 +827,7 @@ nvmf_create_ctrl(struct device *dev, const char *buf, size_t count)
 out_unlock:
	mutex_unlock(&nvmf_transports_mutex);
 out_free_opts:
-	nvmf_host_put(opts->host);
-	kfree(opts);
+	nvmf_free_options(opts);
	return ERR_PTR(ret);
 }

[File diff suppressed because it is too large]
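The last fabrics hunk routes the error path through a single `nvmf_free_options()` instead of hand-freeing individual fields, and the `-666,10` hunk plugs a leak of the parsed host NQN string. The general pattern, a single free helper that owns teardown of every option field, can be sketched in userspace like this (all names illustrative, loosely after `nvmf_ctrl_options`):

```c
#include <assert.h>
#include <stdlib.h>

/* Hand-freeing fields at each error exit drifts out of sync as fields
 * are added; one helper that tears down everything cannot. */
struct host_sketch {
	int refcount;
};

struct opts_sketch {
	struct host_sketch *host;	/* may still be NULL on early errors */
	char *subsysnqn;		/* heap-allocated option string */
};

static void free_options_sketch(struct opts_sketch *opts)
{
	if (opts->host)
		opts->host->refcount--;	/* stands in for nvmf_host_put() */
	free(opts->subsysnqn);		/* free(NULL) is a no-op */
	free(opts);
}
```

Every exit path, success or failure, calls the one helper; new fields only need their cleanup added in one place.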
@@ -146,14 +146,6 @@ struct nvme_nvm_command {
	};
 };
 
-struct nvme_nvm_completion {
-	__le64	result;		/* Used by LightNVM to return ppa completions */
-	__le16	sq_head;	/* how much of this queue may be reclaimed */
-	__le16	sq_id;		/* submission queue that generated this entry */
-	__u16	command_id;	/* of the command which completed */
-	__le16	status;		/* did the command fail, and if so, why? */
-};
-
 #define NVME_NVM_LP_MLC_PAIRS 886
 struct nvme_nvm_lp_mlc {
	__le16			num_pairs;

@@ -360,6 +352,7 @@ static int nvme_nvm_get_l2p_tbl(struct nvm_dev *nvmdev, u64 slba, u32 nlb,
 
	while (nlb) {
		u32 cmd_nlb = min(nlb_pr_rq, nlb);
+		u64 elba = slba + cmd_nlb;
 
		c.l2p.slba = cpu_to_le64(cmd_slba);
		c.l2p.nlb = cpu_to_le32(cmd_nlb);

@@ -373,6 +366,14 @@ static int nvme_nvm_get_l2p_tbl(struct nvm_dev *nvmdev, u64 slba, u32 nlb,
			goto out;
		}
 
+		if (unlikely(elba > nvmdev->total_secs)) {
+			pr_err("nvm: L2P data from device is out of bounds!\n");
+			return -EINVAL;
+		}
+
+		/* Transform physical address to target address space */
+		nvmdev->mt->part_to_tgt(nvmdev, entries, cmd_nlb);
+
		if (update_l2p(cmd_slba, cmd_nlb, entries, priv)) {
			ret = -EINTR;
			goto out;

@@ -391,11 +392,12 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
								u8 *blks)
 {
	struct request_queue *q = nvmdev->q;
+	struct nvm_geo *geo = &nvmdev->geo;
	struct nvme_ns *ns = q->queuedata;
	struct nvme_ctrl *ctrl = ns->ctrl;
	struct nvme_nvm_command c = {};
	struct nvme_nvm_bb_tbl *bb_tbl;
-	int nr_blks = nvmdev->blks_per_lun * nvmdev->plane_mode;
+	int nr_blks = geo->blks_per_lun * geo->plane_mode;
	int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks;
	int ret = 0;
 

@@ -436,7 +438,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
		goto out;
	}
 
-	memcpy(blks, bb_tbl->blk, nvmdev->blks_per_lun * nvmdev->plane_mode);
+	memcpy(blks, bb_tbl->blk, geo->blks_per_lun * geo->plane_mode);
 out:
	kfree(bb_tbl);
	return ret;

@@ -481,14 +483,11 @@ static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd,
 static void nvme_nvm_end_io(struct request *rq, int error)
 {
	struct nvm_rq *rqd = rq->end_io_data;
-	struct nvme_nvm_completion *cqe = rq->special;
-
-	if (cqe)
-		rqd->ppa_status = le64_to_cpu(cqe->result);
 
+	rqd->ppa_status = nvme_req(rq)->result.u64;
	nvm_end_io(rqd, error);
 
-	kfree(rq->cmd);
+	kfree(nvme_req(rq)->cmd);
	blk_mq_free_request(rq);
 }
 

@@ -500,20 +499,18 @@ static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
	struct bio *bio = rqd->bio;
	struct nvme_nvm_command *cmd;
 
-	rq = blk_mq_alloc_request(q, bio_data_dir(bio), 0);
-	if (IS_ERR(rq))
+	cmd = kzalloc(sizeof(struct nvme_nvm_command), GFP_KERNEL);
+	if (!cmd)
		return -ENOMEM;
 
-	cmd = kzalloc(sizeof(struct nvme_nvm_command) +
-				sizeof(struct nvme_nvm_completion), GFP_KERNEL);
-	if (!cmd) {
-		blk_mq_free_request(rq);
+	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
+	if (IS_ERR(rq)) {
+		kfree(cmd);
		return -ENOMEM;
	}
+	rq->cmd_flags &= ~REQ_FAILFAST_DRIVER;
 
-	rq->cmd_type = REQ_TYPE_DRV_PRIV;
	rq->ioprio = bio_prio(bio);
-
	if (bio_has_data(bio))
		rq->nr_phys_segments = bio_phys_segments(q, bio);
 

@@ -522,10 +519,6 @@ static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 
	nvme_nvm_rqtocmd(rq, rqd, ns, cmd);
 
-	rq->cmd = (unsigned char *)cmd;
-	rq->cmd_len = sizeof(struct nvme_nvm_command);
-	rq->special = cmd + 1;
-
	rq->end_io_data = rqd;
 
	blk_execute_rq_nowait(q, NULL, rq, 0, nvme_nvm_end_io);

@@ -543,6 +536,7 @@ static int nvme_nvm_erase_block(struct nvm_dev *dev, struct nvm_rq *rqd)
	c.erase.nsid = cpu_to_le32(ns->ns_id);
	c.erase.spba = cpu_to_le64(rqd->ppa_addr.ppa);
	c.erase.length = cpu_to_le16(rqd->nr_ppas - 1);
+	c.erase.control = cpu_to_le16(rqd->flags);
 
	return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
 }

@@ -592,12 +586,10 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
	.max_phys_sect		= 64,
 };
 
-int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node,
-		      const struct attribute_group *attrs)
+int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
 {
	struct request_queue *q = ns->queue;
	struct nvm_dev *dev;
-	int ret;
 
	dev = nvm_alloc_dev(node);
	if (!dev)

@@ -606,18 +598,10 @@ int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node,
	dev->q = q;
	memcpy(dev->name, disk_name, DISK_NAME_LEN);
	dev->ops = &nvme_nvm_dev_ops;
-	dev->parent_dev = ns->ctrl->device;
	dev->private_data = ns;
	ns->ndev = dev;
 
-	ret = nvm_register(dev);
-
-	ns->lba_shift = ilog2(dev->sec_size);
-
-	if (sysfs_create_group(&dev->dev.kobj, attrs))
-		pr_warn("%s: failed to create sysfs group for identification\n",
-							disk_name);
-	return ret;
+	return nvm_register(dev);
 }
 
 void nvme_nvm_unregister(struct nvme_ns *ns)

@@ -625,6 +609,167 @@ void nvme_nvm_unregister(struct nvme_ns *ns)
	nvm_unregister(ns->ndev);
 }
 
+static ssize_t nvm_dev_attr_show(struct device *dev,
+				 struct device_attribute *dattr, char *page)
+{
+	struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+	struct nvm_dev *ndev = ns->ndev;
+	struct nvm_id *id;
+	struct nvm_id_group *grp;
+	struct attribute *attr;
+
+	if (!ndev)
+		return 0;
+
+	id = &ndev->identity;
+	grp = &id->groups[0];
+	attr = &dattr->attr;
+
+	if (strcmp(attr->name, "version") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id);
+	} else if (strcmp(attr->name, "vendor_opcode") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
+	} else if (strcmp(attr->name, "capabilities") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", id->cap);
+	} else if (strcmp(attr->name, "device_mode") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
+	} else if (strcmp(attr->name, "media_manager") == 0) {
+		if (!ndev->mt)
+			return scnprintf(page, PAGE_SIZE, "%s\n", "none");
+		return scnprintf(page, PAGE_SIZE, "%s\n", ndev->mt->name);
+	} else if (strcmp(attr->name, "ppa_format") == 0) {
+		return scnprintf(page, PAGE_SIZE,
+			"0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
+			id->ppaf.ch_offset, id->ppaf.ch_len,
+			id->ppaf.lun_offset, id->ppaf.lun_len,
+			id->ppaf.pln_offset, id->ppaf.pln_len,
+			id->ppaf.blk_offset, id->ppaf.blk_len,
+			id->ppaf.pg_offset, id->ppaf.pg_len,
+			id->ppaf.sect_offset, id->ppaf.sect_len);
+	} else if (strcmp(attr->name, "media_type") == 0) {	/* u8 */
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->mtype);
+	} else if (strcmp(attr->name, "flash_media_type") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->fmtype);
+	} else if (strcmp(attr->name, "num_channels") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_ch);
+	} else if (strcmp(attr->name, "num_luns") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_lun);
+	} else if (strcmp(attr->name, "num_planes") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_pln);
+	} else if (strcmp(attr->name, "num_blocks") == 0) {	/* u16 */
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_blk);
+	} else if (strcmp(attr->name, "num_pages") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->num_pg);
+	} else if (strcmp(attr->name, "page_size") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->fpg_sz);
+	} else if (strcmp(attr->name, "hw_sector_size") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->csecs);
+	} else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->sos);
+	} else if (strcmp(attr->name, "read_typ") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->trdt);
+	} else if (strcmp(attr->name, "read_max") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->trdm);
+	} else if (strcmp(attr->name, "prog_typ") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->tprt);
+	} else if (strcmp(attr->name, "prog_max") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->tprm);
+	} else if (strcmp(attr->name, "erase_typ") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->tbet);
+	} else if (strcmp(attr->name, "erase_max") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n", grp->tbem);
+	} else if (strcmp(attr->name, "multiplane_modes") == 0) {
+		return scnprintf(page, PAGE_SIZE, "0x%08x\n", grp->mpos);
+	} else if (strcmp(attr->name, "media_capabilities") == 0) {
+		return scnprintf(page, PAGE_SIZE, "0x%08x\n", grp->mccap);
+	} else if (strcmp(attr->name, "max_phys_secs") == 0) {
+		return scnprintf(page, PAGE_SIZE, "%u\n",
+				ndev->ops->max_phys_sect);
+	} else {
+		return scnprintf(page,
+				 PAGE_SIZE,
+				 "Unhandled attr(%s) in `nvm_dev_attr_show`\n",
+				 attr->name);
+	}
+}
+
+#define NVM_DEV_ATTR_RO(_name)					\
+	DEVICE_ATTR(_name, S_IRUGO, nvm_dev_attr_show, NULL)
+
+static NVM_DEV_ATTR_RO(version);
+static NVM_DEV_ATTR_RO(vendor_opcode);
+static NVM_DEV_ATTR_RO(capabilities);
+static NVM_DEV_ATTR_RO(device_mode);
+static NVM_DEV_ATTR_RO(ppa_format);
+static NVM_DEV_ATTR_RO(media_manager);
+
+static NVM_DEV_ATTR_RO(media_type);
+static NVM_DEV_ATTR_RO(flash_media_type);
+static NVM_DEV_ATTR_RO(num_channels);
+static NVM_DEV_ATTR_RO(num_luns);
+static NVM_DEV_ATTR_RO(num_planes);
+static NVM_DEV_ATTR_RO(num_blocks);
+static NVM_DEV_ATTR_RO(num_pages);
+static NVM_DEV_ATTR_RO(page_size);
+static NVM_DEV_ATTR_RO(hw_sector_size);
+static NVM_DEV_ATTR_RO(oob_sector_size);
+static NVM_DEV_ATTR_RO(read_typ);
+static NVM_DEV_ATTR_RO(read_max);
+static NVM_DEV_ATTR_RO(prog_typ);
+static NVM_DEV_ATTR_RO(prog_max);
+static NVM_DEV_ATTR_RO(erase_typ);
+static NVM_DEV_ATTR_RO(erase_max);
+static NVM_DEV_ATTR_RO(multiplane_modes);
+static NVM_DEV_ATTR_RO(media_capabilities);
+static NVM_DEV_ATTR_RO(max_phys_secs);
+
+static struct attribute *nvm_dev_attrs[] = {
+	&dev_attr_version.attr,
+	&dev_attr_vendor_opcode.attr,
+	&dev_attr_capabilities.attr,
+	&dev_attr_device_mode.attr,
+	&dev_attr_media_manager.attr,
+
+	&dev_attr_ppa_format.attr,
+	&dev_attr_media_type.attr,
+	&dev_attr_flash_media_type.attr,
+	&dev_attr_num_channels.attr,
+	&dev_attr_num_luns.attr,
+	&dev_attr_num_planes.attr,
+	&dev_attr_num_blocks.attr,
+	&dev_attr_num_pages.attr,
+	&dev_attr_page_size.attr,
+	&dev_attr_hw_sector_size.attr,
+	&dev_attr_oob_sector_size.attr,
+	&dev_attr_read_typ.attr,
+	&dev_attr_read_max.attr,
+	&dev_attr_prog_typ.attr,
+	&dev_attr_prog_max.attr,
+	&dev_attr_erase_typ.attr,
+	&dev_attr_erase_max.attr,
+	&dev_attr_multiplane_modes.attr,
+	&dev_attr_media_capabilities.attr,
+	&dev_attr_max_phys_secs.attr,
+	NULL,
+};
+
+static const struct attribute_group nvm_dev_attr_group = {
+	.name		= "lightnvm",
+	.attrs		= nvm_dev_attrs,
+};
+
+int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+{
+	return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
+					&nvm_dev_attr_group);
+}
+
+void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
+{
+	sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
+					&nvm_dev_attr_group);
+}
+
 /* move to shared place when used in multiple places. */
 #define PCI_VENDOR_ID_CNEX 0x1d1d
 #define PCI_DEVICE_ID_CNEX_WL 0x2807
@@ -79,6 +79,20 @@ enum nvme_quirks {
	NVME_QUIRK_DELAY_BEFORE_CHK_RDY		= (1 << 3),
 };
 
+/*
+ * Common request structure for NVMe passthrough. All drivers must have
+ * this structure as the first member of their request-private data.
+ */
+struct nvme_request {
+	struct nvme_command	*cmd;
+	union nvme_result	result;
+};
+
+static inline struct nvme_request *nvme_req(struct request *req)
+{
+	return blk_mq_rq_to_pdu(req);
+}
+
 /* The below value is the specific amount of delay needed before checking
  * readiness in case of the PCI_DEVICE(0x1c58, 0x0003), which needs the
  * NVME_QUIRK_DELAY_BEFORE_CHK_RDY quirk enabled. The value (in ms) was

@@ -222,8 +236,10 @@ static inline unsigned nvme_map_len(struct request *rq)
 
 static inline void nvme_cleanup_cmd(struct request *req)
 {
-	if (req_op(req) == REQ_OP_DISCARD)
-		kfree(req->completion_data);
+	if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
+		kfree(page_address(req->special_vec.bv_page) +
+		      req->special_vec.bv_offset);
+	}
 }
 
 static inline int nvme_error_status(u16 status)

@@ -261,8 +277,8 @@ void nvme_queue_scan(struct nvme_ctrl *ctrl);
 void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
 
 #define NVME_NR_AERS	1
-void nvme_complete_async_event(struct nvme_ctrl *ctrl,
-		struct nvme_completion *cqe);
+void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
+		union nvme_result *res);
 void nvme_queue_async_events(struct nvme_ctrl *ctrl);
 
 void nvme_stop_queues(struct nvme_ctrl *ctrl);

@@ -278,7 +294,7 @@ int nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
		void *buf, unsigned bufflen);
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		struct nvme_completion *cqe, void *buffer, unsigned bufflen,
+		union nvme_result *result, void *buffer, unsigned bufflen,
		unsigned timeout, int qid, int at_head, int flags);
 int nvme_submit_user_cmd(struct request_queue *q, struct nvme_command *cmd,
		void __user *ubuffer, unsigned bufflen, u32 *result,

@@ -307,36 +323,33 @@ int nvme_sg_get_version_num(int __user *ip);
 
 #ifdef CONFIG_NVM
 int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id);
-int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node,
-		      const struct attribute_group *attrs);
+int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
 void nvme_nvm_unregister(struct nvme_ns *ns);
-
-static inline struct nvme_ns *nvme_get_ns_from_dev(struct device *dev)
-{
-	if (dev->type->devnode)
-		return dev_to_disk(dev)->private_data;
-
-	return (container_of(dev, struct nvm_dev, dev))->private_data;
-}
+int nvme_nvm_register_sysfs(struct nvme_ns *ns);
+void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
 #else
 static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
-				    int node,
-				    const struct attribute_group *attrs)
+				    int node)
 {
	return 0;
 }
 
 static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-
+static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+{
+	return 0;
+}
+static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
 static inline int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id)
 {
	return 0;
 }
+#endif /* CONFIG_NVM */
 
 static inline struct nvme_ns *nvme_get_ns_from_dev(struct device *dev)
 {
	return dev_to_disk(dev)->private_data;
 }
-#endif /* CONFIG_NVM */
 
 int __init nvme_core_init(void);
 void nvme_core_exit(void);
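The `struct nvme_request` hunk above establishes the convention the rest of the series relies on: every transport's per-request PDU starts with the common struct, so shared code can reach it through `blk_mq_rq_to_pdu()` alone. A userspace sketch of that first-member convention (the blk-mq PDU is modeled as a plain embedded struct; all names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The shared header every driver PDU must begin with,
 * standing in for struct nvme_request. */
struct nvme_request_sketch {
	uint64_t result;	/* stands in for union nvme_result */
};

/* A driver-private PDU, like struct nvme_iod in pci.c or
 * struct nvme_rdma_request in rdma.c: common part first. */
struct driver_pdu_sketch {
	struct nvme_request_sketch req;	/* MUST be the first member */
	int nents;			/* driver-specific state */
};

/* Common code casts the PDU pointer to the shared header; this is only
 * valid because the header is guaranteed to sit at offset 0. */
static struct nvme_request_sketch *nvme_req_sketch(struct driver_pdu_sketch *pdu)
{
	return (struct nvme_request_sketch *)pdu;
}
```

This is why the diff can delete the old `rq->special` CQE copies: completion paths store into `nvme_req(rq)->result` without knowing which transport owns the request.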
@@ -141,6 +141,7 @@ struct nvme_queue {
  * allocated to store the PRP list.
  */
 struct nvme_iod {
+	struct nvme_request req;
	struct nvme_queue *nvmeq;
	int aborted;
	int npages;		/* In the PRP list. 0 means small pool in use */

@@ -302,14 +303,14 @@ static void __nvme_submit_cmd(struct nvme_queue *nvmeq,
 static __le64 **iod_list(struct request *req)
 {
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	return (__le64 **)(iod->sg + req->nr_phys_segments);
+	return (__le64 **)(iod->sg + blk_rq_nr_phys_segments(req));
 }
 
 static int nvme_init_iod(struct request *rq, unsigned size,
		struct nvme_dev *dev)
 {
	struct nvme_iod *iod = blk_mq_rq_to_pdu(rq);
-	int nseg = rq->nr_phys_segments;
+	int nseg = blk_rq_nr_phys_segments(rq);
 
	if (nseg > NVME_INT_PAGES || size > NVME_INT_BYTES(dev)) {
		iod->sg = kmalloc(nvme_iod_alloc_size(dev, size, nseg), GFP_ATOMIC);

@@ -324,11 +325,11 @@ static int nvme_init_iod(struct request *rq, unsigned size,
	iod->nents = 0;
	iod->length = size;
 
-	if (!(rq->cmd_flags & REQ_DONTPREP)) {
+	if (!(rq->rq_flags & RQF_DONTPREP)) {
		rq->retries = 0;
-		rq->cmd_flags |= REQ_DONTPREP;
+		rq->rq_flags |= RQF_DONTPREP;
	}
-	return 0;
+	return BLK_MQ_RQ_QUEUE_OK;
 }
 
 static void nvme_free_iod(struct nvme_dev *dev, struct request *req)

@@ -339,8 +340,6 @@ static void nvme_free_iod(struct nvme_dev *dev, struct request *req)
	__le64 **list = iod_list(req);
	dma_addr_t prp_dma = iod->first_dma;
 
-	nvme_cleanup_cmd(req);
-
	if (iod->npages == 0)
		dma_pool_free(dev->prp_small_pool, list[0], prp_dma);
	for (i = 0; i < iod->npages; i++) {

@@ -510,7 +509,7 @@ static int nvme_map_data(struct nvme_dev *dev, struct request *req,
			DMA_TO_DEVICE : DMA_FROM_DEVICE;
	int ret = BLK_MQ_RQ_QUEUE_ERROR;
 
-	sg_init_table(iod->sg, req->nr_phys_segments);
+	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
	iod->nents = blk_rq_map_sg(q, req, iod->sg);
	if (!iod->nents)
		goto out;

@@ -566,6 +565,7 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
		}
	}
 
+	nvme_cleanup_cmd(req);
	nvme_free_iod(dev, req);
 }
 

@@ -596,22 +596,21 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
		}
	}
 
-	map_len = nvme_map_len(req);
-	ret = nvme_init_iod(req, map_len, dev);
-	if (ret)
+	ret = nvme_setup_cmd(ns, req, &cmnd);
+	if (ret != BLK_MQ_RQ_QUEUE_OK)
		return ret;
 
-	ret = nvme_setup_cmd(ns, req, &cmnd);
-	if (ret)
-		goto out;
+	map_len = nvme_map_len(req);
+	ret = nvme_init_iod(req, map_len, dev);
+	if (ret != BLK_MQ_RQ_QUEUE_OK)
+		goto out_free_cmd;
 
-	if (req->nr_phys_segments)
+	if (blk_rq_nr_phys_segments(req))
		ret = nvme_map_data(dev, req, map_len, &cmnd);
-
-	if (ret)
-		goto out;
+	if (ret != BLK_MQ_RQ_QUEUE_OK)
+		goto out_cleanup_iod;
 
	cmnd.common.command_id = req->tag;
	blk_mq_start_request(req);
 
	spin_lock_irq(&nvmeq->q_lock);

@@ -621,14 +620,16 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
		else
			ret = BLK_MQ_RQ_QUEUE_ERROR;
		spin_unlock_irq(&nvmeq->q_lock);
-		goto out;
+		goto out_cleanup_iod;
	}
	__nvme_submit_cmd(nvmeq, &cmnd);
	nvme_process_cq(nvmeq);
	spin_unlock_irq(&nvmeq->q_lock);
	return BLK_MQ_RQ_QUEUE_OK;
-out:
+out_cleanup_iod:
	nvme_free_iod(dev, req);
+out_free_cmd:
+	nvme_cleanup_cmd(req);
	return ret;
 }
 

@@ -703,13 +704,13 @@ static void __nvme_process_cq(struct nvme_queue *nvmeq, unsigned int *tag)
		 */
		if (unlikely(nvmeq->qid == 0 &&
				cqe.command_id >= NVME_AQ_BLKMQ_DEPTH)) {
-			nvme_complete_async_event(&nvmeq->dev->ctrl, &cqe);
+			nvme_complete_async_event(&nvmeq->dev->ctrl,
+					cqe.status, &cqe.result);
			continue;
		}
 
		req = blk_mq_tag_to_rq(*nvmeq->tags, cqe.command_id);
-		if (req->cmd_type == REQ_TYPE_DRV_PRIV && req->special)
-			memcpy(req->special, &cqe, sizeof(cqe));
+		nvme_req(req)->result = cqe.result;
		blk_mq_complete_request(req, le16_to_cpu(cqe.status) >> 1);
	}
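The `nvme_queue_rq()` hunks above rework the error unwind into distinct labels (`out_free_cmd`, `out_cleanup_iod`) that release resources in reverse order of setup, with fallthrough from the later label into the earlier one. A minimal sketch of that goto-unwind pattern, with counters standing in for the real cleanup calls (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Counters stand in for the real cleanup routines so the unwind
 * behavior is observable. */
static int cleanup_cmd_calls;	/* stands in for nvme_cleanup_cmd() */
static int free_iod_calls;	/* stands in for nvme_free_iod() */

static int queue_rq_sketch(bool fail_init_iod, bool fail_map_data)
{
	int ret = 0;

	/* step 1: nvme_setup_cmd() - assume it succeeds here */

	if (fail_init_iod) {		/* step 2: nvme_init_iod() fails */
		ret = -1;
		goto out_free_cmd;	/* undo step 1 only */
	}

	if (fail_map_data) {		/* step 3: nvme_map_data() fails */
		ret = -2;
		goto out_cleanup_iod;	/* undo steps 2, then 1 */
	}

	return 0;			/* submitted successfully */

out_cleanup_iod:
	free_iod_calls++;		/* falls through to free the cmd too */
out_free_cmd:
	cleanup_cmd_calls++;
	return ret;
}
```

The labels are ordered so that jumping to the later one falls through the earlier one, guaranteeing a failure at step N undoes exactly steps N-1 down to 1.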
@ -28,7 +28,6 @@
|
|||
|
||||
#include <rdma/ib_verbs.h>
|
||||
#include <rdma/rdma_cm.h>
|
||||
#include <rdma/ib_cm.h>
|
||||
#include <linux/nvme-rdma.h>
|
||||
|
||||
#include "nvme.h"
|
||||
|
@ -66,6 +65,7 @@ struct nvme_rdma_qe {
|
|||
|
||||
struct nvme_rdma_queue;
|
||||
struct nvme_rdma_request {
|
||||
struct nvme_request req;
|
||||
struct ib_mr *mr;
|
||||
struct nvme_rdma_qe sqe;
|
||||
struct ib_sge sge[1 + NVME_RDMA_MAX_INLINE_SEGMENTS];
|
||||
|
@ -241,7 +241,9 @@ out_free_ring:
|
|||
|
||||
static void nvme_rdma_qp_event(struct ib_event *event, void *context)
|
||||
{
|
||||
pr_debug("QP event %d\n", event->event);
|
||||
pr_debug("QP event %s (%d)\n",
|
||||
ib_event_msg(event->event), event->event);
|
||||
|
||||
}
|
||||
|
||||
static int nvme_rdma_wait_for_cm(struct nvme_rdma_queue *queue)
|
||||
|
@ -963,8 +965,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
|
|||
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
|
||||
struct nvme_rdma_device *dev = queue->device;
|
||||
struct ib_device *ibdev = dev->dev;
|
||||
int nents, count;
|
||||
int ret;
|
||||
int count, ret;
|
||||
|
||||
req->num_sge = 1;
|
||||
req->inline_data = false;
|
||||
|
@ -976,16 +977,14 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
|
|||
return nvme_rdma_set_sg_null(c);
|
||||
|
||||
req->sg_table.sgl = req->first_sgl;
|
||||
ret = sg_alloc_table_chained(&req->sg_table, rq->nr_phys_segments,
|
||||
req->sg_table.sgl);
|
||||
ret = sg_alloc_table_chained(&req->sg_table,
|
||||
blk_rq_nr_phys_segments(rq), req->sg_table.sgl);
|
||||
if (ret)
|
||||
return -ENOMEM;
|
||||
|
||||
nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);
|
||||
BUG_ON(nents > rq->nr_phys_segments);
|
||||
req->nents = nents;
|
||||
req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);
|
||||
|
||||
count = ib_dma_map_sg(ibdev, req->sg_table.sgl, nents,
|
||||
count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
|
||||
rq_data_dir(rq) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
|
||||
if (unlikely(count <= 0)) {
|
||||
sg_free_table_chained(&req->sg_table, true);
|
||||
|
@@ -1130,13 +1129,10 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 		struct nvme_completion *cqe, struct ib_wc *wc, int tag)
 {
-	u16 status = le16_to_cpu(cqe->status);
 	struct request *rq;
 	struct nvme_rdma_request *req;
 	int ret = 0;
 
-	status >>= 1;
-
 	rq = blk_mq_tag_to_rq(nvme_rdma_tagset(queue), cqe->command_id);
 	if (!rq) {
 		dev_err(queue->ctrl->ctrl.device,
@@ -1147,9 +1143,6 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 	}
 	req = blk_mq_rq_to_pdu(rq);
 
-	if (rq->cmd_type == REQ_TYPE_DRV_PRIV && rq->special)
-		memcpy(rq->special, cqe, sizeof(*cqe));
-
 	if (rq->tag == tag)
 		ret = 1;
 
@@ -1157,8 +1150,8 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 			wc->ex.invalidate_rkey == req->mr->rkey)
 		req->mr->need_inval = false;
 
-	blk_mq_complete_request(rq, status);
-
+	req->req.result = cqe->result;
+	blk_mq_complete_request(rq, le16_to_cpu(cqe->status) >> 1);
 	return ret;
 }
 
@@ -1186,7 +1179,8 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag)
 	 */
 	if (unlikely(nvme_rdma_queue_idx(queue) == 0 &&
 			cqe->command_id >= NVME_RDMA_AQ_BLKMQ_DEPTH))
-		nvme_complete_async_event(&queue->ctrl->ctrl, cqe);
+		nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status,
+				&cqe->result);
 	else
 		ret = nvme_rdma_process_nvme_rsp(queue, cqe, wc, tag);
 	ib_dma_sync_single_for_device(ibdev, qe->dma, len, DMA_FROM_DEVICE);
@@ -1433,10 +1427,9 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
 	ret = nvme_setup_cmd(ns, rq, c);
-	if (ret)
+	if (ret != BLK_MQ_RQ_QUEUE_OK)
 		return ret;
 
-	c->common.command_id = rq->tag;
 	blk_mq_start_request(rq);
 
 	map_len = nvme_map_len(rq);
@@ -1944,6 +1937,14 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		opts->queue_size = ctrl->ctrl.maxcmd;
 	}
 
+	if (opts->queue_size > ctrl->ctrl.sqsize + 1) {
+		/* warn if sqsize is lower than queue_size */
+		dev_warn(ctrl->ctrl.device,
+			"queue_size %zu > ctrl sqsize %u, clamping down\n",
+			opts->queue_size, ctrl->ctrl.sqsize + 1);
+		opts->queue_size = ctrl->ctrl.sqsize + 1;
+	}
+
 	if (opts->nr_io_queues) {
 		ret = nvme_rdma_create_io_queues(ctrl);
 		if (ret)
Some files were not shown because too many files have changed in this diff.