License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If such a file was under a */uapi/* path, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types; see the sketch below). Finally Greg ran the script using the .csv files to
generate the patches.
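
For reference, the two tag forms differ only in comment style; a short sketch of
what the script emits, following the kernel's license-rules convention (the
GPL-2.0 identifier shown is the one applied by this series):

	/* Emitted at the top of a .c source file: */
	// SPDX-License-Identifier: GPL-2.0

	/* Emitted at the top of a .h header file: */
	/* SPDX-License-Identifier: GPL-2.0 */
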
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Portions Copyright (C) 1992 Drew Eckhardt
 */
#ifndef _LINUX_BLKDEV_H
#define _LINUX_BLKDEV_H

#include <linux/types.h>
#include <linux/blk_types.h>
#include <linux/device.h>
#include <linux/list.h>

blk-mq: new multi-queue block IO queueing mechanism
Linux currently has two models for block devices:
- The classic request_fn based approach, where drivers use struct
request units for IO. The block layer provides various helper
functionalities to let drivers share code, things like tag
management, timeout handling, queueing, etc.
- The "stacked" approach, where a driver squeezes in between the
block layer and IO submitter. Since this bypasses the IO stack,
drivers generally have to manage everything themselves.
With drivers being written for new high IOPS devices, the classic
request_fn based driver doesn't work well enough. The design dates
back to when both SMP and high IOPS were rare. It has problems with
scaling to bigger machines, and runs into scaling issues even on
smaller machines when you have IOPS in the hundreds of thousands
per device.
The stacked approach is then most often selected as the model
for the driver. But this means that everybody has to re-invent
everything, and along with that we get all the problems again
that the shared approach solved.
This commit introduces blk-mq, block multi queue support. The
design is centered around per-cpu queues for queueing IO, which
then funnel down into x number of hardware submission queues.
We might have a 1:1 mapping between the two, or it might be
an N:M mapping. That all depends on what the hardware supports.
blk-mq provides various helper functions, which include:
- Scalable support for request tagging. Most devices need to
be able to uniquely identify a request both in the driver and
to the hardware. The tagging uses per-cpu caches for freed
tags, to enable cache hot reuse.
- Timeout handling without tracking requests on a per-device
basis. Basically the driver should be able to get a notification
if a request happens to fail.
- Optional support for non 1:1 mappings between issue and
submission queues. blk-mq can redirect IO completions to the
desired location.
- Support for per-request payloads. Drivers almost always need
to associate a request structure with some driver private
command structure. Drivers can tell blk-mq this at init time,
and then any request handed to the driver will have the
required size of memory associated with it.
- Support for merging of IO, and plugging. The stacked model
gets neither of these. Even for high IOPS devices, merging
sequential IO reduces per-command overhead and thus
increases bandwidth.
For now, this is provided as a potential 3rd queueing model, with
the hope being that, as it matures, it can replace both the classic
and stacked model. That would get us back to having just 1 real
model for block devices, leaving the stacked approach to dm/md
devices (as it was originally intended).
Contributions in this patch from the following people:
Shaohua Li <shli@fusionio.com>
Alexander Gordeev <agordeev@redhat.com>
Christoph Hellwig <hch@infradead.org>
Mike Christie <michaelc@cs.wisc.edu>
Matias Bjorling <m@bjorling.me>
Jeff Moyer <jmoyer@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
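
To make the "tell blk-mq this at init time" part concrete, a minimal driver-side
sketch might look like the following; my_mq_ops, my_queue_rq and struct my_cmd are
hypothetical driver names, and the field set of struct blk_mq_tag_set has grown over
the years, so treat this as an illustration rather than the canonical API:

	#include <linux/blk-mq.h>
	#include <linux/string.h>

	struct my_cmd {
		int status;			/* per-request driver payload */
	};

	static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
					const struct blk_mq_queue_data *bd)
	{
		blk_mq_start_request(bd->rq);
		/* a real driver would issue bd->rq to hardware here */
		blk_mq_end_request(bd->rq, BLK_STS_OK);
		return BLK_STS_OK;
	}

	static const struct blk_mq_ops my_mq_ops = {
		.queue_rq	= my_queue_rq,
	};

	static struct blk_mq_tag_set my_tag_set;

	static int my_driver_init_tags(void)
	{
		memset(&my_tag_set, 0, sizeof(my_tag_set));
		my_tag_set.ops		= &my_mq_ops;
		my_tag_set.nr_hw_queues	= 1;			/* 1:1 or N:M mapping to hw queues */
		my_tag_set.queue_depth	= 64;			/* per-hardware-queue tag space */
		my_tag_set.numa_node	= NUMA_NO_NODE;
		my_tag_set.cmd_size	= sizeof(struct my_cmd);

		/* blk-mq allocates tags and per-request memory based on the above */
		return blk_mq_alloc_tag_set(&my_tag_set);
	}
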

#include <linux/llist.h>
#include <linux/minmax.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/wait.h>
#include <linux/bio.h>
#include <linux/gfp.h>
#include <linux/kdev_t.h>
#include <linux/rcupdate.h>
#include <linux/percpu-refcount.h>
#include <linux/blkzoned.h>
#include <linux/sched.h>

blk-mq: Use request queue-wide tags for tagset-wide sbitmap
The tags used for an IO scheduler are currently per hctx.
As such, when q->nr_hw_queues grows, so does the request queue total IO
scheduler tag depth.
This may cause problems for SCSI MQ HBAs whose total driver depth is
fixed.
Ming and Yanhui report higher CPU usage and lower throughput in scenarios
where the fixed total driver tag depth is appreciably lower than the total
scheduler tag depth:
https://lore.kernel.org/linux-block/440dfcfc-1a2c-bd98-1161-cec4d78c6dfc@huawei.com/T/#mc0d6d4f95275a2743d1c8c3e4dc9ff6c9aa3a76b
In that scenario, since the scheduler tag is acquired first, much contention
is introduced because a driver tag may not be available after we have got
the sched tag.
Improve this scenario by introducing request queue-wide tags for when
a tagset-wide sbitmap is used. The static sched requests are still
allocated per hctx, as requests are initialised per hctx, as in
blk_mq_init_request(..., hctx_idx, ...) ->
set->ops->init_request(.., hctx_idx, ...).
For simplicity of resizing the request queue sbitmap when updating the
request queue depth, just init at the max possible size, so we don't need
to deal with possibly swapping out a new sbitmap for the old one if
we need to grow.
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/1620907258-30910-3-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

#include <linux/sbitmap.h>
#include <linux/uuid.h>
#include <linux/xarray.h>

struct module;
struct request_queue;
struct elevator_queue;
struct blk_trace;
struct request;
struct sg_io_hdr;
struct blkcg_gq;
struct blk_flush_queue;
struct kiocb;
struct pr_ops;
struct rq_qos;

blk-stat: convert to callback-based statistics reporting
Currently, statistics are gathered in ~0.13s windows, and users grab the
statistics whenever they need them. This is not ideal for either of the in-tree
users:
1. Writeback throttling wants its own dynamically sized window of
statistics. Since the blk-stats statistics are reset after every
window and the wbt windows don't line up with the blk-stats windows,
wbt doesn't see every I/O.
2. Polling currently grabs the statistics on every I/O. Again, depending
on how the window lines up, we may miss some I/Os. It's also
unnecessary overhead to get the statistics on every I/O; the hybrid
polling heuristic would be just as happy with the statistics from the
previous full window.
This reworks the blk-stats infrastructure to be callback-based: users
register a callback that they want called at a given time with all of
the statistics from the window during which the callback was active.
Users can dynamically bucketize the statistics. wbt and polling both
currently use read vs. write, but polling can be extended to further
subdivide based on request size.
The callbacks are kept on an RCU list, and each callback has percpu
stats buffers. There will only be a few users, so the overhead on the
I/O completion side is low. The stats flushing is also simplified
considerably: since the timer function is responsible for clearing the
statistics, we don't have to worry about stale statistics.
wbt is a trivial conversion. After the conversion, the windowing problem
mentioned above is fixed.
For polling, we register an extra callback that caches the previous
window's statistics in the struct request_queue for the hybrid polling
heuristic to use.
Since we no longer have a single stats buffer for the request queue,
this also removes the sysfs and debugfs stats entries. To replace those,
we add a debugfs entry for the poll statistics.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>

struct blk_queue_stats;
struct blk_stat_callback;

blk-crypto: rename blk_keyslot_manager to blk_crypto_profile
blk_keyslot_manager is misnamed because it doesn't necessarily manage
keyslots. It actually does several different things:
- Contains the crypto capabilities of the device.
- Provides functions to control the inline encryption hardware.
Originally these were just for programming/evicting keyslots;
however, new functionality (hardware-wrapped keys) will require new
functions here which are unrelated to keyslots. Moreover,
device-mapper devices already (ab)use "keyslot_evict" to pass key
eviction requests to their underlying devices even though
device-mapper devices don't have any keyslots themselves (so it
really should be "evict_key", not "keyslot_evict").
- Sometimes (but not always!) it manages keyslots. Originally it
always did, but device-mapper devices don't have keyslots
themselves, so they use a "passthrough keyslot manager" which
doesn't actually manage keyslots. This hack works, but the
terminology is unnatural. Also, some hardware doesn't have keyslots
and thus also uses a "passthrough keyslot manager" (support for such
hardware is yet to be upstreamed, but it will happen eventually).
Let's stop having keyslot managers which don't actually manage keyslots.
Instead, rename blk_keyslot_manager to blk_crypto_profile.
This is a fairly big change, since for consistency it also has to update
keyslot manager-related function names, variable names, and comments --
not just the actual struct name. However it's still a fairly
straightforward change, as it doesn't change any actual functionality.
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20211018180453.40441-4-ebiggers@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>

struct blk_crypto_profile;

extern const struct device_type disk_type;
extern struct device_type part_type;
extern struct class block_class;

/* Must be consistent with blk_mq_poll_stats_bkt() */
#define BLK_MQ_POLL_STATS_BKTS		16

/* Doing classic polling */
#define BLK_MQ_POLL_CLASSIC		-1

/*
 * Maximum number of blkcg policies allowed to be registered concurrently.
 * Defined here to simplify include dependency.
 */
#define BLKCG_MAX_POLS			6

#define DISK_MAX_PARTS			256
#define DISK_NAME_LEN			32

#define PARTITION_META_INFO_VOLNAMELTH	64
/*
 * Enough for the string representation of any kind of UUID plus NULL.
 * EFI UUID is 36 characters. MSDOS UUID is 11 characters.
 */
#define PARTITION_META_INFO_UUIDLTH	(UUID_STRING_LEN + 1)

struct partition_meta_info {
	char uuid[PARTITION_META_INFO_UUIDLTH];
	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
};

/**
 * DOC: genhd capability flags
 *
 * ``GENHD_FL_REMOVABLE``: indicates that the block device gives access to
 * removable media. When set, the device remains present even when media is not
 * inserted. Shall not be set for devices which are removed entirely when the
 * media is removed.
 *
 * ``GENHD_FL_HIDDEN``: the block device is hidden; it doesn't produce events,
 * doesn't appear in sysfs, and can't be opened from userspace or using
 * blkdev_get*. Used for the underlying components of multipath devices.
 *
 * ``GENHD_FL_NO_PART``: partition support is disabled. The kernel will not
 * scan for partitions from add_disk, and users can't add partitions manually.
 */
enum {
	GENHD_FL_REMOVABLE			= 1 << 0,
	GENHD_FL_HIDDEN				= 1 << 1,
	GENHD_FL_NO_PART			= 1 << 2,
};

enum {
	DISK_EVENT_MEDIA_CHANGE			= 1 << 0, /* media changed */
	DISK_EVENT_EJECT_REQUEST		= 1 << 1, /* eject requested */
};

enum {
	/* Poll even if events_poll_msecs is unset */
	DISK_EVENT_FLAG_POLL			= 1 << 0,
	/* Forward events to udev */
	DISK_EVENT_FLAG_UEVENT			= 1 << 1,
	/* Block event polling when open for exclusive write */
	DISK_EVENT_FLAG_BLOCK_ON_EXCL_WRITE	= 1 << 2,
};
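
For a removable-media driver, these capability flags and event masks are set on
the gendisk before it is registered. A rough sketch, with the surrounding driver
code left hypothetical:

	/* hypothetical helper called from a removable-media driver's probe path */
	static void my_setup_media_events(struct gendisk *disk)
	{
		disk->flags |= GENHD_FL_REMOVABLE;
		disk->events = DISK_EVENT_MEDIA_CHANGE | DISK_EVENT_EJECT_REQUEST;
		/* poll for media change and forward the resulting events to udev */
		disk->event_flags = DISK_EVENT_FLAG_POLL | DISK_EVENT_FLAG_UEVENT;
	}
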
struct disk_events;
struct badblocks;

struct blk_integrity {
	const struct blk_integrity_profile	*profile;
	unsigned char				flags;
	unsigned char				tuple_size;
	unsigned char				interval_exp;
	unsigned char				tag_size;
};
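
A driver that supports protection information fills a template like the above and
registers it against its gendisk. A hedged sketch using the T10 Type 1 CRC profile;
the helper blk_integrity_register() and the t10_pi_type1_crc profile are assumed
from <linux/blk-integrity.h> and <linux/t10-pi.h>, and a real driver derives these
settings from what its hardware reports:

	#include <linux/blk-integrity.h>
	#include <linux/t10-pi.h>

	static void my_register_integrity(struct gendisk *disk)
	{
		struct blk_integrity bi = {
			.profile	= &t10_pi_type1_crc,
			.tuple_size	= sizeof(struct t10_pi_tuple),
			.tag_size	= 0,
		};

		/* the template is copied into the disk's queue */
		blk_integrity_register(disk, &bi);
	}
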
struct gendisk {
	/*
	 * major/first_minor/minors should not be set by any new driver, the
	 * block core will take care of allocating them automatically.
	 */
	int major;
	int first_minor;
	int minors;

	char disk_name[DISK_NAME_LEN];	/* name of major driver */

	unsigned short events;		/* supported events */
	unsigned short event_flags;	/* flags related to event processing */

	struct xarray part_tbl;
	struct block_device *part0;

	const struct block_device_operations *fops;
	struct request_queue *queue;
	void *private_data;

	struct bio_set bio_split;

	int flags;
	unsigned long state;
#define GD_NEED_PART_SCAN		0
#define GD_READ_ONLY			1
#define GD_DEAD				2
#define GD_NATIVE_CAPACITY		3
#define GD_ADDED			4
#define GD_SUPPRESS_PART_SCAN		5
#define GD_OWNS_QUEUE			6

	struct mutex open_mutex;	/* open/close mutex */
	unsigned open_partitions;	/* number of open partitions */

	struct backing_dev_info	*bdi;
	struct kobject queue_kobj;	/* the queue/ directory */
	struct kobject *slave_dir;
#ifdef CONFIG_BLOCK_HOLDER_DEPRECATED
	struct list_head slave_bdevs;
#endif
	struct timer_rand_state *random;
	atomic_t sync_io;		/* RAID */
	struct disk_events *ev;
#ifdef CONFIG_BLK_DEV_INTEGRITY
	struct kobject integrity_kobj;
#endif	/* CONFIG_BLK_DEV_INTEGRITY */

#ifdef CONFIG_BLK_DEV_ZONED
	/*
	 * Zoned block device information for request dispatch control.
	 * nr_zones is the total number of zones of the device. This is always
	 * 0 for regular block devices. conv_zones_bitmap is a bitmap of nr_zones
	 * bits which indicates if a zone is conventional (bit set) or
	 * sequential (bit clear). seq_zones_wlock is a bitmap of nr_zones
	 * bits which indicates if a zone is write locked, that is, if a write
	 * request targeting the zone was dispatched.
	 *
	 * Reads of this information must be protected with blk_queue_enter() /
	 * blk_queue_exit(). Modifying this information is only allowed while
	 * no requests are being processed. See also blk_mq_freeze_queue() and
	 * blk_mq_unfreeze_queue().
	 */
	unsigned int		nr_zones;
	unsigned int		max_open_zones;
	unsigned int		max_active_zones;
	unsigned long		*conv_zones_bitmap;
	unsigned long		*seq_zones_wlock;
#endif /* CONFIG_BLK_DEV_ZONED */

#if IS_ENABLED(CONFIG_CDROM)
	struct cdrom_device_info *cdi;
#endif
	int node_id;
	struct badblocks *bb;
	struct lockdep_map lockdep_map;
	u64 diskseq;

	/*
	 * Independent sector access ranges. This is always NULL for
	 * devices that do not have multiple independent access ranges.
	 */
	struct blk_independent_access_ranges *ia_ranges;
};
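
Putting the fields above together: a driver typically allocates the gendisk (for
example with blk_mq_alloc_disk() or blk_alloc_disk()), fills in the public fields,
and only then registers it. A rough sketch with hypothetical names (my_fops,
struct my_device, parent_dev):

	static int my_attach_disk(struct gendisk *disk, struct my_device *my_dev,
				  struct device *parent_dev)
	{
		disk->fops		= &my_fops;	/* struct block_device_operations */
		disk->private_data	= my_dev;
		snprintf(disk->disk_name, DISK_NAME_LEN, "myblk%d", my_dev->index);
		set_capacity(disk, my_dev->nr_sectors);	/* unit: 512-byte sectors */

		/* makes the disk visible; partitions are scanned unless GENHD_FL_NO_PART */
		return device_add_disk(parent_dev, disk, NULL);
	}
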
static inline bool disk_live(struct gendisk *disk)
{
	return !inode_unhashed(disk->part0->bd_inode);
}

/**
 * disk_openers - returns how many openers are there for a disk
 * @disk: disk to check
 *
 * This returns the number of openers for a disk. Note that this value is only
 * stable if disk->open_mutex is held.
 *
 * Note: Due to a quirk in the block layer open code, each open partition is
 * only counted once even if there are multiple openers.
 */
static inline unsigned int disk_openers(struct gendisk *disk)
{
	return atomic_read(&disk->part0->bd_openers);
}
/*
 * The gendisk is refcounted by the part0 block_device, and the bd_device
 * therein is also used for device model presentation in sysfs.
 */
#define dev_to_disk(device) \
	(dev_to_bdev(device)->bd_disk)
#define disk_to_dev(disk) \
	(&((disk)->part0->bd_device))

#if IS_REACHABLE(CONFIG_CDROM)
#define disk_to_cdi(disk)	((disk)->cdi)
#else
#define disk_to_cdi(disk)	NULL
#endif

static inline dev_t disk_devt(struct gendisk *disk)
{
	return MKDEV(disk->major, disk->first_minor);
}

static inline int blk_validate_block_size(unsigned long bsize)
{
	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
		return -EINVAL;

	return 0;
}
static inline bool blk_op_is_passthrough(blk_opf_t op)
{
	op &= REQ_OP_MASK;
	return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
}

/*
 * Zoned block device models (zoned limit).
 *
 * Note: This needs to be ordered from the least to the most severe
 * restrictions for the inheritance in blk_stack_limits() to work.
 */
enum blk_zoned_model {
	BLK_ZONED_NONE = 0,	/* Regular block device */
	BLK_ZONED_HA,		/* Host-aware zoned block device */
	BLK_ZONED_HM,		/* Host-managed zoned block device */
};
/*
 * BLK_BOUNCE_NONE:	never bounce (default)
 * BLK_BOUNCE_HIGH:	bounce all highmem pages
 */
enum blk_bounce {
	BLK_BOUNCE_NONE,
	BLK_BOUNCE_HIGH,
};

struct queue_limits {
	enum blk_bounce		bounce;
	unsigned long		seg_boundary_mask;
	unsigned long		virt_boundary_mask;

	unsigned int		max_hw_sectors;
	unsigned int		max_dev_sectors;
	unsigned int		chunk_sectors;
	unsigned int		max_sectors;
	unsigned int		max_segment_size;
	unsigned int		physical_block_size;
	unsigned int		logical_block_size;
	unsigned int		alignment_offset;
	unsigned int		io_min;
	unsigned int		io_opt;
	unsigned int		max_discard_sectors;
	unsigned int		max_hw_discard_sectors;
	unsigned int		max_secure_erase_sectors;
	unsigned int		max_write_zeroes_sectors;
	unsigned int		max_zone_append_sectors;
	unsigned int		discard_granularity;
	unsigned int		discard_alignment;
	unsigned int		zone_write_granularity;

	unsigned short		max_segments;
	unsigned short		max_integrity_segments;
	unsigned short		max_discard_segments;

	unsigned char		misaligned;
	unsigned char		discard_misaligned;
	unsigned char		raid_partial_stripes_expensive;
	enum blk_zoned_model	zoned;

	/*
	 * Drivers that set dma_alignment to less than 511 must be prepared to
	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
	 * due to possible offsets.
	 */
	unsigned int		dma_alignment;
};
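
Drivers normally do not write struct queue_limits directly; they use the
blk_queue_* helpers from this header, which also keep derived fields consistent.
A rough sketch for a device with 4 KiB sectors, with purely illustrative values
(q is the driver's struct request_queue pointer):

	/* hypothetical: configure limits for a device with 4 KiB sectors */
	static void my_set_queue_limits(struct request_queue *q)
	{
		blk_queue_logical_block_size(q, 4096);
		blk_queue_physical_block_size(q, 4096);
		blk_queue_max_hw_sectors(q, 2048);	/* 1 MiB per request, in 512-byte units */
		blk_queue_max_segments(q, 128);
		blk_queue_io_min(q, 4096);
		blk_queue_io_opt(q, 128 * 1024);
	}
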
typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
			       void *data);

void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model);

#ifdef CONFIG_BLK_DEV_ZONED

#define BLK_ALL_ZONES  ((unsigned int)-1)
int blkdev_report_zones(struct block_device *bdev, sector_t sector,
			unsigned int nr_zones, report_zones_cb cb, void *data);
unsigned int bdev_nr_zones(struct block_device *bdev);
extern int blkdev_zone_mgmt(struct block_device *bdev, enum req_op op,
			    sector_t sectors, sector_t nr_sectors,
			    gfp_t gfp_mask);
int blk_revalidate_disk_zones(struct gendisk *disk,
			      void (*update_driver_data)(struct gendisk *disk));

extern int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
				     unsigned int cmd, unsigned long arg);
extern int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
				  unsigned int cmd, unsigned long arg);

#else /* CONFIG_BLK_DEV_ZONED */

static inline unsigned int bdev_nr_zones(struct block_device *bdev)
{
	return 0;
}

static inline int blkdev_report_zones_ioctl(struct block_device *bdev,
					    fmode_t mode, unsigned int cmd,
					    unsigned long arg)
{
	return -ENOTTY;
}

static inline int blkdev_zone_mgmt_ioctl(struct block_device *bdev,
					 fmode_t mode, unsigned int cmd,
					 unsigned long arg)
{
	return -ENOTTY;
}

#endif /* CONFIG_BLK_DEV_ZONED */

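The report_zones_cb/blkdev_report_zones() pair above is used by passing a callback
that is invoked once per zone. A rough sketch that counts the sequential-write-required
zones of a device (the wrapper function itself is hypothetical):

	#include <linux/blkdev.h>
	#include <linux/blkzoned.h>

	static int count_seq_zone_cb(struct blk_zone *zone, unsigned int idx, void *data)
	{
		unsigned int *count = data;

		if (zone->type == BLK_ZONE_TYPE_SEQWRITE_REQ)
			(*count)++;
		return 0;	/* returning non-zero stops the iteration with that error */
	}

	static int count_seq_zones(struct block_device *bdev, unsigned int *count)
	{
		int ret;

		*count = 0;
		ret = blkdev_report_zones(bdev, 0, BLK_ALL_ZONES,
					  count_seq_zone_cb, count);
		return ret < 0 ? ret : 0;	/* >= 0 is the number of zones reported */
	}
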
block: Add independent access ranges support
The Concurrent Positioning Ranges VPD page (for SCSI) and data log page
(for ATA) contain parameters describing the set of contiguous LBAs that
can be served independently by a single LUN multi-actuator hard-disk.
Similarly, a logically defined block device composed of multiple disks
can in some cases execute requests directed at different sector ranges
in parallel. A dm-linear device aggregating 2 block devices together is
an example.
This patch implements support for exposing a block device independent
access ranges to the user through sysfs to allow optimizing device
accesses to increase performance.
To describe the set of independent sector ranges of a device (actuators
of a multi-actuator HDD or table entries of a dm-linear device),
the type struct blk_independent_access_ranges is introduced. This
structure describes the sector ranges using an array of
struct blk_independent_access_range structures. This range structure
defines the start sector and number of sectors of the access range.
The ranges in the array cannot overlap and must contain all sectors
within the device capacity.
The function disk_set_independent_access_ranges() allows a device
driver to signal to the block layer that a device has multiple
independent access ranges. In this case, a struct
blk_independent_access_ranges is attached to the device request queue
by the function disk_set_independent_access_ranges(). The function
disk_alloc_independent_access_ranges() is provided for drivers to
allocate this structure.
struct blk_independent_access_ranges contains kobjects (struct kobject)
to expose to the user through sysfs the set of independent access ranges
supported by a device. When the device is initialized, sysfs
registration of the ranges information is done from blk_register_queue()
using the block layer internal function
disk_register_independent_access_ranges(). If a driver calls
disk_set_independent_access_ranges() for a registered queue, e.g. when a
device is revalidated, disk_set_independent_access_ranges() will execute
disk_register_independent_access_ranges() to update the sysfs attribute
files. The sysfs file structure created starts from the
independent_access_ranges sub-directory and contains the start sector
and number of sectors of each range, with the information for each range
grouped in numbered sub-directories.
E.g. for a dual actuator HDD, the user sees:
$ tree /sys/block/sdk/queue/independent_access_ranges/
/sys/block/sdk/queue/independent_access_ranges/
|-- 0
| |-- nr_sectors
| `-- sector
`-- 1
|-- nr_sectors
`-- sector
For a regular device with a single access range, the
independent_access_ranges sysfs directory does not exist.
Device revalidation may lead to changes to this structure and to the
attribute values. When manipulated, the queue sysfs_lock and
sysfs_dir_lock mutexes are held for atomicity, similarly to how the
blk-mq and elevator sysfs queue sub-directories are protected.
The code related to the management of independent access ranges is
added in the new file block/blk-ia-ranges.c.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20211027022223.183838-2-damien.lemoal@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
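
A hedged sketch of the driver-side calls named above; the two-range layout and the
error handling are illustrative only, and the exact prototypes may differ slightly
from the helpers in block/blk-ia-ranges.c:

	/* hypothetical: expose two equal halves of a dual-actuator disk */
	static int my_register_access_ranges(struct gendisk *disk, sector_t capacity)
	{
		struct blk_independent_access_ranges *iars;

		iars = disk_alloc_independent_access_ranges(disk, 2);
		if (!iars)
			return -ENOMEM;

		iars->ia_range[0].sector     = 0;
		iars->ia_range[0].nr_sectors = capacity / 2;
		iars->ia_range[1].sector     = capacity / 2;
		iars->ia_range[1].nr_sectors = capacity - capacity / 2;

		disk_set_independent_access_ranges(disk, iars);
		return 0;
	}
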

/*
 * Independent access ranges: struct blk_independent_access_range describes
 * a range of contiguous sectors that can be accessed using device command
 * execution resources that are independent from the resources used for
 * other access ranges. This is typically found with single-LUN multi-actuator
 * HDDs where each access range is served by a different set of heads.
 * The set of independent ranges supported by the device is defined using
 * struct blk_independent_access_ranges. The independent ranges must not overlap
 * and must include all sectors within the disk capacity (no sector holes
 * allowed).
 * For a device with multiple ranges, requests targeting sectors in different
 * ranges can be executed in parallel. A request can straddle an access range
 * boundary.
 */
struct blk_independent_access_range {
	struct kobject		kobj;
	sector_t		sector;
	sector_t		nr_sectors;
};

struct blk_independent_access_ranges {
	struct kobject				kobj;
	bool					sysfs_registered;
	unsigned int				nr_ia_ranges;
	struct blk_independent_access_range	ia_range[];
};

struct request_queue {
	struct request		*last_merge;
	struct elevator_queue	*elevator;

	struct percpu_ref	q_usage_counter;
	struct blk_queue_stats	*stats;
	struct rq_qos		*rq_qos;

block: hook up writeback throttling
Enable throttling of buffered writeback to make it a lot
smoother, so it has far less impact on other system activity.
Background writeback should be, by definition, background
activity. The fact that we flush huge bundles of it at the time
means that it potentially has heavy impacts on foreground workloads,
which isn't ideal. We can't easily limit the sizes of writes that
we do, since that would impact file system layout in the presence
of delayed allocation. So just throttle back buffered writeback,
unless someone is waiting for it.
The algorithm for when to throttle takes its inspiration in the
CoDel networking scheduling algorithm. Like CoDel, blk-wb monitors
the minimum latencies of requests over a window of time. In that
window of time, if the minimum latency of any request exceeds a
given target, then a scale count is incremented and the queue depth
is shrunk. The next monitoring window is shrunk accordingly. Unlike
CoDel, if we hit a window that exhibits good behavior, then we
simply increment the scale count and re-calculate the limits for that
scale value. This prevents us from oscillating between a
close-to-ideal value and max all the time, instead remaining in the
windows where we get good behavior.
Unlike CoDel, blk-wb allows the scale count to go negative. This
happens if we primarily have writes going on. Unlike positive
scale counts, this doesn't change the size of the monitoring window.
When the heavy writers finish, blk-wb quickly snaps back to its
stable state of a zero scale count.
The patch registers a sysfs entry, 'wb_lat_usec'. This sets the latency
target to be met. It defaults to 2 msec for non-rotational storage, and
75 msec for rotational storage. Setting this value to '0' disables
blk-wb. Generally, a user would not have to touch this setting.
We don't enable WBT on devices that are managed with CFQ, and have
a non-root block cgroup attached. If we have a proportional share setup
on this particular disk, then the wbt throttling will interfere with
that. We don't have a strong need for wbt for that case, since we will
rely on CFQ doing that for us.
Signed-off-by: Jens Axboe <axboe@fb.com>

	const struct blk_mq_ops	*mq_ops;

	/* sw queues */
	struct blk_mq_ctx __percpu	*queue_ctx;

	unsigned int		queue_depth;

	/* hw dispatch queues */
	struct xarray		hctx_table;
	unsigned int		nr_hw_queues;

	/*
	 * The queue owner gets to use this for whatever they like.
	 * ll_rw_blk doesn't touch it.
	 */
	void			*queuedata;

	/*
	 * various queue flags, see QUEUE_* below
	 */
	unsigned long		queue_flags;

	/*
	 * Number of contexts that have called blk_set_pm_only(). If this
	 * counter is above zero then only RQF_PM requests are processed.
	 */
	atomic_t		pm_only;

	/*
	 * ida allocated id for this queue. Used to index queues from
	 * ioctx.
	 */
	int			id;

	spinlock_t		queue_lock;

	struct gendisk		*disk;

	refcount_t		refs;

blk-mq: new multi-queue block IO queueing mechanism
Linux currently has two models for block devices:
- The classic request_fn based approach, where drivers use struct
request units for IO. The block layer provides various helper
functionalities to let drivers share code, things like tag
management, timeout handling, queueing, etc.
- The "stacked" approach, where a driver squeezes in between the
block layer and IO submitter. Since this bypasses the IO stack,
driver generally have to manage everything themselves.
With drivers being written for new high IOPS devices, the classic
request_fn based driver doesn't work well enough. The design dates
back to when both SMP and high IOPS was rare. It has problems with
scaling to bigger machines, and runs into scaling issues even on
smaller machines when you have IOPS in the hundreds of thousands
per device.
The stacked approach is then most often selected as the model
for the driver. But this means that everybody has to re-invent
everything, and along with that we get all the problems again
that the shared approach solved.
This commit introduces blk-mq, block multi queue support. The
design is centered around per-cpu queues for queueing IO, which
then funnel down into x number of hardware submission queues.
We might have a 1:1 mapping between the two, or it might be
an N:M mapping. That all depends on what the hardware supports.
blk-mq provides various helper functions, which include:
- Scalable support for request tagging. Most devices need to
be able to uniquely identify a request both in the driver and
to the hardware. The tagging uses per-cpu caches for freed
tags, to enable cache hot reuse.
- Timeout handling without tracking request on a per-device
basis. Basically the driver should be able to get a notification,
if a request happens to fail.
- Optional support for non 1:1 mappings between issue and
submission queues. blk-mq can redirect IO completions to the
desired location.
- Support for per-request payloads. Drivers almost always need
to associate a request structure with some driver private
command structure. Drivers can tell blk-mq this at init time,
and then any request handed to the driver will have the
required size of memory associated with it.
- Support for merging of IO, and plugging. The stacked model
gets neither of these. Even for high IOPS devices, merging
sequential IO reduces per-command overhead and thus
increases bandwidth.
For now, this is provided as a potential 3rd queueing model, with
the hope being that, as it matures, it can replace both the classic
and stacked model. That would get us back to having just 1 real
model for block devices, leaving the stacked approach to dm/md
devices (as it was originally intended).
Contributions in this patch from the following people:
Shaohua Li <shli@fusionio.com>
Alexander Gordeev <agordeev@redhat.com>
Christoph Hellwig <hch@infradead.org>
Mike Christie <michaelc@cs.wisc.edu>
Matias Bjorling <m@bjorling.me>
Jeff Moyer <jmoyer@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
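The per-request payload point above is the part a driver touches most directly. A minimal, hypothetical sketch (not taken from this header; the my_* names are invented) of how a blk-mq driver might reserve and use that payload:

#include <linux/blk-mq.h>

struct my_cmd {
        int status;                     /* driver-private per-request state */
};

static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                const struct blk_mq_queue_data *bd)
{
        struct request *rq = bd->rq;
        struct my_cmd *cmd = blk_mq_rq_to_pdu(rq); /* payload sits right after the request */

        cmd->status = 0;
        blk_mq_start_request(rq);
        /* ... issue to hardware; the completion path calls blk_mq_complete_request(rq) ... */
        return BLK_STS_OK;
}

static const struct blk_mq_ops my_mq_ops = {
        .queue_rq       = my_queue_rq,
};

/* At init time the driver tells blk-mq how much per-request payload to reserve. */
static struct blk_mq_tag_set my_tag_set = {
        .ops            = &my_mq_ops,
        .nr_hw_queues   = 1,
        .queue_depth    = 64,
        .cmd_size       = sizeof(struct my_cmd),
        .numa_node      = NUMA_NO_NODE,
};

The tag set would still have to be registered with blk_mq_alloc_tag_set() and attached to a disk before any request reaches my_queue_rq(); that plumbing is omitted here.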
        /*
         * mq queue kobject
         */
        struct kobject *mq_kobj;

#ifdef CONFIG_BLK_DEV_INTEGRITY
        struct blk_integrity integrity;
#endif  /* CONFIG_BLK_DEV_INTEGRITY */

#ifdef CONFIG_PM
        struct device *dev;
        enum rpm_status rpm_status;
#endif

        /*
         * queue settings
         */
        unsigned long nr_requests;      /* Max # of requests */

        unsigned int dma_pad_mask;

block: Keyslot Manager for Inline Encryption
Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size) along
with a data transfer request to a storage device, and the inline encryption
hardware will use that context to en/decrypt the data. The inline
encryption hardware is part of the storage device, and it conceptually sits
on the data path between system memory and the storage device.
Inline Encryption hardware implementations often function around the
concept of "keyslots". These implementations often have a limited number
of "keyslots", each of which can hold a key (we say that a key can be
"programmed" into a keyslot). Requests made to the storage device may have
a keyslot and a data unit number associated with them, and the inline
encryption hardware will en/decrypt the data in the requests using the key
programmed into that associated keyslot and the data unit number specified
with the request.
As keyslots are limited, and programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
contexts, we introduce a Keyslot Manager to efficiently manage keyslots.
We also introduce a blk_crypto_key, which will represent the key that's
programmed into keyslots managed by keyslot managers. The keyslot manager
also functions as the interface that upper layers will use to program keys
into inline encryption hardware. For more information on the Keyslot
Manager, refer to documentation found in block/keyslot-manager.c and
linux/keyslot-manager.h.
Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
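Because keyslots are scarce and reprogramming one can be expensive, the manager's core job is reference-counted reuse. The standalone toy below (invented names, not the kernel implementation) illustrates the reuse-before-reprogram idea:

#include <stdio.h>

#define NR_KEYSLOTS 2

struct keyslot {
        int key;                /* 0 means "no key programmed" (keys are nonzero here) */
        int refcount;           /* in-flight requests currently using this slot */
};

static struct keyslot slots[NR_KEYSLOTS];

/* Return a slot holding @key, reusing an already-programmed slot when
 * possible and only "reprogramming" an idle slot otherwise. */
static int get_keyslot(int key)
{
        for (int i = 0; i < NR_KEYSLOTS; i++)
                if (slots[i].key == key) {      /* cheap path: reuse */
                        slots[i].refcount++;
                        return i;
                }
        for (int i = 0; i < NR_KEYSLOTS; i++)
                if (slots[i].refcount == 0) {   /* expensive path: reprogram */
                        slots[i].key = key;
                        slots[i].refcount = 1;
                        return i;
                }
        return -1;                              /* all slots busy: the request must wait */
}

static void put_keyslot(int i)
{
        slots[i].refcount--;
}

int main(void)
{
        int a = get_keyslot(42);        /* programs slot 0 */
        int b = get_keyslot(42);        /* reuses slot 0, no reprogramming */
        int c = get_keyslot(7);         /* programs slot 1 */

        printf("a=%d b=%d c=%d\n", a, b, c);
        put_keyslot(a);
        put_keyslot(b);
        put_keyslot(c);
        return 0;
}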

#ifdef CONFIG_BLK_INLINE_ENCRYPTION

blk-crypto: rename blk_keyslot_manager to blk_crypto_profile
blk_keyslot_manager is misnamed because it doesn't necessarily manage
keyslots. It actually does several different things:
- Contains the crypto capabilities of the device.
- Provides functions to control the inline encryption hardware.
Originally these were just for programming/evicting keyslots;
however, new functionality (hardware-wrapped keys) will require new
functions here which are unrelated to keyslots. Moreover,
device-mapper devices already (ab)use "keyslot_evict" to pass key
eviction requests to their underlying devices even though
device-mapper devices don't have any keyslots themselves (so it
really should be "evict_key", not "keyslot_evict").
- Sometimes (but not always!) it manages keyslots. Originally it
always did, but device-mapper devices don't have keyslots
themselves, so they use a "passthrough keyslot manager" which
doesn't actually manage keyslots. This hack works, but the
terminology is unnatural. Also, some hardware doesn't have keyslots
and thus also uses a "passthrough keyslot manager" (support for such
hardware is yet to be upstreamed, but it will happen eventually).
Let's stop having keyslot managers which don't actually manage keyslots.
Instead, rename blk_keyslot_manager to blk_crypto_profile.
This is a fairly big change, since for consistency it also has to update
keyslot manager-related function names, variable names, and comments --
not just the actual struct name. However it's still a fairly
straightforward change, as it doesn't change any actual functionality.
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20211018180453.40441-4-ebiggers@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
        struct blk_crypto_profile *crypto_profile;

blk-crypto: show crypto capabilities in sysfs
Add sysfs files that expose the inline encryption capabilities of
request queues:
/sys/block/$disk/queue/crypto/max_dun_bits
/sys/block/$disk/queue/crypto/modes/$mode
/sys/block/$disk/queue/crypto/num_keyslots
Userspace can use these new files to decide what encryption settings to
use, or whether to use inline encryption at all. This also brings the
crypto capabilities in line with the other queue properties, which are
already discoverable via the queue directory in sysfs.
Design notes:
- Place the new files in a new subdirectory "crypto" to group them
together and to avoid complicating the main "queue" directory. This
also makes it possible to replace "crypto" with a symlink later if
we ever make the blk_crypto_profiles into real kobjects (see below).
- It was necessary to define a new kobject that corresponds to the
crypto subdirectory. For now, this kobject just contains a pointer
to the blk_crypto_profile. Note that multiple queues (and hence
multiple such kobjects) may refer to the same blk_crypto_profile.
An alternative design would more closely match the current kernel
data structures: the blk_crypto_profile could be a kobject itself,
located directly under the host controller device's kobject, while
/sys/block/$disk/queue/crypto would be a symlink to it.
I decided not to do that for now because it would require a lot more
changes, such as no longer embedding blk_crypto_profile in other
structures, and also because I'm not sure we can rule out moving the
crypto capabilities into 'struct queue_limits' in the future. (Even
if multiple queues share the same crypto engine, maybe the supported
data unit sizes could differ due to other queue properties.) It
would also still be possible to switch to that design later without
breaking userspace, by replacing the directory with a symlink.
- Use "max_dun_bits" instead of "max_dun_bytes". Currently, the
kernel internally stores this value in bytes, but that's an
implementation detail. It probably makes more sense to talk about
this value in bits, and choosing bits is more future-proof.
- "modes" is a sub-subdirectory, since there may be multiple supported
crypto modes, sysfs is supposed to have one value per file, and it
makes sense to group all the mode files together.
- Each mode had to be named. The crypto API names like "xts(aes)" are
not appropriate because they don't specify the key size. Therefore,
I assigned new names. The exact names chosen are arbitrary, but
they happen to match the names used in log messages in fs/crypto/.
- The "num_keyslots" file is a bit different from the others in that
it is only useful to know for performance reasons. However, it's
included as it can still be useful. For example, a user might not
want to use inline encryption if there aren't very many keyslots.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20220124215938.2769-4-ebiggers@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
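A quick way to see what the files above report is to read them from userspace; the sketch below assumes a disk named sda and that the crypto directory exists (it is absent when the device has no inline encryption support):

#include <stdio.h>

static void dump_file(const char *path)
{
        char buf[128];
        FILE *f = fopen(path, "r");

        if (!f) {
                printf("%s: not present\n", path);
                return;
        }
        if (fgets(buf, sizeof(buf), f))
                printf("%s: %s", path, buf);
        fclose(f);
}

int main(void)
{
        /* "modes/" is a sub-subdirectory with one file per supported crypto mode. */
        dump_file("/sys/block/sda/queue/crypto/max_dun_bits");
        dump_file("/sys/block/sda/queue/crypto/num_keyslots");
        return 0;
}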
        struct kobject *crypto_kobject;
#endif

        unsigned int rq_timeout;

        int poll_nsec;

blk-stat: convert to callback-based statistics reporting
Currently, statistics are gathered in ~0.13s windows, and users grab the
statistics whenever they need them. This is not ideal for either of the
in-tree users:
1. Writeback throttling wants its own dynamically sized window of
statistics. Since the blk-stats statistics are reset after every
window and the wbt windows don't line up with the blk-stats windows,
wbt doesn't see every I/O.
2. Polling currently grabs the statistics on every I/O. Again, depending
on how the window lines up, we may miss some I/Os. It's also
unnecessary overhead to get the statistics on every I/O; the hybrid
polling heuristic would be just as happy with the statistics from the
previous full window.
This reworks the blk-stats infrastructure to be callback-based: users
register a callback that they want called at a given time with all of
the statistics from the window during which the callback was active.
Users can dynamically bucketize the statistics. wbt and polling both
currently use read vs. write, but polling can be extended to further
subdivide based on request size.
The callbacks are kept on an RCU list, and each callback has percpu
stats buffers. There will only be a few users, so the overhead on the
I/O completion side is low. The stats flushing is also simplified
considerably: since the timer function is responsible for clearing the
statistics, we don't have to worry about stale statistics.
wbt is a trivial conversion. After the conversion, the windowing problem
mentioned above is fixed.
For polling, we register an extra callback that caches the previous
window's statistics in the struct request_queue for the hybrid polling
heuristic to use.
Since we no longer have a single stats buffer for the request queue,
this also removes the sysfs and debugfs stats entries. To replace those,
we add a debugfs entry for the poll statistics.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
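The essential shape of the design is: completions are accounted into per-bucket counters, and a callback sees one whole window at a time. A standalone toy model of that shape (invented names, not the blk-stat API):

#include <stdio.h>

enum { BUCKET_READ, BUCKET_WRITE, NR_BUCKETS };

struct window_stat {
        unsigned long nr;
        unsigned long total_ns;
};

static struct window_stat window[NR_BUCKETS];

static void account_io(int is_write, unsigned long lat_ns)
{
        struct window_stat *s = &window[is_write ? BUCKET_WRITE : BUCKET_READ];

        s->nr++;
        s->total_ns += lat_ns;
}

/* Called when the window ends; sees the full window's buckets and resets them. */
static void window_callback(struct window_stat *stats)
{
        for (int i = 0; i < NR_BUCKETS; i++) {
                if (stats[i].nr)
                        printf("bucket %d: %lu ios, avg %lu ns\n", i,
                               stats[i].nr, stats[i].total_ns / stats[i].nr);
                stats[i].nr = 0;
                stats[i].total_ns = 0;
        }
}

int main(void)
{
        account_io(0, 120000);
        account_io(0, 80000);
        account_io(1, 500000);
        window_callback(window);        /* pretend the window timer just fired */
        return 0;
}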
        struct blk_stat_callback *poll_cb;
        struct blk_rq_stat *poll_stat;

        struct timer_list timeout;
        struct work_struct timeout_work;

        atomic_t nr_active_requests_shared_tags;

        struct blk_mq_tags *sched_shared_tags;

blk-mq: Use request queue-wide tags for tagset-wide sbitmap
The tags used for an IO scheduler are currently per hctx.
As such, when q->nr_hw_queues grows, so does the request queue total IO
scheduler tag depth.
This may cause problems for SCSI MQ HBAs whose total driver depth is
fixed.
Ming and Yanhui report higher CPU usage and lower throughput in scenarios
where the fixed total driver tag depth is appreciably lower than the total
scheduler tag depth:
https://lore.kernel.org/linux-block/440dfcfc-1a2c-bd98-1161-cec4d78c6dfc@huawei.com/T/#mc0d6d4f95275a2743d1c8c3e4dc9ff6c9aa3a76b
In that scenario, since the scheduler tag is obtained first, much contention
is introduced since a driver tag may not be available after we have got
the sched tag.
Improve this scenario by introducing request queue-wide tags for when
a tagset-wide sbitmap is used. The static sched requests are still
allocated per hctx, as requests are initialised per hctx, as in
blk_mq_init_request(..., hctx_idx, ...) ->
set->ops->init_request(.., hctx_idx, ...).
For simplicity of resizing the request queue sbitmap when updating the
request queue depth, just init at the max possible size, so we don't need
to deal with the possibility of swapping in a new sbitmap for the old one if
we need to grow.
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/1620907258-30910-3-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

        struct list_head icq_list;
#ifdef CONFIG_BLK_CGROUP
        DECLARE_BITMAP(blkcg_pols, BLKCG_MAX_POLS);
        struct blkcg_gq *root_blkg;
        struct list_head blkg_list;
#endif

        struct queue_limits limits;

        unsigned int required_elevator_features;

        int node;
#ifdef CONFIG_BLK_DEV_IO_TRACE
        struct blk_trace __rcu *blk_trace;
#endif
        /*
         * for flush operations
         */
        struct blk_flush_queue *fq;

        struct list_head requeue_list;
        spinlock_t requeue_lock;
        struct delayed_work requeue_work;

        struct mutex sysfs_lock;

block: split .sysfs_lock into two locks
The kernfs built-in lock of 'kn->count' is held in sysfs .show/.store
path. Meantime, inside block's .show/.store callback, q->sysfs_lock is
required.
However, when mq & iosched kobjects are removed via
blk_mq_unregister_dev() & elv_unregister_queue(), q->sysfs_lock is held
too. This causes an AB-BA deadlock, because the kernfs built-in lock of
'kn->count' is also required inside kobject_del(); see the lockdep warning[1].
On the other hand, it isn't necessary to acquire q->sysfs_lock for
both blk_mq_unregister_dev() & elv_unregister_queue() because
clearing REGISTERED flag prevents storing to 'queue/scheduler'
from happening. Also, sysfs write (store) is exclusive, so it isn't
necessary to hold the lock for elv_unregister_queue() when it is
called in the elevator-switching path.
So split .sysfs_lock into two: one is still named as .sysfs_lock for
covering sync .store, the other one is named as .sysfs_dir_lock
for covering kobjects and related status change.
sysfs itself can handle the race between add/remove kobjects and
showing/storing attributes under kobjects. For switching scheduler
via storing to 'queue/scheduler', we use the queue flag of
QUEUE_FLAG_REGISTERED together with .sysfs_lock to avoid the race, so
we can avoid holding .sysfs_lock while removing/adding kobjects.
[1] lockdep warning
======================================================
WARNING: possible circular locking dependency detected
5.3.0-rc3-00044-g73277fc75ea0 #1380 Not tainted
------------------------------------------------------
rmmod/777 is trying to acquire lock:
00000000ac50e981 (kn->count#202){++++}, at: kernfs_remove_by_name_ns+0x59/0x72
but task is already holding lock:
00000000fb16ae21 (&q->sysfs_lock){+.+.}, at: blk_unregister_queue+0x78/0x10b
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&q->sysfs_lock){+.+.}:
__lock_acquire+0x95f/0xa2f
lock_acquire+0x1b4/0x1e8
__mutex_lock+0x14a/0xa9b
blk_mq_hw_sysfs_show+0x63/0xb6
sysfs_kf_seq_show+0x11f/0x196
seq_read+0x2cd/0x5f2
vfs_read+0xc7/0x18c
ksys_read+0xc4/0x13e
do_syscall_64+0xa7/0x295
entry_SYSCALL_64_after_hwframe+0x49/0xbe
-> #0 (kn->count#202){++++}:
check_prev_add+0x5d2/0xc45
validate_chain+0xed3/0xf94
__lock_acquire+0x95f/0xa2f
lock_acquire+0x1b4/0x1e8
__kernfs_remove+0x237/0x40b
kernfs_remove_by_name_ns+0x59/0x72
remove_files+0x61/0x96
sysfs_remove_group+0x81/0xa4
sysfs_remove_groups+0x3b/0x44
kobject_del+0x44/0x94
blk_mq_unregister_dev+0x83/0xdd
blk_unregister_queue+0xa0/0x10b
del_gendisk+0x259/0x3fa
null_del_dev+0x8b/0x1c3 [null_blk]
null_exit+0x5c/0x95 [null_blk]
__se_sys_delete_module+0x204/0x337
do_syscall_64+0xa7/0x295
entry_SYSCALL_64_after_hwframe+0x49/0xbe
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&q->sysfs_lock);
lock(kn->count#202);
lock(&q->sysfs_lock);
lock(kn->count#202);
*** DEADLOCK ***
2 locks held by rmmod/777:
#0: 00000000e69bd9de (&lock){+.+.}, at: null_exit+0x2e/0x95 [null_blk]
#1: 00000000fb16ae21 (&q->sysfs_lock){+.+.}, at: blk_unregister_queue+0x78/0x10b
stack backtrace:
CPU: 0 PID: 777 Comm: rmmod Not tainted 5.3.0-rc3-00044-g73277fc75ea0 #1380
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS ?-20180724_192412-buildhw-07.phx4
Call Trace:
dump_stack+0x9a/0xe6
check_noncircular+0x207/0x251
? print_circular_bug+0x32a/0x32a
? find_usage_backwards+0x84/0xb0
check_prev_add+0x5d2/0xc45
validate_chain+0xed3/0xf94
? check_prev_add+0xc45/0xc45
? mark_lock+0x11b/0x804
? check_usage_forwards+0x1ca/0x1ca
__lock_acquire+0x95f/0xa2f
lock_acquire+0x1b4/0x1e8
? kernfs_remove_by_name_ns+0x59/0x72
__kernfs_remove+0x237/0x40b
? kernfs_remove_by_name_ns+0x59/0x72
? kernfs_next_descendant_post+0x7d/0x7d
? strlen+0x10/0x23
? strcmp+0x22/0x44
kernfs_remove_by_name_ns+0x59/0x72
remove_files+0x61/0x96
sysfs_remove_group+0x81/0xa4
sysfs_remove_groups+0x3b/0x44
kobject_del+0x44/0x94
blk_mq_unregister_dev+0x83/0xdd
blk_unregister_queue+0xa0/0x10b
del_gendisk+0x259/0x3fa
? disk_events_poll_msecs_store+0x12b/0x12b
? check_flags+0x1ea/0x204
? mark_held_locks+0x1f/0x7a
null_del_dev+0x8b/0x1c3 [null_blk]
null_exit+0x5c/0x95 [null_blk]
__se_sys_delete_module+0x204/0x337
? free_module+0x39f/0x39f
? blkcg_maybe_throttle_current+0x8a/0x718
? rwlock_bug+0x62/0x62
? __blkcg_punt_bio_submit+0xd0/0xd0
? trace_hardirqs_on_thunk+0x1a/0x20
? mark_held_locks+0x1f/0x7a
? do_syscall_64+0x4c/0x295
do_syscall_64+0xa7/0x295
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fb696cdbe6b
Code: 73 01 c3 48 8b 0d 1d 20 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 008
RSP: 002b:00007ffec9588788 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
RAX: ffffffffffffffda RBX: 0000559e589137c0 RCX: 00007fb696cdbe6b
RDX: 000000000000000a RSI: 0000000000000800 RDI: 0000559e58913828
RBP: 0000000000000000 R08: 00007ffec9587701 R09: 0000000000000000
R10: 00007fb696d4eae0 R11: 0000000000000206 R12: 00007ffec95889b0
R13: 00007ffec95896b3 R14: 0000559e58913260 R15: 0000559e589137c0
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
        struct mutex sysfs_dir_lock;

blk-mq: always free hctx after request queue is freed
In normal queue cleanup path, hctx is released after request queue
is freed, see blk_mq_release().
However, in __blk_mq_update_nr_hw_queues(), hctx may be freed because
of hw queues shrinking. This way is easy to cause use-after-free,
because: one implicit rule is that it is safe to call almost all block
layer APIs if the request queue is alive; and one hctx may be retrieved
by one API, then the hctx can be freed by blk_mq_update_nr_hw_queues();
finally use-after-free is triggered.
Fix this issue by always freeing hctx after releasing the request queue.
If some hctxs are removed in blk_mq_update_nr_hw_queues(), introduce
a per-queue list to hold them, then try to reuse these hctxs if the numa
node matches.
Cc: Dongli Zhang <dongli.zhang@oracle.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: linux-scsi@vger.kernel.org,
Cc: Martin K . Petersen <martin.petersen@oracle.com>,
Cc: Christoph Hellwig <hch@lst.de>,
Cc: James E . J . Bottomley <jejb@linux.vnet.ibm.com>,
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

        /*
         * for reusing dead hctx instance in case of updating
         * nr_hw_queues
         */
        struct list_head unused_hctx_list;
        spinlock_t unused_hctx_lock;

        int mq_freeze_depth;

#ifdef CONFIG_BLK_DEV_THROTTLING
        /* Throttle data */
        struct throtl_data *td;
#endif
        struct rcu_head rcu_head;
        wait_queue_head_t mq_freeze_wq;
        /*
         * Protect concurrent access to q_usage_counter by
         * percpu_ref_kill() and percpu_ref_reinit().
         */
        struct mutex mq_freeze_lock;

        int quiesce_depth;

        struct blk_mq_tag_set *tag_set;
        struct list_head tag_set_list;

        struct dentry *debugfs_dir;
        struct dentry *sched_debugfs_dir;
        struct dentry *rqos_debugfs_dir;
        /*
         * Serializes all debugfs metadata operations using the above dentries.
         */
        struct mutex debugfs_mutex;

        bool mq_sysfs_init_done;
};

/* Keep blk_queue_flag_name[] in sync with the definitions below */
#define QUEUE_FLAG_STOPPED 0 /* queue is stopped */
#define QUEUE_FLAG_DYING 1 /* queue being torn down */
#define QUEUE_FLAG_NOMERGES 3 /* disable merge attempts */
#define QUEUE_FLAG_SAME_COMP 4 /* complete on same CPU-group */
#define QUEUE_FLAG_FAIL_IO 5 /* fake timeout */
#define QUEUE_FLAG_NONROT 6 /* non-rotational device (SSD) */
#define QUEUE_FLAG_VIRT QUEUE_FLAG_NONROT /* paravirt device */
#define QUEUE_FLAG_IO_STAT 7 /* do disk/partitions IO accounting */
#define QUEUE_FLAG_NOXMERGES 9 /* No extended merges */
#define QUEUE_FLAG_ADD_RANDOM 10 /* Contributes to random pool */
#define QUEUE_FLAG_SAME_FORCE 12 /* force complete on same CPU */
#define QUEUE_FLAG_INIT_DONE 14 /* queue is initialized */
#define QUEUE_FLAG_STABLE_WRITES 15 /* don't modify blks until WB is done */
#define QUEUE_FLAG_POLL 16 /* IO polling enabled if set */
#define QUEUE_FLAG_WC 17 /* Write back caching */
#define QUEUE_FLAG_FUA 18 /* device supports FUA writes */
#define QUEUE_FLAG_DAX 19 /* device supports DAX */
#define QUEUE_FLAG_STATS 20 /* track IO start and completion times */
#define QUEUE_FLAG_REGISTERED 22 /* queue has been registered to a disk */
#define QUEUE_FLAG_QUIESCED 24 /* queue has been quiesced */
#define QUEUE_FLAG_PCI_P2PDMA 25 /* device supports PCI p2p requests */
#define QUEUE_FLAG_ZONE_RESETALL 26 /* supports Zone Reset All */
#define QUEUE_FLAG_RQ_ALLOC_TIME 27 /* record rq->alloc_time_ns */
#define QUEUE_FLAG_HCTX_ACTIVE 28 /* at least one blk-mq hctx is active */
#define QUEUE_FLAG_NOWAIT 29 /* device supports NOWAIT */
#define QUEUE_FLAG_SQ_SCHED 30 /* single queue style io dispatch */
#define QUEUE_FLAG_SKIP_TAGSET_QUIESCE 31 /* quiesce_tagset skip the queue */

#define QUEUE_FLAG_MQ_DEFAULT ((1UL << QUEUE_FLAG_IO_STAT) | \
                               (1UL << QUEUE_FLAG_SAME_COMP) | \
                               (1UL << QUEUE_FLAG_NOWAIT))

void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);

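The flag setters declared above and the test_bit() wrappers that follow are the two halves of one interface. A hypothetical driver-side sketch (my_* name invented) using only helpers declared in this header:

#include <linux/blkdev.h>

/* Hypothetical driver snippet: advertise an SSD-like device. */
static void my_mark_queue_nonrot(struct request_queue *q)
{
        blk_queue_flag_set(QUEUE_FLAG_NONROT, q);       /* no seek penalty */
        blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); /* don't feed the entropy pool */

        if (blk_queue_nonrot(q))
                pr_info("queue marked non-rotational\n");
}
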
#define blk_queue_stopped(q) test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
#define blk_queue_dying(q) test_bit(QUEUE_FLAG_DYING, &(q)->queue_flags)
#define blk_queue_init_done(q) test_bit(QUEUE_FLAG_INIT_DONE, &(q)->queue_flags)
#define blk_queue_nomerges(q) test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
#define blk_queue_noxmerges(q) \
        test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
#define blk_queue_nonrot(q) test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
#define blk_queue_stable_writes(q) \
        test_bit(QUEUE_FLAG_STABLE_WRITES, &(q)->queue_flags)
#define blk_queue_io_stat(q) test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
#define blk_queue_add_random(q) test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
#define blk_queue_zone_resetall(q) \
        test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
#define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
#define blk_queue_pci_p2pdma(q) \
        test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
#ifdef CONFIG_BLK_RQ_ALLOC_TIME
#define blk_queue_rq_alloc_time(q) \
        test_bit(QUEUE_FLAG_RQ_ALLOC_TIME, &(q)->queue_flags)
#else
#define blk_queue_rq_alloc_time(q) false
#endif

#define blk_noretry_request(rq) \
        ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
                            REQ_FAILFAST_DRIVER))
#define blk_queue_quiesced(q) test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
#define blk_queue_pm_only(q) atomic_read(&(q)->pm_only)
#define blk_queue_registered(q) test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
#define blk_queue_sq_sched(q) test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
#define blk_queue_skip_tagset_quiesce(q) \
        test_bit(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, &(q)->queue_flags)

extern void blk_set_pm_only(struct request_queue *q);
extern void blk_clear_pm_only(struct request_queue *q);

#define list_entry_rq(ptr) list_entry((ptr), struct request, queuelist)

#define dma_map_bvec(dev, bv, dir, attrs) \
        dma_map_page_attrs(dev, (bv)->bv_page, (bv)->bv_offset, (bv)->bv_len, \
        (dir), (attrs))

block: remove dead elevator code
This removes a bunch of core and elevator related code. On the core
front, we remove anything related to queue running, draining,
initialization, plugging, and congestion. We also kill anything
related to request allocation, merging, retrieval, and completion.
Remove any checking for single queue IO schedulers, as they no
longer exist. This means we can also delete a bunch of code related
to request issue, adding, completion, etc - and all the SQ related
ops and helpers.
Also kill the load_default_modules(), as all that did was provide
for a way to load the default single queue elevator.
Tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
static inline bool queue_is_mq(struct request_queue *q)
{
        return q->mq_ops;
}

#ifdef CONFIG_PM
static inline enum rpm_status queue_rpm_status(struct request_queue *q)
{
        return q->rpm_status;
}
#else
static inline enum rpm_status queue_rpm_status(struct request_queue *q)
{
        return RPM_ACTIVE;
}
#endif

static inline enum blk_zoned_model
blk_queue_zoned_model(struct request_queue *q)
{
        if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
                return q->limits.zoned;
        return BLK_ZONED_NONE;
}

static inline bool blk_queue_is_zoned(struct request_queue *q)
{
        switch (blk_queue_zoned_model(q)) {
        case BLK_ZONED_HA:
        case BLK_ZONED_HM:
                return true;
        default:
                return false;
        }
}

#ifdef CONFIG_BLK_DEV_ZONED
static inline unsigned int disk_nr_zones(struct gendisk *disk)
{
        return blk_queue_is_zoned(disk->queue) ? disk->nr_zones : 0;
}

static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector)
{
        if (!blk_queue_is_zoned(disk->queue))
                return 0;
        return sector >> ilog2(disk->queue->limits.chunk_sectors);
}

static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector)
{
        if (!blk_queue_is_zoned(disk->queue))
                return false;
        if (!disk->conv_zones_bitmap)
                return true;
        return !test_bit(disk_zone_no(disk, sector), disk->conv_zones_bitmap);
}

static inline void disk_set_max_open_zones(struct gendisk *disk,
                unsigned int max_open_zones)
{
        disk->max_open_zones = max_open_zones;
}

static inline void disk_set_max_active_zones(struct gendisk *disk,
                unsigned int max_active_zones)
{
        disk->max_active_zones = max_active_zones;
}

static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
{
        return bdev->bd_disk->max_open_zones;
}

static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
{
        return bdev->bd_disk->max_active_zones;
}

#else /* CONFIG_BLK_DEV_ZONED */
static inline unsigned int disk_nr_zones(struct gendisk *disk)
{
        return 0;
}
static inline bool disk_zone_is_seq(struct gendisk *disk, sector_t sector)
{
        return false;
}
static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector)
{
        return 0;
}
static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
{
        return 0;
}
static inline unsigned int bdev_max_active_zones(struct block_device *bdev)
{
        return 0;
}
#endif /* CONFIG_BLK_DEV_ZONED */

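disk_zone_no() above maps a sector to a zone index with a shift, relying on chunk_sectors (the zone size) being a power of two. A runnable userspace illustration of the same arithmetic, with made-up values:

#include <stdio.h>

/* Same arithmetic as disk_zone_no(): zone index = sector >> ilog2(zone_size). */
static unsigned int ilog2_u64(unsigned long long v)
{
        unsigned int r = 0;

        while (v >>= 1)
                r++;
        return r;
}

int main(void)
{
        unsigned long long zone_sectors = 1ULL << 19;   /* 256 MiB zones of 512 B sectors */
        unsigned long long sector = 3ULL * zone_sectors + 1234;

        printf("sector %llu -> zone %llu\n", sector,
               sector >> ilog2_u64(zone_sectors));      /* prints zone 3 */
        return 0;
}
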
static inline unsigned int blk_queue_depth(struct request_queue *q)
{
        if (q->queue_depth)
                return q->queue_depth;

        return q->nr_requests;
}

/*
 * default timeout for SG_IO if none specified
 */
#define BLK_DEFAULT_SG_TIMEOUT (60 * HZ)
#define BLK_MIN_SG_TIMEOUT (7 * HZ)

/* This should not be used directly - use rq_for_each_segment */
#define for_each_bio(_bio) \
        for (; _bio; _bio = _bio->bi_next)

int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
                const struct attribute_group **groups);
static inline int __must_check add_disk(struct gendisk *disk)
{
        return device_add_disk(NULL, disk, NULL);
}
void del_gendisk(struct gendisk *gp);
void invalidate_disk(struct gendisk *disk);
void set_disk_ro(struct gendisk *disk, bool read_only);
void disk_uevent(struct gendisk *disk, enum kobject_action action);

static inline int get_disk_ro(struct gendisk *disk)
{
        return disk->part0->bd_read_only ||
                test_bit(GD_READ_ONLY, &disk->state);
}

static inline int bdev_read_only(struct block_device *bdev)
{
        return bdev->bd_read_only || get_disk_ro(bdev->bd_disk);
}

bool set_capacity_and_notify(struct gendisk *disk, sector_t size);
bool disk_force_media_change(struct gendisk *disk, unsigned int events);

void add_disk_randomness(struct gendisk *disk) __latent_entropy;
void rand_initialize_disk(struct gendisk *disk);

static inline sector_t get_start_sect(struct block_device *bdev)
{
        return bdev->bd_start_sect;
}

static inline sector_t bdev_nr_sectors(struct block_device *bdev)
{
        return bdev->bd_nr_sectors;
}

static inline loff_t bdev_nr_bytes(struct block_device *bdev)
{
        return (loff_t)bdev_nr_sectors(bdev) << SECTOR_SHIFT;
}

static inline sector_t get_capacity(struct gendisk *disk)
{
        return bdev_nr_sectors(disk->part0);
}

static inline u64 sb_bdev_nr_blocks(struct super_block *sb)
{
        return bdev_nr_sectors(sb->s_bdev) >>
                (sb->s_blocksize_bits - SECTOR_SHIFT);
}

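bdev_nr_bytes() and sb_bdev_nr_blocks() above are plain shifts around the block layer's fixed 512-byte sector. A runnable illustration (the 4 KiB filesystem block size is an assumed example):

#include <stdio.h>

#define SECTOR_SHIFT 9  /* 512-byte sectors, as in the block layer */

int main(void)
{
        unsigned long long nr_sectors = 2097152;        /* example: a 1 GiB device */
        unsigned int blocksize_bits = 12;               /* assumed 4 KiB fs blocks */

        unsigned long long bytes = nr_sectors << SECTOR_SHIFT;
        unsigned long long fs_blocks = nr_sectors >> (blocksize_bits - SECTOR_SHIFT);

        printf("%llu sectors = %llu bytes = %llu 4KiB blocks\n",
               nr_sectors, bytes, fs_blocks);   /* 2097152, 1073741824, 262144 */
        return 0;
}
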
int bdev_disk_changed(struct gendisk *disk, bool invalidate);

void put_disk(struct gendisk *disk);
struct gendisk *__blk_alloc_disk(int node, struct lock_class_key *lkclass);

/**
 * blk_alloc_disk - allocate a gendisk structure
 * @node_id: numa node to allocate on
 *
 * Allocate and pre-initialize a gendisk structure for use with BIO based
 * drivers.
 *
 * Context: can sleep
 */
#define blk_alloc_disk(node_id)                         \
({                                                      \
        static struct lock_class_key __key;             \
                                                        \
        __blk_alloc_disk(node_id, &__key);              \
})

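Putting the pieces above together, a hypothetical BIO-based driver would allocate a disk, describe it, and register it roughly as sketched below (my_* names are invented; error handling and the real I/O path are omitted):

#include <linux/blkdev.h>
#include <linux/module.h>

static int my_major;                    /* obtained via register_blkdev(), declared below */
static struct gendisk *my_disk;

static void my_submit_bio(struct bio *bio)
{
        /* ... service the bio against the driver's backing store ... */
        bio_endio(bio);
}

static const struct block_device_operations my_fops = {
        .owner          = THIS_MODULE,
        .submit_bio     = my_submit_bio,
};

static int my_probe(void)
{
        my_disk = blk_alloc_disk(NUMA_NO_NODE);
        if (!my_disk)
                return -ENOMEM;

        snprintf(my_disk->disk_name, sizeof(my_disk->disk_name), "mydisk0");
        my_disk->major = my_major;
        my_disk->first_minor = 0;
        my_disk->minors = 1;
        my_disk->fops = &my_fops;
        set_capacity(my_disk, 2097152); /* 1 GiB in 512-byte sectors */
        return add_disk(my_disk);       /* teardown: del_gendisk() then put_disk() */
}
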
int __register_blkdev(unsigned int major, const char *name,
                void (*probe)(dev_t devt));
#define register_blkdev(major, name) \
        __register_blkdev(major, name, NULL)
void unregister_blkdev(unsigned int major, const char *name);

bool bdev_check_media_change(struct block_device *bdev);
int __invalidate_device(struct block_device *bdev, bool kill_dirty);
void set_capacity(struct gendisk *disk, sector_t size);

#ifdef CONFIG_BLOCK_HOLDER_DEPRECATED
int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk);
void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk);
#else
static inline int bd_link_disk_holder(struct block_device *bdev,
                struct gendisk *disk)
{
        return 0;
}
static inline void bd_unlink_disk_holder(struct block_device *bdev,
                struct gendisk *disk)
{
}
#endif /* CONFIG_BLOCK_HOLDER_DEPRECATED */

dev_t part_devt(struct gendisk *disk, u8 partno);
void inc_diskseq(struct gendisk *disk);
dev_t blk_lookup_devt(const char *name, int partno);
void blk_request_module(dev_t devt);

extern int blk_register_queue(struct gendisk *disk);
extern void blk_unregister_queue(struct gendisk *disk);
void submit_bio_noacct(struct bio *bio);
struct bio *bio_split_to_limits(struct bio *bio);

extern int blk_lld_busy(struct request_queue *q);
extern int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags);
extern void blk_queue_exit(struct request_queue *q);
extern void blk_sync_queue(struct request_queue *q);

/* Helper to convert REQ_OP_XXX to its string format XXX */
extern const char *blk_op_str(enum req_op op);

int blk_status_to_errno(blk_status_t status);
blk_status_t errno_to_blk_status(int errno);

/* only poll the hardware once, don't continue until a completion was found */
#define BLK_POLL_ONESHOT (1 << 0)
/* do not sleep to wait for the expected completion time */
#define BLK_POLL_NOSLEEP (1 << 1)
int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags);
int iocb_bio_iopoll(struct kiocb *kiocb, struct io_comp_batch *iob,
                unsigned int flags);

static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
{
        return bdev->bd_queue;  /* this is never NULL */
}

/* Helper to convert BLK_ZONE_COND_XXX to its string format XXX */
const char *blk_zone_cond_str(enum blk_zone_cond zone_cond);

static inline unsigned int bio_zone_no(struct bio *bio)
{
        return disk_zone_no(bio->bi_bdev->bd_disk, bio->bi_iter.bi_sector);
}

static inline unsigned int bio_zone_is_seq(struct bio *bio)
{
        return disk_zone_is_seq(bio->bi_bdev->bd_disk, bio->bi_iter.bi_sector);
}

/*
 * Return how much of the chunk is left to be used for I/O at a given offset.
 */
static inline unsigned int blk_chunk_sectors_left(sector_t offset,
                unsigned int chunk_sectors)
{
        if (unlikely(!is_power_of_2(chunk_sectors)))
                return chunk_sectors - sector_div(offset, chunk_sectors);
        return chunk_sectors - (offset & (chunk_sectors - 1));
}

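blk_chunk_sectors_left() above takes the mask shortcut for power-of-two chunks and falls back to a division otherwise. A runnable userspace version of the same arithmetic, with made-up values:

#include <stdio.h>

/* How many sectors remain before the next chunk boundary. */
static unsigned int chunk_sectors_left(unsigned long long offset,
                                       unsigned int chunk_sectors)
{
        /* power-of-two chunks can use a mask instead of a division */
        if (chunk_sectors & (chunk_sectors - 1))
                return chunk_sectors - (unsigned int)(offset % chunk_sectors);
        return chunk_sectors - (unsigned int)(offset & (chunk_sectors - 1));
}

int main(void)
{
        /* 128-sector chunks (power of two): offset 300 is 44 sectors into chunk 2 */
        printf("%u\n", chunk_sectors_left(300, 128));   /* prints 84 */
        /* 96-sector chunks (not a power of two): offset 300 is 12 into chunk 3 */
        printf("%u\n", chunk_sectors_left(300, 96));    /* prints 84 */
        return 0;
}
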
2005-04-17 06:20:36 +08:00
|
|
|
/*
 * Access functions for manipulating queue properties
 */
void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
extern void blk_queue_max_segments(struct request_queue *, unsigned short);
extern void blk_queue_max_discard_segments(struct request_queue *,
		unsigned short);
void blk_queue_max_secure_erase_sectors(struct request_queue *q,
		unsigned int max_sectors);
extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
extern void blk_queue_max_discard_sectors(struct request_queue *q,
		unsigned int max_discard_sectors);
extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
		unsigned int max_write_same_sectors);
extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
		unsigned int max_zone_append_sectors);
extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);
void blk_queue_zone_write_granularity(struct request_queue *q,
		unsigned int size);
extern void blk_queue_alignment_offset(struct request_queue *q,
		unsigned int alignment);
void disk_update_readahead(struct gendisk *disk);
extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
extern void blk_queue_io_min(struct request_queue *q, unsigned int min);
extern void blk_limits_io_opt(struct queue_limits *limits, unsigned int opt);
extern void blk_queue_io_opt(struct request_queue *q, unsigned int opt);
extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth);
extern void blk_set_stacking_limits(struct queue_limits *lim);
extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
		sector_t offset);
extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
		sector_t offset);
extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
extern void blk_queue_virt_boundary(struct request_queue *, unsigned long);
extern void blk_queue_dma_alignment(struct request_queue *, int);
extern void blk_queue_update_dma_alignment(struct request_queue *, int);
extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
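The setters above are normally called once by a driver while configuring its queue. A minimal sketch of that pattern follows; example_init_queue() and the specific limit values are assumptions for illustration, not taken from any real driver.

#include <linux/blkdev.h>

/* Hypothetical driver setup: advertise the device's transfer limits. */
static void example_init_queue(struct request_queue *q)
{
	blk_queue_logical_block_size(q, 4096);	/* 4 KiB logical blocks */
	blk_queue_physical_block_size(q, 4096);
	blk_queue_max_hw_sectors(q, 1024);	/* at most 512 KiB per request */
	blk_queue_max_segments(q, 128);
	blk_queue_io_min(q, 4096);
	blk_queue_io_opt(q, 1024 * 1024);	/* prefer 1 MiB I/O */
}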
block: Add independent access ranges support
The Concurrent Positioning Ranges VPD page (for SCSI) and data log page
(for ATA) contain parameters describing the set of contiguous LBAs that
can be served independently by a single LUN multi-actuator hard-disk.
Similarly, a logically defined block device composed of multiple disks
can in some cases execute requests directed at different sector ranges
in parallel. A dm-linear device aggregating 2 block devices together is
an example.
This patch implements support for exposing a block device's independent
access ranges to the user through sysfs, allowing device accesses to be
optimized for better performance.
To describe the set of independent sector ranges of a device (actuators
of a multi-actuator HDD or table entries of a dm-linear device),
the type struct blk_independent_access_ranges is introduced. This
structure describes the sector ranges using an array of
struct blk_independent_access_range structures. Each range structure
defines the start sector and number of sectors of the access range.
The ranges in the array cannot overlap and must contain all sectors
within the device capacity.
The function disk_set_independent_access_ranges() allows a device
driver to signal to the block layer that a device has multiple
independent access ranges. In this case, a struct
blk_independent_access_ranges is attached to the device request queue
by the function disk_set_independent_access_ranges(). The function
disk_alloc_independent_access_ranges() is provided for drivers to
allocate this structure.
struct blk_independent_access_ranges contains kobjects (struct kobject)
to expose to the user through sysfs the set of independent access ranges
supported by a device. When the device is initialized, sysfs
registration of the ranges information is done from blk_register_queue()
using the block layer internal function
disk_register_independent_access_ranges(). If a driver calls
disk_set_independent_access_ranges() for a registered queue, e.g. when a
device is revalidated, disk_set_independent_access_ranges() will execute
disk_register_independent_access_ranges() to update the sysfs attribute
files. The sysfs file structure created starts from the
independent_access_ranges sub-directory and contains the start sector
and number of sectors of each range, with the information for each range
grouped in numbered sub-directories.
E.g. for a dual actuator HDD, the user sees:
$ tree /sys/block/sdk/queue/independent_access_ranges/
/sys/block/sdk/queue/independent_access_ranges/
|-- 0
| |-- nr_sectors
| `-- sector
`-- 1
|-- nr_sectors
`-- sector
For a regular device with a single access range, the
independent_access_ranges sysfs directory does not exist.
Device revalidation may lead to changes to this structure and to the
attribute values. When manipulated, the queue sysfs_lock and
sysfs_dir_lock mutexes are held for atomicity, similarly to how the
blk-mq and elevator sysfs queue sub-directories are protected.
The code related to the management of independent access ranges is
added in the new file block/blk-ia-ranges.c.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20211027022223.183838-2-damien.lemoal@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
struct blk_independent_access_ranges *
disk_alloc_independent_access_ranges(struct gendisk *disk, int nr_ia_ranges);
void disk_set_independent_access_ranges(struct gendisk *disk,
				struct blk_independent_access_ranges *iars);
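Tying the commit message above to the two declarations just listed, a driver for a dual-actuator disk might publish its two ranges roughly as sketched below. The function name, the value of nr_actuators and the ia_range[].sector / ia_range[].nr_sectors field names are assumptions for illustration.

#include <linux/blkdev.h>

/* Hypothetical: expose two equally sized actuator ranges for 'disk'. */
static int example_set_ia_ranges(struct gendisk *disk, sector_t capacity)
{
	struct blk_independent_access_ranges *iars;
	int nr_actuators = 2;

	iars = disk_alloc_independent_access_ranges(disk, nr_actuators);
	if (!iars)
		return -ENOMEM;

	/* Ranges must not overlap and must cover the whole capacity. */
	iars->ia_range[0].sector = 0;
	iars->ia_range[0].nr_sectors = capacity / 2;
	iars->ia_range[1].sector = capacity / 2;
	iars->ia_range[1].nr_sectors = capacity - capacity / 2;

	disk_set_independent_access_ranges(disk, iars);
	return 0;
}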
/*
 * Elevator features for blk_queue_required_elevator_features:
 */

/* Supports zoned block devices sequential write constraint */
#define ELEVATOR_F_ZBD_SEQ_WRITE	(1U << 0)

extern void blk_queue_required_elevator_features(struct request_queue *q,
						 unsigned int features);
extern bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
					      struct device *dev);

bool __must_check blk_get_queue(struct request_queue *);
extern void blk_put_queue(struct request_queue *);

void blk_mark_disk_dead(struct gendisk *disk);

#ifdef CONFIG_BLOCK
/*
 * blk_plug permits building a queue of related requests by holding the I/O
 * fragments for a short period. This allows merging of sequential requests
 * into a single larger request. As the requests are moved from a per-task
 * list to the device's request_queue in a batch, this results in improved
 * scalability as contention on the request_queue lock is reduced.
 *
 * It is ok not to disable preemption when adding the request to the plug list
 * or when attempting a merge. For details, please see schedule() where
 * blk_flush_plug() is called.
 */
struct blk_plug {
	struct request *mq_list; /* blk-mq requests */

	/* if ios_left is > 1, we can batch tag/rq allocations */
	struct request *cached_rq;
	unsigned short nr_ios;

	unsigned short rq_count;

	bool multiple_queues;
	bool has_elevator;
	bool nowait;

	struct list_head cb_list; /* md requires an unplug callback */
};
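The comment above describes when plugging helps; the usual calling pattern is a start/finish pair around a batch of submissions, as in the minimal sketch below. example_plugged_submit() and its bio array are hypothetical.

#include <linux/blkdev.h>

/* Hypothetical: batch a series of related submissions under one plug. */
static void example_plugged_submit(struct bio *bios[], int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);	/* requests queue up on the per-task list */
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]);
	blk_finish_plug(&plug);	/* flush the batch to the request_queue */
}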
struct blk_plug_cb;
typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
struct blk_plug_cb {
	struct list_head list;
	blk_plug_cb_fn callback;
	void *data;
};
extern struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug,
					     void *data, int size);
extern void blk_start_plug(struct blk_plug *);
extern void blk_start_plug_nr_ios(struct blk_plug *, unsigned short);
extern void blk_finish_plug(struct blk_plug *);

void __blk_flush_plug(struct blk_plug *plug, bool from_schedule);
static inline void blk_flush_plug(struct blk_plug *plug, bool async)
{
	if (plug)
		__blk_flush_plug(plug, async);
}

int blkdev_issue_flush(struct block_device *bdev);
long nr_blockdev_pages(void);
#else /* CONFIG_BLOCK */
struct blk_plug {
};

static inline void blk_start_plug_nr_ios(struct blk_plug *plug,
					 unsigned short nr_ios)
{
}

static inline void blk_start_plug(struct blk_plug *plug)
{
}

static inline void blk_finish_plug(struct blk_plug *plug)
{
}

static inline void blk_flush_plug(struct blk_plug *plug, bool async)
{
}

static inline int blkdev_issue_flush(struct block_device *bdev)
{
	return 0;
}

static inline long nr_blockdev_pages(void)
{
	return 0;
}
#endif /* CONFIG_BLOCK */

extern void blk_io_schedule(void);
int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp_mask);
int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop);
int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp);

#define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
#define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */

extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
		unsigned flags);
extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp_mask, unsigned flags);

static inline int sb_issue_discard(struct super_block *sb, sector_t block,
		sector_t nr_blocks, gfp_t gfp_mask, unsigned long flags)
{
	return blkdev_issue_discard(sb->s_bdev,
				    block << (sb->s_blocksize_bits -
					      SECTOR_SHIFT),
				    nr_blocks << (sb->s_blocksize_bits -
						  SECTOR_SHIFT),
				    gfp_mask);
}
static inline int sb_issue_zeroout(struct super_block *sb, sector_t block,
		sector_t nr_blocks, gfp_t gfp_mask)
{
	return blkdev_issue_zeroout(sb->s_bdev,
				    block << (sb->s_blocksize_bits -
					      SECTOR_SHIFT),
				    nr_blocks << (sb->s_blocksize_bits -
						  SECTOR_SHIFT),
				    gfp_mask, 0);
}

static inline bool bdev_is_partition(struct block_device *bdev)
{
	return bdev->bd_partno;
}
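The two super_block helpers above convert filesystem blocks to 512-byte sectors by shifting with (s_blocksize_bits - SECTOR_SHIFT). A small worked example follows, assuming a 4 KiB filesystem block size; the helper and the block numbers are illustrative only.

#include <linux/blkdev.h>

/*
 * Hypothetical numbers: with a 4 KiB block size, s_blocksize_bits = 12 and
 * SECTOR_SHIFT = 9, so filesystem block N starts at sector N << 3.
 */
static int example_discard_fs_blocks(struct super_block *sb)
{
	/* Discard filesystem blocks 100..103 (4 blocks = 32 sectors). */
	return sb_issue_discard(sb, 100, 4, GFP_NOFS, 0);
}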
enum blk_default_limits {
	BLK_MAX_SEGMENTS	= 128,
	BLK_SAFE_MAX_SECTORS	= 255,
	BLK_DEF_MAX_SECTORS	= 2560,
	BLK_MAX_SEGMENT_SIZE	= 65536,
	BLK_SEG_BOUNDARY_MASK	= 0xFFFFFFFFUL,
};

static inline unsigned long queue_segment_boundary(const struct request_queue *q)
{
	return q->limits.seg_boundary_mask;
}

static inline unsigned long queue_virt_boundary(const struct request_queue *q)
{
	return q->limits.virt_boundary_mask;
}

static inline unsigned int queue_max_sectors(const struct request_queue *q)
{
	return q->limits.max_sectors;
}

static inline unsigned int queue_max_bytes(struct request_queue *q)
{
	return min_t(unsigned int, queue_max_sectors(q), INT_MAX >> 9) << 9;
}

static inline unsigned int queue_max_hw_sectors(const struct request_queue *q)
{
	return q->limits.max_hw_sectors;
}

static inline unsigned short queue_max_segments(const struct request_queue *q)
{
	return q->limits.max_segments;
}

static inline unsigned short queue_max_discard_segments(const struct request_queue *q)
{
	return q->limits.max_discard_segments;
}

static inline unsigned int queue_max_segment_size(const struct request_queue *q)
{
	return q->limits.max_segment_size;
}

static inline unsigned int queue_max_zone_append_sectors(const struct request_queue *q)
{
	const struct queue_limits *l = &q->limits;

	return min(l->max_zone_append_sectors, l->max_sectors);
}

static inline unsigned int
bdev_max_zone_append_sectors(struct block_device *bdev)
{
	return queue_max_zone_append_sectors(bdev_get_queue(bdev));
}

static inline unsigned int bdev_max_segments(struct block_device *bdev)
{
	return queue_max_segments(bdev_get_queue(bdev));
}
static inline unsigned queue_logical_block_size(const struct request_queue *q)
{
	int retval = 512;

	if (q && q->limits.logical_block_size)
		retval = q->limits.logical_block_size;

	return retval;
}

static inline unsigned int bdev_logical_block_size(struct block_device *bdev)
{
	return queue_logical_block_size(bdev_get_queue(bdev));
}

static inline unsigned int queue_physical_block_size(const struct request_queue *q)
{
	return q->limits.physical_block_size;
}

static inline unsigned int bdev_physical_block_size(struct block_device *bdev)
{
	return queue_physical_block_size(bdev_get_queue(bdev));
}

static inline unsigned int queue_io_min(const struct request_queue *q)
{
	return q->limits.io_min;
}

static inline int bdev_io_min(struct block_device *bdev)
{
	return queue_io_min(bdev_get_queue(bdev));
}

static inline unsigned int queue_io_opt(const struct request_queue *q)
{
	return q->limits.io_opt;
}

static inline int bdev_io_opt(struct block_device *bdev)
{
	return queue_io_opt(bdev_get_queue(bdev));
}

static inline unsigned int
queue_zone_write_granularity(const struct request_queue *q)
{
	return q->limits.zone_write_granularity;
}

static inline unsigned int
bdev_zone_write_granularity(struct block_device *bdev)
{
	return queue_zone_write_granularity(bdev_get_queue(bdev));
}

int bdev_alignment_offset(struct block_device *bdev);
unsigned int bdev_discard_alignment(struct block_device *bdev);
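Callers typically read limits through the bdev_* wrappers above rather than reaching into queue_limits directly. A minimal sketch of sizing I/O from the reported values; the helper name and the fallback policy are assumptions.

#include <linux/blkdev.h>

/* Hypothetical: pick an I/O size aligned to the device's preferences. */
static unsigned int example_pick_io_size(struct block_device *bdev)
{
	unsigned int lbs = bdev_logical_block_size(bdev);
	unsigned int opt = bdev_io_opt(bdev);	/* 0 if the device reports none */

	return opt ? opt : lbs;
}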
block: split discard into aligned requests
When a disk has large discard_granularity and small max_discard_sectors,
discards are not split with optimal alignment. In the limit case of
discard_granularity == max_discard_sectors, no request could be aligned
correctly, so in fact you might end up with no discarded logical blocks
at all.
Another example that helps showing the condition in the patch is with
discard_granularity == 64, max_discard_sectors == 128. A request that is
submitted for 256 sectors 2..257 will be split in two: 2..129, 130..257.
However, only 2 aligned blocks out of 3 are included in the request;
128..191 may be left intact and not discarded. With this patch, the
first request will be truncated to ensure good alignment of what's left,
and the split will be 2..127, 128..255, 256..257. The patch will also
take into account the discard_alignment.
At most one extra request will be introduced, because the first request
will be reduced by at most granularity-1 sectors, and granularity
must be less than max_discard_sectors. Subsequent requests will run
on round_down(max_discard_sectors, granularity) sectors, as in the
current code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Tested-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
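To make the arithmetic in the commit message above concrete, here is a small sketch that reproduces its example (granularity 64, max_discard_sectors 128, a request covering sectors 2..257). This is purely illustrative and is not the kernel's actual splitting code.

#include <linux/blkdev.h>

/* Illustrative only: the first chunk is truncated so later chunks align. */
static void example_discard_split(void)
{
	unsigned int granularity = 64, max_discard = 128;
	sector_t start = 2, end = 257;		/* inclusive range, as in the example */
	sector_t pos = start;

	while (pos <= end) {
		sector_t next = min_t(sector_t, end + 1,
				      round_down(pos + max_discard, granularity));

		if (next <= pos)	/* guard; cannot trigger when granularity <= max_discard */
			next = min_t(sector_t, end + 1, pos + max_discard);

		/* prints 2..127, 128..255, 256..257 for the values above */
		pr_info("discard chunk %llu..%llu\n",
			(unsigned long long)pos, (unsigned long long)(next - 1));
		pos = next;
	}
}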
static inline unsigned int bdev_max_discard_sectors(struct block_device *bdev)
{
	return bdev_get_queue(bdev)->limits.max_discard_sectors;
}

static inline unsigned int bdev_discard_granularity(struct block_device *bdev)
{
	return bdev_get_queue(bdev)->limits.discard_granularity;
}

static inline unsigned int
bdev_max_secure_erase_sectors(struct block_device *bdev)
{
	return bdev_get_queue(bdev)->limits.max_secure_erase_sectors;
}

static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (q)
		return q->limits.max_write_zeroes_sectors;

	return 0;
}

static inline bool bdev_nonrot(struct block_device *bdev)
{
	return blk_queue_nonrot(bdev_get_queue(bdev));
}

static inline bool bdev_stable_writes(struct block_device *bdev)
{
	return test_bit(QUEUE_FLAG_STABLE_WRITES,
			&bdev_get_queue(bdev)->queue_flags);
}

static inline bool bdev_write_cache(struct block_device *bdev)
{
	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
}

static inline bool bdev_fua(struct block_device *bdev)
{
	return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
}

static inline bool bdev_nowait(struct block_device *bdev)
{
	return test_bit(QUEUE_FLAG_NOWAIT, &bdev_get_queue(bdev)->queue_flags);
}
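The cache-related predicates above are what a stacking or filesystem layer consults before deciding whether an explicit flush is needed; a minimal, hypothetical sketch:

#include <linux/blkdev.h>

/* Hypothetical: does durable completion require an explicit flush? */
static bool example_needs_flush(struct block_device *bdev)
{
	/* Volatile write cache without FUA support => issue a flush/preflush. */
	return bdev_write_cache(bdev) && !bdev_fua(bdev);
}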
static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (q)
		return blk_queue_zoned_model(q);

	return BLK_ZONED_NONE;
}

static inline bool bdev_is_zoned(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (q)
		return blk_queue_is_zoned(q);

	return false;
}

static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
					  blk_opf_t op)
{
	if (!bdev_is_zoned(bdev))
		return false;

	return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
}

static inline sector_t bdev_zone_sectors(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (!blk_queue_is_zoned(q))
		return 0;
	return q->limits.chunk_sectors;
}
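Building on the zone helpers above, a caller could locate the start of the zone containing a given sector as sketched below. The helper is illustrative and assumes, as the block layer does for zoned devices, that the zone size is a power of two.

#include <linux/blkdev.h>

/* Hypothetical: sector of the start of the zone containing 'sector'. */
static sector_t example_zone_start(struct block_device *bdev, sector_t sector)
{
	sector_t zone_sectors = bdev_zone_sectors(bdev);

	if (!bdev_is_zoned(bdev) || !zone_sectors)
		return 0;

	return sector & ~(zone_sectors - 1);	/* round down to zone boundary */
}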
static inline int queue_dma_alignment(const struct request_queue *q)
{
	return q ? q->limits.dma_alignment : 511;
}

static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
{
	return queue_dma_alignment(bdev_get_queue(bdev));
}

static inline bool bdev_iter_is_aligned(struct block_device *bdev,
					struct iov_iter *iter)
{
	return iov_iter_is_aligned(iter, bdev_dma_alignment(bdev),
				   bdev_logical_block_size(bdev) - 1);
}

static inline int blk_rq_aligned(struct request_queue *q, unsigned long addr,
				 unsigned int len)
{
	unsigned int alignment = queue_dma_alignment(q) | q->dma_pad_mask;
	return !(addr & alignment) && !(len & alignment);
}

/* assumes size > 256 */
static inline unsigned int blksize_bits(unsigned int size)
{
	return order_base_2(size >> SECTOR_SHIFT) + SECTOR_SHIFT;
}
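blksize_bits() above maps a block size to its log2. A couple of illustrative values, written as a throwaway check; the function is a sketch only.

#include <linux/blkdev.h>

/* Illustrative: 512 -> 9, 1024 -> 10, 4096 -> 12. */
static void example_blksize_bits(void)
{
	/* order_base_2(4096 >> SECTOR_SHIFT) == 3, plus SECTOR_SHIFT == 9 */
	WARN_ON(blksize_bits(4096) != 12);
	WARN_ON(blksize_bits(512) != SECTOR_SHIFT);
}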
static inline unsigned int block_size(struct block_device *bdev)
{
	return 1 << bdev->bd_inode->i_blkbits;
}

int kblockd_schedule_work(struct work_struct *work);
int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay);

#define MODULE_ALIAS_BLOCKDEV(major,minor) \
	MODULE_ALIAS("block-major-" __stringify(major) "-" __stringify(minor))
#define MODULE_ALIAS_BLOCKDEV_MAJOR(major) \
	MODULE_ALIAS("block-major-" __stringify(major) "-*")
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
blk-crypto: rename blk_keyslot_manager to blk_crypto_profile
blk_keyslot_manager is misnamed because it doesn't necessarily manage
keyslots. It actually does several different things:
- Contains the crypto capabilities of the device.
- Provides functions to control the inline encryption hardware.
Originally these were just for programming/evicting keyslots;
however, new functionality (hardware-wrapped keys) will require new
functions here which are unrelated to keyslots. Moreover,
device-mapper devices already (ab)use "keyslot_evict" to pass key
eviction requests to their underlying devices even though
device-mapper devices don't have any keyslots themselves (so it
really should be "evict_key", not "keyslot_evict").
- Sometimes (but not always!) it manages keyslots. Originally it
always did, but device-mapper devices don't have keyslots
themselves, so they use a "passthrough keyslot manager" which
doesn't actually manage keyslots. This hack works, but the
terminology is unnatural. Also, some hardware doesn't have keyslots
and thus also uses a "passthrough keyslot manager" (support for such
hardware is yet to be upstreamed, but it will happen eventually).
Let's stop having keyslot managers which don't actually manage keyslots.
Instead, rename blk_keyslot_manager to blk_crypto_profile.
This is a fairly big change, since for consistency it also has to update
keyslot manager-related function names, variable names, and comments --
not just the actual struct name. However it's still a fairly
straightforward change, as it doesn't change any actual functionality.
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20211018180453.40441-4-ebiggers@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
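As a minimal sketch of the registration side mentioned above: only blk_crypto_register(), declared just below, is taken from this header; the profile is assumed to have been initialised by driver-specific code, and the chosen error value is arbitrary.

#include <linux/blkdev.h>

/* Hypothetical: attach an already initialised crypto profile to a queue. */
static int example_register_crypto(struct blk_crypto_profile *profile,
				   struct request_queue *q)
{
	if (!blk_crypto_register(profile, q))
		return -EINVAL;	/* inline encryption support not available */

	return 0;
}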
bool blk_crypto_register(struct blk_crypto_profile *profile,
			 struct request_queue *q);

#else /* CONFIG_BLK_INLINE_ENCRYPTION */
static inline bool blk_crypto_register(struct blk_crypto_profile *profile,
				       struct request_queue *q)
{
	return true;
}

#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

enum blk_unique_id {
	/* these match the Designator Types specified in SPC */
	BLK_UID_T10	= 1,
	BLK_UID_EUI64	= 2,
	BLK_UID_NAA	= 3,
};

#define NFL4_UFLG_MASK			0x0000003F
struct block_device_operations {
	void (*submit_bio)(struct bio *bio);
	int (*poll_bio)(struct bio *bio, struct io_comp_batch *iob,
			unsigned int flags);
[PATCH] beginning of methods conversion
To keep the size of changesets sane we split the switch by drivers;
to keep the damn thing bisectable we do the following:
1) rename the affected methods, add ones with correct
prototypes, make (few) callers handle both. That's this changeset.
2) for each driver convert to new methods. *ALL* drivers
are converted in this series.
3) kill the old (renamed) methods.
Note that it _is_ a flagday; all in-tree drivers are converted and by the
end of this series no trace of the old methods remains. The only reason why
we do that this way is to keep the damn thing bisectable and allow per-driver
debugging if anything goes wrong.
New methods:
open(bdev, mode)
release(disk, mode)
ioctl(bdev, mode, cmd, arg) /* Called without BKL */
compat_ioctl(bdev, mode, cmd, arg)
locked_ioctl(bdev, mode, cmd, arg) /* Called with BKL, legacy */
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
	int (*open) (struct block_device *, fmode_t);
	void (*release) (struct gendisk *, fmode_t);
	int (*rw_page)(struct block_device *, sector_t, struct page *, enum req_op);
	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
implement in-kernel gendisk events handling
Currently, media presence polling for removable block devices is done
from userland. There are several issues with this.
* Polling is done by periodically opening the device. For SCSI
devices, the command sequence generated by such action involves a
few different commands including TEST_UNIT_READY. This behavior,
while perfectly legal, is different from Windows, which issues only a
single command, GET_EVENT_STATUS_NOTIFICATION. Unfortunately, some
ATAPI devices lock up after being periodically queried with such command
sequences.
* There is no reliable and unintrusive way for a userland program to
tell whether the target device is safe for media presence polling.
For example, polling for media presence during an on-going burning
session can make it fail. The polling program can avoid this by
opening the device with O_EXCL but then it risks making a valid
exclusive user of the device fail w/ -EBUSY.
* Userland polling is unnecessarily heavy and in-kernel implementation
is lighter and better coordinated (workqueue, timer slack).
This patch implements framework for in-kernel disk event handling,
which includes media presence polling.
* bdops->check_events() is added, which supersedes ->media_changed()
(see the usage sketch after the struct definition below).
It should check whether there's any pending event and return if so.
Currently, two events are defined - DISK_EVENT_MEDIA_CHANGE and
DISK_EVENT_EJECT_REQUEST. ->check_events() is guaranteed not to be
called in parallel.
* gendisk->events and ->async_events are added. These should be
initialized by block driver before passing the device to add_disk().
The former contains the mask of all supported events and the latter
the mask of all events which the device can report without polling.
/sys/block/*/events[_async] export these to userland.
* Kernel parameter block.events_dfl_poll_msecs controls the system
polling interval (default is 0 which means disable) and
/sys/block/*/events_poll_msecs control polling intervals for
individual devices (default is -1 meaning use system setting). Note
that if a device can report all supported events asynchronously and
its polling interval isn't explicitly set, the device won't be
polled regardless of the system polling interval.
* If a device is opened exclusively with write access, event checking
is automatically disabled until all write exclusive accesses are
released.
* There are event 'clearing' events. For example, both of currently
defined events are cleared after the device has been successfully
opened. This information is passed to ->check_events() callback
using @clearing argument as a hint.
* Event checking is always performed from system_nrt_wq and timer
slack is set to 25% for polling.
* Nothing changes for drivers which implement ->media_changed() but
not ->check_events(). Going forward, all drivers will be converted
to ->check_events() and ->media_change() will be dropped.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
	unsigned int (*check_events) (struct gendisk *disk,
				      unsigned int clearing);
	void (*unlock_native_capacity) (struct gendisk *);
	int (*getgeo)(struct block_device *, struct hd_geometry *);
	int (*set_read_only)(struct block_device *bdev, bool ro);
	void (*free_disk)(struct gendisk *disk);
	/* this callback is with swap_lock and sometimes page table lock held */
	void (*swap_slot_free_notify) (struct block_device *, unsigned long);
	int (*report_zones)(struct gendisk *, sector_t sector,
			unsigned int nr_zones, report_zones_cb cb, void *data);
	/* returns the length of the identifier or a negative errno: */
	int (*get_unique_id)(struct gendisk *disk, u8 id[16],
			enum blk_unique_id id_type);
	struct module *owner;
	const struct pr_ops *pr_ops;

	/*
	 * Special callback for probing GPT entry at a given sector.
	 * Needed by Android devices, used by GPT scanner and MMC blk
	 * driver.
	 */
	int (*alternative_gpt_sector)(struct gendisk *disk, sector_t *sector);
};
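Following up on the in-kernel disk events commit message embedded above, a removable-media driver would implement ->check_events() and advertise the events it supports before add_disk(). The example_* names and the media_changed flag are hypothetical.

#include <linux/blkdev.h>

struct example_dev {
	struct gendisk *disk;
	bool media_changed;	/* set by the driver's interrupt/poll path */
};

/* Hypothetical ->check_events() implementation. */
static unsigned int example_check_events(struct gendisk *disk,
					 unsigned int clearing)
{
	struct example_dev *dev = disk->private_data;

	if (dev->media_changed) {
		if (clearing & DISK_EVENT_MEDIA_CHANGE)
			dev->media_changed = false;
		return DISK_EVENT_MEDIA_CHANGE;
	}
	return 0;
}

/* At probe time, before add_disk():
 *	disk->events = DISK_EVENT_MEDIA_CHANGE;
 *	disk->fops = &example_fops;	// .check_events = example_check_events
 */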
#ifdef CONFIG_COMPAT
extern int blkdev_compat_ptr_ioctl(struct block_device *, fmode_t,
				      unsigned int, unsigned long);
#else
#define blkdev_compat_ptr_ioctl NULL
#endif

extern int bdev_read_page(struct block_device *, sector_t, struct page *);
extern int bdev_write_page(struct block_device *, sector_t, struct page *,
						struct writeback_control *);
static inline void blk_wake_io_task(struct task_struct *waiter)
{
	/*
	 * If we're polling, the task itself is doing the completions. For
	 * that case, we don't need to signal a wakeup, it's enough to just
	 * mark us as RUNNING.
	 */
	if (waiter == current)
		__set_current_state(TASK_RUNNING);
	else
		wake_up_process(waiter);
}
unsigned long bdev_start_io_acct(struct block_device *bdev,
				 unsigned int sectors, enum req_op op,
				 unsigned long start_time);
void bdev_end_io_acct(struct block_device *bdev, enum req_op op,
		      unsigned long start_time);

unsigned long bio_start_io_acct(struct bio *bio);
void bio_end_io_acct_remapped(struct bio *bio, unsigned long start_time,
		struct block_device *orig_bdev);

/**
 * bio_end_io_acct - end I/O accounting for bio based drivers
 * @bio:	bio to end account for
 * @start_time:	start time returned by bio_start_io_acct()
 */
static inline void bio_end_io_acct(struct bio *bio, unsigned long start_time)
{
	return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev);
}
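For bio-based drivers, the accounting pair above brackets each bio. A minimal sketch follows; example_submit() is hypothetical and, for simplicity, shows the completion inline even though it usually runs in a different context.

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Hypothetical bio-based driver path using the accounting helpers. */
static void example_submit(struct bio *bio)
{
	unsigned long start = bio_start_io_acct(bio);

	/* ... hand the bio to the backing device / internal machinery ... */

	/* On completion: */
	bio_end_io_acct(bio, start);
	bio_endio(bio);
}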
int bdev_read_only(struct block_device *bdev);
int set_blocksize(struct block_device *bdev, int size);

int lookup_bdev(const char *pathname, dev_t *dev);

void blkdev_show(struct seq_file *seqf, off_t offset);

#define BDEVNAME_SIZE	32	/* Largest string for a blockdev identifier */
#define BDEVT_SIZE	10	/* Largest string for MAJ:MIN for blkdev */
#ifdef CONFIG_BLOCK
#define BLKDEV_MAJOR_MAX	512
#else
#define BLKDEV_MAJOR_MAX	0
#endif

struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
		void *holder);
struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
int bd_prepare_to_claim(struct block_device *bdev, void *holder);
void bd_abort_claiming(struct block_device *bdev, void *holder);
void blkdev_put(struct block_device *bdev, fmode_t mode);

/* just for blk-cgroup, don't use elsewhere */
struct block_device *blkdev_get_no_open(dev_t dev);
void blkdev_put_no_open(struct block_device *bdev);

struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
void bdev_add(struct block_device *bdev, dev_t dev);
struct block_device *I_BDEV(struct inode *inode);
int truncate_bdev_range(struct block_device *bdev, fmode_t mode, loff_t lstart,
		loff_t lend);
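The open/claim interface above is what in-kernel users such as stacking drivers go through. A minimal sketch of the usual pattern, with the holder pointer and error handling kept deliberately simple:

#include <linux/blkdev.h>
#include <linux/err.h>

/* Hypothetical: open a block device for exclusive read/write access. */
static int example_with_bdev(const char *path, void *holder)
{
	fmode_t mode = FMODE_READ | FMODE_WRITE | FMODE_EXCL;
	struct block_device *bdev;

	bdev = blkdev_get_by_path(path, mode, holder);
	if (IS_ERR(bdev))
		return PTR_ERR(bdev);

	/* ... use bdev ... */

	blkdev_put(bdev, mode);	/* release with the same mode flags */
	return 0;
}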
#ifdef CONFIG_BLOCK
void invalidate_bdev(struct block_device *bdev);
int sync_blockdev(struct block_device *bdev);
int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
int sync_blockdev_nowait(struct block_device *bdev);
void sync_bdevs(bool wait);
void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
void printk_all_partitions(void);
#else
static inline void invalidate_bdev(struct block_device *bdev)
{
}
static inline int sync_blockdev(struct block_device *bdev)
{
	return 0;
}
static inline int sync_blockdev_nowait(struct block_device *bdev)
{
	return 0;
}
static inline void sync_bdevs(bool wait)
{
}
static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
{
}
static inline void printk_all_partitions(void)
{
}
#endif /* CONFIG_BLOCK */

int fsync_bdev(struct block_device *bdev);

int freeze_bdev(struct block_device *bdev);
int thaw_bdev(struct block_device *bdev);

struct io_comp_batch {
	struct request *req_list;
	bool need_ts;
	void (*complete)(struct io_comp_batch *);
};

#define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
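struct io_comp_batch above lets a poller collect completions and run them in one go. A minimal sketch of draining such a batch; the poll step itself is elided and the helper name is hypothetical.

#include <linux/blkdev.h>

/* Hypothetical: run whatever completions were gathered into the batch. */
static void example_drain_batch(void)
{
	DEFINE_IO_COMP_BATCH(iob);

	/* ... polling code fills iob.req_list via the driver's poll path ... */

	if (iob.complete)
		iob.complete(&iob);	/* complete the batched requests */
}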
#endif /* _LINUX_BLKDEV_H */