/*
 * Copyright (C) 2001 Sistina Software (UK) Limited.
 * Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
 *
 * This file is released under the GPL.
 */

#include "dm-core.h"

#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/blkdev.h>
#include <linux/namei.h>
#include <linux/ctype.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/atomic.h>
#include <linux/blk-mq.h>
#include <linux/mount.h>
#include <linux/dax.h>

#define DM_MSG_PREFIX "table"

#define MAX_DEPTH 16
#define NODE_SIZE L1_CACHE_BYTES
#define KEYS_PER_NODE (NODE_SIZE / sizeof(sector_t))
#define CHILDREN_PER_NODE (KEYS_PER_NODE + 1)

struct dm_table {
	struct mapped_device *md;
	enum dm_queue_mode type;

	/* btree table */
	unsigned int depth;
	unsigned int counts[MAX_DEPTH];	/* in nodes */
	sector_t *index[MAX_DEPTH];

	unsigned int num_targets;
	unsigned int num_allocated;
	sector_t *highs;
	struct dm_target *targets;

	struct target_type *immutable_target_type;

	bool integrity_supported:1;
	bool singleton:1;
	bool all_blk_mq:1;
	unsigned integrity_added:1;

	/*
	 * Indicates the rw permissions for the new logical
	 * device. This should be a combination of FMODE_READ
	 * and FMODE_WRITE.
	 */
	fmode_t mode;

	/* a list of devices used by this table */
	struct list_head devices;

	/* events get handed up using this callback */
	void (*event_fn)(void *);
	void *event_context;

	struct dm_md_mempools *mempools;

	struct list_head target_callbacks;
};

/*
 * Similar to ceiling(log_size(n))
 */
static unsigned int int_log(unsigned int n, unsigned int base)
{
	int result = 0;

	while (n > 1) {
		n = dm_div_up(n, base);
		result++;
	}

	return result;
}

/*
 * Calculate the index of the child node of the n'th node k'th key.
 */
static inline unsigned int get_child(unsigned int n, unsigned int k)
{
	return (n * CHILDREN_PER_NODE) + k;
}

/*
 * Return the n'th node of level l from table t.
 */
static inline sector_t *get_node(struct dm_table *t,
				 unsigned int l, unsigned int n)
{
	return t->index[l] + (n * KEYS_PER_NODE);
}

/*
 * Return the highest key that you could lookup from the n'th
 * node on level l of the btree.
 */
static sector_t high(struct dm_table *t, unsigned int l, unsigned int n)
{
	for (; l < t->depth - 1; l++)
		n = get_child(n, CHILDREN_PER_NODE - 1);

	if (n >= t->counts[l])
		return (sector_t) - 1;

	return get_node(t, l, n)[KEYS_PER_NODE - 1];
}

/*
 * Fills in a level of the btree based on the highs of the level
 * below it.
 */
static int setup_btree_index(unsigned int l, struct dm_table *t)
{
	unsigned int n, k;
	sector_t *node;

	for (n = 0U; n < t->counts[l]; n++) {
		node = get_node(t, l, n);

		for (k = 0U; k < KEYS_PER_NODE; k++)
			node[k] = high(t, l + 1, get_child(n, k));
	}

	return 0;
}

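/*
 * Worked example for the helpers above (added for illustration; the
 * concrete numbers assume a 64-byte L1_CACHE_BYTES and an 8-byte
 * sector_t, which are typical but not universal): NODE_SIZE is then 64,
 * KEYS_PER_NODE is 8 and CHILDREN_PER_NODE is 9.  With those values
 * int_log(13, 9) returns 2, matching ceiling(log_9(13)): the loop maps
 * 13 -> dm_div_up(13, 9) = 2 -> dm_div_up(2, 9) = 1 in two iterations.
 */
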
void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size)
{
	unsigned long size;
	void *addr;

	/*
	 * Check that we're not going to overflow.
	 */
	if (nmemb > (ULONG_MAX / elem_size))
		return NULL;

	size = nmemb * elem_size;
	addr = vzalloc(size);

	return addr;
}
EXPORT_SYMBOL(dm_vcalloc);

/*
 * highs, and targets are managed as dynamic arrays during a
 * table load.
 */
static int alloc_targets(struct dm_table *t, unsigned int num)
{
	sector_t *n_highs;
	struct dm_target *n_targets;

	/*
	 * Allocate both the target array and offset array at once.
	 * Append an empty entry to catch sectors beyond the end of
	 * the device.
	 */
	n_highs = (sector_t *) dm_vcalloc(num + 1, sizeof(struct dm_target) +
					  sizeof(sector_t));
	if (!n_highs)
		return -ENOMEM;

	n_targets = (struct dm_target *) (n_highs + num);

	memset(n_highs, -1, sizeof(*n_highs) * num);
	vfree(t->highs);

	t->num_allocated = num;
	t->highs = n_highs;
	t->targets = n_targets;

	return 0;
}

int dm_table_create(struct dm_table **result, fmode_t mode,
		    unsigned num_targets, struct mapped_device *md)
{
	struct dm_table *t = kzalloc(sizeof(*t), GFP_KERNEL);

	if (!t)
		return -ENOMEM;

	INIT_LIST_HEAD(&t->devices);
	INIT_LIST_HEAD(&t->target_callbacks);

	if (!num_targets)
		num_targets = KEYS_PER_NODE;

	num_targets = dm_round_up(num_targets, KEYS_PER_NODE);

	if (!num_targets) {
		kfree(t);
		return -ENOMEM;
	}

	if (alloc_targets(t, num_targets)) {
		kfree(t);
		return -ENOMEM;
	}

	t->type = DM_TYPE_NONE;
	t->mode = mode;
	t->md = md;
	*result = t;
	return 0;
}

static void free_devices(struct list_head *devices, struct mapped_device *md)
{
	struct list_head *tmp, *next;

	list_for_each_safe(tmp, next, devices) {
		struct dm_dev_internal *dd =
		    list_entry(tmp, struct dm_dev_internal, list);
		DMWARN("%s: dm_table_destroy: dm_put_device call missing for %s",
		       dm_device_name(md), dd->dm_dev->name);
		dm_put_table_device(md, dd->dm_dev);
		kfree(dd);
	}
}

void dm_table_destroy(struct dm_table *t)
{
	unsigned int i;

	if (!t)
		return;

	/* free the indexes */
	if (t->depth >= 2)
		vfree(t->index[t->depth - 2]);

	/* free the targets */
	for (i = 0; i < t->num_targets; i++) {
		struct dm_target *tgt = t->targets + i;

		if (tgt->type->dtr)
			tgt->type->dtr(tgt);

		dm_put_target_type(tgt->type);
	}

	vfree(t->highs);

	/* free the device list */
	free_devices(&t->devices, t->md);

	dm_free_md_mempools(t->mempools);

	kfree(t);
}

/*
 * See if we've already got a device in the list.
 */
static struct dm_dev_internal *find_device(struct list_head *l, dev_t dev)
{
	struct dm_dev_internal *dd;

	list_for_each_entry (dd, l, list)
		if (dd->dm_dev->bdev->bd_dev == dev)
			return dd;

	return NULL;
}

/*
 * If possible, this checks an area of a destination device is invalid.
 */
static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
				  sector_t start, sector_t len, void *data)
{
	struct request_queue *q;
	struct queue_limits *limits = data;
	struct block_device *bdev = dev->bdev;
	sector_t dev_size =
		i_size_read(bdev->bd_inode) >> SECTOR_SHIFT;
	unsigned short logical_block_size_sectors =
		limits->logical_block_size >> SECTOR_SHIFT;
	char b[BDEVNAME_SIZE];

	/*
	 * Some devices exist without request functions,
	 * such as loop devices not yet bound to backing files.
	 * Forbid the use of such devices.
	 */
	q = bdev_get_queue(bdev);
	if (!q || !q->make_request_fn) {
		DMWARN("%s: %s is not yet initialised: "
		       "start=%llu, len=%llu, dev_size=%llu",
		       dm_device_name(ti->table->md), bdevname(bdev, b),
		       (unsigned long long)start,
		       (unsigned long long)len,
		       (unsigned long long)dev_size);
		return 1;
	}

	if (!dev_size)
		return 0;

	if ((start >= dev_size) || (start + len > dev_size)) {
		DMWARN("%s: %s too small for target: "
		       "start=%llu, len=%llu, dev_size=%llu",
		       dm_device_name(ti->table->md), bdevname(bdev, b),
		       (unsigned long long)start,
		       (unsigned long long)len,
		       (unsigned long long)dev_size);
		return 1;
	}

	/*
	 * If the target is mapped to zoned block device(s), check
	 * that the zones are not partially mapped.
	 */
	if (bdev_zoned_model(bdev) != BLK_ZONED_NONE) {
		unsigned int zone_sectors = bdev_zone_sectors(bdev);

		if (start & (zone_sectors - 1)) {
			DMWARN("%s: start=%llu not aligned to h/w zone size %u of %s",
			       dm_device_name(ti->table->md),
			       (unsigned long long)start,
			       zone_sectors, bdevname(bdev, b));
			return 1;
		}

		/*
		 * Note: The last zone of a zoned block device may be smaller
		 * than other zones. So for a target mapping the end of a
		 * zoned block device with such a zone, len would not be zone
		 * aligned. We do not allow such last smaller zone to be part
		 * of the mapping here to ensure that mappings with multiple
		 * devices do not end up with a smaller zone in the middle of
		 * the sector range.
		 */
		if (len & (zone_sectors - 1)) {
			DMWARN("%s: len=%llu not aligned to h/w zone size %u of %s",
			       dm_device_name(ti->table->md),
			       (unsigned long long)len,
			       zone_sectors, bdevname(bdev, b));
			return 1;
		}
	}

	if (logical_block_size_sectors <= 1)
		return 0;

	if (start & (logical_block_size_sectors - 1)) {
		DMWARN("%s: start=%llu not aligned to h/w "
		       "logical block size %u of %s",
		       dm_device_name(ti->table->md),
		       (unsigned long long)start,
		       limits->logical_block_size, bdevname(bdev, b));
		return 1;
	}

	if (len & (logical_block_size_sectors - 1)) {
		DMWARN("%s: len=%llu not aligned to h/w "
		       "logical block size %u of %s",
		       dm_device_name(ti->table->md),
		       (unsigned long long)len,
		       limits->logical_block_size, bdevname(bdev, b));
		return 1;
	}

	return 0;
}

/*
 * This upgrades the mode on an already open dm_dev, being
 * careful to leave things as they were if we fail to reopen the
 * device and not to touch the existing bdev field in case
 * it is accessed concurrently inside dm_table_any_congested().
 */
static int upgrade_mode(struct dm_dev_internal *dd, fmode_t new_mode,
			struct mapped_device *md)
{
	int r;
	struct dm_dev *old_dev, *new_dev;

	old_dev = dd->dm_dev;

	r = dm_get_table_device(md, dd->dm_dev->bdev->bd_dev,
				dd->dm_dev->mode | new_mode, &new_dev);
	if (r)
		return r;

	dd->dm_dev = new_dev;
	dm_put_table_device(md, old_dev);

	return 0;
}

/*
 * Convert the path to a device
 */
dev_t dm_get_dev_t(const char *path)
{
	dev_t dev;
	struct block_device *bdev;

	bdev = lookup_bdev(path);
	if (IS_ERR(bdev))
		dev = name_to_dev_t(path);
	else {
		dev = bdev->bd_dev;
		bdput(bdev);
	}

	return dev;
}
EXPORT_SYMBOL_GPL(dm_get_dev_t);

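/*
 * For example (illustrative only), a table line may reference its backing
 * device either as a filesystem path or as a major:minor pair:
 *
 *	dev_t a = dm_get_dev_t("/dev/sda1");	// resolved via lookup_bdev()
 *	dev_t b = dm_get_dev_t("8:1");		// falls back to name_to_dev_t()
 *
 * Both forms yield the same dev_t when they name the same device; a
 * return value of 0 means the name could not be resolved.
 */
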
/*
 * Add a device to the list, or just increment the usage count if
 * it's already present.
 */
int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
		  struct dm_dev **result)
{
	int r;
	dev_t dev;
	struct dm_dev_internal *dd;
	struct dm_table *t = ti->table;

	BUG_ON(!t);

	dev = dm_get_dev_t(path);
	if (!dev)
		return -ENODEV;

	dd = find_device(&t->devices, dev);
	if (!dd) {
		dd = kmalloc(sizeof(*dd), GFP_KERNEL);
		if (!dd)
			return -ENOMEM;

		if ((r = dm_get_table_device(t->md, dev, mode, &dd->dm_dev))) {
			kfree(dd);
			return r;
		}

		refcount_set(&dd->count, 1);
		list_add(&dd->list, &t->devices);

	} else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) {
		r = upgrade_mode(dd, mode, t->md);
		if (r)
			return r;
		refcount_inc(&dd->count);
	}

	*result = dd->dm_dev;
	return 0;
}
EXPORT_SYMBOL(dm_get_device);

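/*
 * Typical use from a target constructor (a minimal sketch, not taken from
 * this file; "lc" stands for a hypothetical per-target context and argv[0]
 * for the device argument handed to the ctr):
 *
 *	if (dm_get_device(ti, argv[0], dm_table_get_mode(ti->table),
 *			  &lc->dev)) {
 *		ti->error = "Device lookup failed";
 *		return -EINVAL;
 *	}
 *	...
 *	// and in the destructor:
 *	dm_put_device(ti, lc->dev);
 */
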
static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev,
				sector_t start, sector_t len, void *data)
{
	struct queue_limits *limits = data;
	struct block_device *bdev = dev->bdev;
	struct request_queue *q = bdev_get_queue(bdev);
	char b[BDEVNAME_SIZE];

	if (unlikely(!q)) {
		DMWARN("%s: Cannot set limits for nonexistent device %s",
		       dm_device_name(ti->table->md), bdevname(bdev, b));
		return 0;
	}

	if (bdev_stack_limits(limits, bdev, start) < 0)
		DMWARN("%s: adding target device %s caused an alignment inconsistency: "
		       "physical_block_size=%u, logical_block_size=%u, "
		       "alignment_offset=%u, start=%llu",
		       dm_device_name(ti->table->md), bdevname(bdev, b),
		       q->limits.physical_block_size,
		       q->limits.logical_block_size,
		       q->limits.alignment_offset,
		       (unsigned long long) start << SECTOR_SHIFT);

	limits->zoned = blk_queue_zoned_model(q);

	return 0;
}

/*
 * Decrement a device's use count and remove it if necessary.
 */
void dm_put_device(struct dm_target *ti, struct dm_dev *d)
{
	int found = 0;
	struct list_head *devices = &ti->table->devices;
	struct dm_dev_internal *dd;

	list_for_each_entry(dd, devices, list) {
		if (dd->dm_dev == d) {
			found = 1;
			break;
		}
	}
	if (!found) {
		DMWARN("%s: device %s not in table devices list",
		       dm_device_name(ti->table->md), d->name);
		return;
	}
	if (refcount_dec_and_test(&dd->count)) {
		dm_put_table_device(ti->table->md, d);
		list_del(&dd->list);
		kfree(dd);
	}
}
EXPORT_SYMBOL(dm_put_device);

/*
 * Checks to see if the target joins onto the end of the table.
 */
static int adjoin(struct dm_table *table, struct dm_target *ti)
{
	struct dm_target *prev;

	if (!table->num_targets)
		return !ti->begin;

	prev = &table->targets[table->num_targets - 1];
	return (ti->begin == (prev->begin + prev->len));
}

/*
 * Used to dynamically allocate the arg array.
 *
 * We do first allocation with GFP_NOIO because dm-mpath and dm-thin must
 * process messages even if some device is suspended. These messages have a
 * small fixed number of arguments.
 *
 * On the other hand, dm-switch needs to process bulk data using messages and
 * excessive use of GFP_NOIO could cause trouble.
 */
static char **realloc_argv(unsigned *array_size, char **old_argv)
{
	char **argv;
	unsigned new_size;
	gfp_t gfp;

	if (*array_size) {
		new_size = *array_size * 2;
		gfp = GFP_KERNEL;
	} else {
		new_size = 8;
		gfp = GFP_NOIO;
	}
	argv = kmalloc(new_size * sizeof(*argv), gfp);
	if (argv) {
		memcpy(argv, old_argv, *array_size * sizeof(*argv));
		*array_size = new_size;
	}

	kfree(old_argv);
	return argv;
}

/*
 * Destructively splits up the argument list to pass to ctr.
 */
int dm_split_args(int *argc, char ***argvp, char *input)
{
	char *start, *end = input, *out, **argv = NULL;
	unsigned array_size = 0;

	*argc = 0;

	if (!input) {
		*argvp = NULL;
		return 0;
	}

	argv = realloc_argv(&array_size, argv);
	if (!argv)
		return -ENOMEM;

	while (1) {
		/* Skip whitespace */
		start = skip_spaces(end);

		if (!*start)
			break;	/* success, we hit the end */

		/* 'out' is used to remove any back-quotes */
		end = out = start;
		while (*end) {
			/* Everything apart from '\0' can be quoted */
			if (*end == '\\' && *(end + 1)) {
				*out++ = *(end + 1);
				end += 2;
				continue;
			}

			if (isspace(*end))
				break;	/* end of token */

			*out++ = *end++;
		}

		/* have we already filled the array ? */
		if ((*argc + 1) > array_size) {
			argv = realloc_argv(&array_size, argv);
			if (!argv)
				return -ENOMEM;
		}

		/* we know this is whitespace */
		if (*end)
			end++;

		/* terminate the string and put it in the array */
		*out = '\0';
		argv[*argc] = start;
		(*argc)++;
	}

	*argvp = argv;
	return 0;
}

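/*
 * Example of the splitting behaviour (illustrative only; the buffer is
 * made up).  Given a writable string containing "foo bar\ baz 42", i.e.
 *
 *	char buf[] = "foo bar\\ baz 42";
 *
 * dm_split_args(&argc, &argv, buf) sets argc to 3 and argv to the tokens
 * {"foo", "bar baz", "42"}: the split is destructive, the tokens point
 * into buf itself, and a backslash escapes the following character.
 */
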
/*
 * Impose necessary and sufficient conditions on a device's table such
 * that any incoming bio which respects its logical_block_size can be
 * processed successfully.  If it falls across the boundary between
 * two or more targets, the size of each piece it gets split into must
 * be compatible with the logical_block_size of the target processing it.
 */
static int validate_hardware_logical_block_alignment(struct dm_table *table,
						      struct queue_limits *limits)
{
	/*
	 * This function uses arithmetic modulo the logical_block_size
	 * (in units of 512-byte sectors).
	 */
	unsigned short device_logical_block_size_sects =
		limits->logical_block_size >> SECTOR_SHIFT;

	/*
	 * Offset of the start of the next table entry, mod logical_block_size.
	 */
	unsigned short next_target_start = 0;

	/*
	 * Given an aligned bio that extends beyond the end of a
	 * target, how many sectors must the next target handle?
	 */
	unsigned short remaining = 0;

	struct dm_target *uninitialized_var(ti);
	struct queue_limits ti_limits;
	unsigned i;

	/*
	 * Check each entry in the table in turn.
	 */
	for (i = 0; i < dm_table_get_num_targets(table); i++) {
		ti = dm_table_get_target(table, i);

		blk_set_stacking_limits(&ti_limits);

		/* combine all target devices' limits */
		if (ti->type->iterate_devices)
			ti->type->iterate_devices(ti, dm_set_device_limits,
						  &ti_limits);

		/*
		 * If the remaining sectors fall entirely within this
		 * table entry are they compatible with its logical_block_size?
		 */
		if (remaining < ti->len &&
		    remaining & ((ti_limits.logical_block_size >>
				  SECTOR_SHIFT) - 1))
			break;	/* Error */

		next_target_start =
		    (unsigned short) ((next_target_start + ti->len) &
				      (device_logical_block_size_sects - 1));
		remaining = next_target_start ?
		    device_logical_block_size_sects - next_target_start : 0;
	}

	if (remaining) {
		DMWARN("%s: table line %u (start sect %llu len %llu) "
		       "not aligned to h/w logical block size %u",
		       dm_device_name(table->md), i,
		       (unsigned long long) ti->begin,
		       (unsigned long long) ti->len,
		       limits->logical_block_size);
		return -EINVAL;
	}

	return 0;
}

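/*
 * Worked example for the arithmetic above (illustrative numbers only):
 * with a 4096-byte logical block size, device_logical_block_size_sects
 * is 8.  A first target of 1001 sectors leaves
 * next_target_start = 1001 & 7 = 1, hence remaining = 8 - 1 = 7: the
 * next target starts mid-block and must absorb 7 more sectors.  That is
 * only acceptable if 7 is a multiple of the next target's own logical
 * block size in sectors, i.e. only for 512-byte blocks, so stacking a
 * 4096-byte-block device at such an offset makes the table fail with
 * -EINVAL here.
 */
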
int dm_table_add_target(struct dm_table *t, const char *type,
			sector_t start, sector_t len, char *params)
{
	int r = -EINVAL, argc;
	char **argv;
	struct dm_target *tgt;

	if (t->singleton) {
		DMERR("%s: target type %s must appear alone in table",
		      dm_device_name(t->md), t->targets->type->name);
		return -EINVAL;
	}

	BUG_ON(t->num_targets >= t->num_allocated);

	tgt = t->targets + t->num_targets;
	memset(tgt, 0, sizeof(*tgt));

	if (!len) {
		DMERR("%s: zero-length target", dm_device_name(t->md));
		return -EINVAL;
	}

	tgt->type = dm_get_target_type(type);
	if (!tgt->type) {
		DMERR("%s: %s: unknown target type", dm_device_name(t->md), type);
		return -EINVAL;
	}

	if (dm_target_needs_singleton(tgt->type)) {
		if (t->num_targets) {
			tgt->error = "singleton target type must appear alone in table";
			goto bad;
		}
		t->singleton = true;
	}

	if (dm_target_always_writeable(tgt->type) && !(t->mode & FMODE_WRITE)) {
		tgt->error = "target type may not be included in a read-only table";
		goto bad;
	}

	if (t->immutable_target_type) {
		if (t->immutable_target_type != tgt->type) {
			tgt->error = "immutable target type cannot be mixed with other target types";
			goto bad;
		}
	} else if (dm_target_is_immutable(tgt->type)) {
		if (t->num_targets) {
			tgt->error = "immutable target type cannot be mixed with other target types";
			goto bad;
		}
		t->immutable_target_type = tgt->type;
	}

	if (dm_target_has_integrity(tgt->type))
		t->integrity_added = 1;

	tgt->table = t;
	tgt->begin = start;
	tgt->len = len;
	tgt->error = "Unknown error";

	/*
	 * Does this target adjoin the previous one ?
	 */
	if (!adjoin(t, tgt)) {
		tgt->error = "Gap in table";
		goto bad;
	}

	r = dm_split_args(&argc, &argv, params);
	if (r) {
		tgt->error = "couldn't split parameters (insufficient memory)";
		goto bad;
	}

	r = tgt->type->ctr(tgt, argc, argv);
	kfree(argv);
	if (r)
		goto bad;

	t->highs[t->num_targets++] = tgt->begin + tgt->len - 1;

	if (!tgt->num_discard_bios && tgt->discards_supported)
		DMWARN("%s: %s: ignoring discards_supported because num_discard_bios is zero.",
		       dm_device_name(t->md), type);

	return 0;

 bad:
	DMERR("%s: %s: %s", dm_device_name(t->md), type, tgt->error);
	dm_put_target_type(tgt->type);
	return r;
}

/*
 * Target argument parsing helpers.
 */
static int validate_next_arg(const struct dm_arg *arg,
			     struct dm_arg_set *arg_set,
			     unsigned *value, char **error, unsigned grouped)
{
	const char *arg_str = dm_shift_arg(arg_set);
	char dummy;

	if (!arg_str ||
	    (sscanf(arg_str, "%u%c", value, &dummy) != 1) ||
	    (*value < arg->min) ||
	    (*value > arg->max) ||
	    (grouped && arg_set->argc < *value)) {
		*error = arg->error;
		return -EINVAL;
	}

	return 0;
}

int dm_read_arg(const struct dm_arg *arg, struct dm_arg_set *arg_set,
		unsigned *value, char **error)
{
	return validate_next_arg(arg, arg_set, value, error, 0);
}
EXPORT_SYMBOL(dm_read_arg);

int dm_read_arg_group(const struct dm_arg *arg, struct dm_arg_set *arg_set,
		      unsigned *value, char **error)
{
	return validate_next_arg(arg, arg_set, value, error, 1);
}
EXPORT_SYMBOL(dm_read_arg_group);

const char *dm_shift_arg(struct dm_arg_set *as)
{
	char *r;

	if (as->argc) {
		as->argc--;
		r = *as->argv;
		as->argv++;
		return r;
	}

	return NULL;
}
EXPORT_SYMBOL(dm_shift_arg);

void dm_consume_args(struct dm_arg_set *as, unsigned num_args)
{
	BUG_ON(as->argc < num_args);
	as->argc -= num_args;
	as->argv += num_args;
}
EXPORT_SYMBOL(dm_consume_args);

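/*
 * Sketch of how a target constructor typically consumes these helpers
 * (illustrative only; the option handling and the 0..3 bounds are invented
 * for the example):
 *
 *	static const struct dm_arg _args[] = {
 *		{0, 3, "Invalid number of feature args"},
 *	};
 *	struct dm_arg_set as = { .argc = argc, .argv = argv };
 *	unsigned num_features;
 *	char *error;
 *
 *	if (dm_read_arg_group(_args, &as, &num_features, &error))
 *		return -EINVAL;	// error now points at _args[0].error
 *	while (num_features--) {
 *		const char *feature = dm_shift_arg(&as);
 *		// match "feature" against the target's known options ...
 *	}
 *
 * dm_read_arg() behaves the same but does not require that enough
 * arguments remain in the set to cover the value that was read.
 */
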
static bool __table_type_bio_based(enum dm_queue_mode table_type)
{
	return (table_type == DM_TYPE_BIO_BASED ||
		table_type == DM_TYPE_DAX_BIO_BASED);
}

static bool __table_type_request_based(enum dm_queue_mode table_type)
{
	return (table_type == DM_TYPE_REQUEST_BASED ||
		table_type == DM_TYPE_MQ_REQUEST_BASED);
}

void dm_table_set_type(struct dm_table *t, enum dm_queue_mode type)
{
	t->type = type;
}
EXPORT_SYMBOL_GPL(dm_table_set_type);

static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
			       sector_t start, sector_t len, void *data)
{
	struct request_queue *q = bdev_get_queue(dev->bdev);

	return q && blk_queue_dax(q);
}

static bool dm_table_supports_dax(struct dm_table *t)
{
	struct dm_target *ti;
	unsigned i;

	/* Ensure that all targets support DAX. */
	for (i = 0; i < dm_table_get_num_targets(t); i++) {
		ti = dm_table_get_target(t, i);

		if (!ti->type->direct_access)
			return false;

		if (!ti->type->iterate_devices ||
		    !ti->type->iterate_devices(ti, device_supports_dax, NULL))
			return false;
	}

	return true;
}

static int dm_table_determine_type(struct dm_table *t)
|
dm: enable request based option
This patch enables request-based dm.
o Request-based dm and bio-based dm coexist, since there are
some target drivers which are more fitting to bio-based dm.
Also, there are other bio-based devices in the kernel
(e.g. md, loop).
Since bio-based device can't receive struct request,
there are some limitations on device stacking between
bio-based and request-based.
type of underlying device
bio-based request-based
----------------------------------------------
bio-based OK OK
request-based -- OK
The device type is recognized by the queue flag in the kernel,
so dm follows that.
o The type of a dm device is decided at the first table binding time.
Once the type of a dm device is decided, the type can't be changed.
o Mempool allocations are deferred to at the table loading time, since
mempools for request-based dm are different from those for bio-based
dm and needed mempool type is fixed by the type of table.
o Currently, request-based dm supports only tables that have a single
target. To support multiple targets, we need to support request
splitting or prevent bio/request from spanning multiple targets.
The former needs lots of changes in the block layer, and the latter
needs that all target drivers support merge() function.
Both will take a time.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2009-06-22 17:12:36 +08:00
|
|
|
{
|
|
|
|
unsigned i;
|
2013-08-23 06:21:38 +08:00
|
|
|
unsigned bio_based = 0, request_based = 0, hybrid = 0;
|
2016-11-16 07:33:16 +08:00
|
|
|
unsigned sq_count = 0, mq_count = 0;
|
dm: enable request based option
This patch enables request-based dm.
o Request-based dm and bio-based dm coexist, since there are
some target drivers which are more fitting to bio-based dm.
Also, there are other bio-based devices in the kernel
(e.g. md, loop).
Since bio-based device can't receive struct request,
there are some limitations on device stacking between
bio-based and request-based.
type of underlying device
bio-based request-based
----------------------------------------------
bio-based OK OK
request-based -- OK
The device type is recognized by the queue flag in the kernel,
so dm follows that.
o The type of a dm device is decided at the first table binding time.
Once the type of a dm device is decided, the type can't be changed.
o Mempool allocations are deferred to at the table loading time, since
mempools for request-based dm are different from those for bio-based
dm and needed mempool type is fixed by the type of table.
o Currently, request-based dm supports only tables that have a single
target. To support multiple targets, we need to support request
splitting or prevent bio/request from spanning multiple targets.
The former needs lots of changes in the block layer, and the latter
needs that all target drivers support merge() function.
Both will take a time.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2009-06-22 17:12:36 +08:00
|
|
|
struct dm_target *tgt;
|
|
|
|
struct dm_dev_internal *dd;
|
2016-05-25 09:16:51 +08:00
|
|
|
struct list_head *devices = dm_table_get_devices(t);
|
2017-04-28 01:11:23 +08:00
|
|
|
enum dm_queue_mode live_md_type = dm_get_md_type(t->md);
|
dm: enable request based option
This patch enables request-based dm.
o Request-based dm and bio-based dm coexist, since there are
some target drivers which are more fitting to bio-based dm.
Also, there are other bio-based devices in the kernel
(e.g. md, loop).
Since bio-based device can't receive struct request,
there are some limitations on device stacking between
bio-based and request-based.
type of underlying device
bio-based request-based
----------------------------------------------
bio-based OK OK
request-based -- OK
The device type is recognized by the queue flag in the kernel,
so dm follows that.
o The type of a dm device is decided at the first table binding time.
Once the type of a dm device is decided, the type can't be changed.
o Mempool allocations are deferred to at the table loading time, since
mempools for request-based dm are different from those for bio-based
dm and needed mempool type is fixed by the type of table.
o Currently, request-based dm supports only tables that have a single
target. To support multiple targets, we need to support request
splitting or prevent bio/request from spanning multiple targets.
The former needs lots of changes in the block layer, and the latter
needs that all target drivers support merge() function.
Both will take a time.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2009-06-22 17:12:36 +08:00
|
|
|
|
2016-05-25 09:16:51 +08:00
|
|
|
if (t->type != DM_TYPE_NONE) {
|
|
|
|
/* target already set the table's type */
|
|
|
|
if (t->type == DM_TYPE_BIO_BASED)
|
|
|
|
return 0;
|
2016-06-23 07:54:53 +08:00
|
|
|
BUG_ON(t->type == DM_TYPE_DAX_BIO_BASED);
|
2016-05-25 09:16:51 +08:00
|
|
|
goto verify_rq_based;
|
|
|
|
}
|
|
|
|
|
dm: enable request based option
This patch enables request-based dm.
o Request-based dm and bio-based dm coexist, since there are
some target drivers which are more fitting to bio-based dm.
Also, there are other bio-based devices in the kernel
(e.g. md, loop).
Since bio-based device can't receive struct request,
there are some limitations on device stacking between
bio-based and request-based.
type of underlying device
bio-based request-based
----------------------------------------------
bio-based OK OK
request-based -- OK
The device type is recognized by the queue flag in the kernel,
so dm follows that.
o The type of a dm device is decided at the first table binding time.
Once the type of a dm device is decided, the type can't be changed.
o Mempool allocations are deferred to at the table loading time, since
mempools for request-based dm are different from those for bio-based
dm and needed mempool type is fixed by the type of table.
o Currently, request-based dm supports only tables that have a single
target. To support multiple targets, we need to support request
splitting or prevent bio/request from spanning multiple targets.
The former needs lots of changes in the block layer, and the latter
needs that all target drivers support merge() function.
Both will take a time.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2009-06-22 17:12:36 +08:00
|
|
|
for (i = 0; i < t->num_targets; i++) {
|
|
|
|
tgt = t->targets + i;
|
2013-08-23 06:21:38 +08:00
|
|
|
if (dm_target_hybrid(tgt))
|
|
|
|
hybrid = 1;
|
|
|
|
else if (dm_target_request_based(tgt))
|
2009-06-22 17:12:36 +08:00
|
|
|
request_based = 1;
|
|
|
|
else
|
|
|
|
bio_based = 1;
|
|
|
|
|
|
|
|
if (bio_based && request_based) {
|
|
|
|
DMWARN("Inconsistent table: different target types"
|
|
|
|
" can't be mixed up");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-08-23 06:21:38 +08:00
|
|
|
if (hybrid && !bio_based && !request_based) {
|
|
|
|
/*
|
|
|
|
* The targets can work either way.
|
|
|
|
* Determine the type from the live device.
|
|
|
|
* Default to bio-based if device is new.
|
|
|
|
*/
|
2015-05-29 16:51:03 +08:00
|
|
|
if (__table_type_request_based(live_md_type))
|
2013-08-23 06:21:38 +08:00
|
|
|
request_based = 1;
|
|
|
|
else
|
|
|
|
bio_based = 1;
|
|
|
|
}
|
|
|
|
|
2009-06-22 17:12:36 +08:00
|
|
|
if (bio_based) {
|
|
|
|
/* We must use this table as bio-based */
|
|
|
|
t->type = DM_TYPE_BIO_BASED;
|
2016-06-25 05:09:35 +08:00
|
|
|
if (dm_table_supports_dax(t) ||
|
|
|
|
(list_empty(devices) && live_md_type == DM_TYPE_DAX_BIO_BASED))
|
2016-06-23 07:54:53 +08:00
|
|
|
t->type = DM_TYPE_DAX_BIO_BASED;
|
2009-06-22 17:12:36 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
BUG_ON(!request_based); /* No targets in this table */
|
|
|
|
|
2016-05-25 09:16:51 +08:00
|
|
|
/*
|
|
|
|
* The only way to establish DM_TYPE_MQ_REQUEST_BASED is by
|
|
|
|
* having a compatible target use dm_table_set_type.
|
|
|
|
*/
|
|
|
|
t->type = DM_TYPE_REQUEST_BASED;
|
|
|
|
|
|
|
|
verify_rq_based:
|
2014-12-19 05:26:47 +08:00
|
|
|
/*
|
|
|
|
* Request-based dm supports only tables that have a single target now.
|
|
|
|
* To support multiple targets, request splitting support is needed,
|
|
|
|
* and that needs lots of changes in the block-layer.
|
|
|
|
* (e.g. request completion process for partial completion.)
|
|
|
|
*/
|
|
|
|
if (t->num_targets > 1) {
|
|
|
|
DMWARN("Request-based dm doesn't support multiple targets yet");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2016-11-24 02:51:09 +08:00
|
|
|
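/*
 * A request-based table with no data devices leaves nothing to inspect,
 * so inherit the type (and all_blk_mq) from the live table, if any.
 */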
if (list_empty(devices)) {
|
|
|
|
int srcu_idx;
|
|
|
|
struct dm_table *live_table = dm_get_live_table(t->md, &srcu_idx);
|
|
|
|
|
|
|
|
/* inherit live table's type and all_blk_mq */
|
|
|
|
if (live_table) {
|
|
|
|
t->type = live_table->type;
|
|
|
|
t->all_blk_mq = live_table->all_blk_mq;
|
|
|
|
}
|
|
|
|
dm_put_live_table(t->md, srcu_idx);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-06-22 17:12:36 +08:00
|
|
|
/* Non-request-stackable devices can't be used for request-based dm */
|
|
|
|
list_for_each_entry(dd, devices, list) {
|
2014-12-18 10:08:12 +08:00
|
|
|
struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
|
|
|
|
|
|
|
|
if (!blk_queue_stackable(q)) {
|
|
|
|
DMERR("table load rejected: including"
|
|
|
|
" non-request-stackable devices");
|
2009-06-22 17:12:36 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
2014-12-18 10:08:12 +08:00
|
|
|
|
|
|
|
if (q->mq_ops)
|
2016-11-16 07:33:16 +08:00
|
|
|
mq_count++;
|
|
|
|
else
|
|
|
|
sq_count++;
|
2014-12-18 10:08:12 +08:00
|
|
|
}
|
2016-11-16 07:33:16 +08:00
|
|
|
if (sq_count && mq_count) {
|
|
|
|
DMERR("table load rejected: not all devices are blk-mq request-stackable");
|
|
|
|
return -EINVAL;
|
2016-05-25 09:16:51 +08:00
|
|
|
}
|
2016-11-16 07:33:16 +08:00
|
|
|
t->all_blk_mq = mq_count > 0;
|
2009-06-22 17:12:36 +08:00
|
|
|
|
2016-12-08 08:56:06 +08:00
|
|
|
if (t->type == DM_TYPE_MQ_REQUEST_BASED && !t->all_blk_mq) {
|
|
|
|
DMERR("table load rejected: all devices are not blk-mq request-stackable");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2009-06-22 17:12:36 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-04-28 01:11:23 +08:00
|
|
|
enum dm_queue_mode dm_table_get_type(struct dm_table *t)
|
2009-06-22 17:12:36 +08:00
|
|
|
{
|
|
|
|
return t->type;
|
|
|
|
}
|
|
|
|
|
2011-11-01 04:19:04 +08:00
|
|
|
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
|
|
|
|
{
|
|
|
|
return t->immutable_target_type;
|
|
|
|
}
|
|
|
|
|
2016-02-01 06:22:27 +08:00
|
|
|
struct dm_target *dm_table_get_immutable_target(struct dm_table *t)
|
|
|
|
{
|
|
|
|
/* Immutable target is implicitly a singleton */
|
|
|
|
if (t->num_targets > 1 ||
|
|
|
|
!dm_target_is_immutable(t->targets[0].type))
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
return t->targets;
|
|
|
|
}
|
|
|
|
|
2016-02-07 07:38:46 +08:00
|
|
|
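/* Return the first target whose type is flagged as a wildcard, if any. */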
struct dm_target *dm_table_get_wildcard_target(struct dm_table *t)
|
|
|
|
{
|
2017-04-19 04:51:46 +08:00
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i;
|
2016-02-07 07:38:46 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
2016-02-07 07:38:46 +08:00
|
|
|
if (dm_target_is_wildcard(ti->type))
|
|
|
|
return ti;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2016-06-23 07:54:53 +08:00
|
|
|
bool dm_table_bio_based(struct dm_table *t)
|
|
|
|
{
|
|
|
|
return __table_type_bio_based(dm_table_get_type(t));
|
|
|
|
}
|
|
|
|
|
2009-06-22 17:12:36 +08:00
|
|
|
bool dm_table_request_based(struct dm_table *t)
|
|
|
|
{
|
2015-05-29 16:51:03 +08:00
|
|
|
return __table_type_request_based(dm_table_get_type(t));
|
2014-12-18 10:08:12 +08:00
|
|
|
}
|
|
|
|
|
2016-05-25 09:16:51 +08:00
|
|
|
bool dm_table_all_blk_mq_devices(struct dm_table *t)
|
2014-12-18 10:08:12 +08:00
|
|
|
{
|
2016-05-25 09:16:51 +08:00
|
|
|
return t->all_blk_mq;
|
2009-06-22 17:12:36 +08:00
|
|
|
}
|
|
|
|
|
2015-03-12 03:01:09 +08:00
|
|
|
static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
|
2009-06-22 17:12:36 +08:00
|
|
|
{
|
2017-04-28 01:11:23 +08:00
|
|
|
enum dm_queue_mode type = dm_table_get_type(t);
|
2016-02-01 02:28:26 +08:00
|
|
|
unsigned per_io_data_size = 0;
|
2015-06-26 22:01:13 +08:00
|
|
|
struct dm_target *tgt;
|
2012-12-22 04:23:38 +08:00
|
|
|
unsigned i;
|
2009-06-22 17:12:36 +08:00
|
|
|
|
2015-06-26 22:01:13 +08:00
|
|
|
if (unlikely(type == DM_TYPE_NONE)) {
|
2009-06-22 17:12:36 +08:00
|
|
|
DMWARN("no table type is set, can't allocate mempools");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2016-06-23 07:54:53 +08:00
|
|
|
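/*
 * For bio-based tables, size the per-bio data area for the largest
 * per_io_data_size requested by any target in the table.
 */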
if (__table_type_bio_based(type))
|
2015-06-26 22:01:13 +08:00
|
|
|
for (i = 0; i < t->num_targets; i++) {
|
|
|
|
tgt = t->targets + i;
|
2016-02-01 02:28:26 +08:00
|
|
|
per_io_data_size = max(per_io_data_size, tgt->per_io_data_size);
|
2015-06-26 22:01:13 +08:00
|
|
|
}
|
|
|
|
|
2016-02-01 02:28:26 +08:00
|
|
|
t->mempools = dm_alloc_md_mempools(md, type, t->integrity_supported, per_io_data_size);
|
2015-06-26 21:42:57 +08:00
|
|
|
if (!t->mempools)
|
|
|
|
return -ENOMEM;
|
2009-06-22 17:12:36 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
void dm_table_free_md_mempools(struct dm_table *t)
|
|
|
|
{
|
|
|
|
dm_free_md_mempools(t->mempools);
|
|
|
|
t->mempools = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t)
|
|
|
|
{
|
|
|
|
return t->mempools;
|
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
static int setup_indexes(struct dm_table *t)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
unsigned int total = 0;
|
|
|
|
sector_t *indexes;
|
|
|
|
|
|
|
|
/* allocate the space for *all* the indexes */
|
|
|
|
for (i = t->depth - 2; i >= 0; i--) {
|
|
|
|
t->counts[i] = dm_div_up(t->counts[i + 1], CHILDREN_PER_NODE);
|
|
|
|
total += t->counts[i];
|
|
|
|
}
|
|
|
|
|
|
|
|
indexes = (sector_t *) dm_vcalloc(total, (unsigned long) NODE_SIZE);
|
|
|
|
if (!indexes)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
/* set up internal nodes, bottom-up */
|
2008-02-08 10:10:04 +08:00
|
|
|
for (i = t->depth - 2; i >= 0; i--) {
|
2005-04-17 06:20:36 +08:00
|
|
|
t->index[i] = indexes;
|
|
|
|
indexes += (KEYS_PER_NODE * t->counts[i]);
|
|
|
|
setup_btree_index(i, t);
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Builds the btree to index the map.
|
|
|
|
*/
|
2010-08-12 11:14:03 +08:00
|
|
|
static int dm_table_build_index(struct dm_table *t)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int r = 0;
|
|
|
|
unsigned int leaf_nodes;
|
|
|
|
|
|
|
|
/* how many indexes will the btree have ? */
|
|
|
|
leaf_nodes = dm_div_up(t->num_targets, KEYS_PER_NODE);
|
|
|
|
t->depth = 1 + int_log(leaf_nodes, CHILDREN_PER_NODE);
|
|
|
|
|
|
|
|
/* leaf layer has already been set up */
|
|
|
|
t->counts[t->depth - 1] = leaf_nodes;
|
|
|
|
t->index[t->depth - 1] = t->highs;
|
|
|
|
|
|
|
|
if (t->depth >= 2)
|
|
|
|
r = setup_indexes(t);
|
|
|
|
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
static bool integrity_profile_exists(struct gendisk *disk)
|
|
|
|
{
|
|
|
|
return !!blk_get_integrity(disk);
|
|
|
|
}
|
|
|
|
|
2011-04-02 03:02:31 +08:00
|
|
|
/*
|
|
|
|
* Get a disk whose integrity profile reflects the table's profile.
|
|
|
|
* Returns NULL if integrity support was inconsistent or unavailable.
|
|
|
|
*/
|
2015-10-22 01:19:49 +08:00
|
|
|
static struct gendisk *dm_table_get_integrity_disk(struct dm_table *t)
|
2011-04-02 03:02:31 +08:00
|
|
|
{
|
|
|
|
struct list_head *devices = dm_table_get_devices(t);
|
|
|
|
struct dm_dev_internal *dd = NULL;
|
|
|
|
struct gendisk *prev_disk = NULL, *template_disk = NULL;
|
2017-04-19 04:51:48 +08:00
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
struct dm_target *ti = dm_table_get_target(t, i);
|
|
|
|
if (!dm_target_passes_integrity(ti->type))
|
|
|
|
goto no_integrity;
|
|
|
|
}
|
2011-04-02 03:02:31 +08:00
|
|
|
|
|
|
|
list_for_each_entry(dd, devices, list) {
|
2014-08-14 02:53:43 +08:00
|
|
|
template_disk = dd->dm_dev->bdev->bd_disk;
|
2015-10-22 01:19:49 +08:00
|
|
|
if (!integrity_profile_exists(template_disk))
|
2011-04-02 03:02:31 +08:00
|
|
|
goto no_integrity;
|
|
|
|
else if (prev_disk &&
|
|
|
|
blk_integrity_compare(prev_disk, template_disk) < 0)
|
|
|
|
goto no_integrity;
|
|
|
|
prev_disk = template_disk;
|
|
|
|
}
|
|
|
|
|
|
|
|
return template_disk;
|
|
|
|
|
|
|
|
no_integrity:
|
|
|
|
if (prev_disk)
|
|
|
|
DMWARN("%s: integrity not set: %s and %s profile mismatch",
|
|
|
|
dm_device_name(t->md),
|
|
|
|
prev_disk->disk_name,
|
|
|
|
template_disk->disk_name);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2010-08-12 11:14:03 +08:00
|
|
|
/*
|
2015-10-22 01:19:49 +08:00
|
|
|
* Register the mapped device for blk_integrity support if the
|
|
|
|
* underlying devices have an integrity profile. But the devices may
|
|
|
|
* not have matching profiles (checking all devices isn't reliable
|
2011-04-02 03:02:31 +08:00
|
|
|
* during table load because this table may use other DM device(s) which
|
2015-10-22 01:19:49 +08:00
|
|
|
* must be resumed before they will have an initialized integrity
|
|
|
|
* profile). Consequently, stacked DM devices force a 2 stage integrity
|
|
|
|
* profile validation: First pass during table load, final pass during
|
|
|
|
* resume.
|
2010-08-12 11:14:03 +08:00
|
|
|
*/
|
2015-10-22 01:19:49 +08:00
|
|
|
static int dm_table_register_integrity(struct dm_table *t)
|
2010-08-12 11:14:03 +08:00
|
|
|
{
|
2015-10-22 01:19:49 +08:00
|
|
|
struct mapped_device *md = t->md;
|
2011-04-02 03:02:31 +08:00
|
|
|
struct gendisk *template_disk = NULL;
|
2010-08-12 11:14:03 +08:00
|
|
|
|
2017-01-05 03:23:51 +08:00
|
|
|
/* If target handles integrity itself do not register it here. */
|
|
|
|
if (t->integrity_added)
|
|
|
|
return 0;
|
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
template_disk = dm_table_get_integrity_disk(t);
|
2011-04-02 03:02:31 +08:00
|
|
|
if (!template_disk)
|
|
|
|
return 0;
|
2010-08-12 11:14:03 +08:00
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
if (!integrity_profile_exists(dm_disk(md))) {
|
2016-05-25 09:16:51 +08:00
|
|
|
t->integrity_supported = true;
|
2015-10-22 01:19:49 +08:00
|
|
|
/*
|
|
|
|
* Register integrity profile during table load; we can do
|
|
|
|
* this because the final profile must match during resume.
|
|
|
|
*/
|
|
|
|
blk_integrity_register(dm_disk(md),
|
|
|
|
blk_get_integrity(template_disk));
|
|
|
|
return 0;
|
2011-04-02 03:02:31 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2015-10-22 01:19:49 +08:00
|
|
|
* If DM device already has an initialized integrity
|
2011-04-02 03:02:31 +08:00
|
|
|
* profile the new profile should not conflict.
|
|
|
|
*/
|
2015-10-22 01:19:49 +08:00
|
|
|
if (blk_integrity_compare(dm_disk(md), template_disk) < 0) {
|
2011-04-02 03:02:31 +08:00
|
|
|
DMWARN("%s: conflict with existing integrity profile: "
|
|
|
|
"%s profile mismatch",
|
|
|
|
dm_device_name(t->md),
|
|
|
|
template_disk->disk_name);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
/* Preserve existing integrity profile */
|
2016-05-25 09:16:51 +08:00
|
|
|
t->integrity_supported = true;
|
2010-08-12 11:14:03 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Prepares the table for use by building the indices,
|
|
|
|
* setting the type, and allocating mempools.
|
|
|
|
*/
|
|
|
|
int dm_table_complete(struct dm_table *t)
|
|
|
|
{
|
|
|
|
int r;
|
|
|
|
|
2016-05-25 09:16:51 +08:00
|
|
|
r = dm_table_determine_type(t);
|
2010-08-12 11:14:03 +08:00
|
|
|
if (r) {
|
2016-05-25 09:16:51 +08:00
|
|
|
DMERR("unable to determine table type");
|
2010-08-12 11:14:03 +08:00
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
|
|
|
r = dm_table_build_index(t);
|
|
|
|
if (r) {
|
|
|
|
DMERR("unable to build btrees");
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
r = dm_table_register_integrity(t);
|
2010-08-12 11:14:03 +08:00
|
|
|
if (r) {
|
|
|
|
DMERR("could not register integrity profile.");
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2015-03-12 03:01:09 +08:00
|
|
|
r = dm_table_alloc_md_mempools(t, t->md);
|
2010-08-12 11:14:03 +08:00
|
|
|
if (r)
|
|
|
|
DMERR("unable to allocate mempools");
|
|
|
|
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2006-03-27 17:18:20 +08:00
|
|
|
static DEFINE_MUTEX(_event_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
void dm_table_event_callback(struct dm_table *t,
|
|
|
|
void (*fn)(void *), void *context)
|
|
|
|
{
|
2006-03-27 17:18:20 +08:00
|
|
|
mutex_lock(&_event_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
t->event_fn = fn;
|
|
|
|
t->event_context = context;
|
2006-03-27 17:18:20 +08:00
|
|
|
mutex_unlock(&_event_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
void dm_table_event(struct dm_table *t)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* You can no longer call dm_table_event() from interrupt
|
|
|
|
* context, use a bottom half instead.
|
|
|
|
*/
|
|
|
|
BUG_ON(in_interrupt());
|
|
|
|
|
2006-03-27 17:18:20 +08:00
|
|
|
mutex_lock(&_event_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (t->event_fn)
|
|
|
|
t->event_fn(t->event_context);
|
2006-03-27 17:18:20 +08:00
|
|
|
mutex_unlock(&_event_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2011-08-02 19:32:04 +08:00
|
|
|
EXPORT_SYMBOL(dm_table_event);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
sector_t dm_table_get_size(struct dm_table *t)
|
|
|
|
{
|
|
|
|
return t->num_targets ? (t->highs[t->num_targets - 1] + 1) : 0;
|
|
|
|
}
|
2011-08-02 19:32:04 +08:00
|
|
|
EXPORT_SYMBOL(dm_table_get_size);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
struct dm_target *dm_table_get_target(struct dm_table *t, unsigned int index)
|
|
|
|
{
|
2006-06-26 15:27:27 +08:00
|
|
|
if (index >= t->num_targets)
|
2005-04-17 06:20:36 +08:00
|
|
|
return NULL;
|
|
|
|
|
|
|
|
return t->targets + index;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Search the btree for the correct target.
|
2007-12-13 22:15:25 +08:00
|
|
|
*
|
|
|
|
* Caller should check returned pointer with dm_target_is_valid()
|
|
|
|
* to trap I/O beyond end of device.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
|
|
|
|
{
|
|
|
|
unsigned int l, n = 0, k = 0;
|
|
|
|
sector_t *node;
|
|
|
|
|
|
|
|
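/* Walk down the btree: at each level pick the first key >= sector. */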
for (l = 0; l < t->depth; l++) {
|
|
|
|
n = get_child(n, k);
|
|
|
|
node = get_node(t, l, n);
|
|
|
|
|
|
|
|
for (k = 0; k < KEYS_PER_NODE; k++)
|
|
|
|
if (node[k] >= sector)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return &t->targets[(KEYS_PER_NODE * n) + k];
|
|
|
|
}
|
|
|
|
|
2012-09-27 06:45:45 +08:00
|
|
|
static int count_device(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
unsigned *num_devices = data;
|
|
|
|
|
|
|
|
(*num_devices)++;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check whether a table has no data devices attached using each
|
|
|
|
* target's iterate_devices method.
|
|
|
|
* Returns false if the result is unknown because a target doesn't
|
|
|
|
* support iterate_devices.
|
|
|
|
*/
|
|
|
|
bool dm_table_has_no_data_devices(struct dm_table *table)
|
|
|
|
{
|
2017-04-19 04:51:46 +08:00
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i, num_devices;
|
2012-09-27 06:45:45 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(table); i++) {
|
|
|
|
ti = dm_table_get_target(table, i);
|
2012-09-27 06:45:45 +08:00
|
|
|
|
|
|
|
if (!ti->type->iterate_devices)
|
|
|
|
return false;
|
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
num_devices = 0;
|
2012-09-27 06:45:45 +08:00
|
|
|
ti->type->iterate_devices(ti, count_device, &num_devices);
|
|
|
|
if (num_devices)
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2017-05-09 07:40:43 +08:00
|
|
|
static int device_is_zoned_model(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
enum blk_zoned_model *zoned_model = data;
|
|
|
|
|
|
|
|
return q && blk_queue_zoned_model(q) == *zoned_model;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool dm_table_supports_zoned_model(struct dm_table *t,
|
|
|
|
enum blk_zoned_model zoned_model)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
|
|
|
|
|
|
|
if (zoned_model == BLK_ZONED_HM &&
|
|
|
|
!dm_target_supports_zoned_hm(ti->type))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (!ti->type->iterate_devices ||
|
|
|
|
!ti->type->iterate_devices(ti, device_is_zoned_model, &zoned_model))
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int device_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
unsigned int *zone_sectors = data;
|
|
|
|
|
|
|
|
return q && blk_queue_zone_sectors(q) == *zone_sectors;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool dm_table_matches_zone_sectors(struct dm_table *t,
|
|
|
|
unsigned int zone_sectors)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
|
|
|
|
|
|
|
if (!ti->type->iterate_devices ||
|
|
|
|
!ti->type->iterate_devices(ti, device_matches_zone_sectors, &zone_sectors))
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
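/*
 * A zoned table is valid only if every device reports the same zoned
 * model and the zone size is a non-zero power of two that is consistent
 * across all devices.
 */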
static int validate_hardware_zoned_model(struct dm_table *table,
|
|
|
|
enum blk_zoned_model zoned_model,
|
|
|
|
unsigned int zone_sectors)
|
|
|
|
{
|
|
|
|
if (zoned_model == BLK_ZONED_NONE)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (!dm_table_supports_zoned_model(table, zoned_model)) {
|
|
|
|
DMERR("%s: zoned model is not consistent across all devices",
|
|
|
|
dm_device_name(table->md));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check zone size validity and compatibility */
|
|
|
|
if (!zone_sectors || !is_power_of_2(zone_sectors))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (!dm_table_matches_zone_sectors(table, zone_sectors)) {
|
|
|
|
DMERR("%s: zone sectors is not consistent across all devices",
|
|
|
|
dm_device_name(table->md));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-06-22 17:12:34 +08:00
|
|
|
/*
|
|
|
|
* Establish the new table's queue_limits and validate them.
|
|
|
|
*/
|
|
|
|
int dm_calculate_queue_limits(struct dm_table *table,
|
|
|
|
struct queue_limits *limits)
|
|
|
|
{
|
2017-04-19 04:51:46 +08:00
|
|
|
struct dm_target *ti;
|
2009-06-22 17:12:34 +08:00
|
|
|
struct queue_limits ti_limits;
|
2017-04-19 04:51:46 +08:00
|
|
|
unsigned i;
|
2017-05-09 07:40:43 +08:00
|
|
|
enum blk_zoned_model zoned_model = BLK_ZONED_NONE;
|
|
|
|
unsigned int zone_sectors = 0;
|
2009-06-22 17:12:34 +08:00
|
|
|
|
2012-01-11 23:27:11 +08:00
|
|
|
blk_set_stacking_limits(limits);
|
2009-06-22 17:12:34 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(table); i++) {
|
2012-01-11 23:27:11 +08:00
|
|
|
blk_set_stacking_limits(&ti_limits);
|
2009-06-22 17:12:34 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
ti = dm_table_get_target(table, i);
|
2009-06-22 17:12:34 +08:00
|
|
|
|
|
|
|
if (!ti->type->iterate_devices)
|
|
|
|
goto combine_limits;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Combine queue limits of all the devices this target uses.
|
|
|
|
*/
|
|
|
|
ti->type->iterate_devices(ti, dm_set_device_limits,
|
|
|
|
&ti_limits);
|
|
|
|
|
2017-05-09 07:40:43 +08:00
|
|
|
if (zoned_model == BLK_ZONED_NONE && ti_limits.zoned != BLK_ZONED_NONE) {
|
|
|
|
/*
|
|
|
|
* After stacking all limits, validate that all devices
|
|
|
|
* in the table support this zoned model and zone sectors.
|
|
|
|
*/
|
|
|
|
zoned_model = ti_limits.zoned;
|
|
|
|
zone_sectors = ti_limits.chunk_sectors;
|
|
|
|
}
|
|
|
|
|
2009-09-05 03:40:25 +08:00
|
|
|
/* Set I/O hints portion of queue limits */
|
|
|
|
if (ti->type->io_hints)
|
|
|
|
ti->type->io_hints(ti, &ti_limits);
|
|
|
|
|
2009-06-22 17:12:34 +08:00
|
|
|
/*
|
|
|
|
* Check each device area is consistent with the target's
|
|
|
|
* overall queue limits.
|
|
|
|
*/
|
2009-09-05 03:40:22 +08:00
|
|
|
if (ti->type->iterate_devices(ti, device_area_is_invalid,
|
|
|
|
&ti_limits))
|
2009-06-22 17:12:34 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
combine_limits:
|
|
|
|
/*
|
|
|
|
* Merge this target's queue limits into the overall limits
|
|
|
|
* for the table.
|
|
|
|
*/
|
|
|
|
if (blk_stack_limits(limits, &ti_limits, 0) < 0)
|
2010-01-11 16:21:50 +08:00
|
|
|
DMWARN("%s: adding target device "
|
2009-06-22 17:12:34 +08:00
|
|
|
"(start sect %llu len %llu) "
|
2010-01-11 16:21:50 +08:00
|
|
|
"caused an alignment inconsistency",
|
2009-06-22 17:12:34 +08:00
|
|
|
dm_device_name(table->md),
|
|
|
|
(unsigned long long) ti->begin,
|
|
|
|
(unsigned long long) ti->len);
|
2017-05-09 07:40:43 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* FIXME: this should likely be moved to blk_stack_limits(), would
|
|
|
|
* also eliminate limits->zoned stacking hack in dm_set_device_limits()
|
|
|
|
*/
|
|
|
|
if (limits->zoned == BLK_ZONED_NONE && ti_limits.zoned != BLK_ZONED_NONE) {
|
|
|
|
/*
|
|
|
|
* By default, the stacked limits zoned model is set to
|
|
|
|
* BLK_ZONED_NONE in blk_set_stacking_limits(). Update
|
|
|
|
* this model using the first target model reported
|
|
|
|
* that is not BLK_ZONED_NONE. This will be either the
|
|
|
|
* first target device zoned model or the model reported
|
|
|
|
* by the target .io_hints.
|
|
|
|
*/
|
|
|
|
limits->zoned = ti_limits.zoned;
|
|
|
|
}
|
2009-06-22 17:12:34 +08:00
|
|
|
}
|
|
|
|
|
2017-05-09 07:40:43 +08:00
|
|
|
/*
|
|
|
|
* Verify that the zoned model and zone sectors, as determined before
|
|
|
|
* any .io_hints override, are the same across all devices in the table.
|
|
|
|
* - this is especially relevant if .io_hints is emulating a disk-managed
|
|
|
|
* zoned model (aka BLK_ZONED_NONE) on host-managed zoned block devices.
|
|
|
|
* BUT...
|
|
|
|
*/
|
|
|
|
if (limits->zoned != BLK_ZONED_NONE) {
|
|
|
|
/*
|
|
|
|
* ...IF the above limits stacking determined a zoned model
|
|
|
|
* validate that all of the table's devices conform to it.
|
|
|
|
*/
|
|
|
|
zoned_model = limits->zoned;
|
|
|
|
zone_sectors = limits->chunk_sectors;
|
|
|
|
}
|
|
|
|
if (validate_hardware_zoned_model(table, zoned_model, zone_sectors))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2009-06-22 17:12:34 +08:00
|
|
|
return validate_hardware_logical_block_alignment(table, limits);
|
|
|
|
}
|
|
|
|
|
2009-04-09 07:27:12 +08:00
|
|
|
/*
|
2015-10-22 01:19:49 +08:00
|
|
|
* Verify that all devices have an integrity profile that matches the
|
|
|
|
* DM device's registered integrity profile. If the profiles don't
|
|
|
|
* match then unregister the DM device's integrity profile.
|
2009-04-09 07:27:12 +08:00
|
|
|
*/
|
2015-10-22 01:19:49 +08:00
|
|
|
static void dm_table_verify_integrity(struct dm_table *t)
|
2009-04-09 07:27:12 +08:00
|
|
|
{
|
2011-04-02 03:02:31 +08:00
|
|
|
struct gendisk *template_disk = NULL;
|
2009-04-09 07:27:12 +08:00
|
|
|
|
2017-01-05 03:23:51 +08:00
|
|
|
if (t->integrity_added)
|
|
|
|
return;
|
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
if (t->integrity_supported) {
|
|
|
|
/*
|
|
|
|
* Verify that the original integrity profile
|
|
|
|
* matches all the devices in this table.
|
|
|
|
*/
|
|
|
|
template_disk = dm_table_get_integrity_disk(t);
|
|
|
|
if (template_disk &&
|
|
|
|
blk_integrity_compare(dm_disk(t->md), template_disk) >= 0)
|
|
|
|
return;
|
|
|
|
}
|
2009-04-09 07:27:12 +08:00
|
|
|
|
2015-10-22 01:19:49 +08:00
|
|
|
if (integrity_profile_exists(dm_disk(t->md))) {
|
2011-09-26 06:26:17 +08:00
|
|
|
DMWARN("%s: unable to establish an integrity profile",
|
|
|
|
dm_device_name(t->md));
|
2015-10-22 01:19:49 +08:00
|
|
|
blk_integrity_unregister(dm_disk(t->md));
|
|
|
|
}
|
2009-04-09 07:27:12 +08:00
|
|
|
}
|
|
|
|
|
2011-08-02 19:32:08 +08:00
|
|
|
static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
2016-04-14 03:33:19 +08:00
|
|
|
unsigned long flush = (unsigned long) data;
|
2011-08-02 19:32:08 +08:00
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
2016-04-14 03:33:19 +08:00
|
|
|
return q && (q->queue_flags & flush);
|
2011-08-02 19:32:08 +08:00
|
|
|
}
|
|
|
|
|
2016-04-14 03:33:19 +08:00
|
|
|
static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
|
2011-08-02 19:32:08 +08:00
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
2017-04-19 04:51:46 +08:00
|
|
|
unsigned i;
|
2011-08-02 19:32:08 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Require at least one underlying device to support flushes.
|
|
|
|
* t->devices includes internal dm devices such as mirror logs
|
|
|
|
* so we need to use iterate_devices here, which targets
|
|
|
|
* supporting flushes must provide.
|
|
|
|
*/
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
2011-08-02 19:32:08 +08:00
|
|
|
|
2013-03-02 06:45:47 +08:00
|
|
|
if (!ti->num_flush_bios)
|
2011-08-02 19:32:08 +08:00
|
|
|
continue;
|
|
|
|
|
2012-07-27 22:08:07 +08:00
|
|
|
if (ti->flush_supported)
|
2015-03-31 01:43:18 +08:00
|
|
|
return true;
|
2012-07-27 22:08:07 +08:00
|
|
|
|
2011-08-02 19:32:08 +08:00
|
|
|
if (ti->type->iterate_devices &&
|
2016-04-14 03:33:19 +08:00
|
|
|
ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
|
2015-03-31 01:43:18 +08:00
|
|
|
return true;
|
2011-08-02 19:32:08 +08:00
|
|
|
}
|
|
|
|
|
2015-03-31 01:43:18 +08:00
|
|
|
return false;
|
2011-08-02 19:32:08 +08:00
|
|
|
}
|
|
|
|
|
2017-07-26 21:35:09 +08:00
|
|
|
static int device_dax_write_cache_enabled(struct dm_target *ti,
|
|
|
|
struct dm_dev *dev, sector_t start,
|
|
|
|
sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct dax_device *dax_dev = dev->dax_dev;
|
|
|
|
|
|
|
|
if (!dax_dev)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (dax_write_cache_enabled(dax_dev))
|
|
|
|
return true;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
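/* True if any underlying DAX device has its write cache enabled. */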
static int dm_table_supports_dax_write_cache(struct dm_table *t)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
|
|
|
|
|
|
|
if (ti->type->iterate_devices &&
|
|
|
|
ti->type->iterate_devices(ti,
|
|
|
|
device_dax_write_cache_enabled, NULL))
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2011-11-01 04:18:50 +08:00
|
|
|
static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && blk_queue_nonrot(q);
|
|
|
|
}
|
|
|
|
|
2012-09-27 06:45:43 +08:00
|
|
|
static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && !blk_queue_add_random(q);
|
|
|
|
}
|
|
|
|
|
dm table: propagate QUEUE_FLAG_NO_SG_MERGE
Commit 05f1dd5 ("block: add queue flag for disabling SG merging")
introduced a new queue flag: QUEUE_FLAG_NO_SG_MERGE. This gets set by
default in blk_mq_init_queue for mq-enabled devices. The effect of
the flag is to bypass the SG segment merging. Instead, the
bio->bi_vcnt is used as the number of hardware segments.
With a device mapper target on top of a device with
QUEUE_FLAG_NO_SG_MERGE set, we can end up sending down more segments
than a driver is prepared to handle. I ran into this when backporting
the virtio_blk mq support. It triggerred this BUG_ON, in
virtio_queue_rq:
BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
The queue's max is set here:
blk_queue_max_segments(q, vblk->sg_elems-2);
Basically, what happens is that a bio is built up for the dm device
(which does not have the QUEUE_FLAG_NO_SG_MERGE flag set) using
bio_add_page. That path will call into __blk_recalc_rq_segments, so
what you end up with is bi_phys_segments being much smaller than bi_vcnt
(and bi_vcnt grows beyond the maximum sg elements). Then, when the bio
is submitted, it gets cloned. When the cloned bio is submitted, it will
end up in blk_recount_segments, here:
if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags))
bio->bi_phys_segments = bio->bi_vcnt;
and now we've set bio->bi_phys_segments to a number that is beyond what
was registered as queue_max_segments by the driver.
The right way to fix this is to propagate the queue flag up the stack.
The rules for propagating the flag are simple:
- if the flag is set for any underlying device, it must be set for the
upper device
- consequently, if the flag is not set for any underlying device, it
should not be set for the upper device.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.16+
2014-08-08 23:03:41 +08:00
|
|
|
static int queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
|
|
|
|
}
|
|
|
|
|
2012-09-27 06:45:43 +08:00
|
|
|
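/*
 * True only if every target supplies iterate_devices and that call
 * reports the attribute checked by @func.
 */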
static bool dm_table_all_devices_attribute(struct dm_table *t,
|
|
|
|
iterate_devices_callout_fn func)
|
2011-11-01 04:18:50 +08:00
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
2017-04-19 04:51:46 +08:00
|
|
|
unsigned i;
|
2011-11-01 04:18:50 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
2011-11-01 04:18:50 +08:00
|
|
|
|
|
|
|
if (!ti->type->iterate_devices ||
|
2012-09-27 06:45:43 +08:00
|
|
|
!ti->type->iterate_devices(ti, func, NULL))
|
2015-03-31 01:43:18 +08:00
|
|
|
return false;
|
2011-11-01 04:18:50 +08:00
|
|
|
}
|
|
|
|
|
2015-03-31 01:43:18 +08:00
|
|
|
return true;
|
2011-11-01 04:18:50 +08:00
|
|
|
}
|
|
|
|
|
2012-12-22 04:23:36 +08:00
|
|
|
static int device_not_write_same_capable(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && !q->limits.max_write_same_sectors;
|
|
|
|
}
|
|
|
|
|
|
|
|
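/*
 * WRITE SAME is supported only if every target declares write_same bios
 * and none of the underlying devices has max_write_same_sectors == 0.
 */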
static bool dm_table_supports_write_same(struct dm_table *t)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
2017-04-19 04:51:46 +08:00
|
|
|
unsigned i;
|
2012-12-22 04:23:36 +08:00
|
|
|
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
2012-12-22 04:23:36 +08:00
|
|
|
|
2013-03-02 06:45:47 +08:00
|
|
|
if (!ti->num_write_same_bios)
|
2012-12-22 04:23:36 +08:00
|
|
|
return false;
|
|
|
|
|
|
|
|
if (!ti->type->iterate_devices ||
|
2013-05-10 21:37:16 +08:00
|
|
|
ti->type->iterate_devices(ti, device_not_write_same_capable, NULL))
|
2012-12-22 04:23:36 +08:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2017-04-06 01:21:05 +08:00
|
|
|
static int device_not_write_zeroes_capable(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && !q->limits.max_write_zeroes_sectors;
|
|
|
|
}
|
|
|
|
|
|
|
|
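/*
 * WRITE ZEROES is supported only if every target declares write_zeroes
 * bios and none of the underlying devices has max_write_zeroes_sectors == 0.
 */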
static bool dm_table_supports_write_zeroes(struct dm_table *t)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
|
|
|
unsigned i = 0;
|
|
|
|
|
|
|
|
while (i < dm_table_get_num_targets(t)) {
|
|
|
|
ti = dm_table_get_target(t, i++);
|
|
|
|
|
|
|
|
if (!ti->num_write_zeroes_bios)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (!ti->type->iterate_devices ||
|
|
|
|
ti->type->iterate_devices(ti, device_not_write_zeroes_capable, NULL))
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2014-07-11 00:23:07 +08:00
|
|
|
static int device_discard_capable(struct dm_target *ti, struct dm_dev *dev,
|
|
|
|
sector_t start, sector_t len, void *data)
|
|
|
|
{
|
|
|
|
struct request_queue *q = bdev_get_queue(dev->bdev);
|
|
|
|
|
|
|
|
return q && blk_queue_discard(q);
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool dm_table_supports_discards(struct dm_table *t)
|
|
|
|
{
|
|
|
|
struct dm_target *ti;
|
2017-04-19 04:51:46 +08:00
|
|
|
unsigned i;
|
2014-07-11 00:23:07 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Unless any target used by the table set discards_supported,
|
|
|
|
* require at least one underlying device to support discards.
|
|
|
|
* t->devices includes internal dm devices such as mirror logs
|
|
|
|
* so we need to use iterate_devices here, which targets
|
|
|
|
* supporting discard selectively must provide.
|
|
|
|
*/
|
2017-04-19 04:51:46 +08:00
|
|
|
for (i = 0; i < dm_table_get_num_targets(t); i++) {
|
|
|
|
ti = dm_table_get_target(t, i);
|
2014-07-11 00:23:07 +08:00
|
|
|
|
|
|
|
if (!ti->num_discard_bios)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (ti->discards_supported)
|
2015-03-31 01:43:18 +08:00
|
|
|
return true;
|
2014-07-11 00:23:07 +08:00
|
|
|
|
|
|
|
if (ti->type->iterate_devices &&
|
|
|
|
ti->type->iterate_devices(ti, device_discard_capable, NULL))
|
2015-03-31 01:43:18 +08:00
|
|
|
return true;
|
2014-07-11 00:23:07 +08:00
|
|
|
}
|
|
|
|
|
2015-03-31 01:43:18 +08:00
|
|
|
return false;
|
2014-07-11 00:23:07 +08:00
|
|
|
}

void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
			       struct queue_limits *limits)
{
	bool wc = false, fua = false;

	/*
	 * Copy table's limits to the DM device's request_queue
	 */
	q->limits = *limits;

	if (!dm_table_supports_discards(t))
		queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, q);
	else
		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);

	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
		wc = true;
		if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
			fua = true;
	}
	blk_queue_write_cache(q, wc, fua);

	if (dm_table_supports_dax_write_cache(t))
		dax_write_cache(t->md->dax_dev, true);

	/* Ensure that all underlying devices are non-rotational. */
	if (dm_table_all_devices_attribute(t, device_is_nonrot))
		queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
	else
		queue_flag_clear_unlocked(QUEUE_FLAG_NONROT, q);

	if (!dm_table_supports_write_same(t))
		q->limits.max_write_same_sectors = 0;
	if (!dm_table_supports_write_zeroes(t))
		q->limits.max_write_zeroes_sectors = 0;
dm table: propagate QUEUE_FLAG_NO_SG_MERGE

Commit 05f1dd5 ("block: add queue flag for disabling SG merging")
introduced a new queue flag: QUEUE_FLAG_NO_SG_MERGE.  This gets set by
default in blk_mq_init_queue for mq-enabled devices.  The effect of the
flag is to bypass SG segment merging; instead, bio->bi_vcnt is used as
the number of hardware segments.

With a device-mapper target on top of a device with
QUEUE_FLAG_NO_SG_MERGE set, we can end up sending down more segments
than the driver is prepared to handle.  I ran into this when backporting
the virtio_blk mq support.  It triggered this BUG_ON in virtio_queue_rq:

	BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);

The queue's max is set here:

	blk_queue_max_segments(q, vblk->sg_elems-2);

Basically, what happens is that a bio is built up for the dm device
(which does not have the QUEUE_FLAG_NO_SG_MERGE flag set) using
bio_add_page.  That path will call into __blk_recalc_rq_segments, so
what you end up with is bi_phys_segments being much smaller than bi_vcnt
(and bi_vcnt grows beyond the maximum sg elements).  Then, when the bio
is submitted, it gets cloned.  When the cloned bio is submitted, it ends
up in blk_recount_segments, here:

	if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags))
		bio->bi_phys_segments = bio->bi_vcnt;

and now bio->bi_phys_segments is set to a number beyond what was
registered as queue_max_segments by the driver.

The right way to fix this is to propagate the queue flag up the stack.
The rules for propagating the flag are simple:
- if the flag is set for any underlying device, it must be set for the
  upper device
- consequently, if the flag is not set for any underlying device, it
  should not be set for the upper device.

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.16+
2014-08-08 23:03:41 +08:00
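
The propagation rule described above is applied just below through dm_table_all_devices_attribute() with a per-device predicate, queue_supports_sg_merge(), defined earlier in this file. As a hedged sketch of roughly what such a predicate looks like (the exact body may differ), it simply tests the flag on each underlying queue:

/*
 * Sketch of the per-device predicate shape expected by
 * dm_table_all_devices_attribute(); the real queue_supports_sg_merge()
 * lives earlier in this file and may differ in detail.
 */
static int example_queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
					   sector_t start, sector_t len, void *data)
{
	struct request_queue *q = bdev_get_queue(dev->bdev);

	return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
}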
	if (dm_table_all_devices_attribute(t, queue_supports_sg_merge))
		queue_flag_clear_unlocked(QUEUE_FLAG_NO_SG_MERGE, q);
	else
		queue_flag_set_unlocked(QUEUE_FLAG_NO_SG_MERGE, q);

	dm_table_verify_integrity(t);
dm: enable request based option

This patch enables request-based dm.

o Request-based dm and bio-based dm coexist, since there are some
  target drivers which are a better fit for bio-based dm.  Also, there
  are other bio-based devices in the kernel (e.g. md, loop).
  Since a bio-based device can't receive a struct request, there are
  some limitations on device stacking between bio-based and
  request-based devices:

                      type of underlying device
                      bio-based     request-based
      ---------------------------------------------
      bio-based          OK              OK
      request-based      --              OK

  The device type is recognized by the queue flag in the kernel, so dm
  follows that.

o The type of a dm device is decided at the first table binding time.
  Once the type of a dm device is decided, the type can't be changed.

o Mempool allocations are deferred to table loading time, since
  mempools for request-based dm are different from those for bio-based
  dm and the needed mempool type is fixed by the type of table.

o Currently, request-based dm supports only tables that have a single
  target.  To support multiple targets, we need to support request
  splitting or prevent a bio/request from spanning multiple targets.
  The former needs lots of changes in the block layer, and the latter
  needs all target drivers to support a merge() function.  Both will
  take time.

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
2009-06-22 17:12:36 +08:00
	/*
	 * Determine whether or not this queue's I/O timings contribute
	 * to the entropy pool.  Only request-based targets use this.
	 * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not
	 * have it set.
	 */
	if (blk_queue_add_random(q) && dm_table_all_devices_attribute(t, device_is_not_random))
		queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
	/*
	 * QUEUE_FLAG_STACKABLE must be set after all queue settings are
	 * visible to other CPUs because, once the flag is set, incoming bios
	 * are processed by request-based dm, which refers to the queue
	 * settings.
	 * Until the flag is set, bios are passed to bio-based dm and queued
	 * to md->deferred where queue settings are not needed yet.
	 * Those bios are passed to request-based dm at resume time.
	 */
	smp_mb();
	if (dm_table_request_based(t))
		queue_flag_set_unlocked(QUEUE_FLAG_STACKABLE, q);
}
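
For context, dm_table_set_restrictions() is the final step of pushing a table's aggregated limits into the mapped device's queue. Below is a simplified, hypothetical sketch of the calling pattern; the real call site in dm.c performs additional validation and locking, and example_apply_table_limits is an illustrative name, not a function in this file.

/*
 * Simplified sketch of the calling pattern (not the actual dm.c code):
 * aggregate per-target limits, then apply them to the request_queue.
 */
static int example_apply_table_limits(struct dm_table *t, struct request_queue *q)
{
	struct queue_limits limits;
	int r;

	r = dm_calculate_queue_limits(t, &limits);	/* combine all targets' limits */
	if (r)
		return r;

	dm_table_set_restrictions(t, q, &limits);	/* copy into q and set queue flags */
	return 0;
}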

unsigned int dm_table_get_num_targets(struct dm_table *t)
{
	return t->num_targets;
}

struct list_head *dm_table_get_devices(struct dm_table *t)
{
	return &t->devices;
}

fmode_t dm_table_get_mode(struct dm_table *t)
{
	return t->mode;
}
EXPORT_SYMBOL(dm_table_get_mode);

enum suspend_mode {
	PRESUSPEND,
	PRESUSPEND_UNDO,
	POSTSUSPEND,
};

static void suspend_targets(struct dm_table *t, enum suspend_mode mode)
{
	int i = t->num_targets;
	struct dm_target *ti = t->targets;

	lockdep_assert_held(&t->md->suspend_lock);

	while (i--) {
		switch (mode) {
		case PRESUSPEND:
			if (ti->type->presuspend)
				ti->type->presuspend(ti);
			break;
		case PRESUSPEND_UNDO:
			if (ti->type->presuspend_undo)
				ti->type->presuspend_undo(ti);
			break;
		case POSTSUSPEND:
			if (ti->type->postsuspend)
				ti->type->postsuspend(ti);
			break;
		}
		ti++;
	}
}
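
suspend_targets() only calls the hooks a target type chooses to provide. The following is a hypothetical illustration of how a target wires those optional hooks; the names and behaviour are assumptions for illustration, not taken from any real target.

/*
 * Illustrative sketch only: optional suspend hooks in a hypothetical
 * target_type, as driven by suspend_targets().
 */
static void example_presuspend(struct dm_target *ti)
{
	/* e.g. stop issuing new internal I/O before the device is suspended */
}

static void example_postsuspend(struct dm_target *ti)
{
	/* e.g. commit metadata once no further I/O will arrive */
}

static struct target_type example_suspend_demo_target = {
	.name		= "example-suspend-demo",
	.version	= {1, 0, 0},
	.module		= THIS_MODULE,
	.presuspend	= example_presuspend,
	.postsuspend	= example_postsuspend,
	/* .presuspend_undo would revert example_presuspend if a later step fails */
};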

void dm_table_presuspend_targets(struct dm_table *t)
{
	if (!t)
		return;

	suspend_targets(t, PRESUSPEND);
}

void dm_table_presuspend_undo_targets(struct dm_table *t)
{
	if (!t)
		return;

	suspend_targets(t, PRESUSPEND_UNDO);
}

void dm_table_postsuspend_targets(struct dm_table *t)
{
	if (!t)
		return;

	suspend_targets(t, POSTSUSPEND);
}

int dm_table_resume_targets(struct dm_table *t)
{
	int i, r = 0;

	lockdep_assert_held(&t->md->suspend_lock);

	for (i = 0; i < t->num_targets; i++) {
		struct dm_target *ti = t->targets + i;

		if (!ti->type->preresume)
			continue;

		r = ti->type->preresume(ti);
		if (r) {
			DMERR("%s: %s: preresume failed, error = %d",
			      dm_device_name(t->md), ti->type->name, r);
			return r;
		}
	}

	for (i = 0; i < t->num_targets; i++) {
		struct dm_target *ti = t->targets + i;

		if (ti->type->resume)
			ti->type->resume(ti);
	}

	return 0;
}
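
The first pass above lets any target veto the resume via its preresume hook before any resume hooks run. Below is a hypothetical preresume showing that error path; the validity check is a stand-in and example_preresume is an illustrative name.

/*
 * Illustrative sketch only: a preresume hook that can refuse the resume;
 * a non-zero return is logged and propagated by dm_table_resume_targets().
 */
static int example_preresume(struct dm_target *ti)
{
	bool metadata_usable = true;	/* stand-in for a real validity check */

	if (!metadata_usable)
		return -EINVAL;

	return 0;
}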

void dm_table_add_target_callbacks(struct dm_table *t, struct dm_target_callbacks *cb)
{
	list_add(&cb->list, &t->target_callbacks);
}
EXPORT_SYMBOL_GPL(dm_table_add_target_callbacks);
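
Target callbacks registered here are consulted by dm_table_any_congested() below: each callback's congested_fn can OR extra BDI congestion bits into the result. A hypothetical target that tracks an internal backlog might register one as sketched here; all names are illustrative assumptions.

/*
 * Illustrative sketch only: a hypothetical target reporting its own
 * congestion state through dm_target_callbacks.
 */
struct example_cb_ctx {
	struct dm_target_callbacks callbacks;
	bool backlogged;			/* assumed internal state */
};

static int example_congested(struct dm_target_callbacks *cb, int bdi_bits)
{
	struct example_cb_ctx *ctx = container_of(cb, struct example_cb_ctx, callbacks);

	/* Report every queried congestion bit while our backlog is non-empty. */
	return ctx->backlogged ? bdi_bits : 0;
}

/*
 * Registration, typically done by the target once ti->table is available:
 *
 *	ctx->callbacks.congested_fn = example_congested;
 *	dm_table_add_target_callbacks(ti->table, &ctx->callbacks);
 */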

int dm_table_any_congested(struct dm_table *t, int bdi_bits)
{
	struct dm_dev_internal *dd;
	struct list_head *devices = dm_table_get_devices(t);
	struct dm_target_callbacks *cb;
	int r = 0;

	list_for_each_entry(dd, devices, list) {
		struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
		char b[BDEVNAME_SIZE];

		if (likely(q))
			r |= bdi_congested(q->backing_dev_info, bdi_bits);
		else
			DMWARN_LIMIT("%s: any_congested: nonexistent device %s",
				     dm_device_name(t->md),
				     bdevname(dd->dm_dev->bdev, b));
	}

	list_for_each_entry(cb, &t->target_callbacks, list)
		if (cb->congested_fn)
			r |= cb->congested_fn(cb, bdi_bits);

	return r;
}

struct mapped_device *dm_table_get_md(struct dm_table *t)
{
	return t->md;
}
EXPORT_SYMBOL(dm_table_get_md);

void dm_table_run_md_queue_async(struct dm_table *t)
{
	struct mapped_device *md;
	struct request_queue *queue;
	unsigned long flags;

	if (!dm_table_request_based(t))
		return;

	md = dm_table_get_md(t);
	queue = dm_get_md_queue(md);
	if (queue) {
		if (queue->mq_ops)
			blk_mq_run_hw_queues(queue, true);
		else {
			spin_lock_irqsave(queue->queue_lock, flags);
			blk_run_queue_async(queue);
			spin_unlock_irqrestore(queue->queue_lock, flags);
		}
	}
}
EXPORT_SYMBOL(dm_table_run_md_queue_async);