dm thin: set minimum_io_size to pool's data block size
Before, if the block layer's limit stacking didn't establish an
optimal_io_size that was compatible with the thin-pool's data block
size, we'd set optimal_io_size to the data block size and
minimum_io_size to 0 (which the block layer adjusts to be
physical_block_size).

Update pool_io_hints() to set both minimum_io_size and optimal_io_size
to the thin-pool's data block size.

This fixes an issue reported where mkfs.xfs would create more XFS
Allocation Groups on thinp volumes than on a normal linear LV of
comparable size, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227

Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
commit fdfb4c8c1a
parent 298a9fa08a
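The hints pool_io_hints() establishes are exported through the thin
LV's request queue, which is where mkfs.xfs (via libblkid's topology
probing) reads them when choosing its geometry. The following is a
minimal userspace sketch, not part of this patch, for observing the
two values this commit changes; the device name dm-0 is an example
and should be replaced with the actual thin volume's node.

/*
 * Sketch: print the I/O hints the block layer exports for a
 * device-mapper node.  Device name below is illustrative only.
 */
#include <stdio.h>

static long read_hint(const char *name)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/dm-0/queue/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* With this commit, both values report the pool's data block size
	 * (in bytes) whenever limit stacking didn't supply a compatible
	 * optimal_io_size. */
	printf("minimum_io_size: %ld\n", read_hint("minimum_io_size"));
	printf("optimal_io_size: %ld\n", read_hint("optimal_io_size"));
	return 0;
}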
@@ -3177,7 +3177,7 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 */
 	if (io_opt_sectors < pool->sectors_per_block ||
 	    do_div(io_opt_sectors, pool->sectors_per_block)) {
-		blk_limits_io_min(limits, 0);
+		blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
 		blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
 	}
 
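For readers unfamiliar with the kernel's do_div() idiom used in the
hunk above (it divides its first argument in place and returns the
remainder), here is a small userspace sketch of the compatibility
test the hunk guards on. Plain modulo stands in for do_div(), and the
helper name and sample values are illustrative, not taken from the
kernel tree.

/*
 * Sketch of the check in pool_io_hints(): override the hints unless
 * the stacked optimal_io_size is a non-zero multiple of the pool's
 * data block size.
 */
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors */

static int needs_override(uint64_t io_opt_sectors, uint64_t sectors_per_block)
{
	/* Mirrors: io_opt_sectors < pool->sectors_per_block ||
	 *          do_div(io_opt_sectors, pool->sectors_per_block)      */
	return io_opt_sectors < sectors_per_block ||
	       (io_opt_sectors % sectors_per_block) != 0;
}

int main(void)
{
	uint64_t sectors_per_block = 128;	/* e.g. a 64 KiB data block size */
	uint64_t stacked_io_opt_sectors = 0;	/* no optimal_io_size stacked   */

	if (needs_override(stacked_io_opt_sectors, sectors_per_block)) {
		/* After this commit, both hints get the data block size. */
		printf("minimum_io_size = optimal_io_size = %llu bytes\n",
		       (unsigned long long)(sectors_per_block << SECTOR_SHIFT));
	}
	return 0;
}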