btrfs: fix race when defragmenting leads to unnecessary IO

When defragmenting we skip ranges that have holes or inline extents, so
that we don't do unnecessary IO and waste space. We do this check when
calling should_defrag_range() at btrfs_defrag_file(), however without
holding the inode's lock. We do it that way to avoid blocking, for too
long, other tasks that may want to operate on other file ranges, since
after the call to should_defrag_range() and before locking the inode we
trigger a synchronous page cache readahead. However, before we are able
to lock the inode, some other task might have punched a hole in our
range, or we may now have an inline extent there, in which case we
should no longer set the range for defrag, since that would cause
unnecessary IO and make us waste space (e.g. allocating extents to
contain zeros for a hole).
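
To make the race window concrete, the problematic ordering looks
roughly like this (an illustrative sketch in comment form, not verbatim
kernel code):

/*
 * defrag task                          concurrent task
 * -----------                          ---------------
 * should_defrag_range() -> "defrag"
 * sync page cache readahead
 * (inode still unlocked)
 *                                      fallocate(PUNCH_HOLE) on the
 *                                      same range, or a write that
 *                                      turns it into an inline extent
 * lock the inode and the range
 * mark the now hole/inline range
 * for defrag -> useless IO and
 * wasted space
 */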

So after we lock the inode and the range in the io tree, check again
whether we have holes or an inline extent, and if we do, just skip the
range.

I hit this while testing my next patch that fixes races when updating an
inode's number of bytes (subject "btrfs: update the number of bytes used
by an inode atomically"), and it depends on this change in order to work
correctly. Alternatively, I could rework that other patch to detect
holes and flag their range with the 'new delalloc' bit, but this patch
itself fixes an efficiency problem caused by a race that, from a
functional point of view, is not harmful (it could be triggered with
btrfs/062 from fstests).
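
For illustration only, a hand-rolled trigger for this kind of race
could look like the sketch below. This is hypothetical: btrfs/062
actually drives the race through fsstress, and the file path, offsets
and iteration counts here are invented; error checking is omitted for
brevity.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>   /* BTRFS_IOC_DEFRAG_RANGE */
#include <linux/falloc.h>  /* FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE */

static int fd;

/* Keep punching holes in the file while the main thread defragments it. */
static void *punch_holes(void *arg)
{
	for (int i = 0; i < 10000; i++)
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  (i % 16) * 65536, 65536);
	return NULL;
}

int main(void)
{
	struct btrfs_ioctl_defrag_range_args range;
	pthread_t t;

	/* Hypothetical path: any data-filled file on a btrfs mount. */
	fd = open("/mnt/btrfs/testfile", O_RDWR);
	pthread_create(&t, NULL, punch_holes, NULL);

	memset(&range, 0, sizeof(range));
	range.len = (__u64)-1; /* defrag the whole file */
	for (int i = 0; i < 100; i++)
		ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &range);

	pthread_join(t, NULL);
	close(fd);
	return 0;
}

Build with -pthread and point it at a data-filled file on a btrfs
mount. Before this change, the defrag pass could pick up ranges that
had become holes by the time it locked them, allocating extents of
zeros for them.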

CC: stable@vger.kernel.org # 5.4+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Filipe Manana, 2020-11-04 11:07:33 +00:00 (committed by David Sterba)
parent 5893dfb98f
commit 7f458a3873
1 changed file with 39 additions and 0 deletions

fs/btrfs/ioctl.c

@@ -1275,6 +1275,7 @@ static int cluster_pages_for_defrag(struct inode *inode,
 	u64 page_end;
 	u64 page_cnt;
 	u64 start = (u64)start_index << PAGE_SHIFT;
+	u64 search_start;
 	int ret;
 	int i;
 	int i_done;
@@ -1371,6 +1372,40 @@ again:
 	lock_extent_bits(&BTRFS_I(inode)->io_tree,
 			 page_start, page_end - 1, &cached_state);
+
+	/*
+	 * When defragmenting we skip ranges that have holes or inline extents,
+	 * (check should_defrag_range()), to avoid unnecessary IO and wasting
+	 * space. At btrfs_defrag_file(), we check if a range should be defragged
+	 * before locking the inode and then, if it should, we trigger a sync
+	 * page cache readahead - we lock the inode only after that to avoid
+	 * blocking for too long other tasks that possibly want to operate on
+	 * other file ranges. But before we were able to get the inode lock,
+	 * some other task may have punched a hole in the range, or we may have
+	 * now an inline extent, in which case we should not defrag. So check
+	 * for that here, where we have the inode and the range locked, and bail
+	 * out if that happened.
+	 */
+	search_start = page_start;
+	while (search_start < page_end) {
+		struct extent_map *em;
+
+		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, search_start,
+				      page_end - search_start);
+		if (IS_ERR(em)) {
+			ret = PTR_ERR(em);
+			goto out_unlock_range;
+		}
+		if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
+			free_extent_map(em);
+			/* Ok, 0 means we did not defrag anything */
+			ret = 0;
+			goto out_unlock_range;
+		}
+		search_start = extent_map_end(em);
+		free_extent_map(em);
+	}
+
 	clear_extent_bit(&BTRFS_I(inode)->io_tree, page_start,
 			 page_end - 1, EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING |
 			 EXTENT_DEFRAG, 0, 0, &cached_state);
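
A note on the em->block_start >= EXTENT_MAP_LAST_BYTE check above:
btrfs encodes holes, inline extents and delalloc as sentinel
block_start values at the very top of the u64 range, so a single
comparison covers both the hole and the inline extent cases. For
reference, the definitions from fs/btrfs/extent_map.h of this era
(quoted for context, not part of this diff):

#define EXTENT_MAP_LAST_BYTE	((u64)-4)
#define EXTENT_MAP_HOLE		((u64)-3)
#define EXTENT_MAP_INLINE	((u64)-2)
#define EXTENT_MAP_DELALLOC	((u64)-1)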
@@ -1401,6 +1436,10 @@ again:
 	btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
 	extent_changeset_free(data_reserved);
 	return i_done;
+
+out_unlock_range:
+	unlock_extent_cached(&BTRFS_I(inode)->io_tree,
+			     page_start, page_end - 1, &cached_state);
 out:
 	for (i = 0; i < i_done; i++) {
 		unlock_page(pages[i]);