btrfs: fix extent_state leak in btrfs_lock_and_flush_ordered_range

btrfs_lock_and_flush_ordered_range() loads the given "*cached_state" into
cachedp, which, in general, is NULL. Then, lock_extent_bits() updates
"cachedp", but the update never makes it back to the caller. Thus the
caller still sees its "cached_state" as NULL and never frees the state
allocated under btrfs_lock_and_flush_ordered_range(). As a result, we
see a massive extent_state leak with e.g. fstests btrfs/005. Fix this
bug by properly handling the pointers.

Fixes: bd80d94efb ("btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range")
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
commit a3b46b86ca (parent 6e7ca09b58)
Author:    Naohiro Aota <naohiro.aota@wdc.com>
Date:      2019-07-26 16:47:05 +09:00
Committer: David Sterba <dsterba@suse.com>
1 changed file with 6 additions and 5 deletions

@@ -985,13 +985,14 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
-	struct extent_state *cachedp = NULL;
+	struct extent_state *cache = NULL;
+	struct extent_state **cachedp = &cache;
 
 	if (cached_state)
-		cachedp = *cached_state;
+		cachedp = cached_state;
 
 	while (1) {
-		lock_extent_bits(tree, start, end, &cachedp);
+		lock_extent_bits(tree, start, end, cachedp);
 		ordered = btrfs_lookup_ordered_range(inode, start,
 						     end - start + 1);
 		if (!ordered) {
@@ -1001,10 +1002,10 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 			 * aren't exposing it outside of this function
 			 */
 			if (!cached_state)
-				refcount_dec(&cachedp->refs);
+				refcount_dec(&cache->refs);
 			break;
 		}
-		unlock_extent_cached(tree, start, end, &cachedp);
+		unlock_extent_cached(tree, start, end, cachedp);
 		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
 		btrfs_put_ordered_extent(ordered);
 	}